Venue: ICLR
Title: TransFool: An Adversarial Attack against Neural Machine Translation Models

Abstract

Deep neural networks have been shown to be vulnerable to small perturbations of their inputs, known as adversarial attacks. In this paper, we consider the particular task of Neural Machine Translation (NMT), where security is often critical. We investigate the vulnerability of NMT models to adversarial attacks and propose a new attack algorithm called TransFool. It builds on a multi-term optimization problem and a gradient projection step to compute adversarial examples that fool NMT models. By integrating the embedding representation of a language model in the proposed attack, we generate fluent adversarial examples in the source language that maintain a high level of semantic similarity with the clean samples and render the attack largely undetectable. Experimental results demonstrate that, for multiple translation tasks and different NMT architectures, our white-box attack can severely degrade the translation quality for more than 60% of the sentences while the semantic similarity between the original sentence and the adversarial example stays very high. Moreover, we show that the proposed attack is transferable to unknown target models and can fool those quite easily. Finally, based on automatic and human evaluations, our method leads to improvements in terms of success rate, semantic similarity, and fluency compared to the existing attacks, both in white-box and black-box settings. Hence, TransFool permits a better characterization of the vulnerability of NMT systems and outlines the necessity of designing strong defense mechanisms and more robust NMT systems for real-life applications.

1 INTRODUCTION

The impressive performance of Deep Neural Networks (DNNs) in different areas such as computer vision (He et al., 2016) and Natural Language Processing (NLP) (Vaswani et al., 2017) has led to their widespread usage in various applications. With such an extensive usage of these models, it is important to analyze their robustness and potential vulnerabilities. In particular, it has been shown that the outputs of these models are susceptible to imperceptible changes in the input, known as adversarial attacks (Szegedy et al., 2014). Adversarial examples, which differ from the original inputs in an imperceptible manner, cause the target model to generate incorrect outputs. If these models are not robust enough against such attacks, they cannot be reliably used in applications with security requirements. To address this issue, many studies have recently been devoted to the effective generation of adversarial examples, the defense against attacks, and the analysis of the vulnerabilities of DNN models (Moosavi-Dezfooli et al., 2016; Madry et al., 2018; Ortiz-Jiménez et al., 2021).

The dominant methods to craft imperceptible attacks for continuous data, e.g., audio and image data, are based on gradient computation and various optimization strategies. However, these methods cannot be directly extended to NLP models due to the discrete nature of the tokens in the corresponding representations (i.e., words, subwords, and characters). Another challenge in dealing with textual data is the characterization of the imperceptibility of the adversarial perturbation. The $\ell_p$ norm is widely used with image data to measure imperceptibility, but it does not apply to textual data, where manipulating only one token in a sentence may significantly change the semantics.
Moreover, in gradient-based methods, it is challenging to incorporate linguistic constraints in a differentiable manner. Hence, optimization-based methods are more difficult and less investigated for adversarial attacks against NLP models. Currently, most attacks on textual data are gradient-free and simply based on heuristic word replacement, which may result in sub-optimal performance (Alzantot et al., 2018; Ren et al., 2019; Zang et al., 2020; Jin et al., 2020; Morris et al., 2020; Guo et al., 2021; Sadrizadeh et al., 2022).

In the literature, adversarial attacks have mainly been studied for text classifiers, and less so for other NLP tasks such as Neural Machine Translation (NMT) (Zhang et al., 2020b). In text classifiers, the number of output labels of the model is limited, and the adversary's goal is to mislead the target model into classifying the input into any wrong class (untargeted attack) or a wrong predetermined class (targeted attack). However, in NMT systems, the output of the target model is a sequence of tokens, which is a much larger space than that of a text classifier (Cheng et al., 2020a), and it is probable that the ground-truth translation changes after perturbing the input sequence. Hence, it is important to craft meaning-preserving adversarial sentences with a low impact on the ground-truth translation.

In this paper, we propose TransFool to build meaning-preserving and fluent adversarial attacks against NMT models. We build a new solution to the challenges associated with gradient-based adversarial attacks against textual data. To find an adversarial sentence that is fluent and semantically similar to the input sentence but highly degrades the translation quality of the target model, we propose a multi-term optimization problem over the tokens of the adversarial example. We consider the white-box attack setting, where the adversary has access to the target model and its parameters. White-box attacks are widely studied since they reveal the vulnerabilities of the systems and are used in benchmarks.

To ensure that the generated adversarial examples are imperceptibly similar to the original sentences, we incorporate a Language Model (LM) in our method in two ways. First, we consider the loss of a Causal Language Model (CLM) in our optimization problem in order to impose the syntactic correctness of the adversarial example. Second, by working with the embedding representation of LMs, instead of that of the NMT model, we ensure that similar tokens are close to each other in the embedding space (Tenney et al., 2019). This enables the definition of a similarity term between the respective tokens of the clean and adversarial sequences. Hence, we include a similarity constraint in the proposed optimization problem, which uses the LM embeddings. Finally, our optimization contains an adversarial term to maximize the loss of the target NMT model.

The generated adversarial example, i.e., the minimizer of the proposed optimization problem, should consist of meaningful tokens, and hence the proposed optimization problem should be solved in a discrete space. Using a gradient projection technique, we first consider the continuous embedding space and perform a gradient descent step, and then we project the resulting embedding vectors onto the most similar valid tokens. In the projection step, we use the LM embedding representation and project the output of the gradient descent step onto the nearest meaningful token in the embedding space (with maximum cosine similarity).
We test our method against different NMT models with transformer structures, which are now widely used for their exceptional performance. For different NMT architectures and translation tasks, experiments show that our white-box attack can reduce the BLEU score, a widely used metric for translation quality evaluation (Post, 2018), to less than half of its original value for more than 60% of the sentences, while maintaining a high level of semantic similarity with the clean samples. Furthermore, we extend TransFool to black-box settings and show that it can fool unknown target models. Overall, automatic and human evaluations show that, in both white-box and black-box settings, TransFool outperforms the existing heuristic strategies in terms of success rate, semantic similarity, and fluency. In summary, our contributions are as follows:

• We define a new optimization problem to compute semantics-preserving and fluent attacks against NMT models. The objective function contains several terms: an adversarial loss to maximize the loss of the target NMT model; a similarity term to ensure that the adversarial example is similar to the original sentence; and the loss of a CLM to generate fluent and natural adversarial examples.

• We propose a new strategy to incorporate linguistic constraints in our attack in a differentiable manner. Since LM embeddings provide a meaningful representation of the tokens, we use them instead of the NMT embeddings to compute the similarity between two tokens.

• We design a white-box attack algorithm, TransFool, against NMT models by solving the proposed optimization problem with gradient projection. Our attack, which operates at the token level, is effective against state-of-the-art transformer-based NMT models and outperforms prior works.

• By exploiting the transferability of adversarial attacks to other models, we extend the proposed white-box attack to the black-box setting. Our attack is highly effective even when the target languages of the target NMT model and the reference model are different. To our knowledge, this type of cross-lingual transfer attack has not been investigated before.

The rest of the paper is organized as follows. We review the related works in Section 2. In Section 3, we formulate the problem of adversarial attacks against NMT models and propose an optimization problem to build adversarial attacks. We describe our attack algorithm in Section 4. In Section 5, we discuss the experimental results and evaluate our algorithm against different transformer models and translation tasks. Moreover, we evaluate our attack in black-box settings and show that TransFool has very good transfer properties. Finally, the paper is concluded in Section 6.

2 RELATED WORK

Machine translation, an important task in NLP, is the task of automatically converting a sequence of words in a source language into a sequence of words in a target language (Bahdanau et al., 2015). By using DNN models, NMT systems are reaching exceptional performance, which has resulted in their usage in a wide variety of areas, especially in safety- and security-sensitive applications. However, any faulty output of an NMT model may result in irreparable incidents in real-world applications. Hence, we need to better understand the vulnerabilities of NMT models to perturbations of input samples, in particular to adversarial examples, to ensure the security of applications and the robustness of such models. Adversarial attacks against NMT systems have been studied in recent years.
First, Belinkov & Bisk (2018) show that character-level NMT models are highly vulnerable to character manipulations such as typos in a black-box setting. Similarly, Ebrahimi et al. (2018a) investigate the robustness of character-level NMT models. They propose a white-box adversarial attack based on HotFlip (Ebrahimi et al., 2018b) and greedily change the important characters to decrease the translation quality (untargeted attack) or mute/push a word in the translation (targeted attack). However, character-level manipulations can be easily detected.

To circumvent this issue, many of the adversarial attacks against NMT models are instead based on word replacement. Cheng et al. (2019) propose a white-box attack where they first select random words of the input sentence and replace them with similar words. In particular, in order to limit the search space, they find some candidates with the help of a language model and choose the token that aligns best with the gradient of the adversarial loss to cause more damage to the translation. Michel et al. (2019) and Zhang et al. (2021) find important words in the sentence and replace them with a neighbor word in the embedding space to create adversarial examples. However, these methods use heuristic strategies, which may result in sub-optimal performance.

There are also other types of attacks against NMT models in the literature. In (Wallace et al., 2020), a new type of attack, the universal adversarial attack, is proposed; it consists of a single snippet of text that can be added to any input sentence to mislead the NMT model. However, the added phrase is meaningless and hence easily detectable. Cheng et al. (2020a) propose Seq2Sick, a targeted white-box attack against NMT models. They introduce an optimization problem and solve it by gradient projection. The proposed optimization problem contains an adversarial loss and a group lasso term to ensure that only a few words of the sentence are modified. Although they have a projection step to the nearest embedding vector, they use the NMT embeddings, which may not preserve semantic similarity.

Other types of attacks against NMT models, with different threat models and purposes, have also been investigated in the literature. Some papers focus on making NMT models robust to perturbations of the inputs (Cheng et al., 2018; 2020b; Tan et al., 2021). Other papers use adversarial attacks to enhance NMT models in some respect, such as word sense disambiguation (Emelin et al., 2020), robustness to subword segmentation (Park et al., 2020), and robustness of unsupervised NMT (Yu et al., 2021). In (Xu et al., 2021; Wang et al., 2021), data poisoning attacks against NMT models are studied. Another type of attack, whose purpose is to change multiple words while ensuring that the output of the NMT model remains unchanged, is explored in (Chaturvedi et al., 2019; 2021). Yet another attack approach is presented in (Cai et al., 2021), where the adversary exploits hardware faults of systems to fool NMT models.

In summary, most of the existing adversarial attacks against NMT models are not undetectable, since they are based on character manipulation or use the NMT embedding space to find similar tokens. Also, heuristic word-replacement strategies are likely to have sub-optimal performance. Finally, none of these attacks study transferability to black-box settings. We introduce TransFool to craft effective and fluent adversarial sentences that are similar to the original ones.
3 OPTIMIZATION PROBLEM

In this section, we first present our new formulation for generating adversarial examples against NMT models, along with the different terms that form our optimization problem.

Adversarial Attack. Consider $\mathcal{X}$ to be the source language space and $\mathcal{Y}$ to be the target language space. The NMT model $f: \mathcal{X} \rightarrow \mathcal{Y}$ generally has an encoder-decoder structure (Bahdanau et al., 2015; Vaswani et al., 2017) and aims to maximize the translation probability $p(y_{ref} \mid x)$, where $x \in \mathcal{X}$ is the input sentence in the source language and $y_{ref} \in \mathcal{Y}$ is the ground-truth translation in the target language. To process textual data, each sentence is decomposed into a sequence of tokens. Therefore, the input sentence $x = x_1 x_2 \dots x_k$ is split into a sequence of $k$ tokens, where $x_i$ is a token from the vocabulary set $\mathcal{V}_\mathcal{X}$ of the NMT model, which contains all the tokens of the source language. For each token in the translated sentence $y_{ref} = y_{ref,1} \dots y_{ref,l}$, the NMT model generates a probability vector over the target language vocabulary set $\mathcal{V}_\mathcal{Y}$ by applying a softmax function to the decoder output.

The adversary is looking for an adversarial sentence $x'$, tokenized into a sequence of $k$ tokens $x' = x'_1 x'_2 \dots x'_k$, in the source language that fools the target NMT model, i.e., the translation of the adversarial example $f(x')$ is far from the true translation. However, the adversarial example $x'$ and the original sentence $x$ should be imperceptibly close so that the ground-truth translation of the adversarial example stays similar to $y_{ref}$.

As is common in NMT models (Vaswani et al., 2017; Junczys-Dowmunt et al., 2018; Tang et al., 2020), to feed the discrete sequence of tokens into the NMT model, each token is converted to a continuous vector, known as an embedding vector, using a lookup table. In particular, let $\text{emb}(\cdot)$ be the embedding function that maps the input token $x_i$ to the continuous embedding vector $\text{emb}(x_i) = e_i \in \mathbb{R}^m$, where $m$ is the embedding dimension of the target NMT model. Therefore, the input of the NMT model is a sequence of embedding vectors representing the tokens of the input sentence, i.e., $e_x = [e_1, e_2, \dots, e_k] \in \mathbb{R}^{k \times m}$. In the same manner, $e_{x'} = [e'_1, e'_2, \dots, e'_k] \in \mathbb{R}^{k \times m}$ is defined for the adversarial example.

To generate an adversarial example for a given input sentence, we introduce an optimization problem with respect to the embedding vectors of the adversarial sentence $e_{x'}$. Our optimization problem is composed of multiple terms: an adversarial loss, a similarity constraint, and the loss of a language model. The adversarial loss causes the target NMT model to generate a faulty translation. Moreover, with a language model loss and a similarity constraint, we impose that the generated adversarial example be a fluent sentence and semantically similar to the original sentence, respectively. The proposed optimization problem, which finds the adversarial example $x'$ from its embedding representation $e_{x'}$ by using a lookup table, is defined as follows:

$$x' \leftarrow \underset{e'_i \in \mathcal{E}_{\mathcal{V}_\mathcal{X}}}{\arg\min} \; \left[ \mathcal{L}_{Adv} + \alpha \mathcal{L}_{Sim} + \beta \mathcal{L}_{LM} \right], \qquad (1)$$

where $\alpha$ and $\beta$ are hyperparameters that control the relative importance of each term. Moreover, we call the continuous space of the embedding representations the embedding space and denote it by $\mathcal{E}$, and we denote by $\mathcal{E}_{\mathcal{V}_\mathcal{X}}$ the discrete subspace of the embedding space $\mathcal{E}$ containing the embedding representation of every token in the source language vocabulary set.
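For concreteness, the following is a minimal sketch of how the embedding sequence that serves as the optimization variable could be obtained from a HuggingFace Marian checkpoint via the model's lookup table; the checkpoint name and variable names are illustrative assumptions, not part of the paper.

```python
# Minimal sketch (assumed HuggingFace usage): building e_x from the NMT lookup table.
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-fr"          # illustrative En-Fr checkpoint
tok = MarianTokenizer.from_pretrained(name)
nmt = MarianMTModel.from_pretrained(name)

ids = tok("A sample input sentence.", return_tensors="pt").input_ids  # (1, k) token ids in V_X
emb = nmt.get_input_embeddings()             # lookup table mapping V_X to R^m
e_x = emb(ids)                               # (1, k, m) embedding sequence of the input
```

In the attack itself, a copy of `e_x` with `requires_grad_(True)` would then be fed to the model through `inputs_embeds`, so that gradients can flow back to the embedding vectors. We now discuss the different terms of the optimization function in detail.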
Adversarial Loss. In order to create an adversarial example whose translation is far away from the reference translation $y_{ref}$, we try to maximize the training loss of the target NMT model. Since NMT models are trained to generate the next token of the translation given the translation up until that token, we look for the adversarial example that maximizes the probability of a wrong translation (i.e., minimizes the probability of the correct translation) for the $i$-th token, given that the NMT model has produced the correct translation up to step $(i-1)$:

$$\mathcal{L}_{Adv} = \frac{1}{l} \sum_{i=1}^{l} \log\big(p_f(y_{ref,i} \mid e_{x'}, \{y_{ref,1}, \dots, y_{ref,i-1}\})\big), \qquad (2)$$

where $-\log\big(p_f(y_{ref,i} \mid e_{x'}, \{y_{ref,1}, \dots, y_{ref,i-1}\})\big)$ is the cross-entropy between the token distribution predicted by the NMT model and the delta distribution on the token $y_{ref,i}$, which is one for the correct translated token, $y_{ref,i}$, and zero otherwise. By minimizing $\log(p_f(\cdot))$, normalized by the sentence length $l$, we force the output probability vector of the NMT model to differ from the delta distribution on the token $y_{ref,i}$, which may cause the predicted translation to be wrong.

Similarity Constraint. To ensure that the generated adversarial example is similar to the original sentence, we need to add a similarity constraint to our optimization problem. It has been shown that the embedding representation of a language model captures the semantics of the tokens (Tenney et al., 2019; Shavarani & Sarkar, 2021). Suppose that the LM embedding representation of the original sentence (which may differ from the NMT embedding representation $e_x$) is $v_x = [v_1, v_2, \dots, v_k] \in \mathbb{R}^{k \times n}$, where $n$ is the embedding dimension of the language model. Likewise, let $v_{x'}$ denote the sequence of LM embedding vectors for the tokens of the adversarial example. We can define the distance between the $i$-th tokens of the original and adversarial sentences by computing the cosine distance between their corresponding LM embedding vectors:

$$\forall i \in \{1, \dots, k\}: \quad r_i = 1 - \frac{v_i^\top v'_i}{\|v_i\|_2 \, \|v'_i\|_2}. \qquad (3)$$

The cosine distance is zero if the two tokens are the same, and it takes larger values for two unrelated tokens. We want the adversarial sentence to differ from the original sentence in only a few tokens. Therefore, the cosine distance between most of the tokens in the original and adversarial sentences should be zero, which causes the cosine distance vector $[r_1, r_2, \dots, r_k]$ to be sparse. To promote the sparsity of the cosine distance vector, instead of the $\ell_0$ norm, which is not differentiable, we define the similarity constraint as the $\ell_1$-norm relaxation of the cosine distance vector, normalized by the length of the sentence:

$$\mathcal{L}_{Sim} = \frac{1}{k} \sum_{i=1}^{k} \left( 1 - \frac{v_i^\top v'_i}{\|v_i\|_2 \, \|v'_i\|_2} \right). \qquad (4)$$

Language Model Loss. Causal language models are trained to maximize the probability of a token given the previous tokens. Hence, we can use the loss of a CLM, i.e., the negative log-probability, as a rough and differentiable measure of the fluency of the generated adversarial sentence. The loss of a CLM, normalized by the sentence length, is as follows:

$$\mathcal{L}_{LM} = -\frac{1}{k} \sum_{i=1}^{k} \log\big(p_g(v'_i \mid v'_1, \dots, v'_{i-1})\big), \qquad (5)$$

where $g$ is a CLM, and $-\log\big(p_g(v'_i \mid v'_1, \dots, v'_{i-1})\big)$ is the cross-entropy between the token distribution predicted by the language model and the delta distribution on the token $v'_i$, which is one for the corresponding token in the adversarial example and zero otherwise.
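To make the three terms concrete, here is a minimal PyTorch sketch of how the combined objective of equation (1) could be assembled from equations (2), (4), and (5), assuming HuggingFace-style model interfaces. The helper names (`nmt`, `lm`, `fc`) and the use of the current projected token ids `x_adv_ids` as CLM labels are illustrative assumptions, not the authors' released implementation.

```python
# Sketch (assumed, not the released code): assembling the TransFool objective (Eq. 1).
import torch
import torch.nn.functional as F

def transfool_objective(e_adv, v_clean, x_adv_ids, y_ref, nmt, lm, fc,
                        alpha=20.0, beta=1.8):
    # Eq. (2): mean log-probability of the reference tokens under teacher forcing;
    # minimizing it pushes the NMT output distribution away from y_ref.
    out = nmt(inputs_embeds=e_adv.unsqueeze(0), labels=y_ref.unsqueeze(0))
    logp = F.log_softmax(out.logits[0], dim=-1)                 # (l, |V_Y|)
    l_adv = logp[torch.arange(len(y_ref)), y_ref].mean()

    # Eq. (4): mean cosine distance between LM embeddings of the clean and
    # adversarial tokens, encouraging changes in only a few positions.
    v_adv = fc(e_adv)                                           # (k, n), LM space
    l_sim = (1.0 - F.cosine_similarity(v_clean, v_adv, dim=-1)).mean()

    # Eq. (5): causal-LM negative log-likelihood of the adversarial sequence;
    # labels are assumed to be the current (projected) adversarial token ids.
    lm_logits = lm(inputs_embeds=v_adv.unsqueeze(0)).logits[0]  # (k, |V|)
    l_lm = F.cross_entropy(lm_logits[:-1], x_adv_ids[1:])

    return l_adv + alpha * l_sim + beta * l_lm
```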
To generate adversarial examples against a target NMT model, we propose to solve the optimization problem (1), which contains an adversarial loss term, a similarity constraint, and a CLM loss.

4 TRANSFOOL ATTACK ALGORITHM

We now introduce our algorithm for generating adversarial examples against NMT models. The block diagram of our proposed attack is presented in Figure 1. We are looking for an adversarial example with tokens in the vocabulary set $\mathcal{V}_\mathcal{X}$ and the corresponding embedding vectors in the subspace $\mathcal{E}_{\mathcal{V}_\mathcal{X}}$. Hence, the optimization problem (1) is discrete. The high-level idea of our algorithm is to use gradient projection to solve equation (1) in the discrete subspace $\mathcal{E}_{\mathcal{V}_\mathcal{X}}$.

The objective function of equation (1) is a function of the NMT and LM embedding representations of the adversarial example, $e_{x'}$ and $v_{x'}$, respectively. Since we aim to minimize the optimization problem with respect to $e_{x'}$, we need to find a transformation between the embedding space of the language model and that of the target NMT model. To this aim, as depicted in Figure 1, we propose to replace the embedding layer of a pre-trained language model with a Fully Connected (FC) layer, which receives the embedding vectors of the NMT model as its input. Then, we train the language model and the FC layer simultaneously with the causal language modeling objective. Therefore, we can compute the LM embedding vectors as a function of the NMT embedding vectors: $v_i = FC(e_i)$, where $FC \in \mathbb{R}^{m \times n}$ is the trained FC layer.

Algorithm 1 TransFool Adversarial Attack
  Input: f(.): target NMT model, V_X: vocabulary set, FC: fully connected layer,
         x: input sentence, y_ref: ground-truth translation of x,
         λ: BLEU score ratio, α, β: hyperparameters,
         K: maximum number of iterations, γ: step size
  Output: x': generated adversarial example
  Initialization: s ← empty set, itr ← 0,
         thr ← BLEU(f(e_x), y_ref) × λ,
         ∀i ∈ {1, ..., k}: e_g,i, e_p,i ← e_i
  while itr < K do
      itr ← itr + 1
      Step 1: gradient descent in the continuous embedding space:
          e_g ← e_g − γ · ∇_{e_x'}(L_Adv + α L_Sim + β L_LM)
          v_g ← FC(e_g)
      Step 2: projection to the discrete subspace E_{V_X}, and update if the sentence is new:
          for i ∈ {1, ..., k} do
              e_p,i ← argmax_{e ∈ E_{V_X}} (FC(e)^⊤ v_g,i) / (‖FC(e)‖_2 ‖v_g,i‖_2)
          end for
          if e_p not in set s then
              add e_p to set s
              e_g ← e_p
              if BLEU(f(e_p), y_ref) ≤ thr then
                  break (adversarial example is found)
              end if
          end if
  end while
  return e_x' ← e_p

The pseudo-code of our attack can be found in Algorithm 1. In more detail, we first convert the discrete tokens of the sentence to the continuous embedding vectors of the target NMT model; then we use the FC layer to compute the LM embedding representations of the tokens. Afterwards, we consider the continuous relaxation of the optimization problem, i.e., we assume that the embedding vectors live in the continuous embedding space $\mathcal{E}$ instead of $\mathcal{E}_{\mathcal{V}_\mathcal{X}}$. In each iteration of the algorithm, we first update the sequence of embedding vectors $e_{x'}$ in the opposite direction of the gradient (gradient descent). Let us denote the output of the gradient descent step for the $i$-th token by $e_{g,i}$. Then we project the resulting embedding vectors, which are not necessarily in $\mathcal{E}_{\mathcal{V}_\mathcal{X}}$, to the nearest token in the vocabulary set $\mathcal{V}_\mathcal{X}$. Since distances in the LM embedding space reflect the relationships between tokens, we use the LM embedding representations with the cosine similarity metric in the projection step to find the most similar token in the vocabulary. We can apply the trained fully connected layer FC to find the LM embedding representations: $v_g = FC(e_g)$.
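As a concrete illustration of this projection step (formalized as equation (6) right after this sketch), here is a hedged PyTorch sketch; precomputing the LM-space vocabulary matrix and the variable names are assumptions for illustration, not the authors' released code.

```python
# Sketch (assumed, not the released code) of the projection step: map each updated
# NMT embedding back to the vocabulary token whose LM embedding is most cosine-similar.
import torch
import torch.nn.functional as F

def project_to_vocab(e_g, nmt_vocab_emb, fc):
    # e_g: (k, m) embeddings after the gradient step; nmt_vocab_emb: (|V_X|, m)
    # NMT embedding table; fc: trained linear map from NMT to LM embedding space.
    v_g = F.normalize(fc(e_g), dim=-1)                 # (k, n), unit-norm LM vectors
    v_vocab = F.normalize(fc(nmt_vocab_emb), dim=-1)   # (|V_X|, n); can be cached
    ids = (v_g @ v_vocab.t()).argmax(dim=-1)           # nearest token per position (Eq. 6)
    return nmt_vocab_emb[ids], ids                     # projected embeddings e_p and token ids
```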
Hence, the projected NMT embedding vector $e_{p,i}$ for the $i$-th token is:

$$e_{p,i} = \underset{e \in \mathcal{E}_{\mathcal{V}_\mathcal{X}}}{\arg\max} \; \frac{FC(e)^\top v_{g,i}}{\|FC(e)\|_2 \, \|v_{g,i}\|_2}. \qquad (6)$$

However, due to the discrete nature of the data, applying the projection step in every iteration of the algorithm may lead to an undesirable situation where the algorithm gets stuck in a loop of previously computed steps. In order to circumvent this issue, we only update the embedding vectors with the output of the projection step if the projected sentence has not been generated before. We perform the gradient descent and projection steps iteratively until a maximum number of iterations is reached, or until the translation quality of the adversarial example relative to the original translation quality drops below a threshold. To evaluate the translation quality, we use the BLEU score, a widely used metric in the literature:

$$\frac{BLEU(f(e_{x'}), y_{ref})}{BLEU(f(e_x), y_{ref})} \leq \lambda. \qquad (7)$$

5 EXPERIMENTS

In this section, we first discuss our experimental setup, and then we evaluate TransFool against different models and translation tasks, both in white-box and black-box settings.

5.1 EXPERIMENTAL SETUP

We conduct experiments on the English-French (En-Fr), English-German (En-De), and English-Chinese (En-Zh) translation tasks. We use the test set of WMT14 (Bojar et al., 2014) for the En-Fr and En-De tasks, and the test set of OPUS-100 (Zhang et al., 2020a) for the En-Zh task. Some statistics of these datasets are presented in Appendix A.

We evaluate TransFool against transformer-based NMT models. To verify that our attack is effective against various model architectures, we attack the HuggingFace implementation of the Marian NMT models (Junczys-Dowmunt et al., 2018) and the mBART50 multilingual NMT model (Tang et al., 2020). As explained in Section 4, the similarity constraint and the LM loss of the proposed optimization problem require an FC layer and a CLM. To this aim, for each NMT model, we train an FC layer and a CLM (with the GPT-2 structure (Radford et al., 2019)) on the WikiText-103 dataset. We note that the input of the FC layer is the target NMT embedding representation of the input sentence.

To find the minimizer of our optimization problem (1), we use the Adam optimizer (Kingma & Ba, 2014) with step size γ = 0.016. Moreover, we set the maximum number of iterations to 500. Our algorithm has three parameters: the coefficients α and β in the optimization function (1), and the relative BLEU score ratio λ in the stopping criterion (7). We set λ = 0.4, β = 1.8, and α = 20. We chose these parameters experimentally according to the ablation study, which is available in Appendix B, in order to optimize the performance in terms of success rate, semantic similarity, and fluency.

We compare our attack with (Michel et al., 2019), a white-box untargeted attack against NMT models [1]. We only consider one of their attacks, called kNN, which substitutes some words with their neighbors in the embedding space; their other attack considers swapping characters, which is too easy to detect. We also adapted Seq2Sick (Cheng et al., 2020a), a targeted attack against NMT models based on an optimization problem in the NMT embedding space, to our untargeted setting.

For evaluation, we report different performance metrics: (1) Attack Success Rate (ASR), which measures the rate of successful adversarial examples. Similar to (Ebrahimi et al., 2018a), we define an adversarial example as successful if the BLEU score of its translation is less than half of the BLEU score of the original translation.
(2) Relative decrease of translation quality, measured in terms of BLEU score [2] and chrF (Popović, 2015). We denote these two metrics by RDBLEU and RDchrF, respectively. We choose to compute the relative decrease in translation quality so that scores are comparable across different models and datasets (Michel et al., 2019). (3) Semantic Similarity (Sim.), computed between the original and adversarial sentences, and commonly approximated by the Universal Sentence Encoder (Yang et al., 2020) [3]. (4) Perplexity score (Perp.), a measure of the fluency of the adversarial example, computed with GPT-2 (large). (5) Token Error Rate (TER), which measures the imperceptibility by computing the rate of tokens modified by the adversarial attack.

[1] The code of (Cheng et al., 2019; 2020b), untargeted white-box attacks against NMTs, is not publicly available.
[2] We use case-sensitive SacreBLEU (Post, 2018) on detokenized sentences.
[3] We use the multilingual version since we are dealing with multiple languages.
[4] We discard the sentences whose original BLEU score is zero to prevent artificially improving the results. We should also note that all results are computed after re-tokenization of the adversarial example. Since we generate the adversarial example at the token level, there is a small chance that, when the generated adversarial example is converted to text, re-tokenization does not produce the same set of tokens.

5.2 RESULTS OF THE WHITE-BOX ATTACK

We now evaluate TransFool in comparison to kNN and Seq2Sick against different NMT models. Table 1 shows the results in terms of the different evaluation metrics [4]. Overall, our attack is able to decrease the BLEU score of the target model to less than half of the BLEU score of the original translation for more than 60% of the sentences for all tasks and models (except for the En-Zh mBART50 model, where the ASR is 57.50%). Also, in all cases, the semantic similarity is more than 0.83, which shows that our attack can maintain a high level of semantic similarity with the clean sentences.

In comparison to the baselines, TransFool obtains a higher success rate against different model structures and translation tasks, and it is able to reduce the translation quality more severely. Since the algorithm uses the gradients of the proposed optimization problem and is not based on token replacement, TransFool can highly degrade the translation quality. Furthermore, the perplexity score of the adversarial examples generated by TransFool is much lower than that of both baselines (except for the En-Fr Marian model, where it is slightly higher than Seq2Sick's), which is due to the integration of the LM embeddings and the LM loss term in the optimization problem. Moreover, the token error rate of our attack is lower than that of both baselines, and the semantic similarity is preserved better by TransFool in almost all cases, since we use the LM embeddings instead of the NMT ones for the similarity constraint. While kNN can also maintain semantic similarity, Seq2Sick does not perform well on this criterion. We also computed similarity by BERTScore (Zhang et al., 2019) and BLEURT-20 (Sellam et al., 2020), which highly correlate with human judgments, in Appendix D; the results show that TransFool is better than both baselines at maintaining the semantics. Moreover, as presented in Appendix D.2, the successful attacks by the baselines, as opposed to TransFool, are not semantics-preserving or fluent sentences.
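For reference, the ASR and RDBLEU numbers above follow the definitions in Section 5.1. The sketch below shows how these two metrics could be computed with the sacrebleu package; the bookkeeping and variable names are assumptions for illustration, not the authors' evaluation script.

```python
# Sketch (assumed bookkeeping): attack success rate (ASR) and relative BLEU
# decrease (RDBLEU) computed with sacrebleu, per the paper's definitions.
import sacrebleu

def evaluate_attack(orig_translations, adv_translations, references):
    successes, rd_bleu = [], []
    for hyp_orig, hyp_adv, ref in zip(orig_translations, adv_translations, references):
        b_orig = sacrebleu.sentence_bleu(hyp_orig, [ref]).score
        b_adv = sacrebleu.sentence_bleu(hyp_adv, [ref]).score
        if b_orig == 0:                           # discarded, as in footnote [4]
            continue
        successes.append(b_adv < 0.5 * b_orig)    # success: BLEU drops below half
        rd_bleu.append((b_orig - b_adv) / b_orig)
    asr = 100.0 * sum(successes) / len(successes)
    return asr, sum(rd_bleu) / len(rd_bleu)
```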
Finally, the complete setup and results of our human evaluation are presented in Appendix H, which also shows the superiority of TransFool.

We also compare the runtime of TransFool with that of the two baselines. In each iteration of our proposed attack, we need to perform a back-propagation through the target NMT model and the language model to compute the gradients. Also, in some iterations (27 iterations per sentence on average), a forward pass is required to compute the output of the target NMT model to check the stopping criterion. For the Marian NMT (En-Fr) model, on a system equipped with an NVIDIA A100 GPU, it takes 26.45 seconds for TransFool to generate an adversarial example. On the same system, kNN needs 1.45 seconds and Seq2Sick needs 38.85 seconds, although both produce less effective adversarial attacks.

Table 2 shows some adversarial examples against mBART50 (En-De). In comparison to the baselines, TransFool makes smaller changes to the sentence. The generated adversarial example is a correct English sentence, and it is similar to the original sentence. However, kNN and Seq2Sick generate adversarial sentences that are not necessarily natural or similar to the original sentences. More examples generated by TransFool, kNN, and Seq2Sick can be found in Appendix D.2. We also provide some adversarial sentences generated when we do not use the LM embeddings in our algorithm, in order to show the importance of this component.

Indeed, TransFool outperforms both baselines in terms of success rate. It is able to generate more natural adversarial examples with a lower number of perturbations (TER) and higher semantic similarity with the clean samples in almost all cases. A complete study of the hyperparameters and of the effect of using LM embeddings instead of NMT embeddings for computing similarity is presented in Appendix B and Appendix C, respectively.

5.3 PERFORMANCE IN BLACK-BOX ATTACK SETTINGS

In practice, the adversary's access to the learning system may be limited. Hence, we propose to analyze the performance of TransFool in a black-box scenario. It has been shown that adversarial attacks often transfer to another model that has a different architecture and is even trained with different datasets (Szegedy et al., 2014). By utilizing this property of adversarial attacks, we extend TransFool to the black-box scenario. We consider that we have complete access to one NMT model (the reference model), including its gradients. We implement the proposed gradient-based attack of Algorithm 1 with this model. However, for the stopping criterion of the algorithm, we query the black-box target NMT model to compute the BLEU score. We can also implement the black-box transfer attack in the case where the source languages of the reference model and the target model are the same, but their target languages are different.

Since Marian NMT is faster and lighter than mBART50, we use it as the reference model and evaluate the performance of the black-box attack against mBART50. We compare the performance of TransFool with WSLS (Zhang et al., 2021), a black-box untargeted attack against NMT models based on word replacement (the choice of the back-translation model used in WSLS is investigated in Appendix F). We also evaluate the performance of kNN and Seq2Sick in the black-box setting by attacking mBART50 with the adversarial examples generated against Marian NMT (in the white-box setting). The results are reported in Table 3.
We also report the performance when attacking Google Translate, some generated adversarial samples, and the similarity performance computed by BERTScore and BLEURT-20 in Appendix E. In all tasks, with a few queries to the target model, our black-box attack achieves better performance than the white-box attack against the target model (mBART50), but slightly worse performance than the white-box attack against the reference model (Marian NMT). In all cases, the success rate, token error rate, and perplexity of TransFool are better than those of all baselines (except for the En-Fr task, where the perplexity is slightly higher than Seq2Sick's). The ability of TransFool and WSLS to maintain semantic similarity is comparable and better than that of the other baselines. However, WSLS has the highest token error rate, which makes the attack detectable. The effect of TransFool on the BLEU score is larger than that of the other methods, and its effect on the chrF metric comes after WSLS (except for the En-De task, where the RDchrF of TransFool is the best).

Regarding complexity, TransFool requires only a few queries to the target model for translation, while WSLS queries the model more than a thousand times, which is costly and may not be feasible in practice. For the En-Fr task, on a system equipped with an NVIDIA A100 GPU, it takes 43.36 and 1904.98 seconds to generate an adversarial example by TransFool and WSLS, respectively, which shows that WSLS is very time-consuming.

We also analyze the transferability of the generated adversarial examples to a black-box NMT model with the same source language but a different target language. Since we need a dataset with the same set of sentences for different language pairs, we use the validation set of WMT14 for the En-Fr and En-De tasks. Table 4 shows the results for two cases: Marian NMT or mBART50 as the target model. We use Marian NMT as the reference model, with a target language different from that of the target model. In all settings, the generated adversarial examples are highly transferable to another NMT model with a different target language (i.e., they have a high attack success rate and large semantic similarity). The high transferability of TransFool shows that it is able to capture the common failure modes of different NMT models, which can be dangerous in real-world applications.

6 CONCLUSION

In this paper, we proposed TransFool, a white-box adversarial attack against NMT models, by introducing a new optimization problem solved by an iterative method based on gradient projection. We utilized the embedding representation of a language model to impose a similarity constraint on the adversarial examples. Moreover, by considering the loss of a language model in our optimization problem, the generated adversarial examples are more fluent. Extensive automatic and human evaluations show that TransFool is highly effective in different translation tasks and against different NMT models. Our attack is also transferable to black-box settings with different structures and even different target languages. In both white-box and black-box scenarios, TransFool obtains improvements over the baselines in terms of success rate, semantic similarity, and fluency. It is important to analyze adversarial attacks against NMT models such as TransFool to find the vulnerabilities of NMT models, measure their robustness, and eventually build more robust NMT models.
Ethics Statement

We introduced TransFool, an adversarial attack against NMT models, with the motivation of revealing the vulnerabilities of NMT models and paving the way for designing stronger defenses and building robust NMT models for real-life scenarios. While it remains a possibility that a threat actor may misuse our attack, we do not condone using our method with the intent of attacking a real NMT system.

Reproducibility Statement

The source code will be publicly available as soon as possible to help reproduce our results. Moreover, Appendix G contains the license information and more details of the assets (datasets, codes, and models).

Supplementary Material
TransFool: An Adversarial Attack against Neural Machine Translation Models

In this supplementary material, we first provide some statistics of the evaluation datasets in Section A. The ablation study of the hyperparameters of TransFool is presented in Section B. We investigate the effect of the LM embedding representation on TransFool and kNN in Section C. More results of the white-box attack are reported in Section D: the results of other similarity metrics (Section D.1), performance over successful attacks (Section D.2), and some generated adversarial examples (Section D.4). Section E provides more experiments on the black-box attack: the performance of attacking Google Translate (Section E.1), results of other similarity metrics (Section E.2), and some generated adversarial examples (Section E.3). We discuss the effect of the back-translation model choice on WSLS in Section F. Finally, the license information and more details of the assets (datasets, codes, and models) are provided in Section G.

A SOME STATISTICS OF THE DATASETS

Some statistics of the evaluation datasets, i.e., OPUS-100 (En-Zh) and WMT14 (En-Fr and En-De), including the number of samples, the average length of the sentences, and the translation quality of Marian NMT and mBART50, are reported in Table 5.

B ABLATION STUDY

In this section, we analyze the effect of different hyperparameters (including the coefficients α and β in our optimization problem (1), the step size γ of the gradient descent, and the relative BLEU score ratio λ in the stopping criterion (7)) on the white-box attack performance in terms of success rate, semantic similarity, and perplexity score. In all experiments, we consider the English to French Marian NMT model and evaluate over the first 1000 sentences of the test set of WMT14. The default values for the hyperparameters, except for the one that varies in each experiment, are: α = 20, β = 1.8, γ = 0.016, and λ = 0.4.

Effect of the similarity coefficient α. This hyperparameter determines the strength of the similarity term in the optimization problem (1). Figure 2a shows the effect of α on the performance of our attack. By increasing the similarity coefficient of the proposed optimization problem, we force our algorithm to find adversarial sentences that are more similar to the original sentence. Therefore, as shown in Figure 2a, larger values of α result in higher semantic similarity. However, in this case, it is harder to fool the NMT model, i.e., the attack success rate, RDBLEU, and RDchrF are lower. Moreover, since the generated adversarial examples are more similar to the original sentence, they are more natural, and their perplexity score is lower.
Effect of the language model loss coefficient β. We analyze the impact of the hyperparameter β, which controls the importance of the language model loss term in the proposed optimization problem, in Figure 2b. By increasing this coefficient, we weaken the effect of the similarity term, i.e., the generated adversarial examples are less similar to the original sentence. As a result, the success rate and the effect on translation quality, i.e., RDBLEU and RDchrF, increase.

[Figure 2: Effect of different hyperparameters on the performance of TransFool. Panels (a)-(d) show the attack success rate, semantic similarity, and perplexity score as functions of the similarity coefficient α, the LM loss coefficient β, the step size γ, and the BLEU score ratio λ, respectively; panels (e)-(h) show RDBLEU and RDchrF for the same sweeps.]

Effect of the step size γ. The step size of the gradient descent step of the algorithm can impact the performance of our attack, which is investigated in Figure 2c. Increasing the step size results in larger movements in the embedding space in each iteration of the algorithm. Hence, the generated adversarial examples are more aggressive, which results in lower semantic similarity and higher perplexity scores. However, we can find adversarial examples more easily and achieve a higher attack success rate, RDBLEU, and RDchrF.

Effect of the BLEU score ratio λ. This hyperparameter determines the stopping criterion of our iterative algorithm. Figure 2d studies the effect of this hyperparameter on the performance of our attack. As this figure shows, a higher BLEU score ratio causes the algorithm to stop at earlier iterations. Therefore, the changes applied to the sentence are less aggressive, and hence we achieve higher semantic similarity and a lower perplexity score. However, the attack success rate, RDBLEU, and RDchrF decrease, since we make fewer changes to the sentences.

C EFFECT OF THE LM EMBEDDING REPRESENTATION

Table 6 shows the results of TransFool and kNN when we use LM embeddings or NMT embeddings for measuring the similarity between two tokens [5]. The LM embeddings result in lower perplexity and higher semantic similarity for both methods, which demonstrates the importance of this component in generating meaning-preserving, fluent adversarial examples.

[5] In order to have a fair comparison, we fine-tuned the hyperparameters of TransFool, in the case where we do not use LM embeddings, to obtain a similar attack success rate.

D MORE RESULTS ON THE WHITE-BOX ATTACK

D.1 SEMANTIC SIMILARITY COMPUTED BY OTHER METRICS

To better assess the ability of adversarial attacks to maintain semantic similarity, we can compute the similarity between the original and adversarial sentences using other metrics such as BERTScore (Zhang et al., 2019) and BLEURT-20 (Sellam et al., 2020). It is shown in (Zhang et al., 2019) that BERTScore correlates well with human judgments.
BLEURT-20 has also been shown to correlate better with human judgment than traditional measures (Freitag et al., 2021). The results are reported in Table 7. These results indicate that TransFool is indeed more capable of preserving the semantics of the input sentence. In the two cases where kNN has better similarity according to the Universal Sentence Encoder (USE) (Yang et al., 2020), the performance of TransFool is better in terms of BERTScore and BLEURT-20.

D.2 PERFORMANCE OVER SUCCESSFUL ATTACKS

The evaluation metrics of the successful adversarial examples that strongly affect the translation quality are also important, as they show the capability of the adversarial attack. Hence, we evaluate TransFool, kNN, and Seq2Sick only over the successful adversarial examples [6]. The results for the white-box setting are presented in Table 8. By comparing this table with Table 1, which shows the results on the whole dataset, we can see that the performance of TransFool is consistent between successful and unsuccessful attacks. Moreover, successful adversarial examples generated by TransFool are still semantically similar to the original sentences, and their perplexity score is low. However, the successful adversarial examples generated by Seq2Sick and kNN do not preserve the semantic similarity and are not fluent sentences; hence, they are not valid adversarial sentences.

[6] As defined in Section 5, an adversarial example is successful if the BLEU score of its translation is less than half of the BLEU score of the original translation.

D.3 TRADE-OFF BETWEEN SUCCESS RATE AND SIMILARITY/FLUENCY

The results of our ablation study in Appendix B show that there is a trade-off between the quality of the adversarial examples, in terms of semantics preservation and fluency, and the attack success rate. As studied in (Morris et al., 2020), we can filter out adversarial examples of low quality based on hard constraints on the semantic similarity and on the number of grammatical errors added by the adversarial perturbations. We can analyze the trade-off between success rate and similarity/fluency by setting different thresholds for filtering adversarial examples. If we evaluate the similarity by the sentence encoder suggested in (Morris et al., 2020), the success rate with different threshold values for similarity in the case of Marian (En-Fr) is depicted in Figure 3b. By considering only the adversarial examples with a similarity higher than a threshold, the success rate decreases as the threshold increases, while the quality of the adversarial examples increases. Similarly, we can do the same analysis for fluency. As suggested in (Morris et al., 2020), we count the grammatical errors with LanguageTool (Naber et al., 2003) for the original sentences and the adversarial examples. Figure 3a depicts the success rate for different thresholds on the number of grammatical errors added by the adversarial perturbations. These analyses show that with tighter constraints, we can generate better adversarial examples while the success rate decreases. All in all, according to these results, TransFool outperforms the baselines for different thresholds of similarity and grammatical errors.

D.4 MORE ADVERSARIAL EXAMPLES

In this section, we present more adversarial examples generated by TransFool, kNN, and Seq2Sick. In order to show the effect of using LM embeddings on the performance of TransFool, we also include the adversarial examples generated against the English to French Marian NMT model when we do not use LM embeddings.
In all these tables, the tokens modified by TransFool are written in blue in the original sentence, and the tokens modified by the different adversarial attacks are written in red in their corresponding adversarial sentences. Moreover, the changes made by the adversarial attack to the translation that are not directly related to the modified tokens are written in orange, while the changes that are the direct result of the modified tokens are written in brown.

As can be seen in the examples presented in Tables 9 and 10, TransFool makes smaller changes to the sentence. The generated adversarial example is a correct English sentence, and it is similar to the original sentence. However, kNN, Seq2Sick, and our method with the NMT embeddings make changes that are perceptible, and the adversarial sentences are not necessarily similar to the original sentence. The higher semantic similarity of the adversarial sentences generated by TransFool is due to the integration of the LM embeddings and the LM loss in the proposed optimization problem. We should highlight that TransFool is able to introduce changes in the translation of the adversarial sentence that are not directly related to the modifications of the original sentence but are the result of the NMT model's failure. Other examples against different tasks and models are presented in Tables 11 to 16.

E MORE RESULTS ON THE BLACK-BOX ATTACK

E.1 ATTACKING GOOGLE TRANSLATE

To evaluate the effect of different attacks in practice, we attack Google Translate [7] with TransFool, kNN, and Seq2Sick. Since the number of queries to Google Translate is limited per day, we were not able to attack with WSLS, which requires a high number of queries. Table 17 presents the performance for the English to French translation task. The results demonstrate that adversarial sentences crafted by TransFool degrade the translation quality more while preserving the semantics better. The perplexity score and word error rate of TransFool compete with those of Seq2Sick, but Seq2Sick is not meaning-preserving and is less effective. We also performed the cross-lingual black-box attack: we consider Marian NMT (En-Fr) as the reference model and attack En-De Google Translate. The results for TransFool are reported in Table 18.

E.2 SEMANTIC SIMILARITY COMPUTED BY OTHER METRICS

Similar to the white-box attack, we compute the similarity between the adversarial and original sentences by BERTScore and BLEURT-20, since they correlate well with human judgments. The similarity performance of TransFool and WSLS [8] in the black-box setting is presented in Table 19. According to Table 19, TransFool is better at maintaining semantic similarity. This may be because we use LM embeddings instead of the NMT ones in the similarity constraint.

E.3 SOME ADVERSARIAL EXAMPLES

We also present some adversarial examples generated by TransFool and WSLS in the black-box setting in Tables 20 to 22. In these tables, the tokens modified by TransFool are written in blue in the original sentence, and the tokens modified by the different adversarial attacks are written in red in their corresponding adversarial sentences. Moreover, the changes made by the adversarial attack to the translation that are not directly related to the modified tokens are written in orange, while the changes that are the direct result of the modified tokens are written in brown. These examples show that the modifications made by TransFool are less detectable, i.e., the generated adversarial examples are more natural and similar to the original sentence.
Moreover, TransFool makes changes to the translation that are not the direct result of the modified tokens of the adversarial sentence.

[7] We should note that since we do not have a tokenizer, we compute the Word Error Rate (WER) instead of the Token Error Rate (TER).
[8] The results of kNN and Seq2Sick are not reported since they are transfer attacks, and their performance is already reported in Table 7.

F EFFECT OF THE BACK-TRANSLATION MODEL CHOICE ON WSLS PERFORMANCE

WSLS uses a back-translation model for crafting an adversarial example. In (Zhang et al., 2021), the authors investigate the En-De task and use the winner model of the WMT19 De-En sub-track (Ng et al., 2019) as the back-translation model. However, they do not evaluate their method on the En-Fr and En-Zh tasks. To evaluate the performance of WSLS in Table 3, we have used pre-trained Marian NMT models for all three back-translation models. In order to show the effect of our choice of back-translation model, we compare the performance of WSLS for the En-De task when we use Marian NMT or (Ng et al., 2019) as the back-translation model in Table 23. As this table shows, WSLS with Marian NMT as the back-translation model results in even higher semantic similarity and a lower perplexity score. On the other hand, WSLS with (Ng et al., 2019) as the back-translation model has a slightly higher success rate. These results show that our choice of back-translation model does not strongly affect the performance of WSLS.

G LICENSE INFORMATION AND DETAILS

In this section, we provide some details about the datasets, codes, and models used in this paper. We should note that we used the models and datasets that are available in the HuggingFace transformers (Wolf et al., 2020) and datasets (Lhoest et al., 2021) libraries [9]. They are licensed under the Apache License 2.0. Moreover, we used PyTorch for all experiments (Paszke et al., 2019), which is released under the BSD license [10].

G.1 DATASETS

WMT14. In the Ninth Workshop on Statistical Machine Translation, WMT14 was introduced for four tasks. We used the En-De and En-Fr news translation tasks. There is no license available for this dataset.

OPUS-100. OPUS-100 is a multilingual translation corpus for 100 languages, which is randomly sampled from the OPUS collection (Tiedemann, 2012). There is no license available for this dataset.

G.2 MODELS

Marian NMT. Marian is a Neural Machine Translation framework, mainly developed by the Microsoft Translator team, and released under the MIT License [11]. This model uses a beam size of 4.

mBART50. mBART50 is a multilingual machine translation model for 50 languages, which was introduced by Facebook. This model is published in the Fairseq library, which is released under the MIT License [12]. This model uses a beam size of 5.

[9] These two libraries are available at this GitHub repository: https://github.com/huggingface.
[10] https://github.com/pytorch/pytorch/blob/master/LICENSE
[11] https://github.com/marian-nmt/marian/blob/master/LICENSE.md
[12] https://github.com/facebookresearch/fairseq/blob/main/LICENSE

G.3 CODES

kNN. In order to compare our method with kNN (Michel et al., 2019), we used the code provided by the authors, which is released under the BSD 3-Clause "New" or "Revised" License [13].

Seq2Sick. To compare our method with Seq2Sick (Cheng et al., 2020a), we used the code published by the authors [14]. There is no license available for their code.
WSLS. We implemented and evaluated WSLS (Zhang et al., 2021) using the source code published by the authors [15]. There is no license available for this GitHub repository.

H HUMAN EVALUATION

We conduct a preliminary human evaluation campaign of the TransFool, kNN, and Seq2Sick attacks on Marian NMT (En-Fr) in the white-box setting. We randomly choose 90 sentences from the test set of the WMT14 (En-Fr) dataset, along with the adversarial samples and their translations by the NMT model. We split the 90 sentences into three different surveys to obtain a manageable size for each annotator. We recruited two annotators for each survey. For the English surveys, we ensured that the annotators are highly proficient English speakers. Similarly, for the French survey, we ensured that the annotators are highly proficient in French. Before starting the rating task, we provided the annotators with detailed guidelines similar to (Cer et al., 2017; Michel et al., 2019). The task is to rate the sentences for each criterion on a continuous scale (0-100), inspired by WMT18 practice (Ma et al., 2018) and Direct Assessment (Graham et al., 2013; 2017). For each sentence, we evaluate three aspects in three different surveys:

• Fluency: We show the three adversarial sentences and the original sentence on the same page (in random order). We ask the annotators how much they agree with the statement "The sentence is fluent." for each sentence.

• Semantic preservation: We show the original sentence on top and the three adversarial sentences afterwards (in random order). We ask the annotators how much they agree with the statement "The sentence is similar to the reference text." for each sentence.

• Translation quality: Inspired by monolingual direct assessment (Ma et al., 2018; Graham et al., 2013; 2017), we evaluate the translation quality by showing the reference translation on top and the translations of the three adversarial sentences afterwards (in random order). We ask the annotators how much they agree with the statement "The sentence is similar to the reference text." for each translation.

We calculate 95% confidence intervals by using 15K bootstrap replications. The results are depicted in Figure 4. These results demonstrate that the adversarial examples generated by TransFool are more semantics-preserving and fluent than those of both baselines. According to the guide provided to the annotators for semantic similarity, a score of 67.8 indicates that the two sentences are roughly equivalent, but some details may differ. Moreover, a fluency score of 66.4 demonstrates that, although the adversarial examples generated by TransFool are more fluent than those of the baselines, there is still room to improve the performance in this regard.

We follow the direct assessment strategy to measure the effectiveness of the adversarial attacks on translation quality. According to (Ma et al., 2018), since a sufficient level of agreement on translation quality is difficult to achieve with human evaluation, direct assessment simplifies the task to a simpler monolingual assessment instead of a bilingual one. The similarity of the translations of the adversarial sentences to the reference translation is shown in Figure 4c. The similarity of Seq2Sick is worse than that of the other attacks; however, its similarity in the source language is also worse.
Therefore, we compute the decrease of similarity (between the original and adversarial sentences) from the source language to the target language. The results in Figure 4d show that all attacks affect the translation quality, and the effect of TransFool is more pronounced than that of both baselines.

Finally, we calculate Inter-Annotator Agreement (IAA). There are two human judgments for each sentence, and we average both scores to compute the final score for each sentence. To ensure that the two annotators agree, we only consider sentences where the two corresponding scores differ by less than 30. We compute IAA in terms of the Pearson correlation coefficient instead of the commonly used Cohen's kappa, since the scores are on a continuous scale. The results are presented in Table 24. Overall, we conclude that we achieve a reasonable inter-annotator agreement for all sentence types and evaluation metrics.

13The source code is available at https://github.com/pmichel31415/translate/tree/paul/pytorch_translate/research/adversarial/experiments and the license is available at https://github.com/pmichel31415/translate/blob/paul/LICENSE 14The source code is available at https://github.com/cmhcbb/Seq2Sick. 15https://github.com/JHL-HUST/AdvNMT-WSLS/tree/79945881f75d92ae44e9ebc10500d8590c09bb13
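The bootstrap confidence intervals and the Pearson-based IAA above can be computed as in the following minimal sketch (the per-survey data layout and all names are our own assumptions):

```python
import numpy as np
from scipy.stats import pearsonr

def bootstrap_ci(scores, n_boot=15000, seed=0):
    """95% confidence interval of the mean via bootstrap resampling."""
    rng = np.random.default_rng(seed)
    n = len(scores)
    means = [np.mean(rng.choice(scores, size=n, replace=True))
             for _ in range(n_boot)]
    return np.percentile(means, 2.5), np.percentile(means, 97.5)

def iaa_pearson(scores_a, scores_b, max_gap=30):
    """Pearson IAA over sentences whose two ratings differ by less than max_gap."""
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    keep = np.abs(a - b) < max_gap
    r, _ = pearsonr(a[keep], b[keep])
    return r
```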
1. What is the focus and contribution of the paper regarding machine translation models? 2. What are the strengths of the proposed approach, particularly in terms of its simplicity and effectiveness in attacking NMT models? 3. What are the weaknesses of the paper, especially regarding its goals and effectiveness in improving robustness? 4. Do you have any concerns about the effectiveness of the proposed attack when using beam search or similar mechanisms in translation models? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper defines a new optimization objective function which combines fluency, similarity, and translation error to adversarially attack machine translation models. A gradient projection algorithm is applied to solve this optimization. Experiment results show the proposed method outperforms baselines. The transferability is also examined. Strengths And Weaknesses Strengths The proposed method is simple and straightforward. The experiment results show superior performance in attack success rate and a significant decrease in translation quality. This work also demonstrates the capability and efficiency of the black-box attack. Weaknesses Existing adversarial attacks on NMT aim at improving the robustness of these models. However, in this work, achieving a high attack success rate seems to be the goal. Therefore, I'm wondering if TransFool is as effective as the baselines in improving robustness. Or what is the desired use case for this method? Translation models use beam search or similar mechanisms to generate high-quality output. Is the proposed attack still effective when using these mechanisms? Missing human validation. Although automatic metrics show a significant decrease in translation quality, I'm not convinced that the algorithm triggers incorrect translation. Maybe the translation is correct but has a very low BLEU or chrF score (i.e., the attack method is attacking the automatic metrics instead of the NMT model, which could also be an interesting finding). Clarity, Quality, Novelty And Reproducibility The space after figures and tables is being squeezed too much.
ICLR
Moreover, in gradient-based methods, it is challenging to incorporate linguistic constraints in a differentiable manner. Hence, optimization-based methods are more difficult and less investigated for adversarial attacks against NLP models. Currently, most attacks on textual data are gradient-free and simply based on heuristic word replacement, which may result in sub-optimal performance (Alzantot et al., 2018; Ren et al., 2019; Zang et al., 2020; Jin et al., 2020; Morris et al., 2020; Guo et al., 2021; Sadrizadeh et al., 2022).

In the literature, adversarial attacks have been mainly studied for text classifiers, but less for other NLP tasks such as Neural Machine Translation (NMT) (Zhang et al., 2020b). In text classifiers, the number of output labels of the model is limited, and the adversary's goal is to mislead the target model into classifying the input into any wrong class (untargeted attack) or a wrong predetermined class (targeted attack). However, in NMT systems, the output of the target model is a sequence of tokens, which lives in a much larger space than the output of a text classifier (Cheng et al., 2020a), and it is probable that the ground-truth translation changes after perturbing the input sequence. Hence, it is important to craft meaning-preserving adversarial sentences with a low impact on the ground-truth translation.

In this paper, we propose TransFool to build meaning-preserving and fluent adversarial attacks against NMT models. We build a new solution to the challenges associated with gradient-based adversarial attacks against textual data. To find an adversarial sentence that is fluent and semantically similar to the input sentence but highly degrades the translation quality of the target model, we propose a multi-term optimization problem over the tokens of the adversarial example. We consider the white-box attack setting, where the adversary has access to the target model and its parameters. White-box attacks are widely studied since they reveal the vulnerabilities of the systems and are used in benchmarks.

To ensure that the generated adversarial examples are imperceptibly similar to the original sentences, we incorporate a Language Model (LM) in our method in two ways. First, we consider the loss of a Causal Language Model (CLM) in our optimization problem in order to impose the syntactic correctness of the adversarial example. Second, by working with the embedding representation of LMs, instead of that of the NMT model, we ensure that similar tokens are close to each other in the embedding space (Tenney et al., 2019). This enables the definition of a similarity term between the respective tokens of the clean and adversarial sequences. Hence, we include a similarity constraint in the proposed optimization problem, which uses the LM embeddings. Finally, our optimization contains an adversarial term to maximize the loss of the target NMT model.

The generated adversarial example, i.e., the minimizer of the proposed optimization problem, should consist of meaningful tokens, and hence, the proposed optimization problem should be solved in a discrete space. Using a gradient projection technique, we first perform a gradient descent step in the continuous embedding space, and then we project the resultant embedding vectors to the most similar valid tokens. In the projection step, we use the LM embedding representation and project the output of the gradient descent step to the nearest meaningful token in the embedding space (with maximum cosine similarity).
We test our method against different NMT models with transformer architectures, which are now widely used for their exceptional performance. For different NMT architectures and translation tasks, experiments show that our white-box attack can reduce the BLEU score, a widely used metric for translation quality evaluation (Post, 2018), to less than half for more than 60% of the sentences, while it maintains a high level of semantic similarity with the clean samples. Furthermore, we extend TransFool to black-box settings and show that it can fool unknown target models. Overall, automatic and human evaluations show that in both white-box and black-box settings, TransFool outperforms the existing heuristic strategies in terms of success rate, semantic similarity, and fluency. In summary, our contributions are as follows:

• We define a new optimization problem to compute semantic-preserving and fluent attacks against NMT models. The objective function contains several terms: an adversarial loss to maximize the loss of the target NMT model; a similarity term to ensure that the adversarial example is similar to the original sentence; and the loss of a CLM to generate fluent and natural adversarial examples.

• We propose a new strategy to incorporate linguistic constraints in our attack in a differentiable manner. Since LM embeddings provide a meaningful representation of the tokens, we use them instead of the NMT embeddings to compute the similarity between two tokens.

• We design a white-box attack algorithm, TransFool, against NMT models by solving the proposed optimization problem with gradient projection. Our attack, which operates at the token level, is effective against state-of-the-art transformer-based NMT models and outperforms prior works.

• By using the transferability of adversarial attacks to other models, we extend the proposed white-box attack to the black-box setting. Our attack is highly effective even when the target languages of the target NMT model and the reference model are different. To our knowledge, this type of cross-lingual transfer attack has not been investigated before.

The rest of the paper is organized as follows. We review the related works in Section 2. In Section 3, we formulate the problem of adversarial attacks against NMT models and propose an optimization problem to build adversarial attacks. We describe our attack algorithm in Section 4. In Section 5, we discuss the experimental results and evaluate our algorithm against different transformer models and translation tasks; moreover, we evaluate our attack in black-box settings and show that TransFool has very good transfer properties. Finally, the paper is concluded in Section 6.

2 RELATED WORK

Machine translation, an important task in NLP, is the task of automatically converting a sequence of words in a source language into a sequence of words in a target language (Bahdanau et al., 2015). By using DNN models, NMT systems are reaching exceptional performance, which has resulted in their usage in a wide variety of areas, especially in safety- and security-sensitive applications. However, any faulty output of an NMT model may result in irreparable incidents in real-world applications. Hence, we need to better understand the vulnerabilities of NMT models to perturbations of input samples, in particular to adversarial examples, to ensure the security of applications and the robustness of such models. Adversarial attacks against NMT systems have been studied in recent years.
First, Belinkov & Bisk (2018) show that character-level NMT models are highly vulnerable to character manipulations such as typos in a block-box setting. Similarly, Ebrahimi et al. (2018a) investigate the robustness of character-level NMT models. They propose a white-box adversarial attack based on HotFlip (Ebrahimi et al., 2018b) and greedily change the important characters to decrease the translation quality (untargeted attack) or mute/push a word in the translation (targeted attack). However, character-level manipulations can be easily detected. To circumvent this issue, many of the adversarial attacks against NMT models are rather based on word replacement. Cheng et al. (2019) propose a white-box attack where they first select random words of the input sentence and replace them with a similar word. In particular, in order to limit the search space, they find some candidates with the help of a language model and choose the token that aligns best with the gradient of the adversarial loss to cause more damage to the translation. Michel et al. (2019) and Zhang et al. (2021) find important words in the sentence and replace them with a neighbor word in the embedding space to create adversarial examples. However, these methods use heuristic strategies which may result in sub-optimal performance. There are also some other types of attacks against NMT models in the literature. In (Wallace et al., 2020), a new type of attack, i.e., universal adversarial attack, is proposed, which consists of a single snippet of text that can be added to any input sentence to mislead the NMT model. However, the added phrase is meaningless, hence easily detectable. Cheng et al. (2020a) propose Seq2Sick, a targeted white-box attack against NMT models. They introduce an optimization problem and solve it by gradient projection. The proposed optimization problem contains an adversarial loss and a group lasso term to ensure that only a few words of the sentence are modified. Although they have a projection step to the nearest embedding vector, they use the NMT embeddings, which may not preserve semantic similarity. Other types of attacks against NMT models with different threat models and purposes have also been investigated in the literature. Some papers focus on making NMT models robust to perturbation to the inputs (Cheng et al., 2018; 2020b; Tan et al., 2021). Some other papers use adversarial attacks to enhance the NMT models in some aspects, such as word sense disambiguation (Emelin et al., 2020), robustness to subword segmentation (Park et al., 2020), and robustness of unsupervised NMT (Yu et al., 2021). In (Xu et al., 2021; Wang et al., 2021), the data poisoning attacks against NMT models are studied. Another type of attack whose purpose is to change multiple words while ensuring that the output of the NMT model remains unchanged is explored in (Chaturvedi et al., 2019; 2021). Another attack approach is presented in (Cai et al., 2021), where the adversary uses the hardware faults of systems to fool NMT models. In summary, most of the existing adversarial attacks against NMT models are not undetectable since they are based on character manipulation, or they use the NMT embedding space to find similar tokens. Also, heuristic strategies based on word-replacement are likely to have sub-optimal performance. Finally, none of these attacks study the transferability to black-box settings. We introduce TransFool to craft effective and fluent adversarial sentences which are similar to the original ones. 
3 OPTIMIZATION PROBLEM

In this section, we first present our new formulation for generating adversarial examples against NMT models, along with the different terms that form our optimization problem.

Adversarial Attack. Consider $\mathcal{X}$ to be the source language space and $\mathcal{Y}$ to be the target language space. The NMT model $f: \mathcal{X} \rightarrow \mathcal{Y}$ generally has an encoder-decoder structure (Bahdanau et al., 2015; Vaswani et al., 2017) and aims to maximize the translation probability $p(\mathbf{y}_{ref} \mid \mathbf{x})$, where $\mathbf{x} \in \mathcal{X}$ is the input sentence in the source language and $\mathbf{y}_{ref} \in \mathcal{Y}$ is the ground-truth translation in the target language. To process textual data, each sentence is decomposed into a sequence of tokens. Therefore, the input sentence $\mathbf{x} = x_1 x_2 \dots x_k$ is split into a sequence of $k$ tokens, where $x_i$ is a token from the vocabulary set $\mathcal{V}_X$ of the NMT model, which contains all the tokens of the source language. For each token in the translated sentence $\mathbf{y}_{ref} = y_{ref,1}, \dots, y_{ref,l}$, the NMT model generates a probability vector over the target language vocabulary set $\mathcal{V}_Y$ by applying a softmax function to the decoder output.

The adversary is looking for an adversarial sentence $\mathbf{x}'$, tokenized into a sequence of $k$ tokens $\mathbf{x}' = x'_1 x'_2 \dots x'_k$, in the source language that fools the target NMT model, i.e., the translation of the adversarial example $f(\mathbf{x}')$ is far from the true translation. However, the adversarial example $\mathbf{x}'$ and the original sentence $\mathbf{x}$ should be imperceptibly close, so that the ground-truth translation of the adversarial example stays similar to $\mathbf{y}_{ref}$.

As is common in NMT models (Vaswani et al., 2017; Junczys-Dowmunt et al., 2018; Tang et al., 2020), to feed the discrete sequence of tokens into the NMT model, each token is converted to a continuous vector, known as an embedding vector, using a lookup table. In particular, let $\text{emb}(\cdot)$ be the embedding function that maps the input token $x_i$ to the continuous embedding vector $\text{emb}(x_i) = \mathbf{e}_i \in \mathbb{R}^m$, where $m$ is the embedding dimension of the target NMT model. Therefore, the input of the NMT model is a sequence of embedding vectors representing the tokens of the input sentence, i.e., $\mathbf{e}_{\mathbf{x}} = [\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_k] \in \mathbb{R}^{k \times m}$. In the same manner, $\mathbf{e}_{\mathbf{x}'} = [\mathbf{e}'_1, \mathbf{e}'_2, \dots, \mathbf{e}'_k] \in \mathbb{R}^{k \times m}$ is defined for the adversarial example.

To generate an adversarial example for a given input sentence, we introduce an optimization problem with respect to the embedding vectors of the adversarial sentence $\mathbf{e}_{\mathbf{x}'}$. Our optimization problem is composed of multiple terms: an adversarial loss, a similarity constraint, and the loss of a language model. The adversarial loss causes the target NMT model to generate a faulty translation, while the language model loss and the similarity constraint impose the generated adversarial example to be a fluent sentence and semantically similar to the original sentence, respectively. The proposed optimization problem, which recovers the adversarial example $\mathbf{x}'$ from its embedding representation $\mathbf{e}_{\mathbf{x}'}$ by using a lookup table, is defined as follows:

$$\mathbf{x}' \leftarrow \operatorname*{argmin}_{\mathbf{e}'_i \in \mathcal{E}_{\mathcal{V}_X}} \; \mathcal{L}_{Adv} + \alpha \mathcal{L}_{Sim} + \beta \mathcal{L}_{LM}, \tag{1}$$

where $\alpha$ and $\beta$ are hyperparameters that control the relative importance of each term. Moreover, we call the continuous space of the embedding representations the embedding space, denoted by $\mathcal{E}$, and we denote by $\mathcal{E}_{\mathcal{V}_X}$ the discrete subspace of $\mathcal{E}$ containing the embedding representation of every token in the source language vocabulary set. We now discuss the different terms of the optimization function in detail.
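Before detailing each term, a minimal PyTorch sketch of this setup — extracting the NMT embedding sequence $\mathbf{e}_{\mathbf{x}}$ as a differentiable tensor and wiring the weighted objective of Eq. (1) — could look as follows (the checkpoint name, the example sentence, and all variable names are our own assumptions; the individual loss terms are defined next):

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

# Assumed checkpoint for illustration; the paper attacks HuggingFace Marian models.
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-fr")

ids = tokenizer("They were able to do it.", return_tensors="pt").input_ids
emb_table = model.get_input_embeddings().weight                   # |V_X| x m lookup table
e_adv = emb_table[ids[0]].clone().detach().requires_grad_(True)   # k x m, optimized directly

# Continuous relaxation of Eq. (1): optimize e_adv, then project back to
# valid token embeddings (Section 4).
def objective(adv_loss, sim_loss, lm_loss, alpha=20.0, beta=1.8):
    return adv_loss + alpha * sim_loss + beta * lm_loss
```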
Adversarial Loss. In order to create an adversarial example whose translation is far away from the reference translation $\mathbf{y}_{ref}$, we try to maximize the training loss of the target NMT model. Since NMT models are trained to generate the next token of the translation given the translation up until that token, we look for the adversarial example that maximizes the probability of a wrong translation (i.e., minimizes the probability of the correct translation) for the $i$-th token, given that the NMT model has produced the correct translation up to step $(i-1)$:

$$\mathcal{L}_{Adv} = \frac{1}{l} \sum_{i=1}^{l} \log\left(p_f(y_{ref,i} \mid \mathbf{e}_{\mathbf{x}'}, \{y_{ref,1}, \dots, y_{ref,i-1}\})\right), \tag{2}$$

where $p_f(y_{ref,i} \mid \mathbf{e}_{\mathbf{x}'}, \{y_{ref,1}, \dots, y_{ref,i-1}\})$ is the probability that the NMT model assigns to the reference token $y_{ref,i}$; its negative logarithm is the cross entropy between the predicted token distribution and the delta distribution on the token $y_{ref,i}$, which is one for the correct translated token and zero otherwise. By minimizing $\log(p_f(\cdot))$, normalized by the sentence length $l$, we force the output probability vector of the NMT model to differ from the delta distribution on the token $y_{ref,i}$, which may cause the predicted translation to be wrong.

Similarity Constraint. To ensure that the generated adversarial example is similar to the original sentence, we add a similarity constraint to our optimization problem. It has been shown that the embedding representation of a language model captures the semantics of the tokens (Tenney et al., 2019; Shavarani & Sarkar, 2021). Suppose that the LM embedding representation of the original sentence (which may differ from the NMT embedding representation $\mathbf{e}_{\mathbf{x}}$) is $\mathbf{v}_{\mathbf{x}} = [\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k] \in \mathbb{R}^{k \times n}$, where $n$ is the embedding dimension of the language model. Likewise, let $\mathbf{v}_{\mathbf{x}'}$ denote the sequence of LM embedding vectors for the tokens of the adversarial example. We can define the distance between the $i$-th tokens of the original and adversarial sentences by computing the cosine distance between their corresponding LM embedding vectors:

$$\forall i \in \{1, \dots, k\}: \quad r_i = 1 - \frac{\mathbf{v}_i^\top \mathbf{v}'_i}{\|\mathbf{v}_i\|_2 \, \|\mathbf{v}'_i\|_2}. \tag{3}$$

The cosine distance is zero if the two tokens are the same, and it takes larger values for two unrelated tokens. We want the adversarial sentence to differ from the original sentence in only a few tokens. Therefore, the cosine distance between most of the tokens of the original and adversarial sentences should be zero, which causes the cosine distance vector $[r_1, r_2, \dots, r_k]$ to be sparse. To promote the sparsity of the cosine distance vector, instead of the $\ell_0$ norm, which is not differentiable, we define the similarity constraint as the $\ell_1$-norm relaxation of the cosine distance vector, normalized by the length of the sentence:

$$\mathcal{L}_{Sim} = \frac{1}{k} \sum_{i=1}^{k} \left(1 - \frac{\mathbf{v}_i^\top \mathbf{v}'_i}{\|\mathbf{v}_i\|_2 \, \|\mathbf{v}'_i\|_2}\right). \tag{4}$$

Language Model Loss. Causal language models are trained to maximize the probability of a token given the previous tokens. Hence, we can use the loss of a CLM, i.e., the negative log-probability, as a rough and differentiable measure for the fluency of the generated adversarial sentence. The loss of a CLM, normalized by the sentence length, is:

$$\mathcal{L}_{LM} = -\frac{1}{k} \sum_{i=1}^{k} \log\left(p_g(\mathbf{v}'_i \mid \mathbf{v}'_1, \dots, \mathbf{v}'_{i-1})\right), \tag{5}$$

where $g$ is a CLM and $p_g(\mathbf{v}'_i \mid \mathbf{v}'_1, \dots, \mathbf{v}'_{i-1})$ is the probability that the language model assigns to the token corresponding to $\mathbf{v}'_i$; its negative logarithm is the cross entropy between the predicted token distribution and the delta distribution on the $i$-th adversarial token.
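As an illustration, the three loss terms could be implemented as in the following sketch (a simplification under teacher forcing; `nmt_logits` are the decoder outputs given the reference prefix, `lm_logits` come from the causal LM, and all tensor names are our own assumptions):

```python
import torch
import torch.nn.functional as F

def adv_loss(nmt_logits: torch.Tensor, ref_ids: torch.Tensor) -> torch.Tensor:
    # Eq. (2): average log-probability of the reference tokens; minimizing it
    # pushes the model away from the correct translation.
    log_probs = F.log_softmax(nmt_logits, dim=-1)          # l x |V_Y|
    return log_probs.gather(1, ref_ids.unsqueeze(1)).mean()

def sim_loss(v_x: torch.Tensor, v_adv: torch.Tensor) -> torch.Tensor:
    # Eq. (4): mean token-wise cosine distance between LM embeddings (k x n).
    return (1.0 - F.cosine_similarity(v_x, v_adv, dim=-1)).mean()

def lm_loss(lm_logits: torch.Tensor, adv_ids: torch.Tensor) -> torch.Tensor:
    # Eq. (5): negative log-likelihood of the adversarial tokens under the CLM;
    # the logits at position i-1 predict token i.
    return F.cross_entropy(lm_logits[:-1], adv_ids[1:])
```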
To generate adversarial examples against a target NMT model, we propose to solve the optimization problem (1), which contains an adversarial loss term, a similarity constraint, and a CLM loss.

4 TRANSFOOL ATTACK ALGORITHM

We now introduce our algorithm for generating adversarial examples against NMT models. The block diagram of our proposed attack is presented in Figure 1. We are looking for an adversarial example with tokens in the vocabulary set $\mathcal{V}_X$ and the corresponding embedding vectors in the subspace $\mathcal{E}_{\mathcal{V}_X}$. Hence, the optimization problem (1) is discrete. The high-level idea of our algorithm is to use gradient projection to solve Eq. (1) in the discrete subspace $\mathcal{E}_{\mathcal{V}_X}$.

The objective function of Eq. (1) is a function of the NMT and LM embedding representations of the adversarial example, $\mathbf{e}_{\mathbf{x}'}$ and $\mathbf{v}_{\mathbf{x}'}$, respectively. Since we aim to minimize the optimization problem with respect to $\mathbf{e}_{\mathbf{x}'}$, we need to find a transformation between the embedding space of the language model and that of the target NMT model. To this aim, as depicted in Figure 1, we propose to replace the embedding layer of a pre-trained language model with a Fully Connected (FC) layer, which takes the embedding vectors of the NMT model as its input. Then, we train the language model and the FC layer simultaneously with the causal language modeling objective. Therefore, we can compute the LM embedding vectors as a function of the NMT embedding vectors: $\mathbf{v}_i = FC(\mathbf{e}_i)$, where $FC \in \mathbb{R}^{m \times n}$ is the trained FC layer.

The pseudo-code of our attack can be found in Algorithm 1. In more detail, we first convert the discrete tokens of the sentence to the continuous embedding vectors of the target NMT model, and then we use the FC layer to compute the LM embedding representations of the tokens. Afterwards, we consider the continuous relaxation of the optimization problem, i.e., we assume that the embedding vectors live in the continuous embedding space $\mathcal{E}$ instead of $\mathcal{E}_{\mathcal{V}_X}$. In each iteration of the algorithm, we first update the sequence of embedding vectors $\mathbf{e}_{\mathbf{x}'}$ in the opposite direction of the gradient (gradient descent). Let us denote the output of the gradient descent step for the $i$-th token by $\mathbf{e}_{g,i}$. Then we project the resultant embedding vectors, which are not necessarily in $\mathcal{E}_{\mathcal{V}_X}$, to the nearest token in the vocabulary set $\mathcal{V}_X$. Since the distances in the LM embedding space represent the relationships between the tokens, we use the LM embedding representations with the cosine similarity metric in the projection step to find the most similar token in the vocabulary. We can apply the trained fully connected layer $FC$ to find the LM embedding representations: $\mathbf{v}_g = FC(\mathbf{e}_g)$.

Algorithm 1 TransFool Adversarial Attack
Input: $f(\cdot)$: target NMT model; $\mathcal{V}_X$: vocabulary set; $FC$: fully connected layer; $\mathbf{x}$: input sentence; $\mathbf{y}_{ref}$: ground-truth translation of $\mathbf{x}$; $\lambda$: BLEU score ratio; $\alpha, \beta$: hyperparameters; $K$: maximum number of iterations; $\gamma$: step size
Output: $\mathbf{x}'$: generated adversarial example
Initialization: $s \leftarrow$ empty set, $itr \leftarrow 0$, $thr \leftarrow \text{BLEU}(f(\mathbf{e}_{\mathbf{x}}), \mathbf{y}_{ref}) \times \lambda$, $\forall i \in \{1, \dots, k\}: \mathbf{e}_{g,i}, \mathbf{e}_{p,i} \leftarrow \mathbf{e}_i$
while $itr < K$ do
    $itr \leftarrow itr + 1$
    Step 1 (gradient descent in the continuous embedding space):
        $\mathbf{e}_g \leftarrow \mathbf{e}_g - \gamma \, \nabla_{\mathbf{e}_{\mathbf{x}'}} (\mathcal{L}_{Adv} + \alpha \mathcal{L}_{Sim} + \beta \mathcal{L}_{LM})$
        $\mathbf{v}_g \leftarrow FC(\mathbf{e}_g)$
    Step 2 (projection to the discrete subspace $\mathcal{E}_{\mathcal{V}_X}$, update if the sentence is new):
        for $i \in \{1, \dots, k\}$ do
            $\mathbf{e}_{p,i} \leftarrow \operatorname*{argmax}_{\mathbf{e} \in \mathcal{E}_{\mathcal{V}_X}} \frac{FC(\mathbf{e})^\top \mathbf{v}_{g,i}}{\|FC(\mathbf{e})\|_2 \, \|\mathbf{v}_{g,i}\|_2}$
        end for
        if $\mathbf{e}_p$ not in set $s$ then
            add $\mathbf{e}_p$ to set $s$; $\mathbf{e}_g \leftarrow \mathbf{e}_p$
            if $\text{BLEU}(f(\mathbf{e}_p), \mathbf{y}_{ref}) \leq thr$ then
                break (adversarial example is found)
            end if
        end if
end while
return $\mathbf{e}_{\mathbf{x}'} \leftarrow \mathbf{e}_p$
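The following hypothetical sketch illustrates the NMT-to-LM bridge and the projection step of Algorithm 1. The paper trains a GPT-2-style CLM (with the FC layer replacing its embedding layer) from scratch on the NMT vocabulary; here we reuse the pre-trained "gpt2" checkpoint and invented dimensions purely to illustrate the wiring, and all names are our assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import GPT2LMHeadModel

m, n = 512, 768                      # NMT / LM embedding dimensions (assumed)
fc = nn.Linear(m, n, bias=False)     # v_i = FC(e_i)
lm = GPT2LMHeadModel.from_pretrained("gpt2")

def lm_logits(e_adv: torch.Tensor) -> torch.Tensor:
    # Feed FC-mapped NMT embeddings to the LM instead of its own embedding layer.
    v_adv = fc(e_adv).unsqueeze(0)                  # 1 x k x n
    return lm(inputs_embeds=v_adv).logits.squeeze(0)

def project(e_g: torch.Tensor, emb_table: torch.Tensor) -> torch.Tensor:
    # Step 2 of Algorithm 1: snap every updated embedding to the vocabulary
    # token with maximal cosine similarity in the LM space (Eq. (6) below).
    v_g = F.normalize(fc(e_g), dim=-1)              # k x n
    v_vocab = F.normalize(fc(emb_table), dim=-1)    # |V_X| x n
    nearest = (v_g @ v_vocab.T).argmax(dim=-1)      # k nearest token ids
    return emb_table[nearest]                       # projected k x m embeddings
```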
Hence, the projected NMT embedding vector $\mathbf{e}_{p,i}$ for the $i$-th token is:

$$\mathbf{e}_{p,i} = \operatorname*{argmax}_{\mathbf{e} \in \mathcal{E}_{\mathcal{V}_X}} \; \frac{FC(\mathbf{e})^\top \mathbf{v}_{g,i}}{\|FC(\mathbf{e})\|_2 \, \|\mathbf{v}_{g,i}\|_2}. \tag{6}$$

However, due to the discrete nature of the data, by applying the projection step in every iteration of the algorithm, we may face an undesirable situation where the algorithm gets stuck in a loop of previously computed steps. To circumvent this issue, we only update the embedding vectors with the output of the projection step if the projected sentence has not been generated before. We perform the gradient descent and projection steps iteratively until a maximum number of iterations is reached, or until the translation quality of the adversarial example relative to the original translation quality drops below a threshold. To evaluate the translation quality, we use the BLEU score, which is a widely used metric in the literature:

$$\frac{\text{BLEU}(f(\mathbf{e}_{\mathbf{x}'}), \mathbf{y}_{ref})}{\text{BLEU}(f(\mathbf{e}_{\mathbf{x}}), \mathbf{y}_{ref})} \leq \lambda. \tag{7}$$

5 EXPERIMENTS

In this section, we first discuss our experimental setup, and then we evaluate TransFool against different models and translation tasks, both in white-box and black-box settings.

5.1 EXPERIMENTAL SETUP

We conduct experiments on the English-French (En-Fr), English-German (En-De), and English-Chinese (En-Zh) translation tasks. We use the test set of WMT14 (Bojar et al., 2014) for the En-Fr and En-De tasks, and the test set of OPUS-100 (Zhang et al., 2020a) for the En-Zh task. Some statistics of these datasets are presented in Appendix A.

We evaluate TransFool against transformer-based NMT models. To verify that our attack is effective against various model architectures, we attack the HuggingFace implementation of the Marian NMT models (Junczys-Dowmunt et al., 2018) and the mBART50 multilingual NMT model (Tang et al., 2020). As explained in Section 4, the similarity constraint and the LM loss of the proposed optimization problem require an FC layer and a CLM. To this aim, for each NMT model, we train an FC layer and a CLM (with the GPT-2 structure (Radford et al., 2019)) on the WikiText-103 dataset. We note that the input of the FC layer is the target NMT embedding representation of the input sentence.

To find the minimizer of our optimization problem (1), we use the Adam optimizer (Kingma & Ba, 2014) with step size $\gamma = 0.016$. Moreover, we set the maximum number of iterations to 500. Our algorithm has three parameters: the coefficients $\alpha$ and $\beta$ in the optimization function (1), and the relative BLEU score ratio $\lambda$ in the stopping criterion (7). We set $\lambda = 0.4$, $\beta = 1.8$, and $\alpha = 20$. We chose these parameters experimentally according to the ablation study available in Appendix B, in order to optimize the performance in terms of success rate, semantic similarity, and fluency.

We compare our attack with (Michel et al., 2019), a white-box untargeted attack against NMT models.1 We only consider one of their attacks, called kNN, which substitutes some words with their neighbors in the embedding space; their other attack swaps characters, which is too easy to detect. We also adapted Seq2Sick (Cheng et al., 2020a), a targeted attack against NMT models based on an optimization problem in the NMT embedding space, to our untargeted setting. For evaluation, we report different performance metrics: (1) Attack Success Rate (ASR), which measures the rate of successful adversarial examples; similar to (Ebrahimi et al., 2018a), we define an adversarial example as successful if the BLEU score of its translation is less than half of the BLEU score of the original translation.
(2) Relative decrease of translation quality, measured in terms of BLEU score2 and chrF (Popović, 2015); we denote these two metrics by RDBLEU and RDchrF, respectively, and we choose to compute the relative decrease in translation quality so that scores are comparable across different models and datasets (Michel et al., 2019). (3) Semantic Similarity (Sim.), computed between the original and adversarial sentences and commonly approximated by the Universal Sentence Encoder (Yang et al., 2020)3. (4) Perplexity score (Perp.), a measure of the fluency of the adversarial example, computed with GPT-2 (large). (5) Token Error Rate (TER), which measures the imperceptibility by computing the rate of tokens modified by an adversarial attack.

5.2 RESULTS OF THE WHITE-BOX ATTACK

We now evaluate TransFool in comparison to kNN and Seq2Sick against different NMT models. Table 1 shows the results in terms of the different evaluation metrics.4 Overall, our attack is able to decrease the BLEU score of the target model to less than half of the BLEU score of the original translation for more than 60% of the sentences for all tasks and models (except for the En-Zh mBART50 model, where the ASR is 57.50%). Also, in all cases, the semantic similarity is more than 0.83, which shows that our attack can maintain a high level of semantic similarity with the clean sentences.

In comparison to the baselines, TransFool obtains a higher success rate against different model structures and translation tasks, and it is able to reduce the translation quality more severely. Since the algorithm uses the gradients of the proposed optimization problem and is not based on token replacement, TransFool can strongly degrade the translation quality. Furthermore, the perplexity score of the adversarial examples generated by TransFool is much lower than that of both baselines (except for the En-Fr Marian model, where it is slightly higher than that of Seq2Sick), which is due to the integration of the LM embeddings and the LM loss term in the optimization problem. Moreover, the token error rate of our attack is lower than that of both baselines, and the semantic similarity is preserved better by TransFool in almost all cases, since we use the LM embeddings instead of the NMT ones in the similarity constraint. While kNN can also maintain semantic similarity, Seq2Sick does not perform well on this criterion. We also computed similarity with BERTScore (Zhang et al., 2019) and BLEURT-20 (Sellam et al., 2020), which highly correlate with human judgments, in Appendix D; the results show that TransFool is better than both baselines at maintaining the semantics. Moreover, as presented in Appendix D.2, the successful attacks by the baselines, as opposed to those of TransFool, are not semantic-preserving or fluent sentences.

1Code of (Cheng et al., 2019; 2020b), untargeted white-box attacks against NMTs, is not publicly available. 2We use case-sensitive SacreBLEU (Post, 2018) on detokenized sentences. 3We use the multilingual version since we are dealing with multiple languages. 4We discard the sentences whose original BLEU score is zero to prevent improving the results artificially. We should also note that all results are computed after the re-tokenization of the adversarial example. Since we are generating the adversarial example at the token level, there is a small chance that, when the generated adversarial example is converted to text, the re-tokenization does not produce the same set of tokens.
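For reference, the success criterion and RDBLEU above can be computed with the sacrebleu package as in the following sketch (variable names are our own; sentences with zero original BLEU are discarded, as in footnote 4):

```python
import sacrebleu

def score_attack(orig_translation: str, adv_translation: str, reference: str):
    # Sentence-level BLEU of the translations before and after the attack.
    bleu_orig = sacrebleu.sentence_bleu(orig_translation, [reference]).score
    bleu_adv = sacrebleu.sentence_bleu(adv_translation, [reference]).score
    if bleu_orig == 0:
        return None                                 # discarded (footnote 4)
    rd_bleu = (bleu_orig - bleu_adv) / bleu_orig    # relative decrease
    success = bleu_adv < 0.5 * bleu_orig            # ASR criterion
    return rd_bleu, success
```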
Finally, the complete setup and results of our human evaluation are presented in Appendix H, which also shows the superiority of TransFool.

We also compare the runtime of TransFool with that of the two baselines. In each iteration of our proposed attack, we need to perform a back-propagation through the target NMT model and the language model to compute the gradients. Also, in some iterations (27 iterations per sentence on average), a forward pass is required to compute the output of the target NMT model to check the stopping criterion. For the Marian NMT (En-Fr) model, on a system equipped with an NVIDIA A100 GPU, it takes 26.45 seconds for TransFool to generate an adversarial example. On the same system, kNN needs 1.45 seconds and Seq2Sick needs 38.85 seconds, but both produce less effective adversarial attacks.

Table 2 shows some adversarial examples against mBART50 (En-De). In comparison to the baselines, TransFool makes smaller changes to the sentence: the generated adversarial example is a correct English sentence, and it is similar to the original sentence. However, kNN and Seq2Sick generate adversarial sentences that are not necessarily natural or similar to the original sentences. More examples generated by TransFool, kNN, and Seq2Sick can be found in Appendix D.2. We also provide some adversarial sentences obtained when we do not use the LM embeddings in our algorithm, in order to show the importance of this component. Indeed, TransFool outperforms both baselines in terms of success rate, and it is able to generate more natural adversarial examples with a lower number of perturbations (TER) and higher semantic similarity with the clean samples in almost all cases. A complete study of the hyperparameters and of the effect of using LM embeddings instead of NMT embeddings for computing similarity is presented in Appendices B and C, respectively.

5.3 PERFORMANCE IN BLACK-BOX ATTACK SETTINGS

In practice, the adversary's access to the learning system may be limited. Hence, we propose to analyze the performance of TransFool in a black-box scenario. It has been shown that adversarial attacks often transfer to another model that has a different architecture and is even trained with different datasets (Szegedy et al., 2014). By utilizing this property of adversarial attacks, we extend TransFool to the black-box scenario. We consider that we have complete access to one NMT model (the reference model), including its gradients. We implement the proposed gradient-based attack of Algorithm 1 with this model; however, for the stopping criterion of the algorithm, we query the black-box target NMT model to compute the BLEU score. We can also implement the black-box transfer attack in the case where the source languages of the reference model and the target model are the same, but their target languages are different.

Since Marian NMT is faster and lighter than mBART50, we use it as the reference model and evaluate the performance of the black-box attack against mBART50. We compare the performance of TransFool with WSLS (Zhang et al., 2021), a black-box untargeted attack against NMT models based on word replacement (the choice of back-translation model used in WSLS is investigated in Appendix F). We also evaluate the performance of kNN and Seq2Sick in the black-box setting by attacking mBART50 with the adversarial examples generated against Marian NMT (in the white-box setting). The results are reported in Table 3.
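The black-box variant can be wired as in the following sketch, where `transfool_step` stands for one gradient-projection iteration of Algorithm 1 on the white-box reference model and is an assumed helper (as are the model and tokenizer handles):

```python
import sacrebleu
import torch

def translate(model, tokenizer, sentence: str) -> str:
    # One query to a (possibly black-box) NMT model.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(ids)
    return tokenizer.decode(out[0], skip_special_tokens=True)

def blackbox_attack(x, y_ref, ref_model, ref_tok, tgt_model, tgt_tok,
                    lam=0.4, max_iters=500):
    # Gradients come from the white-box reference model (e.g., Marian NMT);
    # the black-box target (e.g., mBART50) is only queried for the BLEU check.
    bleu_orig = sacrebleu.sentence_bleu(
        translate(tgt_model, tgt_tok, x), [y_ref]).score
    adv = x
    for _ in range(max_iters):
        adv = transfool_step(ref_model, ref_tok, adv, y_ref)   # assumed helper
        bleu_adv = sacrebleu.sentence_bleu(
            translate(tgt_model, tgt_tok, adv), [y_ref]).score
        if bleu_adv <= lam * bleu_orig:
            break
    return adv
```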
We also report the performance when attacking Google Translate, some generated adversarial samples, and the similarity performance computed by BERTScore and BLEURT-20 in Appendix E. In all tasks, with a few queries to the target model, our black-box attack achieves better performance than the white-box attack against the target model (mBART50), but slightly worse performance than the white-box attack against the reference model (Marian NMT). In all cases, the success rate, token error rate, and perplexity of TransFool are better than those of all baselines (except for the En-Fr task, where the perplexity is slightly higher than that of Seq2Sick). The ability of TransFool and WSLS to maintain semantic similarity is comparable and better than that of the other baselines. However, WSLS has the highest token error rate, which makes the attack detectable. The effect of TransFool on the BLEU score is larger than that of the other methods, and its effect on the chrF metric comes after WSLS (except for the En-De task, where the RDchrF of TransFool is the best).

Regarding the complexity, TransFool requires only a few queries to the target model for translation, while WSLS queries the model more than a thousand times, which is costly and may not be feasible in practice. For the En-Fr task, on a system equipped with an NVIDIA A100 GPU, it takes 43.36 and 1904.98 seconds to generate adversarial examples by TransFool and WSLS, respectively, which shows that WSLS is very time-consuming.

We also analyze the transferability of the generated adversarial examples to a black-box NMT model with the same source language but a different target language. Since we need a dataset with the same set of sentences for different language pairs, we use the validation set of WMT14 for the En-Fr and En-De tasks. Table 4 shows the results for two cases: Marian NMT or mBART50 as the target model. We use Marian NMT as the reference model, with a different target language than that of the target model. In all settings, the generated adversarial examples are highly transferable to another NMT model with a different target language (i.e., they have a high attack success rate and large semantic similarity). The high transferability of TransFool shows that it is able to capture the common failure modes of different NMT models, which can be dangerous in real-world applications.

6 CONCLUSION

In this paper, we proposed TransFool, a white-box adversarial attack against NMT models, by introducing a new optimization problem solved by an iterative method based on gradient projection. We utilized the embedding representation of a language model to impose a similarity constraint on the adversarial examples. Moreover, by considering the loss of a language model in our optimization problem, the generated adversarial examples are more fluent. Extensive automatic and human evaluations show that TransFool is highly effective in different translation tasks and against different NMT models. Our attack is also transferable to black-box settings with different structures and even different target languages. In both white-box and black-box scenarios, TransFool obtains improvement over the baselines in terms of success rate, semantic similarity, and fluency. It is important to analyze adversarial attacks against NMT models, such as TransFool, to find the vulnerabilities of NMT models, measure their robustness, and eventually build more robust NMT models.
Ethics Statement

We introduced TransFool, an adversarial attack against NMT models, with the motivation of revealing the vulnerabilities of NMT models and paving the way for designing stronger defenses and building robust NMT models for real-life scenarios. While it remains a possibility that a threat actor may misuse our attack, we do not condone using our method with the intent of attacking a real NMT system.

Reproducibility Statement

The source code will be publicly available as soon as possible to help reproduce our results. Moreover, Appendix G contains the license information and more details of the assets (datasets, codes, and models).

Supplementary Material
TransFool: An Adversarial Attack against Neural Machine Translation Models

ABSTRACT

In this supplementary material, we first provide some statistics of the evaluation datasets in Section A. The ablation study of the hyperparameters of TransFool is presented in Section B. We investigate the effect of the LM embedding representation on TransFool and kNN in Section C. More results of the white-box attack are reported in Section D: the results of other similarity metrics (Section D.1), performance over successful attacks (Section D.2), and some generated adversarial examples (Section D.4). Section E provides more experiments on the black-box attack: the performance of attacking Google Translate (Section E.1), results of other similarity metrics (Section E.2), and some generated adversarial examples (Section E.3). We discuss the effect of the back-translation model choice on WSLS in Section F. Finally, the license information and more details of the assets (datasets, codes, and models) are provided in Section G.

A SOME STATISTICS OF THE DATASETS

Some statistics of the evaluation datasets, i.e., OPUS-100 (En-Zh) and WMT14 (En-Fr and En-De), including the number of samples, the average length of the sentences, and the translation quality of Marian NMT and mBART50, are reported in Table 5.

B ABLATION STUDY

In this section, we analyze the effect of different hyperparameters (including the coefficients $\alpha$ and $\beta$ in our optimization problem (1), the step size of the gradient descent $\gamma$, and the relative BLEU score ratio $\lambda$ in the stopping criterion of Eq. (7)) on the white-box attack performance, in terms of success rate, semantic similarity, and perplexity score. In all the experiments, we consider the English-to-French Marian NMT model and evaluate over the first 1000 sentences of the test set of WMT14. The default values for the hyperparameters, except for the one that varies in each experiment, are: $\alpha = 20$, $\beta = 1.8$, $\gamma = 0.016$, and $\lambda = 0.4$.

Effect of the similarity coefficient $\alpha$. This hyperparameter determines the strength of the similarity term in the optimization problem (1). Figure 2a shows the effect of $\alpha$ on the performance of our attack. By increasing the similarity coefficient of the proposed optimization problem, we force our algorithm to find adversarial sentences that are more similar to the original sentence. Therefore, as shown in Figure 2a, larger values of $\alpha$ result in higher semantic similarity. However, in this case, it is harder to fool the NMT model, i.e., we obtain a lower attack success rate, RDBLEU, and RDchrF. Moreover, since the generated adversarial examples are more similar to the original sentence, they are more natural, and their perplexity score is lower.
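As an illustration, the sweep behind Figure 2a can be reproduced with a loop of the following form, where `attack_batch` is an assumed wrapper around Algorithm 1 that returns the aggregate metrics over an (assumed) list of evaluation sentences:

```python
# Hypothetical ablation loop over the similarity coefficient alpha; the values
# match the range studied in Figure 2a, and attack_batch is an assumed helper.
for alpha in [5, 15, 25, 35]:
    asr, sim, perp = attack_batch(sentences, alpha=alpha, beta=1.8,
                                  gamma=0.016, lam=0.4)
    print(f"alpha={alpha}: ASR={asr:.1f}%  Sim.={sim:.2f}  Perp.={perp:.1f}")
```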
Effect of the language model loss coefficient $\beta$. We analyze the impact of the hyperparameter $\beta$, which controls the importance of the language model loss term in the proposed optimization problem, in Figure 2b. By increasing this coefficient, we weaken the effect of the similarity term, i.e., the generated adversarial examples are less similar to the original sentence. As a result, the success rate and the effect on translation quality, i.e., RDBLEU and RDchrF, increase.

Effect of the step size $\gamma$. The step size of the gradient descent step of the algorithm can impact the performance of our attack, which is investigated in Figure 2c. Increasing the step size results in larger movements in the embedding space in each iteration of the algorithm. Hence, the generated adversarial examples are more aggressive, which results in lower semantic similarity and higher perplexity scores. However, we can find adversarial examples more easily and achieve a higher attack success rate, RDBLEU, and RDchrF.

Effect of the BLEU score ratio $\lambda$. This hyperparameter determines the stopping criterion of our iterative algorithm. Figure 2d studies the effect of this hyperparameter on the performance of our attack. As this figure shows, a higher BLEU score ratio causes the algorithm to stop in earlier iterations. Therefore, the changes applied to the sentence are less aggressive, and hence, we achieve higher semantic similarity and a lower perplexity score. However, the attack success rate, RDBLEU, and RDchrF decrease, since we make fewer changes to the sentences.

[Figure 2: Effect of different hyperparameters on the performance of TransFool. Panels (a)-(d) show the attack success rate, semantic similarity, and perplexity score as a function of the similarity coefficient, the LM loss coefficient, the step size, and the BLEU score ratio, respectively; panels (e)-(h) show RDBLEU and RDchrF for the same hyperparameters.]

C EFFECT OF THE LM EMBEDDING REPRESENTATION

Table 6 shows the results of TransFool and kNN when we use the LM embeddings or the NMT embeddings for measuring the similarity between two tokens.5 The LM embeddings result in lower perplexity and higher semantic similarity for both methods, which demonstrates the importance of this component in generating meaning-preserving, fluent adversarial examples.

5In order to have a fair comparison, we fine-tuned the hyperparameters of TransFool, in the case when we do not use LM embeddings, to obtain a similar attack success rate.

D MORE RESULTS ON THE WHITE-BOX ATTACK

D.1 SEMANTIC SIMILARITY COMPUTED BY OTHER METRICS

To better assess the ability of adversarial attacks to maintain semantic similarity, we can compute the similarity between the original and adversarial sentences using other metrics, such as BERTScore (Zhang et al., 2019) and BLEURT-20 (Sellam et al., 2020). It is shown in (Zhang et al., 2019) that BERTScore correlates well with human judgments.
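A minimal sketch of this computation with the bert-score package follows (the example sentence pair and the language flag are our own assumptions):

```python
from bert_score import score

original_sentences = ["They were able to do it."]
adversarial_sentences = ["They were capable to do it."]

# Token-level BERT similarity aggregated into precision/recall/F1 per pair;
# used here as a proxy for semantic preservation in the source language.
P, R, F1 = score(adversarial_sentences, original_sentences, lang="en")
print(f"mean BERTScore F1: {F1.mean().item():.3f}")
```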
BLEURT-20 has also been shown to correlate better with human judgment than traditional measures (Freitag et al., 2021). The results are reported in Table 7. These results indicate that TransFool is indeed more capable of preserving the semantics of the input sentence. In the two cases where kNN has better similarity according to the Universal Sentence Encoder (USE) (Yang et al., 2020), the performance of TransFool is better in terms of BERTScore and BLEURT-20.

D.2 PERFORMANCE OVER SUCCESSFUL ATTACKS

The evaluation metrics of the successful adversarial examples that strongly affect the translation quality are also important, as they show the capability of the adversarial attack. Hence, we evaluate TransFool, kNN, and Seq2Sick only over the successful adversarial examples.6 The results for the white-box setting are presented in Table 8. By comparing this table and Table 1, which shows the results on the whole dataset, we can see that the performance of TransFool is consistent between successful and unsuccessful attacks. Moreover, the successful adversarial examples generated by TransFool are still semantically similar to the original sentences, and their perplexity score is low. However, the successful adversarial examples generated by Seq2Sick and kNN do not preserve semantic similarity and are not fluent sentences; hence, they are not valid adversarial sentences.

6As defined in Section 5, an adversarial example is successful if the BLEU score of its translation is less than half of the BLEU score of the original translation.

D.3 TRADE-OFF BETWEEN SUCCESS RATE AND SIMILARITY/FLUENCY

The results of our ablation study (Appendix B) show that there is a trade-off between the quality of the adversarial examples, in terms of semantic preservation and fluency, and the attack success rate. As studied in (Morris et al., 2020), we can filter adversarial examples of low quality based on hard constraints on the semantic similarity and on the number of added grammatical errors caused by the adversarial perturbations. We can analyze the trade-off between success rate and similarity/fluency by setting different thresholds for filtering adversarial examples. If we evaluate the similarity with the sentence encoder suggested in (Morris et al., 2020), the success rate for different threshold values of similarity in the case of Marian (En-Fr) is depicted in Figure 3b. By considering only the adversarial examples with a similarity higher than a threshold, the success rate decreases as the threshold increases, while the quality of the adversarial examples increases. Similarly, we can perform the same analysis for fluency. As suggested in (Morris et al., 2020), we count the grammatical errors with LanguageTool (Naber et al., 2003) for the original sentences and the adversarial examples. Figure 3a depicts the success rate for different thresholds on the number of added grammatical errors caused by the adversarial perturbations. These analyses show that with tighter constraints, we can generate better adversarial examples, while the success rate decreases. All in all, according to these results, TransFool outperforms the baselines for the different thresholds of similarity and grammatical errors.

D.4 MORE ADVERSARIAL EXAMPLES

In this section, we present more adversarial examples generated by TransFool, kNN, and Seq2Sick. In order to show the effect of using LM embeddings on the performance of TransFool, we also include the adversarial examples generated against the English-to-French Marian NMT model when we do not use LM embeddings.
In all these tables, the tokens modified by TransFool are written in blue in the original sentence, and the tokens modified by the different adversarial attacks are written in red in their corresponding adversarial sentences. Moreover, the changes made by the adversarial attack to the translation that are not directly related to the modified tokens are written in orange, while the changes that are the direct result of the modified tokens are written in brown.

As can be seen in the examples presented in Tables 9 and 10, TransFool makes smaller changes to the sentence. The generated adversarial example is a correct English sentence, and it is similar to the original sentence. However, kNN, Seq2Sick, and our method with the NMT embeddings make changes that are perceptible, and the adversarial sentences are not necessarily similar to the original sentence. The higher semantic similarity of the adversarial sentences generated by TransFool is due to the integration of the LM embeddings and the LM loss in the proposed optimization problem. We should highlight that TransFool is able to make changes to the translation of the adversarial sentence that are not directly related to the modifications of the original sentence but are the result of the NMT model's failure. Other examples against different tasks and models are presented in Tables 11 to 16.

E MORE RESULTS ON THE BLACK-BOX ATTACK

E.1 ATTACKING GOOGLE TRANSLATE

To evaluate the effect of the different attacks in practice, we attack Google Translate7 with TransFool, kNN, and Seq2Sick. Since the number of queries to Google Translate is limited per day, we were not able to attack with WSLS, which requires a high number of queries. Table 17 presents the performance on the English-to-French translation task. The results demonstrate that the adversarial sentences crafted by TransFool degrade the translation quality more while preserving the semantics better. The perplexity score and word error rate of TransFool compete with those of Seq2Sick, but Seq2Sick is not meaning-preserving and is less effective. We also performed the cross-lingual black-box attack: we consider Marian NMT (En-Fr) as the reference model and attack the En-De Google Translate model. The results for TransFool are reported in Table 18.

E.2 SEMANTIC SIMILARITY COMPUTED BY OTHER METRICS

Similar to the white-box attack, we compute the similarity between the adversarial and original sentences with BERTScore and BLEURT-20, since they correlate well with human judgments. The similarity performance of TransFool and WSLS8 in the black-box setting is reported in Table 19. According to Table 19, TransFool is better at maintaining semantic similarity, which may be because we used the LM embeddings instead of the NMT ones in the similarity constraint.

E.3 SOME ADVERSARIAL EXAMPLES

We also present some adversarial examples generated by TransFool and WSLS, in the black-box setting, in Tables 20 to 22. In these tables, the tokens modified by TransFool are written in blue in the original sentence, and the tokens modified by the different adversarial attacks are written in red in their corresponding adversarial sentences. Moreover, the changes made by the adversarial attack to the translation that are not directly related to the modified tokens are written in orange, while the changes that are the direct result of the modified tokens are written in brown. These examples show that the modifications made by TransFool are less detectable, i.e., the generated adversarial examples are more natural and similar to the original sentence.
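Since Google Translate exposes no tokenizer, the attacks above are scored with word error rate instead of token error rate (footnote 7); a minimal sketch with the jiwer package follows (the sentence pair is illustrative):

```python
import jiwer

original = "They were able to do it."
adversarial = "They were capable to do it."

# Word error rate between the original and adversarial source sentences,
# used in place of the token error rate when no tokenizer is available.
print(f"WER: {jiwer.wer(original, adversarial):.3f}")
```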
Moreover, TransFool makes changes to the translation that are not the direct result of the modified tokens of the adversarial sentence. 7We should note that since we do not have a tokenizer, we compute Word Error Rate (WER) instead of Token Error Rate (TER). 8The results of kNN and Seq2Sick are not reported since they are transfer attacks, and their performance is already reported in Table 7. F EFFECT OF BACK-TRANSLATION MODEL CHOICE ON WSLS PERFORMANCE WSLS uses a back-translation model for crafting an adversarial example. In (Zhang et al., 2021), the authors investigate the En-De task and use the winner model of the WMT19 DeEn sub-track (Ng et al., 2019) for the back-translation model. However, they do not evaluate their method for En-Fr and En-Zh tasks. To evaluate the performance of WSLS in Table 3, We have used pre-trained Marian NMT models for all three back-translation models. In order to show the effect of our choice of back-translation model, we compare the performance of WSLS for the En-De task when we use Marian NMT or (Ng et al., 2019) as the back-translation model in Table 23. As this Table shows, WSLS with Marian NMT as the back-translation model results in even more semantic similarity and lower perplexity score. On the other hand, WSLS with (Ng et al., 2019) as the back-translation model has a slightly more success rate. These results show that our choice of back-translation model does not highly affect the performance of WSLS. G LICENSE INFORMATION AND DETAILS In this Section, we provide some details about the datasets, codes, and models used in this paper. We should note that we used the models and datasets that are available in HuggingFace transformers (Wolf et al., 2020) and datasets (Lhoest et al., 2021) libraries.9 They are licensed under Apache License 2.0. Moreover, we used PyTorch for all experiments (Paszke et al., 2019), which is released under the BSD license10. G.1 DATASETS WMT14 In the Ninth Workshop on Statistical Machine Translation, WMT14 was introduced for four tasks. We used the En-De and En-Fr news translation tasks. There is no license available for this dataset. OPUS-100 OPUS-100 is a multilingual translation corpus for 100 languages, which is randomly sampled from the OPUS collection (Tiedemann, 2012). There is no license available for this dataset. G.2 MODELS Marian NMT Marian is a Neural Machine Translation framework, which is mainly developed by the Microsoft Translator team, and it is released under MIT License11. This model uses a beam size of 4. mBART50 mBART50 is a multilingual machine translation model of 50 languages, which has been introduced by Facebook. This model is published in the Fairseq library, which is released under MIT License12. This model uses a beam size of 5. 9These two libraries are available at this GitHub repository: https://github.com/huggingface. 10https://github.com/pytorch/pytorch/blob/master/LICENSE 11https://github.com/marian-nmt/marian/blob/master/LICENSE.md 12https://github.com/facebookresearch/fairseq/blob/main/LICENSE G.3 CODES kNN In order to compare our method with kNN (Michel et al., 2019), we used the code provided by the authors, which is released under the BSD 3-Clause "New" or "Revised" License.13 Seq2Sick To compare our method with Seq2Sick (Cheng et al., 2020a), we used the code published by the authors.14 There is no license available for their code. 
WSLS We implemented and evaluated WSLS (Zhang et al., 2021) using the source code published by the authors.15 There is no license available for this GitHub repository.

H HUMAN EVALUATION

We conduct a preliminary human evaluation campaign of the TransFool, kNN, and Seq2Sick attacks on Marian NMT (En-Fr) in the white-box setting. We randomly choose 90 sentences from the test set of the WMT14 (En-Fr) dataset, along with the adversarial samples and their translations by the NMT model. We split the 90 sentences into three different surveys to obtain a manageable size for each annotator, and we recruited two annotators for each survey. For the English surveys, we ensure that the annotators are highly proficient English speakers; similarly, for the French survey, we ensure that the annotators are highly proficient in French. Before starting the rating task, we provided annotators with detailed guidelines similar to (Cer et al., 2017; Michel et al., 2019). The task is to rate the sentences for each criterion on a continuous scale (0-100), inspired by WMT18 practice (Ma et al., 2018) and Direct Assessment (Graham et al., 2013; 2017). For each sentence, we evaluate three aspects in three different surveys:

• Fluency: We show the three adversarial sentences and the original sentence on the same page (in random order). We ask the annotators how much they agree with the statement "The sentence is fluent." for each sentence.
• Semantic preservation: We show the original sentence on top and the three adversarial sentences afterwards (in random order). We ask the annotators how much they agree with the statement "The sentence is similar to the reference text." for each sentence.
• Translation quality: Inspired by monolingual direct assessment (Ma et al., 2018; Graham et al., 2013; 2017), we evaluate the translation quality by showing the reference translation on top and the translations of the three adversarial sentences afterwards (in random order). We ask the annotators how much they agree with the statement "The sentence is similar to the reference text." for each translation.

We calculate 95% confidence intervals by using 15K bootstrap replications. The results are depicted in Figure 4 and demonstrate that the adversarial examples generated by TransFool are more semantics-preserving and fluent than those of both baselines. According to the guide provided to the annotators for semantic similarity, the score of 67.8 indicates that the two sentences are roughly equivalent, but some details may differ. Moreover, a fluency score of 66.4 demonstrates that, although the adversarial examples generated by TransFool are more fluent than those of the baselines, there is still room for improvement in this regard.

We follow the direct assessment strategy to measure the effectiveness of the adversarial attacks on translation quality. According to (Ma et al., 2018), since a sufficient level of agreement on translation quality is difficult to achieve with human evaluation, direct assessment simplifies the task to a monolingual assessment instead of a bilingual one. The similarity of the translations of the adversarial sentences to the reference translation is shown in Figure 4c. The translation similarity of Seq2Sick is worse than that of the other attacks; however, its similarity in the source language is also worse, so the raw translation similarity alone does not reflect attack effectiveness.
Therefore, we compute the decrease in similarity (between the original and adversarial sentences) from the source language to the target language. The results in Figure 4d show that all attacks affect the translation quality and that the effect of TransFool is more pronounced than that of both baselines.

Finally, we calculate the Inter-Annotator Agreement (IAA). There are two human judgments for each sentence, and we average the two scores to compute the final score for each sentence. To ensure that the two annotators agree, we only consider sentences whose two corresponding scores differ by less than 30. We compute the IAA in terms of the Pearson correlation coefficient instead of the commonly used Cohen's Kappa, since the scores are on a continuous scale. The results are presented in Table 24. Overall, we conclude that we achieve a reasonable inter-annotator agreement for all sentence types and evaluation metrics.

13 The source code is available at https://github.com/pmichel31415/translate/tree/paul/pytorch_translate/research/adversarial/experiments and the license is available at https://github.com/pmichel31415/translate/blob/paul/LICENSE
14 The source code is available at https://github.com/cmhcbb/Seq2Sick.
15 https://github.com/JHL-HUST/AdvNMT-WSLS/tree/79945881f75d92ae44e9ebc10500d8590c09bb13
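For concreteness, the agreement computation described above can be sketched as follows. This is a minimal sketch: the rating arrays and the example values are hypothetical, while the filtering rule and the use of Pearson correlation follow the text.

import numpy as np
from scipy.stats import pearsonr

def inter_annotator_agreement(scores_a, scores_b, max_diff=30):
    """Pearson IAA over sentence pairs whose two ratings differ by less
    than `max_diff`, mirroring the filtering described above."""
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    keep = np.abs(a - b) < max_diff          # discard strong disagreements
    r, p_value = pearsonr(a[keep], b[keep])  # continuous scores: Pearson, not Cohen's Kappa
    return r, p_value

# hypothetical ratings of two annotators for 5 sentences on the 0-100 scale
r, p = inter_annotator_agreement([70, 55, 90, 20, 65], [60, 55, 80, 75, 70])
print(f"IAA (Pearson r) = {r:.2f}")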
1. What is the focus and contribution of the paper regarding adversarial attacks on NMT models?
2. What are the strengths and weaknesses of the proposed approach, particularly in its novelty and engineering tricks?
3. Do you have any concerns or questions about the FC part in the approach and its similarity to BERT-like architectures?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, especially in its proposal of a new loss and architecture?
5. Are there any suggestions or requests for additional investigations or metrics to be included in the study, such as BERT scores for evaluating the distortion of generated adversarial sequences?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes a new approach for adversarial attacks on NMT models. The authors design a way to propagate the gradient for embeddings and propose a specific loss that targets reasonable criteria. The propagation is based on the common idea of relaxation, with a new additional FC part to create embeddings for comparison. The metrics are good, and overall, the results are promising.

Strengths And Weaknesses
Strengths:
• Quality metrics are better than those of competitors.
• Experiments are interesting and numerous. They include human evaluation as well.
• Nice adversarial example found in Table 2.

Weaknesses (methods):
• The novelty is limited. The paper contains some engineering tricks to make everything work, but all ideas are similar to those previously discussed in the literature.
• It is not clear why we need the FC part in the approach. We can use BERT-like architectures to generate similar differentiable v_i's as well. What would be the difference in this case?
• The investigation of the distortion of the generated adversarial sequence x' is limited and includes only "Semantic Similarity" from Yang et al. Can you include metrics like BERTScore for a pair of x and x' as well?

Clarity, Quality, Novelty And Reproducibility
A new loss is proposed that consists of three terms; the ideas behind these have been out there for some time. A new architecture is proposed to find adversarial sequences; it is an interesting modification of a common one. I also imagine that the observed behaviour can be related to the requirement to include a projection layer in self-supervised learning or in other settings.
ICLR
Moreover, in gradient-based methods, it is challenging to incorporate linguistic constraints in a differentiable manner. Hence, optimization-based adversarial attacks against NLP models are more difficult to craft and less investigated. Currently, most attacks on textual data are gradient-free and simply based on heuristic word replacement, which may result in sub-optimal performance (Alzantot et al., 2018; Ren et al., 2019; Zang et al., 2020; Jin et al., 2020; Morris et al., 2020; Guo et al., 2021; Sadrizadeh et al., 2022).

In the literature, adversarial attacks have mainly been studied for text classifiers, and less for other NLP tasks such as Neural Machine Translation (NMT) (Zhang et al., 2020b). In text classifiers, the number of output labels of the model is limited, and the adversary's goal is to mislead the target model into classifying the input into any wrong class (untargeted attack) or a wrong predetermined class (targeted attack). However, in NMT systems, the output of the target model is a sequence of tokens, which lives in a much larger space than that of a text classifier (Cheng et al., 2020a), and it is probable that the ground-truth translation changes after perturbing the input sequence. Hence, it is important to craft meaning-preserving adversarial sentences with a low impact on the ground-truth translation.

In this paper, we propose TransFool to build meaning-preserving and fluent adversarial attacks against NMT models. We build a new solution to the challenges associated with gradient-based adversarial attacks against textual data. To find an adversarial sentence that is fluent and semantically similar to the input sentence but highly degrades the translation quality of the target model, we propose a multi-term optimization problem over the tokens of the adversarial example. We consider the white-box attack setting, where the adversary has access to the target model and its parameters. White-box attacks are widely studied since they reveal the vulnerabilities of the systems and are used in benchmarks.

To ensure that the generated adversarial examples are imperceptibly similar to the original sentences, we incorporate a Language Model (LM) in our method in two ways. First, we consider the loss of a Causal Language Model (CLM) in our optimization problem in order to impose the syntactic correctness of the adversarial example. Second, by working with the embedding representation of LMs, instead of that of the NMT model, we ensure that similar tokens are close to each other in the embedding space (Tenney et al., 2019). This enables the definition of a similarity term between the respective tokens of the clean and adversarial sequences. Hence, we include a similarity constraint in the proposed optimization problem, which uses the LM embeddings. Finally, our optimization contains an adversarial term to maximize the loss of the target NMT model.

The generated adversarial example, i.e., the minimizer of the proposed optimization problem, should consist of meaningful tokens, and hence the proposed optimization problem should be solved in a discrete space. Using a gradient projection technique, we first perform a gradient descent step in the continuous embedding space and then project the resulting embedding vectors to the most similar valid tokens. In the projection step, we use the LM embedding representation and project the output of the gradient descent step onto the nearest meaningful token in the embedding space (with maximum cosine similarity).
We test our method against different NMT models with transformer structures, which are now widely used for their exceptional performance. For different NMT architectures and translation tasks, experiments show that our white-box attack can reduce the BLEU score, a widely used metric for translation quality evaluation (Post, 2018), to half of its original value for more than 60% of the sentences, while maintaining a high level of semantic similarity with the clean samples. Furthermore, we extend TransFool to black-box settings and show that it can fool unknown target models. Overall, automatic and human evaluations show that, in both white-box and black-box settings, TransFool outperforms the existing heuristic strategies in terms of success rate, semantic similarity, and fluency. In summary, our contributions are as follows:

• We define a new optimization problem to compute semantics-preserving and fluent attacks against NMT models. The objective function contains several terms: an adversarial loss to maximize the loss of the target NMT model; a similarity term to ensure that the adversarial example is similar to the original sentence; and the loss of a CLM to generate fluent and natural adversarial examples.
• We propose a new strategy to incorporate linguistic constraints in our attack in a differentiable manner. Since LM embeddings provide a meaningful representation of the tokens, we use them instead of the NMT embeddings to compute the similarity between two tokens.
• We design a white-box attack algorithm, TransFool, against NMT models by solving the proposed optimization problem with gradient projection. Our attack, which operates at the token level, is effective against state-of-the-art transformer-based NMT models and outperforms prior works.
• By using the transferability of adversarial attacks to other models, we extend the proposed white-box attack to the black-box setting. Our attack is highly effective even when the target languages of the target NMT model and the reference model are different. To our knowledge, this type of cross-lingual transfer attack has not been investigated before.

The rest of the paper is organized as follows. We review the related work in Section 2. In Section 3, we formulate the problem of adversarial attacks against NMT models and propose an optimization problem to build adversarial attacks. We describe our attack algorithm in Section 4. In Section 5, we discuss the experimental results and evaluate our algorithm against different transformer models and translation tasks; moreover, we evaluate our attack in black-box settings and show that TransFool has very good transfer properties. Finally, the paper is concluded in Section 6.

2 RELATED WORK

Machine translation, an important task in NLP, is the task of automatically converting a sequence of words in a source language to a sequence of words in a target language (Bahdanau et al., 2015). By using DNN models, NMT systems are reaching exceptional performance, which has resulted in their usage in a wide variety of areas, especially in safety- and security-sensitive applications. However, any faulty output of an NMT model may result in irreparable incidents in real-world applications. Hence, we need to better understand the vulnerabilities of NMT models to perturbations of the input samples, in particular to adversarial examples, to ensure the security of applications and the robustness of such models. Adversarial attacks against NMT systems have been studied in recent years.
First, Belinkov & Bisk (2018) show that character-level NMT models are highly vulnerable to character manipulations, such as typos, in a black-box setting. Similarly, Ebrahimi et al. (2018a) investigate the robustness of character-level NMT models. They propose a white-box adversarial attack based on HotFlip (Ebrahimi et al., 2018b) and greedily change the important characters to decrease the translation quality (untargeted attack) or mute/push a word in the translation (targeted attack). However, character-level manipulations can be easily detected.

To circumvent this issue, many adversarial attacks against NMT models are rather based on word replacement. Cheng et al. (2019) propose a white-box attack where they first select random words of the input sentence and replace them with similar words. In particular, in order to limit the search space, they find some candidates with the help of a language model and choose the token that aligns best with the gradient of the adversarial loss to cause more damage to the translation. Michel et al. (2019) and Zhang et al. (2021) find important words in the sentence and replace them with a neighboring word in the embedding space to create adversarial examples. However, these methods use heuristic strategies, which may result in sub-optimal performance.

There are also other types of attacks against NMT models in the literature. In (Wallace et al., 2020), a new type of attack, the universal adversarial attack, is proposed, which consists of a single snippet of text that can be added to any input sentence to mislead the NMT model. However, the added phrase is meaningless and hence easily detectable. Cheng et al. (2020a) propose Seq2Sick, a targeted white-box attack against NMT models. They introduce an optimization problem and solve it by gradient projection. Their optimization problem contains an adversarial loss and a group lasso term to ensure that only a few words of the sentence are modified. Although they have a projection step to the nearest embedding vector, they use the NMT embeddings, which may not preserve semantic similarity.

Other attacks against NMT models with different threat models and purposes have also been investigated in the literature. Some papers focus on making NMT models robust to perturbations of the inputs (Cheng et al., 2018; 2020b; Tan et al., 2021). Other papers use adversarial attacks to enhance NMT models in certain aspects, such as word sense disambiguation (Emelin et al., 2020), robustness to subword segmentation (Park et al., 2020), and robustness of unsupervised NMT (Yu et al., 2021). In (Xu et al., 2021; Wang et al., 2021), data poisoning attacks against NMT models are studied. Another type of attack, whose purpose is to change multiple words while ensuring that the output of the NMT model remains unchanged, is explored in (Chaturvedi et al., 2019; 2021). Yet another approach is presented in (Cai et al., 2021), where the adversary uses hardware faults of systems to fool NMT models.

In summary, most of the existing adversarial attacks against NMT models are detectable, since they are based on character manipulation or use the NMT embedding space to find similar tokens. Also, heuristic word-replacement strategies are likely to have sub-optimal performance. Finally, none of these attacks study transferability to black-box settings. We introduce TransFool to craft effective and fluent adversarial sentences that are similar to the original ones.
3 OPTIMIZATION PROBLEM

In this section, we first present our new formulation for generating adversarial examples against NMT models, along with the different terms that form our optimization problem.

Adversarial Attack. Consider $\mathcal{X}$ to be the source language space and $\mathcal{Y}$ to be the target language space. The NMT model $f: \mathcal{X} \rightarrow \mathcal{Y}$ generally has an encoder-decoder structure (Bahdanau et al., 2015; Vaswani et al., 2017) and aims to maximize the translation probability $p(\mathbf{y}_{ref}|\mathbf{x})$, where $\mathbf{x} \in \mathcal{X}$ is the input sentence in the source language and $\mathbf{y}_{ref} \in \mathcal{Y}$ is the ground-truth translation in the target language. To process textual data, each sentence is decomposed into a sequence of tokens. Therefore, the input sentence $\mathbf{x} = x_1 x_2 \ldots x_k$ is split into a sequence of $k$ tokens, where $x_i$ is a token from the vocabulary set $\mathcal{V}_{\mathcal{X}}$ of the NMT model, which contains all the tokens of the source language. For each token in the translated sentence $\mathbf{y}_{ref} = y_{ref,1}, \ldots, y_{ref,l}$, the NMT model generates a probability vector over the target language vocabulary set $\mathcal{V}_{\mathcal{Y}}$ by applying a softmax function to the decoder output.

The adversary is looking for an adversarial sentence $\mathbf{x}'$, tokenized into a sequence of $k$ tokens $\mathbf{x}' = x'_1 x'_2 \ldots x'_k$ in the source language, that fools the target NMT model, i.e., the translation of the adversarial example $f(\mathbf{x}')$ is far from the true translation. However, the adversarial example $\mathbf{x}'$ and the original sentence $\mathbf{x}$ should be imperceptibly close, so that the ground-truth translation of the adversarial example stays similar to $\mathbf{y}_{ref}$.

As is common in NMT models (Vaswani et al., 2017; Junczys-Dowmunt et al., 2018; Tang et al., 2020), to feed the discrete sequence of tokens into the NMT model, each token is converted to a continuous vector, known as an embedding vector, using a lookup table. In particular, let $\text{emb}(\cdot)$ be the embedding function that maps the input token $x_i$ to the continuous embedding vector $\text{emb}(x_i) = \mathbf{e}_i \in \mathbb{R}^m$, where $m$ is the embedding dimension of the target NMT model. Therefore, the input of the NMT model is a sequence of embedding vectors representing the tokens of the input sentence, i.e., $\mathbf{e}_{\mathbf{x}} = [\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_k] \in \mathbb{R}^{k \times m}$. In the same manner, $\mathbf{e}_{\mathbf{x}'} = [\mathbf{e}'_1, \mathbf{e}'_2, \ldots, \mathbf{e}'_k] \in \mathbb{R}^{k \times m}$ is defined for the adversarial example.

To generate an adversarial example for a given input sentence, we introduce an optimization problem with respect to the embedding vectors of the adversarial sentence $\mathbf{e}_{\mathbf{x}'}$. Our optimization problem is composed of multiple terms: an adversarial loss, a similarity constraint, and the loss of a language model. The adversarial loss causes the target NMT model to generate a faulty translation, while the language model loss and the similarity constraint impose the generated adversarial example to be a fluent sentence and semantically similar to the original sentence, respectively. The proposed optimization problem, which recovers the adversarial example $\mathbf{x}'$ from its embedding representation $\mathbf{e}_{\mathbf{x}'}$ by using a lookup table, is defined as follows:

$$\mathbf{x}' \leftarrow \underset{\mathbf{e}'_i \in \mathcal{E}_{\mathcal{V}_{\mathcal{X}}}}{\arg\min} \; [\mathcal{L}_{Adv} + \alpha \mathcal{L}_{Sim} + \beta \mathcal{L}_{LM}], \qquad (1)$$

where $\alpha$ and $\beta$ are hyperparameters that control the relative importance of each term. Moreover, we call the continuous space of the embedding representations the embedding space, denoted by $\mathcal{E}$, and we denote the discrete subspace of $\mathcal{E}$ containing the embedding representation of every token in the source language vocabulary set by $\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}$. We now discuss the different terms of the optimization function in detail.
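Before detailing each term, the continuous relaxation that makes problem (1) tractable can be sketched as follows. This is a minimal PyTorch sketch under assumed shapes; `attack_step` and the `losses` dictionary are hypothetical names, with the three loss terms treated as placeholders defined in the rest of this section, while the default coefficients and the Adam step size follow the values reported in Section 5.

import torch

# e_x: (k, m) NMT embeddings of the clean sentence (assumed given).
# We optimize a continuous copy of them; Algorithm 1 (Section 4)
# later projects the result back onto valid token embeddings.
def attack_step(e_adv, losses, optimizer, alpha=20.0, beta=1.8):
    """One descent step on L_Adv + alpha * L_Sim + beta * L_LM of Eq. (1).
    `losses` maps names to differentiable terms evaluated at e_adv."""
    total = (losses["adv"](e_adv)
             + alpha * losses["sim"](e_adv)
             + beta * losses["lm"](e_adv))
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()

# e_adv = e_x.clone().detach().requires_grad_(True)
# opt = torch.optim.Adam([e_adv], lr=0.016)   # step size gamma from Section 5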
Adversarial Loss. In order to create an adversarial example whose translation is far away from the reference translation $\mathbf{y}_{ref}$, we aim to maximize the training loss of the target NMT model. Since NMT models are trained to generate the next token of the translation given the translation up until that token, we look for the adversarial example that maximizes the probability of a wrong translation (i.e., minimizes the probability of the correct translation) for the $i$-th token, given that the NMT model has produced the correct translation up to step $(i-1)$:

$$\mathcal{L}_{Adv} = \frac{1}{l} \sum_{i=1}^{l} \log(p_f(y_{ref,i}|\mathbf{e}_{\mathbf{x}'}, \{y_{ref,1}, \ldots, y_{ref,(i-1)}\})), \qquad (2)$$

where $p_f(y_{ref,i}|\mathbf{e}_{\mathbf{x}'}, \{y_{ref,1}, \ldots, y_{ref,(i-1)}\})$ is the probability that the NMT model assigns to the correct translated token $y_{ref,i}$; its negative logarithm is the cross entropy between the predicted token distribution and the delta distribution on $y_{ref,i}$ (which is one for the correct token and zero otherwise). By minimizing $\log(p_f(\cdot))$, normalized by the sentence length $l$, we force the output probability vector of the NMT model to differ from the delta distribution on the token $y_{ref,i}$, which may cause the predicted translation to be wrong.

Similarity Constraint. To ensure that the generated adversarial example is similar to the original sentence, we need to add a similarity constraint to our optimization problem. It has been shown that the embedding representation of a language model captures the semantics of the tokens (Tenney et al., 2019; Shavarani & Sarkar, 2021). Suppose that the LM embedding representation of the original sentence (which may differ from the NMT embedding representation $\mathbf{e}_{\mathbf{x}}$) is $\mathbf{v}_{\mathbf{x}} = [\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k] \in \mathbb{R}^{k \times n}$, where $n$ is the embedding dimension of the language model. Likewise, let $\mathbf{v}_{\mathbf{x}'}$ denote the sequence of LM embedding vectors for the tokens of the adversarial example. We can define the distance between the $i$-th tokens of the original and adversarial sentences by computing the cosine distance between their corresponding LM embedding vectors:

$$\forall i \in \{1, \ldots, k\}: \quad r_i = 1 - \frac{\mathbf{v}_i^\top \mathbf{v}'_i}{\|\mathbf{v}_i\|_2 \cdot \|\mathbf{v}'_i\|_2}. \qquad (3)$$

The cosine distance is zero if the two tokens are the same, and it takes larger values for two unrelated tokens. We want the adversarial sentence to differ from the original sentence in only a few tokens. Therefore, the cosine distance should be zero for most of the token pairs, which makes the cosine distance vector $[r_1, r_2, \ldots, r_k]$ sparse. To promote the sparsity of the cosine distance vector, instead of the $\ell_0$ norm, which is not differentiable, we define the similarity constraint as the $\ell_1$-norm relaxation of the cosine distance vector, normalized by the length of the sentence:

$$\mathcal{L}_{Sim} = \frac{1}{k} \sum_{i=1}^{k} \left( 1 - \frac{\mathbf{v}_i^\top \mathbf{v}'_i}{\|\mathbf{v}_i\|_2 \cdot \|\mathbf{v}'_i\|_2} \right). \qquad (4)$$

Language Model Loss. Causal language models are trained to maximize the probability of a token given the previous tokens. Hence, we can use the loss of a CLM, i.e., the negative log-probability of the sequence, as a rough and differentiable measure of the fluency of the generated adversarial sentence. The loss of the CLM, normalized by the sentence length, is:

$$\mathcal{L}_{LM} = -\frac{1}{k} \sum_{i=1}^{k} \log(p_g(\mathbf{v}'_i|\mathbf{v}'_1, \ldots, \mathbf{v}'_{(i-1)})), \qquad (5)$$

where $g$ is a CLM and $p_g(\mathbf{v}'_i|\mathbf{v}'_1, \ldots, \mathbf{v}'_{(i-1)})$ is the probability that the language model, given the LM embeddings of the preceding tokens, assigns to the $i$-th token of the adversarial example.
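The three terms can be written down almost directly from Eqs. (2), (4), and (5). The sketch below assumes the logits and embeddings are already computed upstream (e.g., by feeding embeddings to the models); how the discrete targets for the CLM term are obtained during the continuous relaxation is glossed over here, and the function names are illustrative.

import torch
import torch.nn.functional as F

def adversarial_loss(nmt_logits, y_ref):
    """Eq. (2): mean log-probability of the reference tokens under teacher
    forcing; minimizing it pushes the model away from the reference.
    nmt_logits: (l, |V_Y|) decoder logits; y_ref: (l,) reference token ids."""
    log_probs = F.log_softmax(nmt_logits, dim=-1)
    return log_probs.gather(1, y_ref.unsqueeze(1)).mean()

def similarity_loss(v_clean, v_adv):
    """Eq. (4): mean cosine distance between the LM embeddings of the clean
    and adversarial tokens (an l1 relaxation promoting few edits).
    v_clean, v_adv: (k, n) LM embedding sequences."""
    return (1.0 - F.cosine_similarity(v_clean, v_adv, dim=-1)).mean()

def lm_loss(lm_logits, adv_ids):
    """Eq. (5): causal LM negative log-likelihood of the adversarial tokens,
    a differentiable proxy for fluency. lm_logits: (k, |V_X|) next-token
    logits aligned with adv_ids: (k,) token ids."""
    return F.cross_entropy(lm_logits, adv_ids)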
To generate adversarial examples against a target NMT model, we propose to solve the optimization problem (1), which contains an adversarial loss term, a similarity constraint, and a CLM loss.

4 TRANSFOOL ATTACK ALGORITHM

We now introduce our algorithm for generating adversarial examples against NMT models. The block diagram of our proposed attack is presented in Figure 1. We are looking for an adversarial example with tokens in the vocabulary set $\mathcal{V}_{\mathcal{X}}$ and the corresponding embedding vectors in the subspace $\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}$; hence, the optimization problem (1) is discrete. The high-level idea of our algorithm is to use gradient projection to solve (1) in the discrete subspace $\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}$.

The objective function of (1) is a function of the NMT and LM embedding representations of the adversarial example, $\mathbf{e}_{\mathbf{x}'}$ and $\mathbf{v}_{\mathbf{x}'}$, respectively. Since we aim to minimize the optimization problem with respect to $\mathbf{e}_{\mathbf{x}'}$, we need a transformation between the embedding space of the language model and that of the target NMT model. To this aim, as depicted in Figure 1, we propose to replace the embedding layer of a pre-trained language model with a Fully Connected (FC) layer, which receives the embedding vectors of the NMT model as its input. Then, we train the language model and the FC layer simultaneously with the causal language modeling objective. Therefore, we can compute the LM embedding vectors as a function of the NMT embedding vectors: $\mathbf{v}_i = FC(\mathbf{e}_i)$, where $FC(\cdot)$ is the trained FC layer mapping $\mathbb{R}^m$ to $\mathbb{R}^n$.
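The embedding bridge described above can be sketched as follows. Note that the paper trains a GPT-2-style CLM from scratch on WikiText-103 together with the FC layer (Section 5); purely for brevity, this sketch loads the pretrained gpt2 checkpoint, uses made-up embedding dimensions, and relies on HuggingFace's inputs_embeds argument to bypass the LM's own embedding layer.

import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel

m, n = 512, 768                      # NMT / GPT-2 embedding dimensions (assumed)
fc = nn.Linear(m, n)                 # the FC layer replacing the LM embedding layer
lm = GPT2LMHeadModel.from_pretrained("gpt2")

def lm_embeddings(nmt_embeds):
    """v_i = FC(e_i): LM-space representation of the NMT embeddings."""
    return fc(nmt_embeds)

def clm_training_step(nmt_embeds, token_ids, optimizer):
    """Train the FC layer and the LM jointly with the causal LM objective.
    nmt_embeds: (k, m) NMT embeddings of a sentence; token_ids: (k,)."""
    out = lm(inputs_embeds=lm_embeddings(nmt_embeds).unsqueeze(0),
             labels=token_ids.unsqueeze(0))
    optimizer.zero_grad()
    out.loss.backward()              # gradients flow through both LM and FC
    optimizer.step()
    return out.loss.item()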
Algorithm 1: TransFool Adversarial Attack
Input: $f(\cdot)$: target NMT model; $\mathcal{V}_{\mathcal{X}}$: vocabulary set; $FC$: fully connected layer; $\mathbf{x}$: input sentence; $\mathbf{y}_{ref}$: ground-truth translation of $\mathbf{x}$; $\lambda$: BLEU score ratio; $\alpha, \beta$: hyperparameters; $K$: maximum number of iterations; $\gamma$: step size
Output: $\mathbf{x}'$: generated adversarial example
Initialization: $s \leftarrow$ empty set, $itr \leftarrow 0$, $thr \leftarrow \text{BLEU}(f(\mathbf{e}_{\mathbf{x}}), \mathbf{y}_{ref}) \times \lambda$, $\forall i \in \{1, \ldots, k\}: \mathbf{e}_{g,i}, \mathbf{e}_{p,i} \leftarrow \mathbf{e}_i$
while $itr < K$ do
    $itr \leftarrow itr + 1$
    Step 1: gradient descent in the continuous embedding space:
        $\mathbf{e}_g \leftarrow \mathbf{e}_g - \gamma \cdot \nabla_{\mathbf{e}_{\mathbf{x}'}}(\mathcal{L}_{Adv} + \alpha \mathcal{L}_{Sim} + \beta \mathcal{L}_{LM})$
        $\mathbf{v}_g \leftarrow FC(\mathbf{e}_g)$
    Step 2: projection to the discrete subspace $\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}$ and update if the sentence is new:
        for $i \in \{1, \ldots, k\}$ do
            $\mathbf{e}_{p,i} \leftarrow \arg\max_{\mathbf{e} \in \mathcal{E}_{\mathcal{V}_{\mathcal{X}}}} \frac{FC(\mathbf{e})^\top \mathbf{v}_{g,i}}{\|FC(\mathbf{e})\|_2 \cdot \|\mathbf{v}_{g,i}\|_2}$
        end for
        if $\mathbf{e}_p$ not in set $s$ then
            add $\mathbf{e}_p$ to set $s$; $\mathbf{e}_g \leftarrow \mathbf{e}_p$
            if $\text{BLEU}(f(\mathbf{e}_p), \mathbf{y}_{ref}) \leq thr$ then
                break (adversarial example is found)
            end if
        end if
end while
return $\mathbf{e}_{\mathbf{x}'} \leftarrow \mathbf{e}_p$

The pseudo-code of our attack can be found in Algorithm 1. In more detail, we first convert the discrete tokens of the sentence to the continuous embedding vectors of the target NMT model; then we use the FC layer to compute the LM embedding representations of the tokens. Afterwards, we consider the continuous relaxation of the optimization problem, i.e., we assume that the embedding vectors live in the continuous embedding space $\mathcal{E}$ instead of $\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}$. In each iteration of the algorithm, we first update the sequence of embedding vectors $\mathbf{e}_{\mathbf{x}'}$ in the opposite direction of the gradient (gradient descent); let $\mathbf{e}_{g,i}$ denote the output of the gradient descent step for the $i$-th token. We then project the resulting embedding vectors, which are not necessarily in $\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}$, to the nearest token in the vocabulary set $\mathcal{V}_{\mathcal{X}}$. Since distances in the LM embedding space reflect the relationships between tokens, we use the LM embedding representations with the cosine similarity metric in the projection step to find the most similar token in the vocabulary. We can apply the trained fully connected layer $FC$ to find the LM embedding representations: $\mathbf{v}_g = FC(\mathbf{e}_g)$. Hence, the projected NMT embedding vector $\mathbf{e}_{p,i}$ for the $i$-th token is:

$$\mathbf{e}_{p,i} = \underset{\mathbf{e} \in \mathcal{E}_{\mathcal{V}_{\mathcal{X}}}}{\arg\max} \; \frac{FC(\mathbf{e})^\top \mathbf{v}_{g,i}}{\|FC(\mathbf{e})\|_2 \cdot \|\mathbf{v}_{g,i}\|_2}. \qquad (6)$$

However, due to the discrete nature of the data, applying the projection step in every iteration may trap the algorithm in a loop of previously computed steps. To circumvent this issue, we only update the embedding vectors with the output of the projection step if the projected sentence has not been generated before. We perform the gradient descent and projection steps iteratively until a maximum number of iterations is reached, or until the translation quality of the adversarial example, relative to the original translation quality, falls below a threshold. To evaluate the translation quality, we use the BLEU score, a widely used metric in the literature:

$$\frac{\text{BLEU}(f(\mathbf{e}_{\mathbf{x}'}), \mathbf{y}_{ref})}{\text{BLEU}(f(\mathbf{e}_{\mathbf{x}}), \mathbf{y}_{ref})} \leq \lambda. \qquad (7)$$
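For concreteness, the projection of Eq. (6) and the stopping rule of Eq. (7) can be sketched as follows; `lm_vocab_embeds` would be the FC layer applied to the full NMT embedding table, so that the argmax ranges over $\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}$. This is a sketch under assumed shapes, not the released implementation.

import torch
import torch.nn.functional as F

def project_to_vocab(v_g, lm_vocab_embeds):
    """Eq. (6): map each updated LM-space vector to the vocabulary token
    whose LM embedding has maximal cosine similarity.
    v_g: (k, n) vectors after the gradient step (through FC);
    lm_vocab_embeds: (|V_X|, n), i.e., FC applied to every NMT vocab embedding."""
    v = F.normalize(v_g, dim=-1)
    vocab = F.normalize(lm_vocab_embeds, dim=-1)
    sims = v @ vocab.t()                 # (k, |V_X|) cosine similarities
    return sims.argmax(dim=-1)           # token ids of the projected sentence

def should_stop(bleu_adv, bleu_orig, lam=0.4):
    """Eq. (7): stop once translation quality drops below the ratio lambda."""
    return bleu_adv <= lam * bleu_orig

Normalizing both sides lets a single matrix product compute all cosine similarities at once, which keeps the per-iteration projection cheap even for large vocabularies.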
5 EXPERIMENTS

In this section, we first discuss our experimental setup; then we evaluate TransFool against different models and translation tasks, both in white-box and black-box settings.

5.1 EXPERIMENTAL SETUP

We conduct experiments on the English-French (En-Fr), English-German (En-De), and English-Chinese (En-Zh) translation tasks. We use the test set of WMT14 (Bojar et al., 2014) for the En-Fr and En-De tasks, and the test set of OPUS-100 (Zhang et al., 2020a) for the En-Zh task. Some statistics of these datasets are presented in Appendix A.

We evaluate TransFool against transformer-based NMT models. To verify that our attack is effective against various model architectures, we attack the HuggingFace implementation of the Marian NMT models (Junczys-Dowmunt et al., 2018) and the mBART50 multilingual NMT model (Tang et al., 2020). As explained in Section 4, the similarity constraint and the LM loss of the proposed optimization problem require an FC layer and a CLM. To this aim, for each NMT model, we train an FC layer and a CLM (with the GPT-2 structure (Radford et al., 2019)) on the WikiText-103 dataset. We note that the input of the FC layer is the target NMT embedding representation of the input sentence.

To find the minimizer of our optimization problem (1), we use the Adam optimizer (Kingma & Ba, 2014) with step size $\gamma = 0.016$, and we set the maximum number of iterations to 500. Our algorithm has three parameters: the coefficients $\alpha$ and $\beta$ in the optimization function (1) and the relative BLEU score ratio $\lambda$ in the stopping criterion (7). We set $\lambda = 0.4$, $\beta = 1.8$, and $\alpha = 20$. We chose these parameters experimentally, according to the ablation study available in Appendix B, in order to optimize the performance in terms of success rate, semantic similarity, and fluency.

We compare our attack with (Michel et al., 2019), a white-box untargeted attack against NMT models.1 We only consider one of their attacks, called kNN, which substitutes some words with their neighbors in the embedding space; their other attack swaps characters, which is too easy to detect. We also adapted Seq2Sick (Cheng et al., 2020a), a targeted attack against NMT models based on an optimization problem in the NMT embedding space, to our untargeted setting.

For evaluation, we report different performance metrics: (1) Attack Success Rate (ASR), which measures the rate of successful adversarial examples; similar to (Ebrahimi et al., 2018a), we define an adversarial example as successful if the BLEU score of its translation is less than half of the BLEU score of the original translation. (2) Relative decrease of translation quality, measured in terms of BLEU score2 and chrF (Popović, 2015); we denote these two metrics by RDBLEU and RDchrF, respectively, and we compute the relative decrease so that scores are comparable across different models and datasets (Michel et al., 2019). (3) Semantic Similarity (Sim.), computed between the original and adversarial sentences and commonly approximated by the Universal Sentence Encoder (Yang et al., 2020)3. (4) Perplexity score (Perp.), a measure of the fluency of the adversarial example computed with the perplexity of GPT-2 (large). (5) Token Error Rate (TER), which measures imperceptibility by computing the rate of tokens modified by the adversarial attack.

1 Code of (Cheng et al., 2019; 2020b), untargeted white-box attacks against NMTs, is not publicly available.
2 We use case-sensitive SacreBLEU (Post, 2018) on detokenized sentences.
3 We use the multilingual version since we are dealing with multiple languages.
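To make the evaluation protocol concrete, the sentence-level ASR and RDBLEU defined above can be computed as in the following sketch, which uses the SacreBLEU library mentioned in footnote 2; the function name and list-based interface are illustrative.

import sacrebleu

def attack_metrics(orig_translations, adv_translations, references):
    """Sentence-level ASR and mean RDBLEU; sentences with a zero original
    BLEU score are discarded, as in the evaluation protocol (footnote 4)."""
    successes, rd_bleu = [], []
    for hyp_o, hyp_a, ref in zip(orig_translations, adv_translations, references):
        b_o = sacrebleu.sentence_bleu(hyp_o, [ref]).score
        b_a = sacrebleu.sentence_bleu(hyp_a, [ref]).score
        if b_o == 0:
            continue
        successes.append(b_a < 0.5 * b_o)    # success: BLEU reduced to less than half
        rd_bleu.append((b_o - b_a) / b_o)    # relative decrease of BLEU
    asr = 100.0 * sum(successes) / len(successes)
    return asr, sum(rd_bleu) / len(rd_bleu)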
5.2 RESULTS OF THE WHITE-BOX ATTACK

We now evaluate TransFool in comparison to kNN and Seq2Sick against different NMT models. Table 1 shows the results in terms of the different evaluation metrics.4 Overall, our attack decreases the BLEU score of the target model to less than half of the original BLEU score for more than 60% of the sentences across all tasks and models (except for the En-Zh mBART50 model, where the ASR is 57.50%). Also, in all cases, the semantic similarity is above 0.83, which shows that our attack maintains a high level of semantic similarity with the clean sentences.

In comparison to the baselines, TransFool obtains a higher success rate against different model structures and translation tasks, and it reduces the translation quality more severely. Since the algorithm uses the gradients of the proposed optimization problem and is not based on token replacement, TransFool can highly degrade the translation quality. Furthermore, the perplexity score of the adversarial examples generated by TransFool is much lower than those of both baselines (except for the En-Fr Marian model, where it is slightly higher than that of Seq2Sick), which is due to the integration of the LM embeddings and the LM loss term in the optimization problem. Moreover, the token error rate of our attack is lower than that of both baselines, and the semantic similarity is preserved better by TransFool in almost all cases, since we use the LM embeddings instead of the NMT ones for the similarity constraint. While kNN can also maintain semantic similarity, Seq2Sick does not perform well on this criterion. We also computed similarity with BERTScore (Zhang et al., 2019) and BLEURT-20 (Sellam et al., 2020), which correlate highly with human judgments, in Appendix D; the results show that TransFool is better than both baselines at maintaining the semantics. Moreover, as presented in Appendix D.2, the successful attacks by the baselines, as opposed to TransFool, are not semantics-preserving or fluent sentences. Finally, the complete setup and results of our human evaluation are presented in Appendix H, which also shows the superiority of TransFool.

We also compare the runtime of TransFool with that of the two baselines. In each iteration of our proposed attack, we need to perform a back-propagation through the target NMT model and the language model to compute the gradients. Also, in some iterations (27 iterations per sentence on average), a forward pass is required to compute the output of the target NMT model to check the stopping criterion. For the Marian NMT (En-Fr) model, on a system equipped with an NVIDIA A100 GPU, it takes 26.45 seconds per sentence to generate adversarial examples with TransFool. On the same system, kNN needs 1.45 seconds and Seq2Sick needs 38.85 seconds; however, both produce less effective adversarial attacks.

Table 2 shows some adversarial examples against mBART50 (En-De). In comparison to the baselines, TransFool makes smaller changes to the sentence; the generated adversarial example is a correct English sentence, and it is similar to the original sentence. However, kNN and Seq2Sick generate adversarial sentences that are not necessarily natural or similar to the original sentences. More examples generated by TransFool, kNN, and Seq2Sick can be found in Appendix D.4, where we also provide some adversarial sentences generated without the LM embeddings in our algorithm in order to show the importance of this component. Indeed, TransFool outperforms both baselines in terms of success rate, and it generates more natural adversarial examples with a lower number of perturbations (TER) and higher semantic similarity with the clean samples in almost all cases. A complete study of the hyperparameters and of the effect of using LM embeddings instead of NMT embeddings for computing similarity is presented in Appendices B and C, respectively.

4 We discard the sentences whose original BLEU score is zero to prevent artificially improving the results. We should also note that all results are computed after re-tokenization of the adversarial example. Since we generate the adversarial example at the token level, there is a small chance that, when the generated adversarial example is converted to text, the re-tokenization does not produce the same set of tokens.

5.3 PERFORMANCE IN BLACK-BOX ATTACK SETTINGS

In practice, the adversary's access to the learning system may be limited. Hence, we propose to analyze the performance of TransFool in a black-box scenario. It has been shown that adversarial attacks often transfer to another model that has a different architecture and is even trained with different datasets (Szegedy et al., 2014). By utilizing this property of adversarial attacks, we extend TransFool to the black-box scenario. We assume complete access to one NMT model (the reference model), including its gradients, and we run the proposed gradient-based attack of Algorithm 1 on this model. However, for the stopping criterion of the algorithm, we query the black-box target NMT model to compute the BLEU score. We can also implement the black-box transfer attack in the case where the source languages of the reference and target models are the same, but their target languages are different.

Since Marian NMT is faster and lighter than mBART50, we use it as the reference model and evaluate the performance of the black-box attack against mBART50. We compare the performance of TransFool with WSLS (Zhang et al., 2021), a black-box untargeted attack against NMT models based on word replacement (the choice of the back-translation model used in WSLS is investigated in Appendix F). We also evaluate the performance of kNN and Seq2Sick in the black-box setting by attacking mBART50 with the adversarial examples generated against Marian NMT (in the white-box setting). The results are reported in Table 3.
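The black-box variant described above differs from Algorithm 1 only in where the stopping criterion is evaluated, as the following sketch illustrates; all callables here are assumed interfaces, not the released implementation.

def blackbox_transfool(sentence, reference_attack_step, query_target_bleu,
                       bleu_orig, lam=0.4, max_iters=500):
    """Black-box TransFool: gradients and projections come from the
    white-box reference model (e.g., Marian NMT); the unknown target
    model is only queried to check the stopping criterion of Eq. (7)."""
    candidate = sentence
    for _ in range(max_iters):
        # one gradient-descent + projection iteration on the reference model
        candidate = reference_attack_step(candidate)
        # the only queries to the black-box target model happen here
        if query_target_bleu(candidate) <= lam * bleu_orig:
            return candidate          # target model fooled
    return candidate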
We also report the performance when attacking Google Translate, some generated adversarial samples, and the similarity performance computed by BERTScore and BLEURT-20 in Appendix E. In all tasks, with a few queries to the target model, our black-box attack achieves better performance than the white-box attack against the target model (mBART50), but slightly worse performance than the white-box attack against the reference model (Marian NMT). In all cases, the success rate, token error rate, and perplexity of TransFool are better than those of all baselines (except for the En-Fr task, where the perplexity is slightly higher than that of Seq2Sick). The ability of TransFool and WSLS to maintain semantic similarity is comparable and better than that of the other baselines. However, WSLS has the highest token error rate, which makes the attack detectable. The effect of TransFool on the BLEU score is larger than that of the other methods, and its effect on the chrF metric comes after WSLS (except for the En-De task, where the RDchrF of TransFool is the best).

Regarding the complexity, TransFool requires only a few queries to the target model for translation, while WSLS queries the model more than a thousand times, which is costly and may not be feasible in practice. For the En-Fr task, on a system equipped with an NVIDIA A100 GPU, it takes 43.36 and 1904.98 seconds to generate adversarial examples with TransFool and WSLS, respectively, which shows that WSLS is very time-consuming.

We also analyze the transferability of the generated adversarial examples to a black-box NMT model with the same source language but a different target language. Since we need a dataset with the same set of sentences for different language pairs, we use the validation set of WMT14 for the En-Fr and En-De tasks. Table 4 shows the results for two cases: Marian NMT or mBART50 as the target model. We use Marian NMT as the reference model, with a different target language than that of the target model. In all settings, the generated adversarial examples are highly transferable to another NMT model with a different target language (i.e., they have a high attack success rate and large semantic similarity). The high transferability of TransFool shows that it captures common failure modes across different NMT models, which can be dangerous in real-world applications.

6 CONCLUSION

In this paper, we proposed TransFool, a white-box adversarial attack against NMT models, by introducing a new optimization problem solved by an iterative method based on gradient projection. We utilized the embedding representation of a language model to impose a similarity constraint on the adversarial examples. Moreover, by considering the loss of a language model in our optimization problem, the generated adversarial examples are more fluent. Extensive automatic and human evaluations show that TransFool is highly effective across different translation tasks and NMT models. Our attack is also transferable to black-box settings with different structures and even different target languages. In both white-box and black-box scenarios, TransFool obtains improvements over the baselines in terms of success rate, semantic similarity, and fluency. It is important to analyze adversarial attacks against NMT models such as TransFool to find the vulnerabilities of NMT models, measure their robustness, and eventually build more robust NMT models.
Ethics Statement
We introduced TransFool, an adversarial attack against NMT models, with the motivation of revealing the vulnerabilities of NMT models and paving the way for designing stronger defenses and building robust NMT models in real-life scenarios. While it remains a possibility that a threat actor may misuse our attack, we do not condone using our method with the intent of attacking a real NMT system.

Reproducibility Statement
The source code will be publicly available as soon as possible to help reproduce our results. Moreover, Appendix G contains the license information and more details of the assets (datasets, code, and models).

Supplementary Material
TransFool: An Adversarial Attack against Neural Machine Translation Models

ABSTRACT
In this supplementary material, we first provide some statistics of the evaluation datasets in Section A. The ablation study of the hyperparameters of TransFool is presented in Section B. We investigate the effect of the LM embedding representation on TransFool and kNN in Section C. More results of the white-box attack are reported in Section D: the results of other similarity metrics (Section D.1), the performance over successful attacks (Section D.2), and some generated adversarial examples (Section D.4). Section E provides more experiments on the black-box attack: the performance when attacking Google Translate (Section E.1), the results of other similarity metrics (Section E.2), and some generated adversarial examples (Section E.3). We discuss the effect of the back-translation model choice on WSLS in Section F. Finally, the license information and more details of the assets (datasets, code, and models) are provided in Section G.

A SOME STATISTICS OF THE DATASETS

Some statistics of the evaluation datasets, i.e., OPUS-100 (En-Zh) and WMT14 (En-Fr and En-De), including the number of samples, the average length of the sentences, and the translation quality of Marian NMT and mBART50, are reported in Table 5.

B ABLATION STUDY

In this section, we analyze the effect of different hyperparameters (including the coefficients $\alpha$ and $\beta$ in our optimization problem (1), the step size $\gamma$ of the gradient descent, and the relative BLEU score ratio $\lambda$ in the stopping criterion (7)) on the white-box attack performance in terms of success rate, semantic similarity, and perplexity score. In all the experiments, we consider the English to French Marian NMT model and evaluate over the first 1000 sentences of the test set of WMT14. The default values for the hyperparameters, except for the one that varies in each experiment, are: $\alpha = 20$, $\beta = 1.8$, $\gamma = 0.016$, and $\lambda = 0.4$.

Effect of the similarity coefficient $\alpha$. This hyperparameter determines the strength of the similarity term in the optimization problem (1). Figure 2a shows the effect of $\alpha$ on the performance of our attack. By increasing the similarity coefficient of the proposed optimization problem, we force our algorithm to find adversarial sentences that are more similar to the original sentence. Therefore, as shown in Figure 2a, larger values of $\alpha$ result in higher semantic similarity. However, in this case, it is harder to fool the NMT model, i.e., we observe a lower attack success rate, RDBLEU, and RDchrF. Moreover, since the generated adversarial examples are more similar to the original sentence, they are more natural and their perplexity score is lower.

Effect of the language model loss coefficient $\beta$.
We analyze the impact of the hyperparameter $\beta$, which controls the importance of the language model loss term in the proposed optimization problem, in Figure 2b.

[Figure 2: Effect of different hyperparameters on the performance of TransFool. Panels (a)-(d) plot the attack success rate, semantic similarity, and perplexity score against the similarity coefficient $\alpha$, the LM loss coefficient $\beta$, the step size $\gamma$, and the BLEU score ratio $\lambda$; panels (e)-(h) plot RDBLEU and RDchrF against the same hyperparameters.]

By increasing this coefficient, we weaken the effect of the similarity term, i.e., the generated adversarial examples are less similar to the original sentence. As a result, the success rate and the effect on translation quality, i.e., RDBLEU and RDchrF, increase.

Effect of the step size $\gamma$. The step size of the gradient descent step can also impact the performance of our attack, as investigated in Figure 2c. Increasing the step size results in larger movements in the embedding space in each iteration of the algorithm. Hence, the generated adversarial examples are more aggressive, which results in lower semantic similarity and higher perplexity scores. However, we can find adversarial examples more easily and achieve a higher attack success rate, RDBLEU, and RDchrF.

Effect of the BLEU score ratio $\lambda$. This hyperparameter determines the stopping criterion of our iterative algorithm. Figure 2d studies its effect on the performance of our attack. As this figure shows, a higher BLEU score ratio causes the algorithm to stop in earlier iterations. Therefore, the changes applied to the sentence are less aggressive, and hence we achieve higher semantic similarity and a lower perplexity score. However, the attack success rate, RDBLEU, and RDchrF decrease, since we make fewer changes to the sentences.

C EFFECT OF THE LM EMBEDDING REPRESENTATION

Table 6 shows the results of TransFool and kNN when we use the LM embeddings or the NMT embeddings for measuring the similarity between two tokens.5 The LM embeddings result in lower perplexity and higher semantic similarity for both methods, which demonstrates the importance of this component in generating meaning-preserving, fluent adversarial examples.

5 In order to have a fair comparison, we fine-tuned the hyperparameters of TransFool, in the case where we do not use LM embeddings, to obtain a similar attack success rate.
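The comparison in Table 6 can be illustrated with a simple nearest-neighbor probe: retrieving the most cosine-similar tokens to a given token under the NMT embedding table versus the LM-space table (the FC layer applied to the same embeddings). The sketch below is illustrative; the embedding matrices are assumed inputs.

import torch
import torch.nn.functional as F

def nearest_tokens(query_id, embed_matrix, k=5):
    """Top-k cosine-similar tokens to `query_id` under a given embedding
    matrix; comparing the NMT matrix with the LM-space matrix illustrates
    why the LM space yields more semantically related substitutions."""
    e = F.normalize(embed_matrix, dim=-1)
    sims = e[query_id] @ e.t()            # cosine similarity to every token
    return sims.topk(k + 1).indices[1:]   # drop the query token itself

# nmt_embeds: (|V|, m) NMT embedding table; lm_embeds = FC(nmt_embeds)
# nearest_tokens(tok_id, nmt_embeds) vs. nearest_tokens(tok_id, lm_embeds)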
D MORE RESULTS ON THE WHITE-BOX ATTACK

D.1 SEMANTIC SIMILARITY COMPUTED BY OTHER METRICS

To better assess the ability of adversarial attacks to maintain semantic similarity, we can compute the similarity between the original and adversarial sentences using other metrics, such as BERTScore (Zhang et al., 2019) and BLEURT-20 (Sellam et al., 2020). It is shown in (Zhang et al., 2019) that BERTScore correlates well with human judgments, and BLEURT-20 is also shown to correlate better with human judgment than traditional measures (Freitag et al., 2021). The results are reported in Table 7. These results indicate that TransFool is indeed more capable of preserving the semantics of the input sentence. In the two cases where kNN has better similarity according to the Universal Sentence Encoder (USE) (Yang et al., 2020), the performance of TransFool is better in terms of BERTScore and BLEURT-20.

D.2 PERFORMANCE OVER SUCCESSFUL ATTACKS

The evaluation metrics of the successful adversarial examples that strongly affect the translation quality are also important, as they show the capability of the adversarial attack. Hence, we evaluate TransFool, kNN, and Seq2Sick only over the successful adversarial examples.6 The results for the white-box setting are presented in Table 8. By comparing this table with Table 1, which shows the results on the whole dataset, we can see that the performance of TransFool is consistent between successful and unsuccessful attacks. Moreover, the successful adversarial examples generated by TransFool are still semantically similar to the original sentences, and their perplexity score is low. However, the successful adversarial examples generated by Seq2Sick and kNN do not preserve the semantic similarity and are not fluent sentences; hence, they are not valid adversarial sentences.

D.3 TRADE-OFF BETWEEN SUCCESS RATE AND SIMILARITY/FLUENCY

The results of our ablation study in Appendix B show that there is a trade-off between the quality of the adversarial examples, in terms of semantics preservation and fluency, and the attack success rate. As studied in (Morris et al., 2020), we can filter out low-quality adversarial examples based on hard constraints on the semantic similarity and on the number of added grammatical errors caused by the adversarial perturbations. We can analyze the trade-off between success rate and similarity/fluency by setting different thresholds for filtering adversarial examples. If we evaluate the similarity with the sentence encoder suggested in (Morris et al., 2020), the success rate under different similarity thresholds for Marian NMT (En-Fr) is depicted in Figure 3b. By considering only the adversarial examples with a similarity higher than a threshold, the success rate decreases as the threshold increases, while the quality of the adversarial examples increases. We can perform the same analysis for fluency: as suggested in (Morris et al., 2020), we count the grammatical errors with LanguageTool (Naber et al., 2003) for the original sentences and the adversarial examples, and Figure 3a depicts the success rate under different thresholds on the number of added grammatical errors caused by the adversarial perturbations (a sketch of this filtering procedure is given at the end of this appendix). These analyses show that, with tighter constraints, we can generate better adversarial examples while the success rate decreases. All in all, according to these results, TransFool outperforms the baselines for all thresholds on similarity and grammatical errors.

D.4 MORE ADVERSARIAL EXAMPLES

In this section, we present more adversarial examples generated by TransFool, kNN, and Seq2Sick. In order to show the effect of using LM embeddings on the performance of TransFool, we also include the adversarial examples generated against the English to French Marian NMT model when we do not use the LM embeddings.

6 As defined in Section 5, an adversarial example is successful if the BLEU score of its translation is less than half of the BLEU score of the original translation.
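As referenced in Section D.3, the grammar-based filtering can be sketched with the Python wrapper for LanguageTool; the function names and the example threshold are illustrative, and the `pairs`/`successes` inputs are hypothetical lists of (original, adversarial) sentences and per-sentence success flags.

import language_tool_python

tool = language_tool_python.LanguageTool("en-US")

def added_grammar_errors(original, adversarial):
    """Number of grammatical errors introduced by the perturbation,
    following the constraint evaluation of (Morris et al., 2020)."""
    return max(0, len(tool.check(adversarial)) - len(tool.check(original)))

def filtered_success_rate(pairs, successes, max_added_errors=2):
    """Success rate counting only adversarial examples that add at most
    `max_added_errors` grammatical errors (one point of Figure 3a)."""
    kept = [s and added_grammar_errors(o, a) <= max_added_errors
            for (o, a), s in zip(pairs, successes)]
    return 100.0 * sum(kept) / len(successes)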
Moreover, TransFool makes changes to the translation that are not the direct result of the modified tokens of the adversarial sentence.

7 We should note that, since we do not have a tokenizer, we compute the Word Error Rate (WER) instead of the Token Error Rate (TER).
8 The results of kNN and Seq2Sick are not reported since they are transfer attacks, and their performance is already reported in Table 7.

F EFFECT OF THE BACK-TRANSLATION MODEL CHOICE ON WSLS PERFORMANCE

WSLS uses a back-translation model for crafting adversarial examples. In (Zhang et al., 2021), the authors investigate the En-De task and use the winner model of the WMT19 De-En sub-track (Ng et al., 2019) as the back-translation model. However, they do not evaluate their method on the En-Fr and En-Zh tasks. To evaluate the performance of WSLS in Table 3, we have used pre-trained Marian NMT models as the back-translation models for all three tasks. In order to show the effect of our choice of back-translation model, we compare in Table 23 the performance of WSLS on the En-De task when we use either Marian NMT or (Ng et al., 2019) as the back-translation model. As this table shows, WSLS with Marian NMT as the back-translation model results in even higher semantic similarity and a lower perplexity score; on the other hand, WSLS with (Ng et al., 2019) as the back-translation model has a slightly higher success rate. These results show that our choice of back-translation model does not strongly affect the performance of WSLS.

G LICENSE INFORMATION AND DETAILS

In this Section, we provide some details about the datasets, codes, and models used in this paper. We should note that we used the models and datasets that are available in the HuggingFace transformers (Wolf et al., 2020) and datasets (Lhoest et al., 2021) libraries.9 They are licensed under the Apache License 2.0. Moreover, we used PyTorch (Paszke et al., 2019) for all experiments, which is released under the BSD license.10

G.1 DATASETS

WMT14 WMT14 was introduced in the Ninth Workshop on Statistical Machine Translation for four tasks. We used the En-De and En-Fr news translation tasks. There is no license available for this dataset.

OPUS-100 OPUS-100 is a multilingual translation corpus for 100 languages, which is randomly sampled from the OPUS collection (Tiedemann, 2012). There is no license available for this dataset.

G.2 MODELS

Marian NMT Marian is a Neural Machine Translation framework mainly developed by the Microsoft Translator team, released under the MIT License.11 This model uses a beam size of 4.

mBART50 mBART50 is a multilingual machine translation model for 50 languages, introduced by Facebook. This model is published in the Fairseq library, which is released under the MIT License.12 This model uses a beam size of 5.

9 These two libraries are available at this GitHub repository: https://github.com/huggingface.
10 https://github.com/pytorch/pytorch/blob/master/LICENSE
11 https://github.com/marian-nmt/marian/blob/master/LICENSE.md
12 https://github.com/facebookresearch/fairseq/blob/main/LICENSE

G.3 CODES

kNN In order to compare our method with kNN (Michel et al., 2019), we used the code provided by the authors, which is released under the BSD 3-Clause "New" or "Revised" License.13

Seq2Sick To compare our method with Seq2Sick (Cheng et al., 2020a), we used the code published by the authors.14 There is no license available for their code.

13 The source code is available at https://github.com/pmichel31415/translate/tree/paul/pytorch_translate/research/adversarial/experiments and the license is available at https://github.com/pmichel31415/translate/blob/paul/LICENSE
14 The source code is available at https://github.com/cmhcbb/Seq2Sick.
WSLS We implemented and evaluated WSLS (Zhang et al., 2021) using the source code published by the authors.15 There is no license available for this GitHub repository.

15 https://github.com/JHL-HUST/AdvNMT-WSLS/tree/79945881f75d92ae44e9ebc10500d8590c09bb13

H HUMAN EVALUATION

We conduct a preliminary human evaluation campaign of the TransFool, kNN, and Seq2Sick attacks on Marian NMT (En-Fr) in the white-box setting. We randomly choose 90 sentences from the test set of the WMT14 (En-Fr) dataset, together with the adversarial samples and their translations by the NMT model. We split the 90 sentences into three different surveys to obtain a manageable size for each annotator, and we recruited two annotators for each survey. For the English surveys, we ensure that the annotators are highly proficient English speakers; similarly, for the French survey, we ensure that the annotators are highly proficient in French. Before starting the rating task, we provided the annotators with detailed guidelines similar to (Cer et al., 2017; Michel et al., 2019). The task is to rate the sentences for each criterion on a continuous scale (0-100), inspired by the WMT18 practice (Ma et al., 2018) and Direct Assessment (Graham et al., 2013; 2017). For each sentence, we evaluate three aspects in three different surveys:

• Fluency: We show the three adversarial sentences and the original sentence on the same page (in random order). We ask the annotators how much they agree with the statement "The sentence is fluent." for each sentence.
• Semantic preservation: We show the original sentence on top and the three adversarial sentences afterwards (in random order). We ask the annotators how much they agree with the statement "The sentence is similar to the reference text." for each sentence.
• Translation quality: Inspired by monolingual direct assessment (Ma et al., 2018; Graham et al., 2013; 2017), we evaluate the translation quality by showing the reference translation on top and the translations of the three adversarial sentences afterwards (in random order). We ask the annotators how much they agree with the statement "The sentence is similar to the reference text." for each translation.

We calculate 95% confidence intervals by using 15K bootstrap replications. The results are depicted in Figure 4. These results demonstrate that the adversarial examples generated by TransFool are more semantic-preserving and fluent than those of both baselines. According to the guide provided to the annotators for semantic similarity, a score of 67.8 indicates that the two sentences are roughly equivalent, but some details may differ. Moreover, a fluency score of 66.4 demonstrates that, although the adversarial examples generated by TransFool are more fluent than those of the baselines, there is still room to improve the performance in this regard.

We follow the direct assessment strategy to measure the effectiveness of the adversarial attacks on translation quality. According to (Ma et al., 2018), since a sufficient level of agreement on translation quality is difficult to achieve with human evaluation, direct assessment simplifies the task to a simpler monolingual assessment instead of a bilingual one. The similarity of the translations of the adversarial sentences to the reference translation is shown in Figure 4c. The translation similarity of Seq2Sick is worse than that of the other attacks; however, its similarity in the source language is also worse.
Therefore, we compute the decrease in similarity (between the original and adversarial sentences) from the source language to the target language. The results in Figure 4d show that all attacks affect the translation quality, and that the effect of TransFool is more pronounced than that of both baselines.

Finally, we calculate the Inter-Annotator Agreement (IAA). There are two human judgments for each sentence, and we average both scores to compute the final score for each sentence. To ensure that the two annotators agree, we only consider sentences for which the difference between the two corresponding scores is less than 30. We compute the IAA in terms of the Pearson correlation coefficient, instead of the commonly used Cohen's Kappa, since the scores are on a continuous scale. The results are presented in Table 24. Overall, we conclude that we achieve a reasonable inter-annotator agreement for all sentence types and evaluation metrics.
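As an illustration of this IAA computation, here is a minimal Python sketch (it requires Python 3.10+ for statistics.correlation, and the score arrays are hypothetical, not our actual annotation data):

```python
# Sketch: inter-annotator agreement as the Pearson correlation of
# sentence-level scores, after filtering out strongly disagreeing pairs.
from statistics import correlation  # Pearson correlation, Python 3.10+

ann1 = [72.0, 55.5, 90.0, 34.0, 61.0]  # hypothetical scores, annotator 1
ann2 = [68.0, 60.0, 86.5, 80.0, 58.5]  # hypothetical scores, annotator 2

# Keep only sentences where the two scores differ by less than 30 points.
pairs = [(a, b) for a, b in zip(ann1, ann2) if abs(a - b) < 30]
x, y = zip(*pairs)
print(f"IAA (Pearson r) over {len(pairs)} sentences: {correlation(x, y):.3f}")
```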
1. What is the main contribution of the paper, and what are the core ideas behind TransFool? 2. What are the strengths of the proposed approach, particularly in terms of the design of the loss function and the empirical study? 3. What are the weaknesses of the paper, especially regarding the defense mechanism and the transferability of the approach to targeted attacks? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This work proposes TransFool for generating non-targeted adversarial examples against neural machine translation models. One core idea is to utilize an autoregressive language model (GPT-2) to add a language model loss term, which helps generate fluent adversarial examples. They also add a similarity loss, which constrains the distances between the embeddings of the original input sentences and those of the adversarial examples. They perform a comprehensive study to evaluate their approach on translating from English to different target languages, and show that TransFool achieves higher attack success rates compared to baselines, while the generated adversarial examples better preserve the semantic meaning and look more natural. Meanwhile, they show that the generated adversarial examples transfer to other models in the black-box setting, including Google Translate, and can also transfer to different target languages.

Strengths And Weaknesses
Strengths:
• Each term in the loss function of TransFool is well-motivated and properly designed.
• The empirical study is pretty thorough. To me, the most interesting finding is that the generated adversarial examples can transfer to different target languages.
Weaknesses:
• The study of defenses is lacking. For example, it would be helpful to see how the attack fares against existing defenses for adversarial examples in language models. Also, it would be interesting to try adversarial training with TransFool adversarial examples.
• This is more of a question than a weakness, but have you tried TransFool for targeted attacks, and how does that work?
• It would be good to investigate the transferability between different target languages further. For example, have you done these experiments on Google Translate?

Clarity, Quality, Novelty And Reproducibility
The writing is clear, and the approach is well-designed. TransFool is novel as an adversarial attack algorithm for machine translation. The individual terms are not that new, as there are prior works leveraging language models to improve the fluency of adversarial examples in the NLP domain, but this work provides a more comprehensive study for machine translation problems. The authors plan to release their code for reproducibility.
ICLR
Title TransFool: An Adversarial Attack against Neural Machine Translation Models Abstract Deep neural networks have been shown to be vulnerable to small perturbations of their inputs known as adversarial attacks. In this paper, we consider the particular task of Neural Machine Translation (NMT), where security is often critical. We investigate the vulnerability of NMT models to adversarial attacks and propose a new attack algorithm called TransFool. It builds on a multi-term optimization problem and a gradient projection step to compute adversarial examples that fool NMT models. By integrating the embedding representation of a language model in the proposed attack, we generate fluent adversarial examples in the source language that maintain a high level of semantic similarity with the clean samples and render the attack largely undetectable. Experimental results demonstrate that, for multiple translation tasks and different NMT architectures, our white-box attack can severely degrade the translation quality for more than 60% of the sentences while the semantic similarity between the original sentence and the adversarial example stays very high. Moreover, we show that the proposed attack is transferable to unknown target models and can fool those quite easily. Finally, based on automatic and human evaluations, our method leads to improvement in terms of success rate, semantic similarity, and fluency compared to the existing attacks both in white-box and black-box settings. Hence, TransFool permits to better characterize the vulnerability of NMT systems and outlines the necessity to design strong defense mechanisms and more robust NMT systems for real-life applications. 1 INTRODUCTION The impressive performance of Deep Neural Networks (DNNs) in different areas such as computer vision (He et al., 2016) and Natural Language Processing (NLP) (Vaswani et al., 2017) has led to their widespread usage in various applications. With such an extensive usage of these models, it is important to analyze their robustness and potential vulnerabilities. In particular, it has been shown that the outputs of these models are susceptible to imperceptible changes in the input, known as adversarial attacks (Szegedy et al., 2014). Adversarial examples, which differ from the original inputs in an imperceptible manner, cause the target model to generate incorrect outputs. If these models are not robust enough to these attacks, they cannot be reliably used in applications with security requirements. To address this issue, many studies have been recently devoted to the effective generation of adversarial examples, the defense against attacks, and the analysis of the vulnerabilities of DNN models (Moosavi-Dezfooli et al., 2016; Madry et al., 2018; Ortiz-Jiménez et al., 2021). The dominant methods to craft imperceptible attacks for continuous data, e.g., audio and image data, are based on gradient computing and various optimization strategies. However, these methods cannot be directly extended to NLP models due to the discrete nature of the tokens in the corresponding representations (i.e., words, subwords, and characters). Another challenge in dealing with textual data is the characterization of the imperceptibility of the adversarial perturbation. The ℓpnorm is highly utilized in image data to measure imperceptibility but it does not apply to textual data where manipulating only one token in a sentence may significantly change the semantics. 
Moreover, in gradient-based methods, it is challenging to incorporate linguistic constraints in a differentiable manner. Hence, optimization-based methods are more difficult to apply, and less investigated, for adversarial attacks against NLP models. Currently, most attacks on textual data are gradient-free and simply based on heuristic word replacement, which may result in sub-optimal performance (Alzantot et al., 2018; Ren et al., 2019; Zang et al., 2020; Jin et al., 2020; Morris et al., 2020; Guo et al., 2021; Sadrizadeh et al., 2022).

In the literature, adversarial attacks have mainly been studied for text classifiers, but less so for other NLP tasks such as Neural Machine Translation (NMT) (Zhang et al., 2020b). In text classifiers, the number of output labels of the model is limited, and the adversary's goal is to mislead the target model into classifying the input into any wrong class (untargeted attack) or into a wrong predetermined class (targeted attack). However, in NMT systems, the output of the target model is a sequence of tokens, which is a much larger space than that of a text classifier (Cheng et al., 2020a), and it is probable that the ground-truth translation changes after perturbing the input sequence. Hence, it is important to craft meaning-preserving adversarial sentences that have a low impact on the ground-truth translation.

In this paper, we propose TransFool to build meaning-preserving and fluent adversarial attacks against NMT models. We build a new solution to the challenges associated with gradient-based adversarial attacks against textual data. To find an adversarial sentence that is fluent and semantically similar to the input sentence, but highly degrades the translation quality of the target model, we propose a multi-term optimization problem over the tokens of the adversarial example. We consider the white-box attack setting, where the adversary has access to the target model and its parameters; white-box attacks are widely studied since they reveal the vulnerabilities of the systems and are used in benchmarks. To ensure that the generated adversarial examples are imperceptibly similar to the original sentences, we incorporate a Language Model (LM) in our method in two ways. First, we consider the loss of a Causal Language Model (CLM) in our optimization problem in order to impose the syntactic correctness of the adversarial example. Second, by working with the embedding representation of LMs, instead of that of the NMT model, we ensure that similar tokens are close to each other in the embedding space (Tenney et al., 2019). This enables the definition of a similarity term between the respective tokens of the clean and adversarial sequences. Hence, we include a similarity constraint in the proposed optimization problem, which uses the LM embeddings. Finally, our optimization problem contains an adversarial term to maximize the loss of the target NMT model.

The generated adversarial example, i.e., the minimizer of the proposed optimization problem, should consist of meaningful tokens; hence, the proposed optimization problem should be solved in a discrete space. Using a gradient projection technique, we first perform a gradient descent step in the continuous embedding space, and then project the resulting embedding vectors onto the most similar valid tokens. In the projection step, we use the LM embedding representation and project the output of the gradient descent step onto the nearest meaningful token in the embedding space (with maximum cosine similarity).
We test our method against different NMT models with transformer structures, which are now widely used for their exceptional performance. For different NMT architectures and translation tasks, experiments show that our white-box attack can reduce the BLEU score, a widely used metric for translation quality evaluation (Post, 2018), to half of its original value for more than 60% of the sentences, while maintaining a high level of semantic similarity with the clean samples. Furthermore, we extend TransFool to black-box settings and show that it can fool unknown target models. Overall, automatic and human evaluations show that, in both white-box and black-box settings, TransFool outperforms the existing heuristic strategies in terms of success rate, semantic similarity, and fluency. In summary, our contributions are as follows:

• We define a new optimization problem to compute semantic-preserving and fluent attacks against NMT models. The objective function contains several terms: an adversarial loss to maximize the loss of the target NMT model; a similarity term to ensure that the adversarial example is similar to the original sentence; and the loss of a CLM to generate fluent and natural adversarial examples.
• We propose a new strategy to incorporate linguistic constraints into our attack in a differentiable manner. Since LM embeddings provide a meaningful representation of the tokens, we use them instead of the NMT embeddings to compute the similarity between two tokens.
• We design a white-box attack algorithm, TransFool, against NMT models by solving the proposed optimization problem with gradient projection. Our attack, which operates at the token level, is effective against state-of-the-art transformer-based NMT models and outperforms prior works.
• By using the transferability of adversarial attacks to other models, we extend the proposed white-box attack to the black-box setting. Our attack is highly effective even when the target languages of the target NMT model and of the reference model are different. To our knowledge, this type of cross-lingual transfer attack has not been investigated before.

The rest of the paper is organized as follows. We review the related works in Section 2. In Section 3, we formulate the problem of adversarial attacks against NMT models and propose an optimization problem to build adversarial attacks. We describe our attack algorithm in Section 4. In Section 5, we discuss the experimental results and evaluate our algorithm against different transformer models and translation tasks; moreover, we evaluate our attack in black-box settings and show that TransFool has very good transfer properties. Finally, the paper is concluded in Section 6.

2 RELATED WORK

Machine translation, an important task in NLP, is the task of automatically converting a sequence of words in a source language into a sequence of words in a target language (Bahdanau et al., 2015). By using DNN models, NMT systems are reaching exceptional performance, which has resulted in their usage in a wide variety of areas, especially in safety- and security-sensitive applications. However, any faulty output of an NMT model may result in irreparable incidents in real-world applications. Hence, we need to better understand the vulnerabilities of NMT models to perturbations of input samples, in particular to adversarial examples, to ensure the security of applications and the robustness of such models. Adversarial attacks against NMT systems have been studied in recent years.
First, Belinkov & Bisk (2018) show that character-level NMT models are highly vulnerable to character manipulations, such as typos, in a black-box setting. Similarly, Ebrahimi et al. (2018a) investigate the robustness of character-level NMT models: they propose a white-box adversarial attack based on HotFlip (Ebrahimi et al., 2018b) that greedily changes the most important characters to decrease the translation quality (untargeted attack) or to mute/push a word in the translation (targeted attack). However, character-level manipulations can be easily detected. To circumvent this issue, many of the adversarial attacks against NMT models are rather based on word replacement. Cheng et al. (2019) propose a white-box attack where they first select random words of the input sentence and replace them with similar words. In particular, in order to limit the search space, they find candidates with the help of a language model and choose the token that aligns best with the gradient of the adversarial loss, so as to cause more damage to the translation. Michel et al. (2019) and Zhang et al. (2021) find the important words in the sentence and replace them with a neighboring word in the embedding space to create adversarial examples. However, these methods use heuristic strategies, which may result in sub-optimal performance.

There are also other types of attacks against NMT models in the literature. In (Wallace et al., 2020), a new type of attack, the universal adversarial attack, is proposed, which consists of a single snippet of text that can be added to any input sentence to mislead the NMT model; however, the added phrase is meaningless and hence easily detectable. Cheng et al. (2020a) propose Seq2Sick, a targeted white-box attack against NMT models. They introduce an optimization problem and solve it by gradient projection. The proposed optimization problem contains an adversarial loss and a group lasso term to ensure that only a few words of the sentence are modified. Although they have a projection step to the nearest embedding vector, they use the NMT embeddings, which may not preserve semantic similarity.

Other types of attacks against NMT models, with different threat models and purposes, have also been investigated in the literature. Some papers focus on making NMT models robust to perturbations of the inputs (Cheng et al., 2018; 2020b; Tan et al., 2021). Other papers use adversarial attacks to enhance NMT models in some respect, such as word sense disambiguation (Emelin et al., 2020), robustness to subword segmentation (Park et al., 2020), and robustness of unsupervised NMT (Yu et al., 2021). In (Xu et al., 2021; Wang et al., 2021), data poisoning attacks against NMT models are studied. Another type of attack, whose purpose is to change multiple words while ensuring that the output of the NMT model remains unchanged, is explored in (Chaturvedi et al., 2019; 2021). Yet another attack approach is presented in (Cai et al., 2021), where the adversary uses hardware faults of the systems to fool NMT models.

In summary, most of the existing adversarial attacks against NMT models are detectable, since they are based on character manipulation or use the NMT embedding space to find similar tokens. Also, heuristic strategies based on word replacement are likely to have sub-optimal performance. Finally, none of these attacks study the transferability to black-box settings. We introduce TransFool to craft effective and fluent adversarial sentences that are similar to the original ones.
3 OPTIMIZATION PROBLEM

In this section, we first present our new formulation for generating adversarial examples against NMT models, along with the different terms that form our optimization problem.

Adversarial Attack. Consider $\mathcal{X}$ to be the source language space and $\mathcal{Y}$ to be the target language space. The NMT model $f: \mathcal{X} \rightarrow \mathcal{Y}$ generally has an encoder-decoder structure (Bahdanau et al., 2015; Vaswani et al., 2017) and aims to maximize the translation probability $p(\mathbf{y}_{ref}|\mathbf{x})$, where $\mathbf{x} \in \mathcal{X}$ is the input sentence in the source language and $\mathbf{y}_{ref} \in \mathcal{Y}$ is the ground-truth translation in the target language. To process textual data, each sentence is decomposed into a sequence of tokens. Therefore, the input sentence $\mathbf{x} = x_1 x_2 \ldots x_k$ is split into a sequence of $k$ tokens, where $x_i$ is a token from the vocabulary set $\mathcal{V}_X$ of the NMT model, which contains all the tokens of the source language. For each token of the translated sentence $\mathbf{y}_{ref} = y_{ref,1}, \ldots, y_{ref,l}$, the NMT model generates a probability vector over the target language vocabulary set $\mathcal{V}_Y$ by applying a softmax function to the decoder output.

The adversary is looking for an adversarial sentence $\mathbf{x}'$ in the source language, tokenized into a sequence of $k$ tokens $\mathbf{x}' = x'_1 x'_2 \ldots x'_k$, that fools the target NMT model, i.e., such that the translation of the adversarial example $f(\mathbf{x}')$ is far from the true translation. However, the adversarial example $\mathbf{x}'$ and the original sentence $\mathbf{x}$ should be imperceptibly close, so that the ground-truth translation of the adversarial example stays similar to $\mathbf{y}_{ref}$.

As is common in NMT models (Vaswani et al., 2017; Junczys-Dowmunt et al., 2018; Tang et al., 2020), to feed the discrete sequence of tokens into the NMT model, each token is converted to a continuous vector, known as an embedding vector, using a lookup table. In particular, let $emb(\cdot)$ be the embedding function that maps the input token $x_i$ to the continuous embedding vector $emb(x_i) = \mathbf{e}_i \in \mathbb{R}^m$, where $m$ is the embedding dimension of the target NMT model. Therefore, the input of the NMT model is a sequence of embedding vectors representing the tokens of the input sentence, i.e., $\mathbf{e}_x = [\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_k] \in \mathbb{R}^{k \times m}$. In the same manner, $\mathbf{e}_{x'} = [\mathbf{e}'_1, \mathbf{e}'_2, \ldots, \mathbf{e}'_k] \in \mathbb{R}^{k \times m}$ is defined for the adversarial example.

To generate an adversarial example for a given input sentence, we introduce an optimization problem with respect to the embedding vectors of the adversarial sentence $\mathbf{e}_{x'}$. Our optimization problem is composed of multiple terms: an adversarial loss, a similarity constraint, and the loss of a language model. The adversarial loss causes the target NMT model to generate a faulty translation, while the language model loss and the similarity constraint impose, respectively, that the generated adversarial example is a fluent sentence and that it is semantically similar to the original sentence. The proposed optimization problem, which finds the adversarial example $\mathbf{x}'$ from its embedding representation $\mathbf{e}_{x'}$ by using a lookup table, is defined as follows:

$$\mathbf{x}' \leftarrow \operatorname*{argmin}_{\mathbf{e}'_i \in \mathcal{E}_{\mathcal{V}_X}} \; \mathcal{L}_{Adv} + \alpha \mathcal{L}_{Sim} + \beta \mathcal{L}_{LM}, \qquad (1)$$

where $\alpha$ and $\beta$ are hyperparameters that control the relative importance of each term. Moreover, we call the continuous space of the embedding representations the embedding space, denoted by $\mathcal{E}$, and we denote by $\mathcal{E}_{\mathcal{V}_X}$ the discrete subspace of $\mathcal{E}$ containing the embedding representation of every token in the source language vocabulary set. We now discuss the different terms of the objective function in detail.
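As a concrete illustration of the embedding lookup $emb(\cdot)$ used above, here is a minimal sketch with a public HuggingFace Marian checkpoint (the checkpoint name and the sentence are illustrative; this is not our exact pipeline):

```python
# Sketch: obtaining the NMT input embeddings e_x of a tokenized sentence.
import torch
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-fr"
tok = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

ids = tok("He is a great player.", return_tensors="pt").input_ids  # (1, k)
emb = model.get_input_embeddings()   # the lookup table emb(.)
e_x = emb(ids)                       # (1, k, m) continuous embedding vectors
print(e_x.shape)
```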
Adversarial Loss. In order to create an adversarial example whose translation is far away from the reference translation $\mathbf{y}_{ref}$, we try to maximize the training loss of the target NMT model. Since NMT models are trained to generate the next token of the translation given the translation up until that token, we look for the adversarial example that maximizes the probability of a wrong translation (i.e., minimizes the probability of the correct translation) for the $i$-th token, given that the NMT model has produced the correct translation up to step $(i-1)$:

$$\mathcal{L}_{Adv} = \frac{1}{l} \sum_{i=1}^{l} \log\left(p_f(y_{ref,i} \mid \mathbf{e}_{x'}, \{y_{ref,1}, \ldots, y_{ref,(i-1)}\})\right), \qquad (2)$$

where $p_f(y_{ref,i} \mid \mathbf{e}_{x'}, \{y_{ref,1}, \ldots, y_{ref,(i-1)}\})$ is the probability assigned by the NMT model to the correct token $y_{ref,i}$; its negative logarithm is the cross-entropy between the predicted token distribution of the NMT model and the delta distribution on the token $y_{ref,i}$, which is one for the correct translated token and zero otherwise. By minimizing $\log(p_f(\cdot))$, normalized by the sentence length $l$, we force the output probability vector of the NMT model to differ from the delta distribution on the token $y_{ref,i}$, which may cause the predicted translation to be wrong.

Similarity Constraint. To ensure that the generated adversarial example is similar to the original sentence, we need to add a similarity constraint to our optimization problem. It has been shown that the embedding representation of a language model captures the semantics of the tokens (Tenney et al., 2019; Shavarani & Sarkar, 2021). Suppose that the embedding representation of the original sentence by a language model (which may differ from the NMT embedding representation $\mathbf{e}_x$) is $\mathbf{v}_x = [\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k] \in \mathbb{R}^{k \times n}$, where $n$ is the embedding dimension of the language model. Likewise, let $\mathbf{v}_{x'}$ denote the sequence of LM embedding vectors for the tokens of the adversarial example. We can define the distance between the $i$-th tokens of the original and adversarial sentences by computing the cosine distance between their corresponding LM embedding vectors:

$$\forall i \in \{1, \ldots, k\}: \quad r_i = 1 - \frac{\mathbf{v}_i^\top \mathbf{v}'_i}{\|\mathbf{v}_i\|_2 \, \|\mathbf{v}'_i\|_2}. \qquad (3)$$

The cosine distance is zero if the two tokens are the same, and it takes larger values for two unrelated tokens. We want the adversarial sentence to differ from the original sentence in only a few tokens; therefore, the cosine distance between most of the tokens of the original and adversarial sentences should be zero, which requires the cosine distance vector $[r_1, r_2, \ldots, r_k]$ to be sparse. To ensure the sparsity of the cosine distance vector, instead of the $\ell_0$ norm, which is not differentiable, we define the similarity constraint as the $\ell_1$-norm relaxation of the cosine distance vector, normalized by the length of the sentence:

$$\mathcal{L}_{Sim} = \frac{1}{k} \sum_{i=1}^{k} \left(1 - \frac{\mathbf{v}_i^\top \mathbf{v}'_i}{\|\mathbf{v}_i\|_2 \, \|\mathbf{v}'_i\|_2}\right). \qquad (4)$$

Language Model Loss. Causal language models are trained to maximize the probability of a token given the previous tokens. Hence, we can use the loss of a CLM, i.e., the negative log-probability, as a rough and differentiable measure of the fluency of the generated adversarial sentence. The loss of the CLM, normalized by the sentence length, is:

$$\mathcal{L}_{LM} = -\frac{1}{k} \sum_{i=1}^{k} \log\left(p_g(v'_i \mid v'_1, \ldots, v'_{(i-1)})\right), \qquad (5)$$

where $g$ is a CLM and $p_g(v'_i \mid v'_1, \ldots, v'_{(i-1)})$ is the probability assigned by the language model to the token $v'_i$ of the adversarial example; its negative logarithm is the cross-entropy between the predicted token distribution of the language model and the delta distribution on $v'_i$, which is one for the corresponding token of the adversarial example and zero otherwise.
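For concreteness, here is a minimal PyTorch sketch of the three loss terms. The tensor shapes, and the assumptions that the decoder logits are obtained under teacher forcing on $\mathbf{y}_{ref}$ and that the CLM logits are aligned with the adversarial tokens, are simplifications of the sketch rather than details of our implementation:

```python
# Sketch of the objective in Eq. (1), given:
#   nmt_logits: (l, |V_Y|) decoder logits under teacher forcing on y_ref
#   y_ref:      (l,)       reference token ids
#   lm_logits:  (k, |V_X|) CLM logits aligned with the adversarial tokens
#   x_adv:      (k,)       adversarial token ids
#   v, v_adv:   (k, n)     LM embeddings of the original/adversarial tokens
import torch
import torch.nn.functional as F

def transfool_loss(nmt_logits, y_ref, lm_logits, x_adv, v, v_adv,
                   alpha=20.0, beta=1.8):
    # Adversarial term (Eq. 2): mean log-probability of the reference tokens.
    l_adv = -F.cross_entropy(nmt_logits, y_ref)
    # Similarity term (Eq. 4): mean cosine distance between LM embeddings.
    l_sim = (1.0 - F.cosine_similarity(v, v_adv, dim=-1)).mean()
    # Fluency term (Eq. 5): CLM negative log-likelihood of the adversarial tokens.
    l_lm = F.cross_entropy(lm_logits, x_adv)
    return l_adv + alpha * l_sim + beta * l_lm
```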
To generate adversarial examples against a target NMT model, we propose to solve the optimization problem (1), which contains an adversarial loss term, a similarity constraint, and a CLM loss.

4 TRANSFOOL ATTACK ALGORITHM

We now introduce our algorithm for generating adversarial examples against NMT models. The block diagram of our proposed attack is presented in Figure 1. We are looking for an adversarial example with tokens in the vocabulary set $\mathcal{V}_X$ and corresponding embedding vectors in the subspace $\mathcal{E}_{\mathcal{V}_X}$; hence, the optimization problem (1) is discrete. The high-level idea of our algorithm is to use gradient projection to solve (1) in the discrete subspace $\mathcal{E}_{\mathcal{V}_X}$. The objective function of (1) depends on both the NMT and LM embedding representations of the adversarial example, $\mathbf{e}_{x'}$ and $\mathbf{v}_{x'}$, respectively. Since we aim to minimize the objective with respect to $\mathbf{e}_{x'}$, we need a transformation between the embedding space of the language model and that of the target NMT model. To this aim, as depicted in Figure 1, we propose to replace the embedding layer of a pre-trained language model with a Fully Connected (FC) layer, which receives the embedding vectors of the NMT model as its input. Then, we train the language model and the FC layer simultaneously with the causal language modeling objective. Therefore, we can compute the LM embedding vectors as a function of the NMT embedding vectors: $\mathbf{v}_i = FC(\mathbf{e}_i)$, where $FC \in \mathbb{R}^{m \times n}$ is the trained FC layer.

The pseudo-code of our attack is given in Algorithm 1:

Algorithm 1: TransFool Adversarial Attack
  Input: f(·): target NMT model; V_X: vocabulary set; FC: fully connected layer;
         x: input sentence; y_ref: ground-truth translation of x;
         λ: BLEU score ratio; α, β: hyperparameters;
         K: maximum number of iterations; γ: step size
  Output: x': generated adversarial example
  Initialization: s ← empty set; itr ← 0; thr ← BLEU(f(e_x), y_ref) × λ;
                  e_{g,i}, e_{p,i} ← e_i for all i ∈ {1, ..., k}
  while itr < K do
      itr ← itr + 1
      Step 1 (gradient descent in the continuous embedding space):
          e_g ← e_g − γ · ∇_{e_{x'}} (L_Adv + α L_Sim + β L_LM)
          v_g ← FC(e_g)
      Step 2 (projection to the discrete subspace E_{V_X}, update if the sentence is new):
          for i ∈ {1, ..., k} do
              e_{p,i} ← argmax_{e ∈ E_{V_X}} (FC(e)^T v_{g,i}) / (‖FC(e)‖_2 ‖v_{g,i}‖_2)
          end for
          if e_p not in s then
              add e_p to s; e_g ← e_p
              if BLEU(f(e_p), y_ref) ≤ thr then break (adversarial example found)
          end if
  end while
  return e_{x'} ← e_p

In more detail, we first convert the discrete tokens of the sentence into the continuous embedding vectors of the target NMT model, and then use the FC layer to compute their LM embedding representations. Afterwards, we consider the continuous relaxation of the optimization problem, i.e., we assume that the embedding vectors live in the continuous embedding space $\mathcal{E}$ instead of $\mathcal{E}_{\mathcal{V}_X}$. In each iteration of the algorithm, we first update the sequence of embedding vectors $\mathbf{e}_{x'}$ in the opposite direction of the gradient (gradient descent); let $\mathbf{e}_{g,i}$ denote the output of the gradient descent step for the $i$-th token. Then, we project the resulting embedding vectors, which are not necessarily in $\mathcal{E}_{\mathcal{V}_X}$, onto the nearest token in the vocabulary set $\mathcal{V}_X$. Since distances in the LM embedding space capture the relationships between tokens, we use the LM embedding representations, with the cosine similarity metric, in the projection step to find the most similar token in the vocabulary; we apply the trained fully connected layer FC to obtain the LM embedding representations: $\mathbf{v}_g = FC(\mathbf{e}_g)$. Hence, the projected NMT embedding vector $\mathbf{e}_{p,i}$ for the $i$-th token is:

$$\mathbf{e}_{p,i} = \operatorname*{argmax}_{\mathbf{e} \in \mathcal{E}_{\mathcal{V}_X}} \frac{FC(\mathbf{e})^\top \mathbf{v}_{g,i}}{\|FC(\mathbf{e})\|_2 \, \|\mathbf{v}_{g,i}\|_2}. \qquad (6)$$
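A minimal PyTorch sketch of this projection step, assuming FC is a trained torch.nn.Linear from the NMT embedding dimension m to the LM dimension n, and that the full NMT embedding table is available as a tensor:

```python
# Sketch of Eq. (6): project each updated NMT embedding onto the vocabulary
# token whose LM embedding has the largest cosine similarity.
import torch
import torch.nn.functional as F

def project_to_vocab(e_g, nmt_emb_table, fc):
    v_g = fc(e_g)                        # (k, n) LM embeddings of the update
    v_vocab = fc(nmt_emb_table)          # (|V_X|, n) LM embeddings of all tokens
    # Cosine similarities between every position and every vocabulary token.
    sims = F.normalize(v_g, dim=-1) @ F.normalize(v_vocab, dim=-1).T
    token_ids = sims.argmax(dim=-1)      # nearest valid token per position
    return token_ids, nmt_emb_table[token_ids]
```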
However, due to the discrete nature of the data, by applying the projection step in every iteration of the algorithm, we may face an undesirable situation where the algorithm gets stuck in a loop of previously computed steps. In order to circumvent this issue, we only update the embedding vectors with the output of the projection step if the projected sentence has not been generated before. We perform the gradient descent and projection steps iteratively until a maximum number of iterations is reached, or until the translation quality of the adversarial example, relative to the original translation quality, drops below a threshold. To evaluate the translation quality, we use the BLEU score, a widely used metric in the literature:

$$\frac{\mathrm{BLEU}(f(\mathbf{e}_{x'}), \mathbf{y}_{ref})}{\mathrm{BLEU}(f(\mathbf{e}_x), \mathbf{y}_{ref})} \leq \lambda. \qquad (7)$$
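A minimal sketch of this stopping criterion with sacreBLEU; translate is an assumed wrapper that runs the target NMT model and returns a detokenized string:

```python
# Sketch of Eq. (7): stop once the adversarial translation's BLEU score drops
# below a fraction lambda of the original translation's BLEU score.
from sacrebleu import sentence_bleu

def stopping_criterion(translate, x, x_adv, y_ref, lam=0.4):
    bleu_orig = sentence_bleu(translate(x), [y_ref]).score
    bleu_adv = sentence_bleu(translate(x_adv), [y_ref]).score
    return bleu_orig > 0 and bleu_adv / bleu_orig <= lam
```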
5 EXPERIMENTS

In this section, we first discuss our experimental setup, and then we evaluate TransFool against different models and translation tasks, both in white-box and black-box settings.

5.1 EXPERIMENTAL SETUP

We conduct experiments on the English-French (En-Fr), English-German (En-De), and English-Chinese (En-Zh) translation tasks. We use the test set of WMT14 (Bojar et al., 2014) for the En-Fr and En-De tasks, and the test set of OPUS-100 (Zhang et al., 2020a) for the En-Zh task. Some statistics of these datasets are presented in Appendix A. We evaluate TransFool against transformer-based NMT models. To verify that our attack is effective against various model architectures, we attack the HuggingFace implementations of the Marian NMT models (Junczys-Dowmunt et al., 2018) and of the mBART50 multilingual NMT model (Tang et al., 2020). As explained in Section 4, the similarity constraint and the LM loss of the proposed optimization problem require an FC layer and a CLM. To this aim, for each NMT model, we train an FC layer and a CLM (with the GPT-2 structure (Radford et al., 2019)) on the WikiText-103 dataset. We note that the input of the FC layer is the target NMT embedding representation of the input sentence. To find the minimizer of our optimization problem (1), we use the Adam optimizer (Kingma & Ba, 2014) with step size γ = 0.016, and we set the maximum number of iterations to 500. Our algorithm has three parameters: the coefficients α and β in the objective (1), and the relative BLEU score ratio λ in the stopping criterion (7). We set λ = 0.4, β = 1.8, and α = 20. We chose these parameters experimentally, according to the ablation study available in Appendix B, in order to optimize the performance in terms of success rate, semantic similarity, and fluency.

We compare our attack with (Michel et al., 2019), a white-box untargeted attack against NMT models.1 We only consider one of their attacks, called kNN, which substitutes some words with their neighbors in the embedding space; their other attack swaps characters, which is too easy to detect. We also adapted Seq2Sick (Cheng et al., 2020a), a targeted attack against NMT models based on an optimization problem in the NMT embedding space, to our untargeted setting.

For evaluation, we report different performance metrics: (1) Attack Success Rate (ASR), which measures the rate of successful adversarial examples; similar to (Ebrahimi et al., 2018a), we define an adversarial example as successful if the BLEU score of its translation is less than half of the BLEU score of the original translation. (2) Relative decrease of translation quality, measured in terms of the BLEU score2 and chrF (Popović, 2015); we denote these two metrics by RDBLEU and RDchrF, respectively, and compute the relative decrease so that the scores are comparable across different models and datasets (Michel et al., 2019). (3) Semantic Similarity (Sim.), which is computed between the original and adversarial sentences, and commonly approximated by the universal sentence encoder (Yang et al., 2020)3. (4) Perplexity score (Perp.), a measure of the fluency of the adversarial example, computed as the perplexity score of GPT-2 (large); a short code sketch of this computation is given after the white-box results below. (5) Token Error Rate (TER), which measures imperceptibility by computing the rate of tokens modified by the adversarial attack.

5.2 RESULTS OF THE WHITE-BOX ATTACK

We now evaluate TransFool, in comparison to kNN and Seq2Sick, against different NMT models. Table 1 shows the results in terms of the different evaluation metrics.4 Overall, our attack is able to decrease the BLEU score of the target model to less than half of the BLEU score of the original translation for more than 60% of the sentences, for all tasks and models (except for the En-Zh mBART50 model, where the ASR is 57.50%). Also, in all cases, the semantic similarity is more than 0.83, which shows that our attack can maintain a high level of semantic similarity with the clean sentences. In comparison to the baselines, TransFool obtains a higher success rate against different model structures and translation tasks, and it is able to reduce the translation quality more severely. Since the algorithm uses the gradients of the proposed optimization problem and is not based on token replacement, TransFool can strongly degrade the translation quality. Furthermore, the perplexity score of the adversarial examples generated by TransFool is much lower than that of both baselines (except for the En-Fr Marian model, where it is slightly higher than that of Seq2Sick), which is due to the integration of the LM embeddings and the LM loss term in the optimization problem. Moreover, the token error rate of our attack is lower than that of both baselines, and the semantic similarity is preserved better by TransFool in almost all cases, since we use the LM embeddings instead of the NMT ones for the similarity constraint. While kNN can also maintain semantic similarity, Seq2Sick does not perform well on this criterion. We also computed the similarity with BERTScore (Zhang et al., 2019) and BLEURT-20 (Sellam et al., 2020), which highly correlate with human judgments, in Appendix D; the results show that TransFool is better than both baselines at maintaining the semantics. Moreover, as presented in Appendix D.2, the successful attacks by the baselines, as opposed to those by TransFool, are not semantic-preserving or fluent sentences.

1 Code of (Cheng et al., 2019; 2020b), untargeted white-box attacks against NMTs, is not publicly available.
2 We use case-sensitive SacreBLEU (Post, 2018) on detokenized sentences.
3 We use the multilingual version since we are dealing with multiple languages.
4 We discard the sentences whose original BLEU score is zero, to prevent improving the results artificially. We should also note that all results are computed after re-tokenization of the adversarial example. Since we generate the adversarial example at the token level, there is a small chance that, when the generated adversarial example is converted to text, the re-tokenization does not produce the same set of tokens.
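As announced in the description of the perplexity metric in Section 5.1, here is a minimal sketch of that computation with HuggingFace transformers (the sentence is hypothetical):

```python
# Sketch: fluency metric as the GPT-2 (large) perplexity of a sentence.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large").eval()

def perplexity(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token negative log-likelihood
    return torch.exp(loss).item()

print(perplexity("He is a great player."))
```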
Finally, the complete setup and results of our human evaluation are presented in Appendix H, which also shows the superiority of TransFool. We also compare the runtime of TransFool with that of the two baselines. In each iteration of our proposed attack, we need to perform a back-propagation through the target NMT model and the language model to compute the gradients. Also, in some iterations (27 iterations per sentence on average), a forward pass is required to compute the output of the target NMT model to check the stopping criterion. For the Marian NMT (En-Fr) model, on a system equipped with an NVIDIA A100 GPU, it takes 26.45 seconds to generate an adversarial example with TransFool. On the same system, kNN needs 1.45 seconds and Seq2Sick needs 38.85 seconds, albeit for less effective adversarial attacks.

Table 2 shows some adversarial examples against mBART50 (En-De). In comparison to the baselines, TransFool makes smaller changes to the sentence: the generated adversarial example is a correct English sentence, and it is similar to the original sentence. However, kNN and Seq2Sick generate adversarial sentences that are not necessarily natural or similar to the original sentences. More examples generated by TransFool, kNN, and Seq2Sick can be found in Appendix D.4. We also provide some adversarial sentences generated when we do not use the LM embeddings in our algorithm, in order to show the importance of this component. Indeed, TransFool outperforms both baselines in terms of success rate, and it is able to generate more natural adversarial examples with a lower number of perturbations (TER) and higher semantic similarity with the clean samples in almost all cases. A complete study of the hyperparameters, and of the effect of using LM embeddings instead of NMT embeddings for computing similarity, on the performance of TransFool is presented in Appendices B and C, respectively.

5.3 PERFORMANCE IN BLACK-BOX ATTACK SETTINGS

In practice, the adversary's access to the learning system may be limited. Hence, we propose to analyze the performance of TransFool in a black-box scenario. It has been shown that adversarial attacks often transfer to other models that have a different architecture and are even trained on different datasets (Szegedy et al., 2014). By exploiting this property of adversarial attacks, we extend TransFool to the black-box scenario. We assume complete access to one NMT model (the reference model), including its gradients, and we implement the proposed gradient-based attack of Algorithm 1 with this model. However, for the stopping criterion of the algorithm, we query the black-box target NMT model to compute the BLEU score. We can also implement the black-box transfer attack in the case where the source languages of the reference and target models are the same, but their target languages differ. Since Marian NMT is faster and lighter than mBART50, we use it as the reference model and evaluate the performance of the black-box attack against mBART50. We compare the performance of TransFool with WSLS (Zhang et al., 2021), a black-box untargeted attack against NMT models based on word replacement (the choice of the back-translation model used in WSLS is investigated in Appendix F). We also evaluate the performance of kNN and Seq2Sick in the black-box setting by attacking mBART50 with the adversarial examples generated against Marian NMT (in the white-box setting). The results are reported in Table 3.
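A minimal sketch of this transfer setup; transfool_iterates (the sequence of projected candidates produced by the white-box attack on the reference model) and target_translate (a query to the black-box target) are assumed callables, not part of our released code:

```python
# Sketch: black-box attack by transfer. Gradients come from the reference
# model only; the black-box target is queried just for the stopping criterion.
from sacrebleu import sentence_bleu

def black_box_attack(x, y_ref, transfool_iterates, target_translate, lam=0.4):
    bleu_orig = sentence_bleu(target_translate(x), [y_ref]).score
    for x_adv in transfool_iterates(x, y_ref):
        if sentence_bleu(target_translate(x_adv), [y_ref]).score <= lam * bleu_orig:
            return x_adv  # success against the unseen target model
    return None
```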
We also report the performance when attacking Google Translate, some generated adversarial samples, and the similarity performance computed by BERTScore and BLEURT-20 in Appendix E. In all tasks, with a few queries to the target model, our black-box attack achieves better performance than the white-box attack against the target model (mBART50), but slightly worse performance than the white-box attack against the reference model (Marian NMT). In all cases, the success rate, token error rate, and perplexity of TransFool are better than those of all baselines (except for the En-Fr task, where the perplexity is slightly higher than that of Seq2Sick). The ability of TransFool and WSLS to maintain semantic similarity is comparable and better than that of the two other baselines; however, WSLS has the highest token error rate, which makes its attack detectable. The effect of TransFool on the BLEU score is larger than that of the other methods, and its effect on the chrF metric comes after WSLS (except for the En-De task, where the RDchrF of TransFool is the best).

Regarding complexity, TransFool requires only a few queries to the target model for translation, while WSLS queries the model more than a thousand times, which is costly and may not be feasible in practice. For the En-Fr task, on a system equipped with an NVIDIA A100 GPU, it takes 43.36 and 1904.98 seconds to generate adversarial examples with TransFool and WSLS, respectively, which shows that WSLS is very time-consuming.

We also analyze the transferability of the generated adversarial examples to a black-box NMT model with the same source language but a different target language. Since we need a dataset with the same set of sentences for different language pairs, we use the validation set of WMT14 for the En-Fr and En-De tasks. Table 4 shows the results for two cases: Marian NMT or mBART50 as the target model. We use Marian NMT as the reference model, with a different target language than that of the target model. In all settings, the generated adversarial examples are highly transferable to another NMT model with a different target language (i.e., they have a high attack success rate and large semantic similarity). The high transferability of TransFool shows that it is able to capture common failure modes across different NMT models, which can be dangerous in real-world applications.

6 CONCLUSION

In this paper, we proposed TransFool, a white-box adversarial attack against NMT models, by introducing a new optimization problem solved by an iterative method based on gradient projection. We utilized the embedding representation of a language model to impose a similarity constraint on the adversarial examples. Moreover, by considering the loss of a language model in our optimization problem, the generated adversarial examples are more fluent. Extensive automatic and human evaluations show that TransFool is highly effective in different translation tasks and against different NMT models. Our attack is also transferable to black-box settings with different structures and even different target languages. In both white-box and black-box scenarios, TransFool obtains improvements over the baselines in terms of success rate, semantic similarity, and fluency. It is important to analyze adversarial attacks against NMT models, such as TransFool, to find the vulnerabilities of NMT models, measure their robustness, and eventually build more robust NMT models for real-life applications.
Ethics Statement. We introduced TransFool, an adversarial attack against NMT models, with the motivation of revealing the vulnerabilities of NMT models and paving the way for designing stronger defenses and building robust NMT models for real-life scenarios. While it remains a possibility that a threat actor may misuse our attack, we do not condone using our method with the intent of attacking a real NMT system.

Reproducibility Statement. The source code will be made publicly available as soon as possible to help reproduce our results. Moreover, Appendix G contains the license information and more details about the assets (datasets, codes, and models).

Supplementary Material — TransFool: An Adversarial Attack against Neural Machine Translation Models

ABSTRACT

In this supplementary material, we first provide some statistics of the evaluation datasets in Section A. The ablation study of the hyperparameters of TransFool is presented in Section B. We investigate the effect of the LM embedding representation on TransFool and kNN in Section C. More results of the white-box attack are reported in Section D: the results of other similarity metrics (Section D.1), the performance over successful attacks (Section D.2), and some generated adversarial examples (Section D.4). Section E provides more experiments on the black-box attack: the performance when attacking Google Translate (Section E.1), the results of other similarity metrics (Section E.2), and some generated adversarial examples (Section E.3). We discuss the effect of the back-translation model choice on WSLS in Section F. Finally, the license information and more details about the assets (datasets, codes, and models) are provided in Section G.

A SOME STATISTICS OF THE DATASETS

Some statistics of the evaluation datasets, i.e., OPUS-100 (En-Zh) and WMT14 (En-Fr and En-De), including the number of samples, the average length of the sentences, and the translation quality of Marian NMT and mBART50, are reported in Table 5.

B ABLATION STUDY

In this Section, we analyze the effect of the different hyperparameters (including the coefficients α and β of our optimization problem (1), the step size γ of the gradient descent, and the relative BLEU score ratio λ in the stopping criterion (7)) on the white-box attack performance, in terms of success rate, semantic similarity, and perplexity score. In all the experiments, we consider the English-to-French Marian NMT model and evaluate over the first 1000 sentences of the test set of WMT14. The default values of the hyperparameters are α = 20, β = 1.8, γ = 0.016, and λ = 0.4, except for the hyperparameter that varies in each experiment.

Effect of the similarity coefficient α. This hyperparameter determines the strength of the similarity term in the optimization problem (1). Figure 2a shows the effect of α on the performance of our attack. By increasing the similarity coefficient of the proposed optimization problem, we force the algorithm to find adversarial sentences that are more similar to the original sentence. Therefore, as shown in Figure 2a, larger values of α result in higher semantic similarity; however, in this case, it is harder to fool the NMT model, i.e., the attack success rate, RDBLEU, and RDchrF are lower. Moreover, since the generated adversarial examples are more similar to the original sentence, they are more natural, and their perplexity score is lower.
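To make the ablation protocol concrete, here is a minimal sketch of the sweep; run_transfool is a hypothetical driver that runs the attack with the given hyperparameters and returns aggregate metrics:

```python
# Sketch: vary one hyperparameter at a time, keeping the others at their
# default values, and record the attack metrics for each setting.
DEFAULTS = dict(alpha=20.0, beta=1.8, gamma=0.016, lam=0.4)

def ablate(run_transfool, name, values):
    results = {}
    for v in values:
        params = {**DEFAULTS, name: v}
        results[v] = run_transfool(**params)  # e.g., {'asr': ..., 'sim': ...}
    return results

# Example: the similarity-coefficient sweep of Figure 2a.
# results = ablate(run_transfool, "alpha", [5, 15, 25, 35])
```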
Effect of the language model loss coefficient β. We analyze the impact of the hyperparameter β, which controls the importance of the language model loss term in the proposed optimization problem, in Figure 2b. By increasing this coefficient, we weaken the effect of the similarity term, i.e., the generated adversarial examples are less similar to the original sentence. As a result, the success rate and the effect on the translation quality, i.e., RDBLEU and RDchrF, increase.

Effect of the step size γ. The step size of the gradient descent step of the algorithm can also impact the performance of our attack, which is investigated in Figure 2c. Increasing the step size results in larger movements in the embedding space in each iteration of the algorithm. Hence, the generated adversarial examples are more aggressive, which results in lower semantic similarity and higher perplexity scores. However, we can find adversarial examples more easily and achieve a higher attack success rate, RDBLEU, and RDchrF.

Effect of the BLEU score ratio λ. This hyperparameter determines the stopping criterion of our iterative algorithm. Figure 2d studies its effect on the performance of our attack. As this figure shows, a higher BLEU score ratio causes the algorithm to stop in earlier iterations; therefore, the changes applied to the sentence are less aggressive, and hence we achieve higher semantic similarity and a lower perplexity score. However, the attack success rate, RDBLEU, and RDchrF decrease, since we make fewer changes to the sentences.

[Figure 2: Effect of different hyperparameters on the performance of TransFool. Panels (a)-(d) plot the attack success rate, semantic similarity, and perplexity score against the similarity coefficient α, the LM loss coefficient β, the step size γ, and the BLEU score ratio λ, respectively; panels (e)-(h) plot RDBLEU and RDchrF against the same hyperparameters.]

C EFFECT OF THE LM EMBEDDING REPRESENTATION

Table 6 shows the results of TransFool and kNN when we use either the LM embeddings or the NMT embeddings to measure the similarity between two tokens.5 The LM embeddings result in lower perplexity and higher semantic similarity for both methods, which demonstrates the importance of this component in generating meaning-preserving, fluent adversarial examples.

5 In order to have a fair comparison, we fine-tuned the hyperparameters of TransFool, in the case where we do not use LM embeddings, to achieve a similar attack success rate.

D MORE RESULTS ON THE WHITE-BOX ATTACK

D.1 SEMANTIC SIMILARITY COMPUTED BY OTHER METRICS

To better assess the ability of adversarial attacks to maintain semantic similarity, we can compute the similarity between the original and adversarial sentences using other metrics, such as BERTScore (Zhang et al., 2019) and BLEURT-20 (Sellam et al., 2020). It is shown in (Zhang et al., 2019) that BERTScore correlates well with human judgments.
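As an illustration, a minimal sketch of the BERTScore computation with the bert-score package (the sentence pair is hypothetical):

```python
# Sketch: semantic similarity between original and adversarial sentences
# measured with BERTScore.
from bert_score import score

originals = ["A storm warning was issued for the coast."]       # hypothetical
adversarials = ["A storm alert was issued for the coastline."]  # hypothetical

P, R, F1 = score(adversarials, originals, lang="en", verbose=False)
print(f"BERTScore F1: {F1.mean().item():.4f}")
```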
BLEURT-20 is also shown to correlates better with human judgment than traditional measures (Freitag et al., 2021). The results are reported in Table 7. These results indicate that the TransFool is indeed more capable of preserving the semantics of the input sentence. In the two cases where kNN has better similarity by using the Universal Sentence Encoder (USE) (Yang et al., 2020), the performance of TransFool is better in terms of BERTScore and BLEURT-20. D.2 PERFORMANCE OVER SUCCESSFUL ATTACKS The evaluation metrics of the successful adversarial examples that strongly affect the translation quality are also important, and they show the capability of the adversarial attack. Hence, we evaluate TransFool, kNN, and Seq2Sick only over the successful adversarial examples.6 The results for the white-box setting are presented in Table 8. By comparing this Table and Table 1, which shows the results on the whole dataset, we can see that TransFool performance is consistent among successful and unsuccessful attacks. Moreover, successful adversarial examples generated by TransFool are still semantically similar to the original sentences, and their perplexity score is low. However, the successful adversarial examples generated by Seq2Sick and kNN do not preserve the semantic similarity and are not fluent sentences; hence, they are not valid adversarial sentences. D.3 TRADE-OFF BETWEEN SUCCESS RATE AND SIMILARITY/FLUENCY The results in our ablation study B show that there is a trade-off between the quality of adversarial example, in terms of semantic-preservation and fluency, and the attack success rate. As studied in 6As defined in Section 5, the adversarial example is successful if the BLEU score of its translation is less than half of the BLEU score of the original translation. (Morris et al., 2020), we can filter adversarial examples with low quality based on hard constraints on semantic similarity and the number of added grammatical errors caused by adversarial perturbations. We can analyze the trade-off between success rate and similarity/fluency by setting different thresholds for filtering adversarial examples. If we evaluate the similarity by the sentence encoder suggested in (Morris et al., 2020), the success rate with different threshold values for similarity in the case of Marian (EnFr) is depicted in Figure 3b. By considering only the adversarial examples with a similarity higher than a threshold, the success rate decreases as the threshold increases, and the quality of the adversarial examples increases. Similarly, we can do the same analysis for fluency. As suggested in (Morris et al., 2020), we count the grammatical errors by LanguageTool (Naber et al., 2003) for the original sen- tences and the adversarial examples. Figure 3a depicts the success rate for different thresholds of the number of added grammatical errors caused by adversarial perturbations. These analyses show that with tighter constraints, we can generate better adversarial examples while the success rate decreases. All in all, according to these results, TransFool outperforms the baselines for different thresholds of similarity and grammatical errors. D.4 MORE ADVERSARIAL EXAMPLES In this Section, we present more adversarial examples generated by TransFool, kNN, and Seq2Sick. In order to show the effect of using LM embeddings on the performance of TransFool, we also include the generated adversarial examples against English to French Marian NMT model when we do not use LM embeddings. 
D.4 MORE ADVERSARIAL EXAMPLES

In this section, we present more adversarial examples generated by TransFool, kNN, and Seq2Sick. In order to show the effect of using LM embeddings on the performance of TransFool, we also include the adversarial examples generated against the English-to-French Marian NMT model when we do not use LM embeddings. In all these tables, the tokens modified by TransFool are written in blue in the original sentence, and the tokens modified by the different adversarial attacks are written in red in their corresponding adversarial sentences. Moreover, the changes made by the adversarial attack to the translation that are not directly related to the modified tokens are written in orange, while the changes that are the direct result of the modified tokens are written in brown. As can be seen in the examples presented in Tables 9 and 10, TransFool makes smaller changes to the sentence. The generated adversarial example is a correct English sentence, and it is similar to the original sentence. However, kNN, Seq2Sick, and our method with the NMT embeddings make changes that are perceptible, and the adversarial sentences are not necessarily similar to the original sentence. The higher semantic similarity of the adversarial sentences generated by TransFool is due to the integration of the LM embeddings and the LM loss in the proposed optimization problem. We should highlight that TransFool is able to make changes to the translation of the adversarial sentence that are not directly related to the modifications of the original sentence but are the result of the NMT model failure. Other examples against different tasks and models are presented in Tables 11 to 16.

E MORE RESULTS ON THE BLACK-BOX ATTACK

E.1 ATTACKING GOOGLE TRANSLATE

To evaluate the effect of different attacks in practice, we attack Google Translate7 with TransFool, kNN, and Seq2Sick. Since the number of queries to Google Translate is limited per day, we were not able to attack with WSLS, which requires a high number of queries. Table 17 presents the performance on the English-to-French translation task. The results demonstrate that adversarial sentences crafted by TransFool can degrade the translation quality more while preserving the semantics better. The perplexity score and word error rate of TransFool are comparable with those of Seq2Sick, but Seq2Sick is not meaning-preserving and is less effective. We also performed the cross-lingual black-box attack. We consider Marian NMT (En-Fr) as the reference model and attack En-De Google Translate. The results for TransFool are reported in Table 18.

E.2 SEMANTIC SIMILARITY COMPUTED BY OTHER METRICS

Similar to the white-box attack, we compute the similarity between the adversarial and original sentences with BERTScore and BLEURT-20, since they correlate well with human judgments. The similarity performance of TransFool and WSLS8 in the black-box setting is reported in Table 19. According to Table 19, TransFool is better at maintaining semantic similarity. This may be because we used LM embeddings instead of the NMT ones in the similarity constraint.

E.3 SOME ADVERSARIAL EXAMPLES

We also present some adversarial examples generated by TransFool and WSLS in the black-box setting in Tables 20 to 22. In these tables, the tokens modified by TransFool are written in blue in the original sentence, and the tokens modified by the different adversarial attacks are written in red in their corresponding adversarial sentences. Moreover, the changes made by the adversarial attack to the translation that are not directly related to the modified tokens are written in orange, while the changes that are the direct result of the modified tokens are written in brown. These examples show that the modifications made by TransFool are less detectable, i.e., the generated adversarial examples are more natural and similar to the original sentence. Moreover, TransFool makes changes to the translation that are not the direct result of the modified tokens of the adversarial sentence.

7We should note that since we do not have a tokenizer, we compute the Word Error Rate (WER) instead of the Token Error Rate (TER).
8The results of kNN and Seq2Sick are not reported since they are transfer attacks, and their performance is already reported in Table 7.
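Since footnote 7 reports WER rather than TER, a minimal reference implementation of word error rate via edit distance is sketched below; this is the standard dynamic-programming construction, not code from the paper.

```python
# Word error rate (WER): Levenshtein distance over word sequences,
# normalized by the reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```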
F EFFECT OF BACK-TRANSLATION MODEL CHOICE ON WSLS PERFORMANCE

WSLS uses a back-translation model for crafting an adversarial example. In (Zhang et al., 2021), the authors investigate the En-De task and use the winner model of the WMT19 De-En sub-track (Ng et al., 2019) as the back-translation model. However, they do not evaluate their method on the En-Fr and En-Zh tasks. To evaluate the performance of WSLS in Table 3, we used pre-trained Marian NMT models for all three back-translation models. In order to show the effect of our choice of back-translation model, we compare the performance of WSLS for the En-De task when we use Marian NMT or (Ng et al., 2019) as the back-translation model in Table 23. As this table shows, WSLS with Marian NMT as the back-translation model results in even higher semantic similarity and a lower perplexity score. On the other hand, WSLS with (Ng et al., 2019) as the back-translation model achieves a slightly higher success rate. These results show that our choice of back-translation model does not substantially affect the performance of WSLS.

G LICENSE INFORMATION AND DETAILS

In this section, we provide some details about the datasets, codes, and models used in this paper. We should note that we used the models and datasets that are available in the HuggingFace transformers (Wolf et al., 2020) and datasets (Lhoest et al., 2021) libraries.9 They are licensed under the Apache License 2.0. Moreover, we used PyTorch for all experiments (Paszke et al., 2019), which is released under the BSD license10.

G.1 DATASETS

WMT14 WMT14 was introduced in the Ninth Workshop on Statistical Machine Translation for four tasks. We used the En-De and En-Fr news translation tasks. There is no license available for this dataset.

OPUS-100 OPUS-100 is a multilingual translation corpus for 100 languages, which is randomly sampled from the OPUS collection (Tiedemann, 2012). There is no license available for this dataset.

G.2 MODELS

Marian NMT Marian is a Neural Machine Translation framework, mainly developed by the Microsoft Translator team, which is released under the MIT License11. This model uses a beam size of 4.

mBART50 mBART50 is a multilingual machine translation model for 50 languages, which has been introduced by Facebook. This model is published in the Fairseq library, which is released under the MIT License12. This model uses a beam size of 5.

9These two libraries are available at this GitHub repository: https://github.com/huggingface.
10https://github.com/pytorch/pytorch/blob/master/LICENSE
11https://github.com/marian-nmt/marian/blob/master/LICENSE.md
12https://github.com/facebookresearch/fairseq/blob/main/LICENSE
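As an illustration of how such pre-trained models are loaded from the HuggingFace transformers library mentioned above, a minimal sketch follows; the checkpoint name is the standard public En-Fr Marian checkpoint and is given only as an example.

```python
# Illustrative sketch: loading a pre-trained Marian NMT model (En-Fr) from
# HuggingFace transformers and translating with a beam size of 4.
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-fr"  # example public checkpoint
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["The weather is nice today."], return_tensors="pt", padding=True)
outputs = model.generate(**batch, num_beams=4)  # beam size used in the paper
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```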
G.3 CODES

kNN In order to compare our method with kNN (Michel et al., 2019), we used the code provided by the authors, which is released under the BSD 3-Clause "New" or "Revised" License.13

Seq2Sick To compare our method with Seq2Sick (Cheng et al., 2020a), we used the code published by the authors.14 There is no license available for their code.

WSLS We implemented and evaluated WSLS (Zhang et al., 2021) using the source code published by the authors.15 There is no license available for this GitHub repository.

H HUMAN EVALUATION

We conduct a preliminary human evaluation campaign of the TransFool, kNN, and Seq2Sick attacks on Marian NMT (En-Fr) in the white-box setting. We randomly choose 90 sentences from the test set of the WMT14 (En-Fr) dataset, together with the adversarial samples and their translations by the NMT model. We split the 90 sentences into three different surveys to obtain a manageable size for each annotator. We recruited two annotators for each survey. For the English surveys, we ensure that the annotators are highly proficient English speakers. Similarly, for the French survey, we ensure that the annotators are highly proficient in French. Before starting the rating task, we provided the annotators with detailed guidelines similar to (Cer et al., 2017; Michel et al., 2019). The task is to rate the sentences for each criterion on a continuous scale (0-100), inspired by WMT18 practice (Ma et al., 2018) and Direct Assessment (Graham et al., 2013; 2017). For each sentence, we evaluate three aspects in three different surveys:

• Fluency: We show the three adversarial sentences and the original sentence on the same page (in random order). We ask the annotators how much they agree with the "The sentence is fluent." statement for each sentence.

• Semantic preservation: We show the original sentence on top and the three adversarial sentences afterwards (in random order). We ask the annotators how much they agree with the "The sentence is similar to the reference text." statement for each sentence.

• Translation quality: Inspired by monolingual direct assessment (Ma et al., 2018; Graham et al., 2013; 2017), we evaluate the translation quality by showing the reference translation on top and the translations of the three adversarial sentences afterwards (in random order). We ask the annotators how much they agree with the "The sentence is similar to the reference text." statement for each translation.

We calculate 95% confidence intervals by using 15K bootstrap replications. The results are depicted in Figure 4. These results demonstrate that the adversarial examples generated by TransFool are more semantics-preserving and fluent than those of both baselines. According to the guide provided to the annotators for semantic similarity, the score of 67.8 shows that the two sentences are roughly equivalent, but some details may differ. Moreover, a fluency of 66.4 demonstrates that although the adversarial examples generated by TransFool are more fluent than those of the baselines, there is still room to improve the performance in this regard.

We follow the direct assessment strategy to measure the effectiveness of the adversarial attacks on translation quality. According to (Ma et al., 2018), since a sufficient level of agreement on translation quality is difficult to achieve with human evaluation, direct assessment simplifies the task to a monolingual assessment instead of a bilingual one. The similarity of the translations of the adversarial sentences to the reference translation is shown in Figure 4c. The similarity of Seq2Sick is worse than that of the other attacks. However, its similarity in the source language is also worse.
Therefore, we compute the decrease of similarity (between the original and adversarial sentences) from the source language to the target language. The results in Figure 4d show that all attacks affect the translation quality, and the effect of TransFool is more pronounced than that of both baselines.

Finally, we calculate the Inter-Annotator Agreement (IAA). There are two human judgments for each sentence. We average both scores to compute the final score for each sentence. To ensure that the two annotators agree, we only consider sentences whose two corresponding scores differ by less than 30. We compute IAA in terms of the Pearson correlation coefficient instead of the commonly used Cohen's Kappa since the scores are on a continuous scale. The results are presented in Table 24. Overall, we conclude that we achieve a reasonable inter-annotator agreement for all sentence types and evaluation metrics.

13The source code is available at https://github.com/pmichel31415/translate/tree/paul/pytorch_translate/research/adversarial/experiments and the license is available at https://github.com/pmichel31415/translate/blob/paul/LICENSE
14The source code is available at https://github.com/cmhcbb/Seq2Sick.
15https://github.com/JHL-HUST/AdvNMT-WSLS/tree/79945881f75d92ae44e9ebc10500d8590c09bb13
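For reference, the two statistics used in this human evaluation (bootstrap confidence intervals of a mean score and Pearson-correlation IAA) can be computed as in the sketch below; the score arrays are placeholders, not our annotation data.

```python
# Illustrative sketch: 95% bootstrap confidence interval of a mean score
# (15K replications) and Pearson correlation between two annotators.
import numpy as np
from scipy.stats import pearsonr

def bootstrap_ci(scores, n_boot=15_000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    means = np.array([rng.choice(scores, size=scores.size, replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return scores.mean(), (lo, hi)

annotator_a = np.array([70.0, 65.0, 80.0, 55.0])  # placeholder ratings
annotator_b = np.array([68.0, 60.0, 85.0, 50.0])
r, p_value = pearsonr(annotator_a, annotator_b)   # inter-annotator agreement
```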
1. What are the strengths and weaknesses of the proposed TransFool method? 2. How does the reviewer assess the novelty and soundness of the paper's content? 3. What are the limitations regarding the transferability analysis? 4. Do you have any concerns about the evaluation metrics used in the paper? 5. Can the results be improved by incorporating more advanced techniques from related works?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a white-box adversarial attack against neural machine translation models named TransFool. TransFool uses an optimization loss function with three terms: a) an adversarial loss to maximize the loss of the target NMT model; b) a similarity term to ensure that the adversarial example is similar to the original sentence; c) and the loss of a language model to generate fluent and natural adversarial examples. The authors compare their TransFool method with two NMT attack methods, kNN and Seq2Sick, with respect to the attack success rate, relative decrease of translation quality, semantic similarity, perplexity score, and token error rate. The proposed method outperforms the baseline methods on two NMT models, Marian NMT and the mBART50 MNMT model. The authors also show that the adversarial examples they found can conduct transfer attacks on black-box NMT models, using the adversarial examples from Marian NMT to attack the mBART50 MNMT model and the Google Translate API.

Strengths And Weaknesses
Pros: This paper is well written and easy to follow. The authors validate their method not only on research models but also on a real-world commercial product, Google Translate. The transferability analysis section is enlightening. It is not widely studied in the NLP field. The transferability between different languages is an important issue in the multilingual NLP field.
Cons: Novelty is limited. a) First, the authors consider not only the attack success rate but also fluency and semantic similarity. However, these two issues are widely studied in the NLP attack field. For example, [1] systematically revealed the fluency and semantic similarity issues in textual adversarial attacks and empirically studied thresholds for filtering adversarial examples according to fluency and semantic similarity. b) Second, this paper claims to propose a new strategy to incorporate the embedding vectors of a language model. However, utilizing a language model to attack NLP models is not a novel technique. For example, [2-3] utilize BERT to conduct word-level substitution. Lack of soundness. This is not only a weakness of this paper but also a weakness of a large portion of textual adversarial attack works. a) The soundness of their method is weak. This paper uses a language model because it provides a meaningful representation of the tokens. However, based on the findings of previous work [1], semantic similarity cannot be guaranteed with a language model. A good example is that "I [like] eating apples." and "I [hate] eating apples." have very high representation similarity according to the language model representation, but their semantics are totally opposite. So I am worried that the adversarial attack will change the semantic meaning of the source sentence, leading to over-estimation of the attack success rate. b) The soundness of their evaluation is weak. The authors evaluate the semantic similarity with the Universal Sentence Encoder and BERTScore. However, as I mentioned above, the soundness of such model-based automatic metrics is limited. The high attack success rate could be partially due to the change of meaning in the source sentence. Conducting a human annotation experiment would be better than just showing an example.
[1] John Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, Yanjun Qi, Reevaluating Adversarial Examples in Natural Language, EMNLP 2020.
[2] Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, Xipeng Qiu, BERT-ATTACK: Adversarial Attack Against BERT Using BERT, EMNLP 2020.
[3] Siddhant Garg, Goutham Ramakrishnan, BAE: BERT-based Adversarial Examples for Text Classification, EMNLP 2020.

Clarity, Quality, Novelty And Reproducibility
Clarity: This paper is well-written and easy to follow.
Quality: The soundness of their method and evaluation is limited. Please refer to the weakness part.
Novelty: This paper is not very novel. Please refer to the weakness part.
Reproducibility: Reproducibility will not be an issue. The authors claim that their code will be released, and they provide links to the metrics and scripts used in their paper.
ICLR
Title UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph

* Equal contribution. ✉ Corresponding author.

Abstract Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question on a large-scale Knowledge Graph (KG). To cope with the vast search space, existing work usually adopts a two-stage approach: it first retrieves a relatively small subgraph related to the question and then performs the reasoning on the subgraph to find the answer entities accurately. Although these two stages are highly related, previous work employs very different technical solutions for developing the retrieval and reasoning models, neglecting their relatedness in task essence. In this paper, we propose UniKGQA, a novel approach for the multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning. For model architecture, UniKGQA consists of a semantic matching module based on a pre-trained language model (PLM) for question-relation semantic matching, and a matching information propagation module to propagate the matching information along the directed edges on KGs. For parameter learning, we design a shared pre-training task based on question-relation matching for both retrieval and reasoning models, and then propose retrieval- and reasoning-oriented fine-tuning strategies. Compared with previous studies, our approach is more unified, tightly relating the retrieval and reasoning stages. Extensive experiments on three benchmark datasets have demonstrated the effectiveness of our method on the multi-hop KGQA task. Our codes and data are publicly available at https://github.com/RUCAIBox/UniKGQA.

1 INTRODUCTION

With the availability of large-scale knowledge graphs (KGs), such as Freebase (Bollacker et al., 2008) and Wikidata (Tanon et al., 2016), knowledge graph question answering (KGQA) has become an important research topic that aims to find the answer entities of natural language questions from KGs. Recent studies (Lan et al., 2021) mainly focus on multi-hop KGQA, a more complex scenario where sophisticated multi-hop reasoning over edges (or relations) is required to infer the correct answer on the KG. We show an example in Figure 1(a). Given the question "Who is the wife of the nominee for The Jeff Probst Show", the task goal is to find a reasoning path from the topic entity "The Jeff Probst Show" to the answer entities "Shelley Wright" and "Lisa Ann Russell".

[Figure 1: Illustrative examples and learning procedure of our work. (a) an example of multi-hop KGQA; (b) an example of abstract subgraph; (c) the overall learning procedure.]

Faced with the vast search space in large-scale KGs, previous work (Sun et al., 2018; 2019) typically adopts a retrieval-then-reasoning approach to achieve a good trade-off. Generally, the retrieval stage aims to extract relevant triples from the large-scale KG to compose a relatively smaller question-relevant subgraph, while the reasoning stage focuses on accurately finding the answer entities from the retrieved subgraph. Although the purposes of the two stages are different, both stages
need to evaluate the semantic relevance of a candidate entity with respect to the question (for removal or reranking), which can be considered as a semantic matching problem in essence. For measuring the entity relevance, relation-based features, either direct relations (Miller et al., 2016) or composite relation paths (Sun et al., 2018), have been shown to be particularly useful for building the semantic matching models. As shown in Figure 1(a), given the question, it is key to identify the semantically matched relations and the composed relation path in the KG (e.g., "nominee → spouse") for finding the correct answer entities. Since the two stages cope with different scales of search space on KGs (e.g., millions vs. thousands), they usually adopt specific technical solutions: the former prefers more efficient methods focusing on the recall performance (Sun et al., 2018), while the latter prefers more capable methods for modeling fine-grained matching signals (He et al., 2021). Considering the same essence of both stages, this work aims to push forward the research on multi-hop KGQA by investigating the following problem: can we design a unified model architecture for both stages to derive better performance?

A major merit of developing a unified model architecture for multi-hop KGQA is that we can tightly relate the two stages and enhance the sharing of relevance information. Although the two stages are highly related, previous studies usually treat them separately in model learning: only the retrieved triples are passed from the retrieval stage to the reasoning stage, while the rest of the useful signal for semantic matching has been neglected in the pipeline framework. Such an approach is likely to lead to sub-optimal or inferior performance, since multi-hop KGQA is a very challenging task, requiring elaborate solutions that sufficiently leverage various kinds of relevance information from the two stages. However, there are two major issues in developing a unified model architecture for multi-hop KGQA: (1) How to cope with very different scales of search space for the two stages? (2) How to effectively share or transfer useful relevance information across the two stages? For the first issue, instead of letting the same model architecture directly fit very different data distributions, we propose a new subgraph form to reduce the node scale at the retrieval stage, namely the abstract subgraph, which is composed by merging nodes that share the same relations in the KG (see Figure 1(b)). For the second issue, based on the same model architecture, we design an effective learning approach for the two stages, so that we can share the same pre-trained parameters and use the learned retrieval model to initialize the reasoning model (see Figure 1(c)). To this end, in this paper, we propose UniKGQA, a unified model for the multi-hop KGQA task.
Specifically, UniKGQA consists of a semantic matching module based on a PLM for question-relation semantic matching, and a matching information propagation module to propagate the matching information along the directed edges on KGs. In order to learn these parameters, we design both pre-training (i.e., question-relation matching) and fine-tuning (i.e., retrieval- and reasoning-oriented learning) strategies based on the unified architecture. Compared with previous work on multi-hop KGQA, our approach is more unified and simplified, tightly relating the retrieval and reasoning stages. To our knowledge, it is the first work that unifies retrieval and reasoning in both model architecture and learning for the multi-hop KGQA task. To evaluate our approach, we conduct extensive experiments on three benchmark datasets. On the difficult datasets, WebQSP and CWQ, we outperform existing state-of-the-art baselines by a large margin (e.g., 8.1% improvement of Hits@1 on WebQSP and 2.0% improvement of Hits@1 on CWQ).

2 PRELIMINARY

In this section, we introduce the notations that will be used throughout the paper and then formally define the multi-hop KGQA task.

Knowledge Graph (KG). A knowledge graph typically consists of a set of triples, denoted by G = {⟨e, r, e′⟩ | e, e′ ∈ E, r ∈ R}, where E and R denote the entity set and relation set, respectively. A triple ⟨e, r, e′⟩ describes the fact that a relation r exists between head entity e and tail entity e′. Furthermore, we denote the set of neighborhood triples that an entity e belongs to by N_e = {⟨e, r, e′⟩ ∈ G} ∪ {⟨e′, r, e⟩ ∈ G}. Let r^{-1} denote the inverse relation of r; then we can represent a triple ⟨e, r, e′⟩ by its inverse triple ⟨e′, r^{-1}, e⟩. In this way, we can simplify the definition of the neighborhood triples of an entity e as N_e = {⟨e′, r, e⟩ ∈ G}. We further use E ∈ R^{d×|E|} and R ∈ R^{d×|R|} to denote the embedding matrices for entities and relations in the KG, respectively.

Multi-hop Knowledge Graph Question Answering (Multi-hop KGQA). Given a natural language question q and a KG G, the task of KGQA aims to find the answer entity (or entities) to the question over the KG, denoted by the answer set A_q ⊆ E. Following previous work (Sun et al., 2018; 2019), we assume that the entities mentioned in the question (e.g., "The Jeff Probst Show" in Figure 1(a)) are marked and linked with entities on the KG, namely topic entities, denoted as T_q ⊂ E. In this work, we focus on solving the multi-hop KGQA task where the answer entities are multiple hops away from the topic entities over the KG. Considering the trade-off between efficiency and accuracy, we follow existing work (Sun et al., 2018; 2019) that solves this task using a retrieval-then-reasoning framework. In the two-stage framework, given a question q and topic entities T_q, the retrieval model aims to retrieve a small subgraph G_q from the large-scale input KG G, while the reasoning model searches for the answer entities A_q by reasoning over the retrieved subgraph G_q.

Abstract Subgraph. Based on KGs, we further introduce the concept of the abstract subgraph, which is derived by reduction from an original subgraph. Specifically, given a subgraph related to question q, denoted as G_q ⊂ G, we merge the tail entities of the triples with the same prefix (i.e., the same head entity and relation: ⟨e, r, ?⟩), and then generate a corresponding abstract node ẽ to represent the set of tail entities, so we have ẽ = {e′ | ⟨e, r, e′⟩ ∈ G_q}. Similarly, we can also perform the same operation on the head entities. To unify the notations, we transform an original node that cannot be merged into an abstract node by creating a set containing only the node itself. In this way, the corresponding abstract subgraph G̃_q can be denoted as G̃_q = {⟨ẽ, r, ẽ′⟩ | ∃e ∈ ẽ, ∃e′ ∈ ẽ′, ⟨e, r, e′⟩ ∈ G_q}, where each node ẽ is an abstract node representing a set of original nodes (one or multiple). We present illustrative examples of the original subgraph and its abstract subgraph in Figure 1(a) and Figure 1(b). A minimal sketch of this merging operation is shown below.
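The following is a simplified, illustrative sketch of abstract-subgraph construction over triples represented as (head, relation, tail) tuples; it shows only the tail-side merge, the head-side merge being symmetric, and the toy triples are placeholders.

```python
# Simplified sketch: build abstract nodes by merging tail entities that share
# the same (head, relation) prefix; the symmetric head-side merge is analogous.
from collections import defaultdict

def abstract_subgraph(triples):
    groups = defaultdict(set)
    for head, rel, tail in triples:
        groups[(head, rel)].add(tail)
    # each abstract node is the (frozen) set of merged tail entities
    return {(head, rel, frozenset(tails)) for (head, rel), tails in groups.items()}

triples = [("ShowX", "nominee", "PersonA"), ("ShowX", "nominee", "PersonB"),
           ("ShowX", "is_a", "TalkShow")]
print(abstract_subgraph(triples))  # the two nominees collapse into one node
```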
3 APPROACH

In this section, we present our proposed UniKGQA, which unifies the retrieval and reasoning for multi-hop KGQA. The major novelty is that we introduce a unified model architecture for both stages (Section 3.1) and design an effective learning approach involving both specific pre-training and fine-tuning strategies (Section 3.2). Next, we describe the two parts in detail.

3.1 UNIFIED MODEL ARCHITECTURE

We consider a general input form for both retrieval and reasoning, and develop the base architecture by integrating two major modules: (1) the semantic matching (SM) module that employs a PLM to perform the semantic matching between questions and relations; (2) the matching information propagation (MIP) module that propagates the matching information on KGs. We present the overview of the model architecture in Figure 2. Next, we describe the three parts in detail.

General Input Formulation. In order to support both the retrieval and reasoning stages, we consider a general form for evaluating entity relevance, where a question q and a subgraph G_q of candidate entities are given. For the retrieval stage, G_q is an abstract subgraph that incorporates abstract nodes to merge entities sharing the same relation. For the reasoning stage, G_q is constructed based on the retrieved subgraph from the retrieval stage, without abstract nodes. Such a general input formulation enables the development of the unified model architecture for the two different stages. In what follows, we will describe the approach in a general way, without considering specific stages.

Semantic Matching (SM). The SM module aims to produce the semantic matching features between the question q and a triple ⟨e′, r, e⟩ from the given subgraph G_q. Considering the excellent modeling capacity of the PLM, we leverage the PLM to produce text encodings as the representations of question q and relation r. Specifically, we first utilize the PLM to encode the texts of q and r, and employ the output representation of the [CLS] token as their representations:

h_q = PLM(q),  h_r = PLM(r).  (1)

Based on h_q and h_r, inspired by the NSM model (He et al., 2021), we obtain the vector capturing the semantic matching features m^{(t)}_{⟨e′,r,e⟩} between question q and triple ⟨e′, r, e⟩ at the t-th step by adopting corresponding projection layers:

m^{(t)}_{⟨e′,r,e⟩} = σ(h_q W^{(t)}_Q ⊙ h_r W^{(t)}_R),  (2)

where m^{(t)}_{⟨e′,r,e⟩} ∈ R^d, W^{(t)}_Q, W^{(t)}_R ∈ R^{h×d} are the parameters of the t-th-step projection layers, h and d are the hidden dimensions of the PLM and the feature vector, respectively, σ is the sigmoid activation function, and ⊙ is the Hadamard product.
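A minimal PyTorch sketch of the SM module in Eq. (2) follows; the dimensions and the use of bias-free linear layers for W_Q^{(t)} and W_R^{(t)} are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of the semantic matching (SM) module, Eq. (2):
# m^{(t)} = sigmoid(h_q W_Q^{(t)}  (Hadamard)  h_r W_R^{(t)}).
import torch
import torch.nn as nn

class SemanticMatching(nn.Module):
    def __init__(self, plm_dim=768, feat_dim=768, num_steps=3):
        super().__init__()
        # one projection pair (W_Q^{(t)}, W_R^{(t)}) per propagation step t
        self.W_Q = nn.ModuleList([nn.Linear(plm_dim, feat_dim, bias=False)
                                  for _ in range(num_steps)])
        self.W_R = nn.ModuleList([nn.Linear(plm_dim, feat_dim, bias=False)
                                  for _ in range(num_steps)])

    def forward(self, h_q, h_r, t):
        # h_q: [1, plm_dim] question [CLS]; h_r: [m, plm_dim] relation [CLS]
        return torch.sigmoid(self.W_Q[t](h_q) * self.W_R[t](h_r))  # [m, feat_dim]
```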
Matching Information Propagation (MIP). Based on the generated semantic matching features, the MIP module first aggregates them to update the entity representations and then utilizes these to obtain the entity match scores. To initialize the match scores, given a question q and a subgraph G_q, for each entity e_i ∈ G_q, we set the match score between q and e_i as s^{(1)}_{e_i} = 1 if e_i is a topic entity and s^{(1)}_{e_i} = 0 otherwise. At the t-th step, we utilize the match scores of the head entities computed at the previous step, s^{(t−1)}_{e′}, as weights and aggregate the matching features from the neighboring triples to obtain the representation of the tail entity:

e^{(t)} = W^{(t)}_E [e^{(t−1)}; Σ_{⟨e′,r,e⟩ ∈ N_e} s^{(t−1)}_{e′} · m^{(t)}_{⟨e′,r,e⟩}],  (3)

where e^{(t)} ∈ R^d is the representation of the entity e at the t-th step, and W^{(t)}_E ∈ R^{2d×d} is a learnable matrix. At the first step, since there are no match scores, following the NSM model (He et al., 2021), we directly aggregate the representations of an entity's one-hop relations as its representation: e^{(1)} = σ(Σ_{⟨e′,r,e⟩ ∈ N_e} r · U), where U ∈ R^{2d×d} is a learnable matrix. Based on the representations of all entities E^{(t)} ∈ R^{d×n}, we update their entity match scores using the softmax function:

s^{(t)} = softmax(E^{(t)⊤} v),  (4)

where v ∈ R^d is a learnable vector. After T-step iterations, we obtain the final entity match scores s^{(T)}, which form a probability distribution over all entities in the subgraph G_q. These match scores can be leveraged to measure the possibility of each entity being an answer to the given question q, and will be used in both the retrieval and reasoning stages.
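To make one propagation step concrete, here is a hedged PyTorch sketch of Eqs. (3)-(4) over a small graph encoded as head/tail index lists; the tensor layout is an assumption for illustration.

```python
# Illustrative sketch of one matching-information-propagation step.
import torch

def mip_step(ent_repr, scores, heads, tails, match_feats, W_E, v):
    # ent_repr: [n, d] entity representations e^{(t-1)}
    # scores:   [n]    previous-step match scores s^{(t-1)}
    # heads/tails: [m] head/tail entity indices of the m triples
    # match_feats: [m, d] semantic matching features m^{(t)} from the SM module
    # W_E: [2d, d] learnable matrix; v: [d] learnable scoring vector
    msg = scores[heads].unsqueeze(1) * match_feats               # s_{e'} * m
    agg = torch.zeros_like(ent_repr).index_add_(0, tails, msg)   # sum over N_e
    new_repr = torch.cat([ent_repr, agg], dim=1) @ W_E           # Eq. (3)
    new_scores = torch.softmax(new_repr @ v, dim=0)              # Eq. (4)
    return new_repr, new_scores
```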
3.2 MODEL TRAINING

In our approach, we have both the retrieval model and the reasoning model for the two stages of multi-hop KGQA. Since the two models adopt the same architecture, we introduce Θ and Γ to denote the model parameters used for the retrieval and reasoning stages, respectively. As shown in Section 3.1, our architecture contains two groups of parameters, namely the underlying PLM and the other parameters for matching and propagation. Thus, Θ and Γ can be decomposed as Θ = {Θ_p, Θ_o} and Γ = {Γ_p, Γ_o}, where the subscripts p and o denote the PLM parameters and the other parameters in our architecture, respectively. In order to learn these parameters, we design both pre-training (i.e., question-relation matching) and fine-tuning (i.e., retrieval- and reasoning-oriented learning) strategies based on the unified architecture. Next, we describe the model training approach.

Pre-training with Question-Relation Matching (QRM). For pre-training, we mainly focus on learning the parameters of the underlying PLMs (i.e., Θ_p and Γ_p). In the implementation, we let the two models share the same copy of PLM parameters, i.e., Θ_p = Γ_p. As shown in Section 3.1, the basic capacity of the semantic matching module is to model the relevance between a question and a single relation (Eq. 2), which is based on the text encoding from the underlying PLM. Therefore, we design a contrastive pre-training task based on question-relation matching. Specifically, we adopt the contrastive learning objective (Hadsell et al., 2006) to align the representations of relevant question-relation pairs while pushing apart others. To collect the relevant question-relation pairs, given an example consisting of a question q, the topic entities T_q, and the answer entities A_q, we extract all the shortest paths between T_q and A_q from the entire KG and regard all of the relations within these paths as relevant to q, denoted as R+. In this way, we can obtain a number of weakly-supervised examples. During pre-training, for each question q, we randomly sample a relevant relation r+ ∈ R+ and utilize a contrastive learning loss for pre-training:

L_PT = − log ( e^{sim(q_i, r_i^+)/τ} / Σ_{j=1}^{M} ( e^{sim(q_i, r_j^+)/τ} + e^{sim(q_i, r_j^−)/τ} ) ),  (5)

where τ is a temperature hyperparameter, r_j^− are randomly sampled negative relations, and sim(q, r) is the cosine similarity between the question and relation representations encoded by the PLM in the SM module (Eq. 1). In this way, the question-relation matching capacity is enhanced by pre-training the PLM parameters. Note that the PLM parameters are fixed after pre-training.

Fine-tuning for Retrieval on Abstract Subgraphs (RAS). After pre-training, we first fine-tune the entire model to learn the parameters Θ_o according to the retrieval task. Recall that we transform the subgraphs into a form of abstract subgraphs, where abstract nodes are incorporated to merge entities sharing the same relation. Our MIP module (Section 3.1) can produce the matching scores s_A of the nodes in a subgraph (Eq. 4), where the subscript A denotes that the nodes are from an abstract subgraph. Furthermore, we utilize the labeled answers to obtain the ground-truth vectors, denoted by s*_A; we set an abstract node in s*_A to 1 if it contains an answer entity. Then we minimize the KL divergence between the learned and ground-truth matching score vectors:

L_RAS = D_KL(s_A, s*_A).  (6)

After fine-tuning with the RAS loss, the retrieval model can be effectively learned. We further utilize it to retrieve the subgraph for a given question q by selecting the top-K ranked nodes according to their match scores. Note that only nodes within a reasonable distance of the topic entities are selected into the subgraph, which ensures a relatively small yet relevant subgraph G_q for the subsequent reasoning stage to find the answer entities.

Fine-tuning for Reasoning on Retrieved Subgraphs (RRS). After fine-tuning the retrieval model, we continue to fine-tune the reasoning model by learning the parameters Γ_o. With the fine-tuned retrieval model, we can obtain a smaller subgraph G_q for each question q. In the reasoning stage, we focus on performing accurate reasoning to find the answer entities, so we recover the original nodes in the abstract nodes and the original relations among them. Since the retrieval and reasoning stages are highly dependent, we first initialize the parameters of the reasoning model with those of the retrieval model: Θ_o → Γ_o. Then, following Eq. 4, we employ a similar approach to fit the learned matching scores (denoted by s_R) to the ground-truth vectors (denoted by s*_R) using the KL loss:

L_RRS = D_KL(s_R, s*_R),  (7)

where the subscript R denotes that the nodes come from a retrieved subgraph. After fine-tuning with the RRS loss, we can utilize the learned reasoning model to select the top-n ranked entities as the answer list according to the match scores. As shown in Figure 1(c), the overall training procedure is composed of: (1) pre-training Θ_p with question-relation matching, (2) fixing Θ_p and fine-tuning Θ_o for retrieval on abstract subgraphs, and (3) fixing Γ_p initialized by Θ_p and fine-tuning Γ_o initialized by Θ_o for reasoning on subgraphs.
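A hedged PyTorch sketch of the question-relation matching objective in Eq. (5) is given below, using in-batch positives and negatives; the batch construction is an assumption made for illustration.

```python
# Illustrative sketch of the contrastive pre-training loss, Eq. (5):
# the denominator sums exp-similarities over positive and negative relations.
import torch
import torch.nn.functional as F

def qrm_contrastive_loss(q_emb, pos_emb, neg_emb, tau=0.05):
    # q_emb / pos_emb / neg_emb: [B, d] PLM [CLS] embeddings of questions,
    # relevant relations, and sampled negative relations, respectively.
    q = F.normalize(q_emb, dim=-1)       # cosine similarity via unit vectors
    pos = F.normalize(pos_emb, dim=-1)
    neg = F.normalize(neg_emb, dim=-1)
    logits = torch.cat([q @ pos.t(), q @ neg.t()], dim=1) / tau  # [B, 2B]
    labels = torch.arange(q.size(0), device=q.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)
```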
Our work provides a novel unified model for the retrieval and reasoning stages that shares the reasoning capacity between them. In Table 1, we summarize the differences between our method and several popular methods for multi-hop KGQA, including GraftNet (Sun et al., 2018), PullNet (Sun et al., 2019), NSM (He et al., 2021), and SR+NSM (Zhang et al., 2022).

Table 1: Comparison of different methods.
Methods  | Retrieval | Reasoning | Parameter Transferring
GraftNet | PPR       | GraftNet  | ✗
PullNet  | LSTM      | GraftNet  | ✗
NSM      | PPR       | NSM       | ✗
SR+NSM   | PLM       | NSM       | ✗
UniKGQA  | UniKGQA   | UniKGQA   | ✓

As we can see, existing methods usually adopt different models for the retrieval and reasoning stages, while our approach is more unified. As a major benefit, the information between the two stages can be effectively shared and reused: we initialize the reasoning model with the learned retrieval model.

4 EXPERIMENT

4.1 EXPERIMENTAL SETTING

Datasets. Following existing work on multi-hop KGQA (Sun et al., 2018; 2019; He et al., 2021; Zhang et al., 2022), we adopt three benchmark datasets, namely MetaQA (Zhang et al., 2018), WebQuestionsSP (WebQSP) (Zhang et al., 2018; Yih et al., 2015), and Complex WebQuestions 1.1 (CWQ) (Talmor & Berant, 2018), to evaluate our model. Table 2 shows the statistics of the three datasets. Since previous work has achieved nearly full marks on MetaQA, WebQSP and CWQ are our primary evaluation datasets. We present a detailed description of these datasets in Appendix A.

Evaluation Protocol. For the retrieval performance, we follow Zhang et al. (2022) and evaluate the models by the answer coverage rate (%), i.e., the proportion of questions whose retrieved subgraphs contain at least one answer. For the reasoning performance, we follow Sun et al. (2018; 2019) and regard the reasoning as a ranking task for evaluation. Given each test question, we rely on the predictive probabilities from the evaluated model to rank all candidate entities and then evaluate whether the top-1 answer is correct with Hits@1. Since a question may correspond to multiple answers, we also adopt the widely-used F1 metric.

Baselines. We consider the following baselines for performance comparison: (1) reasoning-focused methods: KV-Mem (Miller et al., 2016), GraftNet (Sun et al., 2018), EmbedKGQA (Saxena et al., 2020), NSM (He et al., 2021), and TransferNet (Shi et al., 2021); (2) retrieval-augmented methods: PullNet (Sun et al., 2019), SR+NSM (Zhang et al., 2022), and SR+NSM+E2E (Zhang et al., 2022). We present a detailed description of these baselines in Appendix B.

4.2 EVALUATION RESULTS

Table 3 shows the results of different methods on the five multi-hop KGQA datasets. It can be seen that: First, most baselines perform very well on the three MetaQA datasets (nearly 100% Hits@1). This is because these datasets are based on a few hand-crafted question templates and have only nine relation types in the given KG. Thus, the model can easily capture the relevant semantics between the questions and relations to perform reasoning. To further examine this, we conduct an extra one-shot experiment on the MetaQA datasets and present the details in Appendix E. Second, TransferNet performs better than GraftNet, EmbedKGQA, and NSM with the same retrieval method. It attends to question words to compute the scores of relations and transfers entity scores along the relations. Such a design can effectively capture the question-path matching semantics. Besides, SR+NSM and SR+NSM+E2E outperform NSM and PullNet by a large margin. The reason is that they both leverage a PLM-based relation-path retriever to improve the retrieval performance and thus reduce the difficulty of the later reasoning stage.
Finally, on WebQSP and CWQ, our proposed UniKGQA is substantially better than all the other competitive baselines. Unlike the baselines that rely on independent models to perform retrieval and reasoning, our approach utilizes a unified architecture to accomplish both. Such a unified architecture can pre-learn the essential capability of question-relation semantic matching for both stages, and is also capable of effectively transferring relevance information from the retrieval stage to the reasoning stage, i.e., initializing the reasoning model with the parameters of the retrieval model. In our approach, we fix the parameters of the PLM-based encoder for efficiency. Actually, updating its parameters can further improve our performance. Such a choice enables researchers to trade off efficiency and effectiveness when employing our approach in real-world applications. Here, we study this by proposing two variants of UniKGQA: (1) w QU, which updates the parameters of the PLM encoder only when encoding questions; (2) w QU, RU, which updates the parameters of the PLM encoder both when encoding questions and when encoding relations. Indeed, both variants can boost the performance of UniKGQA, and only updating the PLM encoder when encoding questions obtains comparable or even better performance than updating both. A possible reason is that updating the PLM encoder both when encoding questions and relations may lead to overfitting on the downstream tasks. Therefore, it is promising for UniKGQA to only update the PLM encoder when encoding questions, as it can achieve better performance with relatively little additional computation cost.

4.3 FURTHER ANALYSIS

Retrieval Evaluation. We evaluate the effectiveness of UniKGQA in retrieving a smaller subgraph with a higher answer coverage rate for a given question. Following the evaluation principles of SR (Zhang et al., 2022), we measure this capacity from three aspects: the subgraph size, the answer coverage rate, and the final QA performance. Concretely, we first compare UniKGQA with SR (Zhang et al., 2022) and the PPR-based heuristic retrieval method (Sun et al., 2018) based on the answer coverage rate curve w.r.t. the number of graph nodes. Then, we compare UniKGQA with SR+NSM (Zhang et al., 2022) and PPR+NSM (He et al., 2021) based on their final QA performance. To further study the effectiveness of our approach, we add an extra variant of UniKGQA, namely UniKGQA+NSM, which relies on UniKGQA for retrieval and NSM for reasoning. The left and middle of Figure 3 show the comparison results of the above methods. As we can see, under the same size of retrieved subgraphs, UniKGQA and SR have significantly higher answer coverage rates than PPR. This demonstrates the effectiveness and necessity of training a learnable retrieval model. Besides, although the curves of UniKGQA and SR are very similar, UniKGQA achieves a better final reasoning performance than SR+NSM. The reason is that UniKGQA can transfer the relevance information from the retrieval stage to the reasoning stage based on the unified architecture, learning a more effective reasoning model. This finding can be further verified by comparing UniKGQA with UniKGQA+NSM.

Ablation Study. Our UniKGQA contains two important training strategies to improve performance: (1) pre-training with question-relation matching, and (2) initializing the parameters of the reasoning model with those of the retrieval model. Here, we conduct an ablation study to verify their effectiveness.
We propose three variants: (1) w/o Pre, removing the pre-training procedure; (2) w/o Trans, removing the initialization with the parameters of the retrieval model; and (3) w/o Pre, Trans, removing both the pre-training and initialization procedures. We show the results of the ablation study in Table 4. We can see that all these variants underperform the complete UniKGQA, which indicates that the two training strategies are both important for the final performance. Besides, this observation also verifies that UniKGQA is indeed capable of transferring and reusing the learned knowledge to improve the final performance.

Fine-tuning Efficiency. As our UniKGQA model can transfer the learned knowledge from the pre-training stage and the retrieval task, it can be easily adapted to downstream reasoning tasks. In this way, we can perform more efficient fine-tuning on the reasoning task with only a few fine-tuning steps. To explore this, we compare the performance changes of UniKGQA with those of a strong baseline, NSM, w.r.t. the number of fine-tuning epochs based on the same retrieved subgraphs. The results are presented on the right of Figure 3. First, we can see that before fine-tuning (i.e., when the epoch is zero), our UniKGQA has already achieved a performance comparable to the best result of NSM at the last epoch. It indicates that the reasoning model has successfully leveraged the knowledge from prior tasks through the parameters initialized by the retrieval model. After fine-tuning for two epochs, our UniKGQA already achieves good performance. This verifies that our model can be fine-tuned efficiently with very few epochs. To further investigate the UniKGQA model, we conduct a parameter sensitivity analysis w.r.t. the number of pre-training steps, the hidden dimensions, and the number of retrieved nodes K, shown in Appendix H.

5 RELATED WORK

Multi-hop Knowledge Graph Question Answering. Multi-hop KGQA aims to seek answer entities that are multiple hops away from the topic entities in a large-scale KG. Considering efficiency and accuracy, existing work (Sun et al., 2018; 2019; Zhang et al., 2022) typically first retrieves a question-relevant subgraph to reduce the search space and then performs multi-hop reasoning on it. Such a retrieval-and-reasoning paradigm has shown superiority over directly reasoning on the entire KG (Chen et al., 2019; Saxena et al., 2020). The retrieval stage focuses on extracting a relatively small subgraph involving the answer entities. A commonly-used approach is to collect entities within a few hops around the topic entities to compose the subgraph and filter out the ones with low Personalized PageRank scores to reduce the graph size (Sun et al., 2018; He et al., 2021). Despite its simplicity, such an approach neglects the question semantics, limiting the retrieval efficiency and accuracy. To address this, several works (Sun et al., 2019; Zhang et al., 2022) devise retrievers based on semantic matching using neural networks (e.g., LSTMs or PLMs). Starting from the topic entities, these retrievers iteratively measure the semantic relevance between the question and the neighbouring entities or relations, and add proper ones into the subgraph. In this way, a smaller but more question-relevant subgraph can be constructed. The reasoning stage aims to accurately find the answer entities of the given question by walking along the relations starting from the topic entities.
Early work (Miller et al., 2016; Sun et al., 2018; 2019; Jiang et al., 2022) relies on special network architectures (e.g., Key-Value Memory Networks or Graph Convolutional Networks) to model the multi-hop reasoning process. Recent work further enhances the reasoning capacity of the above networks from the perspectives of intermediate supervision signals (He et al., 2021), knowledge transferring (Shi et al., 2021), etc. However, all these methods design different model architectures and training methods for the retrieval and reasoning stages, respectively, neglecting the similarity and intrinsic connection of the two stages. Recently, some work parses the question into a structured query language (e.g., SPARQL) (Lan et al., 2021; Das et al., 2021; Huang et al., 2021) and executes it with a query engine to get the answers. In this line, the encoder-decoder architecture (i.e., T5 (Raffel et al., 2020)) is generally adopted to produce the structured queries, where annotated structured queries are also required for training.

Dense Retrieval. Given a query, the dense retrieval task aims to select relevant documents from a large-scale document pool. Different from traditional sparse term-based retrieval methods, e.g., TF-IDF (Chen et al., 2017) and BM25 (Robertson & Zaragoza, 2009), dense retrieval methods (Karpukhin et al., 2020; Zhou et al., 2022a;b) rely on a bi-encoder architecture to map queries and documents into low-dimensional dense vectors. Their relevance scores can then be measured using vector distance metrics (e.g., cosine similarity), which supports efficient approximate nearest neighbour (ANN) search algorithms. In multi-hop KGQA, starting from the topic entities, we need to select the relevant neighboring triples from a large-scale KG to induce a path reaching the answer entities, which can be seen as a constrained dense retrieval task. Therefore, in this work, we also incorporate a bi-encoder architecture to map questions and relations into dense vectors, and then perform retrieval or reasoning based on their vector distances.

6 CONCLUSION

In this work, we proposed a novel approach for the multi-hop KGQA task. As the major technical contribution, UniKGQA introduced a unified model architecture based on PLMs for both the retrieval and reasoning stages, consisting of the semantic matching module and the matching information propagation module. To cope with the different scales of the search space in the two stages, we proposed to generate abstract subgraphs for the retrieval stage, which can significantly reduce the number of nodes to be searched. Furthermore, we designed an effective model learning method with both pre-training (i.e., question-relation matching) and fine-tuning (i.e., retrieval- and reasoning-oriented learning) strategies based on the unified architecture. With the unified architecture, the proposed learning method can effectively enhance the sharing and transferring of relevance information between the two stages. We conducted extensive experiments on three benchmark datasets, and the experimental results show that our proposed unified model outperforms the competitive methods, especially on the more challenging datasets (i.e., WebQSP and CWQ).

ACKNOWLEDGMENTS

This work was partially supported by National Natural Science Foundation of China under Grant No. 62222215, Beijing Natural Science Foundation under Grant No. 4222027, and Beijing Outstanding Young Scientist Program under Grant No. BJJWZYJH012019100020098.
This work was also partially supported by the Outstanding Innovative Talents Cultivation Funded Programs 2022 of Renmin University of China. Xin Zhao is the corresponding author.

A DATASETS

We adopt three widely-used multi-hop KGQA datasets in this work:

• MetaQA (Zhang et al., 2018) contains more than 400k questions in the movie domain, and the answer entities are up to 3 hops away from the topic entities. According to the number of hops, this dataset is split into three sub-datasets, i.e., MetaQA-1hop, MetaQA-2hop, and MetaQA-3hop.

• WebQuestionsSP (WebQSP) (Yih et al., 2015) contains 4,737 questions, and the answer entities require up to 2-hop reasoning on the KG Freebase (Bollacker et al., 2008). We use the same train/valid/test splits as GraftNet (Sun et al., 2018).

• Complex WebQuestions 1.1 (CWQ) (Talmor & Berant, 2018) is constructed based on WebQSP by extending the question entities or adding constraints to the answers. These questions require up to 4-hop reasoning on the KG Freebase (Bollacker et al., 2008).

Existing work has demonstrated that the training data for MetaQA is more than sufficient (Shi et al., 2021; He et al., 2021); hence, all the comparison methods in our experiments can achieve very high performance. We conduct a further analysis of the three MetaQA datasets in terms of the number of templates, the average number of training cases per template, and the number of relations used for constructing questions, shown in Table 5. In summary, more training cases and simpler questions make MetaQA easier to solve.

B BASELINES

We consider the following baseline methods for performance comparison:

• KV-Mem (Miller et al., 2016) maintains a key-value memory table to store KG facts and conducts multi-hop reasoning by performing iterative read operations on the memory.

• GraftNet (Sun et al., 2018) first retrieves the question-relevant subgraph and text sentences from the KG and Wikipedia, respectively, with a heuristic method. Then it adopts a graph neural network to perform multi-hop reasoning on a heterogeneous graph built upon the subgraph and text sentences.

• PullNet (Sun et al., 2019) trains a graph retrieval model composed of an LSTM and a graph neural network instead of the heuristic retrieval in GraftNet, and then conducts multi-hop reasoning with GraftNet.

• EmbedKGQA (Saxena et al., 2020) reformulates the multi-hop reasoning of GraftNet as a link prediction task by matching pre-trained entity embeddings with question representations from a PLM.

• NSM (He et al., 2021) first conducts retrieval following GraftNet and then adapts the neural state machine (Hudson & Manning, 2019) used in visual reasoning for multi-hop reasoning on the KG.

• TransferNet (Shi et al., 2021) first conducts retrieval following GraftNet and then performs multi-hop reasoning on a KG or a text-formed relation graph in a transparent framework. The reasoning model consists of a PLM for question encoding and a graph neural network for updating the relevance scores between entities and the question.

• SR+NSM (Zhang et al., 2022) first learns a PLM-based relation-path retriever to conduct effective retrieval and then leverages the NSM reasoner to perform multi-hop reasoning.

• SR+NSM+E2E (Zhang et al., 2022) further fine-tunes SR+NSM in an end-to-end way.

C KNOWLEDGE GRAPH PREPROCESSING DETAILS

We preprocess the full Freebase following existing work (Sun et al., 2018; He et al., 2021).
For MetaQA, we directly use the subset of WikiMovies provided with the datasets, whose size is about 134,741. For the WebQSP and CWQ datasets, we set the maximum number of hops for retrieval and reasoning to two and four, respectively. Based on the topic entities labeled in the original datasets, we reserve, for each sample, the neighborhood subgraph consisting of entities within four hops of the topic entities. After this simple preprocessing, the size of the KG we use is 147,748,092 for WebQSP and 202,358,414 for CWQ. Based on the preprocessed KG, we conduct the retrieval and reasoning using our proposed approach.

D IMPLEMENTATION DETAILS

During pre-training, we collect question-relation pairs based on the shortest relation paths between topic entities and answer entities, and then use these pairs to pre-train the RoBERTa-base (Liu et al., 2019) model with the contrastive learning objective. We set the temperature τ to 0.05 and select the best model by evaluating Hits@1 on the validation set. For retrieval and reasoning, we initialize the PLM module of our UniKGQA model with the contrastively pre-trained RoBERTa and set the hidden size of the other linear layers to 768. We optimize the parameters with the AdamW optimizer, where the learning rate is 0.00001 for the PLM module and 0.0005 for the other parameters. The batch size is set to 40. The number of reasoning steps is set to 4 for the CWQ dataset, 3 for the WebQSP and MetaQA-3 datasets, 2 for the MetaQA-2 dataset, and 1 for the MetaQA-1 dataset. We preprocess the KGs for each dataset following existing work (Sun et al., 2018; He et al., 2021).

E ONE-SHOT EXPERIMENT FOR METAQA

Since the samples in MetaQA are more than sufficient, all the comparison methods in our experiments achieve very high performance. For example, our method and previous work (e.g., TransferNet and NSM) achieve more than 98% Hits@1 on MetaQA, which shows that the performance on this dataset may have saturated. To examine this assumption, we conduct few-shot experiments to verify the performance of different methods. Specifically, we follow the NSM paper (He et al., 2021) and conduct a one-shot experiment. We randomly sample just one training case for each question template from the original training set to form a one-shot training dataset. In this way, the numbers of training samples for MetaQA-1, MetaQA-2, and MetaQA-3 are 161, 210, and 150, respectively. We evaluate the performance of our approach and some strong baselines (i.e., TransferNet and NSM) trained on this new training dataset. As shown in Table 6, our method consistently outperforms these baselines on all three subsets.

F ABLATION STUDY OF OUR UNIFIED MODEL ARCHITECTURE

The unified model architecture is the key of our approach. Once the unified model architecture is removed, it becomes hard to share the question-relation matching capability enhanced by pre-training between the retrieval and reasoning stages, and also hard to transfer the relevance information for multi-hop KGQA learned in the retrieval stage to the reasoning stage. To verify this, we conduct an extra ablation study to explore the effect of adopting the unified model architecture only as the reasoning model or only as the retrieval model. We select an existing strong retrieval model (i.e., SR) and reasoning model (i.e., NSM), and compare the performance when they are integrated with our UniKGQA. As we can see in Table 7, all the variants underperform our UniKGQA.
This indicates that using the unified model in the retrieval and reasoning stages simultaneously is indeed the key reason for the improvement.

G ANALYSIS OF THE PRE-TRAINING STRATEGY

We conduct analysis experiments to investigate how the pre-training strategy (Pre) affects the performance with or without updating the PLM (QU). We show the results in Table 8. Once the pre-training strategy is removed, the model performance drops by 10.4% (2.1%) on WebQSP and 5.1% (3.3%) on CWQ when fixing (not fixing) the PLM. This indicates that the pre-training strategy is an important component of our approach. After pre-training, the PLM can be fixed for more efficient parameter optimization during fine-tuning.

H PARAMETER SENSITIVITY ANALYSIS

Pre-training Steps. Although the pre-training strategy has proven effective in our approach, too many pre-training steps are time-consuming and costly. Here, we investigate the performance with respect to varying numbers of pre-training steps. As shown in the left of Figure 4, our method reaches its best performance with only a few pre-training steps (i.e., 2,800) compared with the best baseline TransferNet. This shows that our approach does not require many pre-training steps. On the contrary, too many pre-training steps can hurt the model performance, possibly because the PLM overfits to the contrastive learning objective.

Parameter Tuning. In our approach, we have two hyper-parameters to tune: (1) the hidden size of the linear layers d, and (2) the number of retrieved nodes K. Here, we tune d over {64, 128, 256, 512, 768, 1024} and K over {1, 5, 10, 15, 20}. We show the results in the middle and right of Figure 4, compared with the best results for the reasoning stage and the retrieval stage. Since K is a shared hyper-parameter of UniKGQA and SR, we also report the results of SR with different K to give a fair comparison. First, we can see that our method is robust to different hidden sizes, as the performance stays consistently around 77.0. Since the PLM uses 768 as its embedding size, d = 768 is also slightly better than the other values. Besides, with the increase of K, the answer coverage rate improves consistently. However, when K increases to 15 or even 20, the performance gain becomes relatively small. This means that the retrieved subgraphs are likely saturated, and further increasing K brings only marginal improvement.
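To make the settings in Appendix D easier to scan, the following sketch collects them into a single configuration object. This is our own summary under hypothetical key names, not a file from the released repository.

```python
# Hypothetical configuration summarizing Appendix D; the released code at
# https://github.com/RUCAIBox/UniKGQA may organize these settings differently.
UNIKGQA_CONFIG = {
    "plm": "roberta-base",        # PLM used to encode questions and relations
    "temperature": 0.05,          # tau in the contrastive pre-training loss
    "hidden_size": 768,           # hidden size of the extra linear layers
    "optimizer": "AdamW",
    "lr_plm": 1e-5,               # learning rate for the PLM module
    "lr_other": 5e-4,             # learning rate for all other parameters
    "batch_size": 40,
    "reasoning_steps": {          # number of propagation steps per dataset
        "CWQ": 4, "WebQSP": 3, "MetaQA-3": 3, "MetaQA-2": 2, "MetaQA-1": 1,
    },
}
```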
1. What is the main contribution of the paper on multi-hop KBQA?
2. What are the strengths of the proposed model, particularly its simplicity and effectiveness?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What questions does the reviewer have regarding the implementation and experimental design?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a multi-hop KBQA model in which the retrieval and reasoning modules share the same architecture. Unifying the retrieval and reasoning modules lets the model share more of the learned knowledge. Experiments show strong performance on three benchmark multi-hop reasoning datasets.
Strengths And Weaknesses
The proposed model is simple and effective. The results are also very impressive. It can be a go-to solution for multi-hop KBQA.
I have some questions about the implementation.
Without any of the proposed techniques (w/o Pre, Trans), the model already outperforms the previous state-of-the-art. Do you know why this happens?
The reasoning and retrieval modules share the same input structure and the same model architecture, but it seems they do not share the same parameters. This sounds a bit odd to me, and I am not sure why it leads to an improvement in the model's performance.
How much improvement comes from the pre-training of question-relation matching (see also Fig 3(c))? You should consider emphasizing the pre-training strategy if it leads to a big improvement.
What is the size of the KB you use for WebQSP and CWQ? Do you use the full Freebase?
Clarity, Quality, Novelty And Reproducibility
Overall the paper is clear to read. The paper should include more details about key choices in the experiments, e.g., the size of the KB. Also, I am concerned that the ablated results without any of the introduced techniques (in Table 4) can outperform the previous state-of-the-art.
ICLR
Title
UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph

Abstract
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question on a large-scale Knowledge Graph (KG). To cope with the vast search space, existing work usually adopts a two-stage approach: it first retrieves a relatively small subgraph related to the question and then performs the reasoning on the subgraph to find the answer entities accurately. Although these two stages are highly related, previous work employs very different technical solutions for developing the retrieval and reasoning models, neglecting their relatedness in task essence. In this paper, we propose UniKGQA, a novel approach for the multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning. For model architecture, UniKGQA consists of a semantic matching module based on a pre-trained language model (PLM) for question-relation semantic matching, and a matching information propagation module to propagate the matching information along the directed edges on KGs. For parameter learning, we design a shared pre-training task based on question-relation matching for both retrieval and reasoning models, and then propose retrieval- and reasoning-oriented fine-tuning strategies. Compared with previous studies, our approach is more unified, tightly relating the retrieval and reasoning stages. Extensive experiments on three benchmark datasets have demonstrated the effectiveness of our method on the multi-hop KGQA task. Our codes and data are publicly available at https://github.com/RUCAIBox/UniKGQA.

1 INTRODUCTION

With the availability of large-scale knowledge graphs (KGs), such as Freebase (Bollacker et al., 2008) and Wikidata (Tanon et al., 2016), knowledge graph question answering (KGQA) has become an important research topic that aims to find the answer entities of natural language questions from KGs. Recent studies (Lan et al., 2021) mainly focus on multi-hop KGQA, a more complex scenario where sophisticated multi-hop reasoning over edges (or relations) is required to infer the correct answer on the KG. We show an example in Figure 1(a). Given the question "Who is the wife of the nominee for The Jeff Probst Show", the task goal is to find a reasoning path from the topic entity "The Jeff Probst Show" to the answer entities "Shelley Wright" and "Lisa Ann Russell".

[Figure 1: Illustrative examples and learning procedure of our work: (a) an example of multi-hop KGQA; (b) an example of an abstract subgraph; (c) the overall learning procedure.]

Faced with the vast search space in large-scale KGs, previous work (Sun et al., 2018; 2019) typically adopts a retrieval-then-reasoning approach to achieve a good trade-off. Generally, the retrieval stage aims to extract relevant triples from the large-scale KG to compose a relatively smaller question-relevant subgraph, while the reasoning stage focuses on accurately finding the answer entities from the retrieved subgraph. Although the purposes of the two stages are different, both stages need to evaluate the semantic relevance of a candidate entity with respect to the question (for removal or reranking), which can be considered as a semantic matching problem in essence. For measuring the entity relevance, relation-based features, either direct relations (Miller et al., 2016) or composite relation paths (Sun et al., 2018), have been shown to be particularly useful for building the semantic matching models. As shown in Figure 1(a), given the question, it is key to identify the semantically matched relations and the composed relation path in the KG (e.g., "nominee → spouse") for finding the correct answer entities. Since the two stages cope with different scales of search space on KGs (e.g., millions vs. thousands), they usually adopt specific technical solutions: the former prefers more efficient methods focusing on the recall performance (Sun et al., 2018), while the latter prefers more capable methods for modeling fine-grained matching signals (He et al., 2021).

Considering the same essence of both stages, this work aims to push forward the research on multi-hop KGQA by investigating the following problem: can we design a unified model architecture for both stages to derive a better performance? A major merit of a unified model architecture for multi-hop KGQA is that we can tightly relate the two stages and enhance the sharing of the relevance information. Although the two stages are highly related, previous studies usually treat them separately in model learning: only the retrieved triples are passed from the retrieval stage to the reasoning stage, while the rest of the useful signal for semantic matching has been neglected in the pipeline framework. Such an approach is likely to lead to sub-optimal or inferior performance, since multi-hop KGQA is a very challenging task, requiring elaborate solutions that sufficiently leverage various kinds of relevance information from the two stages.

However, there are two major issues in developing a unified model architecture for multi-hop KGQA: (1) How to cope with the very different scales of search space in the two stages? (2) How to effectively share or transfer useful relevance information across the two stages? For the first issue, instead of letting the same model architecture directly fit very different data distributions, we propose a new subgraph form to reduce the node scale at the retrieval stage, namely the abstract subgraph, which is constructed by merging the nodes with the same relations in the KG (see Figure 1(b)). For the second issue, based on the same model architecture, we design an effective learning approach for the two stages, so that we can share the same pre-trained parameters and use the learned retrieval model to initialize the reasoning model (see Figure 1(c)). To this end, in this paper, we propose UniKGQA, a unified model for the multi-hop KGQA task.
Specifically, UniKGQA consists of a semantic matching module based on a PLM for question-relation semantic matching, and a matching information propagation module to propagate the matching information along the directed edges on KGs. In order to learn these parameters, we design both pre-training (i.e., question-relation matching) and fine-tuning (i.e., retrieval- and reasoning-oriented learning) strategies based on the unified architecture. Compared with previous work on multi-hop KGQA, our approach is more unified and simplified, tightly relating the retrieval and reasoning stages. To our knowledge, it is the first work that unifies the retrieval and reasoning in both model architecture and learning for the multi-hop KGQA task. To evaluate our approach, we conduct extensive experiments on three benchmark datasets. On the difficult datasets, WebQSP and CWQ, we outperform existing state-of-the-art baselines by a large margin (e.g., 8.1% improvement of Hits@1 on WebQSP and 2.0% improvement of Hits@1 on CWQ).

2 PRELIMINARY

In this section, we introduce the notations that will be used throughout the paper and then formally define the multi-hop KGQA task.

Knowledge Graph (KG). A knowledge graph typically consists of a set of triples, denoted by G = {⟨e, r, e′⟩ | e, e′ ∈ E, r ∈ R}, where E and R denote the entity set and relation set, respectively. A triple ⟨e, r, e′⟩ describes the fact that a relation r exists between the head entity e and the tail entity e′. Furthermore, we denote the set of neighborhood triples that an entity e belongs to by Ne = {⟨e, r, e′⟩ ∈ G} ∪ {⟨e′, r, e⟩ ∈ G}. Let r−1 denote the inverse relation of r; then we can represent a triple ⟨e, r, e′⟩ by its inverse triple ⟨e′, r−1, e⟩. In this way, we can simplify the definition of the neighborhood triples of an entity e as Ne = {⟨e′, r, e⟩ ∈ G}. We further use E ∈ R^{d×|E|} and R ∈ R^{d×|R|} to denote the embedding matrices for entities and relations in the KG, respectively.

Multi-hop Knowledge Graph Question Answering (Multi-hop KGQA). Given a natural language question q and a KG G, the task of KGQA aims to find the answer entities to the question over the KG, denoted by the answer set Aq ⊆ E. Following previous work (Sun et al., 2018; 2019), we assume that the entities mentioned in the question (e.g., "The Jeff Probst Show" in Figure 1(a)) are marked and linked with entities on the KG, namely topic entities, denoted as Tq ⊂ E. In this work, we focus on solving the multi-hop KGQA task where the answer entities are multiple hops away from the topic entities over the KG. Considering the trade-off between efficiency and accuracy, we follow existing work (Sun et al., 2018; 2019) that solves this task using a retrieval-then-reasoning framework. In the two-stage framework, given a question q and topic entities Tq, the retrieval model aims to retrieve a small subgraph Gq from the large-scale input KG G, while the reasoning model searches for the answer entities Aq by reasoning over the retrieved subgraph Gq.

Abstract Subgraph. Based on KGs, we further introduce the concept of the abstract subgraph, which is derived by reduction from an original subgraph. Specifically, given a subgraph related to question q, denoted as Gq ⊂ G, we merge the tail entities from the triples with the same prefix (i.e., the same head entity and relation: ⟨e, r, ?⟩), and then generate a corresponding abstract node ẽ to represent the set of merged tail entities, so we have ẽ = {e′ | ⟨e, r, e′⟩ ∈ Gq}. Similarly, we can also perform the same operations on the head entities. To unify the notations, we transform an original node that cannot be merged into an abstract node by creating a set including only the node itself. In this way, the corresponding abstract subgraph can be denoted as G̃q = {⟨ẽ, r, ẽ′⟩ | ∃e ∈ ẽ, ∃e′ ∈ ẽ′, ⟨e, r, e′⟩ ∈ Gq}, where each node ẽ is an abstract node representing a set of original nodes (one or multiple). We present illustrative examples of an original subgraph and its abstract subgraph in Figure 1(a) and Figure 1(b).
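To make the merging operation concrete, here is a minimal sketch of the tail-side abstraction under our own naming; this code is illustrative and not taken from the released repository, and the head-side merge mentioned above is analogous and omitted.

```python
from collections import defaultdict

def build_abstract_subgraph(triples):
    """Merge tail entities that share the same <head, relation> prefix into a
    single abstract node, following the abstract subgraph definition above.
    `triples` is an iterable of (head, relation, tail) tuples from G_q."""
    merged_tails = defaultdict(set)
    for h, r, t in triples:
        merged_tails[(h, r)].add(t)
    abstract_triples = set()
    for (h, r), tails in merged_tails.items():
        # an abstract node is the (frozen) set of merged original entities;
        # a node that cannot be merged becomes a singleton set of itself
        abstract_triples.add((frozenset({h}), r, frozenset(tails)))
    return abstract_triples

# Toy usage on the Figure 1 example:
# build_abstract_subgraph([
#     ("The Jeff Probst Show", "nominee", "Jeff Probst"),
#     ("The Jeff Probst Show", "nominee", "Lisa Whelchel"),
# ])
# -> {(frozenset({"The Jeff Probst Show"}), "nominee",
#      frozenset({"Jeff Probst", "Lisa Whelchel"}))}
```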
3 APPROACH

In this section, we present our proposed UniKGQA, which unifies the retrieval and reasoning for multi-hop KGQA. The major novelty is that we introduce a unified model architecture for both stages (Section 3.1) and design an effective learning approach involving both specific pre-training and fine-tuning strategies (Section 3.2). Next, we describe the two parts in detail.

3.1 UNIFIED MODEL ARCHITECTURE

We consider a general input form for both retrieval and reasoning, and develop the base architecture by integrating two major modules: (1) the semantic matching (SM) module that employs a PLM to perform the semantic matching between questions and relations; (2) the matching information propagation (MIP) module that propagates the semantic matching information on KGs. We present an overview of the model architecture in Figure 2. Next, we describe the three parts in detail.

General Input Formulation. In order to support both the retrieval and reasoning stages, we consider a general form for evaluating entity relevance, where a question q and a subgraph Gq of candidate entities are given. For the retrieval stage, Gq is an abstract subgraph that incorporates abstract nodes to merge entities reached by the same relation. For the reasoning stage, Gq is constructed based on the retrieved subgraph from the retrieval stage, without abstract nodes. Such a general input formulation enables the development of a unified model architecture for the two different stages. In what follows, we describe the approach in a general way, without considering specific stages.

Semantic Matching (SM). The SM module aims to produce the semantic matching features between the question q and a triple ⟨e′, r, e⟩ from the given subgraph Gq. Considering the excellent modeling capacity of the PLM, we leverage the PLM to produce text encodings as the representations of question q and relation r. Specifically, we first utilize the PLM to encode the texts of q and r, and employ the output representation of the [CLS] token as their representations:

$h_q = \mathrm{PLM}(q), \quad h_r = \mathrm{PLM}(r). \quad (1)$

Based on $h_q$ and $h_r$, inspired by the NSM model (He et al., 2021), we obtain the vector capturing the semantic matching features $m^{(t)}_{\langle e',r,e \rangle}$ between question q and triple ⟨e′, r, e⟩ at the t-th step by adopting corresponding projection layers:

$m^{(t)}_{\langle e',r,e \rangle} = \sigma\big(h_q W^{(t)}_Q \odot h_r W^{(t)}_R\big), \quad (2)$

where $m^{(t)}_{\langle e',r,e \rangle} \in \mathbb{R}^d$ and $W^{(t)}_Q, W^{(t)}_R \in \mathbb{R}^{h \times d}$ are the parameters of the t-th step projection layers, h and d are the hidden dimensions of the PLM and the feature vector, respectively, σ is the sigmoid activation function, and ⊙ is the Hadamard product.
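To make Eq. 1-2 concrete, below is a minimal PyTorch-style sketch of the SM module. The class and variable names (and the default feature dimension) are our own assumptions, and the PLM [CLS] encodings h_q and h_r are assumed to be computed beforehand (Eq. 1).

```python
import torch
import torch.nn as nn

class SemanticMatching(nn.Module):
    """Sketch of the SM module (Eq. 2): per-step linear projections applied
    to fixed PLM encodings of the question and of a relation."""

    def __init__(self, plm_dim: int = 768, feat_dim: int = 100, num_steps: int = 3):
        super().__init__()
        self.W_Q = nn.ModuleList(nn.Linear(plm_dim, feat_dim, bias=False)
                                 for _ in range(num_steps))
        self.W_R = nn.ModuleList(nn.Linear(plm_dim, feat_dim, bias=False)
                                 for _ in range(num_steps))

    def forward(self, h_q: torch.Tensor, h_r: torch.Tensor, t: int) -> torch.Tensor:
        # m^(t) = sigmoid(h_q W_Q^(t) * h_r W_R^(t))  -- Eq. 2, with t in 0..T-1
        return torch.sigmoid(self.W_Q[t](h_q) * self.W_R[t](h_r))
```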
Matching Information Propagation (MIP). Based on the generated semantic matching features, the MIP module first aggregates them to update the entity representations and then utilizes these representations to obtain the entity match scores. To initialize the match scores, given a question q and a subgraph Gq, for each entity $e_i \in G_q$ we set $s^{(1)}_{e_i} = 1$ if $e_i$ is a topic entity and $s^{(1)}_{e_i} = 0$ otherwise. At the t-th step, we use the match scores of the head entities computed at the previous step, $s^{(t-1)}_{e'}$, as weights and aggregate the matching features from the neighboring triples to obtain the representation of the tail entity:

$e^{(t)} = W^{(t)}_E \Big[ e^{(t-1)} \,;\, \textstyle\sum_{\langle e',r,e \rangle \in N_e} s^{(t-1)}_{e'} \cdot m^{(t)}_{\langle e',r,e \rangle} \Big], \quad (3)$

where $e^{(t)} \in \mathbb{R}^d$ is the representation of the entity e at the t-th step, and $W^{(t)}_E \in \mathbb{R}^{2d \times d}$ is a learnable matrix. At the first step, since there are no matching scores yet, we follow the NSM (He et al., 2021) model and directly aggregate the representations of an entity's one-hop relations as its representation: $e^{(1)} = \sigma\big(\sum_{\langle e',r,e \rangle \in N_e} r \cdot U\big)$, where $U \in \mathbb{R}^{d \times d}$ is a learnable matrix. Based on the representations of all entities $E^{(t)} \in \mathbb{R}^{d \times n}$, we update their entity match scores using the softmax function:

$s^{(t)} = \mathrm{softmax}\big(E^{(t)\top} v\big), \quad (4)$

where $v \in \mathbb{R}^d$ is a learnable vector. After T iterations, we obtain the final entity match scores $s^{(T)}$, which form a probability distribution over all entities in the subgraph Gq. These match scores measure how likely each entity is to be an answer to the given question q, and are used in both the retrieval and reasoning stages.
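The propagation step of Eq. 3-4 can be sketched as follows; this is a simplified dense implementation with our own function signature, not the paper's released code.

```python
import torch
import torch.nn as nn

def mip_step(e_prev, s_prev, match_feats, heads, tails, W_E, v):
    """One MIP propagation step (Eq. 3-4). Assumed shapes:
      e_prev:      (n, d)  entity representations from step t-1
      s_prev:      (n,)    entity match scores from step t-1
      match_feats: (m, d)  semantic matching features m^(t), one per triple
      heads/tails: (m,)    long tensors of head/tail entity indices
      W_E:         nn.Linear(2 * d, d); v: (d,) scoring vector
    """
    n, d = e_prev.shape
    # weight each triple's matching feature by its head entity's score, then
    # sum the weighted features into the corresponding tail entity (Eq. 3)
    weighted = s_prev[heads].unsqueeze(-1) * match_feats      # (m, d)
    agg = torch.zeros(n, d).index_add_(0, tails, weighted)    # (n, d)
    e_t = W_E(torch.cat([e_prev, agg], dim=-1))               # (n, d)
    s_t = torch.softmax(e_t @ v, dim=0)                       # Eq. 4
    return e_t, s_t
```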
3.2 MODEL TRAINING

In our approach, we have both a retrieval model and a reasoning model for the two stages of multi-hop KGQA. Since the two models adopt the same architecture, we introduce Θ and Γ to denote the model parameters used for the retrieval and reasoning stages, respectively. As shown in Section 3.1, our architecture contains two groups of parameters, namely the underlying PLM and the other parameters for matching and propagation. Thus, Θ and Γ can be decomposed as Θ = {Θp, Θo} and Γ = {Γp, Γo}, where the subscripts p and o denote the PLM parameters and the other parameters of our architecture, respectively. In order to learn these parameters, we design both pre-training (i.e., question-relation matching) and fine-tuning (i.e., retrieval- and reasoning-oriented learning) strategies based on the unified architecture. Next, we describe the model training approach.

Pre-training with Question-Relation Matching (QRM). For pre-training, we mainly focus on learning the parameters of the underlying PLMs (i.e., Θp and Γp). In the implementation, we let the two models share the same copy of the PLM parameters, i.e., Θp = Γp. As shown in Section 3.1, the basic capacity of the semantic matching module is to model the relevance between a question and a single relation (Eq. 2), which is based on the text encoding from the underlying PLM. Therefore, we design a contrastive pre-training task based on question-relation matching. Specifically, we adopt the contrastive learning objective (Hadsell et al., 2006) to align the representations of relevant question-relation pairs while pushing apart the others. To collect the relevant question-relation pairs, given an example consisting of a question q, topic entities Tq, and answer entities Aq, we extract all the shortest paths between Tq and Aq from the entire KG and regard all relations within these paths as relevant to q, denoted as R+. In this way, we can obtain a number of weakly-supervised examples. During pre-training, for each question $q_i$ we randomly sample a relevant relation $r_i^+ \in R^+$ and utilize the contrastive learning loss

$\mathcal{L}_{PT} = -\log \frac{e^{\mathrm{sim}(q_i, r_i^+)/\tau}}{\sum_{j=1}^{M}\big(e^{\mathrm{sim}(q_i, r_j^+)/\tau} + e^{\mathrm{sim}(q_i, r_j^-)/\tau}\big)}, \quad (5)$

where τ is a temperature hyperparameter, $r_j^-$ is a randomly sampled negative relation, and sim(q, r) is the cosine similarity between the question and relation representations encoded by the PLM of the SM module (Eq. 1). In this way, the question-relation matching capacity is enhanced by pre-training the PLM parameters. Note that the PLM parameters are fixed after pre-training.

Fine-tuning for Retrieval on Abstract Subgraphs (RAS). After pre-training, we first fine-tune the entire model to learn the parameters Θo for the retrieval task. Recall that we transform the subgraphs into abstract subgraphs, where abstract nodes are incorporated to merge entities reached by the same relation. Our MIP module (Section 3.1) produces the matching scores $s_A$ of the nodes in a subgraph (Eq. 4), where the subscript A denotes that the nodes come from an abstract subgraph. Furthermore, we utilize the labeled answers to construct the ground-truth vector, denoted by $s_A^*$: an abstract node in $s_A^*$ is set to 1 if it contains an answer entity. Then we minimize the KL divergence between the learned and ground-truth matching score vectors:

$\mathcal{L}_{RAS} = D_{KL}\big(s_A, s_A^*\big). \quad (6)$

After fine-tuning with the RAS loss, the retrieval model can be effectively learned. We further utilize it to retrieve the subgraph for a given question q by selecting the top-K ranked nodes according to their match scores. Note that only nodes within a reasonable distance of the topic entities are selected into the subgraph, which ensures a relatively small yet relevant subgraph Gq for the subsequent reasoning stage to find the answer entities.

Fine-tuning for Reasoning on Retrieved Subgraphs (RRS). After fine-tuning the retrieval model, we continue to fine-tune the reasoning model by learning the parameters Γo. With the fine-tuned retrieval model, we can obtain a smaller subgraph Gq for each question q. In the reasoning stage, we focus on performing accurate reasoning to find the answer entities, so we recover the original nodes within the abstract nodes as well as the original relations among them. Since the retrieval and reasoning stages are highly dependent, we first initialize the parameters of the reasoning model with those from the retrieval model: Θo → Γo. Then, following Eq. 4, we employ a similar approach, fitting the learned matching scores (denoted by $s_R$) to the ground-truth vectors (denoted by $s_R^*$) according to the KL loss:

$\mathcal{L}_{RRS} = D_{KL}\big(s_R, s_R^*\big), \quad (7)$

where the subscript R denotes that the nodes come from a retrieved subgraph. After fine-tuning with the RRS loss, we can utilize the learned reasoning model to select the top-n ranked entities as the answer list according to the match scores. As shown in Figure 1(c), the overall training procedure is composed of: (1) pre-training Θp with question-relation matching, (2) fixing Θp and fine-tuning Θo for retrieval on abstract subgraphs, and (3) fixing Γp (initialized by Θp) and fine-tuning Γo (initialized by Θo) for reasoning on the retrieved subgraphs.
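As a concrete reference for the training objectives, here is a hedged sketch of the contrastive pre-training loss (Eq. 5) and the KL fine-tuning loss (Eq. 6/7). The batching and negative-sampling details are our assumptions and may differ from the released code; in this sketch, the positives of the other in-batch questions also serve as negatives, matching the denominator of Eq. 5.

```python
import torch
import torch.nn.functional as F

def qrm_contrastive_loss(q, r_pos, r_neg, tau: float = 0.05):
    """Question-relation matching loss (Eq. 5) for one batch.
    q: (B, d) question encodings; r_pos/r_neg: (B, d) positive and sampled
    negative relation encodings. Cosine similarity is the dot product of
    L2-normalized vectors."""
    q, r_pos, r_neg = (F.normalize(x, dim=-1) for x in (q, r_pos, r_neg))
    logits = torch.cat([q @ r_pos.t(), q @ r_neg.t()], dim=-1) / tau  # (B, 2B)
    labels = torch.arange(q.size(0))  # the i-th positive is the target class
    return F.cross_entropy(logits, labels)

def kl_matching_loss(s_pred, s_gold, eps: float = 1e-12):
    """KL divergence between the predicted and ground-truth match score
    distributions over subgraph nodes (Eq. 6 for RAS, Eq. 7 for RRS)."""
    s_gold = s_gold / s_gold.sum()  # normalize the 0/1 answer vector
    return (s_gold * ((s_gold + eps).log() - (s_pred + eps).log())).sum()
```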
Our work provides a novel unified model for the retrieval and reasoning stages to share the reasoning capacity. In Table 1, we summarize the differences between our method and several popular methods for multi-hop KGQA, including GraftNet (Sun et al., 2018), PullNet (Sun et al., 2019), NSM (He et al., 2021), and SR+NSM (Zhang et al., 2022). As we can see, existing methods usually adopt different models for the retrieval and reasoning stages, while our approach is more unified. As a major benefit, the information between the two stages can be effectively shared and reused: we initialize the reasoning model with the learned retrieval model.

Table 1: Comparison of different methods.
Methods  | Retrieval | Reasoning | Parameter Transferring
GraftNet | PPR       | GraftNet  | ✗
PullNet  | LSTM      | GraftNet  | ✗
NSM      | PPR       | NSM       | ✗
SR+NSM   | PLM       | NSM       | ✗
UniKGQA  | UniKGQA   | UniKGQA   | ✓

4 EXPERIMENT

4.1 EXPERIMENTAL SETTING

Datasets. Following existing work on multi-hop KGQA (Sun et al., 2018; 2019; He et al., 2021; Zhang et al., 2022), we adopt three benchmark datasets, namely MetaQA (Zhang et al., 2018), WebQuestionsSP (WebQSP) (Yih et al., 2015), and Complex WebQuestions 1.1 (CWQ) (Talmor & Berant, 2018), for evaluating our model. Table 2 shows the statistics of the three datasets. Since previous work has achieved nearly full marks on MetaQA, WebQSP and CWQ are our primary evaluation datasets. We present a detailed description of these datasets in Appendix A.

Evaluation Protocol. For the retrieval performance, we follow Zhang et al. (2022) and evaluate the models by the answer coverage rate (%), i.e., the proportion of questions whose retrieved subgraphs contain at least one answer. For the reasoning performance, we follow Sun et al. (2018; 2019) and treat reasoning as a ranking task for evaluation. Given each test question, we rely on the predictive probabilities from the evaluated model to rank all candidate entities and then evaluate whether the top-1 answer is correct with Hits@1. Since a question may correspond to multiple answers, we also adopt the widely-used F1 metric.

Baselines. We consider the following baselines for performance comparison: (1) reasoning-focused methods: KV-Mem (Miller et al., 2016), GraftNet (Sun et al., 2018), EmbedKGQA (Saxena et al., 2020), NSM (He et al., 2021), and TransferNet (Shi et al., 2021); (2) retrieval-augmented methods: PullNet (Sun et al., 2019), SR+NSM (Zhang et al., 2022), and SR+NSM+E2E (Zhang et al., 2022). We present a detailed description of these baselines in Appendix B.

4.2 EVALUATION RESULTS

Table 3 shows the results of different methods on the five multi-hop KGQA datasets. It can be seen that: First, most baselines perform very well on the three MetaQA datasets (nearly 100% Hits@1). This is because these datasets are based on a few hand-crafted question templates and have only nine relation types in the given KG; the model can thus easily capture the relevant semantics between the questions and relations to perform reasoning. To further examine this, we conduct an extra one-shot experiment on the MetaQA datasets and present the details in Appendix E. Second, TransferNet performs better than GraftNet, EmbedKGQA, and NSM with the same retrieval method. It attends to question words to compute the scores of relations and transfers entity scores along the relations; such a design effectively captures the question-path matching semantics. Besides, SR+NSM and SR+NSM+E2E outperform NSM and PullNet by a large margin. The reason is that they both leverage a PLM-based relation path retriever to improve the retrieval performance, which reduces the difficulty of the subsequent reasoning stage.
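For clarity, the two reasoning metrics can be written down in a few lines. These are the standard definitions; how the predicted answer set is cut off for F1 is a design choice not specified here.

```python
def hits_at_1(ranked_entities, gold_answers):
    """Hits@1: 1.0 if the top-ranked entity is a correct answer, else 0.0."""
    return float(ranked_entities[0] in gold_answers)

def f1_score(predicted, gold_answers):
    """Set-level F1 between a predicted answer set and the gold answers."""
    overlap = len(set(predicted) & set(gold_answers))
    if overlap == 0:
        return 0.0
    precision = overlap / len(set(predicted))
    recall = overlap / len(set(gold_answers))
    return 2 * precision * recall / (precision + recall)
```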
Finally, on WebQSP and CWQ, our proposed UniKGQA is substantially better than all the other competitive baselines. Unlike the baselines that rely on independent models to perform retrieval and reasoning, our approach utilizes a unified architecture to accomplish both. Such a unified architecture can pre-learn the essential capability of question-relation semantic matching for both stages, and is also capable of effectively transferring relevance information from the retrieval stage to the reasoning stage, i.e., initializing the reasoning model with the parameters of the retrieval model.

In our approach, we fix the parameters of the PLM-based encoder for efficiency. In fact, updating its parameters can further improve our performance, which enables researchers to trade off efficiency and effectiveness when employing our approach in real-world applications. Here, we study this by proposing two variants of UniKGQA: (1) w QU, which updates the parameters of the PLM encoder only when encoding questions, and (2) w QU, RU, which updates the parameters of the PLM encoder both when encoding questions and when encoding relations. Indeed, both variants boost the performance of UniKGQA, and updating the PLM encoder only when encoding questions obtains comparable or even better performance than updating it for both. A possible reason is that updating the PLM encoder when encoding both questions and relations may lead to overfitting on the downstream tasks. Therefore, it is promising for UniKGQA to update the PLM encoder only when encoding questions, as this achieves better performance with relatively little additional computation cost.

4.3 FURTHER ANALYSIS

Retrieval Evaluation. We evaluate the effectiveness of UniKGQA in retrieving a smaller subgraph with a better answer coverage rate for a given question. Following the evaluation principles of SR (Zhang et al., 2022), we measure this capacity from three aspects: the subgraph size, the answer coverage rate, and the final QA performance. Concretely, we first compare UniKGQA with SR (Zhang et al., 2022) and the PPR-based heuristic retrieval method (Sun et al., 2018) based on the answer coverage rate curve w.r.t. the number of graph nodes. Then, we compare UniKGQA with SR+NSM (Zhang et al., 2022) and PPR+NSM (He et al., 2021) based on their final QA performance. To further study the effectiveness of our approach, we add an extra variant of UniKGQA, namely UniKGQA+NSM, which relies on UniKGQA for retrieval and NSM for reasoning. The left and middle of Figure 3 show the comparison results of the above methods. As we can see, under the same size of retrieved subgraphs, UniKGQA and SR have significantly larger answer coverage rates than PPR. This demonstrates the effectiveness and necessity of training a learnable retrieval model. Besides, although the curves of UniKGQA and SR are very similar, UniKGQA achieves a better final reasoning performance than SR+NSM. The reason is that UniKGQA can transfer the relevance information from the retrieval stage to the reasoning stage based on the unified architecture, learning a more effective reasoning model. This finding is further verified by comparing UniKGQA with UniKGQA+NSM.

Ablation Study. Our UniKGQA contains two important training strategies to improve performance: (1) pre-training with question-relation matching, and (2) initializing the parameters of the reasoning model with those of the retrieval model. Here, we conduct an ablation study to verify their effectiveness.
We propose three variants: (1) w/o Pre, removing the pre-training procedure; (2) w/o Trans, removing the initialization with the parameters of the retrieval model; and (3) w/o Pre, Trans, removing both the pre-training and initialization procedures. We show the results of the ablation study in Table 4. We can see that all these variants underperform the complete UniKGQA, which indicates that both training strategies are important for the final performance. Besides, this observation also verifies that UniKGQA is indeed capable of transferring and reusing the learned knowledge to improve the final performance.

Fine-tuning Efficiency. As our UniKGQA model can transfer the learned knowledge from the pre-training stage and the retrieval task, it can be easily adapted to downstream reasoning tasks. In this way, we can perform more efficient fine-tuning on the reasoning task with only a few fine-tuning steps. To explore this, we compare the performance changes of UniKGQA and a strong baseline, NSM, w.r.t. the number of fine-tuning epochs, based on the same retrieved subgraphs. The results are presented on the right of Figure 3. First, we can see that before fine-tuning (i.e., when the epoch is zero), UniKGQA already achieves performance comparable to the best results of NSM at its last epoch. This indicates that the reasoning model has successfully leveraged the knowledge from prior tasks through the parameters initialized by the retrieval model. After fine-tuning for two epochs, UniKGQA already achieves good performance, which verifies that our model can be fine-tuned efficiently with very few epochs. To further investigate the UniKGQA model, we conduct a parameter sensitivity analysis w.r.t. pre-training steps, hidden dimensions, and the number of retrieved nodes K, shown in Appendix H.

5 RELATED WORK

Multi-hop Knowledge Graph Question Answering. Multi-hop KGQA aims to seek answer entities that are multiple hops away from the topic entities in a large-scale KG. Considering both efficiency and accuracy, existing work (Sun et al., 2018; 2019; Zhang et al., 2022) typically first retrieves a question-relevant subgraph to reduce the search space and then performs multi-hop reasoning on it. Such a retrieval-then-reasoning paradigm has shown superiority over directly reasoning on the entire KG (Chen et al., 2019; Saxena et al., 2020). The retrieval stage focuses on extracting a relatively small subgraph involving the answer entities. A commonly-used approach is to collect entities within a few hops around the topic entities to compose the subgraph, filtering out those with low Personalized PageRank scores to reduce the graph size (Sun et al., 2018; He et al., 2021). Despite its simplicity, this approach neglects the question semantics, limiting the retrieval efficiency and accuracy. To address this, several works (Sun et al., 2019; Zhang et al., 2022) devise retrievers based on semantic matching using neural networks (e.g., LSTMs or PLMs). Starting from the topic entities, these retrievers iteratively measure the semantic relevance between the question and neighboring entities or relations, and add the proper ones into the subgraph. In this way, a smaller but more question-relevant subgraph can be constructed. The reasoning stage aims to accurately find the answer entities of the given question by walking along the relations starting from the topic entities.
Early work (Miller et al., 2016; Sun et al., 2018; 2019; Jiang et al., 2022) relies on special network architectures (e.g., Key-Value Memory Networks or Graph Convolution Networks) to model the multi-hop reasoning process. Recent work further enhances the reasoning capacity of the above networks from the perspectives of intermediate supervision signals (He et al., 2021), knowledge transfer (Shi et al., 2021), etc. However, all these methods design different model architectures and training methods for the retrieval and reasoning stages, respectively, neglecting the similarity and intrinsic connection between the two stages. Recently, some work parses the question into a structured query language (e.g., SPARQL) (Lan et al., 2021; Das et al., 2021; Huang et al., 2021) and executes it with a query engine to get the answers. In this line of work, an encoder-decoder architecture (e.g., T5 (Raffel et al., 2020)) is generally adopted to produce the structured queries, and annotated structured queries are also required for training.

Dense Retrieval. Given a query, the dense retrieval task aims to select relevant documents from a large-scale document pool. Different from traditional sparse term-based retrieval methods, e.g., TF-IDF (Chen et al., 2017) and BM25 (Robertson & Zaragoza, 2009), dense retrieval methods (Karpukhin et al., 2020; Zhou et al., 2022a;b) rely on a bi-encoder architecture to map queries and documents into low-dimensional dense vectors. Their relevance scores can then be measured using vector distance metrics (e.g., cosine similarity), which supports efficient approximate nearest neighbor (ANN) search algorithms. In multi-hop KGQA, starting from the topic entities, we need to select the relevant neighboring triples from a large-scale KG to induce a path reaching the answer entities, which can be seen as a constrained dense retrieval task. Therefore, in this work, we also incorporate a bi-encoder architecture to map questions and relations into dense vectors, and then perform retrieval or reasoning based on their vector distances.

6 CONCLUSION

In this work, we proposed a novel approach for the multi-hop KGQA task. As the major technical contribution, UniKGQA introduced a unified model architecture based on PLMs for both the retrieval and reasoning stages, consisting of the semantic matching module and the matching information propagation module. To cope with the different scales of search space in the two stages, we proposed to generate abstract subgraphs for the retrieval stage, which can significantly reduce the number of nodes to be searched. Furthermore, we designed an effective model learning method with both pre-training (i.e., question-relation matching) and fine-tuning (i.e., retrieval- and reasoning-oriented learning) strategies based on the unified architecture. With the unified architecture, the proposed learning method can effectively enhance the sharing and transferring of relevance information between the two stages. We conducted extensive experiments on three benchmark datasets, and the experimental results show that our proposed unified model outperforms the competitive methods, especially on the more challenging datasets (i.e., WebQSP and CWQ).

ACKNOWLEDGMENTS

This work was partially supported by National Natural Science Foundation of China under Grant No. 62222215, Beijing Natural Science Foundation under Grant No. 4222027, and Beijing Outstanding Young Scientist Program under Grant No. BJJWZYJH012019100020098.
And this work is also partially supported by the Outstanding Innovative Talents Cultivation Funded Programs 2022 of Renmin University of China. Xin Zhao is the corresponding author.
1. What is the main contribution of the paper regarding knowledge graph question answering?
2. What are the strengths and weaknesses of the proposed approach compared to other works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the implementation details and comparisons with other baseline models?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a novel solution that uses pretrained language models to fuse the retrieval and reasoning stages for knowledge graph question answering. The authors also conduct extensive experiments to verify the effectiveness of the proposed model.
Strengths And Weaknesses
Strengths: This paper proposes a novel solution that uses pretrained language models to fuse the retrieval and reasoning stages for knowledge graph question answering.
Weaknesses:
1. Please present more implementation details about the baselines. It seems that most of the baselines are GNN-based models, which do not include additional knowledge, whereas the solution in this paper uses pretrained models to introduce additional knowledge; this makes the experiments less comparable and persuasive. To render this paper more convincing, I suggest the authors present more baseline models that use similar pretrained models for this task. To name a few:
a. Xin Huang, Jung-Jae Kim, and Bowei Zou. 2021. Unseen Entity Handling in Complex Question Answering over Knowledge Base via Language Generation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 547–557, Punta Cana, Dominican Republic. Association for Computational Linguistics.
b. Rajarshi Das, Manzil Zaheer, Dung Thai, Ameya Godbole, Ethan Perez, Jay Yoon Lee, Lizhen Tan, Lazaros Polymenakos, and Andrew McCallum. 2021. Case-based Reasoning for Natural Language Queries over Knowledge Bases. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9594–9611, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Clarity, Quality, Novelty And Reproducibility
This paper is well written and the solution is somewhat novel.
ICLR
Title UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph Abstract Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question on a large-scale Knowledge Graph (KG). To cope with the vast search space, existing work usually adopts a two-stage approach: it first retrieves a relatively small subgraph related to the question and then performs the reasoning on the subgraph to find the answer entities accurately. Although these two stages are highly related, previous work employs very different technical solutions for developing the retrieval and reasoning models, neglecting their relatedness in task essence. In this paper, we propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning. For model architecture, UniKGQA consists of a semantic matching module based on a pre-trained language model (PLM) for question-relation semantic matching, and a matching information propagation module to propagate the matching information along the directed edges on KGs. For parameter learning, we design a shared pre-training task based on questionrelation matching for both retrieval and reasoning models, and then propose retrievaland reasoning-oriented fine-tuning strategies. Compared with previous studies, our approach is more unified, tightly relating the retrieval and reasoning stages. Extensive experiments on three benchmark datasets have demonstrated the effectiveness of our method on the multi-hop KGQA task. Our codes and data are publicly available at https://github.com/RUCAIBox/UniKGQA. 1 INTRODUCTION With the availability of large-scale knowledge graphs (KGs), such as Freebase (Bollacker et al., 2008) and Wikidata (Tanon et al., 2016), knowledge graph question answering (KGQA) has become an important research topic that aims to find the answer entities of natural language questions from KGs. Recent studies (Lan et al., 2021) mainly focus on multi-hop KGQA, a more complex scenario where sophisticated multi-hop reasoning over edges (or relations) is required to infer the correct answer on the KG. We show an example in Figure 1(a). Given the question “Who is the wife of the nominee for The Jeff Probst Show”, the task goal is to find a reasoning path from the topic entity “The Jeff Probst Show” to the answer entities “Shelley Wright” and “Lisa Ann Russell”. Faced with the vast search space in large-scale KGs, previous work (Sun et al., 2018; 2019) typically adopts a retrieval-then-reasoning approach, to achieve a good trade-off. Generally, the retrieval stage aims to extract relevant triples from the large-scale KG to compose a relatively smaller question-relevant subgraph, while the reasoning stage focuses on accurately finding the answer entities from the retrieved subgraph. Although the purposes of the two stages are different, both stages * Equal contribution. B Corresponding author. The Jeff Probst Show nominee Jeff Probst Shelley Wright Lisa Ann Russell is_a Talk show CBS Television Distribution distributed by Who is the wife of the nominee for The Jeff Probst Show ? Lisa Whelchel nominee Survivor spouse nominee is_a distributed by Who is the wife of the nominee for The Jeff Probst Show ? 
The Jeff Probst Show Talk show CBS Television Distribution Jeff Probst Lisa Whelchel Lisa Ann Russell Shelley Wright Survivor Θ𝑝 Θ𝑜 Γ𝑝 Γ𝑜 R et ri ev al R ea so n in g Pre-training Fine-tuning update by QRM (c) the overall learning procedure(b) an example of abstract subgraph(a) an example of multi-hop KGQA TV producer TV producer update by RAS initialize Γ𝑜 with Θ𝑜 and update it by RRS initialize Γ𝑝 with Θ𝑝 and fix it Figure 1: Illustrative examples and learning procedure of our work. need to evaluate the semantic relevance of a candidate entity with respect to the question (for removal or reranking), which can be considered as a semantic matching problem in essence. For measuring the entity relevance, relation-based features, either direct relations (Miller et al., 2016) or composite relation paths (Sun et al., 2018), have been shown to be particularly useful for building the semantic matching models. As shown in Figure 1(a), given the question, it is key to identify the semantically matched relations and the composed relation path in the KG (e.g., “nominee → spouse”) for finding the correct answer entities. Since the two stages cope with different scales of search space on KGs (e.g., millions v.s. thousands), they usually adopt specific technical solutions: the former prefers more efficient methods focusing on the recall performance (Sun et al., 2018), while the latter prefers more capable methods for modeling fined-grained matching signals (He et al., 2021). Considering the same essence for both stages, this work aims to push forwards the research on multihop KGQA by investigating the following problem: can we design a unified model architecture for both stages to derive a better performance? To develop a unified model architecture for multi-hop KGQA, a major merit is that we can tightly relate the two stages and enhance the sharing of the relevance information. Although the two stages are highly related, previous studies usually treat them separately in model learning: only the retrieved triples are passed from the retrieval stage to the reasoning stage, while the rest of the useful signal for semantic matching has been neglected in the pipeline framework. Such an approach is likely to lead to sub-optimal or inferior performance, since multi-hop KGQA is a very challenging task, requiring elaborate solutions that sufficiently leverage various kinds of relevance information from the two stages. However, there are two major issues about developing a unified model architecture for multi-hop KGQA: (1) How to cope with very different scales of search space for the two stages? (2) How to effectively share or transfer useful relevance information across the two stages? For the first issue, instead of letting the same model architecture directly fit very different data distributions, we propose a new subgraph form to reduce the node scale at the retrieval stage, namely abstract subgraph that is composed by merging the nodes with the same relations from the KG (see Figure 1(b)). For the second issue, based on the same model architecture, we design an effective learning approach for the two stages, so that we can share the same pre-trained parameters and use the learned retrieval model to initialize the reasoning model (see Figure 1(c)). To this end, in this paper, we propose UniKGQA, a unified model for multi-hop KGQA task. 
Specifically, UniKGQA consists of a semantic matching module based on a PLM for question-relation semantic matching, and a matching information propagation module to propagate the matching information along the directed edges of the KG. To learn the parameters, we design both pre-training (i.e., question-relation matching) and fine-tuning (i.e., retrieval- and reasoning-oriented learning) strategies based on the unified architecture. Compared with previous work on multi-hop KGQA, our approach is more unified and simplified, tightly relating the retrieval and reasoning stages. To our knowledge, this is the first work that unifies retrieval and reasoning in both model architecture and learning for the multi-hop KGQA task. To evaluate our approach, we conduct extensive experiments on three benchmark datasets. On the difficult datasets, WebQSP and CWQ, we outperform existing state-of-the-art baselines by a large margin (e.g., 8.1% improvement of Hits@1 on WebQSP and 2.0% improvement of Hits@1 on CWQ).

2 PRELIMINARY
In this section, we introduce the notations used throughout the paper and then formally define the multi-hop KGQA task.

Knowledge Graph (KG). A knowledge graph typically consists of a set of triples, denoted by G = {⟨e, r, e′⟩ | e, e′ ∈ E, r ∈ R}, where E and R denote the entity set and relation set, respectively. A triple ⟨e, r, e′⟩ describes the fact that a relation r exists between head entity e and tail entity e′. Furthermore, we denote the set of neighborhood triples that an entity e belongs to by Ne = {⟨e, r, e′⟩ ∈ G} ∪ {⟨e′, r, e⟩ ∈ G}. Let r⁻¹ denote the inverse relation of r; we can then represent a triple ⟨e, r, e′⟩ by its inverse triple ⟨e′, r⁻¹, e⟩. In this way, we can simplify the definition of the neighborhood triples of an entity e as Ne = {⟨e′, r, e⟩ ∈ G}. We further use E ∈ R^{d×|E|} and R ∈ R^{d×|R|} to denote the embedding matrices for entities and relations in the KG, respectively.

Multi-hop Knowledge Graph Question Answering (Multi-hop KGQA). Given a natural language question q and a KG G, the task of KGQA aims to find the answer entities of the question over the KG, denoted by the answer set Aq ⊆ E. Following previous work (Sun et al., 2018; 2019), we assume that the entities mentioned in the question (e.g., "The Jeff Probst Show" in Figure 1(a)) are marked and linked with entities on the KG, namely topic entities, denoted as Tq ⊂ E. In this work, we focus on solving the multi-hop KGQA task, where the answer entities are multiple hops away from the topic entities over the KG. Considering the trade-off between efficiency and accuracy, we follow existing work (Sun et al., 2018; 2019) that solves this task with a retrieval-then-reasoning framework. In this two-stage framework, given a question q and topic entities Tq, the retrieval model aims to retrieve a small subgraph Gq from the large-scale input KG G, while the reasoning model searches for the answer entities Aq by reasoning over the retrieved subgraph Gq.

Abstract Subgraph. Based on KGs, we further introduce the concept of the abstract subgraph, which is derived by reduction from an original subgraph. Specifically, given a subgraph related to question q, denoted as Gq ⊂ G, we merge the tail entities of the triples with the same prefix (i.e., the same head entity and relation: ⟨e, r, ?⟩) and generate a corresponding abstract node ẽ to represent the set of tail entities, so that ẽ = {e′ | ⟨e, r, e′⟩ ∈ Gq}. Similarly, we can perform the same operation on the head entities.
To unify the notations, we transform an original node that cannot be merged into an abstract node by creating a set containing only the node itself. In this way, the corresponding abstract subgraph can be denoted as G̃q = {⟨ẽ, r, ẽ′⟩ | ∃e ∈ ẽ, ∃e′ ∈ ẽ′, ⟨e, r, e′⟩ ∈ Gq}, where each node ẽ is an abstract node representing a set of one or more original nodes. We present illustrative examples of an original subgraph and its abstract subgraph in Figure 1(a) and Figure 1(b).

3 APPROACH
In this section, we present our proposed UniKGQA, which unifies retrieval and reasoning for multi-hop KGQA. The major novelty is that we introduce a unified model architecture for both stages (Section 3.1) and design an effective learning approach involving both specific pre-training and fine-tuning strategies (Section 3.2). Next, we describe the two parts in detail.

3.1 UNIFIED MODEL ARCHITECTURE
We consider a general input form for both retrieval and reasoning, and develop the base architecture by integrating two major modules: (1) the semantic matching (SM) module, which employs a PLM to perform semantic matching between questions and relations; and (2) the matching information propagation (MIP) module, which propagates the semantic matching information on the KG. We present an overview of the model architecture in Figure 2. Next, we describe the three parts (the input formulation, SM, and MIP) in detail.

General Input Formulation. To support both the retrieval and reasoning stages, we consider a general form for evaluating entity relevance, where a question q and a subgraph Gq of candidate entities are given. For the retrieval stage, Gq is an abstract subgraph that incorporates abstract nodes to merge entities sharing the same relation. For the reasoning stage, Gq is constructed from the subgraph produced by the retrieval stage, without abstract nodes. Such a general input formulation enables the development of a unified model architecture for the two different stages. In what follows, we describe the approach in a general way, without reference to a specific stage.

Semantic Matching (SM). The SM module produces the semantic matching features between the question q and a triple ⟨e′, r, e⟩ from the given subgraph Gq. Considering the excellent modeling capacity of the PLM, we leverage it to produce text encodings as the representations of question q and relation r. Specifically, we first utilize the PLM to encode the texts of q and r, and employ the output representation of the [CLS] token as their representations:

h_q = PLM(q),  h_r = PLM(r).  (1)

Based on h_q and h_r, and inspired by the NSM model (He et al., 2021), we obtain the vector m^{(t)}_{⟨e′,r,e⟩} capturing the semantic matching features between question q and triple ⟨e′, r, e⟩ at the t-th step by adopting corresponding projection layers:

m^{(t)}_{⟨e′,r,e⟩} = σ( h_q W_Q^{(t)} ⊙ h_r W_R^{(t)} ),  (2)

where m^{(t)}_{⟨e′,r,e⟩} ∈ R^d, W_Q^{(t)}, W_R^{(t)} ∈ R^{h×d} are the parameters of the t-th-step projection layers, h and d are the hidden dimensions of the PLM and the feature vector, respectively, σ is the sigmoid activation function, and ⊙ is the Hadamard product.

Matching Information Propagation (MIP). Based on the generated semantic matching features, the MIP module first aggregates them to update the entity representations and then uses these to obtain the entity match scores. To initialize the match scores, given a question q and a subgraph Gq, for each entity e_i ∈ Gq we set the match score between q and e_i as s^{(1)}_{e_i} = 1 if e_i is a topic entity and s^{(1)}_{e_i} = 0 otherwise.
At the t-th step, we use the match scores of the head entities computed at the previous step, s^{(t−1)}_{e′}, as weights, and aggregate the matching features from the neighboring triples to obtain the representation of the tail entity:

e^{(t)} = W_E^{(t)} [ e^{(t−1)} ; Σ_{⟨e′,r,e⟩∈Ne} s^{(t−1)}_{e′} · m^{(t)}_{⟨e′,r,e⟩} ],  (3)

where e^{(t)} ∈ R^d is the representation of entity e at the t-th step, and W_E^{(t)} ∈ R^{2d×d} is a learnable matrix. At the first step, since there are no match scores yet, following the NSM model (He et al., 2021) we directly aggregate the representations of the entity's one-hop relations as the entity representation: e^{(1)} = σ( Σ_{⟨e′,r,e⟩∈Ne} r · U ), where U ∈ R^{2d×d} is a learnable matrix. Based on the representations of all entities, E^{(t)} ∈ R^{d×n}, we update the entity match scores with the softmax function:

s^{(t)} = softmax( E^{(t)⊤} v ),  (4)

where v ∈ R^d is a learnable vector. After T iterations, we obtain the final entity match scores s^{(T)}, a probability distribution over all entities in the subgraph Gq. These match scores measure the likelihood of each entity being an answer to the given question q, and are used in both the retrieval and reasoning stages.

3.2 MODEL TRAINING
In our approach, we have a retrieval model and a reasoning model for the two stages of multi-hop KGQA. Since the two models adopt the same architecture, we introduce Θ and Γ to denote the model parameters used for the retrieval and reasoning stages, respectively. As shown in Section 3.1, our architecture contains two groups of parameters, namely the underlying PLM and the other parameters for matching and propagation. Thus, Θ and Γ can be decomposed as Θ = {Θp, Θo} and Γ = {Γp, Γo}, where the subscripts p and o denote the PLM parameters and the other parameters in our architecture, respectively. To learn these parameters, we design both pre-training (i.e., question-relation matching) and fine-tuning (i.e., retrieval- and reasoning-oriented learning) strategies based on the unified architecture. Next, we describe the model training approach.

Pre-training with Question-Relation Matching (QRM). For pre-training, we mainly focus on learning the parameters of the underlying PLM (i.e., Θp and Γp). In the implementation, we let the two models share the same copy of PLM parameters, i.e., Θp = Γp. As shown in Section 3.1, the basic capacity of the semantic matching module is to model the relevance between a question and a single relation (Eq. 2), which is based on the text encoding from the underlying PLM. Therefore, we design a contrastive pre-training task based on question-relation matching. Specifically, we adopt the contrastive learning objective (Hadsell et al., 2006) to align the representations of relevant question-relation pairs while pushing apart the others. To collect relevant question-relation pairs, given an example consisting of a question q, topic entities Tq, and answer entities Aq, we extract all shortest paths between Tq and Aq from the entire KG and regard all relations within these paths as relevant to q, denoted as R⁺. In this way, we can obtain a large number of weakly supervised examples.
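As an illustration of this weak-supervision step, the following is a minimal sketch of collecting R⁺ with networkx; the function and variable names are our own, not from the paper's codebase, and the triple format is assumed to be (head, relation, tail) strings.

```python
import networkx as nx

def collect_relevant_relations(kg_triples, topic_entities, answer_entities):
    """Return R+, the relations appearing on any shortest topic->answer path.

    Parallel relations between the same entity pair are kept in a set on the
    edge; inverse triples <e', r^-1, e> are added as in Section 2.
    """
    g = nx.DiGraph()
    for head, rel, tail in kg_triples:
        for u, v, r in ((head, tail, rel), (tail, head, rel + "^-1")):
            if g.has_edge(u, v):
                g[u][v]["relations"].add(r)
            else:
                g.add_edge(u, v, relations={r})
    relevant = set()
    for t in topic_entities:
        for a in answer_entities:
            try:
                # enumerate every minimum-length path from topic to answer
                for path in nx.all_shortest_paths(g, source=t, target=a):
                    for u, v in zip(path, path[1:]):
                        relevant |= g[u][v]["relations"]
            except (nx.NetworkXNoPath, nx.NodeNotFound):
                continue  # this topic entity cannot reach the answers
    return relevant
```

Every relation collected this way is paired with the question as a positive example for the contrastive objective described next.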
During pre-training, for each question q_i we randomly sample a relevant relation r⁺_i ∈ R⁺ and use a contrastive learning loss:

L_PT = −log [ e^{sim(q_i, r⁺_i)/τ} / Σ_{j=1}^{M} ( e^{sim(q_i, r⁺_j)/τ} + e^{sim(q_i, r⁻_j)/τ} ) ],  (5)

where τ is a temperature hyperparameter, r⁻_j is a randomly sampled negative relation, M is the number of question-relation pairs considered for each question (e.g., the in-batch examples), sim(q, r) is the cosine similarity, and q, r are the question and relation encodings produced by the PLM of the SM module (Eq. 1). In this way, the question-relation matching capacity is enhanced by pre-training the PLM parameters. Note that the PLM parameters are fixed after pre-training.

Fine-tuning for Retrieval on Abstract Subgraphs (RAS). After pre-training, we first fine-tune the entire model to learn the parameters Θo for the retrieval task. Recall that we transform subgraphs into abstract subgraphs, where abstract nodes are incorporated to merge entities sharing the same relation. Our MIP module (Section 3.1) produces the matching scores s_A of the nodes in a subgraph (Eq. 4), where the subscript A denotes that the nodes come from an abstract subgraph. Furthermore, we use the labeled answers to construct the ground-truth vector, denoted by s*_A, in which an abstract node is set to 1 if it contains an answer entity. We then minimize the KL divergence between the learned and ground-truth matching score vectors:

L_RAS = D_KL( s_A, s*_A ).  (6)

After fine-tuning with the RAS loss, the retrieval model is effectively learned. We then use it to retrieve the subgraph for a given question q by selecting the top-K ranked nodes according to their match scores. Note that only nodes within a reasonable distance from the topic entities are selected into the subgraph, which ensures a relatively small yet relevant subgraph Gq for the subsequent reasoning stage.

Fine-tuning for Reasoning on Retrieved Subgraphs (RRS). After fine-tuning the retrieval model, we continue to fine-tune the reasoning model by learning the parameters Γo. With the fine-tuned retrieval model, we can obtain a smaller subgraph Gq for each question q. In the reasoning stage, we focus on performing accurate reasoning to find the answer entities, so we recover the original nodes within the abstract nodes and the original relations among them. Since the retrieval and reasoning stages are highly dependent, we first initialize the parameters of the reasoning model with those of the retrieval model: Θo → Γo. Then, following Eq. 4, we fit the learned matching scores (denoted by s_R) to the ground-truth vector (denoted by s*_R) with the KL loss:

L_RRS = D_KL( s_R, s*_R ),  (7)

where the subscript R denotes that the nodes come from a retrieved subgraph. After fine-tuning with the RRS loss, we use the learned reasoning model to select the top-n ranked entities as the answer list according to the match scores.

Table 1: Comparison of different methods.

Methods  | Retrieval | Reasoning | Parameters Transferring
GraftNet | PPR       | GraftNet  | ✗
PullNet  | LSTM      | GraftNet  | ✗
NSM      | PPR       | NSM       | ✗
SR+NSM   | PLM       | NSM       | ✗
UniKGQA  | UniKGQA   | UniKGQA   | ✓

As shown in Figure 1(c), the overall training procedure consists of: (1) pre-training Θp with question-relation matching; (2) fixing Θp and fine-tuning Θo for retrieval on abstract subgraphs; and (3) fixing Γp (initialized by Θp) and fine-tuning Γo (initialized by Θo) for reasoning on the retrieved subgraphs.
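To make the interplay of Eqs. (2)–(4) concrete, here is a minimal PyTorch sketch of one SM+MIP step over a subgraph. The tensor layout (triple lists with head/tail index vectors) and all names are our own simplifications; batching and abstract-node handling are omitted, and h_q, h_r would come from the PLM encodings of Eq. (1).

```python
import torch
import torch.nn as nn

class UniKGQAStep(nn.Module):
    """One SM + MIP step over a subgraph with n entities and m triples.

    h is the PLM hidden size and d the feature size, following Section 3.1.
    """
    def __init__(self, h=768, d=768):
        super().__init__()
        self.W_Q = nn.Linear(h, d, bias=False)  # W_Q^{(t)} in Eq. (2)
        self.W_R = nn.Linear(h, d, bias=False)  # W_R^{(t)} in Eq. (2)
        self.W_E = nn.Linear(2 * d, d)          # W_E^{(t)} in Eq. (3)
        self.v = nn.Linear(d, 1, bias=False)    # v in Eq. (4)

    def forward(self, h_q, h_r, head_idx, tail_idx, e_prev, s_prev):
        # h_q: (h,) question encoding; h_r: (m, h) per-triple relation encodings
        # Eq. (2): per-triple matching features, shape (m, d)
        m = torch.sigmoid(self.W_Q(h_q).unsqueeze(0) * self.W_R(h_r))
        # Eq. (3): weight each triple by the head entity's previous match score
        weighted = s_prev[head_idx].unsqueeze(-1) * m
        agg = torch.zeros_like(e_prev).index_add_(0, tail_idx, weighted)
        e = self.W_E(torch.cat([e_prev, agg], dim=-1))  # (n, d)
        # Eq. (4): new match scores, a distribution over all n entities
        s = torch.softmax(self.v(e).squeeze(-1), dim=0)
        return e, s
```

In use, s_prev is initialized as in the MIP description (1 for topic entities, 0 otherwise), and looping this step T times yields the final scores s^{(T)} of Eq. (4).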
Our work provides a novel unified model in which the retrieval and reasoning stages share the same reasoning capacity. In Table 1, we summarize the differences between our method and several popular methods for multi-hop KGQA, including GraftNet (Sun et al., 2018), PullNet (Sun et al., 2019), NSM (He et al., 2021), and SR+NSM (Zhang et al., 2022). As we can see, existing methods usually adopt different models for the retrieval and reasoning stages, whereas our approach is more unified. As a major benefit, information can be effectively shared and reused between the two stages: we initialize the reasoning model with the learned retrieval model.

4 EXPERIMENT
4.1 EXPERIMENTAL SETTING
Datasets. Following existing work on multi-hop KGQA (Sun et al., 2018; 2019; He et al., 2021; Zhang et al., 2022), we adopt three benchmark datasets, namely MetaQA (Zhang et al., 2018), WebQuestionsSP (WebQSP) (Yih et al., 2015), and Complex WebQuestions 1.1 (CWQ) (Talmor & Berant, 2018), to evaluate our model. Table 2 shows the statistics of the three datasets. Since previous work has achieved nearly full marks on MetaQA, WebQSP and CWQ are our primary evaluation datasets. We present a detailed description of these datasets in Appendix A.

Evaluation Protocol. For retrieval performance, we follow Zhang et al. (2022) and evaluate models by the answer coverage rate (%), i.e., the proportion of questions whose retrieved subgraphs contain at least one answer. For reasoning performance, we follow Sun et al. (2018; 2019) and treat reasoning as a ranking task for evaluation. Given each test question, we rely on the predicted probabilities of the evaluated model to rank all candidate entities and then evaluate whether the top-1 answer is correct (Hits@1). Since a question may correspond to multiple answers, we also adopt the widely used F1 metric.

Baselines. We consider the following baselines for performance comparison: (1) reasoning-focused methods: KV-Mem (Miller et al., 2016), GraftNet (Sun et al., 2018), EmbedKGQA (Saxena et al., 2020), NSM (He et al., 2021), and TransferNet (Shi et al., 2021); (2) retrieval-augmented methods: PullNet (Sun et al., 2019), SR+NSM (Zhang et al., 2022), and SR+NSM+E2E (Zhang et al., 2022). We present a detailed description of these baselines in Appendix B.

4.2 EVALUATION RESULTS
Table 3 shows the results of the different methods on the five multi-hop KGQA datasets. It can be seen that: First, most baselines perform very well on the three MetaQA datasets (100% Hits@1). This is because these datasets are based on a few hand-crafted question templates and have only nine relation types in the given KG, so the model can easily capture the relevant semantics between the questions and relations to perform reasoning. To examine this further, we conduct an extra one-shot experiment on the MetaQA datasets and present the details in Appendix E. Second, TransferNet performs better than GraftNet, EmbedKGQA, and NSM with the same retrieval method. It attends to question words to compute relation scores and transfers entity scores along the relations, which effectively captures the question-path matching semantics. Besides, SR+NSM and SR+NSM+E2E outperform NSM and PullNet by a large margin, because they both leverage a PLM-based relation-path retriever to improve retrieval performance and thus reduce the difficulty of the subsequent reasoning stage.
Finally, on WebQSP and CWQ, our proposed UniKGQA is substantially better than all the competitive baselines. Unlike the baselines, which rely on independent models for retrieval and reasoning, our approach uses a unified architecture to accomplish both. This unified architecture can pre-learn the essential capability of question-relation semantic matching for both stages, and can also effectively transfer relevance information from the retrieval stage to the reasoning stage by initializing the reasoning model with the parameters of the retrieval model.

In our approach, we fix the parameters of the PLM-based encoder for efficiency; updating them can further improve performance, which allows practitioners to trade off efficiency and effectiveness when employing our approach in real-world applications. We study this by proposing two variants of UniKGQA: (1) w QU, which updates the parameters of the PLM encoder only when encoding questions, and (2) w QU, RU, which updates the parameters of the PLM encoder when encoding both questions and relations. Both variants boost the performance of UniKGQA, and updating the PLM encoder only when encoding questions achieves comparable or even better performance than updating it for both. A possible reason is that updating the PLM encoder when encoding both questions and relations may lead to overfitting on the downstream tasks. Therefore, it is promising to update the PLM encoder of UniKGQA only when encoding questions, as this achieves better performance with relatively little additional computation cost.

4.3 FURTHER ANALYSIS
Retrieval Evaluation. We evaluate the effectiveness of UniKGQA at retrieving a smaller subgraph with a higher answer coverage rate for a given question. Following the evaluation principles of SR (Zhang et al., 2022), we measure this capacity from three aspects: the subgraph size, the answer coverage rate, and the final QA performance. Concretely, we first compare UniKGQA with SR (Zhang et al., 2022) and the PPR-based heuristic retrieval method (Sun et al., 2018) using the answer coverage rate curve w.r.t. the number of graph nodes. Then, we compare UniKGQA with SR+NSM (Zhang et al., 2022) and PPR+NSM (He et al., 2021) based on their final QA performance. To further study the effectiveness of our approach, we add an extra variant, UniKGQA+NSM, which relies on UniKGQA for retrieval and NSM for reasoning. The left and middle plots of Figure 3 show the comparison results. As we can see, for the same size of retrieved subgraph, UniKGQA and SR have significantly larger answer coverage rates than PPR, which demonstrates the effectiveness and necessity of a learnable retrieval model. Besides, although the curves of UniKGQA and SR are very similar, UniKGQA achieves better final reasoning performance than SR+NSM. The reason is that UniKGQA transfers relevance information from the retrieval stage to the reasoning stage through the unified architecture, learning a more effective reasoning model. This finding is further verified by comparing UniKGQA with UniKGQA+NSM.

Ablation Study. UniKGQA contains two important training strategies to improve performance: (1) pre-training with question-relation matching, and (2) initializing the parameters of the reasoning model with those of the retrieval model. Here, we conduct an ablation study to verify their effectiveness.
We consider three variants: (1) w/o Pre, which removes the pre-training procedure; (2) w/o Trans, which removes the initialization with the parameters of the retrieval model; and (3) w/o Pre, Trans, which removes both. We show the results of the ablation study in Table 4. All of these variants underperform the complete UniKGQA, which indicates that the two training strategies are both important for the final performance. This observation also verifies that UniKGQA is indeed capable of transferring and reusing the learned knowledge to improve final performance.

Fine-tuning Efficiency. Since our UniKGQA model transfers the knowledge learned from the pre-training stage and the retrieval task, it can be easily adapted to the downstream reasoning task, enabling efficient fine-tuning with only a few steps. To explore this, we compare the performance of UniKGQA with the strong baseline NSM as the number of fine-tuning epochs increases, using the same retrieved subgraphs. The results are presented in the right plot of Figure 3. First, before fine-tuning (i.e., at epoch zero), UniKGQA already achieves performance comparable to the best result NSM reaches at its last epoch. This indicates that the reasoning model successfully leverages knowledge from prior tasks through the parameters initialized from the retrieval model. After only two epochs of fine-tuning, UniKGQA already achieves good performance, which verifies that our model can be fine-tuned efficiently with very few epochs. To further investigate UniKGQA, we conduct a parameter sensitivity analysis w.r.t. pre-training steps, hidden dimensions, and the number of retrieved nodes K, shown in Appendix H.

5 RELATED WORK
Multi-hop Knowledge Graph Question Answering. Multi-hop KGQA aims to find answer entities that are multiple hops away from the topic entities in a large-scale KG. Considering both efficiency and accuracy, existing work (Sun et al., 2018; 2019; Zhang et al., 2022) typically first retrieves a question-relevant subgraph to reduce the search space and then performs multi-hop reasoning on it. Such a retrieval-then-reasoning paradigm has shown superiority over reasoning directly on the entire KG (Chen et al., 2019; Saxena et al., 2020). The retrieval stage focuses on extracting a relatively small subgraph involving the answer entities. A commonly used approach is to collect entities within a few hops of the topic entities to compose the subgraph and to filter out those with low Personalized PageRank scores to reduce the graph size (Sun et al., 2018; He et al., 2021). Despite its simplicity, such an approach neglects the question semantics, limiting retrieval efficiency and accuracy. To address this, several works (Sun et al., 2019; Zhang et al., 2022) devise retrievers based on semantic matching with neural networks (e.g., LSTMs or PLMs). Starting from the topic entities, these retrievers iteratively measure the semantic relevance between the question and neighboring entities or relations, and add suitable ones to the subgraph. In this way, a smaller but more question-relevant subgraph is constructed. The reasoning stage aims to accurately find the answer entities of the given question by walking along the relations, starting from the topic entities.
Early work (Miller et al., 2016; Sun et al., 2018; 2019; Jiang et al., 2022) relies on specialized network architectures (e.g., Key-Value Memory Networks or Graph Convolutional Networks) to model the multi-hop reasoning process. Recent work further enhances the reasoning capacity of these networks via intermediate supervision signals (He et al., 2021), knowledge transfer (Shi et al., 2021), and related techniques. However, all these methods design different model architectures and training methods for the retrieval and reasoning stages, respectively, neglecting the similarity and intrinsic connection between the two stages. Recently, some works parse the question into a structured query language (e.g., SPARQL) (Lan et al., 2021; Das et al., 2021; Huang et al., 2021) and execute it with a query engine to obtain answers. Here, an encoder-decoder architecture (e.g., T5 (Raffel et al., 2020)) is generally adopted to produce the structured queries, and annotated structured queries are required for training.

Dense Retrieval. Given a query, the dense retrieval task aims to select relevant documents from a large-scale document pool. Unlike traditional sparse term-based retrieval methods, e.g., TF-IDF (Chen et al., 2017) and BM25 (Robertson & Zaragoza, 2009), dense retrieval methods (Karpukhin et al., 2020; Zhou et al., 2022a;b) rely on a bi-encoder architecture to map queries and documents into low-dimensional dense vectors. Their relevance scores can then be measured with vector distance metrics (e.g., cosine similarity), which supports efficient approximate nearest neighbor (ANN) search algorithms. In multi-hop KGQA, starting from the topic entities, we need to select the relevant neighboring triples from a large-scale KG to induce a path to the answer entities, which can be seen as a constrained dense retrieval task. Therefore, in this work, we also adopt a bi-encoder architecture to map questions and relations into dense vectors, and then perform retrieval and reasoning based on their vector distances.

6 CONCLUSION
In this work, we proposed a novel approach for the multi-hop KGQA task. As the major technical contribution, UniKGQA introduces a unified model architecture based on PLMs for both the retrieval and reasoning stages, consisting of a semantic matching module and a matching information propagation module. To cope with the different scales of search space in the two stages, we proposed generating abstract subgraphs for the retrieval stage, which significantly reduces the number of nodes to be searched. Furthermore, we designed an effective model learning method with both pre-training (i.e., question-relation matching) and fine-tuning (i.e., retrieval- and reasoning-oriented learning) strategies based on the unified architecture. With this unified architecture, the proposed learning method effectively enhances the sharing and transfer of relevance information between the two stages. We conducted extensive experiments on three benchmark datasets, and the results show that our unified model outperforms the competitive methods, especially on the more challenging datasets (i.e., WebQSP and CWQ).

ACKNOWLEDGMENTS
This work was partially supported by National Natural Science Foundation of China under Grant No. 62222215, Beijing Natural Science Foundation under Grant No. 4222027, and Beijing Outstanding Young Scientist Program under Grant No. BJJWZYJH012019100020098.
This work was also partially supported by the Outstanding Innovative Talents Cultivation Funded Programs 2022 of Renmin University of China. Xin Zhao is the corresponding author.

A DATASETS
We adopt three widely used multi-hop KGQA datasets in this work:
• MetaQA (Zhang et al., 2018) contains more than 400k questions in the movie domain; the answer entities are up to 3 hops away from the topic entities. According to the number of hops, the dataset is split into three sub-datasets: MetaQA-1hop, MetaQA-2hop, and MetaQA-3hop.
• WebQuestionsSP (WebQSP) (Yih et al., 2015) contains 4,737 questions whose answer entities require up to 2-hop reasoning on the Freebase KG (Bollacker et al., 2008). We use the same train/valid/test splits as GraftNet (Sun et al., 2018).
• Complex WebQuestions 1.1 (CWQ) (Talmor & Berant, 2018) is constructed from WebQSP by extending the question entities or adding constraints to the answers. These questions require up to 4-hop reasoning on the Freebase KG (Bollacker et al., 2008).
Existing work has demonstrated that the training data for MetaQA is more than sufficient (Shi et al., 2021; He et al., 2021), hence all the comparison methods in our experiments achieve very high performance on it. We further analyze the three MetaQA datasets in terms of the number of templates, the average number of training cases per template, and the number of relations used for constructing questions, shown in Table 5. In summary, more training cases and simpler questions make MetaQA easier to solve.

B BASELINES
We consider the following baseline methods for performance comparison:
• KV-Mem (Miller et al., 2016) maintains a key-value memory table to store KG facts and conducts multi-hop reasoning by performing iterative read operations on the memory.
• GraftNet (Sun et al., 2018) first retrieves the question-relevant subgraph and text sentences from the KG and Wikipedia, respectively, with a heuristic method. It then adopts a graph neural network to perform multi-hop reasoning on a heterogeneous graph built from the subgraph and text sentences.
• PullNet (Sun et al., 2019) trains a graph retrieval model composed of an LSTM and a graph neural network, instead of the heuristic retrieval used in GraftNet, and then conducts multi-hop reasoning with GraftNet.
• EmbedKGQA (Saxena et al., 2020) reformulates the multi-hop reasoning of GraftNet as a link prediction task by matching pre-trained entity embeddings with question representations from a PLM.
• NSM (He et al., 2021) first conducts retrieval following GraftNet and then adapts the neural state machine (Hudson & Manning, 2019), originally used in visual reasoning, for multi-hop reasoning on the KG.
• TransferNet (Shi et al., 2021) first conducts retrieval following GraftNet and then performs multi-hop reasoning on a KG or a text-formed relation graph in a transparent framework. The reasoning model consists of a PLM for question encoding and a graph neural network for updating the relevance scores between entities and the question.
• SR+NSM (Zhang et al., 2022) first learns a PLM-based relation-path retriever for effective retrieval and then uses the NSM reasoner for multi-hop reasoning.
• SR+NSM+E2E (Zhang et al., 2022) further fine-tunes SR+NSM in an end-to-end way.

C KNOWLEDGE GRAPH PREPROCESSING DETAILS
We preprocess the full Freebase following existing work (Sun et al., 2018; He et al., 2021).
For MetaQA, we directly use the WikiMovies subset provided with the datasets, whose size is about 134,741 triples. For the WebQSP and CWQ datasets, we set the maximum number of hops for retrieval and reasoning to two and four, respectively. Based on the topic entities labeled in the original datasets, we keep, for each sample, the neighborhood subgraph consisting of entities within four hops of the topic entities. After this simple preprocessing, the size of the KG we use is 147,748,092 triples for WebQSP and 202,358,414 triples for CWQ. Based on the preprocessed KG, we conduct retrieval and reasoning with our proposed approach.

D IMPLEMENTATION DETAILS
During pre-training, we collect question-relation pairs based on the shortest relation paths between topic entities and answer entities, and then use these pairs to pre-train the RoBERTa-base (Liu et al., 2019) model with the contrastive learning objective. We set the temperature τ to 0.05 and select the best model by evaluating Hits@1 on the validation set. For retrieval and reasoning, we initialize the PLM module of our UniKGQA model with the contrastively pre-trained RoBERTa and set the hidden size of the other linear layers to 768. We optimize the parameters with the AdamW optimizer, with a learning rate of 0.00001 for the PLM module and 0.0005 for the other parameters. The batch size is set to 40. The number of reasoning steps is set to 4 for the CWQ dataset, 3 for the WebQSP and MetaQA-3hop datasets, 2 for MetaQA-2hop, and 1 for MetaQA-1hop. We preprocess the KGs for each dataset following existing work (Sun et al., 2018; He et al., 2021).

E ONE-SHOT EXPERIMENT FOR METAQA
Since the samples in MetaQA are more than sufficient, all the comparison methods in our experiments achieve very high performance on it. For example, our method and previous work (e.g., TransferNet and NSM) achieve more than 98% Hits@1 on MetaQA, suggesting that performance on this dataset may be saturated. To examine this assumption, we conduct few-shot experiments to verify the performance of different methods. Specifically, we follow the NSM paper (He et al., 2021) in conducting a one-shot experiment: we randomly sample just one training case for each question template from the original training set to form a one-shot training dataset. The numbers of training samples for MetaQA-1hop, MetaQA-2hop, and MetaQA-3hop are then 161, 210, and 150, respectively. We evaluate the performance of our approach and the strong baselines TransferNet and NSM trained on this new training set. As shown in Table 6, our method consistently outperforms these baselines on all three subsets.

F ABLATION STUDY OF OUR UNIFIED MODEL ARCHITECTURE
The unified model architecture is the key to our approach. Without it, it would be hard to share the question-relation matching capability enhanced by pre-training between the retrieval and reasoning stages, and also hard to transfer the relevance information for multi-hop KGQA learned in the retrieval stage to the reasoning stage. To verify this, we conduct an extra ablation study on the effect of adopting the unified model architecture only as the reasoning model or only as the retrieval model. We select an existing strong retrieval model (i.e., SR) and reasoning model (i.e., NSM) and compare the performance when each is combined with our UniKGQA. As shown in Table 7, all variants underperform our UniKGQA.
This indicates that using the unified model in both the retrieval and reasoning stages simultaneously is indeed the key reason for the improvement.

G ANALYSIS OF THE PRE-TRAINING STRATEGY
We conduct analysis experiments to investigate how the pre-training strategy (Pre) affects performance with or without updating the PLM (QU). The results are shown in Table 8. Once the pre-training strategy is removed, the model performance drops by 10.4% (2.1%) on WebQSP and 5.1% (3.3%) on CWQ when fixing (not fixing) the PLM. This indicates that the pre-training strategy is an important component of our approach. After pre-training, the PLM can be fixed for more efficient parameter optimization during fine-tuning.

H PARAMETER SENSITIVITY ANALYSIS
Pre-training Steps. Although the pre-training strategy is effective in our approach, too many pre-training steps are time-consuming and costly. Here, we investigate performance with respect to varying numbers of pre-training steps. As shown in the left plot of Figure 4, our method reaches its best performance with only a few pre-training steps (i.e., 2,800), already surpassing the best baseline TransferNet. This shows that our approach does not require many pre-training steps. Indeed, too many pre-training steps hurt model performance, possibly because the PLM overfits the contrastive learning objective.

Parameter Tuning. In our approach, two hyperparameters require tuning: (1) the hidden size of the linear layers d, and (2) the number of retrieved nodes K. We tune d among {64, 128, 256, 512, 768, 1024} and K among {1, 5, 10, 15, 20}. We show the results in the middle and right plots of Figure 4, compared with the best results for the reasoning and retrieval stages. Since K is a shared hyperparameter of UniKGQA and SR, we also report the results of SR with different K for a fair comparison. First, our method is robust to different hidden sizes, with performance consistently around 77.0; since the PLM uses 768 as its embedding size, d = 768 is also slightly better than the other values. Besides, as K increases, the answer coverage rate improves consistently; however, when K reaches 15 or even 20, the performance gain becomes relatively small. This means that the retrieved subgraphs are likely saturated, and further increasing K brings only marginal improvement.
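To make the QRM pre-training objective of Eq. (5) and the settings in Appendix D concrete, here is a minimal PyTorch sketch of the contrastive loss with in-batch positives and negatives. The batching scheme and function name are our assumptions, not the paper's exact implementation; only τ = 0.05 and the cosine similarity follow the text.

```python
import torch
import torch.nn.functional as F

def qrm_contrastive_loss(q_emb, pos_rel_emb, neg_rel_emb, tau=0.05):
    """Contrastive loss in the spirit of Eq. (5).

    q_emb:       (B, h) [CLS] encodings of B questions
    pos_rel_emb: (B, h) encodings of one sampled relevant relation per question
    neg_rel_emb: (B, h) encodings of one sampled negative relation per question
    """
    q = F.normalize(q_emb, dim=-1)
    pos = F.normalize(pos_rel_emb, dim=-1)
    neg = F.normalize(neg_rel_emb, dim=-1)
    # cosine similarity of each question against all positives and negatives;
    # the denominator of Eq. (5) sums over both sets (here, M = batch size B)
    logits = torch.cat([q @ pos.T, q @ neg.T], dim=1) / tau  # (B, 2B)
    labels = torch.arange(q.size(0), device=q.device)        # own positive
    return F.cross_entropy(logits, labels)
```

The cross-entropy over the concatenated similarity matrix reproduces the −log-softmax form of Eq. (5), with each question's own sampled relation as the positive class.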
1. What is the main contribution of the paper in multi-hop question-answering over Knowledge Graphs?
2. What are the strengths and weaknesses of the proposed UniKGQA model, particularly in its evaluation results and ablation study?
3. Do you have any questions regarding the explanation of Abstract Subgraph and its application to WebQSP/CWQ sets and MetaQA?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific issues that should be improved in the paper, such as figure clarity, terminology explanations, and proofreading errors?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes a model for multi-hop question answering over Knowledge Graphs. The core model responsible for relevance is trained once and shared between the initial retrieval of the subgraph and the detailed reasoning phases. A key claim of the paper is that by unifying the model for the two phases, we can get better answer quality than by handling them separately. The proposed UniKGQA model consists of two main parts: Pretrained Language Model (PLM)-based semantic matching and the propagation of matching information through the graphs. The PLM (RoBERTa in the implementation) is fine-tuned with task-specific data to capture the connection between entities and relationships. Its parameters are shared between the retrieval and reasoning phases. Evaluation was done on three different datasets. The UniKGQA model was compared with recent baselines and showed on-par results on two of them and significantly better results on WebQSP. The paper includes a detailed ablation study and different fine-tuning setups to show the benefit of using a joint model for the two phases.

Strengths And Weaknesses
Strengths:
- Evaluation results: the results on WebQSP show clear advantages of the proposed model. The ablation study and comparisons under different setups help support the paper's claim.
Weaknesses:
- Writing can be improved. Key abbreviations are not explained at first use (PLM / PPR / topic entity, etc.; it's always good to reduce the reader's guesswork). There are typos and proofreading misses.
- The explanation of Abstract Subgraph is not clear. What if the tail entities in the same set branch out to different 2nd-hop entities? It's not at all clear to the Reviewer. An example would help.
- It would be better to explain the key differences between the WebQSP/CWQ sets and MetaQA, as UniKGQA shows clear improvements on the former but not the latter. Giving some examples would help too.
List of concrete issues that should be improved:
- Figure 2: the order of h_m and h_q seems reversed.
- Page 4, MIP section: it's better to explain "topic entity" when it is first used.
- Page 4, near the bottom: "are a learnable vector" -> "is a ..."; "of if the entities" -> "of the entities being"?
- Page 5, near the bottom: "Eq 4" should be "Eq 6"?
- P7, Section 4.2, second paragraph: it's not clear whether the unified model is the key reason for the improvement; more supporting evidence is needed.
- P8, Table 4: why use "Trans"?

Clarity, Quality, Novelty And Reproducibility
Clarity can be improved: the writing could be polished and the figures are too complicated. Quality of the work is OK: the paper did extensive experiments on different datasets to help understand the performance deltas. Novelty is less significant: using the same pre-trained model feels incremental, and the benefit is only justified by better evaluation results. Reproducibility is not clear: the authors didn't mention open-sourcing or other means to reproduce.
ICLR
Title Abstract Diagrammatic Reasoning with Multiplex Graph Networks

Duo Wang* & Mateja Jamnik & Pietro Lio
Department of Computer Science and Technology
University of Cambridge
Cambridge, United Kingdom
{Duo.Wang,Mateja.Jamnik,Pietro.Lio}@cl.cam.ac.uk

ABSTRACT
Abstract reasoning, particularly in the visual domain, is a complex human ability, but it remains a challenging problem for artificial neural learning systems. In this work we propose MXGNet, a multilayer graph neural network for multi-panel diagrammatic reasoning tasks. MXGNet combines three powerful concepts, namely object-level representation, graph neural networks and multiplex graphs, for solving visual reasoning tasks. MXGNet first extracts object-level representations for each element in all panels of the diagrams, and then forms a multi-layer multiplex graph capturing multiple relations between objects across different diagram panels. MXGNet summarises the multiple graphs extracted from the diagrams of the task, and uses this summarisation to pick the most probable answer from the given candidates. We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices (RPM). On a Euler Diagram Syllogism task, MXGNet achieves state-of-the-art accuracy of 99.8%. On PGM and RAVEN, two comprehensive datasets for RPM reasoning, MXGNet outperforms the state-of-the-art models by a considerable margin.

1 INTRODUCTION
Abstract reasoning has long been thought of as a key part of human intelligence and a necessary component towards Artificial General Intelligence. When presented with complex scenes, humans can quickly identify elements across different scenes and infer relations between them. For example, when you are using a pile of different types of LEGO bricks to assemble a spaceship, you are actively inferring relations between the LEGO bricks, such as the ways in which they can fit together. This type of abstract reasoning, particularly in the visual domain, is crucial to the human ability to build complex things.

Many tests have been proposed to measure the human capacity for abstract reasoning. The most popular test in the visual domain is the Raven Progressive Matrices (RPM) test (Raven (2000)). In the RPM test, participants are asked to view a sequence of contextual diagrams, usually given as a 3×3 matrix of diagrams with the bottom-right diagram left blank. Participants should infer abstract relationships in the rows or columns of the matrix, and pick from a set of candidate answers the correct one to fill in the blank. Figure 1(a) shows an example of an RPM task containing XOR relations across diagrams in rows. More examples can be found in Appendix C. Another widely used test for measuring reasoning in psychology is the Diagram Syllogism task (Sato et al. (2015)), where participants need to infer conclusions based on two given premises. Figure 1(c) shows an example of the Euler Diagram Syllogism task. Barrett et al. (2018) recently published a large and comprehensive RPM-style dataset named Procedurally Generated Matrices (PGM), and proposed the Wild Relation Network (WReN), a state-of-the-art neural net for RPM-style tasks.

* Corresponding Author
While WReN outperforms other state-of-the-art vision models such as Residual Networks (He et al. (2016)), its performance is still far from that of deep neural nets on other vision or natural language processing tasks. Recently, there has been a focus on object-level representations (Yi et al. (2018); Hu et al. (2017); Hudson & Manning (2018); Mao et al. (2019); Teney et al. (2017); Zellers et al. (2018)) for visual reasoning tasks, which enable the use of inductively biased architectures such as symbolic programs and scene graphs to directly capture relations between objects. For RPM-style tasks, symbolic programs are less suitable, as such programs are generated from given questions in the Visual Question Answering setting, whereas in RPM-style tasks there are no explicit questions. Encoding RPM tasks into graphs is a more natural choice. However, previous work on scene graphs (Teney et al. (2017); Zellers et al. (2018)) models a single image as a graph, which is not suitable for RPM tasks, where there are many different layers of relations across different subsets of diagrams in a single task.

In this paper we introduce MXGNet, a multi-layer multiplex graph neural net architecture for abstract diagram reasoning. Here 'multi-layer' means that the graphs are built across different diagram panels, where each diagram is a layer. 'Multiplex' means that the edges of the graphs encode multiple relations between different element attributes, such as colour, shape and position. Multiplex networks are discussed in detail by Kao & Porter (2018). We first tested the application of multiplex graphs on a Diagram Syllogism dataset (Wang et al. (2018a)), and confirmed that the multiplex graph improves the performance of the original model. For the RPM task, MXGNet encodes subsets of diagram panels into multi-layer multiplex graphs, and combines the summarisations of several graphs to predict the correct candidate answer. With a hierarchical summarisation scheme, each graph is summarised into feature embeddings representing relationships in the subset. These relation embeddings are then combined to predict the correct answer. On the PGM dataset (Barrett et al. (2018)), MXGNet outperforms WReN, the previous state-of-the-art model, by a considerable margin. On the 'neutral' split of the dataset, MXGNet achieves 89.6% test accuracy, 12.7% higher than WReN's 76.9%. On the other splits, MXGNet consistently performs better, with smaller margins. On the RAVEN dataset (Zhang et al. (2019)), MXGNet, without any auxiliary training with additional labels, achieves 83.91% test accuracy, outperforming the 59.56% accuracy of the best model with auxiliary training for the RAVEN dataset. We also show that MXGNet is robust to variations in the form of object-level representations: both variants of MXGNet achieve higher test accuracies than the existing best models for the two datasets.

2 RELATED WORK
Raven Progressive Matrices: Hoshen & Werman (2017) proposed a neural network model for Raven-style reasoning tasks that are a subset of complete RPM problems. Their model is based on a Convolutional Network and has been demonstrated to be ineffective on complete RPM tasks (Barrett et al. (2018)). Mandziuk & Zychowski also experimented with an auto-encoder based neural net on simple single-shape RPM tasks. Barrett et al. (2018) built PGM, a complete RPM dataset, and proposed WReN, a neural network architecture based on the Relation Network (Santoro et al. (2017)). Steenbrugge et al. (2018) replace the CNN part of WReN with a pre-trained Variational Auto-Encoder and slightly improve performance.
Zhang et al. (2019) built RAVEN, an RPM-style dataset with structured labels of the elements in the diagrams in the form of parse trees, and proposed Dynamic Residual Trees, a simple tree neural network for learning with these additional structures. Anonymous (2020) applies multi-head attention (Vaswani et al. (2017)), originally developed for language models, to RPM tasks.

Visual Reasoning: The RPM test falls into the broader category of visual reasoning. One widely explored visual reasoning task is Visual Question Answering (VQA). Johnson et al. (2017) built CLEVR, a VQA dataset that focuses on visual reasoning instead of the information retrieval of traditional VQA datasets. Current leading approaches (Yi et al. (2018); Mao et al. (2019)) on the CLEVR dataset generate synthetic programs from the questions in the VQA setting, and use these programs to process object-level representations extracted with object detection models (Ren et al. (2015)). This approach is not applicable to RPM-style problems, as there is no explicit question available for program synthesis.

Graph Neural Networks: Recently there has been a surge of interest in applying Graph Neural Networks (GNNs) to datasets that are inherently structured as graphs, such as social networks. Many variants of GNNs (Li et al. (2015); Hamilton et al. (2017); Kipf & Welling (2016); Veličković et al. (2017)) have been proposed, all based on the same principle of learning feature representations of nodes by recursively aggregating information from neighbouring nodes and edges. Recent methods (Teney et al. (2017); Zellers et al. (2018)) extract graph structures from visual scenes for visual question answering. These methods build scene graphs in which nodes represent parts of the scene and edges capture relations between these parts. Such methods are only applied to scenes of a single image. For multi-image tasks such as video classification, Wang et al. (2018b) proposed non-local neural networks, which extract dense graphs where pixels in feature maps are connected to all other feature-map pixels in the space-time dimensions.

3 REASONING TASKS
3.1 DIAGRAM SYLLOGISM
Syllogism is a reasoning task in which a conclusion is drawn from two given assumed propositions (premises). One well-known example is 'Socrates is a man, all men will die, therefore Socrates will die'. Syllogisms can be conveniently represented using many types of diagrams (Al-Fedaghi (2017)), such as Euler diagrams and Venn diagrams. Figure 1(c) shows an example of Euler diagram syllogism. Wang et al. (2018a) developed Euler-Net, a neural net architecture that tackles Euler diagram syllogism tasks. However, Euler-Net is just a simple Siamese Conv-Net, which does not guarantee scalability to more entities in the diagrams. We show that the addition of the multiplex graph improves both performance and scalability to more entities.

3.2 RAVEN PROGRESSIVE MATRICES
In this section we briefly describe Raven Progressive Matrices (RPM) in the context of the PGM dataset (Barrett et al. (2018)) and the RAVEN dataset (Zhang et al. (2019)). RPM tasks usually have 8 context diagrams and 8 answer candidates. The context diagrams are laid out in a 3×3 matrix C, where c_{1,1}, ..., c_{3,2} are context diagrams and c_{3,3} is a blank diagram to be filled with one of the 8 answer candidates A = {a_1, ..., a_8}. One or more relations are present in the rows and/or columns of the matrix. For example, in Figure 1(a), there is an XOR relation on the positions of objects in the rows of diagrams.
With the correct answer filled in, the third row and column must satisfy all relations present in the first two rows and columns (in the RAVEN dataset, relations are only present in rows). In addition to labels of the correct candidate choice, both datasets also provide meta-target labels for auxiliary training. The meta-target of a task is a multi-hot vector encoding tuples (r, o, a), where r is the type of a relation present, o is the object type and a is the attribute. For example, the meta-target for Figure 1(a) encodes (XOR, Shape, Position). The RAVEN dataset also provides additional structured labels of relations in the diagram. However, we found that these structured labels do not improve results, and therefore did not use them in our implementation.

4 METHOD
MXGNet comprises three main components: an object-level representation module, a graph processing module and a reasoning module. Figure 1(a) shows an overview of the MXGNet architecture. The object-level representation module Fρ, as the name suggests, extracts representations of objects in the diagrams as nodes in a graph. For each diagram d_i ∈ C ∪ A, a set of nodes v_{i,j}, i = 1...L, j = 1...N, is extracted, where L is the number of layers and N is the number of nodes per layer. We experimented with both fixed and dynamically learnt values of N. We also experimented with an additional 'background' encoder that encodes background lines (see Appendix C for an example containing background lines) into a single vector, which can be considered a single node. The multiplex graph module Gφ, for a subset of diagrams, learns the multiplex edges capturing multiple parallel relations between nodes in a multi-layer graph, where each layer corresponds to one diagram in the subset, as illustrated in Figure 1(c). In MXGNet, we consider subsets of cardinality 3 for 3×3 diagram matrices. While prior knowledge of RPM rules allows us to naturally treat rows and columns in RPM as subsets, this prior does not generalise to other types of visual reasoning problems, and considering all possible diagram combinations as subsets is computationally expensive. To tackle this, we developed a relatively quick pre-training method that greatly reduces the search space of subsets, as described below.

Search Space Reduction: We can consider each diagram as a node v_{d_i} in a graph, where relations between adjacent diagrams are embedded as edges e_{d_{ij}}. Note that here we are considering a graph of 'diagrams', which is different from the graph of 'objects' in the graph processing modules. Each subset of 3 diagrams can then be considered as a subset of 2 edges. We make the weak assumptions that edges exist between adjacent diagrams (in the vertical, horizontal and diagonal directions) and that edges in the same subset must be adjacent (defined as two edges linking the same node); such assumptions are often used in other visual reasoning problems. We denote a subset of edges as {e_{d_{ij}}, e_{d_{jk}}}. We use three neural nets to embed nodes, edges and subsets: CNNs embed diagram nodes into feature vectors, and MLPs embed edges based on the node embeddings and subsets based on the edge embeddings. While it is possible to use graph architectures for better accuracy, we found that simple combinations of CNNs and MLPs train faster while still achieving the search-space-reduction results. This architecture first embeds nodes, then embeds edges based on the node embeddings, and finally embeds subsets based on the edge embeddings.
The subset embeddings are summed and passed through a reasoning network to predict the answer probability, similar to WReN (Barrett et al. (2018)). For the exact configuration of the architecture used, please refer to Appendix A. For each subset {e_{d_{ij}}, e_{d_{jk}}}, we define a gating variable G_{ijk} controlling how much the subset contributes to the final result. In practice we use the tanh function, which allows a subset to contribute both positively and negatively to the final summed embeddings. In training we put an L1 regularisation constraint on the gating variables to suppress the G_{ijk} of non-contributing subsets to values close to zero. This architecture quickly discovers rows and columns as contributing subsets while leaving the gating variables of other subsets unactivated. We describe the experimental results in Section 5.1 (a code sketch of this gating mechanism is given at the end of Section 4.1 below). While this method was developed for discovering reasoning rules for the RPM task, it can readily be applied to any other multi-frame reasoning task for search space reduction.

In the rest of the paper, we hard-gate subsets by rounding the gating variables, thereby reducing the subset space so that only rows and columns are treated as valid subsets. We treat the first two rows and columns as contextual subsets c_{i,j}, where i and j are row and column indices. For the last row and column, where the answers should be filled in, we fill in each of the 8 answer candidates, making 8 row subsets a_i, i ∈ [1, 8], and 8 column subsets a_i, i ∈ [1, 8]. The graph module then summarises the graph of objects in a subset into embeddings representing the relations present in the subset. The reasoning module Rθ takes the embeddings from the context rows/columns and from the last rows/columns with different candidate answers filled in, and produces the normalised probability of each answer being true. It also predicts the meta-target for auxiliary training using the context rows/columns. Next, we describe each module in detail.

4.1 OBJECT-LEVEL REPRESENTATION
In the PGM dataset there are two types of objects, namely 'shapes' and background 'lines'. While it is a natural choice to use object-level representations for 'shapes' objects, as they vary in many attributes such as position and size, it is less efficient for background lines, which only vary in colour intensity. In this section we first describe object-level representation applied to 'shapes' objects, and then discuss object-level representation for 'lines' and an alternative background encoder that performs better. In MXGNet we experiment with two types of object-level representations for 'shapes', namely CNN grid features and representations obtained with spatial attention. For CNN grid features, we use each spatial location in the final CNN feature map as an object feature vector. Thus, for a feature map of width W and height H, N = W × H object representations are extracted. This type of representation is widely used, such as in the Relation Network (Santoro et al. (2017)) and VQ-VAE (van den Oord et al. (2017)). For representations obtained with attention, we use spatial attention to attend to the locations of objects, and extract a representation for each attended object. This is similar to object detection models such as Faster R-CNN (Ren et al. (2015)), which use a Region Proposal Network to propose bounding boxes of objects in the input image. For each attended location, a presence variable z_pres is predicted by the attention module, indicating whether an object exists at the location. Thus the total number of objects N can vary, depending on the sum of the z_pres variables. As object-level representation is not the main innovation of this paper, we leave the exact details to Appendix A.1. For background 'lines' objects, which do not vary in position or size, spatial attention is not needed. We experimented with a recurrent encoder with Long Short-Term Memory (Hochreiter & Schmidhuber (1997)) on the output feature map of the CNN, outputting M feature vectors. However, we found that this performs less well than the feature-map embeddings produced by a feed-forward conv-net encoder.
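As promised above, here is a minimal PyTorch sketch of the tanh-based subset gating used for search space reduction. All names are our own, and the class is an illustrative simplification of the pre-training model described in Section 4, not its exact configuration (which is in Appendix A).

```python
import torch
import torch.nn as nn

class SubsetGating(nn.Module):
    """Tanh-gated combination of candidate diagram-subset embeddings.

    `n_subsets` enumerates all adjacent edge pairs {e_dij, e_djk} considered
    as candidate subsets.
    """
    def __init__(self, n_subsets):
        super().__init__()
        self.raw_gates = nn.Parameter(torch.zeros(n_subsets))  # G_ijk

    def forward(self, subset_emb):         # subset_emb: (n_subsets, d)
        g = torch.tanh(self.raw_gates)     # in (-1, 1): signed contributions
        combined = (g.unsqueeze(-1) * subset_emb).sum(dim=0)
        l1_penalty = g.abs().sum()         # drives unused gates towards zero
        return combined, l1_penalty
```

The L1 penalty is added (with a small weight) to the answer-prediction loss; after pre-training, subsets with |tanh(G_{ijk})| > 0.5 are kept, which recovers rows and columns as reported in Section 5.1.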
As object-level representation is not the main innovation of this paper, we leave the exact details for Appendix A.1. For background 'lines' objects, which do not vary in position and size, spatial attention is not needed. We experimented with a recurrent encoder with Long Short-Term Memory (Hochreiter & Schmidhuber (1997)) on the output feature map of the CNN, outputting $M$ feature vectors. However, in our experiments we found that this performs less well than the feature map embeddings produced by a feed-forward conv-net encoder.

4.2 MULTIPLEX GRAPH NETWORK

Multiplex Edge Embedding: The object-level representation module outputs a set of representations $v_{i,j}$; $i \in [1, L]$, $j \in [1, N]$ for 'shapes' objects, where $L$ is the number of layers (the cardinality of the subset of diagrams) and $N$ is the number of nodes per layer. MXGNet uses a multiplex edge-embedding network $E_\gamma$ to generate edge embeddings encoding multiple parallel relation embeddings:

$$e^t_{(i,j),(l,k)} = E^t_\gamma(P^t(v_{i,j}, v_{l,k})); \quad i \neq l,\ t = 1 \dots T \qquad (1)$$

Here $P^t$ is a projection layer projecting concatenated node embeddings to $T$ different embeddings, and $E^t$ is a small neural net processing the $t$-th projection to produce the $t$-th sub-layer of the edge embeddings. Here, we restricted the edges to be inter-layer only, as we found that using intra-layer edges does not improve performance but increases computational costs. Figure 2 illustrates these multiplex edge embeddings between nodes of different layers. We hypothesise that different layers of the edge embeddings encode similarities/differences in different feature spaces. Such embeddings of similarities/differences are useful in comparing nodes for subsequent reasoning tasks. For example, for a Progression relation of object sizes, the part of the embeddings encoding size differences can be utilized to check whether nodes in later layers are larger in size. This is similar to the Mixture of Experts layers (Eigen et al. (2013); Shazeer et al. (2017)) introduced in Neural Machine Translation tasks. However, in this work we developed a new cross-multiplexing gating function at the node message aggregation stage, which is described below.

Graph Summarisation: After the edge embeddings are generated, the graph module summarises the graph into a feature embedding representing the relations present in the subset of diagrams. We aggregate information in the graph to the nodes of the last layer, corresponding to the third diagram in a row or column, because in RPM tasks the relations are of the form $\mathrm{Diagram}_3 = \mathrm{Function}(\mathrm{Diagram}_1, \mathrm{Diagram}_2)$. All edges connecting nodes in a particular layer $v_{i,j}$; $i \neq L$, to a node $v_{L,k}$ in the last layer $L$ are aggregated by a function $F_{ag}$ composed of four different types of set operations, namely max, min, sum and mean:

$$f_{v_{i,k}} = F_{ag}(e_{(i,1),(L,k)}, \dots, e_{(i,N),(L,k)}); \quad F_{ag} = \mathrm{concat}(\max, \min, \mathrm{sum}, \mathrm{mean}) \qquad (2)$$

We use multiple aggregation functions together because different sub-tasks in reasoning may require different types of summarization. For example, counting the number of objects is better suited to sum, while checking whether there is an object of the same size is better suited to max. The aggregated node information from each layer is then combined with a cross-multiplexing gating function. It is named 'cross-multiplexing' because each embedding in the set 'multiplexes' the other embeddings in the set with gating variables that regulate which streams of information pass through. This gating function accepts a set of summarised node embeddings $\{f_{v_{1,k}}, \dots, f_{v_{N,k}}\}$ as input, and outputs gating variables for each layer of node embeddings in the set:

$$g_{1,k}, \dots, g_{N,k} = G(f_{v_{1,k}}, \dots, f_{v_{N,k}}); \quad g_{i,k} = \{g^1_{i,k}, \dots, g^T_{i,k}\} \qquad (3)$$

In practice $G$ is implemented as an MLP with multi-head outputs for the different embeddings and a Sigmoid activation, which constrains the gating variables $g$ to the range 0 to 1. The node embeddings of the different layers are then multiplied with the gating variables, concatenated and passed through a small MLP to produce the final node embeddings: $f_{v_k} = \mathrm{MLP}(\mathrm{concat}(\{f_{v_{i,k}} \times g_{i,k} \mid i = 1 \dots N\}))$. Node embeddings and background embeddings are then concatenated and processed by a residual neural block to produce the final relation feature embeddings $r$ of the diagram subset. A minimal sketch of this edge-embedding, aggregation and gating pipeline follows.
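To make Eqs. (1)-(3) concrete, here is a minimal PyTorch sketch of the multiplex graph module. The sizes ($T = 6$ sub-layers with 8-dimensional outputs) loosely follow Appendix A.2 for PGM; the module name, the per-node gating and the final mean-pooling are our own simplifications, not the paper's exact configuration (the paper processes node and background embeddings with residual blocks instead).

```python
# Illustrative PyTorch sketch of the multiplex graph module (Eqs. 1-3).
import torch
import torch.nn as nn

class MultiplexGraph(nn.Module):
    def __init__(self, node_dim=64, T=6, hidden=32, sub_dim=8, out_dim=64):
        super().__init__()
        self.T = T
        # Eq. (1): T parallel projections P^t and small nets E^t per edge.
        self.proj = nn.ModuleList(nn.Linear(2 * node_dim, hidden) for _ in range(T))
        self.edge = nn.ModuleList(
            nn.Sequential(nn.ReLU(), nn.Linear(hidden, sub_dim)) for _ in range(T))
        f_dim = 4 * T * sub_dim            # concat(max, min, sum, mean), Eq. (2)
        self.gate = nn.Linear(2 * f_dim, 2 * f_dim)   # Eq. (3): Sigmoid gates
        self.mlp = nn.Linear(2 * f_dim, out_dim)      # small MLP after gating

    def multiplex_edges(self, v_layer, v_last_k):
        # v_layer: (N, D) nodes of one context diagram; v_last_k: (D,).
        pair = torch.cat([v_layer, v_last_k.expand_as(v_layer)], dim=-1)
        return torch.cat([self.edge[t](self.proj[t](pair))
                          for t in range(self.T)], dim=-1)   # (N, T*sub_dim)

    def aggregate(self, e):                # Eq. (2): four set operations
        return torch.cat([e.max(0).values, e.min(0).values, e.sum(0), e.mean(0)])

    def forward(self, layer1, layer2, last):
        # layer1, layer2: node embeddings of the two context diagrams;
        # last: node embeddings of the third diagram in the subset, all (N, D).
        out_nodes = []
        for k in range(last.size(0)):      # aggregate into each node v_{L,k}
            f1 = self.aggregate(self.multiplex_edges(layer1, last[k]))
            f2 = self.aggregate(self.multiplex_edges(layer2, last[k]))
            f = torch.cat([f1, f2])
            g = torch.sigmoid(self.gate(f))           # cross-multiplex gating
            out_nodes.append(self.mlp(f * g))         # gate, concat, small MLP
        # Mean over last-layer nodes is our simplification of the paper's
        # residual-block summarisation of node and background embeddings.
        return torch.stack(out_nodes).mean(0)
```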
4.3 REASONING NETWORK

The reasoning network takes the relation feature embeddings $r$ from all graphs, and infers the correct answer based on these relation embeddings. We denote the relation embeddings for the context rows as $r^{cr}_i$; $i = 1, 2$ and for the context columns as $r^{cc}_i$; $i = 1, 2$. The last row and column filled with each answer candidate $a_i$ are denoted $r^{ar}_i$; $i = 1, \dots, 8$ and $r^{ac}_i$; $i = 1, \dots, 8$. For the RAVEN dataset, only the row relation embeddings $r^{cr}$ and $r^{ar}$ are used, as discussed in Section 3.2. The reasoning network $R_\theta$ is a multi-layer residual neural net with a softmax output activation that processes the concatenated relation embeddings and outputs class probabilities for each answer candidate. The exact configuration of the reasoning network can be found in Appendix A.3. For meta-target prediction, all relation information is contained in the context rows and columns of the RPM task. Therefore, we apply a meta-predicting network $R_{meta}$ with Sigmoid output activation to all context rows and columns to obtain probabilities of each meta-target category:

$$p_{meta} = R_{meta}(r^{cr}_1 + r^{cr}_2 + r^{cc}_1 + r^{cc}_2) \qquad (4)$$

4.4 TRAINING

The full pipeline of MXGNet is end-to-end trainable with any gradient descent optimiser. In practice, we used the RAdam optimiser (Liu et al. (2019)) for its fast convergence and robustness to learning rate differences. The loss function for the PGM dataset is the same as used in WReN (Barrett et al. (2018)): $L = L_{ans} + \beta L_{meta\text{-}target}$, where $\beta$ balances the training between answer prediction and meta-target prediction. For the RAVEN dataset, while the loss function can include auxiliary meta-target and structured labels as $L = L_{ans} + \alpha L_{struct} + \beta L_{meta\text{-}target}$, we found that both auxiliary targets do not improve performance, and thus set $\alpha$ and $\beta$ to 0. A minimal sketch of the combined loss is given below.
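For concreteness, a minimal sketch of the combined loss (ours, under the assumption that answer prediction is an 8-way classification and the meta-target is multi-hot; the 12 meta-target dimensions for PGM and $\beta = 10$ follow the settings reported elsewhere in the paper):

```python
# A minimal sketch of L = L_ans + beta * L_meta for PGM-style training.
import torch
import torch.nn.functional as F

def mxgnet_loss(answer_logits, answer_idx, meta_logits, meta_target, beta=10.0):
    """answer_logits: (B, 8); answer_idx: (B,) correct-candidate indices;
    meta_logits, meta_target: (B, 12) multi-hot meta-targets for PGM."""
    l_ans = F.cross_entropy(answer_logits, answer_idx)
    l_meta = F.binary_cross_entropy_with_logits(meta_logits, meta_target)
    return l_ans + beta * l_meta
```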
5 EXPERIMENTS

5.1 SEARCH SPACE REDUCTION

The Search Space Reduction model is applied on both the PGM and RAVEN datasets to reduce the subset space. After 10 epochs, only the gating variables of the row and column subsets for PGM, and of the row subsets for RAVEN, have values larger than 0.5. The gating variables for the three rows are 0.884, 0.812 and 0.832. The gating variables for the three columns are 0.901, 0.845 and 0.854. All other gating variables are below the threshold value of 0.5. Interestingly, all activated (absolute value > 0.5) gating variables are positive. This is possibly because it is easier for the neural net to learn an aggregation function than a comparator function. Exact experiment statistics can be found in Appendix D.

5.2 DIAGRAM SYLLOGISM PERFORMANCE

We first test how well the multiplex graph network can capture relations for the simple Diagram Syllogism task. We simply add the multiplex graph to the original Conv-Net used in Wang et al. (2018a). MXGNet achieved 99.8% accuracy on both 2-contour and 3-contour tasks, higher than the original paper's 99.5% and 99.4% accuracies. The identical performance on 2-contour and 3-contour tasks also shows that MXGNet scales better to more entities in the diagram. For more details please refer to Appendix E.

5.3 RPM TASK PERFORMANCES

In this section we compare all variants of MXGNet against the state-of-the-art models for the PGM and the RAVEN datasets. For the PGM dataset, we tested against the results of WReN (Barrett et al. (2018)) in the auxiliary training setting with a $\beta$ value of 10. In addition, we also compared MXGNet with the result of VAE-WReN (Steenbrugge et al. (2018)) without auxiliary training. For the RAVEN dataset, we compared with the WReN and ResNet models' performance as reported in the original paper (Zhang et al. (2019)). We evaluated MXGNet with different object-level representations (Section 4.1) on the test data of the 'neutral' split of the PGM dataset. Table 1 (a) shows the test accuracies of the model variants compared with WReN and VAE-WReN for the cases without auxiliary training ($\beta = 0$) and with auxiliary training ($\beta = 10$) for the PGM dataset. Both model variants of MXGNet outperform the other models by a considerable margin, showing that the multi-layer graph is indeed a more suitable way to capture relations in the reasoning task. Model variants using grid features from the CNN feature maps slightly outperform the model using spatial-attention-based object representations in both the with and without auxiliary training settings. This is possibly because the increased number of parameters of the spatial attention variant leads to over-fitting, as the training losses of both model variants are very close. In the following experiments for PGM we use the model variants with CNN features when reporting performance. Table 1 (b) shows the test accuracies of the model variants compared with WReN and the best-performing ResNet models for the RAVEN dataset. WReN surprisingly only achieves 14.69%, as tested by Zhang et al. (2019). We include results of the ResNet model with and without Dynamic Residual Trees (DRT), which utilise additional structure labels of relations. We found that for the RAVEN dataset, auxiliary training of MXGNet with meta-target or structure labels does not improve performance. Therefore, we report test accuracies of models trained only with the target-prediction objective. Both variants of MXGNet significantly outperform the ResNet models. Models with spatial attention object-level representations slightly under-perform the simpler CNN features, most probably due to overfitting, as the observed training losses of the spatial attention models are in fact lower than those of the CNN feature models.

5.4 GENERALISATION EVALUATION FOR PGM

In the PGM dataset, other than the neutral data regime, in which the test dataset's sampling space is the same as the training dataset's, there are also other data regimes which restrict the sampling space of the training or test data to evaluate the generalisation capability of a neural network. In the main paper, due to space limitations, we selected 2 representative regimes, the 'interpolation' regime and the 'extrapolation' regime, to report results. For results on other data splits of PGM, please refer to Appendix G. For the 'interpolation' regime, in the training dataset, when attribute a = color and a = size, the values of a are restricted to even-indexed values in the spectrum of a values. This tests how well a model can 'interpolate' for missing values.
For the 'extrapolation' regime, in the training dataset, the value of a is restricted to the lower half of the value spectrum. This tests how well a model can 'extrapolate' outside of the value range in the training dataset. Table 2 shows validation and test accuracies for all three data regimes with and without auxiliary training. In addition, the differences between validation and test accuracies are also presented to show how well the models generalise. MXGNet models consistently perform better than WReN for all regimes tested. Interestingly, for the 'interpolation' regime, while the validation accuracy of MXGNet is lower than WReN's, the test accuracy is higher. In addition, for the 'interpolation' and 'extrapolation' regimes, MXGNet also shows a smaller difference between validation and test accuracy. These results show that MXGNet has a better capability of generalising outside of the training space.

6 DISCUSSION AND CONCLUSION

We presented MXGNet, a new graph-based approach to diagrammatic reasoning problems in the style of Raven Progressive Matrices (RPM). MXGNet combines three powerful ideas, namely, object-level representation, graph neural networks and multiplex graphs, to capture the relations present in the reasoning task. Through experiments we showed that MXGNet performs better than previous models on two RPM datasets. We also showed that MXGNet has better generalisation performance. One important direction for future work is to make MXGNet interpretable, and thereby extract logic rules from MXGNet. Currently, the learnt representations in MXGNet are still entangled, providing little in the way of understanding its mechanism of reasoning. Rule extraction can provide people with a better understanding of the reasoning problem, and may allow neural networks to work seamlessly with more programmable traditional logic engines. While the multi-layer multiplex graph neural network is designed for RPM-style reasoning tasks, it can be readily extended to other diagrammatic reasoning tasks where relations are present between multiple elements across different diagrams. One example of a real-world application scenario is robots assembling parts of an object into a whole, such as building a LEGO model from a room of LEGO blocks. MXGNet provides a suitable way of capturing relations between parts, such as ways of piecing and locking two parts together.

A ARCHITECTURE

In this section we present the exact configurations of all model variants of MXGNet. Due to the complexity of the architectures, we describe each module in sequence. The object-level representation has two variations, which are (o1) CNN features and (o2) Spatial Attention features. The models for the PGM and RAVEN datasets also differ in details. Unless otherwise stated, in all layers we apply Batch Normalization (Ioffe & Szegedy (2015)) and use the Rectified Linear Unit as the activation function.

A.1 OBJECT-LEVEL REPRESENTATION ARCHITECTURE

CNN features: The first approach applies a CNN to the input image and uses each spatial location in the final CNN feature map as the object feature vector. This type of representation is used widely, such as in the Relation Network (Santoro et al. (2017)) and VQ-VAE (van den Oord et al. (2017)). Formally, the output of a CNN is a feature map tensor of dimension $H \times W \times D$, where $H$, $W$ and $D$ are respectively the height, width and depth of the feature map. At each $H$ and $W$ location, an object vector is extracted; a short sketch of this extraction is given below.
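A short sketch of this grid-feature extraction (ours, for illustration):

```python
# A minimal sketch of CNN grid features: each spatial location of the final
# feature map becomes one object vector, so N = W * H objects per diagram.
import torch

def grid_object_features(feature_map: torch.Tensor) -> torch.Tensor:
    """feature_map: (B, D, H, W) output of the CNN backbone.
    Returns (B, H*W, D) object feature vectors, one per spatial location."""
    B, D, H, W = feature_map.shape
    return feature_map.permute(0, 2, 3, 1).reshape(B, H * W, D)
```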
This type of object representation is simple and fast, but does not guarantee that the receptive field at each feature map location fully bounds the objects in the image. We use a residual module (He et al. (2016)) with two residual blocks to extract CNN features, as shown in Figure 4, because residual connections showed better performance in our experiments. The structure of a single Residual Convolution Block is shown in Figure 3. Unless otherwise stated, convolutional layers in residual blocks have a kernel size of 3 × 3. The output feature map processed by another residual block is treated as the background encoding, because we found that a convolutional background encoding gives better results than feature vectors.

Spatial Attention Object-level representation: The second approach is to use spatial attention to attend to the locations of objects, and extract representations for each attended object. This is similar to object detection models such as Faster R-CNN (Ren et al. (2015)), which use a Region Proposal Network to propose bounding boxes of objects in the input image. In practice, we use a Spatial Transformer (Jaderberg et al. (2015)) as our spatial attention module. Figure 5 shows the architecture used for extracting object-level representations using spatial attention. A CNN composed of 1 conv layer and 2 residual blocks is first applied to the input image, and the last-layer feature map is extracted. This part is the same as in the CNN grid feature module. A spatial attention network composed of 2 conv layers then processes the information at each spatial location on the feature map, and outputs $k$ tuples $z = (z^{pres}, z^{where})$, corresponding to $k$ possible objects at each location. Here, $z^{pres}$ is a binary value indicating whether an object exists in this location, and $z^{where}$ is an affine transformation matrix specifying a sampling region on the feature maps. $z^{pres}$, the binary variable, is sampled from the Gumbel-Sigmoid distribution (Maddison et al. (2016); Jang et al. (2016)), which approximates the Bernoulli distribution; a sketch of this relaxation is given below. We set the Gumbel temperature to 0.7 throughout the experiments. For the PGM dataset we restricted $k$ to be 1 and $z^{where}$ to be a translation and scaling matrix, as 'shapes' objects do not overlap and do not have affine transformation attributes other than scaling and translation. For all $z_i$; $i \in [1, H \times W]$, if $z^{pres}_i$ is 1, an object encoder network samples a patch from the location specified by $z^{where}_i$ using a grid sampler with a fixed window size of 4 × 4 pixels. More details of the grid sampler can be found in Jaderberg et al. (2015). The sampled patches are then processed by a conv-layer to generate object embeddings.
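For concreteness, a minimal sketch of the Gumbel-Sigmoid relaxation with straight-through hard sampling. The function name and the clamping constant are ours, and this follows the general binary-concrete formulation of Maddison et al. (2016) / Jang et al. (2016) rather than the paper's exact implementation; temperature 0.7 matches the paper's setting.

```python
# Gumbel-Sigmoid relaxation of the binary presence variable z_pres (sketch).
import torch

def gumbel_sigmoid(logits: torch.Tensor, tau: float = 0.7, hard: bool = True):
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log1p(-u)          # logistic noise
    y_soft = torch.sigmoid((logits + noise) / tau)  # relaxed Bernoulli sample
    if hard:
        # Straight-through estimator: hard 0/1 forward, soft gradients backward.
        y_hard = (y_soft > 0.5).float()
        return y_hard + (y_soft - y_soft.detach())
    return y_soft
```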
A.2 GRAPH NETWORKS

Multiplex Edge Embeddings: Figure 2 in the main paper shows an overview of the multiplex graph architecture. While the motivation for and overview of the architecture are explained in Section 4.2 of the main paper, in this section we provide the exact configurations for each part of the model. Each sub-layer of the multiplex edge is embedded by a small MLP. For the PGM dataset, we use 6 parallel layers for each multiplex edge embedding, with each layer having 32 hidden units and 8 output units. For the RAVEN dataset we use 4 layers with 16 hidden units and 8 output units, because the RAVEN dataset contains fewer relation types than the PGM dataset. The gating function is implemented as a single fully connected layer with Sigmoid activation, with hidden size equal to the length of the concatenated aggregated embeddings. Gating variables are element-wise multiplied with the concatenated embeddings for the gating effect. Gated embeddings are then processed with a final fully connected layer with hidden size 64.

Graph Summarization: This module summarizes all node summary embeddings and background embeddings to produce a diagram subset embedding representing the relations present in the set of diagrams. We experimented with various approaches and found that keeping the embeddings as feature maps and processing them with residual blocks yields the best results. Background feature map embeddings are generated with one additional residual block of size 48 on top of the lower-layer feature-extracting ResNet. For object representations obtained from CNN grid features, we can simply reshape the node embeddings into a feature map, and process it with additional conv-nets to generate feature map embeddings of the same dimension as the background feature map embeddings. For object representations with spatial attention, we can use another Spatial Transformer to write the node summary embeddings to their corresponding locations on a canvas feature map. Finally, we concatenate the node summary embeddings and background embeddings and process them with 2 residual blocks of size 64 to produce the relation embeddings.

A.3 REASONING NETWORK

Figure 6 shows the reasoning network configuration for RPM tasks. We experimented with the approach introduced in Barrett et al. (2018), which computes scores for each answer candidate and finally normalizes the scores. We found that this approach leads to severe overfitting on the RAVEN dataset, and therefore used a simpler approach of just concatenating all relation embeddings and processing them with a neural net. In practice we used two residual blocks of size 128 and 256, and a final fully connected layer with 8 units corresponding to the 8 answer candidates. The output is normalized with a softmax layer. For meta-target prediction, all context relation embeddings (context rows and columns for PGM, only rows for the RAVEN dataset) are summed and fed into a fully connected prediction layer with Sigmoid activation. For PGM there are 12 different meta-targets, while for RAVEN there are 9.

B TRAINING DETAILS

The architecture is implemented in the PyTorch framework. During training, we used the RAdam optimizer (Liu et al. (2019)) with learning rate 0.0001, $\beta_1 = 0.9$, $\beta_2 = 0.999$. We used a batch size of 64, and distributed the training across 2 Nvidia GeForce Titan X GPUs. We early-stop training when the validation accuracy stops increasing. A minimal sketch of this training setup follows.
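A minimal sketch of this setup (ours). `torch.optim.RAdam` is available in recent PyTorch releases (the paper predates it and likely used a standalone implementation), and `train_loader`, `evaluate` and `loss_fn` are assumed placeholders.

```python
# Training loop sketch: RAdam with lr=1e-4, betas=(0.9, 0.999), batch size 64
# (set in the data loader), early stopping on validation accuracy.
import torch

def train(model, train_loader, evaluate, loss_fn, max_epochs=100, patience=3):
    opt = torch.optim.RAdam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
    best_acc, stale = 0.0, 0
    for epoch in range(max_epochs):
        model.train()
        for batch in train_loader:
            opt.zero_grad()
            loss = loss_fn(model, batch)
            loss.backward()
            opt.step()
        acc = evaluate(model)              # validation accuracy
        if acc > best_acc:
            best_acc, stale = acc, 0
        else:
            stale += 1
            if stale >= patience:          # stop when accuracy stops increasing
                break
    return best_acc
```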
C MORE DETAILS OF RPM DATASETS

In the PGM dataset there are two types of elements present in the diagrams, namely shapes and lines. These elements have different attributes such as colour and size. In the PGM dataset, five types of relations can be present in a task: {Progression, AND, OR, XOR, ConsistentUnion}. The RAVEN dataset, compared to PGM, does not have the logic relations AND, OR, XOR, but has the additional relations Arithmetic and Constant. In addition, the RAVEN dataset only allows relations to be present in rows. Figures 7a and 7b show two examples from the PGM dataset (images courtesy of Barrett et al. (2018)). The first example contains a 'Progression' relation of the number of objects across diagrams in columns. The second example contains an 'XOR' relation of the positions of objects across diagrams in rows. In addition to shape objects, diagrams in the PGM dataset can also contain background line objects that appear at fixed locations. Figures 8a and 8b show two examples of PGM tasks containing line objects.

D MORE DETAILS ON SEARCH SPACE REDUCTION

In this section we provide the detailed architecture used for Search Space Reduction, and present additional experimental results. The node embeddings are generated by applying a Conv-Net of 4 convolutional layers (32 filters in each layer) with kernel size 3, and a fully connected layer mapping the flattened final-layer feature maps to a feature vector of size 256. Edge embeddings are generated by a 3-layer MLP of 512-512-256 hidden units. Subset embeddings are generated by a fully connected layer of 512 units. The subset embeddings are gated with the gating variables and summed into a feature vector, which is then fed into the reasoning net, a 3-layer MLP with 256-256-13 units. The output layer contains 13 units: the first unit gives the probability of the currently combined answer choice being true, and the remaining 12 units give the meta-target prediction probabilities. This is the same as Barrett et al. (2018). The training loss function is:

$$L = L_{ans} + \beta L_{meta\text{-}target} + \lambda \left\| \sum_{(i,j,k) \in S} G_{i,j,k} \right\|_{L1} \qquad (5)$$

In our experiments we tested various values of $\lambda$, and found 0.01 to be the best. This model is trained with the RAdam optimizer with a learning rate of 0.0001 and a batch size of 64. After 10 epochs of training, only the gating variables of subsets that are rows and columns are above the 0.5 threshold. The gating variables for the three rows are 0.884, 0.812 and 0.832. The gating variables for the three columns are 0.901, 0.845 and 0.854. All other gating variables are below 0.5; among these, the one with the highest absolute value is 0.411. Table 3 shows the top-16 ranked subsets, with each subset indexed by the 2 connecting edges in the subset. Figure 9 illustrates this way of indexing the subsets. For example, the first column, with red inter-connecting arrows, is indexed as 0-3-6. This indicates that there are two edges, one connecting diagrams 0 and 3, and the other connecting diagrams 3 and 6. Similarly, the subset connected by blue arrows is indexed as 1-2-5. Note that 1-2-5 and 2-1-5 are different, because 1-2-5 contains the edges 1-2 and 2-5 while 2-1-5 contains the edges 1-2 and 1-5.

E MORE DETAILS ON EULER DIAGRAM SYLLOGISM

The original model in Wang et al. (2018a) uses a Siamese Conv-Net model to process two input premise diagrams and output all consistent conclusions. Convolutional layers with shared weights are first applied to the two input diagrams. The top-layer feature maps are then flattened and fed into a reasoning network to make predictions. We simply use CNN grid features of the top-layer feature maps as object-level representations, and use the multi-layer multiplex graph to capture object relations between the two input premise diagrams. We use multiplex edge embeddings of 4 layers, with each layer of dimension 32. The cross-multiplexing here becomes self-multiplexing, as there are only 2 diagrams (only 1 node summary embedding for edges from the first diagram to the second diagram). Final node embeddings are processed by a convolutional layer to produce the final embedding, which is also fed into the reasoning network along with the conv-net embeddings.

F ABLATION STUDY

We performed ablation study experiments to test how much the multiplex edges affect performance. We tested two model variants on the PGM dataset: one without any graph modules, and one with graphs using vanilla edge embeddings produced by MLPs. We found that without graph modules, the model only achieved 83.2% test accuracy. While this is lower than MXGNet's 89.6%, it is still higher than WReN's 76.9%.
This is possibly because the search space reduction, by trimming away non-contributing subsets, allows the model to learn more efficiently. The graph model with vanilla edge embeddings achieves 88.3% accuracy, only slightly lower than MXGNet with multiplex edge embeddings. This shows that while a general graph neural network is a suitable model for capturing relations between objects, the multiplex edge embedding does so more efficiently by allowing parallel relation multiplexing.

G ADDITIONAL GENERALIZATION PERFORMANCE ON PGM DATASET

Table 4 shows the performance of MXGNet on the other splits of the PGM dataset. MXGNet consistently outperforms WReN in test accuracy, except for H.O. Triple Pairs and H.O. shape-color in the case β = 0. Additionally, here we provide the analysis according to Sec 4.2 and Sec 4.6 of Barrett et al. (2018). Unfortunately, the analysis of Sec 4.3 of that paper, namely the analysis of distractors, cannot be performed, as the publicly available dataset does not include any ground-truth labels about distractors, nor any labels of present objects that could be used to synthesize distractor labels. For meta-target prediction, MXGNet achieves 84.1% accuracy. When the meta-target is correctly predicted, the model's target prediction accuracy increases to 92.4%. When the meta-target is incorrectly predicted, the model only has 75.6% accuracy. For the three logical relations, the model performs best on the OR relation (95.3%), and worst on the XOR relation (92.6%). Accuracy for line-type tasks (86.5%) is only slightly better than for shape tasks (80.1%), showing that object representation with graph modelling does improve performance on relations between shapes. The type of relation with the worst performance is ConsistentUnion, with only 75.1% accuracy. This is expected, as ConsistentUnion is in fact a memory task rather than a relational reasoning task.
1. What is the focus of the paper, and what are the proposed approaches?
2. What are the strengths of the paper regarding its results and contributions?
3. What are the weaknesses of the paper regarding its structure, writing, and explanations?
4. How can the paper be improved regarding its presentation and motivation of the main model?
5. Are there any concerns or suggestions for improving the model or its presentation?
Review
This paper proposes using a new version of graph networks – multiplex graph networks – which do object representation followed by some form of graph processing and reasoning to answer "IQ test" style diagrammatic reasoning, in particular including Raven Progressive Matrices that have been previously studied (a little). The paper shows very strong results on multiple datasets, much stronger than previous results (from strong groups) on these datasets. On these grounds, I believe the paper should be accepted. However, the structure and writing of the paper was very frustrating to me. The paper just didn't make much of an attempt to explain and then motivate/analyze the model used. I mean, if I were writing the paper, I would have considered and done many things, such as:
- shortening the introduction
- shortening the related work
- making the presentation of the datasets more succinct
- having only one figure that covers most of what is currently in figures 1 and 2
- putting details that seem more ancillary, like the treatment of background lines objects, in an appendix
- removing Figure 3, which didn't convey much to me in the absence of more careful explanation of the model
so that I could motivate, carefully explain, and evaluate the main model in the paper. But here, all these things fill the main text, and we're told that we have to read the appendices to understand the model... And the presentation in the appendix is more a dump-all-the-facts presentation than a careful development of the design. Nevertheless, the general direction of the architecture seems sound, and the results look very strong, and there are even some useful ablations in the appendix.
ICLR
Title
Abstract Diagrammatic Reasoning with Multiplex Graph Networks
Duo Wang (corresponding author), Mateja Jamnik & Pietro Lio
Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom
{Duo.Wang,Mateja.Jamnik,Pietro.Lio}@cl.cam.ac.uk
Abstract
Abstract reasoning, particularly in the visual domain, is a complex human ability, but it remains a challenging problem for artificial neural learning systems. In this work we propose MXGNet, a multilayer graph neural network for multi-panel diagrammatic reasoning tasks. MXGNet combines three powerful concepts, namely, object-level representation, graph neural networks and multiplex graphs, for solving visual reasoning tasks. MXGNet first extracts object-level representations for each element in all panels of the diagrams, and then forms a multi-layer multiplex graph capturing multiple relations between objects across different diagram panels. MXGNet summarises the multiple graphs extracted from the diagrams of the task, and uses this summarisation to pick the most probable answer from the given candidates. We have tested MXGNet on two types of diagrammatic reasoning tasks, namely Diagram Syllogisms and Raven Progressive Matrices (RPM). For an Euler Diagram Syllogism task MXGNet achieves a state-of-the-art accuracy of 99.8%. For PGM and RAVEN, two comprehensive datasets for RPM reasoning, MXGNet outperforms the state-of-the-art models by a considerable margin.
1 INTRODUCTION
Abstract reasoning has long been thought of as a key part of human intelligence, and a necessary component towards Artificial General Intelligence. When presented with complex scenes, humans can quickly identify elements across different scenes and infer relations between them. For example, when you are using a pile of different types of LEGO bricks to assemble a spaceship, you are actively inferring relations between the LEGO bricks, such as in what ways they can fit together. This type of abstract reasoning, particularly in the visual domain, is a crucial key to the human ability to build complex things. Many tests have been proposed to measure the human ability for abstract reasoning. The most popular test in the visual domain is the Raven Progressive Matrices (RPM) test (Raven (2000)). In the RPM test, the participants are asked to view a sequence of contextual diagrams, usually given as a 3 × 3 matrix of diagrams with the bottom-right diagram left blank. Participants should infer abstract relationships in the rows or columns of the diagram, and pick from a set of candidate answers the correct one to fill in the blank. Figure 1 (a) shows an example of an RPM task containing XOR relations across diagrams in rows. More examples can be found in Appendix C. Another widely used test for measuring reasoning in psychology is the Diagram Syllogism task (Sato et al. (2015)), where participants need to infer conclusions based on 2 given premises. Figure 1 (c) shows an example of an Euler Diagram Syllogism task. Barrett et al. (2018) recently published a large and comprehensive RPM-style dataset named Procedurally Generated Matrices ('PGM'), and proposed the Wild Relation Network (WReN), a state-of-the-art neural net for RPM-style tasks.
While WReN outperforms other state-of-the-art vision models such as the Residual Network (He et al. (2016)), its performance is still far from deep neural nets' performance on other vision or natural language processing tasks. Recently, there has been a focus on object-level representations (Yi et al. (2018); Hu et al. (2017); Hudson & Manning (2018); Mao et al. (2019); Teney et al. (2017); Zellers et al. (2018)) for visual reasoning tasks, which enable the use of inductive-biased architectures such as symbolic programs and scene graphs to directly capture relations between objects. For RPM-style tasks, symbolic programs are less suitable, as such programs are generated from the given questions in the Visual Question Answering setting, and in RPM-style tasks there are no explicit questions. Encoding RPM tasks into graphs is a more natural choice. However, previous works on scene graphs (Teney et al. (2017); Zellers et al. (2018)) model a single image as a graph, which is not suitable for RPM tasks, as there are many different layers of relations across different subsets of diagrams in a single task. In this paper we introduce MXGNet, a multi-layer multiplex graph neural net architecture for abstract diagram reasoning. Here 'multi-layer' means that the graphs are built across different diagram panels, where each diagram is a layer. 'Multiplex' means that the edges of the graphs encode multiple relations between different element attributes, such as colour, shape and position. Multiplex networks are discussed in detail by Kao & Porter (2018). We first tested the application of the multiplex graph on a Diagram Syllogism dataset (Wang et al. (2018a)), and confirmed that the multiplex graph improves the performance of the original model. For the RPM task, MXGNet encodes subsets of diagram panels into multi-layer multiplex graphs, and combines the summarisations of several graphs to predict the correct candidate answer. With a hierarchical summarisation scheme, each graph is summarised into feature embeddings representing the relationships in the subset. These relation embeddings are then combined to predict the correct answer. For the PGM dataset (Barrett et al. (2018)), MXGNet outperforms WReN, the previous state-of-the-art model, by a considerable margin. For the 'neutral' split of the dataset, MXGNet achieves 89.6% test accuracy, 12.7% higher than WReN's 76.9%. For other splits MXGNet consistently performs better, with smaller margins. For the RAVEN dataset (Zhang et al. (2019)), MXGNet, without any auxiliary training with additional labels, achieves 83.91% test accuracy, outperforming the 59.56% accuracy of the best model with auxiliary training for the RAVEN dataset. We also show that MXGNet is robust to variations in the form of the object-level representations: both variants of MXGNet achieve higher test accuracies than the existing best models for the two datasets.
2 RELATED WORK
Raven Progressive Matrices: Hoshen & Werman (2017) proposed a neural network model for Raven-style reasoning tasks that are a subset of complete RPM problems. Their model is based on a Convolutional Network, and has been demonstrated to be ineffective on complete RPM tasks (Barrett et al. (2018)). Mandziuk & Zychowski also experimented with an auto-encoder based neural net on simple single-shape RPM tasks. Barrett et al. (2018) built PGM, a complete RPM dataset, and proposed WReN, a neural network architecture based on the Relation Network (Santoro et al. (2017)). Steenbrugge et al. (2018) replace the CNN part of WReN with a pre-trained Variational Auto-Encoder and slightly improve performance.
Zhang et al. (2019) built RAVEN, an RPM-style dataset with structured labels of the elements in the diagrams in the form of parsing trees, and proposed Dynamic Residual Trees, a simple tree neural network for learning with these additional structures. Anonymous (2020) applies multi-head attention (Vaswani et al. (2017)), originally developed for language models, to RPM tasks.
Visual Reasoning: The RPM test falls in the broader category of visual reasoning. One widely explored visual reasoning task is Visual Question Answering (VQA). Johnson et al. (2017) built CLEVR, a VQA dataset that focuses on visual reasoning rather than the information retrieval of traditional VQA datasets. The current leading approaches (Yi et al. (2018); Mao et al. (2019)) on the CLEVR dataset generate synthetic programs from the questions in the VQA setting, and use these programs to process object-level representations extracted with object detection models (Ren et al. (2015)). This approach is not applicable to RPM-style problems, as there is no explicit question present for program synthesis.
Graph Neural Networks: Recently there has been a surge of interest in applying Graph Neural Networks (GNNs) to datasets that are inherently structured as graphs, such as social networks. Many variants of GNNs (Li et al. (2015); Hamilton et al. (2017); Kipf & Welling (2016); Veličković et al. (2017)) have been proposed, all based on the same principle of learning feature representations of nodes by recursively aggregating information from neighbouring nodes and edges. Recent methods (Teney et al. (2017); Zellers et al. (2018)) extract graph structures from visual scenes for visual question answering. These methods build scene graphs in which nodes represent parts of the scene, and edges capture the relations between these parts. Such methods are only applied to scenes in a single image. For multi-image tasks such as video classification, Wang et al. (2018b) proposed non-local neural networks, which extract dense graphs where pixels in feature maps are connected to all other feature map pixels in the space-time dimensions.
3 REASONING TASKS
3.1 DIAGRAM SYLLOGISM
Syllogism is a reasoning task where a conclusion is drawn from two given assumed propositions (premises). One well-known example is 'Socrates is a man, all men will die, therefore Socrates will die'. Syllogisms can be conveniently represented using many types of diagrams (Al-Fedaghi (2017)), such as Euler diagrams and Venn diagrams. Figure 1 (c) shows an example of Euler diagram syllogism. Wang et al. (2018a) developed Euler-Net, a neural net architecture that tackles Euler diagram syllogism tasks. However, Euler-Net is just a simple Siamese Conv-Net, which does not guarantee scalability to more entities in the diagrams. We show that the addition of the multiplex graph both improves performance and scales to more entities.
3.2 RAVEN PROGRESSIVE MATRICES
In this section we briefly describe Raven Progressive Matrices (RPM) in the context of the PGM dataset (Barrett et al. (2018)) and the RAVEN dataset (Zhang et al. (2019)). RPM tasks usually have 8 context diagrams and 8 answer candidates. The context diagrams are laid out in a 3 × 3 matrix $C$, where $c_{1,1}, \dots, c_{3,2}$ are context diagrams and $c_{3,3}$ is a blank diagram to be filled with 1 of the 8 answer candidates $A = \{a_1, \dots, a_8\}$. One or more relations are present in the rows and/or columns of the matrix. For example, in Figure 1 (a), there is an XOR relation on the positions of objects in the rows of diagrams.
With the correct answer filled in, the third row and column must satisfy all relations present in the first 2 rows and columns (in the RAVEN dataset, relations are only present in rows). In addition to labels of correct candidate choice, both datasets also provide labels of meta-targets for auxiliary training. The meta-target of a task is a multi-hot vector encoding tuples of (r, o, a) where r is the type of a relation present, o is the object type and a is the attribute. For example, the meta-target for Figure 1 (a) encodes (XOR,Shape, Position). The RAVEN dataset also provides additional structured labels of relations in the diagram. However, we found that structured labels do not improve results, and therefore did not use them in our implementation. 4 METHOD MXGNet is comprised of three main components: an object-level representation module, a graph processing module and a reasoning module. Figure 1a shows an overview of the MXGNet architecture. The object-level representation module Fρ, as the name suggests, extracts representations of objects in the diagrams as nodes in a graph. For each diagram di ⊂ C ∪A, a set of nodes vi,j ; i = 1 . . . L, j = 1 . . . N is extracted where L is the number of layers and N is the number of nodes per layer. We experimented with both fixed and dynamically learnt N values. We also experimented with an additional ‘background’ encoder that encodes background lines (See Appendix C for an example containing background lines) into a single vector, which can be considered as a single node. The multiplex graph module Gφ, for a subset of diagrams, learns the multiplex edges capturing multiple parallel relations between nodes in a multi-layer graph where each layer corresponds to one diagram in the subset, as illustrated in Figure 1 (c). In MXGNet, we consider a subset of cardinality 3 for 3 × 3 diagram matrices. While prior knowledge of RPM rules allows us to naturally treat rows and columns in RPM as subsets, this prior does not generalise to other types of visual reasoning problems. Considering all possible diagram combinations as subsets is computationally expensive. To tackle this, we developed a relatively quick pre-training method to greatly reduce the search space of subsets, as described below. Search Space Reduction: We can consider each diagram as node vdi in a graph, where relations between adjacent diagrams are embedded as edges edij . Note here we are considering the graph of ’diagrams’, which is different from the graph of ’objects’ in the graph processing modules. Each subset of 3 diagrams in this case can be considered as subset of 2 edges. We here make weak assumptions that edges exist between adjacent diagrams (including vertical, horizontal and diagonal direction) and edges in the same subset must be adjacent (defined as two edges linking the same node), which are often used in other visual reasoning problems. We denote the subset of edges as {edij , edjk}. We use 3 neural nets to embed nodes, edges and subsets. We use CNNs to embed diagram nodes into feature vectors, and MLPs to embed edges based on node embeddings and subsets based on edge embeddings. While it is possible to include graph architectures for better accuracy, we found that simple combinations of CNNs and MLPs train faster while still achieving the search space reduction results. This architecture first embeds nodes, then embeds edges based on node embedding, and finally embed subsets based on edge embedding. 
The subset embeddings are summed and passed through a reasoning network to predict answer probability, similar to WReN (Barrett et al. (2018)). For the exact configuration of the architecture used please refer to Appendix A. For each subset{edij , edjk} , we define a gating variable Gijk, controlling how much does each subset contributes to the final result. In practice we use tanh function, which allows a subset to contribute both positively and negatively to the final summed embeddings. In training we put L1 regularization constraint on the gating variables to suppress Gijk of non-contributing subsets close to zero. This architecture can quickly discover rows and columns as contributing subsets while leaving gating variables of other subsets not activated. We describe the experiment results in section 5.1. While this method is developed for discovering reasoning rules for RPM task, it can be readily applied to any other multi-frame reasoning task for search space reduction. In the rest of the paper, we hard-gate subsets by rounding the gating variables, thereby reducing subset space to only treat rows and columns as valid subsets. We treat the first 2 rows and columns as contextual subsets ci,j where i and j are row and column indices. For the last row and column, where the answers should be filled in, we fill in each of the 8 answer candidates, and make 8 row subsets ai, i ⊂ [1, 8] and 8 column subsets ai, i ⊂ [1, 8]. The graph module then summarises the graph of objects in a subset into embeddings representing relations present in the subset. The reasoning module Rθ takes embeddings from context rows/columns and last rows/columns with different candidate answers filled in, and produce normalised probability of each answer being true. It also predicts meta-target for auxiliary training using context rows/columns. Next, we describe each module in detail. 4.1 OBJECT-LEVEL REPRESENTATION In the PGM dataset there are two types of objects, namely ‘shapes’ and background ‘lines’. While it is a natural choice to use object-level representation on shapes as they are varying in many attributes such as position and size, it is less efficient on background lines as they only vary in colour intensity. In this section we first describe object-level representation applied to ‘shapes’ objects, and then discuss object-level representation on ’lines’ and an alternative background encoder which performs better. In MXGNet we experiment with two types of object-level representations for ‘shapes’, namely CNN grid features and representation obtained with spatial attention. For CNN grid features, we use each spatial location in the final CNN feature map as the object feature vector. Thus for each feature maps of width W and height H , N =W ×H object representations are extracted. This type of representation is used widely, such as in Relation Network (Santoro et al. (2017)) and VQ-VAE (van den Oord et al. (2017)). For representation obtained with attention, we use spatial attention to attend to locations of objects, and extract representations for each object attended. This is similar to objection detection models such as faster R-CNN (Ren et al. (2015)), which use a Region Proposal Network to propose bounding boxes of objects in the input image. For each attended location a presence variable zpres is predicted by attention module indicating whether an object exists in the location. Thus the total number of objects N can vary depending on the sum of zpres variables. 
As object-level representation is not the main innovation of this paper, we leave exact details for Appendix A.1. For background ‘lines’ objects, which are not varying in position and size, spatial attention is not needed. We experimented with a recurrent encoder with Long-Short Term Memory (Hochreiter & Schmidhuber (1997)) on the output feature map of CNN, outputting M number of feature vectors. However, in the experiment we found that this performs less well than just feature map embeddings produced by feed-forward conv-net encoder. 4.2 MULTIPLEX GRAPH NETWORK Multiplex Edge Embedding:The object-level representation module outputs a set of representations vi,j ; i ⊂ [1, L], j ⊂ [1, N ] for ‘shapes’ objects, where L is the number of layers (cardinality of subset of diagrams) and N is the number of nodes per layer. MXGNet uses an multiplex edge-embedding network Eγ to generate edge embeddings encoding multiple parallel relation embeddings: et(i,j),(l,k) = E t γ(P k(vi,j , vl,k)); i 6= l, t = 1 . . . T (1) Here P t is a projection layer projecting concatenated node embeddings to T different embeddings. Et is a small neural net processing tth projections to produce the tth sub-layer of edge embeddings. Here, we restricted the edges to be inter-layer only, as we found using intra-layer edges does not improve performance but increases computational costs. Figure 2 illustrates these multiplex edge embeddings between nodes of different layers. We hypothesise that different layers of the edge embeddings encode similarities/differences in different feature spaces. Such embeddings of similarities/differences are useful in comparing nodes for subsequent reasoning tasks. For example,for Progessive relation of object sizes, part of embeddings encoding size differences can be utilized to check if nodes in later layers are larger in size. This is similar to Mixture of Experts layers (Eigen et al. (2013); Shazeer et al. (2017)) introduced in Neural Machine Translation tasks. However, in this work we developed a new cross-multiplexing gating function at the node message aggregation stage, which is described below. Graph Summarisation: After edge embeddings are generated, the graph module then summarises the graph into a feature embedding representing relations present in the subset of diagrams. We aggregate information in the graph to nodes of the last layer corresponding to the third diagram in a row or column, because in RPM tasks the relations are in the form Diagram3 = Function(Diagram1, Diagram2). All edges connecting nodes in a particular layer vi,j ; i 6= L, to a node vL,k in the last layer L are aggregated by a function Fag composed of four different types of set operations, namely max, min, sum and mean: fvi,k = Fag(e(i,1),(L,k) . . . e(i,1),(L,k));Fag = concat(max(),min(), sum(),mean()) (2) We use multiple aggregation functions together because different sub-tasks in reasoning may require different types of summarization. For example, counting number of objects is better suited for sum while checking if there is a object with the same size is better suited for max. The aggregated node information from each layer is then combined with a cross-multiplexing gating function. It is named ’cross-multiplexing’ because each embeddings in the set are ’multiplexing’ other embeddings in the set with gating variables that regulate which stream of information pass through. This gating function accepts a set of summarised node embeddings {fv1,k . . . 
fvN,k} as input, and output gating variables for each layer of node embeddings in the set: g1,k . . .gN,k = G(fv1,k . . . fvN,k);gi,k = {g1i,k . . . gTi,k} (3) In practice G is implemented as an MLP with multi-head outputs for different embeddings, and Sigmoid activation which constrains gating variable g within the range of 0 to 1. The node embeddings of different layers are then multiplied with the gating variables, concatenated and passed through a small MLP to produce the final node embeddings: fvk = MLP (concat({fvi,k × g(i, k)|i = 1 . . . N})). Node embeddings and background embeddings are then concatenated and processed by a residual neural block to produce final relation feature embeddings r of the diagram subset. 4.3 REASONING NETWORK The reasoning network takes relation feature embeddings r from all graphs, and infers the correct answer based on these relation embeddings. We denote the relation embeddings for context rows as rcri ; i = 1, 2 and context columns as rcci ; i = 1, 2. The last row and column filled with each answer candidate ai are denoted rari ; i = 1, . . . , 8 and r ac i ; i = 1, . . . , 8. For the RAVEN dataset, only row relation embeddings r cr and rar are used, as discussed in Section 3.2. The reasoning network Rθ is a multi-layer residual neural net with a softmax output activation that processes concatenated relation embeddings and outputs class probabilities for each answer candidate. The exact configuration of the reasoning network can be found in Appendix A.3. For meta-target prediction, all relation information is contained in the context rows and columns of the RPM task. Therefore, we apply a meta-predicting network Rmeta with Sigmoid output activation to all context rows and columns to obtain probabilities of each meta-target categories: pmeta = Rmeta(r cr 1 + r cr 2 + r cc 1 + r cc 2 ) (4) 4.4 TRAINING The full pipeline of MXGNet is end-to-end trainable with any gradient descent optimiser. In practice, we used RAdam optimiser (Liu et al. (2019)) for its fast convergence and robustness to learning rate differences. The loss function for the PGM dataset is the same as used in WReN (Barrett et al. (2018)): L = Lans + βLmeta−target where β balances the training between answer prediction and meta-target prediction. For the RAVEN dataset, while the loss function can include auxiliary meta-target and structured labels as L = Lans + αLstruct + βLmeta−target, we found that both auxiliary targets do not improve performance, and thus set α and β to 0. 5 EXPERIMENTS 5.1 SEARCH SPACE REDUCTION The Search Space Reduction model is applied on both PGM and RAVEN dataset to reduce the subset space. After 10 epochs, only gating variables of rows and columns subset for PGM and of rows for RAVEN have value larger than 0.5. The Gating variables for three rows are 0.884, 0.812 and 0.832. The gating variables for three columns are 0.901, 0.845 and 0.854. All other gating variables are below the threshold value of 0.5. Interestingly all activated (absolute value > 0.5) gating variables are positive. This is possibly because it is easier for the neural net to learn an aggregation function than a comparator function. Exact experiment statistics can be found in Appendix D. 5.2 DIAGRAM SYLLOGISM PERFORMANCE We first test how well can the multiplex graph network capture relations for the simple Diagram Syllogism task. We simply add the multiplex graph to the original Conv-Net used in (Wang et al. (2018a)). 
MXGNet achieved 99.8% accuracy on both 2-contour and 3-contour tasks, higher than the original paper’s 99.5% and 99.4% accuracies. The same performance on 2-contour and 3-contour tasks also show that MXGNet scales better for more entities in the diagram. For more details please refer to Appendix E. 5.3 RPM TASK PERFORMANCES In this section we compare all variants of MXGNet against the state-of-the-art models for the PGM and the RAVEN datasets. For the PGM dataset, we tested against results of WReN (Barrett et al. (2018)) in the auxiliary training setting with β value of 10. In addition, we also compared MXGNet with VAE-WReN (Steenbrugge et al. (2018))’s result without auxiliary training. For the RAVEN dataset, we compared with WReN and ResNet model’s performance as reported in the original paper (Zhang et al. (2019)). We evaluated MXGNet with different object-level representations (Section 4.1) on the test data in the ‘neutral’ split of the PGM dataset. Table 1 (a) shows test accuracies of model variants compared with WReN and VAE-WReN for the case without auxiliary training (β = 0) and with auxiliary training (β = 10) for the PGM dataset. Both model variants of MXGNet outperform other models by a considerable margin, showing that the multi-layer graph is indeed a more suitable way to capture relations in the reasoning task. Model variants using grid features from the CNN feature maps slightly outperform model using spatial-attention-based object representations for both with and without auxiliary training settings. This is possibly because the increased number of parameters for the spatial attention variant leads to over-fitting, as the training losses of both model variants are very close. In our following experiments for PGM we will use model variants using CNN features to report performances. Table 1 (b) shows test accuracies of model variants compared with WReN the best performing ResNet models for RAVEN dataset. WReN surprisingly only achieves 14.69% as tested by Zhang et al. (2019). We include results of the ResNet model with or without Dynamic Residual Trees (DRT) which utilise additional structure labels of relations. We found that for the RAVEN dataset, auxiliary training of MXGNet with meta-target or structure labels does not improve performance. Therefore, we report test accuracies of models trained only with the target-prediction objective. Both variants of MXGNet significantly outperform the ResNet models. Models with spatial attention object-level representations under-perform simpler CNN features slightly, most probably due to overfitting, as the observed training losses of spatial attention models are in fact lower than CNN feature models. 5.4 GENERALISATION EVALUATION FOR PGM In the PGM dataset, other than the neutral data regime in which test dataset’s sampling space is the same as the training dataset, there are also other data regimes which restrict the sampling space of training or test data to evaluate the generalisation capability of a neural network. In the main paper, due to space limitations, we selected 2 representative regimes, the ‘interpolation’ regime and the ‘extrapolation’ regime to report results. For results of other data splits of PGM, please refer to Appendix G. For ‘interpolation’ regime, in the training dataset, when attribute a = color and a = size, the values of a are restricted to even-indexed values in the spectrum of a values. This tests how well can a model ‘interpolate’ for missing values. 
For ‘Extrapolation’ regime, in the training dataset, the value of a is restricted to be the lower half of the value spectrum. This tests how well can a model ‘extrapolate’ outside of the value range in the training dataset. Table 2 shows validation and test accuracies for all three data regimes with and without auxiliary training. In addition, differences between validation and test accuracies are also presented to show how well can models generalise. MXGNet models consistently perform better than WReN for all regimes tested. Interesting for ’Interpolation’ regime, while validation accuracy of MXGNet is lower than WReN, the test accuracy is higher. In addition, for regime ‘Interpolation’ and ‘Extrapolation’, MXGNet also shows a smaller difference between validation and test accuracy. These results show that MXGNet has better capability of generalising outside of the training space. 6 DISCUSSION AND CONCLUSION We presented MXGNet, a new graph-based approach to diagrammatic reasoning problems in the style of Raven Progressive Matrices (RPM). MXGNet combines three powerful ideas, namely, object-level representation, graph neural networks and multiplex graphs, to capture relations present in the reasoning task. Through experiments we showed that MXGNet performs better than previous models on two RPM datasets. We also showed that MXGNet has better generalisation performance. One important direction for future work is to make MXGNet interpretable, and thereby extract logic rules from MXGNet. Currently, the learnt representations in MXGNet are still entangled, providing little in the way of understanding its mechanism of reasoning. Rule extraction can provide people with better understanding of the reasoning problem, and may allow neural networks to work seamlessly with more programmable traditional logic engines. While the multi-layer multiplex graph neural network is designed for RPM style reasoning task, it can be readily extended to other diagrammatic reasoning tasks where relations are present between multiple elements across different diagrams. One example of a real-world application scenario is robots assembling parts of an object into a whole, such as building a LEGO model from a room of LEGO blocks. MXGNet provides a suitable way of capturing relations between parts, such as ways of piecing and locking two parts together. A ARCHITECTURE In this section we present exact configurations of all model variants of MXGNet. Due to the complexity of architectures, we will describe each modules in sequence. The object-level representation has two variations which are (o1) CNN features and (o2) Spatial Attention features. Also the models for PGM and RAVEN dataset differ in details. Unless otherwise stated, in all layers we apply Batch Normalization Ioffe & Szegedy (2015) and use Rectified Linear Unit as activation function. A.1 OBJECT-LEVEL REPRESENTATION ARCHITECTURE CNN features: The first approach applies a CNN on the input image and use each spatial location in the final CNN feature map as the object feature vector. This type of representation is used widely, such as in Relation Network Santoro et al. (2017) and VQ-VAE van den Oord et al. (2017). Formally, the output of a CNN is a feature map tensor of dimension H ×W ×D where H , W and D are respectively height, width and depth of the feature map. At each H and W location, an object vector is extracted. 
Spatial Attention Object-level representation: The second approach is to use spatial attention to attend to the locations of objects, and to extract a representation for each attended object. This is similar to object detection models such as Faster R-CNN (Ren et al. (2015)), which use a Region Proposal Network to propose bounding boxes of objects in the input image. In practice, we use a Spatial Transformer (Jaderberg et al. (2015)) as our spatial attention module. Figure 5 shows the architecture used for extracting object-level representations using spatial attention. A CNN composed of one conv layer and two residual blocks is first applied to the input image, and the last-layer feature map is extracted. This part is the same as in the CNN grid feature module. A spatial attention network composed of two conv layers then processes the information at each spatial location on the feature map, and outputs k tuples z = (z^{pres}, z^{where}), corresponding to k possible objects at each location. Here, z^{pres} is a binary value indicating whether an object exists at this location, and z^{where} is an affine transformation matrix specifying a sampling region on the feature maps. The binary variable z^{pres} is sampled from the Gumbel-Sigmoid distribution (Maddison et al. (2016); Jang et al. (2016)), which approximates the Bernoulli distribution. We set the Gumbel temperature to 0.7 throughout the experiments. For the PGM dataset we restricted k to 1 and z^{where} to a translation and scaling matrix, as 'shapes' objects do not overlap and do not have affine transformation attributes other than scaling and translation. For all z_i, i ∈ [1, H × W], if z^{pres}_i is 1, an object encoder network samples a patch from the location specified by z^{where}_i using a grid sampler with a fixed window size of 4 × 4 pixels. More details of the grid sampler can be found in Jaderberg et al. (2015). The sampled patches are then processed by a conv layer to generate object embeddings.

A.2 GRAPH NETWORKS

Multiplex Edge Embeddings: Figure 2 in the main paper shows an overview of the multiplex graph architecture. While the motivation and overview of the architecture are explained in Section 4.2 of the main paper, in this section we provide the exact configuration of each part of the model. Each sub-layer of the multiplex edge is embedded by a small MLP. For the PGM dataset, we use 6 parallel layers for each multiplex edge embedding, with each layer having 32 hidden units and 8 output units. For the RAVEN dataset we use 4 layers with 16 hidden units and 8 output units, because the RAVEN dataset contains fewer relation types than the PGM dataset. The gating function is implemented as a single fully connected layer with Sigmoid activation, whose hidden size equals the length of the concatenated aggregated embeddings. The gating variables are element-wise multiplied with the concatenated embeddings for the gating effect. The gated embeddings are then processed with a final fully connected layer with a hidden size of 64. A sketch of this edge-embedding and gating module follows.
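The sketch below illustrates the multiplex edge embedding and the cross-multiplexing gate described above, using the PGM sizes (6 parallel sub-layers, 32 hidden and 8 output units) and the max/min/sum/mean aggregation from Section 4.2. Class names, the pairwise interface, and tensor shapes are illustrative assumptions, not the exact implementation.

```python
import torch
import torch.nn as nn

class MultiplexEdge(nn.Module):
    """T parallel projections, each followed by a small MLP producing one
    sub-layer of the multiplex edge embedding (T=6, 32 hidden, 8 out for PGM)."""
    def __init__(self, node_dim, T=6, hidden=32, out=8):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(2 * node_dim, hidden) for _ in range(T))
        self.mlp = nn.ModuleList(
            nn.Sequential(nn.ReLU(), nn.Linear(hidden, out)) for _ in range(T))

    def forward(self, v_src, v_dst):        # two node embeddings, (B, node_dim) each
        pair = torch.cat([v_src, v_dst], dim=-1)
        # concatenate the T sub-layer embeddings -> (B, T*out)
        return torch.cat([m(p(pair)) for p, m in zip(self.proj, self.mlp)], dim=-1)

class GatedAggregation(nn.Module):
    """Aggregates each layer's edges with max/min/sum/mean, then applies the
    gate (one Sigmoid fully connected layer, element-wise multiplied with the
    concatenated embeddings) and the final fully connected layer of size 64."""
    def __init__(self, edge_dim, layers, final_hidden=64):
        super().__init__()
        concat_dim = 4 * edge_dim * layers      # 4 set operations per layer
        self.gate = nn.Sequential(nn.Linear(concat_dim, concat_dim), nn.Sigmoid())
        self.out = nn.Linear(concat_dim, final_hidden)

    def forward(self, edges_per_layer):         # list of (B, N_edges, edge_dim)
        agg = []
        for e in edges_per_layer:
            agg.append(torch.cat(
                [e.max(1).values, e.min(1).values, e.sum(1), e.mean(1)], dim=-1))
        f = torch.cat(agg, dim=-1)               # (B, concat_dim)
        return self.out(self.gate(f) * f)        # gated, then final FC
```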
Graph Summarization: This module summarizes all node summary embeddings and background embeddings to produce a diagram-subset embedding representing the relations present in the set of diagrams. We experimented with various approaches and found that keeping the embeddings as feature maps and processing them with residual blocks yields the best results. Background feature map embeddings are generated with one additional residual block of width 48 on top of the lower-layer feature-extracting ResNet. For object representations obtained from CNN grid features, we can simply reshape the node embeddings into a feature map and process it with additional conv-nets to generate feature map embeddings of the same dimension as the background feature map embeddings. For object representations obtained with spatial attention, we can use another Spatial Transformer to write the node summary embeddings to their corresponding locations on a canvas feature map. Finally, we concatenate the node summary embeddings and background embeddings and process them with 2 residual blocks of size 64 to produce the relation embeddings.

A.3 REASONING NETWORK

Figure 6 shows the reasoning network configuration for RPM tasks. We experimented with the approach introduced in Barrett et al. (2018), which computes a score for each answer candidate and finally normalizes the scores. We found that this approach leads to severe overfitting on the RAVEN dataset, and therefore used a simpler approach that just concatenates all relation embeddings and processes them with a neural net. In practice we used two residual blocks of size 128 and 256, and a final fully connected layer with 8 units corresponding to the 8 answer candidates. The output is normalized with a softmax layer. For meta-target prediction, all context relation embeddings (context rows and columns for PGM, and only rows for the RAVEN dataset) are summed and fed into a fully connected prediction layer with Sigmoid activation. For PGM there are 12 different meta-targets, while for RAVEN there are 9.

B TRAINING DETAILS

The architecture is implemented in the PyTorch framework. During training, we used the RAdam optimizer (Liu et al. (2019)) with a learning rate of 0.0001, β1 = 0.9 and β2 = 0.999. We used a batch size of 64, and distributed the training across 2 Nvidia GeForce Titan X GPUs. We stop training early when the validation accuracy stops increasing. A sketch of this setup is shown below.
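As a concrete illustration of the recipe in Appendix B, the sketch below wires up RAdam with the stated hyper-parameters, the combined answer/meta-target loss, and accuracy-based early stopping. The model, dataset, and evaluation function are assumed to be provided; the patience value and epoch budget are assumptions, since the paper only states that training stops when validation accuracy stops increasing.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def train(model, train_set, evaluate_fn, beta=10.0, patience=5, max_epochs=100):
    """Training-loop sketch: RAdam (lr 1e-4, betas 0.9/0.999), batch size 64,
    early stopping on validation accuracy. `model` returns (answer logits,
    meta-target logits); `evaluate_fn` returns validation accuracy. `patience`
    and `max_epochs` are assumed values."""
    optimizer = torch.optim.RAdam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
    loader = DataLoader(train_set, batch_size=64, shuffle=True)
    best_val, bad_epochs = 0.0, 0
    for epoch in range(max_epochs):
        model.train()
        for context, candidates, target, meta_target in loader:
            optimizer.zero_grad()
            logits, meta_logits = model(context, candidates)
            loss = F.cross_entropy(logits, target)               # L_ans
            loss = loss + beta * F.binary_cross_entropy_with_logits(
                meta_logits, meta_target.float())                # beta * L_meta-target
            loss.backward()
            optimizer.step()
        val_acc = evaluate_fn(model)
        if val_acc > best_val:                                   # early stopping
            best_val, bad_epochs = val_acc, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    return model
```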
C MORE DETAILS OF RPM DATASETS

In the PGM dataset there are two types of elements present in a diagram, namely shapes and lines. These elements have different attributes, such as colour and size. In the PGM dataset, five types of relations can be present in a task: {Progression, AND, OR, XOR, ConsistentUnion}. The RAVEN dataset, compared to PGM, does not have the logic relations AND, OR and XOR, but has the additional relations Arithmetic and Constant. In addition, the RAVEN dataset only allows relations to be present in rows. Figures 7a and 7b show two examples from the PGM dataset (images courtesy of Barrett et al. (2018)). The first example contains a 'Progression' relation on the number of objects across diagrams in columns. The second example contains an 'XOR' relation on the positions of objects across diagrams in rows. In addition to shape objects, diagrams in the PGM dataset can also contain background line objects that appear at fixed locations. Figures 8a and 8b show two examples of PGM tasks containing line objects.

D MORE DETAILS ON SEARCH SPACE REDUCTION

In this section we provide the detailed architecture used for search space reduction, and present additional experimental results. The node embeddings are generated by applying a conv-net of 4 convolutional layers (32 filters in each layer, kernel size 3) followed by a fully connected layer mapping the flattened final-layer feature maps to a feature vector of size 256. Edge embeddings are generated by a 3-layer MLP with 512-512-256 hidden units. Subset embeddings are generated by a fully connected layer of 512 units. The subset embeddings are gated with the gating variables and summed into a feature vector, which is then fed into the reasoning net, a 3-layer MLP with 256-256-13 units. The output layer contains 13 units. The first unit gives the probability of the currently combined answer choice being true. The remaining 12 units give the meta-target prediction probabilities. This is the same as in Barrett et al. (2018). The training loss function is:

$$\mathcal{L} = \mathcal{L}_{ans} + \beta\,\mathcal{L}_{meta\text{-}target} + \lambda\,\Big\|\sum_{(i,j,k)\in S} G_{i,j,k}\Big\|_{L_1} \qquad (5)$$

In our experiments we tested various values of λ, and found 0.01 to be the best. This model is trained with the RAdam optimizer with a learning rate of 0.0001 and a batch size of 64. After 10 epochs of training, only the gating variables of the subsets that are rows and columns are above the 0.5 threshold. The gating variables for the three rows are 0.884, 0.812 and 0.832. The gating variables for the three columns are 0.901, 0.845 and 0.854. All other gating variables are below 0.5; among these, the one with the highest absolute value is 0.411. Table 3 shows the top-16 ranked subsets, with each subset indexed by the 2 connecting edges in the subset. Figure 9 illustrates this way of indexing a subset. For example, the first column, with red inter-connecting arrows, is indexed as 0-3-6. This indicates that there are two edges, one connecting diagrams 0 and 3, and the other connecting diagrams 3 and 6. Similarly, the subset connected by blue arrows is indexed as 1-2-5. Note that 1-2-5 and 2-1-5 are different, because 1-2-5 contains edges 1-2 and 2-5, while 2-1-5 contains edges 1-2 and 1-5. A sketch of the gated subset pooling and its L1 penalty follows.
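The module below sketches the gating mechanism of Eq. (5): tanh gates scale the subset embeddings, which are summed into a pooled feature vector, and an L1 penalty is returned for the loss. Here the norm in Eq. (5) is interpreted as an L1 penalty over the gating variables, consistent with the description in Section 4; the class name, gate initialization, and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GatedSubsetPooling(nn.Module):
    """Tanh-gated sum of subset embeddings with an L1 penalty on the gates.
    Subset enumeration and the node/edge/subset embedding networks are assumed
    to exist elsewhere in the pipeline."""
    def __init__(self, num_subsets, lam=0.01):
        super().__init__()
        # one learnable gate per candidate subset; zero init is an assumption
        self.raw_gates = nn.Parameter(torch.zeros(num_subsets))
        self.lam = lam

    def forward(self, subset_embeddings):      # (B, num_subsets, D)
        gates = torch.tanh(self.raw_gates)     # tanh allows +/- contributions
        pooled = (gates[None, :, None] * subset_embeddings).sum(dim=1)  # (B, D)
        l1_penalty = self.lam * gates.abs().sum()   # lambda * ||G||_1
        return pooled, l1_penalty

# total loss: loss = loss_ans + beta * loss_meta + l1_penalty;
# after training, only subsets with |gate| > 0.5 (rows and columns) are kept.
```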
E MORE DETAILS ON EULER DIAGRAM SYLLOGISM

The original model in Wang et al. (2018a) uses a Siamese conv-net to process the two input premise diagrams and output all consistent conclusions. Convolutional layers with shared weights are first applied to the two input diagrams. The top-layer feature maps are then flattened and fed into a reasoning network to make predictions. We simply use CNN grid features of the top-layer feature maps as object-level representations, and use the multi-layer multiplex graph to capture object relations between the two input premise diagrams. We use multiplex edge embeddings with 4 layers, each of dimension 32. The cross-multiplexing here becomes self-multiplexing, as there are only 2 diagrams (there is only one node summary embedding, for edges from the first diagram to the second). The final node embeddings are processed by a convolutional layer to produce the final embedding, which is fed into the reasoning network along with the conv-net embeddings.

F ABLATION STUDY

We performed ablation experiments to test how much the multiplex edges affect performance. We tested two model variants on the PGM dataset: one without any graph modules, and one whose graphs use vanilla edge embeddings produced by MLPs. We found that without graph modules, the model only achieves 83.2% test accuracy. While this is lower than MXGNet's 89.6%, it is still higher than WReN's 76.9%. This is possibly because the search space reduction, by trimming away non-contributing subsets, allows the model to learn more efficiently. The graph model with vanilla edge embeddings achieves 88.3% accuracy, only slightly lower than MXGNet with multiplex edge embeddings. This shows that while a general graph neural network is a suitable model for capturing relations between objects, the multiplex edge embeddings do so more efficiently by allowing parallel relation multiplexing.

G ADDITIONAL GENERALIZATION PERFORMANCE ON THE PGM DATASET

Table 4 shows the performance of MXGNet on the other splits of the PGM dataset. MXGNet consistently outperforms WReN in test accuracy, except for H.O. Triple Pairs and H.O. shape-color in the case β = 0. Additionally, we provide here the analysis according to Sections 4.2 and 4.6 of Barrett et al. (2018). Unfortunately, the analysis of distractors in Section 4.3 of that paper cannot be performed, as the publicly available dataset does not include any ground-truth labels about distractors, nor any labels of present objects that could be used to synthesize distractor labels. For meta-target prediction, MXGNet achieves 84.1% accuracy. When the meta-target is correctly predicted, the model's target prediction accuracy increases to 92.4%. When the meta-target is incorrectly predicted, the model only has 75.6% accuracy. Among the three logical relations, the model performs best on the OR relation (95.3%) and worst on the XOR relation (92.6%). Accuracy on line-type tasks (86.5%) is only slightly better than on shape tasks (80.1%), showing that object representation with graph modelling does improve on relations between shapes. The relation type with the worst performance is ConsistentUnion, with only 75.1% accuracy. This is expected, as ConsistentUnion is in fact a memory task rather than a relational reasoning task.
1. What is the main contribution of the paper, and how does it improve upon previous work in abstract diagrammatic reasoning? 2. What are the strengths and weaknesses of the proposed model architecture, particularly regarding its ability to exploit information at multiple granularities? 3. How does the reviewer assess the quality of the paper's writing, and what specific issues do they identify? 4. What clarifications does the reviewer suggest would be helpful in improving the paper's technical writing? 5. How does the reviewer evaluate the novelty and effectiveness of the proposed approach, despite some concerns about terminology and interpretability?
Review
The paper proposes a novel, feedforward, end-to-end trainable, deep neural network for abstract diagrammatic reasoning with significant improvements over the state of the art. The proposed model architecture is reasonable and is designed to exploit the information present at multiple granularities: at the level of objects in the diagram, their relations across diagrams, and diagram subsets. As a multi-module neural pipeline, it seems a reasonable design. Further, it shows significant performance gains over the state of the art. However, the writing quality is poor and is the primary reason for my giving it a low score. The paper is difficult to read; it's hard to figure out the terminology and its grounding in the problem, and to separate the high-level abstract design and the design choices that address the nature of the problem from the low-level details, etc. The paper uses terminology without explaining the reason for it. For example, why is the approach called 'Multiplex Graph Networks'? What information is being multiplexed, and how? Graphs are conceptual in the proposed approach: there don't seem to be any graph algorithms or graph-based processing. Once the module for search space reduction is run, the set of edges or relations (node pairs) becomes well-defined (in adjacent rows and columns), as do the diagram subsets (edge pairs). The corresponding modules are just computing vectorial embeddings. Similarly, there is no reasoning taking place. Reasoning requires tokens and a grammar over such tokens, which is not there in this case. The proposed model is non-interpretable. The technical writing is loose and hand-wavy. The appendix contains a lot of grammatical mistakes. A few clarifications may be helpful:
- "The reasoning module can also be considered as another graph processing module"?
- "… we use spatial attention to iteratively attend …" – there is no iterative attention. It's all parallel.
- What do the 'N' nodes in each layer correspond to? They are clearly not objects or diagram primitives, as those can vary in number in each diagram.
- If inter-layer connections are between objects in different layers (diagrams), what is this supposed to capture? Clearly, there may not be any unique correspondence between objects across diagrams.
- What's a cross-multiplexing gating function? If it's a known concept, please provide a reference; otherwise, explain it.
Finally, I'm open to revising my score upwards if it turns out that I'm the only one who had difficulty with the writing. The architecture design makes sense for the addressed class of problems (though the proposed network is non-interpretable and doesn't do any reasoning, nor does it use graphs or graph-based processing in a meaningful way), the results are good, and the experimental evaluation is sufficient.
MXGNet achieved 99.8% accuracy on both 2-contour and 3-contour tasks, higher than the original paper’s 99.5% and 99.4% accuracies. The same performance on 2-contour and 3-contour tasks also show that MXGNet scales better for more entities in the diagram. For more details please refer to Appendix E. 5.3 RPM TASK PERFORMANCES In this section we compare all variants of MXGNet against the state-of-the-art models for the PGM and the RAVEN datasets. For the PGM dataset, we tested against results of WReN (Barrett et al. (2018)) in the auxiliary training setting with β value of 10. In addition, we also compared MXGNet with VAE-WReN (Steenbrugge et al. (2018))’s result without auxiliary training. For the RAVEN dataset, we compared with WReN and ResNet model’s performance as reported in the original paper (Zhang et al. (2019)). We evaluated MXGNet with different object-level representations (Section 4.1) on the test data in the ‘neutral’ split of the PGM dataset. Table 1 (a) shows test accuracies of model variants compared with WReN and VAE-WReN for the case without auxiliary training (β = 0) and with auxiliary training (β = 10) for the PGM dataset. Both model variants of MXGNet outperform other models by a considerable margin, showing that the multi-layer graph is indeed a more suitable way to capture relations in the reasoning task. Model variants using grid features from the CNN feature maps slightly outperform model using spatial-attention-based object representations for both with and without auxiliary training settings. This is possibly because the increased number of parameters for the spatial attention variant leads to over-fitting, as the training losses of both model variants are very close. In our following experiments for PGM we will use model variants using CNN features to report performances. Table 1 (b) shows test accuracies of model variants compared with WReN the best performing ResNet models for RAVEN dataset. WReN surprisingly only achieves 14.69% as tested by Zhang et al. (2019). We include results of the ResNet model with or without Dynamic Residual Trees (DRT) which utilise additional structure labels of relations. We found that for the RAVEN dataset, auxiliary training of MXGNet with meta-target or structure labels does not improve performance. Therefore, we report test accuracies of models trained only with the target-prediction objective. Both variants of MXGNet significantly outperform the ResNet models. Models with spatial attention object-level representations under-perform simpler CNN features slightly, most probably due to overfitting, as the observed training losses of spatial attention models are in fact lower than CNN feature models. 5.4 GENERALISATION EVALUATION FOR PGM In the PGM dataset, other than the neutral data regime in which test dataset’s sampling space is the same as the training dataset, there are also other data regimes which restrict the sampling space of training or test data to evaluate the generalisation capability of a neural network. In the main paper, due to space limitations, we selected 2 representative regimes, the ‘interpolation’ regime and the ‘extrapolation’ regime to report results. For results of other data splits of PGM, please refer to Appendix G. For ‘interpolation’ regime, in the training dataset, when attribute a = color and a = size, the values of a are restricted to even-indexed values in the spectrum of a values. This tests how well can a model ‘interpolate’ for missing values. 
For ‘Extrapolation’ regime, in the training dataset, the value of a is restricted to be the lower half of the value spectrum. This tests how well can a model ‘extrapolate’ outside of the value range in the training dataset. Table 2 shows validation and test accuracies for all three data regimes with and without auxiliary training. In addition, differences between validation and test accuracies are also presented to show how well can models generalise. MXGNet models consistently perform better than WReN for all regimes tested. Interesting for ’Interpolation’ regime, while validation accuracy of MXGNet is lower than WReN, the test accuracy is higher. In addition, for regime ‘Interpolation’ and ‘Extrapolation’, MXGNet also shows a smaller difference between validation and test accuracy. These results show that MXGNet has better capability of generalising outside of the training space. 6 DISCUSSION AND CONCLUSION We presented MXGNet, a new graph-based approach to diagrammatic reasoning problems in the style of Raven Progressive Matrices (RPM). MXGNet combines three powerful ideas, namely, object-level representation, graph neural networks and multiplex graphs, to capture relations present in the reasoning task. Through experiments we showed that MXGNet performs better than previous models on two RPM datasets. We also showed that MXGNet has better generalisation performance. One important direction for future work is to make MXGNet interpretable, and thereby extract logic rules from MXGNet. Currently, the learnt representations in MXGNet are still entangled, providing little in the way of understanding its mechanism of reasoning. Rule extraction can provide people with better understanding of the reasoning problem, and may allow neural networks to work seamlessly with more programmable traditional logic engines. While the multi-layer multiplex graph neural network is designed for RPM style reasoning task, it can be readily extended to other diagrammatic reasoning tasks where relations are present between multiple elements across different diagrams. One example of a real-world application scenario is robots assembling parts of an object into a whole, such as building a LEGO model from a room of LEGO blocks. MXGNet provides a suitable way of capturing relations between parts, such as ways of piecing and locking two parts together. A ARCHITECTURE In this section we present exact configurations of all model variants of MXGNet. Due to the complexity of architectures, we will describe each modules in sequence. The object-level representation has two variations which are (o1) CNN features and (o2) Spatial Attention features. Also the models for PGM and RAVEN dataset differ in details. Unless otherwise stated, in all layers we apply Batch Normalization Ioffe & Szegedy (2015) and use Rectified Linear Unit as activation function. A.1 OBJECT-LEVEL REPRESENTATION ARCHITECTURE CNN features: The first approach applies a CNN on the input image and use each spatial location in the final CNN feature map as the object feature vector. This type of representation is used widely, such as in Relation Network Santoro et al. (2017) and VQ-VAE van den Oord et al. (2017). Formally, the output of a CNN is a feature map tensor of dimension H ×W ×D where H , W and D are respectively height, width and depth of the feature map. At each H and W location, an object vector is extracted. 
This type of object representation is simple and fast, but does not guarantee that the receptive field at each feature map location fully bounds objects in the image. We use a residual module He et al. (2016) with two residual blocks to extract CNN features, as shown in figure 4.This is because Residual connections show better performance in experiments. The structure of a single Residual Convolution Block is shown in figure 3.Unless otherwise stated, convolutional layer in residual blocks has kernel size of 3× 3. The output feature map processed by another residual block is treated as background encoding because we found that convolutional background encoding gives better results than feature vectors. Spatial Attention Object-level representation: The second approach is to use spatial attention to attend to locations of objects, and extract representations for each object attended. This is similar to object detection models such as faster R-CNN Ren et al. (2015), which use a Region Proposal Network to propose bounding boxes of objects in the input image. In practice, we use Spatial Transformer Jaderberg et al. (2015) as our spatial attention module. Figure 5 shows the architecture used for extracting object-level representation using spatial attention. A CNN composed of 1 conv layr and 2 residual blocks is first applied to the input image, and the last layer feature map is extracted. This part is the same as CNN grid feature module. A spatial attention network composed of 2 conv layer then processes information at each spatial location on the feature map, and outputs k numbers of z = (zpres, zwhere), corresponding to k possible objects at each location. Here, zpres is a binary value indicating if an object exists in this location, and zwhere is an affine transformation matrix specifying a sampling region on the feature maps. zpres, the binary variable, is sampled from Gumbel-Sigmoid distribution Maddison et al. (2016); Jang et al. (2016), which approximates the Bernoulli distribution. We set Gumbel temperature to 0.7 throughout the experiments. For the PGM dataset we restricted k to be 1 and zwhere to be a translation and scaling matrix as ‘shapes’ objects do not overlap and do not have affine transformation attributes other than scaling and translation. For all zi; i ⊂ [1, H ×W ], if zpresi is 1, an object encoder network samples a patch from location specified by z where i using a grid sampler with a fixed window size of 4× 4 pixels. More details of the grid sampler can be found in Jaderberg et al. (2015). The sampled patches are then processed by a conv-layer to generate object embeddings. A.2 GRAPH NETWORKS Multiplex Edge Embeddings:Figure 2 in the main paper shows an overview of the multiplex graph architecture. While motivation and overview of architecture is explained in section 4.2 of the main paper, in this section we provide exact configurations for each part of the model. Each sub-layer of the multiplex edge is embedded by a small MLP. For PGM dataset, we use 6 parallel layers for each multiplex edge embeddings , with each layer having 32 hidden units and 8 output units. For RAVEN dataset we use 4 layers with 16 hidden units and 8 output units because RAVEN dataset contains fewer relations types than PGM dataset. Gating function is implemented as one Sigmoid fully connected layer with hidden size equal to the length of concatenated aggregated embeddings. Gating variables are element-wise multiplied with concatenated embeddings for gating effects. 
Gated embeddings are then processed with a final fully connected layer with hidden size 64. Graph Summarization: This module summarizes all node summary embeddings and background embeddings to produce a diagram subset embedding representing relations present in the set of diagrams. We experimented with various approaches and found that keeping embeddings as feature maps and processing them with residual blocks yields the best results. Background feature map embeddings are generated with one additional residual block of 48 on top of lower layer feature-extracting resnet. For object representations obtained from CNN-grid features, we can simply reshape node embeddings into a feature map, and process it with additional conv-nets to generate a feature map embeddings of the same dimension to background feature map embeddings. For object representations with spatial attention, we can use another Spatial Transformer to write node summary embeddings to its corresponding locations on a canvas feature map. Finally we concatenate node summary embeddings and background embeddings and process it with 2 residual blocks of size 64 to produce the relation embeddings. A.3 REASONING NETWORK Figure 6 shows the reasoning network configuration for RPM tasks. We experimented with the approach introduced in Barrett et al. (2018), which compute scores for each answer candidates and finally normalize the scores. We found this approach leads to severe overfitting on the RAVEN dataset, and therefore used a simpler approach to just concatenate all relation embeddings and process them with a neural net. In practice we used two residual blocks of size 128 and 256, and a final fully connected layer with 8 units corresponding to 8 answer candidates. The output is normalized with softmax layer. For Meta-target prediction, all context relation embeddings (context rows and columns for PGM while only rows for RAVEN dataset) are summed and fed into a fully connected prediction layer with Sigmoid activation. For PGM there are 12 different meta-targets while for RAVEN there are 9. B TRAINING DETAILS The architecture is implemented in Pytorch framework. During training, we used RAdam optimizer Liu et al. (2019) with learning rate 0.0001, β1 = 0.9,β2 = 0.999. We used batch size of 64, and distributed the training across 2 Nvidia Geforce Titan X GPUs. We early-stop training when validation accuracy stops increasing. C MORE DETAILS OF RPM DATASETS In PGM dataset there are two types of elements present in the diagram, namely shapes and lines. These elements have different attributes such as colour and size. In the PGM dataset, five types of relations can be present in the task: {Progression,AND,OR,XOR,ConsistentUnion}. The RAVEN dataset, compared to PGM, does not have logic relationsAND,OR,XOR, but has additional relationsArithmetic, Constant. In addition RAVEN dataset only allow relations to be present in rows. Figure 7a and 7b show two examples from the PGM dataset(Image courtesy Barrett et al. (2018)). The first example contains a ’Progression’ relation of the number of objects across diagrams in columns. The second examples contains a ’XOR’ relation of position of objects across diagrams in rows. In addition to shape objects, diagrams in the PGM dataset can also contain background line objects that appear at fixed locations. Figure 8a and 8b show two examples of PGM tasks containing line objects. 
D MORE DETAILS ON SEARCH SPACE REDUCTION In this section we provide detailed architecture used for Search Space reduction, and present additional experimental results. The node embeddings are generated by applying a Conv-Net of 4 convolutional layer (32 filters in each layer) of kernel size 3, and a fully connected layer mapping flattened final-layer feature maps to a feature vector of size 256. Edge embeddings are generated by a 3-layer MLP of 512 − 512 − 256 hidden units. Subset embeddings are generated by a fully connected layer of 512 units. The subset embeddings are gated with the gating variables and summed into a feature vector, which is then feed into the reasoning net, a 3-layer MLP with 256− 256− 13. The output layer contains 13 units. The first unit gives probability of currently combined answer choice being true. The rest 12 units give meta-target prediction probabilities. This is the same as Barrett et al. (2018). The training loss function is: L = Lans + βLmeta−target + λ ∥∥∥∥∥∥ ∑ (i,j,k)⊂S Gi,j,k ∥∥∥∥∥∥ L1 (5) In our experiment we have tested various values of λ, and found 0.01 to be the best. This model is trained with RAdam optimizer with learning rate of 0.0001 and batch size of 64. After 10 epochs of training, only gating variables of subsets that are rows and columns are above the 0.5 threshold. The Gating variables for three rows are 0.884, 0.812 and 0.832. The gating variables for three columns are 0.901, 0.845 and 0.854. All other gating variables are below 0.5. Among these, the one with highest absolute value is 0.411. Table 3 shows the top-16 ranked subsets, with each subset indexed by 2 connecting edges in the subset. Figure 9 illustrates this way of indexing the subset. For example, the first column with red inter-connecting arrows is indexed as 0-3-6. This indicates that there two edges, one connecting diagram 0 and 3, and the other connecting diagram 3-6. Similarly the subset connected by blue arrows is indexed as 1-2-5. Note that 1-2-5 and 2-1-5 is different because the 1-2-5 contains edge 1-2 and 2-5 while 2-1-5 contains edges 1-2 and 1-5. E MORE DETAILS ON EULER DIAGRAM SYLLOGISM The original model in Wang et al. (2018a) uses a Siamese Conv-Net model to process two input premise diagrams and output all consistent conclusions. Convolutional layers with shared weights are first applied to two input diagrams. The top layer feature maps are then flattened and fed into a reasoning network to make predictions. We simply use CNN grid features of the top layer feature maps as object-level representations, and use the multi-layer multiplex graph to capture object relations between the two input premise diagrams. We use a multiplex edge embeddings of 4 layers, with each layer of dimension 32. The cross-multiplexing here becomes self-multiplexing as there are only 2 diagrams (Only 1 embedding of node summary for edges from first diagram to second diagram). Final node embeddings are processed by a convolutional layer to produce the final embedding, which is also fed into the reasoning network along with the conv-net embeddings. F ABLATION STUDY We performed ablation study experiments to test how much does the multiplex edges affects performance. We have tested two model variants, one without any graph modules, and the other model graphs using vanilla edge embeddings produced by MLPs, on PGM dataset. We found that without graph modules, the model only achieved 83.2% test accuracy. While this is lower than MXGNet’s 89.6%, it is still higher than WReN’s 76.9%. 
After 10 epochs of training, only the gating variables of subsets that are rows and columns are above the 0.5 threshold. The gating variables for the three rows are 0.884, 0.812 and 0.832. The gating variables for the three columns are 0.901, 0.845 and 0.854. All other gating variables are below 0.5; among these, the one with the highest absolute value is 0.411. Table 3 shows the top-16 ranked subsets, with each subset indexed by the 2 connecting edges in the subset. Figure 9 illustrates this way of indexing the subsets. For example, the first column with red inter-connecting arrows is indexed as 0-3-6. This indicates that there are two edges, one connecting diagrams 0 and 3, and the other connecting diagrams 3 and 6. Similarly, the subset connected by blue arrows is indexed as 1-2-5. Note that 1-2-5 and 2-1-5 are different because 1-2-5 contains edges 1-2 and 2-5 while 2-1-5 contains edges 1-2 and 1-5.

E MORE DETAILS ON EULER DIAGRAM SYLLOGISM

The original model in Wang et al. (2018a) uses a Siamese conv-net model to process the two input premise diagrams and output all consistent conclusions. Convolutional layers with shared weights are first applied to the two input diagrams. The top-layer feature maps are then flattened and fed into a reasoning network to make predictions. We simply use CNN grid features of the top-layer feature maps as object-level representations, and use the multi-layer multiplex graph to capture object relations between the two input premise diagrams. We use multiplex edge embeddings of 4 layers, with each layer of dimension 32. The cross-multiplexing here becomes self-multiplexing, as there are only 2 diagrams (only 1 embedding of node summaries for edges from the first diagram to the second diagram). Final node embeddings are processed by a convolutional layer to produce the final embedding, which is also fed into the reasoning network along with the conv-net embeddings.

F ABLATION STUDY

We performed ablation study experiments to test how much the multiplex edges affect performance. We tested two model variants on the PGM dataset: one without any graph modules, and the other modeling graphs with vanilla edge embeddings produced by MLPs. We found that without graph modules, the model only achieved 83.2% test accuracy. While this is lower than MXGNet's 89.6%, it is still higher than WReN's 76.9%. This is possibly because the search space reduction, by trimming away non-contributing subsets, allows the model to learn more efficiently. The graph model with vanilla edge embeddings achieves 88.3% accuracy, only slightly lower than MXGNet with multiplex edge embeddings. This shows that while a general graph neural network is a suitable model for capturing relations between objects, the multiplex edge embedding does so more efficiently by allowing parallel relation multiplexing.

G ADDITIONAL GENERALIZATION PERFORMANCE ON PGM DATASET

Table 4 shows the performance of MXGNet on other splits of the PGM dataset. MXGNet consistently outperforms WReN in test accuracy, except for H.O. Triple Pairs and H.O. shape-color in the case β = 0. Additionally, we here provide the analysis according to Sec. 4.2 and Sec. 4.6 of Barrett et al. (2018). Unfortunately, the analysis of distractors in Sec. 4.3 of that paper cannot be performed, as the publicly available dataset does not include any ground-truth labels about distractors, nor any labels of present objects that could be used to synthesize distractor labels. For meta-target prediction, MXGNet achieves 84.1% accuracy. When the meta-target is correctly predicted, the model's target prediction accuracy increases to 92.4%. When the meta-target is incorrectly predicted, the model only has 75.6% accuracy. For the three logical relations, the model performs best on the OR relation (95.3%) and worst on the XOR relation (92.6%). Accuracy for line-type tasks (86.5%) is only slightly better than for shape tasks (80.1%), showing that object representation with graph modeling does improve performance on relations between shapes. The type of relation with the worst performance is ConsistentUnion, with only 75.1% accuracy. This is expected, as ConsistentUnion is in fact a memory task instead of a relational reasoning task.
1. What is the main contribution of the paper in the field of Raven Progressive Matrices (RPM) reasoning? 2. What are the strengths of the proposed approach, particularly in its architecture and use of gated graph networks? 3. What are the weaknesses of the paper, especially regarding its comparison with another simultaneous submission and lack of interpretability analysis? 4. How does the reviewer assess the overall quality and novelty of the paper's content?
Review
In this paper the authors address the task of Raven Progressive Matrices (RPM) reasoning. They do so by considering multiplexed graph networks, and present an architecture for the same. The basic premise is a combination of object-level representations, obtained by a method similar to region proposals, with a graph network. The approach uses gated graph networks that also use an aggregation function. These are combined and result in node embeddings. Detailed analysis of the network is provided. This provides improved results over the earlier WReN method. However, the performance is slightly lower than that of another simultaneously submitted paper that achieves similar results. That approach uses a transformer network for spatial attention, while here the spatial attention is just based on object-level representations. Overall, while the contribution is useful, not much analysis is provided on the interpretability of the results, for instance statistics on the search space reduction, such as how many subsets get pruned. Further, there may be subsets of graphs that could span across rows and columns. The decision to restrict the reduction to specific rows or columns may result in pertinent nodes also being pruned. Certain aspects that relate to the object-level representation are not very clear. I am not fully aware of results in this specific area, and that may also be a reason for this. To conclude, I believe this paper provides a useful contribution by modeling diagrammatic abstract reasoning as a graph-based reasoning approach. The multiplex graph network could be a useful component that is also relevant for other problems. The paper provides sufficient analysis to convince us regarding the claims.
ICLR
Title: Causal Imitation Learning via Inverse Reinforcement Learning

Abstract: One of the most common ways children learn when unfamiliar with the environment is by mimicking adults. Imitation learning concerns an imitator learning to behave in an unknown environment from an expert's demonstration; reward signals remain latent to the imitator. This paper studies imitation learning through causal lenses and extends the analysis and tools developed for behavior cloning (Zhang, Kumor, Bareinboim, 2020) to inverse reinforcement learning. First, we propose novel graphical conditions that allow the imitator to learn a policy performing as well as the expert's behavior policy, even when the imitator's and the expert's state-action spaces disagree, and unobserved confounders (UCs) are present. When provided with parametric knowledge about the unknown reward function, such a policy may outperform the expert's. Also, our method is easily extensible and allows one to leverage existing IRL algorithms even when UCs are present, including the multiplicative-weights algorithm (MWAL) (Syed & Schapire, 2008) and generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016). Finally, we validate our framework by simulations using real-world and synthetic data.

1 INTRODUCTION

Reinforcement Learning (RL) has been deployed and shown to perform extremely well in highly complex environments in the past decades (Sutton & Barto, 1998; Mnih et al., 2013; Silver et al., 2016; Berner et al., 2019). One of the critical assumptions behind many of the classical RL algorithms is that the reward signal is fully observed, and the reward function could be well-specified. In many real-world applications, however, it might be impractical to design a suitable reward function that evaluates each and every scenario (Randløv & Alstrøm, 1998; Ng et al., 1999). For example, in the context of human driving, it is challenging to design a precise reward function, and experimenting in the environment could be ill-advised; still, watching expert drivers operating is usually feasible. In machine learning, the imitation learning paradigm investigates the problem of how an agent should behave and learn in an environment with an unknown reward function by observing demonstrations from a human expert (Argall et al., 2009; Billard et al., 2008; Hussein et al., 2017; Osa et al., 2018). There are two major learning modalities that implement IL – behavioral cloning (BC) (Widrow, 1964; Pomerleau, 1989; Muller et al., 2006; Mülling et al., 2013; Mahler & Goldberg, 2017) and inverse reinforcement learning (IRL) (Ng et al., 2000; Ziebart et al., 2008; Ho & Ermon, 2016; Fu et al., 2017). BC methods directly mimic the expert's behavior policy by learning a mapping from observed states to the expert's action via supervised learning. Alternatively, IRL methods first learn a potential reward function under which the expert's behavior policy is optimal. The imitator then obtains a policy by employing standard RL methods to maximize the learned reward function. Under some common assumptions, both BC and IRL are able to obtain policies that achieve the expert's performance (Kumor et al., 2021; Swamy et al., 2021). Moreover, when additional parametric knowledge about the reward function is provided, IRL may produce a policy that outperforms the expert's in the underlying environment (Syed & Schapire, 2008; Li et al., 2017; Yu et al., 2020). For concreteness, consider a learning scenario depicted in Fig.
1a, describing trajectories of human-driven cars collected by drones flying over highways (Krajewski et al., 2018; Etesami & Geiger, 2020). Using such data, we want to learn a policy X ← π(Z) deciding on the acceleration (action) X ∈ {0, 1} of the demonstrator car based on velocities and locations Z of surrounding cars. The driving performance is measured by a latent reward signal Y. Consider an instance where Y ← (1 − X)Z + X(1 − Z) and values of Z are drawn uniformly over {0, 1}. A human expert generates demonstrations following a behavior policy such that P(X = 1 | Z = 0) = 0.6 and P(X = 0 | Z = 1) = 0.4. Evaluating the expert's performance gives E[Y] = P(X = 1, Z = 0) + P(X = 0, Z = 1) = 0.5. Now we apply standard IRL algorithms to learn a policy X ← π(Z) so that the imitator's driving performance, denoted by E[Y | do(π)], is at least as good as the expert's performance E[Y]. Detailed derivations of the IRL policy are shown in (Ruan et al., 2023, Appendix A). Note that E[Y | z, x] = x + z − 2xz belongs to a family of reward functions f_Y(x, z) = αx + βz − γxz, where 0 < α < γ. A typical IRL imitator solves a minimax problem min_π max_{f_Y} E[f_Y(X, Z)] − E[f_Y(X, Z) | do(π)]. The inner step "guesses" a reward function being optimized by the expert, while the outer step learns a policy maximizing the learned reward function. Applying these steps leads to a policy π∗ : X ← ¬Z with the expected reward E[Y | do(π∗)] = 1, which outperforms the sub-optimal expert. Despite the performance guarantees provided by existing imitation methods, both BC and IRL rely on the assumption that the expert's input observations match those available to the imitator. More recently, there exists an emerging line of research under the rubric of causal imitation learning that augments the imitation paradigm to account for environments consisting of arbitrary causal mechanisms and the aforementioned mismatch between the expert's and imitator's sensory capabilities (de Haan et al., 2019; Zhang et al., 2020; Etesami & Geiger, 2020; Kumor et al., 2021). Closest to our work, Zhang et al. (2020) and Kumor et al. (2021) derived graphical criteria that completely characterize when and how BC could lead to successful imitation even when the agents perceive reality differently. Still, it is unclear how to perform IRL-type training if some of the expert's observed states remain latent to the imitator, which leads to the presence of unobserved confounders (UCs) in the expert's demonstrations. Perhaps surprisingly, naively applying IRL methods when UCs are present does not necessarily lead to satisfactory performance, even when the expert itself behaves optimally. To witness, we now modify the previous highway driving scenario to demonstrate the challenges of UCs. In reality, covariates Z (i.e., velocities and locations) are also affected by the car horn U1 of surrounding vehicles and the wind condition U2. However, due to the different perspective of the drones (recording from the top), such critical information (i.e., U1, U2) is not recorded by the camera and thus remains unobserved. Fig. 1b graphically describes this modified learning setting. More specifically, consider an instance where Z ← U1 ⊕ U2, Y ← ¬X ⊕ Z ⊕ U2; ⊕ is the exclusive-or operator; and values of U1 and U2 are drawn uniformly over {0, 1}. An expert driver, being able to hear the car horn U1, follows a behavior policy X ← U1 and achieves the optimal performance E[Y] = 1. A numeric check of both scenarios is sketched below.
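The following is a small numeric check of the two scenarios above (an illustrative sketch, not part of the paper's released code):

```python
from itertools import product

# Scenario of Fig. 1a: Y <- (1 - X)Z + X(1 - Z), Z uniform over {0, 1}.
P_Z = {0: 0.5, 1: 0.5}
P_X_given_Z = {0: {0: 0.4, 1: 0.6}, 1: {0: 0.4, 1: 0.6}}  # expert's behavior policy
reward = lambda x, z: (1 - x) * z + x * (1 - z)

expert = sum(P_Z[z] * P_X_given_Z[z][x] * reward(x, z)
             for z, x in product((0, 1), repeat=2))
irl = sum(P_Z[z] * reward(1 - z, z) for z in (0, 1))  # IRL policy pi*: X <- not Z
print(expert, irl)  # 0.5 1.0

# Confounded scenario of Fig. 1b: Z <- U1 xor U2, Y <- not(X) xor Z xor U2.
# Substituting Z gives Y = 1 xor X xor U1, so any policy seeing only Z earns 0.5.
expert_c = imitator_c = 0.0
for u1, u2 in product((0, 1), repeat=2):     # U1, U2 uniform: weight 1/4 each
    z = u1 ^ u2
    expert_c += 0.25 * ((1 - u1) ^ z ^ u2)   # expert hears the horn: X <- U1
    imitator_c += 0.25 * ((1 - z) ^ z ^ u2)  # a Z-only policy, e.g., X <- Z
print(expert_c, imitator_c)  # 1.0 0.5
```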
Meanwhile, observe that E[Y | z, x] = 1 belongs to a family of reward functions f_Y(x, z) = α (where α > 0). Solving min_π max_{f_Y} E[f_Y(X, Z)] − E[f_Y(X, Z) | do(π)] leads to an IRL policy π∗ with expected reward E[Y | do(π∗)] = 0.5, which is far from the expert's optimal performance E[Y] = 1. After all, a question that naturally arises is: under what conditions can an IRL imitator perform well when UCs are present and there is a mismatch between the perceptions of the two agents? In this paper, we answer this question and, more broadly, investigate the challenge of performing IRL through causal lenses. In particular, our contributions are summarized as follows. (1) We provide a novel, causal formulation of the inverse reinforcement learning problem. This formulation allows one to formally study and understand the conditions under which an IRL policy is learnable, including in settings where UCs cannot be ruled out a priori. (2) We derive a new graphical condition for deciding whether an imitating policy can be computed from the available data and knowledge, which provides a robust generalization of current IRL algorithms to non-Markovian settings, including GAIL (Ho & Ermon, 2016) and MWAL (Syed & Schapire, 2008). (3) Finally, we move beyond this graphical condition and develop an effective IRL algorithm for structural causal models (Pearl, 2000) with arbitrary causal relationships. Due to space constraints, all proofs are provided in (Ruan et al., 2023, Appendix B). For a more detailed survey on imitation learning and causal inference, we refer readers to (Ruan et al., 2023, Appendix E).

1.1 PRELIMINARIES

We use capital letters to denote random variables (X) and small letters for their values (x). D_X represents the domain of X and P_X the space of probability distributions over D_X. For a set X, let |X| denote its dimension. The probability distribution over variables X is denoted by P(X). Similarly, P(Y | X) represents a set of conditional distributions P(Y | X = x) for all realizations x. We use the abbreviation P(x) for the probability P(X = x); so does P(Y = y | X = x) = P(y | x). Finally, the indicator function 1{Z = z} returns 1 if Z = z holds true and 0 otherwise. The basic semantic framework of our analysis rests on structural causal models (SCMs) (Pearl, 2000, Ch. 7). An SCM M is a tuple ⟨U, V, F, P(U)⟩ with V the set of endogenous and U the set of exogenous variables. F is a set of structural functions s.t. for f_V ∈ F, V ← f_V(pa_V, u_V), with PA_V ⊆ V, U_V ⊆ U. Values of U are drawn from an exogenous distribution P(U), inducing a distribution P(V) over the endogenous variables V. Since the learner can observe only a subset of the endogenous variables, we split V into a partition O ∪ L, where variables O ⊆ V are observed and L = V \ O remain latent to the learner. The marginal distribution P(O) is thus referred to as the observational distribution. An atomic intervention on a subset X ⊆ V, denoted by do(x), is an operation where values of X are set to constants x, replacing the functions f_X = {f_X : ∀X ∈ X} that would normally determine their values. For an SCM M, let M_x be the submodel of M induced by intervention do(x). For a set Y ⊆ V, the interventional distribution P(Y | do(x)) induced by do(x) is defined as the distribution over Y in the submodel M_x, i.e., P_M(Y | do(x)) ≜ P_{M_x}(Y). We leave M implicit when it is obvious from the context. Each SCM M is associated with a causal diagram G, which is a directed acyclic graph where (e.g., see Fig.
1) solid nodes represent observed variables O, dashed nodes represent latent variables L, and arrows represent the arguments PA_V of each function f_V ∈ F. Exogenous variables U are not explicitly shown; a bi-directed arrow between nodes V_i and V_j indicates the presence of an unobserved confounder (UC) affecting both V_i and V_j. We will use family abbreviations to represent graphical relationships such as parents, children, descendants, and ancestors. For example, the set of parent nodes of X in G is denoted by pa(X)_G = ∪_{X∈X} pa(X)_G; ch, de and an are similarly defined. Capitalized versions Pa, Ch, De, An include the argument as well, e.g., Pa(X)_G = pa(X)_G ∪ X. For a subset X ⊆ V, the subgraph obtained from G by removing edges outgoing from X (resp. incoming into X) is written as G_{X̲} (resp. G_{X̄}). G[X] is the subgraph of G containing only the nodes X and the edges among them. A path from a node X to a node Y in G is a sequence of edges which does not include a particular node more than once. Two sets of nodes X, Y are said to be d-separated by a third set Z in a DAG G, denoted by (X ⊥ Y | Z)_G, if every edge path from nodes in X to nodes in Y is "blocked" by nodes in Z. The criterion of blockage follows (Pearl, 2000, Def. 1.2.3). For a more detailed survey on SCMs, we refer readers to (Pearl, 2000; Bareinboim et al., 2022).

2 CAUSAL INVERSE REINFORCEMENT LEARNING

We investigate the sequential decision-making setting concerning a set of actions X, a series of covariates Z, and a latent reward Y in an SCM M. An expert (e.g., a physician, driver), operating in SCM M, selects actions following a behavior policy, which is the collection of structural functions f_X = {f_X | X ∈ X}. The expert's performance is evaluated as the expected reward E[Y]. On the other hand, a learning agent (i.e., the imitator) intervenes on actions X following an ordering X_1 ≺ · · · ≺ X_n; each action X_i is associated with a set of features PA∗_i ⊆ O \ {X_i}. A policy π over actions X is a sequence of decision rules π = {π_1, . . . , π_n}. Each decision rule π_i(X_i | Z_i) is a probability distribution over an action X_i ∈ X, conditioned on values of a set of covariates Z_i ⊆ PA∗_i. Such policies π are also referred to as dynamic treatment regimes (Murphy et al., 2001; Chakraborty & Murphy, 2014), which generalize personalized medicine to time-varying treatment settings in healthcare, in which treatment is repeatedly tailored to a patient's dynamic state. A policy intervention on actions X following a policy π, denoted by do(π), entails a submodel M_π of an SCM M where the structural functions f_X associated with X (i.e., the expert's behavior policy) are replaced with decision rules X_i ∼ π_i(X_i | Z_i) for every X_i ∈ X. A critical assumption throughout this paper is that the submodel M_π does not contain any cycles. Similarly, the interventional distribution P(V | do(π)) induced by policy π is defined as the joint distribution over V in M_π. Throughout this paper, detailed parametrizations of the underlying SCM M are assumed to be unknown to the agent. Instead, the agent has access to the input: (1) a causal diagram G associated with M, and (2) the expert's demonstrations, summarized as the observational distribution P(O). The goal of the agent is to output an imitating policy π∗ that achieves the expert's performance.
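To make the policy-intervention semantics concrete, here is a minimal simulation sketch of an SCM and the submodel M_π, instantiated on the confounded highway example of Fig. 1b (an illustration of the semantics, not code from the paper):

```python
import random

def sample(policy=None):
    """One draw from M (policy=None) or from the submodel M_pi, where
    the expert's structural function f_X is replaced by pi(X | Z)."""
    u1, u2 = random.randint(0, 1), random.randint(0, 1)  # exogenous P(U)
    z = u1 ^ u2                                          # f_Z(u1, u2)
    x = policy(z) if policy else u1                      # f_X, or a decision rule
    y = (1 - x) ^ z ^ u2                                 # f_Y (latent reward)
    return z, x, y

n = 100_000
print(sum(sample()[2] for _ in range(n)) / n)                 # E[Y] ~= 1.0
print(sum(sample(lambda z: 1 - z)[2] for _ in range(n)) / n)  # E[Y | do(pi)] ~= 0.5
```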
Definition 1. For an SCM M = ⟨U, V, F, P(U)⟩, an imitating policy π∗ is a policy such that its expected reward is lower bounded by the expert's reward, i.e., E_M[Y | do(π∗)] ≥ E_M[Y].

In words, the right-hand side is the expert's performance that the agent wants to achieve, while the left-hand side is the real reward experienced by the agent. The challenge in imitation learning arises from the fact that the reward Y is not specified and latent, i.e., Y ∉ O. This precludes approaches that identify E[Y | do(π)] directly from the demonstration data (e.g., through the do- or soft-do-calculus (Pearl, 2000; Correa & Bareinboim, 2020)). There exist methods in the literature for finding an imitating policy in Def. 1. Before describing their details, we first introduce some necessary concepts. For any policy π, we summarize its associated state-action domain using a sequence of pairs of variables called a policy scope S.

Definition 2 (Lee & Bareinboim (2020)). For an SCM M, a policy scope S (for short, scope) over actions X is a sequence of tuples {⟨X_i, Z_i⟩}^n_{i=1} where Z_i ⊆ PA∗_i for every X_i ∈ X.

We will consistently use π ∼ S to denote a policy π associated with scope S. For example, consider a policy scope S = {⟨X_1, {Z_1}⟩, ⟨X_2, {Z_2}⟩} over actions X_1, X_2 in Fig. 1c. A policy π ∼ S is a sequence of distributions π = {π_1(X_1 | Z_1), π_2(X_2 | Z_2)}. Zhang et al. (2020) and Kumor et al. (2021) provide a graphical condition that is sufficient for learning an imitating policy via behavioral cloning (BC), provided with a causal diagram G. For a policy scope S = {⟨X_i, Z_i⟩}^n_{i=1}, let G^{(i)}, i = 1, . . . , n, denote a manipulated graph obtained from G by the following steps: for all j = i + 1, . . . , n, (1) remove arrows coming into every action X_j; and (2) add direct arrows from nodes in Z_j to X_j. Formally, the sequential π-backdoor criterion is defined as:

Definition 3 (Kumor et al. (2021)). Given a causal diagram G, a policy scope S = {⟨X_i, Z_i⟩}^n_{i=1} is said to satisfy the sequential π-backdoor criterion in G (for short, π-backdoor admissible) if at each X_i ∈ X, one of the following conditions holds: (1) X_i is not an ancestor of Y in G^{(i)}, i.e., X_i ∉ An(Y)_{G^{(i)}}; or (2) Z_i blocks all backdoor paths from X_i to Y in G^{(i)}, i.e., (Y ⊥ X_i | Z_i) in G^{(i)}_{X̲_i}.

Kumor et al. (2021) showed that whenever a π-backdoor admissible scope S is available, one could learn an imitating policy π∗ ∼ S by setting π∗_i(x_i | z_i) = P(x_i | z_i) for every action X_i ∈ X (a sketch of this construction is given below). For instance, consider the causal diagram G in Fig. 1c. Scope S = {⟨X_1, {Z_1}⟩, ⟨X_2, {Z_2}⟩} is π-backdoor admissible since (X_1 ⊥ Y | Z_1) and (X_2 ⊥ Y | Z_2) hold in G, which is a supergraph containing both manipulated graphs G^{(1)} and G^{(2)}. An imitating policy π∗ = {π∗_1, π∗_2} is thus obtainable by setting π∗_1(X_1 | Z_1) = P(X_1 | Z_1) and π∗_2(X_2 | Z_2) = P(X_2 | Z_2). While impressive, a caveat of their results is that the performance of the imitator is restricted by that of the expert, i.e., E[Y | do(π∗)] = E[Y]. In other words, causal BC provides an efficient way to mimic the expert's performance. If the expert's behavior is far from optimal, the same will hold for the learning agent.
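A minimal sketch of this causal-BC construction, estimating each decision rule π∗_i(x_i | z_i) = P(x_i | z_i) from demonstration data (the pandas usage and column names are our assumptions):

```python
import pandas as pd

def causal_bc(demos: pd.DataFrame, scope):
    """scope: list of (action, covariates) pairs, e.g.
    [("X1", ["Z1"]), ("X2", ["Z2"])] for the scope of Fig. 1c."""
    policy = {}
    for action, covs in scope:
        if covs:
            # relative frequencies of the action within each covariate stratum
            probs = demos.groupby(covs)[action].value_counts(normalize=True)
        else:
            probs = demos[action].value_counts(normalize=True)
        policy[action] = probs.to_dict()  # keys map (z..., x) -> P(x | z)
    return policy
```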
2.1 MINIMAL SEQUENTIAL BACKDOOR CRITERION

To circumvent this issue, we take a somewhat different approach to causal imitation by incorporating the principle of inverse reinforcement learning (IRL). Following the game-theoretic approach (Syed & Schapire, 2008), we formulate the problem as learning to play a two-player zero-sum game in which the agent chooses a policy and nature chooses an SCM instance. A key property of this formulation is that it allows us to incorporate prior parametric knowledge about the latent reward signal. When such knowledge is informative, our algorithm is able to obtain a policy that could significantly outperform the expert with respect to the unknown causal environment, while at the same time being guaranteed to be no worse. Formally, let 𝓜 = {M | G_M = G, P_M(O) = P(O)} denote the set of SCMs compatible with both the causal diagram G and the observational distribution P(O). Fix a policy scope S. Now consider the optimization problem defined as follows:

ν∗ = min_{π∼S} max_{M∈𝓜} E_M[Y] − E_M[Y | do(π)].    (1)

The inner maximization in the above equation can be viewed as a causal IRL step where we attempt to "guess" a worst-case SCM M̂ compatible with G and P(O) that prioritizes the expert's policy. That is, the gap in the performance between the expert's and the imitator's policies is maximized. Meanwhile, since the expert's reward E_M[Y] is not affected by the imitator's policy π, the outer minimization is equivalent to a planning step that finds a policy π∗ optimizing the learned SCM M̂. Obviously, the solution π∗ is an imitating policy if the gap ν∗ ≤ 0. In cases where the expert is sub-optimal, i.e., E_{M̂}[Y] < E_{M̂}[Y | do(π)] for some policies π, we may have ν∗ < 0. That is, the policy π∗ will dominate the expert's policy f_X regardless of the parametrization of the SCM M in the worst-case scenario. In other words, π∗ to some extent ignores the sub-optimal expert, and instead exploits prior knowledge about the underlying model. Despite the clear semantics in terms of causal models, the optimization problem in Eq. (1) requires the learner to search over all possible SCMs compatible with the causal diagram G and the observational distribution P(O). In principle, it entails a quite challenging search, since one has access neither to the parametric forms of the underlying structural functions F nor to the exogenous distribution P(U), and it is not clear how existing optimization procedures can be used. In this paper, we will develop novel methods to circumvent this issue, thus leading to effective imitating policies. Our first algorithm relies on a refinement of the sequential π-backdoor, based on the concept of minimality. A subscope S′ of a policy scope S = {⟨X_i, Z_i⟩}^n_{i=1}, denoted by S′ ⊆ S, is a sequence {⟨X_i, Z′_i⟩}^n_{i=1} where Z′_i ⊆ Z_i for every X_i ∈ X. A proper subscope S′ ⊂ S is a subscope of S other than S itself. The minimal π-backdoor admissible scope is defined as follows.

Definition 4. Given a causal diagram G, a π-backdoor admissible scope S is said to be minimal if there exists no proper subscope S′ ⊂ S satisfying the sequential π-backdoor in G.

Theorem 1. Given a causal diagram G, if there exists a minimal π-backdoor admissible scope S = {⟨X_i, Z_i⟩}^n_{i=1} in G, consider the following conditions:

1. Let effective actions X∗ = X ∩ An(Y)_{G_S} and effective covariates Z∗ = ∪_{X_i∈X∗} Z_i;
2. For i = 1, . . . , n + 1, let X∗_{<i} = {∀X_j ∈ X∗ | j < i} and Z∗_{<i} = ∪_{X_j∈X∗_{<i}} Z_j.

Then, for any policy π ∼ S, the expected reward E[Y | do(π)] is computable from P(O, Y) as:

E[Y | do(π)] = Σ_{x∗,z∗} E[Y | x∗, z∗] ρ_π(x∗, z∗)    (2)

where the occupancy measure ρ_π(x∗, z∗) = Π_{X_i∈X∗} P(z_i | x∗_{<i}, z∗_{<i}) π_i(x_i | z_i).

To illustrate, consider again the causal diagram G in Fig. 1c; the manipulated diagram G^{(2)} = G, and G^{(1)} is obtained from G by removing Z_2 ↔ X_2. While scope S_1 = {⟨X_1, {Z_1}⟩, ⟨X_2, {Z_2}⟩} satisfies the sequential π-backdoor, it is not minimal since (X_1 ⊥ Y) in G^{(1)}_{X̲_1}. On the other hand, S_2 = {⟨X_1, ∅⟩, ⟨X_2, {Z_2}⟩} is minimal π-backdoor admissible since (X_2 ⊥ Y | Z_2) holds true in G^{(2)}_{X̲_2}, and the covariate set {Z_2} is minimal due to the presence of the backdoor path X_2 ← Z_2 → Y. Let us focus on the minimal π-backdoor admissible scope S_2. Note that G_{S_2} is a subgraph obtained from G by removing the bi-directed arrow Z_2 ↔ X_2. We must have effective actions X∗ = {X_1, X_2} and effective covariates Z∗ = {Z_2}. Therefore, Z∗_{<1} = Z∗_{<2} = ∅ and Z∗_{<3} = {Z_2}. For any policy π ∼ S_2, Thm. 1 implies E[Y | do(π)] = Σ_{x_1,x_2,z_2} E[Y | x_1, x_2, z_2] P(z_2 | x_1) π_2(x_2 | z_2) π_1(x_1).
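For this scope S_2, the identification formula above can be evaluated directly once the conditional tables are estimated from the demonstration data; a toy sketch (dictionary-based tables are our assumption):

```python
from itertools import product

def policy_value_s2(r, p_z2_x1, pi1, pi2, dom=(0, 1)):
    """E[Y|do(pi)] = sum_{x1,x2,z2} E[Y|x1,x2,z2] P(z2|x1) pi2(x2|z2) pi1(x1).
    r[(x1, x2, z2)] = E[Y|x1,x2,z2]; p_z2_x1[(z2, x1)] = P(z2|x1);
    pi1[x1] and pi2[(x2, z2)] are the imitator's decision rules."""
    return sum(r[x1, x2, z2] * p_z2_x1[z2, x1] * pi2[x2, z2] * pi1[x1]
               for x1, x2, z2 in product(dom, repeat=3))
```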
On the other hand, the same result in Thm. 1 does not necessarily hold for a non-minimal π-backdoor admissible scope. For instance, consider again the non-minimal scope S_1 = {⟨X_1, {Z_1}⟩, ⟨X_2, {Z_2}⟩}. The expected reward E[Y | do(π)] of a policy π ∼ S_1 is not computable from Eq. (2), and is ultimately not identifiable from distribution P(O, Y) in G (Tian, 2008).

2.2 IMITATION VIA INVERSE REINFORCEMENT LEARNING

Once a minimal π-backdoor admissible scope S is found, there exist effective procedures to solve for an imitating policy in Eq. (1). Let R be a hypothesis class containing all expected rewards E_M[Y | x∗, z∗] compatible with candidate SCMs M ∈ 𝓜, i.e., R = {E_M[Y | x∗, z∗] | ∀M ∈ 𝓜}. Applying the identification formula in Thm. 1 reduces the optimization problem in Eq. (1) as follows:

ν∗ = min_{π∼S} max_{r∈R} Σ_{x∗,z∗} r(x∗, z∗) (ρ(x∗, z∗) − ρ_π(x∗, z∗))    (3)

where the expert's occupancy measure ρ(x∗, z∗) = P(x∗, z∗) and the agent's occupancy measure ρ_π(x∗, z∗) is given by Eq. (2). The above minimax problem is solvable using standard IRL algorithms. The identification result in Thm. 1 ensures that the learned policy applies to any SCM compatible with the causal diagram and the observational data, and is thus robust to the unobserved confounding bias in the expert's demonstrations. Henceforth, we will consistently refer to Eq. (3) as the canonical equation of causal IRL. In this paper, we solve for an imitating policy π∗ in Eq. (3) using state-of-the-art IRL algorithms, provided with common choices of parametric reward functions. These algorithms include the multiplicative-weights algorithm (MWAL) (Syed & Schapire, 2008) and generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016). We refer readers to Algs. 3 and 4 in (Ruan et al., 2023, Appendix C) for more discussions on the pseudo-code and implementation details.

Causal MWAL. Abbeel & Ng (2004) and Syed & Schapire (2008) study IRL in Markov decision processes where the reward function r(x∗, z∗) is a linear combination of k-length feature expectation vectors φ(x∗, z∗). Particularly, let r(x∗, z∗) = w · φ(x∗, z∗) for a coefficient vector w contained in a convex set S_k = {w ∈ ℝ^k | ‖w‖_1 = 1 and w ⪰ 0}. Let φ^{(i)} be the i-th component of the feature vector φ, and let the deterministic policies with scope S be ordered as π^{(1)}, . . . , π^{(n)}. The canonical equation in Eq. (3) is reducible to a two-person zero-sum matrix game under linearity.

Proposition 1. For a hypothesis class R = {r = w · φ | w ∈ S_k}, the solution ν∗ of the canonical equation in Eq. (3) is obtainable by solving the following minimax problem:

ν∗ = min_{π∼S} max_{w∈S_k} w⊤ G π,    (4)

where G is a k × n matrix given by G(i, j) = Σ_{x∗,z∗} φ^{(i)}(x∗, z∗) (ρ(x∗, z∗) − ρ_{π^{(j)}}(x∗, z∗)).

There exist effective multiplicative-weights algorithms for solving the matrix game in Eq. (4), including MW (Freund & Schapire, 1999) and MWAL (Syed & Schapire, 2008).
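A small sketch of a multiplicative-weights solver for the matrix game in Eq. (4). Only the reward player runs MW here while the policy player best-responds each round, so the average payoff approximates ν∗ by standard no-regret arguments; the step size and iteration count are arbitrary assumptions, and this is not the full MWAL procedure:

```python
import numpy as np

def mw_matrix_game(G, iters=2000, eta=0.05):
    """G: k x n payoff matrix from Prop. 1 (rows: features, cols: policies)."""
    k, n = G.shape
    w = np.full(k, 1.0 / k)            # reward player's weights on the simplex S_k
    payoffs = []
    for _ in range(iters):
        j = int(np.argmin(w @ G))      # policy player best-responds (minimizes)
        payoffs.append(float(w @ G[:, j]))
        w = w * np.exp(eta * G[:, j])  # reward player up-weights violated features
        w /= w.sum()                   # project back onto the simplex
    return w, float(np.mean(payoffs))  # average play approximates nu*
```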
Causal GAIL. Ho & Ermon (2016) introduce the GAIL algorithm for learning an imitating policy in Markov decision processes with a general family of non-linear reward functions. In particular, r(x∗, z∗) takes values in the real space ℝ, i.e., r ∈ R_{X∗,Z∗} where R_{X∗,Z∗} = {r : D_{X∗} × D_{Z∗} → ℝ}. The complexity of the reward function r is penalized by a convex regularization function ψ(r), i.e.,

ν∗ = min_{π∼S} max_{r∈R_{X∗,Z∗}} Σ_{x∗,z∗} r(x∗, z∗) (ρ(x∗, z∗) − ρ_π(x∗, z∗)) − ψ(r).    (5)

Henceforth, we will consistently refer to Eq. (5) as the penalized canonical equation of causal IRL. It is often preferable to solve its conjugate form. Formally,

Proposition 2. For a hypothesis class R = {r : D_{X∗} × D_{Z∗} → ℝ} regularized by ψ, the solution ν∗ of the penalized canonical equation in Eq. (5) is obtainable by solving the following problem:

ν∗ = min_{π∼S} ψ∗(ρ − ρ_π)    (6)

where ψ∗ is the conjugate function of ψ, given by ψ∗(a) = max_{r∈R_{X∗,Z∗}} a⊤r − ψ(r).

Eq. (6) seeks a policy π which minimizes the divergence between the occupancy measures of the imitator and the expert, as measured by the function ψ∗. The computational framework of generative adversarial networks (Goodfellow et al., 2014) provides an effective approach to solving such a matching problem, e.g., the GAIL algorithm (Ho & Ermon, 2016).

3 CAUSAL IMITATION WITHOUT SEQUENTIAL BACKDOOR

In this section, we investigate causal IRL beyond the condition of the minimal sequential π-backdoor. Observe that the key to the reduction of the canonical causal IRL equation in Eq. (3) lies in the identification of the expected reward E[Y | do(π)] had the latent reward Y been observed. Next, we will study general conditions under which E[Y | do(π)] is uniquely discernible from distribution P(O, Y) in the causal diagram G, called the identifiability of causal effects (Pearl, 2000, Def. 3.2.4).

Definition 5 (Identifiability). Given a causal diagram G and a policy π ∼ S, the expected reward E[Y | do(π)] is said to be identifiable from distribution P(O, Y) in G if E[Y | do(π)] is uniquely computable from P(O, Y) in any SCM M compatible with G. We say a policy scope S is identifiable (from P(O, Y) in G) if for all policies π ∼ S, the corresponding expected rewards E[Y | do(π)] are identifiable from P(O, Y) in G.

Our next result shows that whenever an identifiable policy scope S is found, one could always reduce the causal IRL problem to the canonical optimization equation in Eq. (3).

Theorem 2. Given a causal diagram G, a policy scope S is identifiable from P(O, Y) in G if and only if for any policy π ∼ S, the expected reward E[Y | do(π)] is computable from P(O, Y) as

E[Y | do(π)] = Σ_{x∗,z∗} E[Y | x∗, z∗] ρ_π(x∗, z∗)    (7)

where subsets X∗ ⊆ X, Z∗ ⊆ O \ X; and the imitator's occupancy measure ρ_π(x∗, z∗) is a function of the observational distribution P(O) and the policy π.

Thm. 2 suggests a general procedure to learn an imitating policy via causal IRL. Whenever an identifiable scope S is found, the identification formula in Eq. (7) permits one to reduce the optimization problem in Eq. (1) to the canonical equation in Eq. (3). One could thus obtain an imitating policy π ∼ S by solving Eq. (3), where the expert's occupancy measure ρ(x∗, z∗) = P(x∗, z∗) and the imitator's occupancy measure ρ_π(x∗, z∗) is given by Eq. (7). As an example, consider the frontdoor diagram described in Fig. 2a and a policy scope S = {⟨X, ∅⟩}. The expected reward E[Y | do(π)] = Σ_{x′} E[Y | do(x′)] π(x′), and E[Y | do(x′)] is identifiable from P(X, Y, Z) using the frontdoor adjustment formula (Pearl, 2000, Thm. 3.3.4).
The expected reward E[Y | do(π)] of any policy π(X) could thus be written as:

E[Y | do(π)] = Σ_{z,x} E[Y | x, z] P(x) Σ_{x′} P(z | x′) π(x′).    (8)

Let occupancy measures ρ(x, z) = P(x, z) and ρ_π(x, z) = P(x) Σ_{x′} P(z | x′) π(x′). We could thus learn an imitating policy in the frontdoor diagram by solving the canonical equation given by:

ν∗ = min_{π∼S} max_{r∈R} Σ_{x,z} r(x, z) (ρ(x, z) − ρ_π(x, z)),    (9)

where R is a hypothesis class of the reward function r(x, z) ≜ E[Y | x, z]. The solution π∗(X) is an imitating policy performing at least as well as the expert's behavior policy if the gap ν∗ ≤ 0. Next, we will describe how to obtain the identification formula in Eq. (7) provided with an identifiable scope S. Without loss of generality, we will assume that the reward Y is the only endogenous variable that is latent in the causal diagram G, i.e., V = O ∪ {Y}. (Otherwise, one could always simplify the diagram G and project the other latent variables L \ {Y} using the projection algorithm (Tian, 2002, Sec. 4.5), without affecting the identifiability of the target query E[Y | do(π)].) We will utilize a special type of clustering of nodes in the causal diagram G, called the confounded component (for short, c-component).

Definition 6 (C-component (Tian & Pearl, 2002)). For a causal diagram G, a subset C ⊆ V is a c-component if any pair V_i, V_j ∈ C is connected by a bi-directed path in G.

For instance, the frontdoor diagram in Fig. 2a contains two c-components, C_1 = {X, Y} and C_2 = {Z}. We will utilize a sound and complete procedure IDENTIFY (Tian, 2002; 2008) for identifying the causal effect E[Y | do(π)] of an arbitrary policy π ∼ S. Particularly, IDENTIFY takes as input the causal diagram G, a reward Y, and a policy scope S. It returns an identification formula for E[Y | do(π)] from P(O, Y) if the expected rewards of all policies π ∼ S are identifiable. Otherwise, IDENTIFY(G, Y, S) = "FAIL". Details of IDENTIFY are shown in (Zhang et al., 2020, Appendix B). Recall that G_S is the causal diagram of the submodel M_π induced by a policy π ∼ S. Fig. 2b shows the diagram G_S obtained from the frontdoor graph G and scope S = {⟨X, ∅⟩} described in Fig. 2a. Let Z_Y = An(Y)_{G_S} be the ancestors of Y in G_S. Our next result shows that IDENTIFY(G, Y, S) is ensured to find an identification formula of the form in Eq. (7) when it is identifiable.

Lemma 1. Given a causal diagram G, a policy scope S is identifiable from P(O, Y) in G if and only if IDENTIFY(G, Y, S) ≠ "FAIL". Moreover, IDENTIFY(G, Y, S) returns an identification formula of the form in Eq. (7) where X∗ = Pa(C_Y) ∩ X and Z∗ = Pa(C_Y) \ ({Y} ∪ X); and C_Y is the c-component containing the reward Y in the subgraph G[An(Z_Y)].

For example, for the frontdoor diagram G in Fig. 2a, the manipulated diagram G_S with scope S = {⟨X, ∅⟩} is described in Fig. 2b. Since Z_Y = An(Y)_{G_S} = {X, Z, Y}, C_Y is thus given by {X, Y}. Lem. 1 implies that X∗ = Pa({X, Y}) ∩ {X} = {X} and Z∗ = Pa({X, Y}) \ {X, Y} = {Z}. Applying IDENTIFY(G, Y, {⟨X, ∅⟩}) returns the frontdoor adjustment formula in Eq. (8).
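The frontdoor occupancy measures in Eqs. (8)-(9) are straightforward to evaluate once P(X), P(Z | X), and P(X, Z) are estimated; a toy sketch (dictionary tables are our assumption):

```python
from itertools import product

def frontdoor_value(r, p_x, p_z_given_x, pi, dom=(0, 1)):
    """E[Y|do(pi)] = sum_{x,z} E[Y|x,z] * rho_pi(x,z), with
    rho_pi(x,z) = P(x) * sum_{x'} P(z|x') pi(x')   (Eq. 8)."""
    value = 0.0
    for x, z in product(dom, repeat=2):
        rho_pi = p_x[x] * sum(p_z_given_x[z, xp] * pi[xp] for xp in dom)
        value += r[x, z] * rho_pi
    return value
```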
3.1 SEARCHING FOR IDENTIFIABLE POLICY SCOPES

The remainder of this section describes an effective algorithm to find identifiable policy scopes S had the latent reward signal Y been observed. Let S denote the collection of all identifiable policy scopes S from distribution P(O, Y) in the causal diagram G. Our algorithm LISTIDSCOPE, described in Alg. 1, enumerates elements in S. It takes as input a causal diagram G, a reward signal Y, and subsets L = ∅ and R = ∪^n_{i=1} PA∗_i. More specifically, LISTIDSCOPE maintains two scopes S_l ⊆ S_r (Step 2). It performs a backtracking search to find identifiable scopes S in G such that S_l ⊆ S ⊆ S_r. It aborts a branch when either (1) all subscopes of S_r are identifiable (Step 3); or (2) all scopes containing S_l are non-identifiable (Step 6). The following lemma supports this aborting criterion.

Lemma 2. Given a causal diagram G, for policy scopes S′ ⊆ S, S′ is identifiable from distribution P(O, Y) in G if S is identifiable from P(O, Y) in G.

Algorithm 1: LISTIDSCOPE
1: Input: G, Y and subsets L ⊆ R; Output: a set of identifiable policy scopes S
2: Let scopes S_r = {⟨X_i, R ∩ PA∗_i⟩}^n_{i=1} and S_l = {⟨X_i, L ∩ PA∗_i⟩}^n_{i=1}
3: if IDENTIFY(G, Y, S_r) ≠ "FAIL" then
4:   Output S_r
5: end if
6: if IDENTIFY(G, Y, S_l) ≠ "FAIL" then
7:   Pick an arbitrary V ∈ R \ L
8:   LISTIDSCOPE(G, Y, L ∪ {V}, R)
9:   LISTIDSCOPE(G, Y, L, R \ {V})
10: end if

At Step 7, LISTIDSCOPE picks an arbitrary variable V that is included in the input covariates R but not in L. It then recursively returns all identifiable policy scopes S in G: the first recursive call returns scopes taking V as an input for some actions X_i ∈ X, and the second call returns all scopes that do not consider V when selecting values for any action in X. We say a policy π is associated with a collection of policy scopes S, denoted by π ∼ S, if there exists S ∈ S so that π ∼ S. It is possible to show that LISTIDSCOPE produces a collection of identifiable scopes that is sufficient for the imitation task.

Theorem 3. For a causal diagram G and a reward Y, LISTIDSCOPE(G, Y, ∅, ∪^n_{i=1} PA∗_i) enumerates a subset S∗ ⊆ S so that for any π ∼ S, there is π∗ ∼ S∗ where E[Y | do(π)] = E[Y | do(π∗)].

Moreover, LISTIDSCOPE outputs identifiable policy scopes with a polynomial delay. This follows from the observation that LISTIDSCOPE searches over a tree of policy scopes with height at most |∪^n_{i=1} PA∗_i| and that IDENTIFY(G, Y, S) terminates in polynomially many steps w.r.t. the size of the diagram G.
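A structural sketch of this backtracking search. Here `identify` stands in for the IDENTIFY oracle of Lem. 1, and for brevity a scope is abstracted to a single shared covariate set; both simplifications are our assumptions:

```python
def list_id_scopes(identify, L, R, out):
    """Enumerate identifiable covariate sets S with L <= S <= R."""
    if identify(R):           # S_r identifiable: all subscopes are too (Lem. 2)
        out.append(set(R))
        return
    if not identify(L):       # S_l non-identifiable: so is every superscope
        return
    v = next(iter(R - L))     # pick an undecided covariate V in R \ L
    list_id_scopes(identify, L | {v}, R, out)  # scopes that use V
    list_id_scopes(identify, L, R - {v}, out)  # scopes that ignore V

# e.g., list_id_scopes(oracle, set(), set(all_parent_candidates), results)
# where `oracle` and `all_parent_candidates` are hypothetical placeholders.
```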
4 EXPERIMENTS

In this section, we demonstrate our framework on various imitation learning tasks, ranging from synthetic causal models to real-world datasets, including highway driving (Krajewski et al., 2018) and images (LeCun, 1998). We find that our approach is able to incorporate parametric knowledge about the reward function and achieve effective imitating policies across different causal diagrams. For all experiments, we evaluate our proposed Causal-IRL based on the canonical equation formulation in Eq. (3). As baselines, we also include: (1) standard BC, mimicking the expert's nominal behavior policy; (2) standard IRL, utilizing all observed covariates preceding every X_i ∈ X while being blind to the causal relationships in the underlying model; and (3) Causal-BC (Zhang et al., 2020; Kumor et al., 2021), which learns an imitating policy with the sequential π-backdoor criterion. We refer readers to (Ruan et al., 2023, Appendix D) for additional experiments and more discussions on the experimental setup.

Backdoor. Consider an SCM instance compatible with Fig. 1c including binary observed variables Z_1, X_1, Z_2, X_2, Y ∈ {0, 1}. Causal-BC utilizes the sequential π-backdoor admissible scope {⟨X_1, {Z_1}⟩, ⟨X_2, {Z_2}⟩}, while Causal-IRL utilizes the scope {⟨X_1, ∅⟩, ⟨X_2, {Z_2}⟩} satisfying the minimal sequential π-backdoor. Simulation results, shown in Fig. 3a, reveal that Causal-IRL consistently outperforms the expert's policy and the other imitation strategies by exploiting additional parametric knowledge about the expected reward E[Y | X_1, X_2, Z_2]; Causal-BC is able to achieve the expert's performance. Unsurprisingly, neither BC nor IRL is able to obtain an imitating policy.

Highway Driving. We consider a learning scenario where the agent learns a driving policy from the observed trajectories of a human expert. The causal diagram of this example is provided in (Ruan et al., 2023, Appendix D, Fig. 4), where X_1 is the acceleration of the ego vehicle at the previous step; Z_1 is the longitudinal and lateral historical accelerations of the ego vehicle two steps ago; X_2 is the velocity of the ego vehicle; Z_2 is the velocity of the preceding vehicle; and W indicates the information from surrounding vehicles. Values of X_1, X_2, Z_1, Z_2 are drawn from the real-world driving dataset HighD (Krajewski et al., 2018). The reward Y is decided by a non-linear function f_Y(X_2, Z_2, U_Y). Both Causal-IRL and Causal-BC utilize the scope {⟨X_1, ∅⟩, ⟨X_2, {Z_2}⟩}. Causal-IRL also exploits the additional knowledge that the expected reward E[Y | X_1, X_2, Z_2] is a monotone function via reward augmentation (Li et al., 2017). Simulation results are shown in Fig. 3b. We found that Causal-IRL performs the best among all strategies. Causal-BC is able to achieve the expert's performance. BC and IRL perform the worst of all and fail to obtain an imitating policy.

MNIST Digits. Consider again the frontdoor diagram in Fig. 2a. To evaluate the performance of our proposed approach in high-dimensional domains, we now replace variable Z with sampled images drawn from the MNIST digits dataset (LeCun, 1998). The reward Y is decided by a linear function taking Z and an unobserved confounder U_{X,Y} as input. Causal-IRL formulates the imitation problem as a two-person zero-sum game through the frontdoor adjustment described in Eq. (9), which can be solved by the MW algorithm (Freund & Schapire, 1999; Syed & Schapire, 2008). As shown in Fig. 3c, simulation results reveal that Causal-IRL outperforms Causal-BC and BC, while IRL performs the worst among all the algorithms.

Infinite MDPUC. To demonstrate our proposed framework in the sequential decision-making setting with an infinite horizon, we consider a generalized Markov decision process incorporating unobserved confounders (Ruan & Di, 2022), called the MDPUC (Zhang & Bareinboim, 2022). This sequential model simulates real-world driving dynamics. By exploiting the Markov property over time steps, we are able to decompose the causal diagram over the infinite horizon into a collection of sub-graphs, one for each time step i = 1, 2, . . . . Fig. 1d shows the causal diagram spanning time steps i = 1, 2, 3. As a comparison, BC and IRL still utilize the stationary policy {⟨X_i, {Z_i}⟩}. By applying Thm. 1 at each time step, we obtain a π-backdoor admissible policy scope {⟨X_i, {Z_i, X_{i−1}, Z_{i−1}}⟩} for Causal-IRL and Causal-BC. Simulation results are shown in Fig. 3d. One could see by inspection that Causal-IRL performs the best and achieves the expert's performance.

5 CONCLUSION

This paper investigates imitation learning via inverse reinforcement learning (IRL) in the semantic framework of structural causal models.
The goal is to find an effective imitating policy that performs at least as well as the expert's behavior policy from combinations of demonstration data, qualitative knowledge about the data-generating mechanisms represented as a causal diagram, and quantitative knowledge about the reward function. We provide a graphical criterion (Thm. 1) based on the sequential backdoor, which allows one to obtain an imitating policy by solving a canonical optimization equation of causal IRL. Such a canonical formulation addresses the challenge of the presence of unobserved confounders (UCs), and is solvable by leveraging standard IRL algorithms (Props. 1 and 2). Finally, we move beyond the backdoor criterion and show that the canonical equation is achievable whenever the expected rewards of policies are identifiable had the reward also been observed (Thms. 2 and 3).

ACKNOWLEDGEMENTS

This research was supported in part by the NSF, ONR, AFOSR, DoE, Amazon, JP Morgan, and The Alfred P. Sloan Foundation.

ETHICS STATEMENT

This paper investigates the theoretical framework of causal inverse RL from the natural trajectories of an expert demonstrator, even when the reward signal is unobserved. Input covariates used by the expert to determine the original values of the actions are unknown, introducing unobserved confounding bias in the demonstration data. Our framework may apply to various fields in reality, including autonomous vehicle development, industrial automation, and chronic disease management. A positive impact of this work is that we discuss the potential risk of training an IRL policy from demonstrations in the presence of unobserved confounding (UC). Our formulation of causal IRL is inherently robust against confounding bias. For example, solving the causal IRL problem in Eq. (1) requires the imitator to learn an effective policy that maximizes the reward in a worst-case causal model where the performance gap between the expert and imitator is the largest possible. More broadly, automated decision systems using causal inference methods prioritize safety and robustness during their decision-making processes. Such requirements are increasingly essential since black-box AI systems are prevalent, and our understanding of their potential implications is still limited.

REPRODUCIBILITY STATEMENT

The complete proofs of all theoretical results presented in this paper, including Thms. 1 and 2, are provided in (Ruan et al., 2023, Appendix B). Details on the implementation of the proposed algorithms are included in (Ruan et al., 2023, Appendix C). Finally, (Ruan et al., 2023, Appendix D) provides a detailed description of the experimental setup. Readers can find all appendices as part of the supplementary text after the "References" section. We provided references to all existing datasets used in the experiments, including HighD (Krajewski et al., 2018) and MNIST (LeCun, 1998). Other experiments are synthetic and do not introduce any new assets. Source code for all experiments and simulations is released in the complete technical report (Ruan et al., 2023).
1. What is the focus and contribution of the paper regarding inverse reinforcement learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in its formulation and examples? 3. Do you have any concerns or questions about the gap ν∗, the setting of sequential decision-making, and the extension to general settings? 4. How do the authors address the identifiability of the cumulative reward and the effective actions and covariates in their theorem? 5. What are your assessments of the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

The authors considered the problem of imitation learning when the underlying causal structure of the environment is given. They provided a causal formulation of the problem of inverse RL (IRL) (equation (2)). They introduced the notion of a minimal π-backdoor admissible scope and showed that the effect of such policies on the average reward can be computed from the observational distribution over (O, Y), i.e., the observable variables and the reward. Based on this, a canonical equation for IRL is given in (4), where for the two settings of MWAL and GAIL this formulation is simplified. Then, the authors used a method called IDENTIFY, which goes beyond the graphical condition of the π-backdoor admissible scope and is sound and complete for checking whether a policy scope is identifiable from P(O, Y). Finally, they proposed LISTIDSCOPE to enumerate all identifiable policy scopes.

Strengths And Weaknesses

Strengths: The authors provided a nice formulation of inverse RL in the case that we have access to the causal graph. Moreover, they also gave some examples of this formulation for MWAL and GAIL. They presented a method that can enumerate all identifiable policy scopes from P(O, Y).

Weaknesses: Regarding the gap ν∗, it is not clear why the authors only considered the cases where ν∗ ≤ 0. This can only happen if the expert has suboptimal performance in the environment (or, in causal language, in some SCM M) in which it is acting. In the following, I give my detailed comments: It would be great if the authors could give some real cases in which the performance of an expert is suboptimal even in the environment it is acting in. The example in the introduction is not convincing enough, as it is not clear why the expert acts based on just X, which results in poor performance. The authors considered a "sequential decision-making" setting; however, it seems that Y is not indexed with time steps. How can the results in the paper be extended to the more general setting where we have a sequence of Y_i's? In the FAQ, it is mentioned that Y can be a set of variables, but it might be the case that the cumulative reward is identifiable while each Y_i is not identifiable from P(O, Y). Under what conditions is ν∗ positive? What is the exact definition of effective actions and covariates in Theorem 1? What happens to the other variables X and Z in Theorem 1?

Clarity, Quality, Novelty And Reproducibility

The paper is generally well-written, but some more explanations could be given in the paper. For instance, it would be good to discuss the projection algorithm mentioned in the footnote on page 7 and also mention why it is required to perform such projections. Regarding the novelty of the paper, I think the causal formulation of causal IRL and the algorithm LISTIDSCOPE are somewhat novel, but it is not clear how causality can help to get better performance than the expert. The example in the introduction is not convincing. It seems that the results can be reproduced based on the explanations in the appendix.
ICLR
Title Causal Imitation Learning via Inverse Reinforcement Learning Abstract One of the most common ways children learn when unfamiliar with the environment is by mimicking adults. Imitation learning concerns an imitator learning to behave in an unknown environment from an expert’s demonstration; reward signals remain latent to the imitator. This paper studies imitation learning through causal lenses and extends the analysis and tools developed for behavior cloning (Zhang, Kumor, Bareinboim, 2020) to inverse reinforcement learning. First, we propose novel graphical conditions that allow the imitator to learn a policy performing as well as the expert’s behavior policy, even when the imitator and the expert’s state-action space disagree, and unobserved confounders (UCs) are present. When provided with parametric knowledge about the unknown reward function, such a policy may outperform the expert’s. Also, our method is easily extensible and allows one to leverage existing IRL algorithms even when UCs are present, including the multiplicative-weights algorithm (MWAL) (Syed & Schapire, 2008) and the generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016). Finally, we validate our framework by simulations using real-world and synthetic data. 1 INTRODUCTION Reinforcement Learning (RL) has been deployed and shown to perform extremely well in highly complex environments in the past decades (Sutton & Barto, 1998; Mnih et al., 2013; Silver et al., 2016; Berner et al., 2019). One of the critical assumptions behind many of the classical RL algorithms is that the reward signal is fully observed, and the reward function could be well-specified. In many real-world applications, however, it might be impractical to design a suitable reward function that evaluates each and every scenario (Randløv & Alstrøm, 1998; Ng et al., 1999). For example, in the context of human driving, it is challenging to design a precise reward function, and experimenting in the environment could be ill-advised; still, watching expert drivers operating is usually feasible. In machine learning, the imitation learning paradigm investigates the problem of how an agent should behave and learn in an environment with an unknown reward function by observing demonstrations from a human expert (Argall et al., 2009; Billard et al., 2008; Hussein et al., 2017; Osa et al., 2018). There are two major learning modalities that implements IL – behavioral cloning (BC) (Widrow, 1964; Pomerleau, 1989; Muller et al., 2006; Mülling et al., 2013; Mahler & Goldberg, 2017) and inverse reinforcement learning (IRL) Ng et al. (2000); Ziebart et al. (2008); Ho & Ermon (2016); Fu et al. (2017). BC methods directly mimic the expert’s behavior policy by learning a mapping from observed states to the expert’s action via supervised learning. Alternatively, IRL methods first learn a potential reward function under which the expert’s behavior policy is optimal. The imitator then obtains a policy by employing standard RL methods to maximize the learned reward function. Under some common assumptions, both BC and IRL are able to obtain policies that achieve the expert’s performance (Kumor et al., 2021; Swamy et al., 2021). Moreover, when additional parametric knowledge about the reward function is provided, IRL may produce a policy that outperforms the expert’s in the underlying environment (Syed & Schapire, 2008; Li et al., 2017; Yu et al., 2020). For concreteness, consider a learning scenario depicted in Fig. 
1a, describing trajectories of humandriven cars collected by drones flying over highways (Krajewski et al., 2018; Etesami & Geiger, 2020). Using such data, we want to learn a policy X ← π(Z) deciding on the acceleration (action) X ∈ ∗ Equal contribution. {0, 1} of the demonstrator car based on velocities and locations Z of surrounding cars. The driving performance is measured by a latent reward signal Y . Consider an instance where Y ← (1−X)Z + X(1−Z) and values of Z are drawn uniformly over {0, 1}. A human expert generates demonstrations following a behavior policy such that P (X = 1 | Z = 0) = 0.6 and P (X = 0 | Z = 1) = 0.4. Evaluating the expert’s performance gives E[Y ] = P (X = 1, Z = 0) + P (X = 0, Z = 1) = 0.5. Now we apply standard IRL algorithms to learn a policy X ← π(Z) so that the imitator’s driving performance, denoted by E[Y | do(π)], is at least as good as the expert’s performance E[Y ]. Detailed derivations of IRL policy are shown in (Ruan et al., 2023, Appendix A). Note that E[Y |z, x] = x+ z − 2xz belongs to a family of reward functions fY (x, z) = αx+ βz − γxz, where 0 < α < γ. A typical IRL imitator solves a minimax problem minπ maxfY E [fY (X,Z)]−E [fY (X,Z) | do(π)]. The inner step “guesses” a reward function being optimized by the expert; while the outer step learns a policy maximizing the learned reward function. Applying these steps leads to a policy π∗ : X ← ¬Z with the expected reward E[Y | do(π∗)] = 1, which outperforms the sub-optimal expert. Despite the performance guarantees provided by existing imitation methods, both BC and IRL rely on the assumption that the expert’s input observations match those available to the imitator. More recently, there exists an emerging line of research under the rubric of causal imitation learning that augments the imitation paradigm to account for environments consisting of arbitrary causal mechanisms and the aforementioned mismatch between expert and imitator’s sensory capabilities (de Haan et al., 2019; Zhang et al., 2020; Etesami & Geiger, 2020; Kumor et al., 2021). Closest to our work, Zhang et al. (2020); Kumor et al. (2021) derived graphical criteria that completely characterize when and how BC could lead to successful imitation even when the agents perceive reality differently. Still, it is unclear how to perform IRL-type training if some expert’s observed states remain latent to the imitator, which leads to the presence of unobserved confounding (UCs) in expert’s demonstrations. Perhaps surprisingly, naively applying IRL methods when UCs are present does not necessarily lead to satisfactory performance, even when the expert itself behaves optimally. To witness, we now modify the previous highway driving scenario to demonstrate the challenges of UCs. In reality, covariates Z (i.e., velocities and location) are also affected by the car horn U1 of surrounding vehicles and the wind condition U2. However, due to the different perspectives of drones (recording from the top), such critical information (i.e, U1, U2 ) is not recorded by the camera and thus remains unobserved. Fig. 1b graphically describes this modified learning setting. More specifically, consider an instance where Z ← U1 ⊕ U2, Y ← ¬X ⊕ Z ⊕ U2; ⊕ is the exclusive-or operator; and values of U1 and U2 are drawn uniformly over {0, 1}. An expert driver, being able to hear the car horn U1, follows a behavior policy X ← U1 and achieves the optimal performance E[Y ] = 1. 
Meanwhile, observe that E[Y |z, x] = 1 belongs to a family of reward functions fY (x, z) = α (where α > 0). Solving minπ maxfY E [fY (X,Z)]− E [fY (X,Z) | do(π)] leads to an IRL policy π∗ with expected reward E[Y |do(π∗)] = 0.5, which is far from the expert’s optimal performance E[Y ] = 1. After all, a question that naturally arises is, under what conditions an IRL imitator procedure can perform well when UCs are present, and there is a mismatch between the perception of the two agents? In this paper, we answer this question and, more broadly, investigate the challenge of performing IRL through causal lenses. In particular, our contributions are summarized as follows. (1) We provide a novel, causal formulation of the inverse reinforcement learning problem. This formulation allows one to formally study and understand the conditions under which an IRL policy is learnable, including in settings where UCs cannot be ruled out a priori. (2) We derive a new graphical condition for deciding whether an imitating policy can be computed from the available data and knowledge, which provides a robust generalization of current IRL algorithms to non-Markovian settings, including GAIL (Ho & Ermon, 2016) and MWAL (Syed & Schapire, 2008). (3) Finally, we move beyond this graphical condition and develop an effective IRL algorithm for structural causal models (Pearl, 2000) with arbitrary causal relationships. Due to the space constraints, all proofs are provided in (Ruan et al., 2023, Appendix B). For a more detailed survey on imitation learning and causal inference, we refer readers to (Ruan et al., 2023, Appendix E). 1.1 PRELIMINARIES We use capital letters to denote random variables (X) and small letters for their values (x). DX represents the domain of X and PX the space of probability distributions over DX . For a set X , let |X| denote its dimension. The probability distribution over variables X is denoted by P (X). Similarly, P (Y |X) represents a set of conditional distributions P (Y |X = x) for all realizations x. We use abbreviations P (x) for probabilities P (X = x); so does P (Y = y |X = x) = P (y | x). Finally, indicator function 1{Z = z} returns 1 if Z = z holds true; otherwise 0. The basic semantic framework of our analysis rests on structural causal models (SCMs) (Pearl, 2000, Ch. 7). An SCM M is a tuple ⟨U ,V ,F , P (U)⟩ with V the set of endogenous, and U exogenous variables. F is a set of structural functions s.t. for fV ∈ F , V ← fV (paV ,uV ), with PAV ⊆ V ,UV ⊆ U . Values of U are drawn from an exogenous distribution P (U), inducing distribution P (V ) over endogenous variables V . Since the learner can observe only a subset of endogenous variables, we split V into a partition O ∪L where variable O ⊆ V are observed and L = V \O remain latent to the leaner. The marginal distribution P (O) is thus referred to as the observational distribution. An atomic intervention on a subset X ⊆ V , denoted by do(x), is an operation where values of X are set to constants x, replacing the functions fX = {fX : ∀X ∈X} that would normally determine their values. For an SCM M , let Mx be a submodel of M induced by intervention do(x). For a set Y ⊆ V , the interventional distribution P (s|do(x)) induced by do(x) is defined as the distribution over Y in the submodel Mx, i.e., PM (Y |do(x)) ≜ PMx(Y ). We leave M implicit when it is obvious from the context. Each SCM M is associated with a causal diagram G which is a directed acyclic graph where (e.g., see Fig. 
solid nodes represent observed variables O, dashed nodes represent latent variables L, and arrows represent the arguments PAV of each function fV ∈ F. Exogenous variables U are not explicitly shown; a bi-directed arrow between nodes Vi and Vj indicates the presence of an unobserved confounder (UC) affecting both Vi and Vj. We will use family abbreviations to represent graphical relationships such as parents, children, descendants, and ancestors. For example, the set of parent nodes of X in G is denoted by $pa(\boldsymbol{X})_{\mathcal{G}} = \cup_{X \in \boldsymbol{X}} pa(X)_{\mathcal{G}}$; ch, de, and an are similarly defined. Capitalized versions Pa, Ch, De, An include the argument as well, e.g., Pa(X)G = pa(X)G ∪ X. For a subset X ⊆ V, the subgraphs obtained from G by removing the edges outgoing from X and the edges incoming into X are written as $\mathcal{G}_{\underline{\boldsymbol{X}}}$ and $\mathcal{G}_{\overline{\boldsymbol{X}}}$, respectively. G[X] is the subgraph of G containing only the nodes X and the edges among them. A path from a node X to a node Y in G is a sequence of edges that does not include a particular node more than once. Two sets of nodes X, Y are said to be d-separated by a third set Z in a DAG G, denoted by (X ⊥ Y | Z)G, if every path from nodes in X to nodes in Y is "blocked" by nodes in Z. The criterion of blockage follows (Pearl, 2000, Def. 1.2.3). For a more detailed survey on SCMs, we refer readers to (Pearl, 2000; Bareinboim et al., 2022).

2 CAUSAL INVERSE REINFORCEMENT LEARNING

We investigate the sequential decision-making setting concerning a set of actions X, a series of covariates Z, and a latent reward Y in an SCM M. An expert (e.g., a physician, a driver), operating in the SCM M, selects actions following a behavior policy, which is the collection of structural functions fX = {fX | X ∈ X}. The expert's performance is evaluated as the expected reward E[Y]. On the other hand, a learning agent (i.e., the imitator) intervenes on the actions X following an ordering X1 ≺ · · · ≺ Xn; each action Xi is associated with a set of features PA∗i ⊆ O \ {Xi}. A policy π over the actions X is a sequence of decision rules π = {π1, . . . , πn}. Each decision rule πi(Xi | Zi) is a probability distribution over an action Xi ∈ X, conditioned on the values of a set of covariates Zi ⊆ PA∗i. Such policies π are also referred to as dynamic treatment regimes (Murphy et al., 2001; Chakraborty & Murphy, 2014), which generalize personalized medicine to time-varying treatment settings in healthcare, in which treatment is repeatedly tailored to a patient's dynamic state.

A policy intervention on the actions X following a policy π, denoted by do(π), entails a submodel Mπ of an SCM M in which the structural functions fX associated with X (i.e., the expert's behavior policy) are replaced with the decision rules Xi ∼ πi(Xi | Zi) for every Xi ∈ X. A critical assumption throughout this paper is that the submodel Mπ does not contain any cycles. Similarly, the interventional distribution P(V | do(π)) induced by policy π is defined as the joint distribution over V in Mπ.

Throughout this paper, detailed parametrizations of the underlying SCM M are assumed to be unknown to the agent. Instead, the agent has access to the following input: (1) a causal diagram G associated with M, and (2) the expert's demonstrations, summarized as the observational distribution P(O). The goal of the agent is to output an imitating policy π∗ that achieves the expert's performance.

Definition 1. For an SCM M = ⟨U, V, F, P(U)⟩, an imitating policy π∗ is a policy such that its expected reward is lower bounded by the expert's reward, i.e., EM[Y | do(π∗)] ≥ EM[Y].

In words, the right-hand side is the expert's performance that the agent wants to achieve, while the left-hand side is the actual reward experienced by the agent.
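To make the notion of a policy intervention do(π) concrete, here is a small illustrative sketch (ours, with hypothetical names) that represents a finite SCM by its structural functions, replaces the expert's fX with a stochastic decision rule, and estimates E[Y | do(π)] by Monte Carlo — the quantity on the left-hand side of Def. 1:

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_reward(policy=None, n=200_000):
    """Sample the Fig. 1b-style SCM; `policy` (if given) replaces f_X with a
    decision rule pi(X = 1 | Z), i.e., we sample the submodel M_pi under do(pi)."""
    u1 = rng.integers(0, 2, n)                            # exogenous U1 (car horn)
    u2 = rng.integers(0, 2, n)                            # exogenous U2 (wind)
    z = u1 ^ u2                                           # Z <- U1 xor U2
    if policy is None:
        x = u1                                            # expert's f_X: X <- U1
    else:
        x = (rng.random(n) < policy(z)).astype(int)       # X ~ pi(X = 1 | Z)
    y = (1 - x) ^ z ^ u2                                  # Y <- (not X) xor Z xor U2
    return y.mean()

print("expert E[Y]           :", expected_reward())                      # 1.0
print("E[Y | do(pi)], pi(z)=z:", expected_reward(policy=lambda z: z))    # ~0.5
```

In this confounded instance, no decision rule over Z alone satisfies Def. 1, which is precisely the obstruction that the graphical conditions developed below are designed to detect.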
The challenge in imitation learning arises from the fact that the reward Y is not specified and latent, i.e., Y ∉ O. This precludes approaches that identify E[Y | do(π)] directly from the demonstration data (e.g., through the do- or soft-do-calculus (Pearl, 2000; Correa & Bareinboim, 2020)). There exist methods in the literature for finding an imitating policy in the sense of Def. 1. Before describing their details, we first introduce some necessary concepts. For any policy π, we summarize its associated state-action domain using a sequence of pairs of variables called a policy scope S.

Definition 2 (Lee & Bareinboim (2020)). For an SCM M, a policy scope S (for short, scope) over actions X is a sequence of tuples $\mathcal{S} = \{\langle X_i, Z_i \rangle\}_{i=1}^{n}$ where Zi ⊆ PA∗i for every Xi ∈ X.

We will consistently use π ∼ S to denote a policy π associated with a scope S. For example, consider the policy scope S = {⟨X1, {Z1}⟩, ⟨X2, {Z2}⟩} over actions X1, X2 in Fig. 1c. A policy π ∼ S is a sequence of distributions π = {π1(X1 | Z1), π2(X2 | Z2)}.

Zhang et al. (2020); Kumor et al. (2021) provide a graphical condition that is sufficient for learning an imitating policy via behavioral cloning (BC), provided with a causal diagram G. For a policy scope $\mathcal{S} = \{\langle X_i, Z_i \rangle\}_{i=1}^{n}$, let G(i), i = 1, . . . , n, denote the manipulated graph obtained from G by the following steps: for all j = i + 1, . . . , n, (1) remove the arrows coming into every action Xj; and (2) add direct arrows from nodes in Zj to Xj. Formally, the sequential π-backdoor criterion is defined as follows:

Definition 3 (Kumor et al. (2021)). Given a causal diagram G, a policy scope $\mathcal{S} = \{\langle X_i, Z_i \rangle\}_{i=1}^{n}$ is said to satisfy the sequential π-backdoor criterion in G (for short, to be π-backdoor admissible) if at each Xi ∈ X, one of the following conditions holds: (1) Xi is not an ancestor of Y in G(i), i.e., Xi ∉ An(Y)G(i); or (2) Zi blocks all backdoor paths from Xi to Y in G(i), i.e., $(Y \perp X_i \mid Z_i)$ in $\mathcal{G}^{(i)}_{\underline{X_i}}$.

Kumor et al. (2021) showed that whenever a π-backdoor admissible scope S is available, one could learn an imitating policy π∗ ∼ S by setting π∗i(xi | zi) = P(xi | zi) for every action Xi ∈ X. For instance, consider the causal diagram G in Fig. 1c. The scope S = {⟨X1, {Z1}⟩, ⟨X2, {Z2}⟩} is π-backdoor admissible since (X1 ⊥ Y | Z1) and (X2 ⊥ Y | Z2) hold in G, which is a supergraph containing both manipulated graphs G(1) and G(2). An imitating policy π∗ = {π∗1, π∗2} is thus obtainable by setting π∗1(X1 | Z1) = P(X1 | Z1) and π∗2(X2 | Z2) = P(X2 | Z2). While impressive, a caveat of these results is that the performance of the imitator is restricted by that of the expert, i.e., E[Y | do(π∗)] = E[Y]. In other words, causal BC provides an efficient way to mimic the expert's performance; if the expert's behavior is far from optimal, the same will hold for the learning agent.

2.1 MINIMAL SEQUENTIAL BACKDOOR CRITERION

To circumvent this issue, we take a somewhat different approach to causal imitation by incorporating the principle of inverse reinforcement learning (IRL). Following the game-theoretic approach of Syed & Schapire (2008), we formulate the problem as learning to play a two-player zero-sum game in which the agent chooses a policy and nature chooses an SCM instance. A key property of this formulation is that it allows us to incorporate prior parametric knowledge about the latent reward signal.
When such knowledge is informative, our algorithm is able to obtain a policy that could significantly outperform the expert with respect to the unknown causal environment, while at the same time being guaranteed to perform no worse. Formally, let $\mathcal{M} = \{ M \mid \mathcal{G}_M = \mathcal{G},\, P_M(O) = P(O) \}$ denote the set of SCMs compatible with both the causal diagram G and the observational distribution P(O). Fix a policy scope S and consider the optimization problem defined as follows:

$$\nu^* = \min_{\pi \sim \mathcal{S}} \; \max_{M \in \mathcal{M}} \; E_M[Y] - E_M[Y \mid do(\pi)]. \tag{1}$$

The inner maximization in the above equation can be viewed as a causal IRL step where we attempt to "guess" a worst-case SCM M̂ compatible with G and P(O) that prioritizes the expert's policy; that is, the gap in performance between the expert's and the imitator's policies is maximized. Meanwhile, since the expert's reward EM[Y] is not affected by the imitator's policy π, the outer minimization is equivalent to a planning step that finds a policy π∗ optimizing the learned SCM M̂. Obviously, the solution π∗ is an imitating policy if the gap ν∗ = 0. In cases where the expert is sub-optimal, i.e., EM̂[Y] < EM̂[Y | do(π)] for some policies π, we may have ν∗ < 0. That is, the policy π∗ will dominate the expert's policy fX regardless of the parametrization of the SCM M in the worst-case scenario. In other words, π∗ to some extent ignores the sub-optimal expert and instead exploits prior knowledge about the underlying model.

Despite its clear semantics in terms of causal models, the optimization problem in Eq. (1) requires the learner to search over all possible SCMs compatible with the causal diagram G and the observational distribution P(O). In principle, this entails a quite challenging search, since one has access neither to the parametric forms of the underlying structural functions F nor to the exogenous distribution P(U), and it is not clear how existing optimization procedures could be applied. In this paper, we develop novel methods to circumvent this issue, thus leading to effective imitating policies.

Our first algorithm relies on a refinement of the sequential π-backdoor based on the concept of minimality. A subscope S′ of a policy scope $\mathcal{S} = \{\langle X_i, Z_i \rangle\}_{i=1}^{n}$, denoted by S′ ⊆ S, is a sequence $\{\langle X_i, Z'_i \rangle\}_{i=1}^{n}$ where Z′i ⊆ Zi for every Xi ∈ X. A proper subscope S′ ⊂ S is a subscope of S other than S itself. The minimal π-backdoor admissible scope is defined as follows.

Definition 4. Given a causal diagram G, a π-backdoor admissible scope S is said to be minimal if there exists no proper subscope S′ ⊂ S satisfying the sequential π-backdoor criterion in G.

Theorem 1. Given a causal diagram G, suppose there exists a minimal π-backdoor admissible scope $\mathcal{S} = \{\langle X_i, Z_i \rangle\}_{i=1}^{n}$ in G and define:

1. the effective actions $X^* = X \cap An(Y)_{\mathcal{G}_{\mathcal{S}}}$ and the effective covariates $Z^* = \bigcup_{X_i \in X^*} Z_i$;
2. for i = 1, . . . , n + 1, $X^*_{<i} = \{ X_j \in X^* \mid j < i \}$ and $Z^*_{<i} = \bigcup_{X_j \in X^*_{<i}} Z_j$.

Then, for any policy π ∼ S, the expected reward E[Y | do(π)] is computable from P(O, Y) as

$$E[Y \mid do(\pi)] = \sum_{x^*, z^*} E[Y \mid x^*, z^*] \, \rho_\pi(x^*, z^*) \tag{2}$$

where the occupancy measure $\rho_\pi(x^*, z^*) = \prod_{X_i \in X^*} P(z_i \mid x^*_{<i}, z^*_{<i}) \, \pi_i(x_i \mid z_i)$.

To illustrate, consider again the causal diagram G in Fig. 1c; the manipulated diagram G(2) = G, and G(1) is obtained from G by removing Z2 ↔ X2. While the scope S1 = {⟨X1, {Z1}⟩, ⟨X2, {Z2}⟩} satisfies the sequential π-backdoor, it is not minimal since $(X_1 \perp Y)$ holds in $\mathcal{G}^{(1)}_{\underline{X_1}}$. On the other hand, S2 = {⟨X1, ∅⟩, ⟨X2, {Z2}⟩} is minimal π-backdoor admissible since $(X_2 \perp Y \mid Z_2)$ holds in $\mathcal{G}^{(2)}_{\underline{X_2}}$, and the covariate set {Z2} is minimal due to the presence of the backdoor path X2 ← Z2 → Y. Let us focus on the minimal π-backdoor admissible scope S2. Note that GS2 is the subgraph obtained from G by removing the bi-directed arrow Z2 ↔ X2. We must have effective actions X∗ = {X1, X2} and effective covariates Z∗ = {Z2}. Therefore, Z∗<1 = Z∗<2 = ∅ and Z∗<3 = {Z2}. For any policy π ∼ S2, Thm. 1 implies $E[Y \mid do(\pi)] = \sum_{x_1, x_2, z_2} E[Y \mid x_1, x_2, z_2] \, P(z_2 \mid x_1) \, \pi_2(x_2 \mid z_2) \, \pi_1(x_1)$. On the other hand, the same result in Thm. 1 does not necessarily hold for a non-minimal π-backdoor admissible scope. For instance, consider again the non-minimal scope S1 = {⟨X1, {Z1}⟩, ⟨X2, {Z2}⟩}. The expected reward E[Y | do(π)] of a policy π ∼ S1 is not computable from Eq. (2), and is ultimately not identifiable from the distribution P(O, Y) in G (Tian, 2008).
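As a quick numerical check of Thm. 1, the sketch below evaluates E[Y | do(π)] for the scope S2 via the occupancy measure in Eq. (2). The tables P(z2 | x1) and E[Y | x1, x2, z2] and the policy are invented parametrizations of the Fig. 1c model, used purely for illustration:

```python
import itertools

# Hypothetical parametrization of Fig. 1c (binary variables, ours for illustration).
P_z2_given_x1 = {(0, 0): 0.7, (1, 0): 0.3,      # keys: (z2, x1) -> P(z2 | x1)
                 (0, 1): 0.4, (1, 1): 0.6}
def E_y(x1, x2, z2):                            # a hypothetical E[Y | x1, x2, z2]
    return 0.2 + 0.5 * (x2 == z2) + 0.1 * x1

# A policy pi ~ S2 = {<X1, {}>, <X2, {Z2}>}.
pi1 = {0: 0.5, 1: 0.5}                          # pi1(x1)
pi2 = {(0, 0): 0.9, (1, 0): 0.1,                # keys: (x2, z2) -> pi2(x2 | z2)
       (0, 1): 0.2, (1, 1): 0.8}

# Eq. (2): E[Y | do(pi)] = sum_{x1,x2,z2} E[Y|x1,x2,z2] P(z2|x1) pi2(x2|z2) pi1(x1).
reward = sum(E_y(x1, x2, z2) * P_z2_given_x1[(z2, x1)] * pi2[(x2, z2)] * pi1[x1]
             for x1, x2, z2 in itertools.product([0, 1], repeat=3))
print("E[Y | do(pi)] =", round(reward, 4))
```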
2.2 IMITATION VIA INVERSE REINFORCEMENT LEARNING

Once a minimal π-backdoor admissible scope S is found, there exist effective procedures to solve for an imitating policy in Eq. (1). Let R be a hypothesis class containing all expected rewards EM[Y | x∗, z∗] compatible with candidate SCMs M ∈ M, i.e., R = {EM[Y | x∗, z∗] | M ∈ M}. Applying the identification formula in Thm. 1 reduces the optimization problem in Eq. (1) to

$$\nu^* = \min_{\pi \sim \mathcal{S}} \max_{r \in \mathcal{R}} \sum_{x^*, z^*} r(x^*, z^*) \big( \rho(x^*, z^*) - \rho_\pi(x^*, z^*) \big) \tag{3}$$

where the expert's occupancy measure ρ(x∗, z∗) = P(x∗, z∗) and the agent's occupancy measure ρπ(x∗, z∗) is given by Eq. (2). The above minimax problem is solvable using standard IRL algorithms. The identification result in Thm. 1 ensures that the learned policy applies to any SCM compatible with the causal diagram and the observational data, and is thus robust to the unobserved confounding bias in the expert's demonstrations. Henceforth, we will consistently refer to Eq. (3) as the canonical equation of causal IRL. In this paper, we solve for an imitating policy π∗ in Eq. (3) using state-of-the-art IRL algorithms, provided with common choices of parametric reward functions. These algorithms include the multiplicative-weights algorithm (MWAL) (Syed & Schapire, 2008) and generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016). We refer readers to Algs. 3 and 4 in (Ruan et al., 2023, Appendix C) for the pseudo-code and implementation details.

Causal MWAL. Abbeel & Ng (2004) and Syed & Schapire (2008) study IRL in Markov decision processes where the reward function r(x∗, z∗) is a linear combination of the components of a k-dimensional feature vector ϕ(x∗, z∗). Particularly, let r(x∗, z∗) = w · ϕ(x∗, z∗) for a coefficient vector w contained in the convex set $S_k = \{ w \in \mathbb{R}^k \mid \lVert w \rVert_1 = 1 \text{ and } w \succeq 0 \}$. Let ϕ(i) be the i-th component of the feature vector ϕ, and let the deterministic policies with scope S be ordered as π(1), . . . , π(n). Under linearity, the canonical equation in Eq. (3) is reducible to a two-person zero-sum matrix game.

Proposition 1. For a hypothesis class R = {r = w · ϕ | w ∈ Sk}, the solution ν∗ of the canonical equation in Eq. (3) is obtainable by solving the following minimax problem:

$$\nu^* = \min_{\pi \sim \mathcal{S}} \max_{w \in S_k} w^\top G \pi, \tag{4}$$

where G is a k × n matrix given by $G(i, j) = \sum_{x^*, z^*} \phi^{(i)}(x^*, z^*) \big( \rho(x^*, z^*) - \rho_{\pi^{(j)}}(x^*, z^*) \big)$.

There exist effective multiplicative-weights algorithms for solving the matrix game in Eq. (4), including MW (Freund & Schapire, 1999) and MWAL (Syed & Schapire, 2008).
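For concreteness, here is a minimal multiplicative-weights sketch (ours; the payoff matrix G below is a random stand-in for the matrix defined in Prop. 1) for the zero-sum game in Eq. (4). Both players run no-regret exponentiated updates over their respective simplexes, and the averages of their iterates approximate an equilibrium, whose value approximates ν∗:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 6
G = rng.normal(size=(k, n))          # stand-in for G(i, j) from Prop. 1

T, eta = 5000, 0.05
w = np.ones(k) / k                   # max-player: weights over features
p = np.ones(n) / n                   # min-player: mixture over deterministic policies
w_avg, p_avg = np.zeros(k), np.zeros(n)

for _ in range(T):
    w_avg += w / T
    p_avg += p / T
    # Max-player ascends on its payoff G @ p; min-player descends on w @ G.
    w = w * np.exp(eta * (G @ p));  w /= w.sum()
    p = p * np.exp(-eta * (w @ G)); p /= p.sum()

nu = w_avg @ G @ p_avg
print("approximate game value nu* =", round(float(nu), 4))
```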
Causal GAIL. Ho & Ermon (2016) introduce the GAIL algorithm for learning an imitating policy in Markov decision processes with a general family of non-linear reward functions. In particular, r(x∗, z∗) takes values in the real space R, i.e., $r \in \mathcal{R}_{X^*, Z^*}$ where $\mathcal{R}_{X^*, Z^*} = \{ r : D_{X^*} \times D_{Z^*} \mapsto \mathbb{R} \}$. The complexity of the reward function r is penalized by a convex regularization function ψ(r), i.e.,

$$\nu^* = \min_{\pi \sim \mathcal{S}} \max_{r \in \mathcal{R}_{X^*, Z^*}} \sum_{x^*, z^*} r(x^*, z^*) \big( \rho(x^*, z^*) - \rho_\pi(x^*, z^*) \big) - \psi(r). \tag{5}$$

Henceforth, we will consistently refer to Eq. (5) as the penalized canonical equation of causal IRL. It is often preferable to solve its conjugate form. Formally,

Proposition 2. For a hypothesis class $\mathcal{R} = \{ r : D_{X^*} \times D_{Z^*} \mapsto \mathbb{R} \}$ regularized by ψ, the solution ν∗ of the penalized canonical equation in Eq. (5) is obtainable by solving the following problem:

$$\nu^* = \min_{\pi \sim \mathcal{S}} \psi^* (\rho - \rho_\pi) \tag{6}$$

where ψ∗ is the convex conjugate of ψ, given by $\psi^*(a) = \max_{r \in \mathcal{R}_{X^*, Z^*}} a^\top r - \psi(r)$.

Eq. (6) seeks a policy π which minimizes the divergence between the occupancy measures of the imitator and the expert, as measured by the function ψ∗. The computational framework of generative adversarial networks (Goodfellow et al., 2014) provides an effective approach to solving such a matching problem, e.g., the GAIL algorithm (Ho & Ermon, 2016).

3 CAUSAL IMITATION WITHOUT SEQUENTIAL BACKDOOR

In this section, we investigate causal IRL beyond the condition of the minimal sequential π-backdoor. Observe that the key to the reduction to the canonical causal IRL equation in Eq. (3) lies in the identification of the expected reward E[Y | do(π)] had the latent reward Y been observed. We will next study general conditions under which E[Y | do(π)] is uniquely discernible from the distribution P(O, Y) in the causal diagram G, called the identifiability of causal effects (Pearl, 2000, Def. 3.2.4).

Definition 5 (Identifiability). Given a causal diagram G and a policy π ∼ S, the expected reward E[Y | do(π)] is said to be identifiable from the distribution P(O, Y) in G if E[Y | do(π)] is uniquely computable from P(O, Y) in any SCM M compatible with G. We say a policy scope S is identifiable (from P(O, Y) in G) if for all policies π ∼ S, the corresponding expected rewards E[Y | do(π)] are identifiable from P(O, Y) in G.

Our next result shows that whenever an identifiable policy scope S is found, one could always reduce the causal IRL problem to the canonical optimization equation in Eq. (3).

Theorem 2. Given a causal diagram G, a policy scope S is identifiable from P(O, Y) in G if and only if for any policy π ∼ S, the expected reward E[Y | do(π)] is computable from P(O, Y) as

$$E[Y \mid do(\pi)] = \sum_{x^*, z^*} E[Y \mid x^*, z^*] \, \rho_\pi(x^*, z^*) \tag{7}$$

where the subsets X∗ ⊆ X and Z∗ ⊆ O \ X, and the imitator's occupancy measure ρπ(x∗, z∗) is a function of the observational distribution P(O) and the policy π.

Thm. 2 suggests a general procedure to learn an imitating policy via causal IRL. Whenever an identifiable scope S is found, the identification formula in Eq. (7) permits one to reduce the optimization problem in Eq. (1) to the canonical equation in Eq. (3). One could thus obtain an imitating policy π ∼ S by solving Eq. (3), where the expert's occupancy measure ρ(x∗, z∗) = P(x∗, z∗) and the imitator's occupancy measure ρπ(x∗, z∗) is given by Eq. (7). As an example, consider the frontdoor diagram described in Fig. 2a and a policy scope S = {⟨X, ∅⟩}. The expected reward $E[Y \mid do(\pi)] = \sum_{x'} E[Y \mid do(x')] \, \pi(x')$, and E[Y | do(x′)] is identifiable from P(X, Y, Z) using the frontdoor adjustment formula (Pearl, 2000, Thm. 3.3.4).
The expected reward E[Y | do(π)] of any policy π(X) can thus be written as

$$E[Y \mid do(\pi)] = \sum_{z, x} E[Y \mid x, z] \, P(x) \sum_{x'} P(z \mid x') \, \pi(x'). \tag{8}$$

Let the occupancy measures be ρ(x, z) = P(x, z) and $\rho_\pi(x, z) = P(x) \sum_{x'} P(z \mid x') \, \pi(x')$. We could thus learn an imitating policy in the frontdoor diagram by solving the canonical equation given by

$$\nu^* = \min_{\pi \sim \mathcal{S}} \max_{r \in \mathcal{R}} \sum_{x, z} r(x, z) \big( \rho(x, z) - \rho_\pi(x, z) \big), \tag{9}$$

where R is a hypothesis class for the reward function r(x, z) ≜ E[Y | x, z]. The solution π∗(X) is an imitating policy performing at least as well as the expert's behavior policy if the gap ν∗ ≤ 0 (a numerical sketch of this reduction is given below).

Next, we describe how to obtain the identification formula in Eq. (7) provided with an identifiable scope S. Without loss of generality, we will assume that the reward Y is the only endogenous variable that is latent in the causal diagram G, i.e., V = O ∪ {Y}.∗ We will utilize a special type of clustering of nodes in the causal diagram G, called the confounded component (for short, c-component).

Definition 6 (C-component (Tian & Pearl, 2002)). For a causal diagram G, a subset C ⊆ V is a c-component if any pair Vi, Vj ∈ C is connected by a bi-directed path in G.

For instance, the frontdoor diagram in Fig. 2a contains two c-components, C1 = {X, Y} and C2 = {Z}. We will utilize a sound and complete procedure, IDENTIFY (Tian, 2002; 2008), for identifying the causal effect E[Y | do(π)] of an arbitrary policy π ∼ S. In particular, IDENTIFY takes as input the causal diagram G, a reward Y, and a policy scope S. It returns an identification formula for E[Y | do(π)] from P(O, Y) if the expected rewards of all policies π ∼ S are identifiable; otherwise, IDENTIFY(G, Y, S) = "FAIL". Details of IDENTIFY are shown in (Zhang et al., 2020, Appendix B). Recall that GS is the causal diagram of the submodel Mπ induced by a policy π ∼ S. Fig. 2b shows the diagram GS obtained from the frontdoor graph G and the scope S = {⟨X, ∅⟩} described in Fig. 2a. Let $Z_Y = An(Y)_{\mathcal{G}_{\mathcal{S}}}$ be the ancestors of Y in GS. Our next result shows that IDENTIFY(G, Y, S) is guaranteed to find an identification formula of the form in Eq. (7) whenever the scope is identifiable.

Lemma 1. Given a causal diagram G, a policy scope S is identifiable from P(O, Y) in G if and only if IDENTIFY(G, Y, S) ≠ "FAIL". Moreover, IDENTIFY(G, Y, S) returns an identification formula of the form in Eq. (7) where X∗ = Pa(CY) ∩ X and Z∗ = Pa(CY) \ ({Y} ∪ X), and CY is the c-component containing the reward Y in the subgraph G[An(ZY)].

∗ Otherwise, one could always simplify the diagram G and project out the other latent variables L \ {Y} using the projection algorithm (Tian, 2002, Sec. 4.5), without affecting the identifiability of the target query E[Y | do(π)].

For example, for the frontdoor diagram G in Fig. 2a, the manipulated diagram GS with scope S = {⟨X, ∅⟩} is described in Fig. 2b. Since $Z_Y = An(Y)_{\mathcal{G}_{\mathcal{S}}} = \{X, Z, Y\}$, CY is thus given by {X, Y}. Lem. 1 implies that X∗ = Pa({X, Y}) ∩ {X} = {X} and Z∗ = Pa({X, Y}) \ {X, Y} = {Z}. Applying IDENTIFY(G, Y, {⟨X, ∅⟩}) returns the frontdoor adjustment formula in Eq. (8).
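To illustrate the reduction concretely, the following sketch (ours; the joint distribution below is an invented parametrization of the frontdoor model) computes the two occupancy measures of Eqs. (8)–(9) from a table P(X, Z) and evaluates the inner payoff of Eq. (9) for a candidate policy and a fixed reward hypothesis:

```python
import numpy as np

# Hypothetical observational distribution P(X, Z) for the frontdoor model
# (binary X, Z; rows index x, columns index z). Invented for illustration.
P_xz = np.array([[0.30, 0.20],
                 [0.15, 0.35]])
P_x = P_xz.sum(axis=1)                      # P(x)
P_z_given_x = P_xz / P_x[:, None]           # P(z | x)

pi = np.array([0.25, 0.75])                 # candidate policy pi(x')

# Expert's occupancy: rho(x, z) = P(x, z).
rho = P_xz
# Imitator's occupancy per Eq. (8): rho_pi(x, z) = P(x) * sum_x' P(z | x') pi(x').
mix_z = pi @ P_z_given_x                    # sum_x' pi(x') P(z | x')
rho_pi = P_x[:, None] * mix_z[None, :]

# Inner payoff of Eq. (9) for a fixed reward hypothesis r(x, z).
r = np.array([[0.1, 0.9],
              [0.8, 0.2]])                  # a hypothetical E[Y | x, z]
print("payoff:", float(np.sum(r * (rho - rho_pi))))
```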
3.1 SEARCHING FOR IDENTIFIABLE POLICY SCOPES

The remainder of this section describes an effective algorithm for finding identifiable policy scopes S had the latent reward signal Y been observed. Let S denote the collection of all policy scopes S identifiable from the distribution P(O, Y) in the causal diagram G. Our algorithm, LISTIDSCOPE, described in Alg. 1, enumerates the elements of S. It takes as input a causal diagram G, a reward signal Y, and subsets L = ∅ and $R = \bigcup_{i=1}^{n} PA^*_i$. More specifically, LISTIDSCOPE maintains two scopes Sl ⊆ Sr (Step 2). It performs a backtracking search to find identifiable scopes S in G such that Sl ⊆ S ⊆ Sr. It aborts a branch when either (1) all subscopes of Sr are identifiable (Step 3), or (2) all scopes containing Sl are non-identifiable (Step 6). The following result supports this aborting criterion.

Lemma 2. Given a causal diagram G, for policy scopes S′ ⊆ S, S′ is identifiable from the distribution P(O, Y) in G if S is identifiable from P(O, Y) in G.

Algorithm 1: LISTIDSCOPE
1: Input: G, Y and subsets L ⊆ R
   Output: a set of identifiable policy scopes S
2: Let scopes $\mathcal{S}_r = \{\langle X_i, R \cap PA^*_i \rangle\}_{i=1}^{n}$ and $\mathcal{S}_l = \{\langle X_i, L \cap PA^*_i \rangle\}_{i=1}^{n}$.
3: if IDENTIFY(G, Y, Sr) ≠ "FAIL" then
4:     Output Sr.
5: end if
6: if IDENTIFY(G, Y, Sl) ≠ "FAIL" then
7:     Pick an arbitrary V ∈ R \ L.
8:     LISTIDSCOPE(G, Y, L ∪ {V}, R).
9:     LISTIDSCOPE(G, Y, L, R \ {V}).
10: end if

At Step 7, LISTIDSCOPE picks an arbitrary variable V that is included in the input covariates R but not in L. It then recursively returns all identifiable policy scopes S in G: the first recursive call returns scopes taking V as an input for some actions Xi ∈ X, and the second call returns all scopes that do not consider V when selecting values for any of the actions X. We say a policy π is associated with a collection of policy scopes S, denoted by π ∼ S, if there exists S ∈ S such that π ∼ S. It is possible to show that LISTIDSCOPE produces a collection of identifiable scopes that is sufficient for the imitation task.

Theorem 3. For a causal diagram G and a reward Y, LISTIDSCOPE(G, Y, ∅, $\bigcup_{i=1}^{n} PA^*_i$) enumerates a subset S∗ ⊆ S such that for any π ∼ S, there is a π∗ ∼ S∗ where E[Y | do(π)] = E[Y | do(π∗)]. Moreover, LISTIDSCOPE outputs identifiable policy scopes with polynomial delay.

This follows from the observation that LISTIDSCOPE searches over a tree of policy scopes of height at most $|\bigcup_{i=1}^{n} PA^*_i|$ and that IDENTIFY(G, Y, S) terminates in polynomially many steps w.r.t. the size of the diagram G.
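For readers who prefer code, here is a direct transcription of Alg. 1 into Python (a sketch, not the paper's implementation; `identify` stands in for the IDENTIFY oracle of Lemma 1 and must be supplied, e.g., via Tian's algorithm):

```python
from typing import Callable, FrozenSet, List, Tuple

Scope = Tuple[Tuple[str, FrozenSet[str]], ...]   # ((X_i, Z_i), ...) in action order

def list_id_scope(actions: List[str],
                  pa_star: dict,                        # X_i -> candidate covariates PA*_i
                  identify: Callable[[Scope], bool],    # IDENTIFY oracle (assumed given)
                  L: FrozenSet[str],
                  R: FrozenSet[str],
                  out: List[Scope]) -> None:
    """Backtracking enumeration of identifiable policy scopes (Alg. 1)."""
    S_r = tuple((x, frozenset(R & pa_star[x])) for x in actions)
    S_l = tuple((x, frozenset(L & pa_star[x])) for x in actions)
    if identify(S_r):
        out.append(S_r)
        return           # abort: all subscopes of S_r are identifiable too (Lem. 2)
    if identify(S_l):    # otherwise no scope containing S_l can be identifiable
        rest = R - L
        if not rest:     # base case implicit in the pseudocode: nothing left to branch on
            return
        v = next(iter(rest))                                       # arbitrary V in R \ L
        list_id_scope(actions, pa_star, identify, L | {v}, R, out)  # use V as an input
        list_id_scope(actions, pa_star, identify, L, R - {v}, out)  # discard V
```

Called as `list_id_scope(actions, pa_star, identify, frozenset(), frozenset().union(*pa_star.values()), out=[])`, and with a polynomial-time `identify`, the delay between successive outputs is polynomial, matching Thm. 3.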
4 EXPERIMENTS

In this section, we demonstrate our framework on various imitation learning tasks, ranging from synthetic causal models to real-world datasets, including highway driving (Krajewski et al., 2018) and images (LeCun, 1998). We find that our approach is able to incorporate parametric knowledge about the reward function and achieve effective imitating policies across different causal diagrams. For all experiments, we evaluate our proposed Causal-IRL based on the canonical equation formulation in Eq. (3). As baselines, we also include: (1) standard BC, mimicking the expert's nominal behavior policy; (2) standard IRL, utilizing all observed covariates preceding every Xi ∈ X while being blind to the causal relationships in the underlying model; and (3) Causal-BC (Zhang et al., 2020; Kumor et al., 2021), which learns an imitating policy with the sequential π-backdoor criterion. We refer readers to (Ruan et al., 2023, Appendix D) for additional experiments and more details on the experimental setup.

Backdoor. Consider an SCM instance compatible with Fig. 1c including binary observed variables Z1, X1, Z2, X2, Y ∈ {0, 1}. Causal-BC utilizes the sequential π-backdoor admissible scope {⟨X1, {Z1}⟩, ⟨X2, {Z2}⟩}, while Causal-IRL utilizes the scope {⟨X1, ∅⟩, ⟨X2, {Z2}⟩} satisfying the minimal sequential π-backdoor. Simulation results, shown in Fig. 3a, reveal that Causal-IRL consistently outperforms the expert's policy and the other imitation strategies by exploiting additional parametric knowledge about the expected reward E[Y | X1, X2, Z2]; Causal-BC is able to achieve the expert's performance. Unsurprisingly, neither BC nor IRL is able to obtain an imitating policy.

Highway Driving. We consider a learning scenario where the agent learns a driving policy from the observed trajectories of a human expert. The causal diagram of this example is provided in (Ruan et al., 2023, Appendix D, Fig. 4), where X1 is the acceleration of the ego vehicle at the previous step; Z1 is the longitudinal and lateral historical acceleration of the ego vehicle two steps ago; X2 is the velocity of the ego vehicle; Z2 is the velocity of the preceding vehicle; and W indicates the information from surrounding vehicles. Values of X1, X2, Z1, Z2 are drawn from the real-world driving dataset HighD (Krajewski et al., 2018). The reward Y is decided by a non-linear function fY(X2, Z2, UY). Both Causal-IRL and Causal-BC utilize the scope {⟨X1, ∅⟩, ⟨X2, {Z2}⟩}. Causal-IRL also exploits the additional knowledge that the expected reward E[Y | X1, X2, Z2] is a monotone function via reward augmentation (Li et al., 2017). Simulation results are shown in Fig. 3b. We found that Causal-IRL performs the best among all strategies and that Causal-BC is able to achieve the expert's performance. BC and IRL perform the worst among all and fail to obtain an imitating policy.

MNIST Digits. Consider again the frontdoor diagram in Fig. 2a. To evaluate the performance of our proposed approach in high-dimensional domains, we now replace the variable Z with sampled images drawn from the MNIST digits dataset (LeCun, 1998). The reward Y is decided by a linear function taking Z and an unobserved confounder UX,Y as input. Causal-IRL formulates the imitation problem as a two-person zero-sum game through the frontdoor adjustment described in Eq. (9), which can be solved by the MW algorithm (Freund & Schapire, 1999; Syed & Schapire, 2008). As shown in Fig. 3c, simulation results reveal that Causal-IRL outperforms Causal-BC and BC, while IRL performs the worst among all the algorithms.

Infinite MDPUC. To demonstrate our proposed framework in the sequential decision-making setting with an infinite horizon, we consider a generalized Markov decision process incorporating unobserved confounders (Ruan & Di, 2022), called the MDPUC (Zhang & Bareinboim, 2022). This sequential model simulates real-world driving dynamics. By exploiting the Markov property over time steps, we are able to decompose the causal diagram over the infinite horizon into a collection of sub-graphs, one for each time step i = 1, 2, . . . . Fig. 1d shows the causal diagram spanning time steps i = 1, 2, 3. As a comparison, BC and IRL still utilize the stationary policy scope {⟨Xi, {Zi}⟩}. By applying Thm. 1 at each time step, we obtain a π-backdoor admissible policy scope {⟨Xi, {Zi, Xi−1, Zi−1}⟩} for Causal-IRL and Causal-BC. Simulation results are shown in Fig. 3d. One can see by inspection that Causal-IRL performs the best and achieves the expert's performance.

5 CONCLUSION

This paper investigates imitation learning via inverse reinforcement learning (IRL) in the semantic framework of structural causal models.
The goal is to find an effective imitating policy that performs at least as well as the expert's behavior policy from a combination of demonstration data, qualitative knowledge about the data-generating mechanisms represented as a causal diagram, and quantitative knowledge about the reward function. We provide a graphical criterion (Thm. 1), based on the sequential backdoor, which allows one to obtain an imitating policy by solving a canonical optimization equation of causal IRL. Such a canonical formulation addresses the challenge of the presence of unobserved confounders (UCs) and is solvable by leveraging standard IRL algorithms (Props. 1 and 2). Finally, we move beyond the backdoor criterion and show that the canonical equation is attainable whenever the expected rewards of policies are identifiable had the reward also been observed (Thms. 2 and 3).

ACKNOWLEDGEMENTS

This research was supported in part by the NSF, ONR, AFOSR, DoE, Amazon, JP Morgan, and The Alfred P. Sloan Foundation.

ETHICS STATEMENT

This paper investigates the theoretical framework of causal inverse RL from the natural trajectories of an expert demonstrator, even when the reward signal is unobserved. The input covariates used by the expert to determine the original values of the actions are unknown, introducing unobserved confounding bias in the demonstration data. Our framework may apply to various fields in reality, including autonomous vehicle development, industrial automation, and chronic disease management. A positive impact of this work is that we discuss the potential risk of training an IRL policy from demonstrations in the presence of unobserved confounding (UC). Our formulation of causal IRL is inherently robust against confounding bias. For example, solving the causal IRL problem in Eq. (1) requires the imitator to learn an effective policy that maximizes the reward in a worst-case causal model where the performance gap between the expert and the imitator is the largest possible. More broadly, automated decision systems using causal inference methods prioritize safety and robustness during their decision-making processes. Such requirements are increasingly essential since black-box AI systems are prevalent and our understanding of their potential implications is still limited.

REPRODUCIBILITY STATEMENT

The complete proofs of all theoretical results presented in this paper, including Thms. 1 and 2, are provided in (Ruan et al., 2023, Appendix B). Details on the implementation of the proposed algorithms are included in (Ruan et al., 2023, Appendix C). Finally, (Ruan et al., 2023, Appendix D) provides a detailed description of the experimental setup. Readers can find all appendices as part of the supplementary text after the "References" section. We provide references to all existing datasets used in the experiments, including HighD (Krajewski et al., 2018) and MNIST (LeCun, 1998). The other experiments are synthetic and do not introduce any new assets. Source code for all experiments and simulations is released in the complete technical report (Ruan et al., 2023).
1. What is the focus and contribution of the paper on causal imitation learning?
2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis?
3. What are the weaknesses of the paper regarding its clarity, presentation, and completeness?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or questions regarding the problem setting and its difference from partially observable MDP (POMDP)?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a framework for causal imitation learning. The problem assumes that the expert has access to latent features that are not directly observable to the imitator. The solution is based on identifying the minimal \pi-backdoor admissible scope from the causal diagram. Experiments on a few datasets are provided to validate the effectiveness of the proposed approach.

Strengths And Weaknesses
Strengths:
- The problem described in the paper is meaningful and challenging.
- Theoretical proofs for the theorems are provided.

Weaknesses:
- The paper lacks a clear problem formulation. The input/output and objectives of the problem are not provided.
- The paper uses many different variables, some of which are either undefined or reused for different purposes. The presentation makes it hard to follow the details of the paper.
- The paper seems to be incomplete. The only algorithm shows how to identify an identifiable policy scope; however, how to use this algorithm with IRL or GAIL is not provided.
- The experiments are not convincing. Why would the MNIST dataset be used, and how can a policy be learned from this dataset?
- How is the problem setting in this paper different from a partially observable MDP (POMDP)? More discussion is needed.

Clarity, Quality, Novelty And Reproducibility
The proposed work has its value, especially for imitation learning. However, the paper lacks a clear problem formulation. The presentation, in particular the use of variables, is very confusing and hard to follow. The solution part does not appear to be complete. A number of concepts are not clearly defined. For example, the variables X and x are used to refer to different concepts in Section 1.1 and Section 2. The concept of "intervention" is not clearly defined and is denoted using different notations (e.g., X and do(\pi)).
1a, describing trajectories of humandriven cars collected by drones flying over highways (Krajewski et al., 2018; Etesami & Geiger, 2020). Using such data, we want to learn a policy X ← π(Z) deciding on the acceleration (action) X ∈ ∗ Equal contribution. {0, 1} of the demonstrator car based on velocities and locations Z of surrounding cars. The driving performance is measured by a latent reward signal Y . Consider an instance where Y ← (1−X)Z + X(1−Z) and values of Z are drawn uniformly over {0, 1}. A human expert generates demonstrations following a behavior policy such that P (X = 1 | Z = 0) = 0.6 and P (X = 0 | Z = 1) = 0.4. Evaluating the expert’s performance gives E[Y ] = P (X = 1, Z = 0) + P (X = 0, Z = 1) = 0.5. Now we apply standard IRL algorithms to learn a policy X ← π(Z) so that the imitator’s driving performance, denoted by E[Y | do(π)], is at least as good as the expert’s performance E[Y ]. Detailed derivations of IRL policy are shown in (Ruan et al., 2023, Appendix A). Note that E[Y |z, x] = x+ z − 2xz belongs to a family of reward functions fY (x, z) = αx+ βz − γxz, where 0 < α < γ. A typical IRL imitator solves a minimax problem minπ maxfY E [fY (X,Z)]−E [fY (X,Z) | do(π)]. The inner step “guesses” a reward function being optimized by the expert; while the outer step learns a policy maximizing the learned reward function. Applying these steps leads to a policy π∗ : X ← ¬Z with the expected reward E[Y | do(π∗)] = 1, which outperforms the sub-optimal expert. Despite the performance guarantees provided by existing imitation methods, both BC and IRL rely on the assumption that the expert’s input observations match those available to the imitator. More recently, there exists an emerging line of research under the rubric of causal imitation learning that augments the imitation paradigm to account for environments consisting of arbitrary causal mechanisms and the aforementioned mismatch between expert and imitator’s sensory capabilities (de Haan et al., 2019; Zhang et al., 2020; Etesami & Geiger, 2020; Kumor et al., 2021). Closest to our work, Zhang et al. (2020); Kumor et al. (2021) derived graphical criteria that completely characterize when and how BC could lead to successful imitation even when the agents perceive reality differently. Still, it is unclear how to perform IRL-type training if some expert’s observed states remain latent to the imitator, which leads to the presence of unobserved confounding (UCs) in expert’s demonstrations. Perhaps surprisingly, naively applying IRL methods when UCs are present does not necessarily lead to satisfactory performance, even when the expert itself behaves optimally. To witness, we now modify the previous highway driving scenario to demonstrate the challenges of UCs. In reality, covariates Z (i.e., velocities and location) are also affected by the car horn U1 of surrounding vehicles and the wind condition U2. However, due to the different perspectives of drones (recording from the top), such critical information (i.e, U1, U2 ) is not recorded by the camera and thus remains unobserved. Fig. 1b graphically describes this modified learning setting. More specifically, consider an instance where Z ← U1 ⊕ U2, Y ← ¬X ⊕ Z ⊕ U2; ⊕ is the exclusive-or operator; and values of U1 and U2 are drawn uniformly over {0, 1}. An expert driver, being able to hear the car horn U1, follows a behavior policy X ← U1 and achieves the optimal performance E[Y ] = 1. 
Meanwhile, observe that E[Y |z, x] = 1 belongs to a family of reward functions fY (x, z) = α (where α > 0). Solving minπ maxfY E [fY (X,Z)]− E [fY (X,Z) | do(π)] leads to an IRL policy π∗ with expected reward E[Y |do(π∗)] = 0.5, which is far from the expert’s optimal performance E[Y ] = 1. After all, a question that naturally arises is, under what conditions an IRL imitator procedure can perform well when UCs are present, and there is a mismatch between the perception of the two agents? In this paper, we answer this question and, more broadly, investigate the challenge of performing IRL through causal lenses. In particular, our contributions are summarized as follows. (1) We provide a novel, causal formulation of the inverse reinforcement learning problem. This formulation allows one to formally study and understand the conditions under which an IRL policy is learnable, including in settings where UCs cannot be ruled out a priori. (2) We derive a new graphical condition for deciding whether an imitating policy can be computed from the available data and knowledge, which provides a robust generalization of current IRL algorithms to non-Markovian settings, including GAIL (Ho & Ermon, 2016) and MWAL (Syed & Schapire, 2008). (3) Finally, we move beyond this graphical condition and develop an effective IRL algorithm for structural causal models (Pearl, 2000) with arbitrary causal relationships. Due to the space constraints, all proofs are provided in (Ruan et al., 2023, Appendix B). For a more detailed survey on imitation learning and causal inference, we refer readers to (Ruan et al., 2023, Appendix E). 1.1 PRELIMINARIES We use capital letters to denote random variables (X) and small letters for their values (x). DX represents the domain of X and PX the space of probability distributions over DX . For a set X , let |X| denote its dimension. The probability distribution over variables X is denoted by P (X). Similarly, P (Y |X) represents a set of conditional distributions P (Y |X = x) for all realizations x. We use abbreviations P (x) for probabilities P (X = x); so does P (Y = y |X = x) = P (y | x). Finally, indicator function 1{Z = z} returns 1 if Z = z holds true; otherwise 0. The basic semantic framework of our analysis rests on structural causal models (SCMs) (Pearl, 2000, Ch. 7). An SCM M is a tuple ⟨U ,V ,F , P (U)⟩ with V the set of endogenous, and U exogenous variables. F is a set of structural functions s.t. for fV ∈ F , V ← fV (paV ,uV ), with PAV ⊆ V ,UV ⊆ U . Values of U are drawn from an exogenous distribution P (U), inducing distribution P (V ) over endogenous variables V . Since the learner can observe only a subset of endogenous variables, we split V into a partition O ∪L where variable O ⊆ V are observed and L = V \O remain latent to the leaner. The marginal distribution P (O) is thus referred to as the observational distribution. An atomic intervention on a subset X ⊆ V , denoted by do(x), is an operation where values of X are set to constants x, replacing the functions fX = {fX : ∀X ∈X} that would normally determine their values. For an SCM M , let Mx be a submodel of M induced by intervention do(x). For a set Y ⊆ V , the interventional distribution P (s|do(x)) induced by do(x) is defined as the distribution over Y in the submodel Mx, i.e., PM (Y |do(x)) ≜ PMx(Y ). We leave M implicit when it is obvious from the context. Each SCM M is associated with a causal diagram G which is a directed acyclic graph where (e.g., see Fig. 
1) solid nodes represent observed variables O, dashed nodes represent latent variables L, and arrows represent the arguments PAV of each function fV ∈ F . Exogenous variables U are not explicitly shown; a bi-directed arrow between nodes Vi and Vj indicates the presence of an unobserved confounder (UC) affecting both Vi and Vj . We will use family abbreviations to represent graphical relationships such as parents, children, descendants, and ancestors. For example, the set of parent nodes of X in G is denoted by pa(X)G = ∪X∈Xpa(X)G ; ch , de and an are similarly defined. Capitalized versions Pa,Ch,De,An include the argument as well, e.g. Pa(X)G = pa(X)G ∪X . For a subset X ⊆ V , the subgraph obtained from G with edges outgoing from X / incoming into X removed is written as GX /GX respectively. G[X] is a subgraph of G containing only nodes X and edges among them. A path from a node X to a node Y in G is a sequence of edges, which does not include a particular node more than once. Two sets of nodes X,Y are said to be d-separated by a third set Z in a DAG G, denoted by (X ⊥ Y |Z)G , if every edge path from nodes in X to nodes in Y is “blocked” by nodes in Z. The criterion of blockage follows (Pearl, 2000, Def. 1.2.3). For a more detailed survey on SCMs, we refer readers to (Pearl, 2000; Bareinboim et al., 2022). 2 CAUSAL INVERSE REINFORCEMENT LEARNING We investigate the sequential decision-making setting concerning a set of actions X , a series of covariates Z, and a latent reward Y in an SCM M . An expert (e.g., a physician, driver), operating in SCM M , selects actions following a behavior policy, which is the collection of structural functions fX = {fX | X ∈ X}. The expert’s performance is evaluated as the expected reward E[Y ]. On the other hand, a learning agent (i.e., the imitator) intervenes on actions X following an ordering X1 ≺ · · · ≺ Xn; each action Xi is associated with a set of features PA∗i ⊆ O \ {Xi}. A policy π over actions X is a sequence of decision rules π = {π1, . . . , πn}. Each decision rule πi(Xi | Zi) is a probability distribution over an action Xi ∈ X , conditioning on values of a set of covariates Zi ⊆ PA∗i . Such policies π are also referred to as dynamic treatment regimes (Murphy et al., 2001; Chakraborty & Murphy, 2014), which generalize personalized medicine to time-varying treatment settings in healthcare, in which treatment is repeatedly tailored to a patient’s dynamic state. A policy intervention on actions X following a policy π, denoted by do(π), entails a submodel Mπ from a SCM M where structural functions fX associated with X (i.e., the expert’s behavior policy) are replaced with decision rules Xi ∼ πi(Xi | Zi) for every Xi ∈X . A critical assumption throughout this paper is that submodel Mπ does not contain any cycles. Similarly, the interventional distribution P (V | do(π)) induced by policy π is defined as the joint distribution over V in Mπ . Throughout this paper, detailed parametrizations of the underlying SCM M are assumed to be unknown to the agent. Instead, the agent has access to the input: (1) a causal diagram G associated with M , and (2) the expert’s demonstrations, summarized as the observational distribution P (O). The goal of the agent is to output an imitating policy π∗ that achieves the expert’s performance. Definition 1. For an SCM M = ⟨U ,V ,F , P (U)⟩, an imitating policy π∗ is a policy such that its expected reward is lower bounded by the expert’s reward, i.e., EM [Y | do(π∗)] ≥ EM [Y ]. 
In words, the right-hand side is the expert’s performance that the agent wants to achieve, while the left-hand side is the real reward experienced by the agent. The challenge in imitation learning arises from the fact that the reward Y is not specified and latent, i.e., Y ̸∈ O. This precludes approaches that identify E[Y |do(π)] directly from the demonstration data (e.g., through the do- or soft-do-calculus Pearl (2000); Correa & Bareinboim (2020)). There exist methods in the literature for finding an imitating policy in Def. 1. Before describing their details, we first introduce some necessary concepts. For any policy π, we summarize its associated state-action domain using a sequence of pairs of variables called a policy scope S. Definition 2 (Lee & Bareinboim (2020)). For an SCM M , a policy scope S (for short, scope) over actions X is a sequence of tuples {⟨Xi,Zi⟩}ni=1 where Zi ⊆ PA ∗ i for every Xi ∈X . We will consistently use π ∼ S to denote a policy π associated with scope S . For example, consider a policy scope S = {⟨X1, {Z1}⟩, ⟨X2, {Z2}⟩} over actions X1, X2 in Fig. 1c. A policy π ∼ S is a sequence of distributions π = {π1(X1 | Z1), π2(X2 | Z2)}. Zhang et al. (2020); Kumor et al. (2021) provide a graphical condition that is sufficient for learning an imitating policy via behavioral cloning (BC) provided with a causal diagram G. For a policy scope S = {⟨Xi,Zi⟩}ni=1, let G(i), i = 1, . . . , n, denote a manipulated graph obtained from G by the following steps: for all j = i+1, . . . , n, (1) remove arrows coming into every action Xj ; and (2) add direct arrows from nodes in Zj to Xj . Formally, the sequential π-backdoor criterion is defined as: Definition 3 (Kumor et al. (2021)). Given a causal diagram G, a policy scope S = {⟨Xi,Zi⟩}ni=1 is said to satisfy the sequential π-backdoor criterion in G (for short, π-backdoor admissible) if at each Xi ∈ X , one of the following conditions hold: (1) Xi is not an ancestor of Y in G(i), i.e., X ̸∈ An(Y )G(i) ; or (2) Zi blocks all backdoor path from Xi to Y in G(i), i.e., (Y ⊥ Xi|Zi) in G (i) Xi . (Kumor et al., 2021) showed that whenever a π-backdoor admissible scope S is available, one could learn an imitating policy π∗ ∼ S by setting π∗i (xi | zi) = P (xi | zi) for every action Xi ∈ X . For instance, consider the causal diagram G in Fig. 1c. Scope S = {⟨X1, {Z1}⟩, ⟨X2, {Z2}⟩} is π-backdoor admissible since (X1 ⊥ Y |Z1) and (X2 ⊥ Y |Z2) hold in G, which is a super graph containing both manipulated G(1) and G(2). An imitating policy π∗ = {π∗1 , π∗2} is thus obtainable by setting π∗1(X1 | Z1) = P (X1 | Z1) and π∗2(X2 | Z2) = P (X2 | Z2). While impressive, a caveat of their results is that the performance of the imitator is restricted by that of the expert, i.e., E[Y | do(π∗)] = E[Y ]. In other words, causal BC provides an efficient way to mimic the expert’s performance. If the expert’s behavior is far from optimal, the same will hold for the learning agent. 2.1 MINIMAL SEQUENTIAL BACKDOOR CRITERION To circumvent this issue, we take a somewhat different approach to causal imitation by incorporating the principle of inverse reinforcement learning (IRL) principle. Following the game-theoretic approach (Syed & Schapire, 2008), we formulate the problem as learning to play a two-player zero-sum game in which the agent chooses a policy, and the nature chooses an SCM instance. A key property of this algorithm is that it allows us to incorporate prior parametric knowledge about the latent reward signal. 
When such knowledge is informative, our algorithm is about to obtain a policy that could significantly outperform the expert with respect to the unknown causal environment, while at the same time are guaranteed to be no worse. Formally, let M = {∀M | GM = G, PM (O) = P (O)} denote the set of SCMs compatible with both the causal diagram G and the observational distribution P (O). Fix a policy scope S. Now consider the optimization problem defined as follows. ν∗ = min π∼S max M∈M EM [Y ]− EM [Y | do(π)]. (1) The inner maximization in the above equation can be viewed as an causal IRL step where we attempt to “guess” a worst-case SCM M̂ compatible with G and P (O) that prioritizes the expert’s policy. That is, the gap in the performance between the expert’s and the imitator’s policies is maximized. Meanwhile, since the expert’s reward EM [Y ] is not affected by the imitator’s policy π, the outer minimization is equivalent to a planning step that finds a policy π∗ optimizing the learned SCM M̂ . Obviously, the solution π∗ is an imitating policy if gap ν∗ = 0. In cases where the expert is sub-optimal, i.e., EM̂ [Y ] < EM̂ [Y | do(π)] for some policies π, we may have ν∗ < 0. That is, the policy π∗ will dominate the expert’s policy fX regardless of parametrizations of SCM M in the worst-case scenario. In other words, π∗ to some extent ignores the sub-optimal expert, and instead exploits prior knowledge about the underlying model. Despite the clear semantics in terms of causal models, the optimization problem in Eq. (1) requires the learner to search over all possible SCMs compatible with the causal diagram G and observational distribution P (O). In principle, it entails a quite challenging search since one does not have access to the parametric forms of the underlying structural functions F nor the exogenous distribution P (U). It is not clear how the existing optimization procedures can be used. In this paper, we will develop novel methods to circumvent this issue, thus leading to effective imitating policies. Our first algorithm relies on a refinement of the sequential π-backdoor, based on the concept of minimality. A subscope S ′ of a policy scope S = {⟨Xi,Zi⟩}ni=1, denoted by S ′ ⊆ S , is a sequence {⟨Xi,Z ′i⟩} n i=1 where Z ′ i ⊆ Zi for every Xi ∈ X . A proper subscope S ′ ⊂ S is a subscope in S other than S itself. The minimal π-backdoor admissible scope is defined as follows. Definition 4. Given a causal diagram G, a π-backdoor admissible scope S is said to be minimal if there exists no proper subscope S ′ ⊂ S satisfying the sequential π-backdoor in G. Theorem 1. Given a causal diagram G, if there exists a minimal π-backdoor admissible scope S = {⟨Xi,Zi⟩}ni=1 in G, consider the following conditions: 1. Let effective actions X∗ = X ∩An(Y )GS and effective covariates Z∗ = ⋃ Xi∈X∗ Zi; 2. For i = 1, . . . , n+ 1, let X∗<i = {∀Xj ∈X∗ | j < i} and Z∗<i = ⋃ Xj∈X∗<i Zj . Then, for any policy π ∼ S, the expected reward E[Y | do(π)] is computable from P (O, Y ) as: E[Y | do(π)] = ∑ x∗,z∗ E[Y | x∗, z∗]ρπ(x∗, z∗) (2) where the occupancy measure ρπ(x∗, z∗) = ∏ Xi∈X∗ P ( zi | x∗<i, z∗<i ) πi(xi | zi). To illustrate, consider again the causal diagram G in Fig. 1c; the manipulated diagram G(2) = G and G(1) is obtained from G by removing Z2 ↔ X2. While scope S1 = {⟨X1, {Z1}⟩, ⟨X2, {Z2}⟩} satisfies the sequential π-backdoor, it is not minimal since (X1 ⊥ Y ) in G(1)X1 . 
On the other hand, S2 = {⟨X1, ∅⟩, ⟨X2, {Z2}⟩} is minimal π-backdoor admissible since (X2 ⊥ Y | Z2) holds true in G(2)X2 ; and the covariate set {Z2} is minimal due to the presence of the backdoor path X2 ← Z2 → Y . Let us focus on the minimal π-backdoor admissible scope S2. Note that GS2 is a subgraph obtained from G by removing the bi-directed arrowZ2 ↔ X2. We must have effective actions X∗ = {X1, X2} and effective covariates Z∗ = {Z2}. Therefore, Z∗<1 = Z∗<2 = ∅ and Z∗<3 = {Z2}. For any policy π ∼ S2, Thm. 1 implies E[Y | do(π)] = ∑ x1,x2,z2 E[Y | x1, x2, z2]P (z2|x1)π2(x2|z2)π(x1). On the other hand, the same result in Thm. 1 does not necessarily hold for a non-minimal π-backdoor admissible scope. For instance, consider again the non-minimal scope S1 = {⟨X1, {Z1}⟩, ⟨X2, {Z2}⟩}. The expected reward E[Y | do(π)] of a policy π ∼ S2 is not computable from Eq. (2), and is ultimately not identifiable from distribution P (O, Y ) in G (Tian, 2008). 2.2 IMITATION VIA INVERSE REINFORCEMENT LEARNING Once a minimal π-backdoor admissible scope S is found, there exist effective procedures to solve for an imitating policy in Eq. (1). Let R be a hypothesis class containing all expected rewards EM [Y | x∗, z∗] compatible with candidate SCMs M ∈ M , i.e., R = {EM [Y | x∗, z∗] | ∀M ∈M }. Applying the identification formula in Thm. 1 reduces the optimization problem in Eq. (1) as follows: ν∗ = min π∼S max r∈R ∑ x∗,z∗ r(x∗, z∗) (ρ(x∗, z∗)− ρπ(x∗, z∗)) (3) where the expert’s occupancy measure ρ(x∗, z∗) = P (x∗, z∗) and the agent’s occupancy measure ρπ(x ∗, z∗) is given by Eq. (2). The above minimax problem is solvable using standard IRL algorithms. The identification result in Thm. 1 ensures that the learned policy applies to any SCM compatible with the causal diagram and the observational data, thus robust to the unobserved confounding bias in the expert’s demonstrations. Henceforth, we will consistently refer to Eq. (3) as the canonical equation of causal IRL. In this paper, we solve for an imitating policy π∗ in Eq. (3) using state-of-the-art IRL algorithms, provided with common choices of parametric reward functions. These algorithms include the multiplicative-weights algorithm (MWAL) (Syed & Schapire, 2008) and the generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016). We refer readers to Algs. 3 and 4 in (Ruan et al., 2023, Appendix C) for more discussions on the pseudo-code and implementation details. Causal MWAL (Abbeel & Ng, 2004; Syed & Schapire, 2008) study IRL in Markov decision processes where the reward function r(x∗, z∗) is a linear combination of k-length feature expectations vectors ϕ(x∗, z∗). Particularly, let r(x∗, z∗) = w · ϕ(x∗, z∗) for a coefficient vector w contained in a convex set Sk = { w ∈ Rk | ∥w∥1 = 1 and w ⪰ 0 } . Let ϕ(i) be the i-th component of feature vector ϕ and let deterministic policies with scope S be ordered by π(1), . . . ,π(n). The canonical equation in Eq. (3) is reducible to a two-person zero-sum matrix game under linearity. Proposition 1. For a hypothesis class R = {r = w · ϕ | w ∈ Sk}, the solution ν∗ of the canonical equation in Eq. (3) is obtainable by solving the following minimax problem: ν∗ = min π∼S max w∈Sk w⊤Gπ, (4) where G is a k × n matrix given by G(i, j) = ∑ x∗,z∗ ϕ (i)(x∗, z∗) (ρ(x∗, z∗)− ρπ(j)(x∗, z∗)). There exist effective multiplicative weights algorithms for solving the matrix game in Eq. (4), including MW (Freund & Schapire, 1999) and MWAL (Syed & Schapire, 2008). 
Causal GAIL. (Ho & Ermon, 2016) introduce the GAIL algorithm for learning an imitating policy in Markov decision processes with a general family of non-linear reward functions. In particular, $r(x^*, z^*)$ takes values in the real space $\mathbb{R}$, i.e., $r \in \mathcal{R}_{X^*, Z^*}$ where $\mathcal{R}_{X^*, Z^*} = \{r : \mathcal{D}_{X^*} \times \mathcal{D}_{Z^*} \mapsto \mathbb{R}\}$. The complexity of the reward function $r$ is penalized by a convex regularization function $\psi(r)$, i.e.,

$\nu^* = \min_{\pi \sim S} \max_{r \in \mathcal{R}_{X^*, Z^*}} \sum_{x^*, z^*} r(x^*, z^*)\, \big(\rho(x^*, z^*) - \rho_\pi(x^*, z^*)\big) - \psi(r)$  (5)

Henceforth, we will consistently refer to Eq. (5) as the penalized canonical equation of causal IRL. It is often preferable to solve its conjugate form. Formally,

Proposition 2. For a hypothesis class $\mathcal{R} = \{r : \mathcal{D}_{X^*} \times \mathcal{D}_{Z^*} \mapsto \mathbb{R}\}$ regularized by $\psi$, the solution $\nu^*$ of the penalized canonical equation in Eq. (5) is obtainable by solving the following problem:

$\nu^* = \min_{\pi \sim S} \psi^*(\rho - \rho_\pi)$  (6)

where $\psi^*$ is the conjugate function of $\psi$, given by $\psi^*(a) = \max_{r \in \mathcal{R}_{X^*, Z^*}} a^\top r - \psi(r)$.

Eq. (6) seeks a policy $\pi$ which minimizes the divergence between the occupancy measures of the imitator and the expert, as measured by the function $\psi^*$. The computational framework of generative adversarial networks (Goodfellow et al., 2014) provides an effective approach to solving such a matching problem, e.g., the GAIL algorithm (Ho & Ermon, 2016).
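As a complement, here is a minimal tabular sketch of the occupancy-matching view in Eq. (6). Instead of GAIL's neural-network discriminator, it uses the closed-form optimal discriminator available in the tabular case, under which the inner game value equals a Jensen-Shannon-style divergence between $\rho$ and $\rho_\pi$ up to constants; the distribution, softmax parametrization, and numerical gradient are illustrative simplifications, not the paper's implementation.

import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Hypothetical expert occupancy rho_E over (x1, z2, x2) and the closed-form
# imitator occupancy of Thm. 1, as in the previous sketch (all binary).
P = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)
P_x1 = P.sum(axis=(1, 2))
P_z2_x1 = P.sum(axis=2) / P_x1[:, None]
rho_E = P

def softmax(v):
    e = np.exp(v - v.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def rho_pi(theta):
    """theta = 2 logits for pi1(x1) followed by 2x2 logits for pi2(x2|z2)."""
    pi1, pi2 = softmax(theta[:2]), softmax(theta[2:].reshape(2, 2))
    rho = np.zeros((2, 2, 2))
    for x1, z2, x2 in product(range(2), repeat=3):
        rho[x1, z2, x2] = pi1[x1] * P_z2_x1[x1, z2] * pi2[z2, x2]
    return rho

def inner_value(theta, eps=1e-12):
    """GAN value with the tabular optimal discriminator D* = rho_E/(rho_E+rho):
    sum rho_E*log(D*) + rho*log(1-D*) = 2*JS(rho_E, rho_pi) - log 4."""
    rho = rho_pi(theta)
    D = rho_E / (rho_E + rho + eps)
    return np.sum(rho_E * np.log(D + eps)) + np.sum(rho * np.log(1 - D + eps))

# Gradient descent on the divergence; numerical gradients suffice for 6 params.
theta, lr, h = np.zeros(6), 0.5, 1e-5
for _ in range(300):
    grad = np.array([(inner_value(theta + h * e) - inner_value(theta - h * e)) / (2 * h)
                     for e in np.eye(6)])
    theta -= lr * grad

print("final value (>= -log 4 ~ -1.386):", inner_value(theta))
print("total variation between occupancies:",
      0.5 * np.abs(rho_E - rho_pi(theta)).sum())

In GAIL proper, the discriminator is a trained network and $\rho_\pi$ is estimated by rolling out the policy; the tabular shortcut above merely isolates the occupancy-matching principle.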
3 CAUSAL IMITATION WITHOUT SEQUENTIAL BACKDOOR

In this section, we investigate causal IRL beyond the condition of the minimal sequential $\pi$-backdoor. Observe that the key to the reduction of the canonical causal IRL equation in Eq. (3) lies in the identification of the expected rewards $\mathbb{E}[Y \mid do(\pi)]$ had the latent reward $Y$ been observed. Next we will study general conditions under which $\mathbb{E}[Y \mid do(\pi)]$ is uniquely discernible from the distribution $P(O, Y)$ in the causal diagram $G$, called the identifiability of causal effects (Pearl, 2000, Def. 3.2.4).

Definition 5 (Identifiability). Given a causal diagram $G$ and a policy $\pi \sim S$, the expected reward $\mathbb{E}[Y \mid do(\pi)]$ is said to be identifiable from the distribution $P(O, Y)$ in $G$ if $\mathbb{E}[Y \mid do(\pi)]$ is uniquely computable from $P(O, Y)$ in any SCM $M$ compatible with $G$. We say a policy scope $S$ is identifiable (from $P(O, Y)$ in $G$) if for all policies $\pi \sim S$, the corresponding expected rewards $\mathbb{E}[Y \mid do(\pi)]$ are identifiable from $P(O, Y)$ in $G$.

Our next result shows that whenever an identifiable policy scope $S$ is found, one could always reduce the causal IRL problem to the canonical optimization equation in Eq. (3).

Theorem 2. Given a causal diagram $G$, a policy scope $S$ is identifiable from $P(O, Y)$ in $G$ if and only if, for any policy $\pi \sim S$, the expected reward $\mathbb{E}[Y \mid do(\pi)]$ is computable from $P(O, Y)$ as

$\mathbb{E}[Y \mid do(\pi)] = \sum_{x^*, z^*} \mathbb{E}[Y \mid x^*, z^*]\, \rho_\pi(x^*, z^*)$  (7)

where the subsets $X^* \subseteq X$ and $Z^* \subseteq O \setminus X$; and the imitator’s occupancy measure $\rho_\pi(x^*, z^*)$ is a function of the observational distribution $P(O)$ and the policy $\pi$.

Thm. 2 suggests a general procedure to learn an imitating policy via causal IRL. Whenever an identifiable scope $S$ is found, the identification formula in Eq. (7) permits one to reduce the optimization problem in Eq. (1) to the canonical equation in Eq. (3). One could thus obtain an imitating policy $\pi \sim S$ by solving Eq. (3), where the expert’s occupancy measure $\rho(x^*, z^*) = P(x^*, z^*)$ and the imitator’s occupancy measure $\rho_\pi(x^*, z^*)$ is given by Eq. (7).

As an example, consider the frontdoor diagram described in Fig. 2a and a policy scope $S = \{\langle X, \emptyset\rangle\}$. The expected reward $\mathbb{E}[Y \mid do(\pi)] = \sum_{x'} \mathbb{E}[Y \mid do(x')]\, \pi(x')$, and $\mathbb{E}[Y \mid do(x')]$ is identifiable from $P(X, Y, Z)$ using the frontdoor adjustment formula (Pearl, 2000, Thm. 3.3.4). The expected reward $\mathbb{E}[Y \mid do(\pi)]$ of any policy $\pi(X)$ could thus be written as:

$\mathbb{E}[Y \mid do(\pi)] = \sum_{z, x} \mathbb{E}[Y \mid x, z]\, P(x) \sum_{x'} P(z \mid x')\, \pi(x')$.  (8)

Let the occupancy measures be $\rho(x, z) = P(x, z)$ and $\rho_\pi(x, z) = P(x) \sum_{x'} P(z \mid x')\, \pi(x')$. We could thus learn an imitating policy in the frontdoor diagram by solving the canonical equation given by:

$\nu^* = \min_{\pi \sim S} \max_{r \in \mathcal{R}} \sum_{x, z} r(x, z)\, \big(\rho(x, z) - \rho_\pi(x, z)\big)$,  (9)

where $\mathcal{R}$ is a hypothesis class of the reward function $r(x, z) \triangleq \mathbb{E}[Y \mid x, z]$. The solution $\pi^*(X)$ is an imitating policy performing at least as well as the expert’s behavior policy if the gap $\nu^* \leq 0$.
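The identification step is what makes these occupancy measures computable from purely observational data. The following minimal sketch checks Eq. (8) by Monte Carlo on one invented SCM consistent with the frontdoor diagram of Fig. 2a; the structural functions and noise rates are illustrative assumptions, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(2)
N = 500_000

# Hypothetical frontdoor SCM (Fig. 2a): U confounds X and Y; X -> Z -> Y.
U = rng.integers(0, 2, N)
X = (U ^ (rng.random(N) < 0.1)).astype(int)          # X <- U with 10% noise
Z = (X ^ (rng.random(N) < 0.2)).astype(int)          # Z <- X with 20% noise
Y = ((Z ^ U) | (rng.random(N) < 0.05)).astype(int)   # Y <- f(Z, U, noise)

pi = np.array([0.3, 0.7])                            # target policy pi(x')

# --- Eq. (8): identification from the observational data P(X, Z, Y) only ---
P_x = np.bincount(X, minlength=2) / N
P_z_given_x = np.array([[np.mean(Z[X == x] == z) for z in range(2)] for x in range(2)])
E_y = np.array([[np.mean(Y[(X == x) & (Z == z)]) for z in range(2)] for x in range(2)])

est = sum(E_y[x, z] * P_x[x] * sum(P_z_given_x[xp, z] * pi[xp] for xp in range(2))
          for x in range(2) for z in range(2))

# --- ground truth: simulate the submodel M_pi directly ---
Xp = (rng.random(N) < pi[1]).astype(int)
Zp = (Xp ^ (rng.random(N) < 0.2)).astype(int)
Yp = ((Zp ^ U) | (rng.random(N) < 0.05)).astype(int)

print("frontdoor estimate of E[Y | do(pi)]:", round(float(est), 4))
print("Monte Carlo E[Y | do(pi)]          :", round(float(Yp.mean()), 4))

The two printed values should agree up to sampling error, which is precisely the content of the frontdoor adjustment: the confounder U never has to be observed.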
Next, we will describe how to obtain the identification formula in Eq. (7) provided with an identifiable scope $S$. Without loss of generality, we will assume that the reward $Y$ is the only endogenous variable that is latent in the causal diagram $G$, i.e., $V = O \cup \{Y\}$. (Otherwise, one could always simplify the diagram $G$ and project the other latent variables $L \setminus \{Y\}$ using the projection algorithm (Tian, 2002, Sec. 4.5), without affecting the identifiability of the target query $\mathbb{E}[Y \mid do(\pi)]$.) We will utilize a special type of clustering of nodes in the causal diagram $G$, called the confounded component (for short, c-component).

Definition 6 (C-component (Tian & Pearl, 2002)). For a causal diagram $G$, a subset $C \subseteq V$ is a c-component if any pair $V_i, V_j \in C$ is connected by a bi-directed path in $G$.

For instance, the frontdoor diagram in Fig. 2a contains two c-components, $C_1 = \{X, Y\}$ and $C_2 = \{Z\}$. We will utilize a sound and complete procedure, IDENTIFY (Tian, 2002; 2008), for identifying the causal effect $\mathbb{E}[Y \mid do(\pi)]$ of an arbitrary policy $\pi \sim S$. In particular, IDENTIFY takes as input the causal diagram $G$, a reward $Y$, and a policy scope $S$. It returns an identification formula for $\mathbb{E}[Y \mid do(\pi)]$ from $P(O, Y)$ if the expected rewards of all policies $\pi \sim S$ are identifiable; otherwise, IDENTIFY$(G, Y, S)$ = “FAIL”. Details of IDENTIFY are shown in (Zhang et al., 2020, Appendix B).

Recall that $G_S$ is the causal diagram of the submodel $M_\pi$ induced by a policy $\pi \sim S$. Fig. 2b shows the diagram $G_S$ obtained from the frontdoor graph $G$ and the scope $S = \{\langle X, \emptyset\rangle\}$ described in Fig. 2a. Let $Z_Y = An(Y)$ be the ancestors of $Y$ in $G_S$. Our next result shows that IDENTIFY$(G, Y, S)$ is ensured to find an identification formula of the form in Eq. (7) whenever the scope is identifiable.

Lemma 1. Given a causal diagram $G$, a policy scope $S$ is identifiable from $P(O, Y)$ in $G$ if and only if IDENTIFY$(G, Y, S) \neq$ “FAIL”. Moreover, IDENTIFY$(G, Y, S)$ returns an identification formula of the form in Eq. (7) where $X^* = Pa(C_Y) \cap X$ and $Z^* = Pa(C_Y) \setminus (\{Y\} \cup X)$; and $C_Y$ is the c-component containing the reward $Y$ in the subgraph $G[An(Z_Y)]$.

For example, for the frontdoor diagram $G$ in Fig. 2a, the manipulated diagram $G_S$ with scope $S = \{\langle X, \emptyset\rangle\}$ is described in Fig. 2b. Since $Z_Y = An(Y)_{G_S} = \{X, Z, Y\}$, $C_Y$ is thus given by $\{X, Y\}$. Lem. 1 implies that $X^* = Pa(\{X, Y\}) \cap \{X\} = \{X\}$ and $Z^* = Pa(\{X, Y\}) \setminus \{X, Y\} = \{Z\}$. Applying IDENTIFY$(G, Y, \{\langle X, \emptyset\rangle\})$ returns the frontdoor adjustment formula in Eq. (8).

3.1 SEARCHING FOR IDENTIFIABLE POLICY SCOPES

The remainder of this section describes an effective algorithm to find identifiable policy scopes $S$ had the latent reward signal $Y$ been observed. Let $\mathbb{S}$ denote the collection of all policy scopes $S$ identifiable from the distribution $P(O, Y)$ in the causal diagram $G$. Our algorithm, LISTIDSCOPE, described in Alg. 1, enumerates the elements in $\mathbb{S}$. It takes as input a causal diagram $G$, a reward signal $Y$, and subsets $L = \emptyset$ and $R = \bigcup_{i=1}^{n} PA^*_i$. More specifically, LISTIDSCOPE maintains two scopes $S_l \subseteq S_r$ (Step 3). It performs a backtracking search to find identifiable scopes $S$ in $G$ such that $S_l \subseteq S \subseteq S_r$. It aborts branches in which either (1) all subscopes of $S_r$ are identifiable (Step 4), or (2) all scopes containing $S_l$ are non-identifiable (Step 7). The following lemma supports this aborting criterion.

Lemma 2. Given a causal diagram $G$, for policy scopes $S' \subseteq S$, $S'$ is identifiable from the distribution $P(O, Y)$ in $G$ if $S$ is identifiable from $P(O, Y)$ in $G$.

Algorithm 1: LISTIDSCOPE
1: Input: $G$, $Y$ and subsets $L \subseteq R$
2: Output: a set of identifiable policy scopes $\mathbb{S}$
3: Let scopes $S_r = \{\langle X_i, R \cap PA^*_i\rangle\}_{i=1}^{n}$ and $S_l = \{\langle X_i, L \cap PA^*_i\rangle\}_{i=1}^{n}$.
4: if IDENTIFY$(G, Y, S_r) \neq$ “FAIL” then
5:   Output $S_r$.
6: end if
7: if IDENTIFY$(G, Y, S_l) \neq$ “FAIL” then
8:   Pick an arbitrary $V \in R \setminus L$.
9:   LISTIDSCOPE$(G, Y, L \cup \{V\}, R)$.
10:  LISTIDSCOPE$(G, Y, L, R \setminus \{V\})$.
11: end if

At Step 8, LISTIDSCOPE picks an arbitrary variable $V$ that is included in the input covariates $R$ but not in $L$. It then recursively returns all identifiable policy scopes $S$ in $G$: the first recursive call returns scopes taking $V$ as an input for some actions $X_i \in X$, and the second call returns all scopes that do not consider $V$ when selecting values for any action in $X$. We say a policy $\pi$ is associated with a collection of policy scopes $\mathbb{S}$, denoted by $\pi \sim \mathbb{S}$, if there exists $S \in \mathbb{S}$ such that $\pi \sim S$. It is possible to show that LISTIDSCOPE produces a collection of identifiable scopes that is sufficient for the imitation task.

Theorem 3. For a causal diagram $G$ and a reward $Y$, LISTIDSCOPE$(G, Y, \emptyset, \bigcup_{i=1}^{n} PA^*_i)$ enumerates a subset $\mathbb{S}^* \subseteq \mathbb{S}$ such that for any $\pi \sim \mathbb{S}$, there is $\pi^* \sim \mathbb{S}^*$ with $\mathbb{E}[Y \mid do(\pi)] = \mathbb{E}[Y \mid do(\pi^*)]$. Moreover, LISTIDSCOPE outputs identifiable policy scopes with a polynomial delay. This follows from the observation that LISTIDSCOPE searches over a tree of policy scopes with height at most $|\bigcup_{i=1}^{n} PA^*_i|$, and IDENTIFY$(G, Y, S)$ terminates in a polynomial number of steps w.r.t. the size of the diagram $G$.
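A compact Python rendering of Alg. 1 follows. The IDENTIFY oracle is deliberately stubbed out as a user-supplied predicate, since implementing Tian's identification algorithm is beyond a sketch; the pruning logic is the one justified by Lem. 2, and the toy oracle at the end is purely hypothetical.

def list_id_scope(actions, pa_star, identify, L=frozenset(), R=None, out=None):
    """Backtracking enumeration of identifiable policy scopes (Alg. 1).

    actions : ordered list of action names X_1, ..., X_n
    pa_star : dict mapping each action X_i to its feature set PA*_i
    identify: predicate standing in for IDENTIFY(G, Y, S); True iff S is
              identifiable. Scopes are tuples of (action, frozenset) pairs.
    """
    if R is None:
        R = frozenset().union(*pa_star.values())
    if out is None:
        out = []
    S_r = tuple((X, R & pa_star[X]) for X in actions)
    S_l = tuple((X, L & pa_star[X]) for X in actions)
    if identify(S_r):
        # By Lem. 2 every subscope of S_r is identifiable too, so outputting
        # the maximal scope suffices and this branch can be aborted.
        out.append(S_r)
        return out
    if identify(S_l):
        # Otherwise branch on some covariate V: scopes that use V vs. not.
        V = next(iter(R - L))
        list_id_scope(actions, pa_star, identify, L | {V}, R, out)
        list_id_scope(actions, pa_star, identify, L, R - {V}, out)
    # If even S_l fails, no scope containing S_l is identifiable: abort.
    return out

# Toy usage with a stand-in oracle that rejects any scope conditioning on Z1:
scopes = list_id_scope(
    ["X1", "X2"],
    {"X1": frozenset({"Z1"}), "X2": frozenset({"Z1", "Z2"})},
    lambda S: all("Z1" not in Z for _, Z in S))
print(scopes)   # scopes using Z2 but never Z1; the exact list depends on branch order

Each recursive call either grows $L$ or shrinks $R$, so the recursion tree has height at most $|R|$, matching the polynomial-delay argument of Thm. 3.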
4 EXPERIMENTS

In this section, we demonstrate our framework on various imitation learning tasks, ranging from synthetic causal models to real-world datasets, including highway driving (Krajewski et al., 2018) and images (LeCun, 1998). We find that our approach is able to incorporate parametric knowledge about the reward function and achieve effective imitating policies across different causal diagrams. For all experiments, we evaluate our proposed Causal-IRL based on the canonical-equation formulation in Eq. (3). As baselines, we also include: (1) standard BC, mimicking the expert’s nominal behavior policy; (2) standard IRL, utilizing all observed covariates preceding every $X_i \in X$ while being blind to the causal relationships in the underlying model; and (3) Causal-BC (Zhang et al., 2020; Kumor et al., 2021), which learns an imitating policy with the sequential $\pi$-backdoor criterion. We refer readers to (Ruan et al., 2023, Appendix D) for additional experiments and more discussion of the experimental setup.

Backdoor. Consider an SCM instance compatible with Fig. 1c including binary observed variables $Z_1, X_1, Z_2, X_2, Y \in \{0, 1\}$. Causal-BC utilizes the sequential $\pi$-backdoor admissible scope $\{\langle X_1, \{Z_1\}\rangle, \langle X_2, \{Z_2\}\rangle\}$, while Causal-IRL utilizes the scope $\{\langle X_1, \emptyset\rangle, \langle X_2, \{Z_2\}\rangle\}$ satisfying the minimal sequential $\pi$-backdoor. Simulation results, shown in Fig. 3a, reveal that Causal-IRL consistently outperforms the expert’s policy and the other imitation strategies by exploiting additional parametric knowledge about the expected reward $\mathbb{E}[Y \mid X_1, X_2, Z_2]$; Causal-BC is able to achieve the expert’s performance. Unsurprisingly, neither BC nor IRL is able to obtain an imitating policy.

Highway Driving. We consider a learning scenario where the agent learns a driving policy from the observed trajectories of a human expert. The causal diagram of this example is provided in (Ruan et al., 2023, Appendix D, Fig. 4), where $X_1$ is the acceleration of the ego vehicle at the previous step; $Z_1$ is the longitudinal and lateral historical acceleration of the ego vehicle two steps ago; $X_2$ is the velocity of the ego vehicle; $Z_2$ is the velocity of the preceding vehicle; and $W$ represents the information from surrounding vehicles. Values of $X_1, X_2, Z_1, Z_2$ are drawn from the real-world driving dataset HighD (Krajewski et al., 2018). The reward $Y$ is decided by a non-linear function $f_Y(X_2, Z_2, U_Y)$. Both Causal-IRL and Causal-BC utilize the scope $\{\langle X_1, \emptyset\rangle, \langle X_2, \{Z_2\}\rangle\}$. Causal-IRL also exploits the additional knowledge that the expected reward $\mathbb{E}[Y \mid X_1, X_2, Z_2]$ is a monotone function, via reward augmentation (Li et al., 2017). Simulation results are shown in Fig. 3b. We find that Causal-IRL performs the best among all strategies. Causal-BC is able to achieve the expert’s performance. BC and IRL perform the worst of all and fail to obtain an imitating policy.

MNIST Digits. Consider again the frontdoor diagram in Fig. 2a. To evaluate the performance of our proposed approach in high-dimensional domains, we now replace the variable $Z$ with sampled images drawn from the MNIST digits dataset (LeCun, 1998). The reward $Y$ is decided by a linear function taking $Z$ and an unobserved confounder $U_{X,Y}$ as input. Causal-IRL formulates the imitation problem as a two-person zero-sum game through the frontdoor adjustment described in Eq. (9), which can be solved by the MW algorithm (Freund & Schapire, 1999; Syed & Schapire, 2008). As shown in Fig. 3c, simulation results reveal that Causal-IRL outperforms Causal-BC and BC, while IRL performs the worst among all the algorithms.

Infinite MDPUC. To demonstrate our proposed framework in the sequential decision-making setting with an infinite horizon, we consider a generalized Markov decision process incorporating unobserved confounders (Ruan & Di, 2022), called the MDPUC (Zhang & Bareinboim, 2022). This sequential model simulates real-world driving dynamics. By exploiting the Markov property over time steps, we are able to decompose the causal diagram over the infinite horizon into a collection of sub-graphs, one for each time step $i = 1, 2, \dots$. Fig. 1d shows the causal diagram spanning time steps $i = 1, 2, 3$. As a comparison, BC and IRL utilize the stationary policy $\{\langle X_i, \{Z_i\}\rangle\}$. By applying Thm. 1 at each time step, we obtain a $\pi$-backdoor admissible policy scope $\{\langle X_i, \{Z_i, X_{i-1}, Z_{i-1}\}\rangle\}$ for Causal-IRL and Causal-BC. Simulation results are shown in Fig. 3d. One could see by inspection that Causal-IRL performs the best and achieves the expert’s performance.
5 CONCLUSION

This paper investigates imitation learning via inverse reinforcement learning (IRL) in the semantic framework of structural causal models. The goal is to find an effective imitating policy that performs at least as well as the expert’s behavior policy from a combination of demonstration data, qualitative knowledge of the data-generating mechanisms represented as a causal diagram, and quantitative knowledge about the reward function. We provide a graphical criterion (Thm. 1), based on the sequential backdoor, which allows one to obtain an imitating policy by solving a canonical optimization equation of causal IRL. This canonical formulation addresses the challenge posed by the presence of unobserved confounders (UCs) and is solvable by leveraging standard IRL algorithms (Props. 1 and 2). Finally, we move beyond the backdoor criterion and show that the canonical equation is attainable whenever the expected rewards of policies are identifiable had the reward also been observed (Thms. 2 and 3).

ACKNOWLEDGEMENTS

This research was supported in part by the NSF, ONR, AFOSR, DoE, Amazon, JP Morgan, and The Alfred P. Sloan Foundation.

ETHICS STATEMENT

This paper investigates the theoretical framework of causal inverse RL from the natural trajectories of an expert demonstrator, even when the reward signal is unobserved. The input covariates used by the expert to determine the original values of the actions are unknown, introducing unobserved confounding bias in the demonstration data. Our framework may apply to various real-world fields, including autonomous vehicle development, industrial automation, and chronic disease management. A positive impact of this work is that we discuss the potential risk of training an IRL policy from demonstrations in the presence of unobserved confounding (UC). Our formulation of causal IRL is inherently robust against confounding bias. For example, solving the causal IRL problem in Eq. (1) requires the imitator to learn an effective policy that maximizes the reward in a worst-case causal model where the performance gap between the expert and the imitator is the largest possible. More broadly, automated decision systems using causal inference methods prioritize safety and robustness during their decision-making processes. Such requirements are increasingly essential since black-box AI systems are prevalent, and our understanding of their potential implications is still limited.

REPRODUCIBILITY STATEMENT

The complete proofs of all theoretical results presented in this paper, including Thms. 1 and 2, are provided in (Ruan et al., 2023, Appendix B). Details on the implementation of the proposed algorithms are included in (Ruan et al., 2023, Appendix C). Finally, (Ruan et al., 2023, Appendix D) provides a detailed description of the experimental setup. Readers can find all appendices as part of the supplementary text after the “References” section. We provide references to all existing datasets used in the experiments, including HighD (Krajewski et al., 2018) and MNIST (LeCun, 1998). The other experiments are synthetic and do not introduce any new assets. Source code for all experiments and simulations is released in the complete technical report (Ruan et al., 2023).
1. What is the focus and contribution of the paper on inverse reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of its ability to meet the performance of an expert demonstrator?
3. What are the weaknesses of the paper regarding the need for open-sourcing of code and experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The authors propose a new inverse reinforcement learning algorithm that's able to provably meet the performance of an expert demonstrator in the presence of uncontrolled confounders, when certain conditions are met (i.e., does the agent have the causal structure of the data-generating process right?).

Strengths And Weaknesses
Strengths: The submission does a fantastic job laying the groundwork for causal reinforcement learning, motivating its algorithms, theorems, and results within the formalisms of structural causal models. I also greatly appreciated the extensive FAQ in the appendix. The experiments clearly demonstrate the power of the method.

Weakness: While the exposition of the paper is exceptionally high quality, there does not appear to be a code repo associated with the submission. This method, and its uptake by the community, would greatly benefit from open sourcing of code / experiments.

Clarity, Quality, Novelty And Reproducibility
Clarity and quality: Both extremely high
Novelty: While these ideas have been "in the water" in the community for a while, this work certainly represents a novel synthesis, and provides the first algorithm that can convincingly exceed expert performance in scenarios with UCs.
Reproducibility: Probably possible by an extremely patient researcher, but the authors should strongly consider open sourcing.
ICLR
Title
Causal Imitation Learning via Inverse Reinforcement Learning

Abstract
One of the most common ways children learn when unfamiliar with the environment is by mimicking adults. Imitation learning concerns an imitator learning to behave in an unknown environment from an expert’s demonstrations; reward signals remain latent to the imitator. This paper studies imitation learning through causal lenses and extends the analysis and tools developed for behavior cloning (Zhang, Kumor, Bareinboim, 2020) to inverse reinforcement learning. First, we propose novel graphical conditions that allow the imitator to learn a policy performing as well as the expert’s behavior policy, even when the imitator’s and the expert’s state-action spaces disagree and unobserved confounders (UCs) are present. When provided with parametric knowledge about the unknown reward function, such a policy may outperform the expert’s. Also, our method is easily extensible and allows one to leverage existing IRL algorithms even when UCs are present, including the multiplicative-weights algorithm (MWAL) (Syed & Schapire, 2008) and generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016). Finally, we validate our framework by simulations using real-world and synthetic data.

1 INTRODUCTION

Reinforcement Learning (RL) has been deployed and shown to perform extremely well in highly complex environments in the past decades (Sutton & Barto, 1998; Mnih et al., 2013; Silver et al., 2016; Berner et al., 2019). One of the critical assumptions behind many of the classical RL algorithms is that the reward signal is fully observed and the reward function can be well-specified. In many real-world applications, however, it might be impractical to design a suitable reward function that evaluates each and every scenario (Randløv & Alstrøm, 1998; Ng et al., 1999). For example, in the context of human driving, it is challenging to design a precise reward function, and experimenting in the environment could be ill-advised; still, watching expert drivers operate is usually feasible. In machine learning, the imitation learning paradigm investigates the problem of how an agent should behave and learn in an environment with an unknown reward function by observing demonstrations from a human expert (Argall et al., 2009; Billard et al., 2008; Hussein et al., 2017; Osa et al., 2018). There are two major learning modalities that implement IL – behavioral cloning (BC) (Widrow, 1964; Pomerleau, 1989; Muller et al., 2006; Mülling et al., 2013; Mahler & Goldberg, 2017) and inverse reinforcement learning (IRL) (Ng et al., 2000; Ziebart et al., 2008; Ho & Ermon, 2016; Fu et al., 2017). BC methods directly mimic the expert’s behavior policy by learning a mapping from observed states to the expert’s actions via supervised learning. Alternatively, IRL methods first learn a potential reward function under which the expert’s behavior policy is optimal. The imitator then obtains a policy by employing standard RL methods to maximize the learned reward function. Under some common assumptions, both BC and IRL are able to obtain policies that achieve the expert’s performance (Kumor et al., 2021; Swamy et al., 2021). Moreover, when additional parametric knowledge about the reward function is provided, IRL may produce a policy that outperforms the expert’s in the underlying environment (Syed & Schapire, 2008; Li et al., 2017; Yu et al., 2020). For concreteness, consider the learning scenario depicted in Fig. 1a, which describes trajectories of human-driven cars collected by drones flying over highways (Krajewski et al., 2018; Etesami & Geiger, 2020).
Using such data, we want to learn a policy $X \leftarrow \pi(Z)$ deciding on the acceleration (action) $X \in \{0, 1\}$ of the demonstrator car based on the velocities and locations $Z$ of surrounding cars. The driving performance is measured by a latent reward signal $Y$. Consider an instance where $Y \leftarrow (1 - X)Z + X(1 - Z)$ and values of $Z$ are drawn uniformly over $\{0, 1\}$. A human expert generates demonstrations following a behavior policy such that $P(X = 1 \mid Z = 0) = 0.6$ and $P(X = 0 \mid Z = 1) = 0.4$. Evaluating the expert’s performance gives $\mathbb{E}[Y] = P(X = 1, Z = 0) + P(X = 0, Z = 1) = 0.5$. Now we apply standard IRL algorithms to learn a policy $X \leftarrow \pi(Z)$ so that the imitator’s driving performance, denoted by $\mathbb{E}[Y \mid do(\pi)]$, is at least as good as the expert’s performance $\mathbb{E}[Y]$. Detailed derivations of the IRL policy are shown in (Ruan et al., 2023, Appendix A). Note that $\mathbb{E}[Y \mid z, x] = x + z - 2xz$ belongs to a family of reward functions $f_Y(x, z) = \alpha x + \beta z - \gamma xz$, where $0 < \alpha < \gamma$. A typical IRL imitator solves the minimax problem $\min_{\pi} \max_{f_Y} \mathbb{E}[f_Y(X, Z)] - \mathbb{E}[f_Y(X, Z) \mid do(\pi)]$. The inner step “guesses” a reward function being optimized by the expert, while the outer step learns a policy maximizing the learned reward function. Applying these steps leads to a policy $\pi^* : X \leftarrow \neg Z$ with expected reward $\mathbb{E}[Y \mid do(\pi^*)] = 1$, which outperforms the sub-optimal expert.

Despite the performance guarantees provided by existing imitation methods, both BC and IRL rely on the assumption that the expert’s input observations match those available to the imitator. More recently, an emerging line of research under the rubric of causal imitation learning augments the imitation paradigm to account for environments consisting of arbitrary causal mechanisms and the aforementioned mismatch between the expert’s and the imitator’s sensory capabilities (de Haan et al., 2019; Zhang et al., 2020; Etesami & Geiger, 2020; Kumor et al., 2021). Closest to our work, Zhang et al. (2020) and Kumor et al. (2021) derived graphical criteria that completely characterize when and how BC can lead to successful imitation even when the agents perceive reality differently. Still, it is unclear how to perform IRL-type training if some of the expert’s observed states remain latent to the imitator, which leads to the presence of unobserved confounders (UCs) in the expert’s demonstrations. Perhaps surprisingly, naively applying IRL methods when UCs are present does not necessarily lead to satisfactory performance, even when the expert itself behaves optimally.

To witness, we now modify the previous highway driving scenario to demonstrate the challenges of UCs. In reality, the covariates $Z$ (i.e., velocities and locations) are also affected by the car horns $U_1$ of surrounding vehicles and the wind condition $U_2$. However, due to the perspective of the drones (recording from the top), such critical information (i.e., $U_1, U_2$) is not recorded by the camera and thus remains unobserved. Fig. 1b graphically describes this modified learning setting. More specifically, consider an instance where $Z \leftarrow U_1 \oplus U_2$ and $Y \leftarrow \neg X \oplus Z \oplus U_2$; $\oplus$ is the exclusive-or operator; and values of $U_1$ and $U_2$ are drawn uniformly over $\{0, 1\}$. An expert driver, being able to hear the car horn $U_1$, follows the behavior policy $X \leftarrow U_1$ and achieves the optimal performance $\mathbb{E}[Y] = 1$; see the simulation sketch below.
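The claims in this example are easy to check numerically. The following minimal Python sketch, an illustration rather than the paper's code, simulates the modified SCM and confirms that the expert attains $\mathbb{E}[Y] = 1$ while any policy that sees only $Z$ hovers at 0.5.

import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
U1 = rng.integers(0, 2, N)           # car horn (heard by the expert only)
U2 = rng.integers(0, 2, N)           # wind condition
Z = U1 ^ U2                          # Z <- U1 xor U2

def reward(X):
    return (1 - X) ^ Z ^ U2          # Y <- (not X) xor Z xor U2

print("expert   E[Y]         :", reward(U1).mean())                   # -> 1.0
for name, X in [("X <- Z     ", Z),
                ("X <- not Z ", 1 - Z),
                ("X <- coin  ", rng.integers(0, 2, N))]:
    print(f"imitator {name}: E[Y | do(pi)] = {reward(X).mean():.3f}")  # -> ~0.5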
Meanwhile, observe that $\mathbb{E}[Y \mid z, x] = 1$ belongs to a family of reward functions $f_Y(x, z) = \alpha$ (where $\alpha > 0$). Solving $\min_{\pi} \max_{f_Y} \mathbb{E}[f_Y(X, Z)] - \mathbb{E}[f_Y(X, Z) \mid do(\pi)]$ leads to an IRL policy $\pi^*$ with expected reward $\mathbb{E}[Y \mid do(\pi^*)] = 0.5$, which is far from the expert’s optimal performance $\mathbb{E}[Y] = 1$. After all, a question that naturally arises is: under what conditions can an IRL imitator perform well when UCs are present and there is a mismatch between the perceptions of the two agents?

In this paper, we answer this question and, more broadly, investigate the challenge of performing IRL through causal lenses. In particular, our contributions are summarized as follows. (1) We provide a novel causal formulation of the inverse reinforcement learning problem. This formulation allows one to formally study and understand the conditions under which an IRL policy is learnable, including in settings where UCs cannot be ruled out a priori. (2) We derive a new graphical condition for deciding whether an imitating policy can be computed from the available data and knowledge, which provides a robust generalization of current IRL algorithms to non-Markovian settings, including GAIL (Ho & Ermon, 2016) and MWAL (Syed & Schapire, 2008). (3) Finally, we move beyond this graphical condition and develop an effective IRL algorithm for structural causal models (Pearl, 2000) with arbitrary causal relationships. Due to space constraints, all proofs are provided in (Ruan et al., 2023, Appendix B). For a more detailed survey on imitation learning and causal inference, we refer readers to (Ruan et al., 2023, Appendix E).

1.1 PRELIMINARIES

We use capital letters to denote random variables ($X$) and small letters for their values ($x$). $\mathcal{D}_X$ represents the domain of $X$ and $\mathcal{P}_X$ the space of probability distributions over $\mathcal{D}_X$. For a set $X$, let $|X|$ denote its dimension. The probability distribution over variables $X$ is denoted by $P(X)$. Similarly, $P(Y \mid X)$ represents a set of conditional distributions $P(Y \mid X = x)$ for all realizations $x$. We use the abbreviation $P(x)$ for the probability $P(X = x)$; similarly, $P(Y = y \mid X = x) = P(y \mid x)$. Finally, the indicator function $\mathbb{1}\{Z = z\}$ returns 1 if $Z = z$ holds true and 0 otherwise.

The basic semantic framework of our analysis rests on structural causal models (SCMs) (Pearl, 2000, Ch. 7). An SCM $M$ is a tuple $\langle U, V, F, P(U)\rangle$, with $V$ the set of endogenous and $U$ the set of exogenous variables. $F$ is a set of structural functions such that for $f_V \in F$, $V \leftarrow f_V(pa_V, u_V)$, with $PA_V \subseteq V$ and $U_V \subseteq U$. Values of $U$ are drawn from the exogenous distribution $P(U)$, inducing a distribution $P(V)$ over the endogenous variables $V$. Since the learner can observe only a subset of the endogenous variables, we split $V$ into a partition $O \cup L$, where the variables $O \subseteq V$ are observed and $L = V \setminus O$ remain latent to the learner. The marginal distribution $P(O)$ is thus referred to as the observational distribution. An atomic intervention on a subset $X \subseteq V$, denoted by $do(x)$, is an operation where the values of $X$ are set to constants $x$, replacing the functions $f_X = \{f_X : \forall X \in X\}$ that would normally determine their values. For an SCM $M$, let $M_x$ be the submodel of $M$ induced by the intervention $do(x)$. For a set $Y \subseteq V$, the interventional distribution $P(Y \mid do(x))$ induced by $do(x)$ is defined as the distribution over $Y$ in the submodel $M_x$, i.e., $P_M(Y \mid do(x)) \triangleq P_{M_x}(Y)$. We leave $M$ implicit when it is obvious from the context. Each SCM $M$ is associated with a causal diagram $G$, a directed acyclic graph in which (e.g., see Fig. 1) solid nodes represent observed variables $O$, dashed nodes represent latent variables $L$, and arrows represent the arguments $PA_V$ of each function $f_V \in F$.
Exogenous variables $U$ are not explicitly shown; a bi-directed arrow between nodes $V_i$ and $V_j$ indicates the presence of an unobserved confounder (UC) affecting both $V_i$ and $V_j$. We will use family abbreviations to represent graphical relationships such as parents, children, descendants, and ancestors. For example, the set of parent nodes of $X$ in $G$ is denoted by $pa(X)_G = \bigcup_{X \in X} pa(X)_G$; $ch$, $de$, and $an$ are similarly defined. The capitalized versions $Pa$, $Ch$, $De$, $An$ include the argument as well, e.g., $Pa(X)_G = pa(X)_G \cup X$. For a subset $X \subseteq V$, the subgraphs obtained from $G$ by removing the edges outgoing from $X$ and the edges incoming into $X$ are written as $G_{\underline{X}}$ and $G_{\overline{X}}$, respectively. $G[X]$ is the subgraph of $G$ containing only the nodes $X$ and the edges among them. A path from a node $X$ to a node $Y$ in $G$ is a sequence of edges which does not include a particular node more than once. Two sets of nodes $X, Y$ are said to be d-separated by a third set $Z$ in a DAG $G$, denoted by $(X \perp Y \mid Z)_G$, if every edge path from nodes in $X$ to nodes in $Y$ is “blocked” by nodes in $Z$. The criterion of blockage follows (Pearl, 2000, Def. 1.2.3). For a more detailed survey on SCMs, we refer readers to (Pearl, 2000; Bareinboim et al., 2022). A minimal computational sketch of these SCM semantics is given below.
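To ground these definitions, here is a small sketch of an SCM as a program: exogenous sampling, topologically ordered structural functions, and a do($\pi$) operation that swaps out the mechanisms of the chosen actions. The concrete instance reuses the confounded highway model of Fig. 1b; the class itself is an illustrative abstraction, not the paper's code.

import numpy as np

class SCM:
    """Minimal SCM <U, V, F, P(U)>. `sample_u` draws the exogenous values and
    `functions` maps each endogenous variable, in topological order, to its
    structural function. A policy intervention do(pi) replaces f_X for the
    selected actions with decision rules, yielding the submodel M_pi."""

    def __init__(self, sample_u, functions):
        self.sample_u = sample_u
        self.functions = functions

    def expected(self, var, policy=None, n=50_000):
        total = 0.0
        for _ in range(n):
            vals = self.sample_u()
            for V, f in self.functions.items():
                rule = (policy or {}).get(V, f)   # do(pi) overrides f_V
                vals[V] = rule(vals)
            total += vals[var]
        return total / n

rng = np.random.default_rng(0)
M = SCM(lambda: {"U1": int(rng.integers(0, 2)), "U2": int(rng.integers(0, 2))},
        {"Z": lambda v: v["U1"] ^ v["U2"],
         "X": lambda v: v["U1"],                        # expert's behavior policy f_X
         "Y": lambda v: (1 - v["X"]) ^ v["Z"] ^ v["U2"]})

print("E[Y]          =", M.expected("Y"))                                      # -> 1.0
print("E[Y | do(pi)] =", M.expected("Y", policy={"X": lambda v: 1 - v["Z"]}))  # -> ~0.5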
2 CAUSAL INVERSE REINFORCEMENT LEARNING

We investigate the sequential decision-making setting concerning a set of actions $X$, a series of covariates $Z$, and a latent reward $Y$ in an SCM $M$. An expert (e.g., a physician, a driver), operating in the SCM $M$, selects actions following a behavior policy, which is the collection of structural functions $f_X = \{f_X \mid X \in X\}$. The expert’s performance is evaluated by the expected reward $\mathbb{E}[Y]$. On the other hand, a learning agent (i.e., the imitator) intervenes on the actions $X$ following an ordering $X_1 \prec \dots \prec X_n$; each action $X_i$ is associated with a set of features $PA^*_i \subseteq O \setminus \{X_i\}$. A policy $\pi$ over the actions $X$ is a sequence of decision rules $\pi = \{\pi_1, \dots, \pi_n\}$. Each decision rule $\pi_i(X_i \mid Z_i)$ is a probability distribution over an action $X_i \in X$, conditioned on the values of a set of covariates $Z_i \subseteq PA^*_i$. Such policies $\pi$ are also referred to as dynamic treatment regimes (Murphy et al., 2001; Chakraborty & Murphy, 2014), which generalize personalized medicine to time-varying treatment settings in healthcare, in which treatment is repeatedly tailored to a patient’s dynamic state. A policy intervention on the actions $X$ following a policy $\pi$, denoted by $do(\pi)$, entails a submodel $M_\pi$ of an SCM $M$ in which the structural functions $f_X$ associated with $X$ (i.e., the expert’s behavior policy) are replaced with the decision rules $X_i \sim \pi_i(X_i \mid Z_i)$ for every $X_i \in X$. A critical assumption throughout this paper is that the submodel $M_\pi$ does not contain any cycles. Similarly, the interventional distribution $P(V \mid do(\pi))$ induced by a policy $\pi$ is defined as the joint distribution over $V$ in $M_\pi$.

Throughout this paper, detailed parametrizations of the underlying SCM $M$ are assumed to be unknown to the agent. Instead, the agent has access to the following input: (1) a causal diagram $G$ associated with $M$, and (2) the expert’s demonstrations, summarized as the observational distribution $P(O)$. The goal of the agent is to output an imitating policy $\pi^*$ that achieves the expert’s performance.

Definition 1. For an SCM $M = \langle U, V, F, P(U)\rangle$, an imitating policy $\pi^*$ is a policy such that its expected reward is lower bounded by the expert’s reward, i.e., $\mathbb{E}_M[Y \mid do(\pi^*)] \geq \mathbb{E}_M[Y]$.

In words, the right-hand side is the expert’s performance that the agent wants to achieve, while the left-hand side is the actual reward experienced by the agent. The challenge in imitation learning arises from the fact that the reward $Y$ is unspecified and latent, i.e., $Y \notin O$. This precludes approaches that identify $\mathbb{E}[Y \mid do(\pi)]$ directly from the demonstration data (e.g., through the do- or soft-do-calculus (Pearl, 2000; Correa & Bareinboim, 2020)). There exist methods in the literature for finding an imitating policy in the sense of Def. 1. Before describing their details, we first introduce some necessary concepts. For any policy $\pi$, we summarize its associated state-action domain using a sequence of pairs of variables called a policy scope $S$.

Definition 2 (Lee & Bareinboim (2020)). For an SCM $M$, a policy scope $S$ (for short, scope) over actions $X$ is a sequence of tuples $S = \{\langle X_i, Z_i\rangle\}_{i=1}^{n}$ where $Z_i \subseteq PA^*_i$ for every $X_i \in X$.

We will consistently use $\pi \sim S$ to denote a policy $\pi$ associated with the scope $S$. For example, consider the policy scope $S = \{\langle X_1, \{Z_1\}\rangle, \langle X_2, \{Z_2\}\rangle\}$ over actions $X_1, X_2$ in Fig. 1c. A policy $\pi \sim S$ is a sequence of distributions $\pi = \{\pi_1(X_1 \mid Z_1), \pi_2(X_2 \mid Z_2)\}$.

Zhang et al. (2020) and Kumor et al. (2021) provide a graphical condition that is sufficient for learning an imitating policy via behavioral cloning (BC), provided with a causal diagram $G$. For a policy scope $S = \{\langle X_i, Z_i\rangle\}_{i=1}^{n}$, let $G^{(i)}$, $i = 1, \dots, n$, denote the manipulated graph obtained from $G$ by the following steps: for all $j = i+1, \dots, n$, (1) remove the arrows coming into every action $X_j$; and (2) add direct arrows from the nodes in $Z_j$ to $X_j$. Formally, the sequential $\pi$-backdoor criterion is defined as follows.

Definition 3 (Kumor et al. (2021)). Given a causal diagram $G$, a policy scope $S = \{\langle X_i, Z_i\rangle\}_{i=1}^{n}$ is said to satisfy the sequential $\pi$-backdoor criterion in $G$ (for short, to be $\pi$-backdoor admissible) if at each $X_i \in X$, one of the following conditions holds: (1) $X_i$ is not an ancestor of $Y$ in $G^{(i)}$, i.e., $X_i \notin An(Y)_{G^{(i)}}$; or (2) $Z_i$ blocks all backdoor paths from $X_i$ to $Y$ in $G^{(i)}$, i.e., $(Y \perp X_i \mid Z_i)$ in $G^{(i)}_{\underline{X}_i}$.

(Kumor et al., 2021) showed that whenever a $\pi$-backdoor admissible scope $S$ is available, one could learn an imitating policy $\pi^* \sim S$ by setting $\pi^*_i(x_i \mid z_i) = P(x_i \mid z_i)$ for every action $X_i \in X$. For instance, consider the causal diagram $G$ in Fig. 1c. The scope $S = \{\langle X_1, \{Z_1\}\rangle, \langle X_2, \{Z_2\}\rangle\}$ is $\pi$-backdoor admissible since $(X_1 \perp Y \mid Z_1)$ and $(X_2 \perp Y \mid Z_2)$ hold in $G$, which is a supergraph containing both manipulated diagrams $G^{(1)}$ and $G^{(2)}$. An imitating policy $\pi^* = \{\pi^*_1, \pi^*_2\}$ is thus obtainable by setting $\pi^*_1(X_1 \mid Z_1) = P(X_1 \mid Z_1)$ and $\pi^*_2(X_2 \mid Z_2) = P(X_2 \mid Z_2)$. While impressive, a caveat of these results is that the performance of the imitator is restricted by that of the expert, i.e., $\mathbb{E}[Y \mid do(\pi^*)] = \mathbb{E}[Y]$. In other words, causal BC provides an efficient way to mimic the expert’s performance; if the expert’s behavior is far from optimal, the same will hold for the learning agent.

2.1 MINIMAL SEQUENTIAL BACKDOOR CRITERION

To circumvent this issue, we take a somewhat different approach to causal imitation by incorporating the principle of inverse reinforcement learning (IRL). Following the game-theoretic approach of (Syed & Schapire, 2008), we formulate the problem as learning to play a two-player zero-sum game in which the agent chooses a policy and nature chooses an SCM instance. A key property of this formulation is that it allows us to incorporate prior parametric knowledge about the latent reward signal.
1. What is the focus of the paper regarding imitation learning?
2. What are the strengths of the proposed approach, particularly in its extension to existing IRL algorithms?
3. What are the weaknesses of the paper regarding experiment diversity and challenge?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper studies imitation learning through the lens of structural causal models. Specifically, it proposes graphical conditions that enable the imitator to learn an expert-level policy when unobserved confounders (UCs) occur. Interestingly, the proposed method can extend existing IRL algorithms when UCs are present. The two showcased algorithms are causal versions of the multiplicative weights algorithm (MWAL) and generative adversarial imitation learning (GAIL). The proposed methods have been evaluated on real-world and synthetic data, showing superior performance to the original ones.
Strengths And Weaknesses
Strengths
- The proposed method is novel. Under the structural causal model of inverse reinforcement learning, this paper extends the graphical condition for causal behavioral cloning to causal inverse reinforcement learning. This is done by formulating the problem as learning to play a two-player zero-sum game, where prior knowledge about the latent rewards can be incorporated. With informative knowledge, the imitator can outperform the expert in causal environments.
- The proposed method is well motivated with theoretical analysis, and naturally extends to two existing inverse reinforcement learning algorithms, i.e., MWAL and GAIL.
- The proposed method is well supported by the empirical results.
Weaknesses
- More diverse and challenging experiments are expected.
Clarity, Quality, Novelty And Reproducibility
Clarity: The presentation is clear.
Quality: This paper is of high quality.
Novelty: The proposed method in this paper is novel.
Reproducibility: Details of the algorithms and experiments have been provided.
ICLR
Title
FEVERLESS: Fast and Secure Vertical Federated Learning based on XGBoost for Decentralized Labels
Abstract
Vertical Federated Learning (VFL) enables multiple clients to collaboratively train a global model over vertically partitioned data without revealing private local information. Tree-based models, like XGBoost and LightGBM, have been widely used in VFL to enhance the interpretability and efficiency of training. However, there is a fundamental lack of research on how to conduct VFL securely over distributed labels. This work is the first to fill this gap by designing a novel protocol, called FEVERLESS, based on XGBoost. FEVERLESS leverages secure aggregation via an information masking technique and global differential privacy provided by a fairly and randomly selected noise leader to prevent private information from being leaked in the training process. Furthermore, it provides label and data privacy against an honest-but-curious adversary even in the case of collusion among n − 2 out of n clients. We present a comprehensive security and efficiency analysis of our design, and the empirical results from our experiments demonstrate that FEVERLESS is fast and secure. In particular, it outperforms the solution based on additive homomorphic encryption in runtime cost and provides better accuracy than the local differential privacy approach (code is available at: https://github.com/feverless111/vfl).
1 INTRODUCTION
Traditional centralized deep learning models, which demand the collection of a considerable amount of clients’ data to maintain high accuracy, may, to some degree, increase the risk of data breaches. Data may not be easily shared among different entities due to privacy regulations and policies. To tackle this “Data Island” problem (Yang et al. (2019a)), Google proposed Federated Learning (FL) (McMahan et al. (2017)) to allow multiple clients to train a global model without sharing private data. The basic paradigm of FL is that all clients train local models with their own data, and then the information of the local models, e.g., gradients, may be exchanged to produce a global model. Based on different types of data partition (Yang et al. (2019a)), FL can be mainly categorized into Horizontal Federated Learning (HFL) and Vertical Federated Learning (VFL). The former focuses on training with horizontally partitioned data, where clients share the same feature space but differ in data index sets. Several research works (Shokri & Shmatikov (2015); Orekondy et al. (2019); Geiping et al. (2020); Li & Han (2019)) have found that the training data of HFL is still at high risk of leakage although private data is kept locally. Other studies (Phong et al. (2018); Truex et al. (2019); Xu et al. (2019); Zhang et al. (2020); Zhu et al. (2020)) have been dedicated to enhancing the security of HFL. On the contrary, VFL is mainly applied in the scenario of training with vertically partitioned data (Wu et al. (2020); Cheng et al. (2021)), where clients share the same data index set but differ in feature space. In this paper, our principal focus is to achieve privacy-preserving training on VFL. To the best of our knowledge, many existing studies (Hardy et al. (2017); Nock et al. (2018); Liu et al. (2020); Yang et al. (2019b); Cheng et al. (2021); Chen & Guestrin (2016); Wu et al. (2020)) have proposed innovative approaches to prevent private information breaches in the context of VFL. Specifically, (Hardy et al. (2017)) introduced encryption-based privacy-preserving logistic regression to safeguard the information of data indexes. (Nock et al.
(2018)) gave a comprehensive discussion on the impact of ID resolution. (Yang et al. (2019b)) introduced a scheme without using a coordinator for a limited number of clients. Recently, (Liu et al. (2020)) proposed an asymmetric VFL scheme for logistic regression tackling privacy concerns on ID alignment. Unlike the training models used in the aforementioned works, XGBoost (Chen & Guestrin (2016)), which is one of the most popular models applied in VFL, can provide better interpretation, easier parameter tuning, and faster execution than deep learning in tabular data training (Goodfellow et al. (2016); LeCun et al. (2015)). These practical features and advantages draw the attention of academia and industry to research on XGBoost, especially in the privacy-preserving context. (Wu et al. (2020)) introduced an approach for tree-based model training through a hybrid method combining homomorphic encryption and secure Multi-Party Computation (MPC) (Goldreich (1998); Bonawitz et al. (2017)). After that, (Cheng et al. (2021)) proposed a similar system to train XGBoost (Chen & Guestrin (2016)) securely over vertically partitioned data by using Additively Homomorphic Encryption (AHE). By applying Differential Privacy (DP) (Dwork (2008)), (Tian et al. (2020)) designed a VFL system to train GBDT without the need for encryption/decryption. However, most of the above solutions based on AHE and MPC do not scale well in terms of efficiency when training XGBoost. Beyond that, all the existing schemes basically assume that training labels are managed and processed by a sole client. In practice, a VFL scheme supporting distributed labels is necessary. For instance, multiple hospitals, clinics and health centers may currently be set up as COVID-19 test spots and aim to train a model, e.g., XGBoost, to predict with good interpretation whether citizens (living in various locations) are infected based on their health records and symptoms. In this context, the labels (i.e., the test results) are likely distributed among different health authorities - even for the same group of patients - and the feature space is vertically partitioned. For example, a cardiac hospital only maintains heart data for the patients, while a psychiatric center holds the mental records, and both authorities may collect and manage each of their registered patients’ labels locally. Another common scenario could be in the financial sector, where multiple bank branches and e-commerce companies prefer to build a global model to predict whether their customers will pay for some service (e.g., a car loan) on time. The banks have part of the features about the customers (e.g., account balance, funding in-and-out records), while the companies may hold other features (e.g., payment preferences). Since the customers may get the same service, e.g., a loan, from different institutions, it is clear that the labels must be distributed rather than centralized. In addition to the efficiency and functionality aspects, one may also consider capturing stronger security for VFL. Training XGBoost usually involves the computation of the first and second-order derivatives of the loss function (note that gradients and hessians contain label information), and their aggregation is required in each round.
In the context where the labels are held by different clients, if the gradients and hessians are transmitted as plaintexts and their summations are known to an aggregator (who could be one of the clients engaged in training), inference and differential attacks (Appendix C) can easily be conducted by the aggregator, resulting in information leakage. To tackle these problems, we propose a fast and secure VFL protocol, FEVERLESS, to train XGBoost (Appendix B.1) on distributed labels without disclosing either feature or label information. In our design, the privacy protection is guaranteed by secure aggregation (based on a masking scheme) and Global Differential Privacy (GDP) (Appendix B.6). We leverage masking instead of costly multi-party computation, and we guarantee a “perfect secrecy” level for the masked data. In GDP, we use a Verifiable Random Function (VRF) (Appendix B.5) to select a noise leader per round (who cannot be predicted and pre-compromised in advance) to aggregate noise from “selected” clients, which significantly maintains model accuracy. Our contributions can be summarized as follows. (1) We define VFL in a more practical scenario where training labels are distributed over multiple clients. Beyond that, we develop FEVERLESS to train XGBoost securely and efficiently with an elegant combination of a secure aggregation technique (based on Diffie-Hellman (DH) key exchange (Appendix B.2) and a Key Derivation Function (KDF) (Appendix B.4)) and GDP. (2) We give a comprehensive security analysis to demonstrate that FEVERLESS is able not only to safeguard label and feature privacy in the semi-honest setting, but also to maintain robustness even in the case where n − 2 out of n clients collude. (3) We implement FEVERLESS and evaluate training time and accuracy on different real-world datasets. The empirical results show that FEVERLESS can maintain efficiency and accuracy simultaneously, and its performance is comparable to the baseline - a “pure” XGBoost without using any encryption or differential privacy. Specifically, training on the credit card and bank marketing datasets takes just 1% and 6.5% more runtime than the baseline, and meanwhile the accuracy is only lower than that of the baseline by 0.9% and 3.21%, respectively (for the banknote authentication dataset, FEVERLESS takes 13.96% more training time than the baseline, and the accuracy is 30.4% lower; this is because the model is trained on a small-scale dataset, so robustness is seriously affected by noise).
2 PROBLEM FORMULATION
2.1 SYSTEM MODEL
Before proceeding, we give some assumptions on our model. We suppose that a private set intersection (Kolesnikov et al. (2017); Pinkas et al. (2014)) has been used to align data IDs before the training starts, so that each client shares the same data index space $\mathcal{I}$. But the names of features are not allowed to be shared among clients. As for the information of the label distribution (indexes indicating which client a label belongs to, e.g., the label of the i-th data instance is held by client A), we consider the following conditions: (1) this information is revealed to the public in advance; or (2) the information is not allowed to be published but the training can still be accomplished (with extra cost). We also consider that the training is conducted on a dataset with m samples composed of feature space $X = \{x_1, \cdots, x_m\}$, each containing f features, and label set $Y = \{y_1, \cdots, y_m\}$. Besides, features $\{X_j^{(c)} \mid j \in \{1, \cdots, f\}\}$ and labels $\{y_i^{(c)} \mid i \in \{1, \cdots, m\}\}$ are held among n clients, where each client has at least one feature and one label. $X_j^{(c)}$ and $y_i^{(c)}$ refer to the j-th feature and the i-th label owned by the c-th client, respectively.
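To make this notation concrete, here is a minimal illustration of the DL-VFL data layout (all client names, feature names and values are hypothetical; they only mirror the notation $X_j^{(c)}$ and $y_i^{(c)}$ and the disjointness conditions formalized in Definition 1 just below):

```python
# Three hypothetical clients share the same sample index space I = {0, 1, 2, 3}.
index_space = [0, 1, 2, 3]

clients = {
    "A": {"features": {"balance": [0.3, 1.2, 0.7, 2.1]},  # X_1^{(A)}
          "labels": {0: 1, 3: 0}},                        # y_0, y_3 held by A
    "B": {"features": {"payment_pref": [2, 0, 1, 1]},     # X_2^{(B)}
          "labels": {1: 1}},                              # y_1 held by B
    "C": {"features": {"loan_history": [5, 3, 8, 1]},     # X_3^{(C)}
          "labels": {2: 0}},                              # y_2 held by C
}

# Feature sets and label sets are disjoint across clients, while every client
# indexes its rows by the shared space I; each label belongs to exactly one client.
all_label_ids = sorted(i for c in clients.values() for i in c["labels"])
assert all_label_ids == index_space
```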
Considering a practical scenario wherein training labels are distributed among clients, we propose a new variant of VFL, named VFL over Distributed Labels (DL-VFL). The concrete definition is given as follows.
Definition 1 (DL-VFL). Given a training set with m data samples consisting of feature space $\mathcal{X}$, label space $\mathcal{Y}$, index space $\mathcal{I}$ and client set C, we have:
$$\mathcal{X}^{c} \cap \mathcal{X}^{c'} = \emptyset,\quad \mathcal{Y}^{c} \cap \mathcal{Y}^{c'} = \emptyset,\quad \mathcal{I}^{c} = \mathcal{I}^{c'},\quad \forall c, c' \in C,\ c \neq c'. \quad (1)$$
A client c participating in DL-VFL shares the same sample ID space $\mathcal{I}$ with the corresponding labels, where a single label belongs to only one client. Different clients hold subsets of $\mathcal{X}$ sampled from the feature space. To achieve privacy-preserving XGBoost training, we further define two roles.
Definition 2 (Source client). A source client with split candidates wants to compute the corresponding $L_{split}$ based on Eq. (4). But some labels are missing, so $\sum g_i$ and $\sum h_i$ cannot be derived.
For the case that a source client does not hold all labels in the current split candidates, we propose a solution based on secure aggregation and global differential privacy to help the source client compute $L_{split}$ while safeguarding the other clients’ privacy. We consider the two conditions regarding whether the label distribution is publicly known. We find that if we keep the label distribution hidden, we incur extra communication overhead to perform training. The detailed explanation is given in Appendix F. Note that each client may have a chance to act as a source client because all the labels are distributed; the source client leads the $L_{split}$ computation, and the other clients provide the missing label values to the source client. To achieve GDP, we define a noise leader who is selected fairly and randomly from all clients (except for the source client) - preventing clients from being compromised beforehand.
Definition 3 (Noise leader). By using VRF, a noise leader is responsible for generating the maximum leader score, aggregating differentially private noise from a portion of clients and adding the noise to the gradients and hessians.
Note that we summarize the main notations in Table 1 (see Appendix A).
2.2 THREAT MODEL
We mainly consider potential threats incurred by participating clients and outside adversaries. We assume that all clients are honest-but-curious, which means they strictly follow the designed algorithms but try to infer the private information of other clients from the received messages. Besides, we also consider collusion of up to n − 2 clients to conduct attacks, and at least one non-colluded client adds noise per round. Through authenticated channels, DH key exchange can be securely executed among clients. Other messages are transmitted over public channels, and outside attackers can eavesdrop on these channels and try to reveal information about clients during the whole DL-VFL process. Note that this paper mainly focuses on solving privacy issues in training DL-VFL based on XGBoost. Thus, other attacks, like data poisoning and backdoor attacks deteriorating model performance, are orthogonal to our problem.
3 A PRACTICAL PRIVACY-PRESERVING PROTOCOL
3.1 FEVERLESS PROTOCOL DESCRIPTION
To prevent a source client from knowing the gradients and hessians sent by other clients, one may directly use MPC (Damgård et al.
(2012)) based on AHE (Paillier (1999); Wu et al. (2020)). But this method yields expensive computation costs. Avoiding complex mechanisms like MPC, we leverage a secure aggregation protocol via a masking scheme based on DH key exchange (Bonawitz et al. (2017); Ács & Castelluccia (2011); Tian et al. (2020)). By further using a KDF and a hash function (see Appendix B.3 & B.4), our maskings (for gradients and hessians) can be derived without exchanging keys per training round. Our approach significantly reduces the communication cost while still maintaining robustness against up to n − 2 colluded clients. Meanwhile, the secure aggregation can provide “perfect secrecy” for the broadcast messages. After receiving the broadcast messages, the maskings are canceled out at the source client side. But masking alone cannot defend against differential attacks. One may consider using Local Differential Privacy (LDP) (Kairouz et al. (2014)) so that each client adds noise to every outgoing message, barely consuming any extra computation cost. However, the noise accumulated from all clients may seriously affect the model accuracy. To tackle this problem, we use a GDP (Wei et al. (2020)) approach with noise leader selection. A hybrid method is finally formed based on the masking scheme and GDP, so that each client’s sensitive information is protected by the “masks” and the aggregated values are secured by the noise injected by the chosen clients. We briefly introduce our design; the detailed algorithms and more explanations are given in Appendix D. Assume each client c ∈ [1, n] generates its respective secret key $sk_c$ and computes gradients $g_i^{(c)}$ and hessians $h_i^{(c)}$ locally, where $\{i \mid y_i \in \mathcal{Y}^c\}$. FEVERLESS works as follows.
1. Broadcast missing indexes. The source client broadcasts $\text{mIDs} = \{i \mid y_i \notin \mathcal{Y}^c\}$.
2. Key exchange computation. Each client c computes its public key $pk_c = g^{sk_c}$ using secret key $sk_c$, sends $pk_c$ to the other clients, and computes the corresponding shared keys $\{S_{c,c'} = pk_{c'}^{sk_c} = g^{sk_c sk_{c'}} \mid c, c' \in C, c \neq c'\}$ based on its secret key $sk_c$ and the received public keys $\{pk_{c'} \mid c' \in C\}$. (Shared keys are only generated once; the KDF is used to generate the remaining maskings.)
3. Data masking. Each client c runs the masking generation algorithm to compute the maskings for protecting gradients and hessians. Specifically, based on the KDF, the clients’ indexes and the number of queries, the masking generation algorithm computes $\text{mask}_g^{(c)} \leftarrow \sum_{c' \neq c} \frac{|c-c'|}{c-c'} \cdot H(S_{c,c'} \| 0 \| \text{query})$ and $\text{mask}_h^{(c)} \leftarrow \sum_{c' \neq c} \frac{|c-c'|}{c-c'} \cdot H(S_{c,c'} \| 1 \| \text{query})$. (For simplicity, we omit the modular computations; the complete calculation processes are elaborated in Algorithms 3-5.) Then the masked gradients $G^{(c)}$ and hessians $H^{(c)}$ are generated by $G^{(c)} = \sum_{i \in \text{mIDs}} g_i^{(c)} + \text{mask}_g^{(c)} - r_g^{(c)}$ and $H^{(c)} = \sum_{i \in \text{mIDs}} h_i^{(c)} + \text{mask}_h^{(c)} - r_h^{(c)}$.
4. Noise leader selection. Each client generates its selection score $selec_c$ using the VRF, $H(\text{SIGN}_{sk_c}(count, \text{mIDs}, r))$, and broadcasts it, where count is the number of times the clients have conducted the VRF, r is a fresh random number, and SIGN is the signature scheme (see Appendix B.5 for more details). The client with the maximum score will be the noise leader. For ease of understanding, we assume client n, with the largest selection score $selec_n^{max}$, is the leader in Figure 1.
5. Noise injection. a) The noise leader selects k clients to add noise. (For the details of the selection, please see Algorithm 5 in Appendix D.)
b) The selected clients send $\{\tilde{n}_g^{(c)} = N(0, \Delta_g^2\sigma^2) + r_g^{(c)},\ \tilde{n}_h^{(c)} = N(0, \Delta_h^2\sigma^2) + r_h^{(c)} \mid c \in k\}$ to the noise leader, in which $r_g^{(c)}$ and $r_h^{(c)}$ are two random values used to mask the noise. c) The leader aggregates the noise: $\tilde{N}_g = k \cdot N(0, \Delta_g^2\sigma^2) + R_g$ and $\tilde{N}_h = k \cdot N(0, \Delta_h^2\sigma^2) + R_h$, and further adds them to $G^{(n)}$ and $H^{(n)}$, respectively.
6. Aggregation and computation. All clients send the masked values to the source client. The source client computes $\sum_{c=1}^{n} G^{(c)} + k \cdot N(0, \Delta_g^2\sigma^2)$, $\sum_{c=1}^{n} H^{(c)} + k \cdot N(0, \Delta_h^2\sigma^2)$ and $L_{split}$.
7. Final update. The source client with the maximum $L_{split}$ updates the model following XGBoost (Chen & Guestrin (2016)) and broadcasts the updated model and the data indexes in the child nodes, as step 8.
Figure 1 gives an overview of FEVERLESS. Note that this process can be conducted iteratively. For simplicity, only the core calculation processes are shown here; more details are in Appendix D.
3.2 THEORETICAL ANALYSIS
Computation cost: We use B and d to denote the number of buckets and the maximum depth, respectively, and $f^{(c)}$ here represents the number of features held by a client c. For each client c, the computation cost can be divided into 4 parts: (1) performing at most $f^{(c)} \cdot B \cdot N_T \cdot (2^d - 1)$ computations of $L_{split}$ and w, taking $O(f^{(c)} \cdot B \cdot N_T \cdot 2^d)$ time; (2) creating n − 1 shared keys and 1 public key, which is O(n); (3) taking $O(f^{(c)} \cdot B \cdot N_T \cdot 2^d)$ time to compute VRF outputs, select the noise leader and generate noise; (4) generating $2 f^{(c)} \cdot B \cdot N_T \cdot (2^d - 1)$ maskings, which takes $O(f^{(c)} \cdot B \cdot N_T \cdot 2^d \cdot n)$ time. Overall, each client’s computation complexity is $O(f^{(c)} \cdot B \cdot N_T \cdot 2^d \cdot n)$.
Communication cost: Each client’s communication cost can be calculated as: (1) broadcasting at most $f^{(c)} \cdot B \cdot N_T \cdot (2^d - 1)$ times the missing indexes mID; (2) broadcasting 1 public key and receiving n − 1 public keys from the other clients; (3) broadcasting 1 leader selection score and sending noise to the noise leader at most $f^{(c)} \cdot B \cdot N_T \cdot (2^d - 1)$ times; (4) sending the source client 2 masked gradients and hessians of size $2\lceil \log_2 N \rceil$. Therefore the overall communication cost is $f^{(c)} \cdot B \cdot N_T \cdot (2^d - 1) \cdot (\|mID\| \cdot \alpha_I + \alpha_L + \alpha_N + n \cdot \alpha_K + 2\lceil \log_2 N \rceil)$, where $\alpha_I$, $\alpha_L$, $\alpha_N$ and $\alpha_K$ refer to the number of bits of an index, a leader selection score, the noise and a public key, respectively. Thus, we have the communication complexity $O(f^{(c)} \cdot B \cdot N_T \cdot 2^d)$.
3.3 SECURITY ANALYSIS
We prove that FEVERLESS provides label and data privacy against an adversary controlling at most n − 2 clients in the semi-honest setting (Smart (2016)). Here, we provide a brief summary of our analysis and theorems. The formal proofs, in the random oracle model, are given in Appendix E.
Label Privacy: Label privacy implies that the ownership of a label among the honest parties should not be leaked to the adversary. We achieve this by using a secure aggregation mechanism where the masks are created via DH key exchange and a KDF. In brief, we show that because of the Decisional DH problem (see Definition 4), the adversary cannot distinguish the individual values from randomly chosen ones. That is why the adversary A cannot learn the owner of a label.
Data Privacy: FEVERLESS provides data privacy, meaning that an adversary A cannot extract the data of any honest party. Individual data values are not separable from random values because of the secure masking. If the source client is not part of the adversary, no data information is leaked.
But we require an additional countermeasure for the case where the source client is part of the adversary, because it can collect the summation of the data values. We use differential privacy (Dwork et al. (2006a;b)) to achieve data privacy. Because of the noise added by differential privacy, the adversary cannot learn the individual data of an honest client. Moreover, we select the noise clients via the VRF, which ensures that the noise leader cannot be predicted or compromised in advance.
Theorem 3.1 (A not including the source client). There exists a PPT simulator Sim for all $|C| := n \geq 3$, $|\mathcal{X}| := f \geq n$, $|\mathcal{Y}| := m \geq 1$, $\bigcup_{c \in C} \mathcal{X}^{(c)}$, $\bigcup_{c \in C} \mathcal{Y}^{(c)}$ and $A \subset C$ with $|A| \leq n - 2$, such that the output of Sim is indistinguishable from the output of REAL: $\text{REAL}_A^{C,\mathcal{X},\mathcal{Y}}(\mathcal{X}^C, \mathcal{Y}^C) \equiv \text{Sim}_A^{C,\mathcal{X},\mathcal{Y}}(\mathcal{X}^A, \mathcal{Y}^A)$.
Theorem 3.2 (A including the source client). There exists a PPT simulator Sim for all $|C| := n \geq 3$, $|\mathcal{X}| := f \geq n$, $|\mathcal{Y}| := m \geq 1$, $\bigcup_{c \in C} \mathcal{X}^{(c)}$, $\bigcup_{c \in C} \mathcal{Y}^{(c)}$ and $A \subset C$ with $|A| \leq n - 2$, such that the output of Sim is indistinguishable from the output of REAL: $\text{REAL}_A^{C,\mathcal{X},\mathcal{Y}}(\mathcal{X}^C, \mathcal{Y}^C) \equiv \text{Sim}_A^{C,\mathcal{X},\mathcal{Y}}(G, H, \mathcal{X}^A, \mathcal{Y}^A)$, where $G = \sum_{i \in \text{mIDs}} g_i^{(c)} + N(0, (\Delta_g\sigma)^2)$ and $H = \sum_{i \in \text{mIDs}} h_i^{(c)} + N(0, (\Delta_h\sigma)^2)$.
Theorem 3.3 (Privacy of the Inputs). No $A \subset C$ such that $|A| \leq n - 2$ can retrieve the individual values of the honest clients with probability $1 - \sum_{i=0}^{\hat{k}} \binom{h}{i}\binom{n-2-h}{\hat{k}-i}(P_t)^{\hat{k}}(1-P_t)^{(n-\hat{k})}\frac{\binom{\hat{k}-i}{k}}{\binom{\hat{k}}{k}}$, where h and $\hat{k}$ refer to the number of non-colluded clients and the number of clients who have a selection score larger than the threshold, respectively; and $P_t$ is the probability of a selection score being larger than the threshold.
4 EXPERIMENT
We perform evaluations on accuracy, runtime performance and communication cost, and compare our design with two straightforward secure approaches: one based on LDP (for accuracy), and the other built on AHE with GDP (for runtime). These approaches are the most commonly used components for privacy-preserving FL, and they could be the building blocks for complex mechanisms, e.g., MPC. We note that our protocol should intuitively outperform MPC-based solutions, and one may leverage our source code to make further comparisons if interested. In the experiments, the baseline, which is the pure XGBoost algorithm, follows the training process of Figure 1 without using any privacy-preserving tools. LDP does not conduct DH key exchange, but each client injects noise into the aggregation of gradients and hessians, while AHE follows Figure 1 except for executing DH key exchange; in AHE, each client sends (additively) encrypted messages to the source client in place of masked values. We here show the performance of the best case where there is only one (non-colluded and randomly selected) client adding noise per round (k = 1). For other results (where k ≠ 1), see Appendix H.2. Note that we present the communication cost in Appendix H.5.
4.1 EXPERIMENT SETUP
To present comprehensive results on accuracy, we set ε to 10, 5, 2 and 1, and δ is set to $10^{-5}$. In terms of accuracy and runtime, we evaluate different situations by varying the number of clients, the number of trees, and the maximum depth of trees (from 2 to 10). Other training parameters follow the suggestions in (Chen & Guestrin (2016)) and the XGBoost library (https://xgboost.readthedocs.io/). To deliver fair results, we conduct each test for 20 independent trials and then calculate the average.
Datasets. We run the experiments on three datasets - Credit Card (Yeh & Lien (2009)), Bank Marketing (Moro et al. (2014)) and Banknote Authentication (https://archive.ics.uci.edu/ml/datasets/banknote+authentication) - for classification tasks.
To fairly investigate the model performance in DL-VFL, we make the labels as sparse as possible and distribute them uniformly across clients. We give more details of the experiment setup in Appendix G.
4.2 EVALUATION ON ACCURACY
In Figure 2, we present a clear picture of the accuracy performance based on the #tree and the maximum depth under $(2, 10^{-5})$-DP. We merge the #client into one tree structure, i.e., one bar, whose value is the mean accuracy over the different client numbers. The accuracy of the baseline on credit card (about 0.82) and bank marketing (nearly 0.9) remains unchanged as the #tree and maximum depth increase, while the accuracy on banknote authentication rises from 0.9 to approximately 1.0. To highlight the differences and ensure all results are displayed clearly, we set the ranges of accuracy as [0.5, 0.9], [0.5, 1] and [0, 1] for the three datasets, respectively. Note that the performance based on the #client is given in Appendix H.1. Compared with the baseline, as shown in the top and middle rows of Figure 2, FEVERLESS and LDP suffer from continuously shrinking accuracy as the tree structure becomes complex. This is because the injected noise accumulates in the model as the number of queries increases. And the accuracy is easily affected by the depth. In the worst case, where the #tree and maximum depth are both equal to 10, FEVERLESS decreases by 10.37% (resp. 14.98%), and LDP drops by 24.78% (resp. 24.59%) on credit card (resp. bank marketing). But on average, FEVERLESS’ accuracy only shrinks by around 0.9% (resp. 3.21%), while LDP suffers an estimated 3× (resp. 2×) accuracy loss. The difference in the degree of deterioration mainly comes from how much noise is added per query. We note that the deterioration of FEVERLESS is independent of the #client. Thus, we can maintain great accuracy even in the case where there is a considerable number of clients. Despite the fact that less noise is added in FEVERLESS, the accuracy still falls to nearly the same level as LDP (around 50%, like a random guess in binary classification) in the bottom row of Figure 2. This is because the model is trained on an extremely small dataset, which makes it hard to maintain robustness and relatively sensitive to noise. If we set a larger ε, our advantage becomes clearer. The experiments conducted on the banknote authentication dataset with larger ε are given in Appendix H.3. To distinguish the performance between FEVERLESS and LDP more clearly, Figure 3 shows the comparison over different ε, when #depth and #tree are set to 10. The model performance decays as ε decreases. In the left (resp. middle) plot of Figure 3, the averaged accuracy of FEVERLESS falls from 0.7686 to 0.5967 (resp. from 0.8517 to 0.6831), while that of LDP decreases to 0.5299 (resp. 0.5853). We notice that the highest values of LDP stay at the same level as those of FEVERLESS. This is because, in the case of 2-client training, only one client needs to add noise in LDP (which is identical to our GDP solution). Finally, the worst case can be seen in the right plot of Figure 3, due to the weak robustness of the model obtained from banknote authentication. The results there are much farther from the baseline. But even in this case, FEVERLESS still holds a slight advantage over LDP.
4.3 EVALUATION ON TRAINING TIME
To highlight the runtime complexity, we likewise average the results varying by client number into one tree structure.
We further set the time ranges as [0s, 9,500s], [0s, 3,500s] and [0s, 110s] for the datasets to deliver visible results. Note that since the banknote dataset contains the fewest samples, it delivers the best training efficiency here. Figure 4 presents the comparison of training time when varying the maximum depth and the number of trees across the datasets. The training time increases exponentially with depth and linearly with the number of trees, which is consistent with our analysis given in Section 3.2. In Figure 4, compared with the baseline, the runtime of FEVERLESS increases by at most 110.3s (resp. 50s, 4.3s), while AHE requires a roughly 70× (resp. 48×, 21×) spike on credit card (resp. bank marketing, banknote authentication), where #depth and #trees are equal to 10. In the average case, FEVERLESS consumes approximately 1% (resp. 6.5%, 13.96%) more training time than the baseline, while AHE requires 351% (resp. 155.1%, 674%) extra, w.r.t. the three datasets. Its poor performance is due to the laborious encryption calculations, in which each client has to conduct an encryption per query. By contrast, the maskings in FEVERLESS avoid these excessive costs. We further investigate the runtime performance w.r.t. the #client in Appendix H.
5 CONCLUSION AND FUTURE WORK
We consider a practical scenario where labels are maintained by different clients in a distributed manner for VFL. By leveraging secure aggregation and GDP, we present a novel system, FEVERLESS, to train XGBoost securely. FEVERLESS can achieve perfect secrecy for labels and data, and adversaries cannot learn any information about the data if the source client is not corrupted. With DP against differential attacks, the source client knows nothing more than the summation. Our design is also robust to the collusion of n − 2 out of n clients. The experimental results show that FEVERLESS is fast and accurate, taking only 1% extra training time and sacrificing 0.9% accuracy compared to the pure XGBoost. In Appendix F, we discuss how to reduce noise, hide the distribution of labels and use other security tools. Although our system achieves great performance in terms of security and efficiency, its accuracy still suffers on small-scale datasets. This remains an open problem. We will also consider secure solutions against malicious adversaries.
A NOTATIONS
The frequently used notations are summarized in Table 1.
B PRELIMINARIES
B.1 XGBOOST
XGBoost (Chen & Guestrin (2016)) is a popular tree-based model for tabular data training that can provide better interpretation, easier parameter tuning and faster execution speed than deep learning (Goodfellow et al. (2016); LeCun et al. (2015)). It also outperforms other well-known boosting tree systems in terms of accuracy and efficiency, like Spark MLLib (Meng et al. (2016)) and H2O (Chen & Guestrin (2016)), especially for large-scale datasets. Therefore, in this paper, we consider using XGBoost as a building block for classification tasks. Assume a training set with m data points comprising feature space $X = \{x_1, \cdots, x_m\}$ and label space $Y = \{y_1, \cdots, y_m\}$. Before training starts, every feature is sorted based on its values, and split candidates are set for the features. XGBoost builds trees based on the determination of the defined split candidates and some pruning conditions.
Specifically, gradients and hessians are computed first according to Eq. (2) and Eq. (3) for each data entry, where $\hat{y}_i^{(t-1)}$ denotes the prediction of the previous tree for the i-th data point, and $y_i$ is the label of the i-th data point:
$$g_i = \frac{1}{1 + e^{-\hat{y}_i^{(t-1)}}} - y_i = \hat{y}_i - y_i, \quad (2)$$
$$h_i = \frac{e^{-\hat{y}_i^{(t-1)}}}{(1 + e^{-\hat{y}_i^{(t-1)}})^2}. \quad (3)$$
For splitting nodes, the XGBoost algorithm determines the best split candidate among all others based on the maximum $L_{split}$ in Eq. (4), where λ and γ are regularization parameters:
$$L_{split} = \frac{1}{2}\left[\frac{(\sum_{i \in I_L} g_i)^2}{\sum_{i \in I_L} h_i + \lambda} + \frac{(\sum_{i \in I_R} g_i)^2}{\sum_{i \in I_R} h_i + \lambda} - \frac{(\sum_{i \in I} g_i)^2}{\sum_{i \in I} h_i + \lambda}\right] - \gamma. \quad (4)$$
The current node becomes a leaf node if one of the following conditions is fulfilled: the maximum tree depth is reached, or the maximum impurity is less than the preset threshold. The calculation of the leaf value follows Eq. (5):
$$w = -\frac{\sum_{i \in I} g_i}{\sum_{i \in I} h_i + \lambda}. \quad (5)$$
B.2 DIFFIE-HELLMAN KEY EXCHANGE
Based on the Decisional Diffie-Hellman (DDH) hard problem (Boneh (1998)) defined below, Diffie-Hellman key exchange (DH) (Diffie & Hellman (1976)) provides a method for exchanging keys across public communication channels. Without losing generality and correctness, it consists of a tuple of algorithms (Param.Gen, Key.Gen, Key.Exc). The algorithm $(\mathbb{G}, g, q) \leftarrow \text{Param.Gen}(1^\alpha)$ generates public parameters (a group $\mathbb{G}$ with prime order q generated by a generator g) based on the security parameter α. $(sk_i, pk_i) \leftarrow \text{Key.Gen}(\mathbb{G}, g, q)$ allows client i to generate a secret key ($sk_i \xleftarrow{\$} \mathbb{Z}_q$) and compute a public key ($pk_i \leftarrow g^{sk_i}$). The shared key is computed by $(pk_i^{sk_j}, pk_j^{sk_i}) \leftarrow \text{Key.Exc}(sk_i, pk_i, sk_j, pk_j)$. Inspired by (Bonawitz et al. (2017); Ács & Castelluccia (2011)), we utilize shared keys as maskings to protect label information against inference attacks during transmission over public channels. Correctness requires $pk_i^{sk_j} = pk_j^{sk_i}$. The security relies on the DDH problem (Boneh (1998)), which is defined as:
Definition 4 (Decisional Diffie-Hellman). Let $\mathbb{G}$ be a group with prime order q and g be the fixed generator of the group. The Probabilistic Polynomial Time (PPT) adversary A is given $g^a$ and $g^b$, where a and b are randomly chosen. The probability of A distinguishing $(g^a, g^b, g^{ab})$ from $(g^a, g^b, g^c)$ for a randomly chosen c is negligible:
$$\left|\Pr[a, b \xleftarrow{\$} \mathbb{Z}_q : A(g, g^a, g^b, g^{ab}) = \text{true}] - \Pr[a, b, c \xleftarrow{\$} \mathbb{Z}_q : A(g, g^a, g^b, g^c) = \text{true}]\right| < \text{negl}(\alpha).$$
B.3 PSEUDO-RANDOM GENERATOR AND HASH FUNCTION
A Pseudo-Random Generator (PRG) (Håstad et al. (1999)) is an algorithm that generates random numbers. “Pseudo-random” here means that the generated number is not truly random but has similar properties to a truly random number. Generally, pseudo-random numbers are determined by given initial values, a.k.a. seeds. In cryptographic applications, a secure PRG requires that attackers who do not know the seed can distinguish a truly random number from an output of the PRG with only negligible probability. Similar to a PRG, a hash function maps data of arbitrary size to a value of fixed bit length. To reduce the communication cost of FEVERLESS, we use SHAKE-256 (Sha (2015)), one of the hash functions in the SHA-3 family (Aumasson et al. (2008)), to generate maskings of customizable size.
B.4 KEY DERIVATION FUNCTION
A Key Derivation Function (KDF) (Krawczyk & Eronen (2010)) is a kind of hash function that derives multiple secret keys from a main key by utilizing a Pseudo-Random Function (PRF) (Kaliski (2005)). In general, the KDF algorithm $DK \leftarrow \text{KDF}(mainkey, salt, rounds)$ derives keys DK based on a main key, a cryptographic salt and the current round of the processing algorithm.
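Tying B.2-B.4 together, the following minimal sketch (with hypothetical toy parameters; SHAKE-256 stands in for both the hash H and the KDF/PRF, and the group parameters are far too small to be secure) derives per-round pairwise masks from DH shared keys, in the spirit of Algorithms 3 and 5, and checks that they cancel out upon aggregation:

```python
import hashlib

# Toy DH parameters -- NOT secure; a real deployment uses a large prime-order group.
p, g = 2**61 - 1, 5
N = 2**64  # modulus for masked values

def kdf_mask(shared_key: int, salt: int, query: int) -> int:
    # DK <- KDF(mainkey, salt, rounds): SHAKE-256 over (S_{c,c'} || salt || query)
    data = shared_key.to_bytes(16, "big") + bytes([salt]) + query.to_bytes(4, "big")
    return int.from_bytes(hashlib.shake_256(data).digest(8), "big") % N

def mask(c: int, shared: dict, salt: int, query: int) -> int:
    # mask^{(c)} = sum_{c' != c} sign(c - c') * H(S_{c,c'} || salt || query) mod N
    total = 0
    for c_prime, s in shared.items():
        sign = 1 if c > c_prime else -1
        total = (total + sign * kdf_mask(s, salt, query)) % N
    return total

sks = {1: 123457, 2: 654321, 3: 777777}                  # toy secret keys sk_c
pks = {c: pow(g, sk, p) for c, sk in sks.items()}        # pk_c = g^{sk_c} mod p
shared = {c: {o: pow(pks[o], sks[c], p) for o in sks if o != c} for c in sks}

# Shared keys agree pairwise: S_{c,c'} = pk_{c'}^{sk_c} = pk_c^{sk_{c'}}
assert shared[1][2] == shared[2][1]

# Masks are refreshed per query via the KDF and cancel out when aggregated.
for query in (1, 2, 3):
    masks = {c: mask(c, shared[c], salt=0, query=query) for c in sks}
    assert sum(masks.values()) % N == 0
```

In the protocol, each client would then upload $G^{(c)} = \sum_i g_i^{(c)} + \text{mask}^{(c)} \bmod N$, so the source client's summation reveals only the (noise-protected) total.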
Security requires that a KDF be robust against brute-force and dictionary attacks. Inspired by (Zdziarski (2012)), where key shares generated by DH key exchange are converted to AES keys, in this paper we use a KDF to generate maskings for every round in order to reduce communication cost. The main key we use is generated by DH key exchange.
B.5 VERIFIABLE RANDOM FUNCTION
A Verifiable Random Function (VRF) (Micali et al. (1999)) is a PRF providing verifiable proofs of the correctness of its outputs. It is a tool widely used in cryptocurrencies, smart contracts and leader selection in distributed systems (Micali (2016)). Basically, given an input x, a signature scheme and a hash function, a practical leader selection scheme with VRF (Micali (2016)) works as:
$$S_{leader} \leftarrow H(\text{sign}_{sk_i}(x)), \quad (6)$$
where $sk_i$ is the secret key of the i-th client, and the maximum leader score $S_{leader}$ is used to determine the leader. The security and unforgeability of the VRF require that the signature scheme has the uniqueness property, and that the hash function maps the signature to a fixed-size random string. The correctness of $S_{leader}$ is proved by the signature of x.
B.6 DIFFERENTIAL PRIVACY
Differential Privacy (DP) (Dwork et al. (2006a;b)) is a data protection system targeting the publication of statistical information of datasets while keeping individual data private. Its security requires that adversaries cannot distinguish the statistical change between two datasets in which an arbitrary data point differs. The most widely used DP mechanism is (ε, δ)-DP, which requires less injected noise than the originally proposed ε-DP while providing the same privacy level. The formal definition is given as follows.
Definition 5 ((ε, δ)-Differential Privacy). Given two positive real numbers (ε, δ) and a randomized algorithm $A: D^n \to \mathcal{Y}$, the algorithm A provides (ε, δ)-differential privacy if for all datasets $D, D' \in D^n$ differing in only one data sample, and all $S \subseteq \mathcal{Y}$:
$$\Pr[A(D) \in S] \leq \exp(\varepsilon) \cdot \Pr[A(D') \in S] + \delta. \quad (7)$$
Note that the noise $N \sim \mathcal{N}(0, \Delta^2\sigma^2)$ is added to the output of the algorithm, where Δ is the $\ell_2$-norm sensitivity of D and $\sigma = \sqrt{2\ln(1.25/\delta)}/\varepsilon$ (Abadi et al. (2016)).
C PRIVACY CONCERN
Since we assume that feature names are not public information for all clients, and the values of features never leave the clients, the privacy issues are mainly incurred by the leakage of label information.
C.1 INFERENCE ATTACK
During the training process, gradients and hessians are sent to the source client for the $L_{split}$ computation. For binary classification, a single gradient lies in the range (−1, 0) ∪ (0, 1). According to Eq. (2), a label can be inferred as 1 or 0 if the gradient lies in (−1, 0) or (0, 1), respectively. Besides, the hessian in Eq. (3) can leak the prediction of the corresponding data sample. As training proceeds, the prediction gets increasingly closer to the true label, so the source client and outside attackers can infer the true label with high probability. Gradients and hessians therefore cannot be transmitted in plaintext. We thus use a secure aggregation scheme to protect them from inference attacks.
C.2 DIFFERENTIAL ATTACK
A differential attack can happen anytime and many times during the calculation of gradients and hessians. Figure 5 describes an example of a differential attack taking place in a single node split. After sorting feature 1, the semi-honest source client defines 2 split candidates and further computes $G_{\{2,5\}} = g_2 + g_5$ and $G_{\{1,2,3,5\}} = g_1 + g_2 + g_3 + g_5$ for candidates 1 and 2, respectively.
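A toy numeric sketch of this attack (hypothetical predictions and labels; gradients follow Eq. (2)), previewing the subtraction-based recovery detailed next:

```python
import math

def g(y_hat, y):
    # Eq. (2): g_i = sigmoid(y_hat_i) - y_i
    return 1.0 / (1.0 + math.exp(-y_hat)) - y

# Hypothetical raw predictions and labels for samples 1, 2, 3, 5;
# the source client owns label 2.
y_hat = {1: 0.4, 2: -0.3, 3: 1.1, 5: 0.2}
y     = {1: 1,   2: 0,    3: 1,   5: 0}
grads = {i: g(y_hat[i], y[i]) for i in y}

G_25   = grads[2] + grads[5]                  # candidate 1 aggregate
G_1235 = sum(grads[i] for i in (1, 2, 3, 5))  # candidate 2 aggregate
G_13   = grads[1] + grads[3]                  # aggregate from a later node split

# Knowing its own g_2, the source client peels g_5 out of the aggregates:
assert abs((G_25 - grads[2]) - grads[5]) < 1e-12            # Figure 5's variant
assert abs((G_1235 - G_13 - grads[2]) - grads[5]) < 1e-12   # Figure 6's variant
```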
Since the source client holds label 2, even if $G_{\{2,5\}}$ is derived by secure aggregation, $g_5$ can still be revealed by $G_{\{2,5\}} - g_2$. Another example of a differential attack is shown in Figure 6. Assume split candidate 1 is the one used for splitting the root node. In the current tree structure, the source client may split the right node by computing $L_{split}$ of split candidate 2. In this case, $G_{\{1,3\}}$ is aggregated by the source client, and $g_5$ can be revealed by $G_{\{1,2,3,5\}} - G_{\{1,3\}} - g_2$, where $G_{\{1,2,3,5\}}$ was computed in the previous node.
D MORE DETAILS ON THE FEVERLESS PROTOCOL
D.1 XGBOOST TRAINING OVER DISTRIBUTED LABELS
At the initial stage, we let all clients agree on a tree structure (maximum depth and the number of trees) and the learning rate for updating predictions. To avoid overfitting, we should define the regularization parameters. The impurity threshold is another vital parameter, used to identify tree and leaf nodes via the maximum impurity. After that, we should choose ε, δ for DP, and the hash function for masking generation and noise leader selection. Besides, we select a multiplicative group $\mathbb{G}$ with order q generated by a generator g and a large prime number p to run DH. In the initialization process, all clients set parameters and sort their own features based on values. Then, split candidates can be defined, and data samples between two different candidates are grouped into a bucket. At the end, all entries are assigned initialized values to calculate the derivatives of the loss function. The detailed algorithm is described as follows.
Algorithm 1: Initialization
1: Set parameters: all clients agree on the maximum depth of a tree d, the number of trees $N_T$, the learning rate η, the regularization parameters (λ, γ), the threshold of $L_{split}$, ε, δ, p, g, the selection portion p and the hash function
2: for c ∈ [1, n] do
3:   for each feature j owned by c do
4:     sort($X_j^{(c)}$)
5:     define buckets: $B_j^z$
6:   end
7:   set initialized values: $\hat{y}_i^{(c)}$
8: end
After initialization, all clients can invoke Algorithm 2 to train the model collaboratively. The inputs are the features $X_j^{(c)}$ and labels $y_i^{(c)}$ distributed over different clients, while the output is a trained XGBoost model that can be used for prediction. Generally, trees are built one by one, and we see from lines 4-10 of Algorithm 2 that each client computes gradients and hessians at the beginning of the construction of a new tree. Following that, clients split the current node. Note that XGBoost training in DL-VFL requires each client to calculate G and H. If the labels in some buckets are incomplete, the corresponding gradients and hessians cannot be computed. Thus, each client should first broadcast the missing data index set mID (see lines 15-17 of Algorithm 2). Based on the predefined bucket $B_j^z$, mID can be defined if labels in $B_j^z$ are not held by the client. In each broadcast, the client sending messages is regarded as a source client. Then the others send the corresponding $g_i^{(c')}$ and $h_i^{(c')}$ back to the source client to compute $L_{split}$ through Algorithms 3-5 depicted in Appendix D.2. After finding the maximum impurity $L_{split\,max}^{c}$, the current node is split into “left” and “right” nodes if $L_{split\,max}^{c}$ > threshold of $L_{split}$, where the value of the split candidate is owned by c.
In node splitting, clients set a given node as a “leaf” if the current depth reaches the predefined maximum depth or the maximum $L_{split}$ is less than the predefined threshold of $L_{split}$ (see lines 12 and 24-32 of Algorithm 2). The derivation of the leaf value follows Eq. (5), taking G and H as input. Since a leaf node is split off as either “left” or “right” from its parent node by one of the clients in C, this client knows G and H, and the leaf value can be derived. Finally, this leaf value is broadcast, and clients who own the corresponding $g_i^{(c)}$ and $h_i^{(c)}$ can use it to update their predictions. The details of the above process are shown in Algorithm 2.
Algorithm 2: Protocol overview
1: Input: $\{X_j^{(c)} \mid j \in f, c \in |C|\}$: features, $\{y_i^{(c)} \mid i \in m, c \in |C|\}$: labels
2: Output: XGBoost model
3: Building trees:
4: for $n_t$ ∈ [1, $N_T$] do
5:   for c ∈ [1, n] do
6:     for each data entry i owned by c do
7:       $g_i^{(c)} \leftarrow \partial_{\hat{y}_i^{(c)}} Loss(\hat{y}_i^{(c)}, y_i^{(c)})$
8:       $h_i^{(c)} \leftarrow \partial^2_{\hat{y}_i^{(c)}} Loss(\hat{y}_i^{(c)}, y_i^{(c)})$
9:     end
10:  end
11:  for each node in the current tree do
12:    while current depth < d do
13:      for c ∈ [1, n] do
14:        for each feature j owned by c do
15:          for each $B_j^z$ owned by c do
16:            broadcast mID = $\{i \mid y_i \notin \mathcal{Y}^c\}$
17:          end
18:          aggregate G, H by Algorithms 3-5
19:          compute $L_{split}$ according to Eq. (4)
20:        end
21:        find the maximum $L_{split}^{(c)}$ and broadcast
22:      end
23:      $L_{split\,max}^{(c)} \leftarrow \max(\{L_{split}^{(c)} \mid c \in [1, n]\})$
24:      if $L_{split\,max}^{(c)}$ ≤ threshold of $L_{split}$ then
25:        set current node as leaf node
26:        c computes w and broadcasts
27:        break
28:      else
29:        c splits the current node into left and right nodes, and broadcasts their data indexes
30:      end
31:    end
32:    set remaining nodes as leaf nodes
33:    c computes w and broadcasts
34:    clients participating in the calculation of w: update $\hat{y}_i^{(c)}$
35:  end
36: end
D.2 SECURE AGGREGATION WITH GLOBAL DIFFERENTIAL PRIVACY
In lines 15-19 of Algorithm 2, the source client is able to compute $L_{split}$ from the requested missing data indexes and the aggregation of the received messages. To prevent inference and differential attacks on labels by the source client and outside adversaries, we propose a privacy-preserving approach, shown in Algorithms 3-5, that “twists” DH key exchange, noise leader selection and secure aggregation together. This method represents a viable alternative for training XGBoost securely in DL-VFL without demanding excessive computational resources or harming model accuracy. To generate the secure-but-cancellable maskings, we adopt DH here. In Algorithm 3, all clients randomly select numbers as their secret keys and generate the corresponding public keys. Any two clients in the set C exchange public keys and compute the corresponding shared keys. For simplicity, we do not describe the signature scheme for DH. We assume DH is conducted over authenticated channels, which means the man-in-the-middle attack (Khader & Lai (2015)) is invalid here.
Algorithm 3: Diffie-Hellman key exchange
1: for c ∈ [1, n] do
2:   $sk_c \leftarrow \mathbb{Z}^*_p$
3: end
4: for c ∈ [1, n] do
5:   $pk_c = g^{sk_c} \bmod p$
6:   for c′ ∈ [1, n] ∧ c′ ≠ c do
7:     $S_{c,c'} = pk_{c'}^{sk_c} \bmod p$
8:   end
9: end
If the shared keys were used as maskings directly, our system would not be robust against client collusion unless communication were sacrificed to update the maskings per round. But then the communication complexity grows rapidly with the number of clients for a single node split. Considering the structure of the trees, the overall communication complexity would be $O(2^d \cdot N_T \cdot n^2)$, which may not scale well in practical applications.
To tackle this issue, we use the KDF to update the maskings per round automatically. Specifically, in lines 24-25 of Algorithm 5, the shared keys are taken as main keys, and 0 and 1 are the salt values for gradients and hessians, respectively. Since the query varies in each round, the generated maskings change accordingly. Besides, the sign of a masking is determined by the indexes of the clients. In this way, we only need to run DH once, and the communication complexity is independent of the tree structure. To enable FEVERLESS to hold against differential attacks, we use a GDP approach that allows a chosen client to inject a global noise into the aggregated values per round. The approach is quite subtle. If the noise leader were selected by the source client, the system would be vulnerable to collusion. Moreover, a client could easily be identified as a target if we chose it in advance, e.g., by selecting a list of leaders before the training. To avoid these issues and limit the probability of collusion to the greatest extent, we use the VRF to iteratively select the leader (see Algorithm 4), who securely injects a global noise. The input of the VRF includes mIDs and a fresh random number r (line 4 of Algorithm 4), so that this client cannot be predicted and fixed beforehand - reducing its chance of being corrupted in advance by outsiders and the source client. All clients broadcast their scores, and the one who holds the maximum value becomes the leader. Then the leader re-generates a selection score as the score threshold (selecthreshold) and sends it to the rest of the clients (lines 2-6 of Algorithm 5). The clients send masked noise back to the leader if their re-generated scores are larger than the threshold (lines 7-13 of Algorithm 5). Subsequently, the leader selects k clients, notifies them and aggregates their masked noise to generate a global noise with a random number. In this context, even if the selected clients collude with the noise leader and the source client (note that at least one does not), there is still a noise term that cannot be recovered, keeping the training differentially private. Note that since the noise is masked by the random number, the source client (even colluding with the leader) cannot recover the “pure” global noise to conduct a differential attack. Each client adds noise with a probability p; if k out of $\hat{k}$ clients are non-colluded, the probability of collusion is $(1 - \frac{k}{n})^h$.
Algorithm 4: Noise leader selection
1: count = 1
2: for each run of this algorithm do
3:   for c ∈ [1, n] ∧ c ≠ source client do
4:     $selec_c \leftarrow H(\text{SIGN}_{sk_c}(count, \text{mIDs}, r))$
5:     broadcast
6:   end
7:   $selec_c^{max} \leftarrow \max(\{selec_c \mid c \in [1, n]\})$
8:   set c as noise leader
9:   count += 1
10: end
To cancel out the randomness, the selected clients subtract the same randomness from the masked messages (lines 28-31 of Algorithm 5). The source client may procrastinate the leader selection and noise injection procedure so as to buy time for its colluded clients to prepare sufficiently large VRF values to participate in the competition for selection and noise adding. One may apply a heartbeat protocol (Nikoletseas & Rolim (2011)) to prevent a newly selected leader from intentionally halting the noise adding stage for a long period, say 1 minute. If there is no response from the leader after a short while, a new leader is randomly selected. Furthermore, the heartbeat may help to solve the problem of the leader accidentally dropping from the network. We note that the heartbeat protocol is not our main focus in this paper.
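As a rough illustration of Algorithm 4's leader selection (a sketch only: HMAC-SHA256 stands in for the unique signature scheme SIGN, the toy keys are hypothetical, and a real VRF additionally outputs a publicly verifiable proof):

```python
import hashlib
import hmac

def selection_score(sk: bytes, count: int, mids: tuple, r: int) -> int:
    """Toy stand-in for selec_c = H(SIGN_{sk_c}(count, mIDs, r))."""
    msg = f"{count}|{sorted(mids)}|{r}".encode()
    sig = hmac.new(sk, msg, hashlib.sha256).digest()        # "signature"
    return int.from_bytes(hashlib.sha256(sig).digest(), "big")  # hashed score

# Hypothetical secret keys for clients 1..4 (client 0 being the source client)
sks = {c: bytes([c]) * 32 for c in range(1, 5)}
scores = {c: selection_score(sk, count=1, mids=(2, 5), r=42) for c, sk in sks.items()}
leader = max(scores, key=scores.get)  # the client with the maximum score leads
```

Because the fresh randomness r and mIDs enter the score, no party can precompute which client will lead a future round, matching the unpredictability argument above.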
Before replying to the source client, the clients holding labels add maskings to their gradients and hessians, while those without labels just generate and send out maskings; the noise leader (i.e., one of the masking generators) additionally injects the noise. In this way, the maskings, which guarantee perfect secrecy of the messages, are cancelled out after the aggregation of the values, and the differentially private noise solidifies the indistinguishability of each individual data entry. Note that in lines 24-34 of Algorithm 5, the maskings and masked values are in the range [0, N − 1]; N should be sufficiently large to avoid overflow, and the summation of gradients and hessians should not exceed N.
Algorithm 5: Secure aggregation with global differential privacy
1: Noise injection:
2: if c = leader then
3:   $selecthreshold_c \leftarrow H(\text{SIGN}_{sk_c}(count, \text{mIDs}, r))$
4:   broadcast
5:   count += 1
6: end
7: for c ∈ [1, n] ∧ c ≠ source client ∧ c ≠ noise leader do
8:   $selec_c \leftarrow H(\text{SIGN}_{sk_c}(count, \text{mIDs}, r))$
9:   if $selec_c$ > $selecthreshold_c$ then
10:     send $\tilde{n}_g^{(c)} = N(0, \Delta_g^2\sigma^2) + r_g^{(c)}$ and $\tilde{n}_h^{(c)} = N(0, \Delta_h^2\sigma^2) + r_h^{(c)}$ to the noise leader
11:     count += 1
12:   end
13: end
14: if c = leader then
15:   c selects k clients from the clients sending noise, $k = \lceil |\{\tilde{n}_g^{(c)}\}| \cdot p \rceil$
16:   if k < 1 then
17:     redo noise injection
18:   end
19:   notify the k clients
20:   noise aggregation: $\tilde{N}_g = k \cdot N(0, \Delta_g^2\sigma^2) + R_g$, $\tilde{N}_h = k \cdot N(0, \Delta_h^2\sigma^2) + R_h$
21: end
22: Secure aggregation:
23: for c ∈ [1, n] do
24:   $\text{mask}_g^{(c)} \leftarrow (\sum_{c' \neq c} \frac{|c-c'|}{c-c'} \cdot (H(S_{c,c'} \| 0 \| \text{query}) \bmod N)) \bmod N$
25:   $\text{mask}_h^{(c)} \leftarrow (\sum_{c' \neq c} \frac{|c-c'|}{c-c'} \cdot (H(S_{c,c'} \| 1 \| \text{query}) \bmod N)) \bmod N$
26:   $G^{(c)} = \sum_{i \in \text{mIDs}} g_i^{(c)} + \text{mask}_g^{(c)} \bmod N$
27:   $H^{(c)} = \sum_{i \in \text{mIDs}} h_i^{(c)} + \text{mask}_h^{(c)} \bmod N$
28:   if $selec_c$ > $selecthreshold_c$ ∧ received notification then
29:     $G^{(c)} = G^{(c)} - r_g^{(c)} \bmod N$
30:     $H^{(c)} = H^{(c)} - r_h^{(c)} \bmod N$
31:   end
32:   if c = leader then
33:     $G^{(c)} = G^{(c)} + \tilde{N}_g \bmod N$
34:     $H^{(c)} = H^{(c)} + \tilde{N}_h \bmod N$
35:   end
36:   send $\{G^{(c)}, H^{(c)}\}$ to the source client
37: end
E SECURITY ANALYSIS
We investigate the security and privacy properties of our protocol. First, we define the security model of our setting and the properties. Then, we prove that our protocol satisfies these properties.
Security Model. Our security is based on the random oracle model (ROM) (Smart (2016)), where the hash function outputs a uniformly random value for a new query and the same value for a previously answered query.
Adversarial Model. Our protocol is designed for the semi-honest security model (Smart (2016)), where all parties follow the protocol while trying to obtain information regarding the other parties’ inputs. We assume that the source client can collude with other clients, but the number of colluding clients is no more than n − 2.
E.1 PRIVACY GOALS
Our privacy goals can be summarized as:
• Label privacy: No adversary controlling at most n − 2 clients can learn who the owner of a label is among the honest parties.
• Data privacy: No adversary controlling at most n − 2 clients can extract the data of an honest party.
We first investigate the case where the source client is not part of the adversary. In the following theorem, we show that there exists a simulator Sim that simulates the joint view of the clients in A by only using the inputs belonging to them. This implies that A does not learn more than what they already have.
Theorem E.1 (A not including the source client).
There exists a PPT simulator Sim for all $|C| := n \geq 3$, $|\mathcal{X}| := f \geq n$, $|\mathcal{Y}| := m \geq 1$, $\bigcup_{c \in C} \mathcal{X}^{(c)}$, $\bigcup_{c \in C} \mathcal{Y}^{(c)}$ and $A \subset C$ such that $|A| \leq n - 2$, the output of Sim being indistinguishable from the output of REAL:
$$\text{REAL}_A^{C,\mathcal{X},\mathcal{Y}}(\mathcal{X}^C, \mathcal{Y}^C) \equiv \text{Sim}_A^{C,\mathcal{X},\mathcal{Y}}(\mathcal{X}^A, \mathcal{Y}^A). \quad (8)$$
Proof. In order to prove that the simulator Sim can simulate the outputs of the honest parties in H := C − A, we show that the distribution of the inputs belonging to the rest of the network cannot be distinguished from randomly generated data. In this way, the simulator can use any dummy values as the inputs of the honest parties to simulate their outputs. We simulate the view of A regarding the messages broadcast by the honest clients. A client c first performs a key exchange with the others, then, after some internal operations, outputs the values $G^{(c)}$ and $H^{(c)}$. Let us investigate the value $G^{(c)}$, which is of the form $\sum_{i \in \text{mIDs}} g_i^{(c)} + \text{mask}_g^{(c)}$, except for the noise leader, who has the additional noise $N(0, (\Delta_g\sigma)^2)$. The mask values are computed as $\sum_{c' \neq c} \frac{|c-c'|}{c-c'} \cdot H(S_{c,c'} \| 0 \| \text{query}) \bmod N$. Here, we use a hybrid argument where we modify the protocol in several steps, and for each step we show that the modifications are indistinguishable for the adversary A. In the end, we arrive at a hybrid that can be simulated by Sim.
Hybrid1: The first hybrid directly follows the protocol. The distribution of the variables and the view of A are the same as in REAL.
Hybrid2: In the second hybrid, we replace the agreed keys between honest clients $S_{c,c'}$, for all $c, c' \in H$, with random values $r_{c,c'} \in \mathbb{G}$, where $\mathbb{G}$ is the group of the key exchange protocol. In the original protocol, Diffie-Hellman key exchange is used. The replacement is indistinguishable for the adversary because of the Decisional Diffie-Hellman assumption given in Definition 4. Also, note that these random values are only available to the parties involved in the key exchange unless those parties are corrupted by the adversary.
Hybrid3: In this hybrid, we replace the mask values of the honest clients $\text{mask}_g^{(c)}$, for all $c \in H$, with random values $R^{(c)}$. Note that with the replacement in the previous step, the mask values are computed via $\sum_{c' \neq c} \frac{|c-c'|}{c-c'} \cdot H(r_{c,c'} \| 0 \| \text{query}) \bmod N$, where $r_{c,c'} \in \mathbb{Z}_N$ is a random value unknown to the adversary (if both c and c′ are honest). Because of the random oracle model, the output of the hash function is a uniformly random value that is also unknown to the adversary. Since there are at most n − 2 clients in A, there are at least two honest clients c and c′ for which the adversary cannot know the uniformly chosen output of $H(r_{c,c'} \| 0 \| \text{query})$. Then, the modular summation of these outputs includes at least one value that the adversary does not know and that is uniformly random. Thus, it cannot be distinguished from a random value $R^{(c)}$.
Hybrid4: In this hybrid, we replace the gradients of the honest clients $g_i^{(c)}$, for all $c \in H$, with ‘0’s. This is done by replacing the mask values with $R^{(c)} := R^{(c)} - \sum_{i \in \text{mIDs}} g_i^{(c)} \bmod N$ to keep the value $G^{(c)}$ the same. From the adversary’s perspective, since the $R^{(c)}$ values are unknown and uniformly randomly chosen, the replacement is not distinguishable.
In Hybrid4, we replace the gradients of the honest parties with ‘0’s, and the mask values are replaced by $R^{(c)}$, which is unknown to the adversary and chosen from a uniform distribution. Thus, a simulator Sim can simulate the outputs $G^{(c)}$ of the honest parties without necessarily knowing their inputs. The same analysis applies to the hessian value $H^{(c)}$.
Since the masking values of G^{(c)} and H^{(c)} are different and the hash function is modeled as a random oracle, the randomness in the two parts is independent and indistinguishable to the adversary A. Overall, the simulator Sim can simulate our protocol; the view of A can be simulated by replacing the inputs of the honest parties with zeros. Thus, the adversary does not learn any information about the inputs of the honest parties.

Now we analyze the case where the source client is part of A. We show that there exists a simulator Sim that simulates the joint view of the clients in A by only using the inputs belonging to them together with the summations G and H. This implies that A does not learn more than what they already have plus the summation.

Theorem E.2 (A including source client). There exists a PPT simulator Sim for all |C| := n ≥ 3, |X| := f ≥ n, |Y| := m ≥ 1, ∪_{c∈C} X^{(c)}, ∪_{c∈C} Y^{(c)} and A ⊂ C such that |A| ≤ n−2, whose output is indistinguishable from the output of REAL:

REAL^{C,X,Y}_A(X^C, Y^C) ≡ Sim^{C,X,Y}_A(G, H, X^A, Y^A),   (9)

where G = Σ_{i∈mIDs} g_i^{(c)} + N(0, (Δ_g σ)²) and H = Σ_{i∈mIDs} h_i^{(c)} + N(0, (Δ_h σ)²).

Proof. Here, we again show that Sim can simulate the outputs of the honest parties in H without knowing their inputs. Unlike in Theorem E.1, Sim is also given the summations G and H, because the adversary includes the source client. We can reuse the hybrids of Theorem E.1 up to Hybrid_4, since the inputs of the honest clients are not needed until then; we only need to update Hybrid_4 so that it takes the summation into account. The hybrids for A including the source client are as follows:

Hybrid_1, Hybrid_2, Hybrid_3: The same as in Theorem E.1.

Hybrid_4: In this hybrid, we replace the gradients of honest clients, g_i^{(c)} for all c ∈ H, with 0's, except for one client c′, whose value is set to Σ_{i∈mIDs} g_i^{(H)} mod N = G − Σ_{i∈mIDs} g_i^{(A)} mod N. The honest client c′ is chosen uniformly at random from H. From the adversary's perspective, since the R^{(c)} values are unknown and uniformly random, the replacement is not distinguishable.

Overall, the view of A can be simulated by replacing the inputs of the honest parties with zeros, except for one party, whose input is replaced with Σ_{i∈mIDs} g_i^{(H)} mod N. Thus, A does not learn any information from the honest clients except the summation Σ_{i∈mIDs} g_i^{(H)} mod N.

With Theorem E.2, we show that even an adversary A including the source client cannot learn more than the summations of the gradient and hessian values, G and H. The proof is done via a Sim that does not require the individual data of the honest clients, except for the summation. This implies that the adversary cannot distinguish which party provided which gradient or hessian values. Moreover, the parties who do not hold any of the requested g or h values send '0' together with the mask (and the noise, for the leader). Hence we provide label privacy: the adversary cannot distinguish which label's g or h values come from which honest client.

In the case where the adversary includes the source client, the summations of the gradient and hessian values are known to the adversary. In the following theorem, we show that these summations do not leak any individual data, thanks to differential privacy.

Theorem E.3 (Privacy of the Inputs). No A ⊂ C such that |A| ≤ n−2 can retrieve the individual values of the honest clients with probability

1 − Σ_{i=0}^{k̂} C^i_h C^{k̂−i}_{n−2−h} (P_t)^{k̂} (1−P_t)^{(n−k̂)} · C^k_{k̂−i} / C^k_{k̂},

where h and k̂ refer to the number of non-colluded clients and the number of clients whose selection score is larger than the threshold, respectively,
and P_t is the probability that the selection score is larger than the threshold.

Proof. If the adversary does not include the source client, then, following the previous theorems, the adversary cannot learn any of the inputs belonging to the honest parties. Otherwise, it knows the summations G and H. Since we apply differential privacy (Dwork et al. (2006a;b)), the summation cannot leak information regarding the inputs: according to Definition 5, we add differentially private noise that guarantees the security of the individual data points while the summation can still be calculated.

Proof of the probability. Note that the noise leader selects k clients out of the n clients (excluding itself and the source client) to add noise. Suppose that there are h non-colluded clients among these n−2 clients, and that k̂ is the number of clients whose selection scores are larger than the threshold. The number of events is C^{k̂}_{n−2−h} + C^1_h C^{k̂−1}_{n−2−h} + ··· + C^{k̂}_h C^0_{n−2−h}, where the events are {"k̂ colluded clients out of k̂ clients and 0 non-colluded clients", ..., "0 colluded clients out of k̂ clients and k̂ non-colluded clients"}. Therefore,

P(E_i) = C^i_h (P_t)^i (1−P_t)^{h−i} · C^{k̂−i}_{n−2−h} (P_t)^{k̂−i} (1−P_t)^{(n−h−k̂+i)} = C^i_h C^{k̂−i}_{n−2−h} (P_t)^{k̂} (1−P_t)^{(n−k̂)},

where P_t is the probability that the selection score is larger than the threshold and E_i is the i-th event. Then, the probability that the noise leader selects k colluded clients out of the k̂ clients is P_0 = C^k_{k̂−i} / C^k_{k̂}. Finally, the probability that all aggregated noise comes from colluded clients is

Σ_{i=0}^{k̂} P(E_i) · P_0 = Σ_{i=0}^{k̂} C^i_h (P_t)^i (1−P_t)^{h−i} · C^{k̂−i}_{n−2−h} (P_t)^{k̂−i} (1−P_t)^{(n−h−k̂+i)} · C^k_{k̂−i} / C^k_{k̂} = Σ_{i=0}^{k̂} C^i_h C^{k̂−i}_{n−2−h} (P_t)^{k̂} (1−P_t)^{(n−k̂)} C^k_{k̂−i} / C^k_{k̂}.

Conversely, the probability that at least one non-colluded client participates in the noise injection is

1 − Σ_{i=0}^{k̂} C^i_h C^{k̂−i}_{n−2−h} (P_t)^{k̂} (1−P_t)^{(n−k̂)} C^k_{k̂−i} / C^k_{k̂}.

Note that, because of the secure aggregation, the adversary cannot learn anything but the summation. Thus, our protocol does not require adding noise to each data point; instead, we only require the noise leader to add the noise, which prevents the retrieval of individual data from the summation. In Theorems E.1 and E.2, we show that A cannot distinguish the individual values from randomly chosen values and can only learn the summation if the source client is part of the adversary. In Theorem E.3, we show that A cannot extract the individual values of the users from the summation, due to the added noise and differential privacy. Thus, our protocol satisfies data privacy; in other words, the adversary cannot learn the data points of an honest client. It is also important to note that, since the noise leader is selected via the VRF, no adversary can guess beforehand whether any honest party will be the leader in the upcoming round. This provides additional security against the manipulation of the noise leader.
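As a numeric sanity check of the bound in Theorem E.3, the following snippet transcribes the expression directly, under purely hypothetical parameter values; `math.comb(a, b)` conveniently returns 0 when b > a, which zeroes out impossible events.

```python
# Direct transcription of the collusion probability in Theorem E.3.
from math import comb

def collusion_prob(n, h, k, k_hat, p_t):
    """Probability that ALL aggregated noise comes from colluded clients."""
    total = 0.0
    for i in range(k_hat + 1):
        total += (comb(h, i) * comb(n - 2 - h, k_hat - i)
                  * p_t**k_hat * (1 - p_t)**(n - k_hat)
                  * comb(k_hat - i, k) / comb(k_hat, k))
    return total

n, h, k, k_hat, p_t = 10, 3, 2, 5, 0.5          # hypothetical values
p_bad = collusion_prob(n, h, k, k_hat, p_t)
print("P(at least one non-colluded noise contributor) =", 1 - p_bad)
```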
F DISCUSSION

To reduce the negative impact of the noise, one may, by the infinite divisibility of the Gaussian distribution (Patel & Read (1996)), split the global noise N(0, (Δσ)²) into n parts N(0, (Δσ)²/n). A drawback is that the privacy budget then increases linearly with the number of colluding clients: for example, if GDP achieves ε-DP, then in the worst case, with n−1 colluded clients, the privacy budget rises to n×ε.

Hiding the label distribution. In the semi-honest setting, if the source client sends the missing indexes consistently, adversaries may figure out which labels are distributed on the source client by statistical analysis. We show that this issue can be tackled. In the proposed protocol, the source client broadcasts the missing data indexes mID (line 16 of Algorithm 2). FEVERLESS can be extended to avoid this type of leakage at the price of extra communication overheads. Specifically, during the broadcasting period, the source client should send the indexes of an entire bucket instead of mID, and the rest of the protocol remains unchanged. In this way, the others cannot learn the distribution of the labels, because all clients share the same index set I. If we assume the labels are uniformly distributed over the clients, the extra overhead is bounded by |I|/|C|. This cost is clearly noticeable for datasets with a large number of data points.

Other security tools. The masking scheme realizing the secure aggregation may be replaced with MPC (Damgård et al. (2012); Wu et al. (2020)) or additively homomorphic encryption (Paillier (1999)). However, the major defect of these tools is that they entail labor-intensive computation for encryption, which may not scale well to large datasets. Due to this concern, we only use lightweight computation in FEVERLESS and, furthermore, strengthen the security to "perfect secrecy".

In our design, the selection of the noise leader is handled by a VRF. We note that there may be other options to fulfil this goal. For example, Proof of Elapsed Time (PoET) (Chen et al. (2017); Corso (2019)) is an interesting and effective mechanism used to maintain consensus among distributed peers in Hyperledger Sawtooth. It provides a fair and trusted lottery strategy to select a block winner per consensus round. Sharing the same philosophy as the VRF, it could be deployed in our protocol to select the leader. Building a more efficient noise-leader selection algorithm remains an interesting open problem.

G MORE DETAILS ON EXPERIMENT SETUP

All the experiments are implemented in Python and conducted on a cluster of machines with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 15GB RAM in a local area network. Intuitively, the smaller we set ε, the more secure FEVERLESS will be, but the larger the added noise; this trade-off is visible in the experimental results. As for the cryptographic tools, we set the key sizes of DH and Paillier to 160 bits and 1024 bits, respectively (to save some time in running the experiments); these sizes reach a symmetric security level with an 80-bit key length. Note that one may indeed increase the key size to obtain stronger security (a stronger security level does not affect the training accuracy), but this brings a longer experiment time as a side effect. We use the 1024-bit MODP group with 160-bit prime-order subgroup from RFC 5114 (https://tools.ietf.org/html/rfc5114) for the DH key exchange. SHAKE-256 (Dworkin (2015)), a member of the SHA-3 family, is used as the hash function in the leader selection and the secure aggregation.

• Credit Card: a commercial dataset used for predicting whether customers will make their payments on time. It provides 30,000 samples, and each sample comprises 23 features.
• Bank Marketing: consisting of 45,211 data points with 17 features; the goal of bank marketing is to predict whether a client will subscribe to a term deposit.
• Banknote Authentication: offering 1,372 data points with 4 features, this dataset is used to classify authentic and forged banknotes.
Note that, unlike traditional tabular data, the features in this dataset are extracted from images taken from genuine and forged banknote-like specimens through a wavelet transform (Antonini et al. (1992)). With such a small-scale dataset, the trained model may not be robust to noise, which negatively impacts accuracy.

H ADDITIONAL EXPERIMENTS AND FIGURES

We present additional experiments; all the experimental settings follow those defined in Section 4.1. In each figure, we show the results for the datasets Credit Card (left), Bank Marketing (middle) and Banknote Authentication (right). Note that the comparison among FEVERLESS, LDP and AHE requires #client = 2; when #client = 1, we can only show the results of the baseline. The average performance of FEVERLESS in these figures is highlighted as the red dotted line. Via these experiments, we show how the accuracy varies with an increasing number of clients for the baseline, FEVERLESS and LDP, w.r.t. different tree structures and ε. Figures 7-18 present the best case, where only one non-colluded client adds the noise. The other cases are demonstrated in Figures 19-26, with selection scores 1/2 and 1/3. Beyond those, we also add comparison results for AHE in Tables 2-4, with ε = 2.

In general, without any added noise, the baseline reaches the highest accuracy, and its accuracy remains stable as the client number increases. The performance of FEVERLESS is right behind that of the baseline and also remains stable. Note that there are slight fluctuations in some figures (e.g., Figures 10, 12 and 14), especially for the cases with complex tree structures and small ε. The LDP approach does harm accuracy, as seen from the continuously and significantly falling bars in the figures: naturally, when more clients engage in the training, more noise is added into the model, which makes LDP's performance fall far below the red line. Note that the banknote dataset is composed of 4 features; since in the VFL setting every client should have at least one feature, we can only allow up to 4 clients to participate in the training. Besides, FEVERLESS does not perform well on the banknote dataset, because the model is trained on a small number of samples, so its robustness is seriously affected by the noise.

H.1 BEST CASE: ACCURACY ON CLIENT NUMBER

[Figures 7-18 and Tables 2-4 appear here.]

H.2 OTHER CASES: ACCURACY ON CLIENT NUMBER

H.3 ADDITIONAL RESULTS ON ACCURACY FOR BANKNOTE AUTHENTICATION

H.4 ADDITIONAL RESULTS ON TIME

In Figures 29-33, we show the time performance for various numbers of clients, trees and depths. Besides, we present the concrete results in Tables 5-7. Table 8 also shows the specific runtime of tree construction with #tree = 4 and depth = 4 among the baseline, FEVERLESS, LDP and AHE. In general, the runtime of FEVERLESS is slightly higher than that of the baseline. Compared to AHE, FEVERLESS significantly reduces the training time while preserving privacy; this advantage is clearly seen in the cases using complex tree structures. Note that AHE can be replaced by other, more complex cryptographic solutions, such as secure MPC, which can also maintain data/label privacy, but MPC-based solutions consume even more runtime.
H.5 RESULTS ON COMMUNICATION COST

In Figures 34-36, we demonstrate the communication cost for various numbers of clients, trees and depths. For convenience of comparison, we set #clients = 4, #tree = 4 and depth = 4 as defaults. We use Tables 9-11 to detail the concrete costs. To sum up, the communication cost of FEVERLESS is almost the same as those of the baseline and LDP; compared to AHE, FEVERLESS significantly reduces the cost while maintaining privacy.

[Figures 34-36 and Tables 9-11 appear here.]
1. What is the focus and contribution of the paper on vertical federated learning?
2. What are the strengths of the proposed approach, particularly in terms of security and efficiency?
3. What are the weaknesses of the paper, especially regarding its performance on small datasets?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper

The paper introduces a novel setting for vertical federated learning in which labels are distributed among clients, and proposes a novel fast and secure protocol for this setting based on XGBoost. The protocol enables secure aggregation of gradients and hessians for XGBoost via (a) a masking scheme based on Diffie-Hellman key exchange and a key derivation function and (b) global differential privacy. The security analysis presented in the paper demonstrates that label/feature privacy is preserved even if n−2 clients collude. Finally, the efficacy of the proposed protocol is demonstrated via experimental evaluations on multiple datasets.

Review

Strengths
• The writing and figures are clear.
• Thorough review of background material.
• Theoretical and empirical analysis of computation and communication costs demonstrates that the proposed approach compares very favorably to MPC-based solutions.
• Thorough security analysis proves that the proposed approach is robust to collusion of up to n−2 clients.
• The proposed combination of Diffie-Hellman key exchange and global differential privacy is ingenious, and the experimental results demonstrate the clear advantage of using this combination compared to local differential privacy at each client.
• The proposed novel setting for vertical federated learning with decentralized labels is very well motivated, as shown by the COVID-19 example highlighted in the paper.

Weaknesses
• The proposed method performs poorly on small datasets.
ICLR
Title
FEVERLESS: Fast and Secure Vertical Federated Learning based on XGBoost for Decentralized Labels

Abstract
Vertical Federated Learning (VFL) enables multiple clients to collaboratively train a global model over vertically partitioned data without revealing private local information. Tree-based models, like XGBoost and LightGBM, have been widely used in VFL to enhance the interpretability and efficiency of training. However, there is a fundamental lack of research on how to conduct VFL securely over distributed labels. This work is the first to fill this gap by designing a novel protocol, called FEVERLESS, based on XGBoost. FEVERLESS leverages secure aggregation via an information masking technique and global differential privacy provided by a fairly and randomly selected noise leader to prevent private information from being leaked in the training process. Furthermore, it provides label and data privacy against an honest-but-curious adversary, even in the case of collusion of n−2 out of n clients. We present a comprehensive security and efficiency analysis for our design, and the empirical results from our experiments demonstrate that FEVERLESS is fast and secure. In particular, it outperforms the solution based on additive homomorphic encryption in runtime cost and provides better accuracy than the local differential privacy approach¹.

¹Code is available at: https://github.com/feverless111/vfl

1 INTRODUCTION

Traditional centralized deep learning models, which demand the collection of a considerable amount of clients' data to maintain high accuracy, may to some degree increase the risk of data breaches. Data may not be easily shared among different entities due to privacy regulations and policies. To tackle this "data island" problem (Yang et al. (2019a)), Google proposed Federated Learning (FL) (McMahan et al. (2017)) to allow multiple clients to train a global model without sharing private data. The basic paradigm of FL is that all clients train local models with their own data, and then the information of the local models, e.g., gradients, is exchanged to produce a global model. Based on different types of data partition (Yang et al. (2019a)), FL can be mainly categorized into Horizontal Federated Learning (HFL) and Vertical Federated Learning (VFL). The former focuses on training with horizontally partitioned data, where clients share the same feature space but differ in the data index set. Several research works (Shokri & Shmatikov (2015); Orekondy et al. (2019); Geiping et al. (2020); Li & Han (2019)) have found that the training data of HFL is still at high risk of leakage, although private data is kept locally. Other studies (Phong et al. (2018); Truex et al. (2019); Xu et al. (2019); Zhang et al. (2020); Zhu et al. (2020)) have been dedicated to enhancing the security of HFL. On the contrary, VFL is mainly applied in the scenario of training with vertically partitioned data (Wu et al. (2020); Cheng et al. (2021)), where clients share the same data index set but differ in feature space. In this paper, our principal focus is to achieve privacy-preserving training in VFL. To the best of our knowledge, many existing studies (Hardy et al. (2017); Nock et al. (2018); Liu et al. (2020); Yang et al. (2019b); Cheng et al. (2021); Chen & Guestrin (2016); Wu et al. (2020)) have proposed innovative approaches to prevent private information breaches in the context of VFL. Specifically, (Hardy et al. (2017)) introduced encryption-based privacy-preserving logistic regression to safeguard the information of data indexes. (Nock et al.
(2018)) gave a comprehensive discussion on the impact of ID resolution. (Yang et al. (2019b)) introduced a scheme without using a coordinator, for a limited number of clients. Recently, (Liu et al. (2020)) proposed an asymmetric VFL scheme for logistic regression tackling privacy concerns on ID alignment. Unlike the training models used in the aforementioned works, XGBoost (Chen & Guestrin (2016)), which is one of the most popular models applied in VFL, can provide better interpretability, easier parameter tuning, and faster execution than deep learning in tabular data training (Goodfellow et al. (2016); LeCun et al. (2015)). These practical features and advantages draw academia's and industry's attention to the research on XGBoost, especially in the privacy-preserving context. (Wu et al. (2020)) introduced an approach for tree-based model training through a hybrid method combining homomorphic encryption and secure Multi-Party Computation (MPC) (Goldreich (1998); Bonawitz et al. (2017)). After that, (Cheng et al. (2021)) proposed a similar system to train XGBoost (Chen & Guestrin (2016)) securely over vertically partitioned data by using Additively Homomorphic Encryption (AHE). By applying Differential Privacy (DP) (Dwork (2008)), (Tian et al. (2020)) designed a VFL system to train GBDT without the need for encryption/decryption. However, most of the above solutions based on AHE and MPC do not scale well in terms of efficiency when training XGBoost. Beyond that, all the existing schemes basically assume that the training labels are managed and processed by a sole client. In practice, a VFL scheme supporting distributed labels is necessary. For instance, multiple hospitals, clinics and health centers may currently serve as COVID-19 test spots and aim to train a model, e.g., XGBoost, to predict, with good interpretability, whether citizens (living in various locations) are infected, based on their health records and symptoms. In this context, the labels (i.e., the test results) are likely distributed among different health authorities, even for the same group of patients, while the feature space is vertically partitioned. For example, a cardiac hospital only maintains heart data for the patients, while a psychiatric center holds the mental records, and both authorities may collect and manage each of their registered patients' labels locally. Another common scenario is the financial sector, where multiple bank branches and e-commerce companies prefer to build a global model to predict whether their customers will pay for some service (e.g., a car loan) on time. The banks have part of the features about the customers (e.g., account balance, funding in-and-out records), while the companies may hold other features (e.g., payment preferences). Since the customers may get the same service, e.g., a loan, from different institutions, it is clear that the labels must be distributed rather than centralized. In addition to the efficiency and functionality aspects, one may also consider capturing stronger security for VFL. Training XGBoost involves the computation of the first- and second-order derivatives of the loss function (note that gradients and hessians contain information about the labels), and their aggregation is required in each round.
In the context where the labels are held by different clients, if the gradients and hessians are transmitted as plaintexts and their summations are known to an aggregator (who could be one of the clients engaged in the training), inference and differential attacks (Appendix C) can easily be conducted by the aggregator, resulting in information leakage. To tackle these problems, we propose a fast and secure VFL protocol, FEVERLESS, to train XGBoost (Appendix B.1) on distributed labels without disclosing either feature or label information. In our design, the privacy protection is guaranteed by secure aggregation (based on a masking scheme) and Global Differential Privacy (GDP) (Appendix B.6). We leverage masking instead of costly multi-party computation, and we guarantee a "perfect secrecy" level for the masked data. In GDP, we use a Verifiable Random Function (VRF) (Appendix B.5) to select a noise leader per round (who cannot be predicted and pre-compromised in advance) to aggregate noise from the "selected" clients, which helps maintain model accuracy. Our contributions can be summarized as follows. (1) We define VFL in a more practical scenario where the training labels are distributed over multiple clients. Beyond that, we develop FEVERLESS to train XGBoost securely and efficiently with an elegant combination of a secure aggregation technique (based on Diffie-Hellman (DH) key exchange (Appendix B.2) and a Key Derivation Function (KDF) (Appendix B.4)) and GDP. (2) We give a comprehensive security analysis demonstrating that FEVERLESS is able to safeguard label and feature privacy in the semi-honest setting, and that it maintains robustness even in the case where n−2 out of n clients collude. (3) We implement FEVERLESS and perform training-time and accuracy evaluations on different real-world datasets. The empirical results show that FEVERLESS can maintain efficiency and accuracy simultaneously, and that its performance is comparable to the baseline, a "pure" XGBoost without any encryption or differential privacy. Specifically, training on the credit card and bank marketing datasets takes just 1% and 6.5% more runtime than the baseline and, meanwhile, the accuracy is only lower than that of the baseline by 0.9% and 3.21%, respectively².

²For the banknote authentication dataset, FEVERLESS takes 13.96% more training time than the baseline, and the accuracy is 30.4% lower. This is because the model is trained on a small-scale dataset, so its robustness is seriously affected by the noise.

2 PROBLEM FORMULATION

2.1 SYSTEM MODEL

Before proceeding, we give some assumptions on our model. We suppose that a private set intersection (Kolesnikov et al. (2017); Pinkas et al. (2014)) has been used to align the data IDs before the training starts, so that each client shares the same data index space I; the names of the features, however, are not shared among clients. As for the information of the label distribution (the indexes indicating which client a label belongs to, e.g., that the label of the i-th data instance is held by client A), we consider the following conditions: (1) this information is revealed to the public in advance; or (2) this information is not published, but the training can still be accomplished (with extra cost). We also consider that the training is conducted on a dataset with m samples comprising a feature space X = {x_1, ..., x_m}, each sample containing f features, and a label set Y = {y_1, ..., y_m}. Besides, the features {X_j^{(c)} | j ∈ {1, ..., f}} and labels {y_i^{(c)} | i ∈ {1, ..., m}} are held among n clients, where each client has at least one feature and one label. X_j^{(c)} and y_i^{(c)} refer to the j-th feature and the i-th label owned by the c-th client, respectively.
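Before the formal definitions, the following toy sketch (entirely hypothetical data and client names) illustrates this partition: a shared index space, vertically split feature columns, and labels that each live on exactly one client.

```python
# Hypothetical DL-VFL data layout: shared indexes, split features, distributed labels.
I = list(range(6))                                  # shared sample index space
features = {                                        # vertical feature split
    "client_A": {"age":     [23, 45, 31, 52, 40, 29]},
    "client_B": {"balance": [10, 200, 55, 80, 5, 120],
                 "loans":   [0, 2, 1, 3, 0, 1]},
    "client_C": {"region":  [1, 0, 1, 1, 0, 0]},
}
labels = {                                          # each label owned by one client
    "client_A": {0: 1, 3: 0},
    "client_B": {1: 0, 4: 1},
    "client_C": {2: 1, 5: 0},
}
# Disjointness of label ownership, as Definition 1 below requires:
owned = [i for c in labels for i in labels[c]]
assert sorted(owned) == I and len(set(owned)) == len(owned)
```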
Considering the practical scenario wherein the training labels are distributed among the clients, we propose a new variant of VFL, named VFL over Distributed Labels (DL-VFL). The concrete definition is given as follows.

Definition 1 (DL-VFL). Given a training set with m data samples consisting of feature space X, label space Y, index space I and a client set C, we have:

X^c ∩ X^{c′} = ∅,  Y^c ∩ Y^{c′} = ∅,  I^c = I^{c′},  ∀c, c′ ∈ C, c ≠ c′.   (1)

A client c participating in DL-VFL shares the same sample ID space I with the corresponding labels, where a single label belongs to only one client, and the different clients hold subsets of X sampled from the feature space.

To achieve privacy-preserving XGBoost training, we further define two roles.

Definition 2 (Source client). A source client with split candidates wants to compute the corresponding L_split based on Eq. (4), but some labels are missing, so Σg_i and Σh_i cannot be derived locally.

For the case where a source client does not hold all the labels in the current split candidates, we propose a solution based on secure aggregation and global differential privacy that helps the source client compute L_split while safeguarding the other clients' privacy. We consider the two conditions regarding whether the label distribution is publicly known; we find that, if we keep the label distribution hidden, extra communication overhead is needed to perform the training. The detailed explanation is given in Appendix F. Note that each client may have the chance to act as a source client, because all the labels are distributed: the source client leads the L_split computation, and the other clients provide the missing label values to the source client. To achieve GDP, we define a noise leader, who is selected fairly and randomly from all clients (except for the source client), preventing clients from being compromised beforehand.

Definition 3 (Noise leader). By using the VRF, a noise leader, the client generating the maximum leader score, is responsible for aggregating the differentially private noise from a portion of the clients and adding the noise to the gradients and hessians.

Note that we summarize the main notations in Table 1 (see Appendix A).

2.2 THREAT MODEL

We mainly consider potential threats incurred by the participating clients and by outside adversaries. We assume that all clients are honest-but-curious, which means they strictly follow the designed algorithms but try to infer the private information of other clients from the received messages. Besides, we also consider the collusion of up to n−2 clients to conduct attacks, and we assume that at least one non-colluded client adds noise per round. Through authenticated channels, the DH key exchange can be securely executed among the clients. Other messages are transmitted over public channels, and outside attackers can eavesdrop on these channels and try to reveal information about the clients during the whole DL-VFL process. Note that this paper mainly focuses on solving the privacy issues in training DL-VFL based on XGBoost; other attacks, like data poisoning and backdoor attacks that deteriorate model performance, are orthogonal to our problem.

3 A PRACTICAL PRIVACY-PRESERVING PROTOCOL

3.1 FEVERLESS PROTOCOL DESCRIPTION

To prevent a source client from knowing the gradients and hessians sent by the other clients, one may directly use MPC (Damgård et al.
(2012)) based on AHE (Paillier (1999); Wu et al. (2020)). But this method incurs expensive computation costs. Avoiding complex mechanisms like MPC, we leverage a secure aggregation protocol via a masking scheme based on DH key exchange (Bonawitz et al. (2017); Ács & Castelluccia (2011); Tian et al. (2020)). By further using a KDF and a hash function (see Appendix B.3 & B.4), our maskings (for gradients and hessians) can be derived without exchanging keys in every training round. Our approach significantly reduces the communication cost while remaining robust against up to n−2 colluded clients. Meanwhile, the secure aggregation provides "perfect secrecy" for the broadcast messages. After the broadcast messages are received, the maskings cancel out at the source client's side. But using the masking alone cannot defend against differential attacks. One may consider using Local Differential Privacy (LDP) (Kairouz et al. (2014)), in which each client adds noise to every sent message at barely any extra computation cost; however, the noise accumulated from all clients may seriously affect the model accuracy. To tackle this problem, we use a GDP (Wei et al. (2020)) approach with noise leader selection. A hybrid method is finally formed from the masking scheme and GDP, so that each client's sensitive information is protected by the "masks" while the aggregated values are secured by the noise injected by the chosen clients. We briefly introduce our design here; the detailed algorithms and more explanations are given in Appendix D.

Assume each client c ∈ [1, n] generates its secret key sk_c and computes the gradients g_i^{(c)} and hessians h_i^{(c)} locally, where {i | y_i ∈ Y^c}. FEVERLESS works as follows.

1. Broadcast missing indexes. The source client broadcasts mIDs = {i | y_i ∉ Y^c}.

2. Key exchange computation. Each client c computes the public key pk_c = g^{sk_c} from its secret key sk_c, sends pk_c to the other clients, and computes the corresponding shared keys³ {S_{c,c′} = pk_{c′}^{sk_c} = g^{sk_c sk_{c′}} | c, c′ ∈ C, c ≠ c′} from its secret key sk_c and the received public keys {pk_{c′} | c′ ∈ C}.

3. Data masking. Each client c runs the masking generation algorithm to compute the maskings protecting the gradients and hessians. Specifically, based on the KDF, the clients' indexes and the number of queries, the maskings are generated as⁴ mask_g^{(c)} ← Σ_{c≠c′} (|c−c′|/(c−c′)) · H(S_{c,c′} ‖ 0 ‖ query) and mask_h^{(c)} ← Σ_{c≠c′} (|c−c′|/(c−c′)) · H(S_{c,c′} ‖ 1 ‖ query). Then the masked gradients G^{(c)} and hessians H^{(c)} are generated as G^{(c)} = Σ_{i∈mIDs} g_i^{(c)} + mask_g^{(c)} − r_g^{(c)} and H^{(c)} = Σ_{i∈mIDs} h_i^{(c)} + mask_h^{(c)} − r_h^{(c)}.

4. Noise leader selection. Each client generates the selection score selec_c using the VRF, H(SIGN_{sk_c}(count, mIDs, r)), and broadcasts it, where count is the number of times the clients have conducted the VRF, r is a fresh random number, and SIGN is the signature scheme (see Appendix B.5 for more details). The client with the maximum score becomes the noise leader. For ease of understanding, in Figure 1 we assume client n, with the largest selection score selec_n^{max}, is the leader. (A code sketch of this scoring is given after the step list.)

³Shared keys are only generated once; the KDF is used to generate the remaining maskings.
⁴For simplicity, we omit the modular computations. The complete calculation processes are elaborated in Algorithms 3-5.

5. Noise injection. a) The noise leader selects k clients to add noise (for the details of the selection, please see Algorithm 5 in Appendix D).
b) The selected clients send {ñ_g^{(c)} = N(0, Δ_g²σ²) + r_g^{(c)}, ñ_h^{(c)} = N(0, Δ_h²σ²) + r_h^{(c)} | c ∈ k} to the noise leader, in which r_g^{(c)} and r_h^{(c)} are two random values used to mask the noise. c) The leader aggregates the noise, Ñ_g = k · N(0, Δ_g²σ²) + R_g and Ñ_h = k · N(0, Δ_h²σ²) + R_h, and further adds them to G^{(n)} and H^{(n)}, respectively.

6. Aggregation and computation. All clients send the masked values to the source client. The source client computes Σ_{c=1}^{n} G^{(c)} + k·N(0, Δ_g²σ²), Σ_{c=1}^{n} H^{(c)} + k·N(0, Δ_h²σ²) and L_split.

7. Final update. The source client with the maximum L_split updates the model following XGBoost (Chen & Guestrin (2016)) and broadcasts the updated model and the data indexes in the child nodes, as step 8.

Figure 1 gives an overview of FEVERLESS. Note that this process can be conducted iteratively. For simplicity, only the core calculation processes are shown here; more details are given in Appendix D.
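The noise leader selection of step 4 can be sketched as follows; HMAC-SHA-256 stands in here for the verifiable signature SIGN_sk (a deployment needs an actual VRF, i.e., a unique, publicly verifiable signature, so other clients can check the score), and all names and parameters are illustrative.

```python
# Toy version of step 4's scoring logic; HMAC is only a stand-in for SIGN_sk.
import hmac
import hashlib
import os

clients = {f"client_{c}": os.urandom(32) for c in range(5)}   # secret keys
count, m_ids, r = 1, b"3,7,11", os.urandom(8)                 # fresh per round

def selection_score(sk: bytes) -> int:
    msg = count.to_bytes(4, "big") + m_ids + r
    sig = hmac.new(sk, msg, hashlib.sha256).digest()          # stand-in for SIGN_sk
    return int.from_bytes(hashlib.shake_256(sig).digest(8), "big")

scores = {c: selection_score(sk) for c, sk in clients.items()}
leader = max(scores, key=scores.get)          # maximum score wins the round
print("noise leader this round:", leader)
```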
3.2 THEORETICAL ANALYSIS

Computation cost: We use B and d to denote the number of buckets and the maximum depth, respectively, and f^{(c)} here represents the number of features held by a client c. For each client c, the computation cost can be divided into 4 parts: (1) performing at most f^{(c)} · B · NT · (2^d − 1) computations of L_split and w, taking O(f^{(c)} · B · NT · 2^d) time; (2) creating n−1 shared keys and 1 public key, which is O(n); (3) taking O(f^{(c)} · B · NT · 2^d) time to compute the VRF outputs, select the noise leader and generate the noise; (4) generating 2f^{(c)} · B · NT · (2^d − 1) maskings, which takes O(f^{(c)} · B · NT · 2^d · n) time. Overall, each client's computation complexity is O(f^{(c)} · B · NT · 2^d · n).

Communication cost: Each client's communication cost consists of (1) broadcasting at most f^{(c)} · B · NT · (2^d − 1) sets of missing indexes mID; (2) broadcasting 1 public key and receiving n−1 public keys from the other clients; (3) broadcasting 1 leader selection score and sending noise to the noise leader at most f^{(c)} · B · NT · (2^d − 1) times; (4) sending the source client the 2 masked gradients and hessians of size 2⌈log₂N⌉. Therefore, the overall communication cost is f^{(c)} · B · NT · (2^d − 1) · (‖mID‖ · α_I + α_L + α_N + n · α_K + 2⌈log₂N⌉), where α_I, α_L, α_N and α_K refer to the number of bits of an index, the leader selection score, the noise and a public key, respectively. Thus, we have the communication complexity O(f^{(c)} · B · NT · 2^d).

3.3 SECURITY ANALYSIS

We prove that FEVERLESS provides label and data privacy against an adversary controlling at most n−2 clients in the semi-honest setting (Smart (2016)). Here, we provide a brief summary of our analysis and theorems; the formal proofs, in the random oracle model, are given in Appendix E.

Label Privacy: Label privacy implies that the owner of a label among the honest parties should not be leaked to the adversary. We achieve this by using a secure aggregation mechanism in which the masks are created via DH key exchange and the KDF. In brief, we show that, because of the Decisional DH problem (see Definition 4), the adversary cannot distinguish the individual values from randomly chosen ones. That is why the adversary A cannot learn the owner of a label.

Data Privacy: FEVERLESS provides data privacy, meaning that an adversary A cannot extract the data of any honest party. Individual data values are not separable from random values because of the secure masking. If the source client is not part of the adversary, no data information is leaked. But we require an additional countermeasure for the case where the source client is part of the adversary, because it can collect the summation of the data values. We use differential privacy (Dwork et al. (2006a;b)) to achieve data privacy: because of the noise added by differential privacy, the adversary cannot learn the individual data of an honest client. Moreover, we select the noise clients via the VRF, which ensures that the noise leader cannot be predicted or compromised in advance.

Theorem 3.1 (A not including source client). There exists a PPT simulator Sim for all |C| := n ≥ 3, |X| := f ≥ n, |Y| := m ≥ 1, ∪_{c∈C} X^{(c)}, ∪_{c∈C} Y^{(c)} and A ⊂ C such that |A| ≤ n−2, whose output is indistinguishable from the output of REAL: REAL^{C,X,Y}_A(X^C, Y^C) ≡ Sim^{C,X,Y}_A(X^A, Y^A).

Theorem 3.2 (A including source client). There exists a PPT simulator Sim for all |C| := n ≥ 3, |X| := f ≥ n, |Y| := m ≥ 1, ∪_{c∈C} X^{(c)}, ∪_{c∈C} Y^{(c)} and A ⊂ C such that |A| ≤ n−2, whose output is indistinguishable from the output of REAL: REAL^{C,X,Y}_A(X^C, Y^C) ≡ Sim^{C,X,Y}_A(G, H, X^A, Y^A), where G = Σ_{i∈mIDs} g_i^{(c)} + N(0, (Δ_g σ)²) and H = Σ_{i∈mIDs} h_i^{(c)} + N(0, (Δ_h σ)²).

Theorem 3.3 (Privacy of the Inputs). No A ⊂ C such that |A| ≤ n−2 can retrieve the individual values of the honest clients with probability 1 − Σ_{i=0}^{k̂} C^i_h C^{k̂−i}_{n−2−h} (P_t)^{k̂} (1−P_t)^{(n−k̂)} C^k_{k̂−i}/C^k_{k̂}, where h and k̂ refer to the number of non-colluded clients and the number of clients whose selection score is larger than the threshold, respectively, and P_t is the probability of the selection score being larger than the threshold.

4 EXPERIMENT

We evaluate accuracy, runtime performance and communication cost, and compare our design with two straightforward secure approaches: one based on LDP (for accuracy), and the other built on AHE with GDP (for runtime). These approaches are the most commonly used components for privacy-preserving FL, and they can serve as building blocks for more complex mechanisms, e.g., MPC. We note that our protocol should intuitively outperform MPC-based solutions, and one may leverage our source code for further comparisons if interested. In the experiments, the baseline, which is the pure XGBoost algorithm, follows the training process of Figure 1 without using any of the privacy-preserving tools (i.e., skipping the corresponding steps in Figure 1). LDP does not conduct the DH key exchange; instead, each client injects noise into the aggregation of gradients and hessians. AHE follows Figure 1 except for the DH key exchange: each client sends additively encrypted messages to the source client in the aggregation step. Here we show the performance of the best case, where only one (non-colluded and randomly selected) client adds noise per round (k = 1). For the other results (where k ≠ 1), see Appendix H.2. We present the communication cost in Appendix H.5.

4.1 EXPERIMENT SETUP

To present comprehensive results on accuracy, we set ε to 10, 5, 2 and 1, and δ is set to 10⁻⁵. In terms of accuracy and runtime, we evaluate different situations by varying the number of clients, the number of trees, and the maximum depth of the trees (from 2 to 10). The other training parameters follow the suggestions in (Chen & Guestrin (2016)) and the XGBoost library (https://xgboost.readthedocs.io/). To deliver fair results, we conduct each test for 20 independent trials and report the average.

Datasets. We run the experiments on three datasets - Credit Card (Yeh & Lien (2009)), Bank Marketing (Moro et al. (2014)) and Banknote Authentication (https://archive.ics.uci.edu/ml/datasets/banknote+authentication) - for classification tasks.
To fairly investigate the model performance in DL-VFL, we make the labels as sparse as possible, distributing them uniformly over the clients. We give more details of the experiment setup in Appendix G.

4.2 EVALUATION ON ACCURACY

In Figure 2, we present a clear picture of the accuracy as a function of #tree and the maximum depth under (2, 10⁻⁵)-DP. We merge the results over #client for each tree structure, i.e., each bar reports the mean accuracy over the different client numbers. The accuracy of the baseline on credit card (about 0.82) and bank marketing (nearly 0.9) remains unchanged as #tree and the maximum depth increase, while the accuracy on banknote authentication rises from 0.9 to approximately 1.0. To highlight the differences and ensure that all results are displayed clearly, we set the accuracy ranges to [0.5, 0.9], [0.5, 1] and [0, 1] for the three datasets, respectively. Note that the performance as a function of #client is given in Appendix H.1.

Compared with the baseline, as shown in the top and middle rows of Figure 2, FEVERLESS and LDP suffer from continuously shrinking accuracy as the tree structure becomes more complex. This is because the injected noise accumulates in the model as the number of queries increases, and the accuracy is particularly affected by the depth. In the worst case, where #tree and the maximum depth both equal 10, FEVERLESS decreases by 10.37% (resp. 14.98%), and LDP drops by 24.78% (resp. 24.59%) on credit card (resp. bank marketing). But on average, FEVERLESS' accuracy only shrinks by around 0.9% (resp. 3.21%), while LDP suffers an estimated 3x (resp. 2x) accuracy loss. The difference in the degree of deterioration mainly comes from how much noise is added per query. We note that the deterioration of FEVERLESS is independent of #client; thus, we can maintain great accuracy even when a considerable number of clients participate.

Although less noise is added in FEVERLESS, the accuracy still falls to almost the same level as LDP (around 50%, like a random guess in binary classification) in the bottom row of Figure 2. This is because the model is trained on an extremely small dataset, which makes it hard to maintain robustness and leaves the model relatively sensitive to noise. With a larger ε, our advantage becomes clearer; the experiments conducted on the banknote authentication dataset with larger ε are given in Appendix H.3.

To distinguish the performance of FEVERLESS and LDP more clearly, Figure 3 shows the comparison over different ε when #depth and #tree are set to 10. The performance of the model decays as ε decreases. In the left (resp. middle) plot of Figure 3, the averaged accuracy of FEVERLESS falls from 0.7686 to 0.5967 (resp. from 0.8517 to 0.6831), while that of LDP decreases to 0.5299 (resp. 0.5853). We notice that the highest values of LDP stay at the same level as those of FEVERLESS. This is because, in the case of 2-client training, only one client needs to add noise in LDP, which is identical to our GDP solution. Finally, the worst case is seen in the right plot of Figure 3, due to the weak robustness of the model obtained from banknote authentication; the results there are far from the baseline. But even in this case, FEVERLESS still holds a slight advantage over LDP.

4.3 EVALUATION ON TRAINING TIME

To highlight the runtime complexity, we again average the results over the client number for each tree structure.
We further set the time ranges to [0s, 9,500s], [0s, 3,500s] and [0s, 110s] for the three datasets to deliver visible results. Note that, since the banknote dataset contains the fewest samples, it delivers the best training efficiency here. Figure 4 presents the comparison of the training time by varying the maximum depth and the number of trees among the datasets. The training time increases exponentially with the depth and linearly with the number of trees, which is consistent with our analysis given in Section 3.2. In Figure 4, compared with the baseline, the runtime of FEVERLESS increases by at most 110.3s (resp. 50s, 4.3s), while AHE incurs around a 70x (resp. 48x, 21x) spike on credit card (resp. bank marketing, banknote authentication), where #depth and #tree equal 10. In the average case, FEVERLESS consumes approximately 1% (resp. 6.5%, 13.96%) more training time than the baseline, while AHE requires 351% (resp. 155.1%, 674%) extra, w.r.t. the three datasets. Its poor performance is due to the laborious calculations in encryption, where each client has to conduct an encryption per query. By contrast, the maskings in FEVERLESS avoid these excessive costs. We further investigate the runtime performance w.r.t. #client in Appendix H.

5 CONCLUSION AND FUTURE WORK

We consider a practical scenario in which labels are distributed over different clients in VFL. By leveraging secure aggregation and GDP, we present a novel system, FEVERLESS, to train XGBoost securely. FEVERLESS achieves perfect secrecy for labels and data, and adversaries cannot learn any information about the data if the source client is not corrupted. With DP against differential attacks, the source client knows nothing more than the summation. Our design is also robust to the collusion of n−2 out of n clients. The experimental results show that FEVERLESS is fast and accurate, taking only 1% extra training time and sacrificing 0.9% accuracy compared to pure XGBoost. In Appendix F, we discuss how to reduce noise, hide the distribution of labels and use other security tools. Although our system achieves great performance in terms of security and efficiency, its accuracy still suffers on small-scale datasets. This remains an open problem, and we will also consider secure solutions against malicious adversaries.

A NOTATIONS

The frequently used notations are summarized in Table 1.

B PRELIMINARIES

B.1 XGBOOST

XGBoost (Chen & Guestrin (2016)) is a popular tree-based model for tabular data training that can provide better interpretability, easier parameter tuning and faster execution speed than deep learning (Goodfellow et al. (2016); LeCun et al. (2015)). It also outperforms other well-known boosting tree systems in terms of accuracy and efficiency, like Spark MLlib (Meng et al. (2016)) and H2O (Chen & Guestrin (2016)), especially for large-scale datasets. Therefore, in this paper, we use XGBoost as a building block for classification tasks. Assume a training set with m data points composed of the feature space X = {x_1, ..., x_m} and the label space Y = {y_1, ..., y_m}. Before the training starts, every feature is sorted based on its values, and split candidates are set for each feature. XGBoost builds trees based on the determination of the defined split candidates and some pruning conditions. A hypothetical sketch of this candidate placement is shown below.
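The following minimal sketch (assumed quantile-style placement; function and variable names are hypothetical) illustrates how sorted feature values yield split candidates, so that samples between two neighbouring candidates fall into the same bucket.

```python
# Hypothetical bucketing sketch: sort one feature, take quantile thresholds.
def split_candidates(values, n_buckets=4):
    s = sorted(values)
    step = max(1, len(s) // n_buckets)
    return s[step::step]                      # candidate thresholds

feature = [5.1, 0.3, 2.2, 7.8, 4.4, 1.9, 6.0, 3.3]
print(split_candidates(feature))              # -> [2.2, 4.4, 6.0]
```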
Specifically, the gradients and hessians are first computed according to Eq. (2) and Eq. (3) for each data entry, where ŷ_i^{(t−1)} denotes the prediction of the previous trees for the i-th data point, and y_i is the label of the i-th data point:

g_i = 1/(1 + e^{−ŷ_i^{(t−1)}}) − y_i = ŷ_i − y_i,   (2)

h_i = e^{−ŷ_i^{(t−1)}} / (1 + e^{−ŷ_i^{(t−1)}})².   (3)

For splitting nodes, the XGBoost algorithm determines the best split candidate among all others based on the maximum L_split in Eq. (4), where λ and γ are regularization parameters:

L_split = 1/2 [ (Σ_{i∈I_L} g_i)² / (Σ_{i∈I_L} h_i + λ) + (Σ_{i∈I_R} g_i)² / (Σ_{i∈I_R} h_i + λ) − (Σ_{i∈I} g_i)² / (Σ_{i∈I} h_i + λ) ] − γ.   (4)

The current node becomes a leaf node if the following conditions are fulfilled: the maximum depth of the tree is reached, or the maximum impurity value is less than the preset threshold. The calculation of the leaf value follows Eq. (5):

w = − Σ_{i∈I} g_i / (Σ_{i∈I} h_i + λ).   (5)
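A minimal single-machine sketch of Eqs. (2)-(5) for binary classification with logistic loss follows; `raw_pred` stands for the previous trees' raw score ŷ^{(t−1)}, and the default λ and γ values are illustrative.

```python
# Sketch of Eqs. (2)-(5): logistic-loss derivatives, split gain, leaf weight.
import math

def grad_hess(raw_pred: float, y: float):
    p = 1.0 / (1.0 + math.exp(-raw_pred))     # Eq. (2): g = p - y
    return p - y, p * (1.0 - p)               # Eq. (3): h = p(1 - p)

def l_split(GL, HL, GR, HR, lam=1.0, gamma=0.0):
    # Eq. (4): gain of splitting a node into left (L) and right (R) children
    def score(G, H):
        return G * G / (H + lam)
    return 0.5 * (score(GL, HL) + score(GR, HR) - score(GL + GR, HL + HR)) - gamma

def leaf_weight(G, H, lam=1.0):
    return -G / (H + lam)                     # Eq. (5)

# Toy usage: two data points in the left child, one in the right.
g1, h1 = grad_hess(0.0, 1)
g2, h2 = grad_hess(0.3, 0)
g3, h3 = grad_hess(-0.2, 1)
print(l_split(g1 + g2, h1 + h2, g3, h3), leaf_weight(g3, h3))
```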
B.2 DIFFIE-HELLMAN KEY EXCHANGE

Based on the Decisional Diffie-Hellman (DDH) hard problem (Boneh (1998)) defined below, the Diffie-Hellman key exchange (DH) (Diffie & Hellman (1976)) provides a method for exchanging keys across public communication channels. Without loss of generality and correctness, it consists of a tuple of algorithms (Param.Gen, Key.Gen, Key.Exc). The algorithm (G, g, q) ← Param.Gen(1^α) generates the public parameters (a group G with prime order q generated by a generator g) based on the security parameter α. (sk_i, pk_i) ← Key.Gen(G, g, q) allows client i to generate a secret key (sk_i ←$ Z_q) and compute the public key (pk_i ← g^{sk_i}). The shared key is computed by (pk_i^{sk_j}, pk_j^{sk_i}) ← Key.Exc(sk_i, pk_i, sk_j, pk_j). Inspired by (Bonawitz et al. (2017); Ács & Castelluccia (2011)), we utilize the shared keys as maskings to protect the label information against inference attacks during transmission over public channels. Correctness requires pk_i^{sk_j} = pk_j^{sk_i}. The security relies on the DDH problem (Boneh (1998)), which is defined as follows.

Definition 4 (Decisional Diffie-Hellman). Let G be a group with prime order q and g the fixed generator of the group. The Probabilistic Polynomial Time (PPT) adversary A is given g^a and g^b, where a and b are randomly chosen. The probability of A distinguishing (g^a, g^b, g^{ab}) from (g^a, g^b, g^c), for a randomly chosen c, is negligible:

| Pr[a, b ←$ Z_q : A(g, g^a, g^b, g^{ab}) = true] − Pr[a, b, c ←$ Z_q : A(g, g^a, g^b, g^c) = true] | < negl(α).

B.3 PSEUDO-RANDOM GENERATOR AND HASH FUNCTION

A Pseudo-Random Generator (PRG) (Håstad et al. (1999)) is an algorithm able to generate random-looking numbers. "Pseudo-random" here means that the generated numbers are not truly random but have properties similar to those of random numbers. Generally, the pseudo-random numbers are determined by given initial values, a.k.a. seeds. In cryptographic applications, a secure PRG requires that attackers who do not know the seed can distinguish a truly random number from an output of the PRG only with negligible probability. Similarly to a PRG, a hash function maps data of arbitrary size to a value of fixed bit length. To reduce the communication cost of FEVERLESS, we use SHAKE-256 (Sha (2015)), one of the hash functions in the SHA-3 (Aumasson et al. (2008)) family, to generate maskings of customized size.

B.4 KEY DERIVATION FUNCTION

A Key Derivation Function (KDF) (Krawczyk & Eronen (2010)) is a kind of hash function that derives multiple secret keys from a main key by utilizing a Pseudo-Random Function (PRF) (Kaliski (2005)). In general, a KDF algorithm DK ← KDF(mainkey, salt, rounds) derives the keys DK based on a main key, a cryptographic salt and the current round of the processing algorithm. The security requires that a secure KDF be robust against brute-force and dictionary attacks. Inspired by (Zdziarski (2012)), where key shares generated by DH key exchange are converted to AES keys, in this paper we use the KDF to generate the maskings for every round, reducing the communication cost. The main key we use is generated by the DH key exchange.

B.5 VERIFIABLE RANDOM FUNCTION

A Verifiable Random Function (VRF) (Micali et al. (1999)) is a PRF providing verifiable proofs of the correctness of its outputs. It is a tool widely used in cryptocurrencies, smart contracts and leader selection in distributed systems (Micali (2016)). Basically, given an input x, a signature scheme and a hash function, a practical leader selection scheme with a VRF (Micali (2016)) works as:

S_leader ← H(sign_{sk_i}(x)),   (6)

where sk_i is the secret key of the i-th client, and the maximum leader score S_leader is used to determine the leader. The security and unforgeability of the VRF require that the signature scheme has the uniqueness property and that the hash function maps the signature to a random string of fixed size. The correctness of S_leader is proved by the signature of x.

B.6 DIFFERENTIAL PRIVACY

Differential Privacy (DP) (Dwork et al. (2006a;b)) is a data protection system targeting the publication of statistical information of datasets while keeping individual data private. Its security requires that adversaries cannot distinguish statistical changes between two datasets in which an arbitrary data point differs. The most widely used DP mechanism is (ε, δ)-DP, which requires less injected noise than the originally proposed ε-DP while providing the same privacy level. The formal definition is given as follows.

Definition 5 ((ε, δ)-Differential Privacy). Given two real positive numbers (ε, δ) and a randomized algorithm A: D^n → Y, the algorithm A provides (ε, δ)-differential privacy if, for all datasets D, D′ ∈ D^n differing in only one data sample, and all S ⊆ Y:

Pr[A(D) ∈ S] ≤ exp(ε) · Pr[A(D′) ∈ S] + δ.   (7)

Note that the noise N ∼ N(0, Δ²σ²) is added to the output of the algorithm, where Δ is the l₂-norm sensitivity of D and σ = √(2 ln(1.25/δ))/ε (Abadi et al. (2016)).

C PRIVACY CONCERN

Since we assume that the feature names are not public information for the clients, and the values of the features never leave the clients, the privacy issues are mainly incurred by the leakage of label information.

C.1 INFERENCE ATTACK

During the training process, the gradients and hessians are sent to the source client for the L_split computation. For a binary classification task, a single gradient lies in the range (−1, 0) ∪ (0, 1). According to Eq. (2), a label can be inferred as 1 or 0 if the gradient lies in (−1, 0) or (0, 1), respectively. Besides, the hessian in Eq. (3) leaks the prediction of the corresponding data sample; as training proceeds, the prediction gets increasingly close to the true label, so the source client and outside attackers can infer the true label with high probability. Gradients and hessians therefore cannot be transmitted in plaintext, and we use a secure aggregation scheme to protect them from inference attacks.
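The gradient-sign leakage just described amounts to one line of code for an attacker, which is why plaintext transmission is ruled out; a toy illustration:

```python
# Why plaintext gradients leak labels: with logistic loss, g = p - y, so the
# sign of a single gradient reveals the binary label.
def infer_label(g: float) -> int:
    return 1 if g < 0 else 0        # g in (-1,0) -> y=1; g in (0,1) -> y=0

print(infer_label(-0.37), infer_label(0.81))   # -> 1 0
```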
C.2 DIFFERENTIAL ATTACK

A differential attack can happen at any time, and many times, during the calculation of the gradients and hessians. Figure 5 describes an example of a differential attack taking place in a single node split. After sorting feature 1, the semi-honest source client defines 2 split candidates and further computes G_{2,5} = g₂ + g₅ and G_{1,2,3,5} = g₂ + g₅ + g₁ + g₃ for candidates 1 and 2, respectively. Since the source client holds label 2, even if G_{2,5} is derived by secure aggregation, g₅ can still be revealed by G_{2,5} − g₂. Another example of a differential attack is shown in Figure 6. Assume split candidate 1 is the one used for splitting the root node. In the current tree structure, the source client may split the right node by computing L_split of split candidate 2. In this case, G_{1,3} is aggregated by the source client, and g₅ can be revealed by G_{1,2,3,5} − G_{1,3} − g₂, where G_{1,2,3,5} was computed in the previous node.

D MORE DETAILS ON THE FEVERLESS PROTOCOL

D.1 XGBOOST TRAINING OVER DISTRIBUTED LABELS

At the initial stage, we let all clients agree on the tree structure (maximum depth and the number of trees) and the learning rate for updating the predictions. To avoid overfitting, we define regularization parameters. The threshold impurity is another vital parameter, used to identify internal and leaf nodes via the maximum impurity. After that, we choose ε and δ for DP, and the hash function for masking generation and noise leader selection. Besides, we select a multiplicative group G of order q generated by a generator g, and a large prime number p, to run DH. In the initialization process, all clients set the parameters and sort their own features based on the values. Then, the split candidates can be defined, and the data samples between two different candidates are grouped into a bucket. At the end, all entries are assigned initialized values for calculating the derivatives of the loss function. The detailed procedure is described as follows.

Algorithm 1: Initialization
1 Set parameters: all clients agree on the maximum depth of a tree d, the number of trees NT, the learning rate η, the regularization parameters (λ, γ), the threshold of L_split, ε, δ, p, g, the selection portion p and the hash function
2 for c ∈ [1, n] do
3   for each feature j owned by c do
4     sort(X_j^{(c)})
5     define buckets: B_z^j
6   end
7   set initialized values: ŷ_i^{(c)}
8 end

After the initialization, all clients can invoke Algorithm 2 to train the model collaboratively. The inputs are the features X_j^{(c)} and the labels y_i^{(c)}, distributed over the different clients; the output is a trained XGBoost model that can be used for prediction. Generally, the trees are built one by one, and, as shown in lines 4-10 of Algorithm 2, each client computes gradients and hessians at the beginning of the construction of a new tree. Following that, the clients split the current node. Note that XGBoost training in DL-VFL requires each client to calculate G and H. If the labels in some buckets are incomplete, the corresponding gradients and hessians cannot be computed; thus, each client should first broadcast the missing data index set mID (see lines 15-17 of Algorithm 2). Based on the predefined bucket B_z^j, mID is defined if labels in B_z^j are not held by the client. In each broadcast, the client sending the messages is regarded as the source client. The others then send the corresponding g_i^{(c′)} and h_i^{(c′)} back to the source client to compute L_split through Algorithms 3-5, depicted in Appendix D.2. After finding the maximum impurity L_split^{c,max}, the current node is split into "left" and "right" nodes if L_split^{c,max} > threshold L_split, where the value of the split candidate is owned by c.
After initialization, all clients can invoke Algorithm 2 to train the model collaboratively. The inputs are the features X^(c)_j and the labels y^(c)_i distributed over the different clients; the output is a trained XGBoost model that can be used for prediction. Generally, trees are built one by one, and we see from lines 4-10 in Algorithm 2 that each client computes gradients and hessians at the beginning of a new tree construction. Following that, the clients split the current node. Note that XGBoost training in DL-VFL requires each client to calculate G and H. If the labels in some buckets are incomplete, the corresponding gradients and hessians cannot be computed. Thus, each client should first broadcast the missing data index set mID (see lines 15-17 in Algorithm 2). Based on the predefined bucket B_jz, mID is defined if labels in B_jz are not held by the client. In each broadcast, the client sending messages is regarded as the source client. The others then send the corresponding g^(c′)_i and h^(c′)_i back to the source client to compute L_split through Algorithms 3-5, depicted in Appendix D.2. After finding the maximum impurity L^(c)_split_max, the current node is split into "left" and "right" nodes if L^(c)_split_max > threshold L_split, where the value of the split candidate is owned by c.

Algorithm 2: Protocol overview
1  Input: {X^(c)_j | j ∈ f, c ∈ |C|}: features, {y^(c)_i | i ∈ m, c ∈ |C|}: labels
2  Output: XGBoost model
3  Building trees:
4  for nt ∈ [1, NT] do
5      for c ∈ [1, n] do
6          for each data entry i owned by c do
7              g^(c)_i ← ∂_{ŷ_i^(c)} Loss(ŷ_i^(c), y^(c)_i)
8              h^(c)_i ← ∂²_{ŷ_i^(c)} Loss(ŷ_i^(c), y^(c)_i)
9          end
10     end
11     for each node in the current tree do
12         while current depth < d do
13             for c ∈ [1, n] do
14                 for each feature j owned by c do
15                     for each B_jz owned by c do
16                         broadcast mID = {i | y_i ∉ Y^c}
17                     end
18                     aggregate G, H by Algorithms 3-5
19                     compute L_split according to Eq. (4)
20                 end
21                 find the maximum L^(c)_split and broadcast
22             end
23             L^(c)_split_max ← max({L^(c)_split | c ∈ [1, n]})
24             if L^(c)_split_max ≤ threshold L_split then
25                 set current node as leaf node
26                 c computes w and broadcasts
27                 break
28             else
29                 c splits current node into left and right nodes, and broadcasts their data indexes
30             end
31         end
32         set remaining nodes as leaf nodes
33         c computes w and broadcasts
34         clients participating in the calculation of w: update ŷ_i^(c)
35     end
36 end

In node splitting, clients set a given node as a "leaf" if the current depth reaches the predefined maximum depth or the maximum L_split is less than the predefined threshold of L_split (see lines 12 and 24-32 in Algorithm 2). The leaf value is derived following Eq. (5), which takes G and H as inputs. Since a leaf node is either the "left" or "right" split produced by one of the clients in C from its parent node, this client knows G and H, and the leaf value can be derived. Finally, this leaf value is broadcast, and the clients who own the corresponding g^(c)_i and h^(c)_i use it to update their predictions. The details of the above process are shown in Algorithm 2.

D.2 SECURE AGGREGATION WITH GLOBAL DIFFERENTIAL PRIVACY

In lines 15-19 of Algorithm 2, the source client is able to compute L_split from the requested missing data indexes and the aggregation of the received messages. To prevent inference and differential attacks on labels by the source client and outside adversaries, we propose a privacy-preserving approach, shown in Algorithms 3-5, that "twists" DH key exchange, noise leader selection and secure aggregation together. This method is a viable alternative for training XGBoost securely in DL-VFL without demanding excessive computational resources or affecting model accuracy.

To generate the secure-but-cancellable maskings, we adopt DH. In Algorithm 3, all clients randomly select numbers as their secret keys and generate the corresponding public keys. Any two clients in the set C exchange public keys and compute the corresponding shared key. For simplicity, we do not describe the signature scheme for DH. We assume DH is conducted over authenticated channels, so man-in-the-middle attacks (Khader & Lai (2015)) are invalid here.

Algorithm 3: Diffie-Hellman key exchange
1  for c ∈ [1, n] do
2      sk_c ← Z*_p
3  end
4  for c ∈ [1, n] do
5      pk_c = g^{sk_c} mod p
6      for c′ ∈ [1, n] ∧ c′ ≠ c do
7          S_{c,c′} = pk_{c′}^{sk_c} mod p
8      end
9  end

If the shared keys were used as maskings directly, our system would not be robust against client collusion unless communication were sacrificed as a cost to update the maskings per round. But that communication complexity increases exponentially with the number of clients for a single node split. Considering the tree structure, the overall communication complexity would be O(2^d · NT · n²), which may not scale well in practical applications.
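Before turning to the fix, here is a minimal Python sketch of the one-time key exchange in Algorithm 3; the tiny prime and generator are illustrative assumptions (a deployment would use the 1024-bit MODP group mentioned in Appendix G).

```python
import random

# Toy DH parameters (illustrative only; real runs use a 1024-bit MODP group).
p = 2087          # small prime for demonstration
g = 2             # generator

n = 3             # number of clients
sk = {c: random.randrange(2, p - 1) for c in range(1, n + 1)}   # secret keys
pk = {c: pow(g, sk[c], p) for c in range(1, n + 1)}             # public keys

# Each pair (c, c') derives the shared key S_{c,c'} = pk_{c'}^{sk_c} mod p.
S = {(c, c2): pow(pk[c2], sk[c], p)
     for c in sk for c2 in sk if c != c2}

# Correctness: both sides of a pair compute the same shared key.
assert S[(1, 2)] == S[(2, 1)]
```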
To tackle this communication issue, we use the KDF to update the maskings per round automatically. Specifically, in lines 24-25 of Algorithm 5, the shared keys are taken as main keys, and 0 and 1 are the salt values for gradients and hessians, respectively. Since the query in each round varies, the generated maskings change accordingly. Besides, the sign of each masking is determined by the indexes of the clients. In this way, we only need to run DH once, and the communication complexity is independent of the tree structure.

To enable FEVERLESS to hold against differential attacks, we use a GDP approach that allows a chosen client to inject a global noise into the aggregated values per round. The approach is quite subtle. If the noise leader were selected by the source client, the system would be vulnerable to collusion. Moreover, a client could easily be identified as a target if we chose it in advance, e.g., by selecting a list of leaders before training. To avoid these issues and limit the probability of collusion as much as possible, we use a VRF to iteratively select the leader (see Algorithm 4) who securely injects a global noise. The input of the VRF includes the mIDs and a fresh random number r (line 4 in Algorithm 4), so that this client cannot be predicted and set beforehand, reducing its chance of being corrupted in advance by outsiders and the source client. All clients broadcast their scores, and the one who holds the maximum value becomes the leader. The leader then re-generates a selection score as the score threshold (selec_threshold) and sends it to the rest of the clients (lines 2-6 in Algorithm 5). The clients send masked noise back to the leader if their re-generated score is larger than the threshold (lines 7-13 in Algorithm 5). Subsequently, the leader selects k of these clients, notifies them, and aggregates their masked noise to generate a global noise containing a random number. In this context, even if the selected clients collude (note that at least one does not) with the noise leader and the source client, there is still a noise that cannot be recovered, keeping the training differentially private. Note that since the noise is masked by the random number, the source client (even colluding with the leader) cannot recover the "pure" global noise to conduct a differential attack. Each client adds noise with probability p. If k out of k̂ clients are non-colluded, the probability of collusion is (1 − k/n)^h. To cancel out the randomness, the selected clients subtract the same randomness from the masked messages (lines 28-31 in Algorithm 5).

The source client might procrastinate the leader selection and noise injection procedure so as to buy time for its colluded clients to prepare sufficiently large VRF values to participate in the competition for selection and noise adding. One may apply a heartbeat protocol (Nikoletseas & Rolim (2011)) to prevent a newly selected leader from intentionally halting the noise-adding stage for a long period, say 1 min. If there is no response from the leader after a short while, a new leader is randomly selected. Furthermore, the heartbeat may help solve the problem of a leader accidentally dropping from the network. We note that the heartbeat protocol is not the main focus of this paper.

Algorithm 4: Noise leader selection
1  count = 1
2  for each time this algorithm runs do
3      for c ∈ [1, n] ∧ c ≠ source client do
4          selec_c ← H(SIGN_{sk_c}(count, mIDs, r))
5          broadcast
6      end
7      selec_max_c ← max({selec_c | c ∈ [1, n]})
8      set c as noise leader
9      count += 1
10 end
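A minimal Python sketch of the leader-score idea in Algorithm 4 follows; here an HMAC over (count, mIDs, r) stands in for the signature-based VRF, which is an assumption for illustration only (a real deployment needs a VRF with publicly verifiable proofs, not a bare MAC).

```python
import hmac
import hashlib
import secrets

def selection_score(secret_key: bytes, count: int, mids: list, r: bytes) -> int:
    """Stand-in for selec_c <- H(SIGN_sk(count, mIDs, r)); an HMAC is used here
    purely for illustration and provides no public verifiability."""
    msg = repr((count, sorted(mids), r)).encode()
    return int.from_bytes(hmac.new(secret_key, msg, hashlib.sha3_256).digest(), "big")

# Every client (except the source client) broadcasts a score; the max score leads.
keys = {c: secrets.token_bytes(32) for c in range(1, 5)}   # toy per-client keys
r = secrets.token_bytes(16)                                 # fresh public randomness
scores = {c: selection_score(k, count=1, mids=[2, 5], r=r) for c, k in keys.items()}
leader = max(scores, key=scores.get)
print("noise leader:", leader)
```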
Before replying to the source client, the clients with labels add maskings to their gradients and hessians, and those without labels just generate and send out maskings, among which the noise leader (i.e., one of the masking generators) injects the noise. In this way, the maskings, guaranteeing perfect secrecy of the messages, cancel out after the aggregation of values, and the differentially private noise provides indistinguishability of individual data entries. Note that in lines 24-34 of Algorithm 5, the maskings and masked values are in the range [0, N−1]; N should be sufficiently large to avoid overflow, and the summation of gradients and hessians must not exceed N.

Algorithm 5: Secure aggregation with global differential privacy
1  Noise injection:
2  if c = leader then
3      selec_threshold_c ← H(SIGN_{sk_c}(count, mIDs, r))
4      broadcast
5      count += 1
6  end
7  for c ∈ [1, n] ∧ c ≠ source client ∧ c ≠ noise leader do
8      selec_c ← H(SIGN_{sk_c}(count, mIDs, r))
9      if selec_c > selec_threshold_c then
10         send ñ^(c)_g = N(0, ∆²_g σ²) + r^(c)_g and ñ^(c)_h = N(0, ∆²_h σ²) + r^(c)_h to the noise leader
11         count += 1
12     end
13 end
14 if c = leader then
15     c selects k clients from the clients sending noise, k = ⌈|{ñ^(c)_g}| · p⌉
16     if k < 1 then
17         redo noise injection
18     end
19     notify the k clients
20     noise aggregation: Ñ_g = k · N(0, ∆²_g σ²) + R_g, Ñ_h = k · N(0, ∆²_h σ²) + R_h
21 end
22 Secure aggregation:
23 for c ∈ [1, n] do
24     mask^(c)_g ← (Σ_{c′≠c} (|c−c′|/(c−c′)) · (H(S_{c,c′} ‖ 0 ‖ query) mod N)) mod N
25     mask^(c)_h ← (Σ_{c′≠c} (|c−c′|/(c−c′)) · (H(S_{c,c′} ‖ 1 ‖ query) mod N)) mod N
26     G^(c) = Σ_{i∈mIDs} g^(c)_i + mask^(c)_g mod N
27     H^(c) = Σ_{i∈mIDs} h^(c)_i + mask^(c)_h mod N
28     if selec_c > selec_threshold_c ∧ received notification then
29         G^(c) = G^(c) − r^(c)_g mod N
30         H^(c) = H^(c) − r^(c)_h mod N
31     end
32     if c = leader then
33         G^(c) = G^(c) + Ñ_g mod N
34         H^(c) = H^(c) + Ñ_h mod N
35     end
36     send {G^(c), H^(c)} to the source client
37 end
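To see why the pairwise maskings in lines 24-27 cancel out, consider the minimal Python sketch below; SHAKE-256 plays the role of the hash H, while the shared keys, fixed-point encoding and gradient values are toy stand-ins assumed for illustration.

```python
import hashlib

N = 2**64                     # modulus; must exceed any possible sum

def mask_term(shared_key: bytes, salt: int, query: int) -> int:
    """H(S_{c,c'} || salt || query) mod N via SHAKE-256 (salt 0: gradients, 1: hessians)."""
    h = hashlib.shake_256(shared_key + bytes([salt]) + query.to_bytes(8, "big"))
    return int.from_bytes(h.digest(16), "big") % N

def masked_value(c: int, value: float, shared: dict, query: int) -> int:
    """G^(c) = value + sum_{c' != c} sign(c - c') * H(S_{c,c'} || 0 || query) mod N."""
    total = int(value * 10**6)                      # fixed-point encoding of the gradient sum
    for c2, key in shared[c].items():
        sign = 1 if c > c2 else -1                  # the |c - c'| / (c - c') term
        total = (total + sign * mask_term(key, 0, query)) % N
    return total

# Toy setup: 3 clients, pairwise shared keys (both directions hold the same bytes).
keys = {(1, 2): b"k12", (1, 3): b"k13", (2, 3): b"k23"}
shared = {c: {c2: keys[tuple(sorted((c, c2)))] for c2 in (1, 2, 3) if c2 != c}
          for c in (1, 2, 3)}

values = {1: 0.25, 2: -0.5, 3: 0.75}                # per-client gradient sums (made up)
agg = sum(masked_value(c, values[c], shared, query=7) for c in values) % N
assert agg == int(sum(values.values()) * 10**6) % N  # masks cancel, true sum remains
```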
E SECURITY ANALYSIS

We investigate the security and privacy properties of our protocol. First, we define the security model of our setting and the desired properties. Then, we prove that our protocol satisfies these properties.

Security Model. Our security is based on the random oracle model (ROM) (Smart (2016)), where the hash function outputs a uniformly random value for a new query and the same value for a previously answered query.

Adversarial Model. Our protocol is designed for the semi-honest security model (Smart (2016)), where all parties follow the protocol while trying to obtain information regarding other parties' inputs. We assume that the source client can collude with other clients, but the set of colluding clients has size at most n − 2.

E.1 PRIVACY GOALS

Our privacy goals can be summarized as:
• Label privacy: No adversary controlling at most n − 2 clients can learn who, among the honest parties, is the owner of a label.
• Data privacy: No adversary controlling at most n − 2 clients can extract the data of an honest party.

We first investigate the case where the source client is not part of the adversary. In the following theorem, we show that there exists a simulator Sim that simulates the joint view of the clients in A by only using the inputs belonging to them. This implies that A does not learn more than what they already have.

Theorem E.1 (A not including source client). There exists a PPT simulator Sim for all |C| := n ≥ 3, |X| := f ≥ n, |Y| := m ≥ 1, ⋃_{c∈C} X^(c), ⋃_{c∈C} Y^(c) and A ⊂ C with |A| ≤ n − 2, such that the output of Sim is indistinguishable from the output of REAL:

REAL^{C,X,Y}_A(X^C, Y^C) ≡ Sim^{C,X,Y}_A(X^A, Y^A).   (8)

Proof. In order to prove that the simulator Sim can simulate the outputs of the honest parties in H := C − A, we show that the distribution of the inputs belonging to the rest of the network cannot be distinguished from randomly generated data. In this way, the simulator can use dummy values as the inputs of the honest parties to simulate their outputs. We simulate the view of A regarding the messages broadcast by the honest clients. A client c first runs a key exchange with the others and then, after some internal operations, outputs the values G^(c) and H^(c). Consider the value G^(c), which is of the form Σ_{i∈mIDs} g^(c)_i + mask^(c)_g, except for the noise leader, who adds the noise N(0, (∆_g σ)²). The mask values are computed as Σ_{c′≠c} (|c−c′|/(c−c′)) · H(S_{c,c′} ‖ 0 ‖ query) mod N. We use a hybrid argument in which we modify the protocol in several steps, showing at each step that the modification is indistinguishable to the adversary A. In the end, we obtain a hybrid that can be simulated by Sim.

Hybrid₁: The first hybrid directly follows the protocol. The distribution of the variables and the view of A are the same as in REAL.

Hybrid₂: In the second hybrid, we replace the agreed keys between honest clients, S_{c,c′} for all c, c′ ∈ H, with random values r_{c,c′} ∈ G, where G is the group of the key exchange protocol. In the original protocol, Diffie-Hellman key exchange is used. The replacement is indistinguishable to the adversary because of the decisional Diffie-Hellman assumption given in Definition 4. Also, note that these random values are available only to the parties involved in the key exchange, unless they are corrupted by the adversary.

Hybrid₃: In this hybrid, we replace the mask values of the honest clients, mask^(c)_g for all c ∈ H, with random values R^(c). Note that after the replacement in the previous step, the mask values are computed as Σ_{c′≠c} (|c−c′|/(c−c′)) · H(r_{c,c′} ‖ 0 ‖ query) mod N, where r_{c,c′} ∈ Z_N is a random value unknown to the adversary (if both c and c′ are honest). Because of the random oracle model, the output of the hash function is a uniformly random value that is also unknown to the adversary. Since there are at most n − 2 clients in A, there are at least two honest clients c and c′ for which the adversary cannot know the uniformly chosen output of H(r_{c,c′} ‖ 0 ‖ query). The modular summation of these outputs then includes at least one uniformly random value unknown to the adversary, so it cannot be distinguished from a random value R^(c).

Hybrid₄: In this hybrid, we replace the gradients of the honest clients, g^(c)_i for all c ∈ H, with zeros. This is done by replacing the mask values with R^(c) := R^(c) − Σ_{i∈mIDs} g^(c)_i mod N to keep the value G^(c) the same. From the adversary's perspective, since the R^(c) values are unknown and chosen uniformly at random, the replacement is indistinguishable.

In Hybrid₄, we replace the gradients of the honest parties with zeros, and the mask values are replaced by R^(c), which is unknown to the adversary and chosen from a uniform distribution. Thus, a simulator Sim can simulate the outputs G^(c) of the honest parties without knowing their inputs. The same analysis applies to the hessian values H^(c).
Since the masking values of G^(c) and H^(c) are different and the hash function is modeled as a random oracle, the randomness in the two parts is independent and indistinguishable to the adversary A. Overall, the simulator Sim can simulate our protocol. Thus, the view of A can be simulated by replacing the inputs of the honest parties with zeros, and the adversary does not learn any information about the inputs of the honest parties.

Now, we analyze the case where the source client is part of A. We show that there exists a simulator Sim that simulates the joint view of the clients in A by only using the inputs belonging to them and the summations G and H. This implies that A does not learn more than what they have and the summation.

Theorem E.2 (A including source client). There exists a PPT simulator Sim for all |C| := n ≥ 3, |X| := f ≥ n, |Y| := m ≥ 1, ⋃_{c∈C} X^(c), ⋃_{c∈C} Y^(c) and A ⊂ C with |A| ≤ n − 2, such that the output of Sim is indistinguishable from the output of REAL:

REAL^{C,X,Y}_A(X^C, Y^C) ≡ Sim^{C,X,Y}_A(G, H, X^A, Y^A),   (9)

where G = Σ_{i∈mIDs} g^(c)_i + N(0, (∆_g σ)²) and H = Σ_{i∈mIDs} h^(c)_i + N(0, (∆_h σ)²).

Proof. Here, we again show that Sim can simulate the outputs of the honest parties in H without knowing their inputs. Unlike in Theorem E.1, Sim is also given the summations G and H, because the adversary includes the source client. We can reuse the hybrids of Theorem E.1 up to Hybrid₄, because the inputs of the honest clients are not required until then. We update Hybrid₄ so that it takes the summation into account. The hybrids for A including the source client are:

Hybrid₁, Hybrid₂, Hybrid₃: The same as in Theorem E.1.

Hybrid₄: In this hybrid, we replace the gradients of the honest clients, g^(c)_i for all c ∈ H, with zeros, except for one client c′, whose value is set to Σ_{i∈mIDs} g^(H)_i mod N = G − Σ_{i∈mIDs} g^(A)_i mod N. The honest client c′ is randomly chosen among H. From the adversary's perspective, since the R^(c) are unknown uniformly random values, the replacement is indistinguishable.

Overall, the view of A can be simulated by replacing the inputs of the honest parties with zeros, except for one set to Σ_{i∈mIDs} g^(H)_i mod N. Thus, A does not learn any information from the honest clients except the summation Σ_{i∈mIDs} g^(H)_i mod N.

With Theorem E.2, we show that even an adversary A including the source client cannot learn more than the summations of the gradient and hessian values, G and H. The proof is done via Sim without requiring the individual data of the honest clients, except for the summation. This implies that the adversary cannot distinguish which party provided which gradient or hessian values. Moreover, the parties who do not hold any of the requested g or h values send '0' together with the mask (and the noise, for the leader). This implies that we provide label privacy: the adversary cannot distinguish which honest client a label's g or h values come from.

In the case where the adversary includes the source client, the summations of the gradient and hessian values are known to the adversary. In the following theorem, we show that these summations do not leak any individual data, thanks to differential privacy.

Theorem E.3 (Privacy of the Inputs). No A ⊂ C with |A| ≤ n − 2 can retrieve the individual values of the honest clients with probability

1 − Σ_{i=0}^{k̂} C^i_h C^{k̂−i}_{n−2−h} (P_t)^{k̂} (1 − P_t)^{(n−k̂)} · (C^k_{k̂−i} / C^k_{k̂}),

where h and k̂ denote the number of non-colluded clients and the number of clients whose selection score is larger than the threshold, respectively, and P_t is the probability that a selection score is larger than the threshold.
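Before the proof, the following minimal Python sketch evaluates the bound of Theorem E.3 numerically with math.comb; the parameter values are arbitrary assumptions chosen only to exercise the formula (C^a_b is written comb(b, a) here).

```python
from math import comb

def collusion_prob(n: int, h: int, k: int, k_hat: int, p_t: float) -> float:
    """Probability that all k aggregated noise shares come from colluded clients,
    following the summation in Theorem E.3."""
    total = 0.0
    for i in range(k_hat + 1):
        if i > h or (k_hat - i) > (n - 2 - h) or k > (k_hat - i):
            continue  # impossible events contribute nothing
        p_event = (comb(h, i) * comb(n - 2 - h, k_hat - i)
                   * (p_t ** k_hat) * ((1 - p_t) ** (n - k_hat)))
        p_pick_colluded = comb(k_hat - i, k) / comb(k_hat, k)
        total += p_event * p_pick_colluded
    return total

# Arbitrary illustrative parameters: 10 clients, 4 honest, leader picks k=2 of k_hat=5.
p = collusion_prob(n=10, h=4, k=2, k_hat=5, p_t=0.5)
print("P(all noise colluded) =", p, " =>  at least one honest:", 1 - p)
```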
Proof. If the adversary does not include the source client, then, following the previous theorems, the adversary cannot learn any of the inputs belonging to the honest parties. Otherwise, it knows the summations G and H. Since we apply differential privacy (Dwork et al. (2006a;b)), the summation cannot leak information about the inputs. According to Definition 5, we add differentially private noise guaranteeing the security of individual data points while the summation can still be calculated.

Proof of probability. Note that the noise leader selects k clients from the n clients (excluding itself and the source client) to add noise. Suppose that there are h non-colluded clients among the n − 2 clients, and that the number of clients whose selection scores are larger than the threshold is k̂. The number of events is C^{k̂}_{n−2−h} + C^1_h C^{k̂−1}_{n−2−h} + · · · + C^{k̂}_h C^0_{n−2−h}, where the events are {"there are k̂ colluded clients out of k̂ clients and 0 non-colluded clients", ..., "there are 0 colluded clients out of k̂ clients and k̂ non-colluded clients"}. Therefore,

P(E_i) = C^i_h (P_t)^i (1 − P_t)^{h−i} · C^{k̂−i}_{n−2−h} (P_t)^{k̂−i} (1 − P_t)^{(n−h−k̂+i)} = C^i_h C^{k̂−i}_{n−2−h} (P_t)^{k̂} (1 − P_t)^{(n−k̂)},

where P_t is the probability that a selection score is larger than the threshold, and E_i is the i-th event. Then, the probability that the noise leader selects k colluded clients from the k̂ clients is P_0 = C^k_{k̂−i} / C^k_{k̂}. Finally, the probability of all aggregated noise coming from colluded clients is

Σ_{i=0}^{k̂} P(E_i) · P_0 = Σ_{i=0}^{k̂} C^i_h (P_t)^i (1 − P_t)^{h−i} · C^{k̂−i}_{n−2−h} (P_t)^{k̂−i} (1 − P_t)^{(n−h−k̂+i)} · (C^k_{k̂−i} / C^k_{k̂}) = Σ_{i=0}^{k̂} C^i_h C^{k̂−i}_{n−2−h} (P_t)^{k̂} (1 − P_t)^{(n−k̂)} · (C^k_{k̂−i} / C^k_{k̂}).

Conversely, the probability that at least one non-colluded client participates in the noise injection is

1 − Σ_{i=0}^{k̂} C^i_h C^{k̂−i}_{n−2−h} (P_t)^{k̂} (1 − P_t)^{(n−k̂)} · (C^k_{k̂−i} / C^k_{k̂}).

Note that because of the secure aggregation, the adversary cannot learn anything but the summation. Thus, our protocol does not require adding noise to every data value. Instead, we only require the noise leader to add the noise, which prevents the retrieval of individual data from the summation. In Theorems E.1 and E.2, we show that A cannot distinguish the individual values from randomly chosen values and can only learn the summation if the source client is part of the adversary. In Theorem E.3, we show that A cannot extract the individual values of the users from the summation, due to the added noise and differential privacy. Thus, our protocol satisfies data privacy; in other words, the adversary cannot learn the data point of an honest client. It is important to note that since the noise leader is selected via the VRF, no adversary can guess beforehand whether an honest party will be the leader in the upcoming round. This provides additional security against manipulation of the noise leader.

F DISCUSSION

To reduce the negative impact brought by the noise, one may, according to the infinite divisibility of the Gaussian distribution (Patel & Read (1996)), split the global noise N(0, (∆σ)²) into n parts N(0, (∆σ)²/n). A drawback is that the privacy budget then increases linearly as more colluded clients appear. For example, if GDP achieves ε-DP, then in the worst case, where there are n − 1 colluded clients, the privacy budget rises to n × ε.
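As a quick check of the infinite-divisibility argument, the minimal Python sketch below splits one global Gaussian into n independent shares of variance (∆σ)²/n; the concrete numbers are assumptions for illustration.

```python
import random
import statistics

delta_sigma = 2.0   # std of the global noise N(0, (delta*sigma)^2); illustrative value
n = 5               # number of shares / clients

def split_noise_sample() -> float:
    """Sum of n i.i.d. N(0, delta_sigma^2 / n) samples ~ N(0, delta_sigma^2)."""
    share_std = delta_sigma / n ** 0.5
    return sum(random.gauss(0.0, share_std) for _ in range(n))

# Empirically, the summed shares match the global noise distribution.
samples = [split_noise_sample() for _ in range(100_000)]
print("empirical std:", statistics.pstdev(samples), "target:", delta_sigma)
```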
Hiding label distribution. In the semi-honest setting, if the source client broadcasts the missing data indexes mID consistently (line 16 of Algorithm 2), adversaries may figure out which labels are distributed on the source clients by statistical analysis. We show that this issue can be tackled: FEVERLESS can be extended to avoid this type of leakage at the cost of extra communication overhead. Specifically, during the broadcast period, the source client sends the indexes of one whole bucket instead of mID, and the rest of the protocol remains unchanged. In this way, the others cannot learn the distribution of labels, because all clients share the same index set I. If we assume labels are uniformly distributed over the clients, the extra overhead is bounded by |I|/|C|. This cost is clearly noticeable in datasets with a large number of data points.

Other security tools. The masking scheme realizing secure aggregation may be replaced with MPC (Damgård et al. (2012); Wu et al. (2020)) or additively homomorphic encryption (Paillier (1999)). However, the major defect of these tools is that they entail heavy computation for encryption, which may not scale well to large datasets. Due to this concern, we only use lightweight computation in FEVERLESS and, further, we enhance the security to "perfect secrecy". In our design, the selection of the noise leader is handled by the VRF. We note that there may be other options to fulfil this goal. For example, Proof of Elapsed Time (PoET) (Chen et al. (2017); Corso (2019)) is an interesting and effective mechanism used to maintain consensus among distributed peers in Hyperledger Sawtooth. It provides a fair and trusted lottery strategy to select a block winner (per consensus round). Sharing the same philosophy as the VRF, it may be deployed in our protocol to select the leader. Building a more efficient noise leader selection algorithm remains an interesting open problem.

G MORE DETAILS ON EXPERIMENT SETUP

All experiments are implemented in Python and conducted on a cluster of machines with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 15GB RAM in a local area network. Intuitively, the smaller we set ε, the more secure FEVERLESS will be, but the larger the noise that is added; this can be seen in the experimental results. As for the cryptographic tools, we set the key sizes of DH and Paillier to 160 bits and 1024 bits, respectively (to save some time in running the experiments). These sizes reach a symmetric security level with an 80-bit key length. Note that one may indeed increase the key size to obtain stronger security⁷, but this brings a longer experiment time as a side effect. We use the 1024-bit MODP Group with 160-bit Prime Order Subgroup from RFC 5114⁸ for the DH key exchange. SHAKE-256 (Dworkin (2015)), a member of the SHA-3 family (Dworkin (2015)), is used as the hash function in leader selection and secure aggregation.

⁷ Note that a stronger security level will not affect the training accuracy.
⁸ https://tools.ietf.org/html/rfc5114

• Credit Card: a commercial dataset used for predicting whether customers will make payments on time. It provides 30,000 samples, and each sample has 23 features.
• Bank Marketing: consisting of 45,211 data points and 17 features; the goal is to predict whether a client will subscribe to a term deposit.
• Banknote Authentication: offering 1,372 data points and 4 features, this dataset is used to classify genuine and forged banknotes.
Note that, different from traditional tabular data, the features in this dataset are extracted from images taken from genuine and forged banknote-like specimens through the Wavelet Transform (Antonini et al. (1992)). With such a small-scale dataset, the trained model may not be robust to noise, which negatively impacts accuracy.

H ADDITIONAL EXPERIMENTS AND FIGURES

We present additional experiments; all experimental settings follow those defined in Section 4.1. In each presented figure, we show the results for the datasets Credit Card (left), Bank Marketing (middle) and Banknote Authentication (right). Note that the comparison among FEVERLESS, LDP and AHE requires #client = 2; when #client = 1, we can only show the results of the baseline. The average performance of FEVERLESS in these figures is highlighted as the red dotted line. Via these experiments, we elaborate how the accuracy varies with an increasing number of clients among the baseline, FEVERLESS and LDP, w.r.t. different tree structures and ε. Figures 7-18 present the best case, where only one non-colluded client adds the noise. The other cases are demonstrated in Figures 19-26, with selection scores 1/2 and 1/3. Beyond those, we also add comparison results for AHE in Tables 2-4 with ε = 2.

In general, without any added noise, the baseline reaches the highest accuracy, and its accuracy remains stable as the client number increases. The performance of FEVERLESS is right behind that of the baseline and also stays stable. Note that there are slight fluctuations in some figures (e.g., Figures 10, 12 and 14), especially in the cases with complex tree structures and small ε. The LDP approach does harm accuracy, which can be seen from the continuously and significantly falling bars in the figures. Naturally, when more clients engage in the training, more noise is added to the model; this makes LDP's performance fall far below the red line. Note that the banknote dataset has only 4 features. In the VFL setting, every client must hold at least one feature, so we can only allow up to 4 clients to participate in the training. Besides, FEVERLESS does not perform well on the banknote dataset, because the model is trained on a small number of samples, so its robustness is seriously affected by the noise.

H.1 BEST CASE: ACCURACY ON CLIENT NUMBER

H.2 OTHER CASES: ACCURACY ON CLIENT NUMBER

H.3 ADDITIONAL RESULTS ON ACCURACY FOR BANKNOTE AUTHENTICATION

H.4 ADDITIONAL RESULTS ON TIME

In Figures 29-33, we show the time performance for various numbers of clients, trees and depths. Besides, we present the concrete results in Tables 5-7. Table 8 also shows the specific runtime of tree construction with #tree = 4 and depth = 4 among the baseline, FEVERLESS, LDP and AHE. In general, the runtime of FEVERLESS is slightly higher than that of the baseline. Compared to AHE, FEVERLESS significantly reduces training time while preserving privacy. This advantage is clearly seen in the cases using complex tree structures. Note that AHE can be replaced by other, more complex cryptographic solutions, such as secure MPC, which can also maintain data/label privacy, but MPC-based solutions would consume even more runtime.
H.5 RESULTS ON COMMUNICATION COST

In Figures 34-36, we demonstrate the communication cost for various numbers of clients, trees and depths. For the convenience of comparison, we set #clients = 4, #tree = 4 and depth = 4 as defaults. We use Tables 9-11 to detail the concrete costs. To sum up, the communication cost of FEVERLESS is almost the same as those of the baseline and LDP; compared to AHE, FEVERLESS significantly reduces costs while maintaining privacy.
1. What is the focus and contribution of the paper regarding secure protocols in the FL setting?
2. What are the writing issues and suggestions for improvement mentioned in the review?
3. What is the fundamental flaw in the security guarantee of the paper, according to the reviewer?
4. How does the reviewer question the problem setting and its similarity to horizontal data split?
5. Why does the reviewer find the public release of label information confusing?
6. What notation confusion does the reviewer have regarding label space and feature representation?
7. What kind of evaluation and performance microbenchmarks are missing from the paper, according to the reviewer?
Summary Of The Paper Review
Summary Of The Paper

The paper proposes a secure protocol in the FL setting for XGBoost where the dataset is vertically split. The proposed mechanism is based on masking and selecting a random client to generate DP noise.

Review

There are several writing issues with the paper:
i) There should be a background section that briefly introduces the crypto primitives - the main body should be self-contained.
ii) Formal security theorems should be presented in the main paper - only full proofs should be in the Appendix.
iii) A copy-edit pass is required for the paper - some examples:
"In practice, a VFL scheme supporting distributed labels is of necessity." --> is necessary
"without disclosing both feature" --> without disclosing either ... or ..
"where a single label is only associated with a client." --> a client is associated with a single label

There is a fundamental flaw in the security guarantee of the paper - the source client and the noise leader could collude, which would reveal the aggregation without noise (the noise can be subtracted out).

I could not understand the problem setting - the motivating examples are hospitals and banks. But would not every patient visit a particular hospital or a particular bank (based on locality etc.)? How is this different from a horizontal data split?

I could not understand why the info that client A holds label a would be made public (Sec 2.3). Is not label privacy a goal of the scheme?

I was confused by the term label space Y = {y_1, ..., y_m} - it seems like the label set and not the space (the label space would be the space of classes, like binary for a covid test and so on). Also, I am confused by the notation X_j | j ∈ {1, ..., f}, since the number of features is d (instead of f)?

Evaluation is inadequate - performance microbenchmarks, such as a breakdown of client/noise leader and source client time, bandwidth analysis etc., are missing.
ICLR
Title FEVERLESS: Fast and Secure Vertical Federated Learning based on XGBoost for Decentralized Labels

Abstract

Vertical Federated Learning (VFL) enables multiple clients to collaboratively train a global model over vertically partitioned data without revealing private local information. Tree-based models, like XGBoost and LightGBM, have been widely used in VFL to enhance the interpretability and efficiency of training. However, there is a fundamental lack of research on how to conduct VFL securely over distributed labels. This work is the first to fill this gap by designing a novel protocol, called FEVERLESS, based on XGBoost. FEVERLESS leverages secure aggregation via an information masking technique and global differential privacy provided by a fairly and randomly selected noise leader to prevent private information from being leaked during training. Furthermore, it provides label and data privacy against an honest-but-curious adversary, even in the case of collusion of n − 2 out of n clients. We present a comprehensive security and efficiency analysis of our design, and the empirical results from our experiments demonstrate that FEVERLESS is fast and secure. In particular, it outperforms the solution based on additive homomorphic encryption in runtime cost and provides better accuracy than the local differential privacy approach¹.

¹ Code is available at: https://github.com/feverless111/vfl

1 INTRODUCTION

Traditional centralized deep learning models, which demand the collection of a considerable amount of clients' data to maintain high accuracy, may to some degree increase the risk of data breaches. Data may not be easily shared among different entities due to privacy regulations and policies. To tackle this "Data Island" problem (Yang et al. (2019a)), Google proposed Federated Learning (FL) (McMahan et al. (2017)) to allow multiple clients to train a global model without sharing private data. The basic paradigm of FL is that all clients train local models with their own data, and then the information of the local models, e.g., gradients, may be exchanged to produce a global model. Based on different types of data partition (Yang et al. (2019a)), FL can be mainly categorized into Horizontal Federated Learning (HFL) and Vertical Federated Learning (VFL). The former focuses on training with horizontally partitioned data, where clients share the same feature space but differ in their data index sets. Several research works (Shokri & Shmatikov (2015); Orekondy et al. (2019); Geiping et al. (2020); Li & Han (2019)) have found that the training data of HFL is still at high risk of leakage, although private data is kept locally. Other studies (Phong et al. (2018); Truex et al. (2019); Xu et al. (2019); Zhang et al. (2020); Zhu et al. (2020)) have been dedicated to enhancing the security of HFL. On the contrary, VFL is mainly applied in the scenario of training with vertically partitioned data (Wu et al. (2020); Cheng et al. (2021)), where clients share the same data index set but differ in feature space. In this paper, our principal focus is to achieve privacy-preserving training for VFL. To the best of our knowledge, many existing studies (Hardy et al. (2017); Nock et al. (2018); Liu et al. (2020); Yang et al. (2019b); Cheng et al. (2021); Chen & Guestrin (2016); Wu et al. (2020)) have proposed innovative approaches to prevent private information breaches in the context of VFL. Specifically, (Hardy et al. (2017)) introduced encryption-based privacy-preserving logistic regression to safeguard the information of data indexes. (Nock et al.
(2018)) gave a comprehensive discussion on the impact of ID resolution. (Yang et al. (2019b)) introduced a scheme without a coordinator for a limited number of clients. Recently, (Liu et al. (2020)) proposed an asymmetric VFL scheme for logistic regression tackling privacy concerns on ID alignment. Unlike the training models used in the aforementioned works, XGBoost (Chen & Guestrin (2016)), which is one of the most popular models applied in VFL, can provide better interpretability, easier parameter tuning, and faster execution than deep learning in tabular data training (Goodfellow et al. (2016); LeCun et al. (2015)). These practical features and advantages draw the attention of academia and industry to research on XGBoost, especially in the privacy-preserving context. (Wu et al. (2020)) introduced an approach for tree-based model training through a hybrid method composed of homomorphic encryption and secure Multi-Party Computation (MPC) (Goldreich (1998); Bonawitz et al. (2017)). After that, (Cheng et al. (2021)) proposed a similar system to train XGBoost (Chen & Guestrin (2016)) securely over vertically partitioned data by using Additively Homomorphic Encryption (AHE). By applying Differential Privacy (DP) (Dwork (2008)), (Tian et al. (2020)) designed a VFL system to train GBDT without the need for encryption/decryption. However, most of the above solutions based on AHE and MPC do not scale well in terms of efficiency when training XGBoost. Beyond that, all the existing schemes basically assume that training labels are managed and processed by a sole client. In practice, a VFL scheme supporting distributed labels is necessary. For instance, multiple hospitals, clinics and health centers may currently serve as COVID-19 test spots and aim to train a model, e.g., XGBoost, to predict with good interpretability whether citizens (living in various locations) are infected, based on their health records and symptoms. In this context, the labels (i.e., the test results) are likely distributed among different health authorities - even for the same group of patients - and the feature space is vertically partitioned. For example, a cardiac hospital only maintains heart data for its patients, while a psychiatric center holds the mental records, and both authorities may collect and manage each of their registered patients' labels locally. Another common scenario could be in the financial sector, where multiple bank branches and e-commerce companies prefer to build a global model to predict whether their customers will pay for some service (e.g., a car loan) on time. The banks hold part of the features about the customers (e.g., account balance, funding in-and-out records), while the companies may obtain other features (e.g., payment preference). Since the customers may get the same service, e.g., a loan, from different institutions, it is clear that labels must be distributed rather than centralized. In addition to efficiency and functionality aspects, one may also consider capturing stronger security for VFL. Training an XGBoost model usually involves the computation of first- and second-order derivatives of the loss function (note that gradients and hessians contain label information), and their aggregation is required in each round.
In the context where the labels are held by different clients, if the gradients and hessians are transmitted as plaintexts and their summations are known to an aggregator (who could be one of the clients engaged in training), inference and differential attacks (Appendix C) can easily be conducted by the aggregator, resulting in information leakage. To tackle these problems, we propose a fast and secure VFL protocol, FEVERLESS, to train XGBoost (Appendix B.1) on distributed labels without disclosing either feature or label information. In our design, the privacy protection is guaranteed by secure aggregation (based on a masking scheme) and Global Differential Privacy (GDP) (Appendix B.6). We leverage masking instead of heavy-cost multiparty computation, and we guarantee a "perfect secrecy" level for the masked data. In GDP, we use a Verifiable Random Function (VRF) (Appendix B.5) to select a noise leader per round (who cannot be predicted and pre-compromised in advance) to aggregate noise from "selected" clients, which significantly maintains model accuracy. Our contributions can be summarized as follows. (1) We define VFL in a more practical scenario where training labels are distributed over multiple clients. Beyond that, we develop FEVERLESS to train XGBoost securely and efficiently with an elegant combination of a secure aggregation technique (based on Diffie-Hellman (DH) key exchange (Appendix B.2) and a Key Derivation Function (KDF) (Appendix B.4)) and GDP. (2) We give a comprehensive security analysis demonstrating that FEVERLESS is able to safeguard label and feature privacy in the semi-honest setting and to maintain robustness even in the case where n − 2 out of n clients collude. (3) We implement FEVERLESS and perform training time and accuracy evaluations on different real-world datasets. The empirical results show that FEVERLESS maintains efficiency and accuracy simultaneously, and its performance is comparable to the baseline - a "pure" XGBoost without any encryption or differential privacy. Specifically, training the credit card and bank marketing datasets just takes 1% and 6.5% more runtime than the baseline, while the accuracy is only lower than that of the baseline by 0.9% and 3.21%, respectively².

² For the banknote authentication dataset, FEVERLESS takes 13.96% more training time than the baseline, and the accuracy is 30.4% lower. This is because the model is trained on a small-scale dataset, so its robustness is seriously affected by noise.

2 PROBLEM FORMULATION

2.1 SYSTEM MODEL

Before proceeding, we give some assumptions on our model. We suppose that a private set intersection (Kolesnikov et al. (2017); Pinkas et al. (2014)) has been used to align data IDs before the training starts, so that each client shares the same data index space I. But the names of features are not allowed to be shared among clients. As for the information of the label distribution (indexes indicating which client each label belongs to; e.g., the label of the i-th data instance is held by client A), we consider the following conditions: (1) this information is revealed to the public in advance; or (2) the information is not allowed to be published, but the training can still be accomplished (with extra cost). We also consider that the training is conducted on a dataset with m samples comprising feature space X = {x₁, ..., x_m}, each containing f features, and label set Y = {y₁, ..., y_m}. Besides, the features {X^(c)_j | j ∈ {1, ..., f}} and labels {y^(c)_i | i ∈ {1, ..., m}} are held among n clients, where each client has at least one feature and one label. X^(c)_j and y^(c)_i refer to the j-th feature and the i-th label owned by the c-th client, respectively.
Considering a practical scenario wherein training labels are distributed among clients, we propose a new variant of VFL, named VFL over Distributed Labels (DL-VFL). The concrete definition is given as follows.

Definition 1 (DL-VFL). Given a training set with m data samples consisting of feature space X, label space Y, index space I and client set C, we have:

X^c ∩ X^{c′} = ∅, Y^c ∩ Y^{c′} = ∅, I^c = I^{c′}, ∀c, c′ ∈ C, c ≠ c′.   (1)

A client c participating in DL-VFL shares the same sample ID space I with the corresponding labels, where a single label belongs to only one client. Different clients hold subsets of X sampled from the feature space. To achieve privacy-preserving XGBoost training, we further define two roles.

Definition 2 (Source client). A source client with split candidates wants to compute the corresponding L_split based on Eq. (4). But some labels are missing, so Σ g_i and Σ h_i cannot be derived.

For the case where a source client does not hold all labels in the current split candidates, we propose a solution based on secure aggregation and global differential privacy to help the source client compute L_split while safeguarding the other clients' privacy. We consider the two conditions regarding whether the label distribution is publicly known. We find that if we keep the label distribution hidden, we incur extra communication overhead to perform training. The detailed explanation is given in Appendix F. Note that each client may have a chance to act as a source client because all the labels are distributed: the source client leads the L_split computation, and the other clients provide the missing label values to the source client. To achieve GDP, we define a noise leader, selected fairly and randomly from all clients (except the source client), preventing clients from being compromised beforehand.

Definition 3 (Noise leader). By using a VRF, a noise leader is responsible for generating the maximum leader score, aggregating differentially private noise from a portion of the clients and adding the noise to the gradients and hessians.

We summarize the main notations in Table 1 (see Appendix A).

2.2 THREAT MODEL

We mainly consider potential threats incurred by participating clients and outside adversaries. We assume that all clients are honest-but-curious, which means they strictly follow the designed algorithms but try to infer private information of other clients from the received messages. Besides, we also consider up to n − 2 clients colluding to conduct attacks, with at least one non-colluded client adding noise per round. Through authenticated channels, DH key exchange can be securely executed among clients. Other messages are transmitted over public channels, and outside attackers can eavesdrop on these channels and try to reveal information about the clients during the whole DL-VFL process. Note that this paper mainly focuses on solving privacy issues in training DL-VFL based on XGBoost. Thus, other attacks, like data poisoning and backdoor attacks that deteriorate model performance, are orthogonal to our problem.

3 A PRACTICAL PRIVACY-PRESERVING PROTOCOL

3.1 FEVERLESS PROTOCOL DESCRIPTION

To prevent a source client from knowing the gradients and hessians sent by other clients, one may directly use MPC (Damgård et al.
(2012)) based on AHE (Paillier (1999); Wu et al. (2020)). But this method yields expensive computation costs. Avoiding complex mechanisms like MPC, we leverage a secure aggregation protocol via a masking scheme based on DH key exchange (Bonawitz et al. (2017); Ács & Castelluccia (2011); Tian et al. (2020)). By further using a KDF and a hash function (see Appendix B.3 & B.4), our maskings (for gradients and hessians) can be derived without exchanging keys in every training round. Our approach significantly reduces the communication cost while maintaining robustness against up to n − 2 colluded clients. Meanwhile, the secure aggregation provides "perfect secrecy" for the broadcast messages. After receiving the broadcast messages, the maskings cancel out at the source client side. But the masking alone is unable to defend against differential attacks. One may consider using Local Differential Privacy (LDP) (Kairouz et al. (2014)) so that each client adds noise to every sent message, at barely any extra computation cost. However, the noise accumulated from all clients may seriously affect the model accuracy. To tackle this problem, we use a GDP (Wei et al. (2020)) approach with noise leader selection. A hybrid method is thus formed from the masking scheme and GDP, so that each client's sensitive information is protected by the "masks" and the aggregated values are secured by the noise injected by the chosen clients. We briefly introduce our design here; the detailed algorithms and more explanations are given in Appendix D. Assume each client c ∈ [1, n] generates its secret key sk_c and computes gradients g^(c)_i and hessians h^(c)_i locally, for the indexes {i | y_i ∈ Y^c}. FEVERLESS works as follows.

1. Broadcast missing indexes. The source client broadcasts mID = {i | y_i ∉ Y^c}.
2. Key exchange computation. Each client c computes its public key pk_c = g^{sk_c} using its secret key sk_c, sends pk_c to the other clients, and computes the corresponding shared keys³ {S_{c,c′} = pk_{c′}^{sk_c} = g^{sk_c · sk_{c′}} | c, c′ ∈ C, c ≠ c′} from its secret key sk_c and the received public keys {pk_{c′} | c′ ∈ C}.
3. Data masking. Each client c runs the masking generation algorithm to compute the maskings protecting its gradients and hessians. Specifically, based on the KDF, the clients' indexes and the number of queries, the maskings are computed as mask^(c)_g ← Σ_{c′≠c} (|c−c′|/(c−c′)) · H(S_{c,c′} ‖ 0 ‖ query) and mask^(c)_h ← Σ_{c′≠c} (|c−c′|/(c−c′)) · H(S_{c,c′} ‖ 1 ‖ query).⁴ Then the masked gradients G^(c) and hessians H^(c) are generated by G^(c) = Σ_{i∈mIDs} g^(c)_i + mask^(c)_g − r^(c)_g and H^(c) = Σ_{i∈mIDs} h^(c)_i + mask^(c)_h − r^(c)_h.
4. Noise leader selection. Each client generates a selection score selec_c using the VRF, H(SIGN_{sk_c}(count, mIDs, r)), and broadcasts it, where count is the number of times the clients have run the VRF, r is a fresh random number, and SIGN is the signature scheme (see Appendix B.5 for more details). The client with the maximum score becomes the noise leader. For ease of understanding, in Figure 1 we assume client n, with the largest selection score selec_max_n, is the leader.

³ Shared keys are only generated once, and the KDF is used to generate the remaining maskings.
⁴ For simplicity, we omit the modular computations. The complete calculation processes are elaborated in Algorithms 3-5.

5. Noise injection. a) The noise leader selects k clients to add noise (for the details of the selection, see Algorithm 5 in Appendix D).
b) The selected clients send {ñ^(c)_g = N(0, ∆²_g σ²) + r^(c)_g, ñ^(c)_h = N(0, ∆²_h σ²) + r^(c)_h | c ∈ k} to the noise leader, in which r^(c)_g and r^(c)_h are two random values used to mask the noise. c) The leader aggregates the noise: Ñ_g = k · N(0, ∆²_g σ²) + R_g and Ñ_h = k · N(0, ∆²_h σ²) + R_h, and further adds them to G^(n) and H^(n), respectively.
6. Aggregation and computation. All clients send the masked values to the source client. The source client computes Σ_{c=1}^{n} G^(c) (= Σ_i g_i + k · N(0, ∆²_g σ²) once the maskings cancel), Σ_{c=1}^{n} H^(c) (= Σ_i h_i + k · N(0, ∆²_h σ²)) and L_split.
7. Final update. The client with the maximum L_split updates the model following XGBoost (Chen & Guestrin (2016)) and broadcasts the updated model and the data indexes of the child nodes (step 8 in Figure 1).

Figure 1 gives an overview of FEVERLESS. Note that this process can be conducted iteratively. For simplicity, only the core calculation processes are shown here; more details are in Appendix D.

3.2 THEORETICAL ANALYSIS

Computation cost: We use B and d to denote the number of buckets and the maximum depth, respectively, and f^(c) here represents the number of features held by a client c. For each client c, the computation cost can be divided into 4 parts: (1) performing at most f^(c) · B · NT · (2^d − 1) computations of L_split and w, taking O(f^(c) · B · NT · 2^d) time; (2) creating n − 1 shared keys and 1 public key, which is O(n); (3) taking O(f^(c) · B · NT · 2^d) time to compute VRF outputs, select the noise leader and generate noise; (4) generating 2f^(c) · B · NT · (2^d − 1) maskings, which takes O(f^(c) · B · NT · 2^d · n) time. Overall, each client's computation complexity is O(f^(c) · B · NT · 2^d · n).

Communication cost: Each client's communication cost can be calculated as: (1) broadcasting at most f^(c) · B · NT · (2^d − 1) sets of missing indexes mID; (2) broadcasting 1 public key and receiving n − 1 public keys from other clients; (3) broadcasting 1 leader selection score and sending noise to the noise leader at most f^(c) · B · NT · (2^d − 1) times; (4) sending the source client the 2 masked gradients and hessians of size 2⌈log₂ N⌉. Therefore the overall communication cost is f^(c) · B · NT · (2^d − 1) · (‖mID‖ · α_I + α_L + α_N + n · α_K + 2⌈log₂ N⌉), where α_I, α_L, α_N and α_K refer to the number of bits of an index, a leader selection score, a noise value and a public key, respectively. Thus, we have the communication complexity O(f^(c) · B · NT · 2^d).

3.3 SECURITY ANALYSIS

We prove that FEVERLESS provides label and data privacy against an adversary controlling at most n − 2 clients in the semi-honest setting (Smart (2016)). Here, we provide a brief summary of our analysis and theorems. The formal proofs, in the random oracle model, are given in Appendix E.

Label Privacy: Label privacy implies that the owner of a label among the honest parties should not be leaked to the adversary. We achieve this by using a secure aggregation mechanism where the masks are created via DH key exchange and a KDF. In brief, we show that because of the decisional DH problem (see Definition 4), the adversary cannot distinguish the individual values from randomly chosen ones. That is why the adversary A cannot learn the owner of a label.

Data Privacy: FEVERLESS provides data privacy, meaning that an adversary A cannot extract the data of any honest party. Individual data values are not separable from random values because of the secure masking. If the source client is not part of the adversary, no data information is leaked.
But we require an additional countermeasure for the case where the source client is part of the adversary, because it can collect the summation of the data values. We use differential privacy (Dwork et al. (2006a;b)) to achieve data privacy. Because of the noise added by differential privacy, the adversary cannot learn the individual data of an honest client. Moreover, we select the noise clients via the VRF, which ensures that the noise leader cannot be predicted or compromised in advance.

Theorem 3.1 (A not including source client). There exists a PPT simulator Sim for all |C| := n ≥ 3, |X| := f ≥ n, |Y| := m ≥ 1, ⋃_{c∈C} X^(c), ⋃_{c∈C} Y^(c) and A ⊂ C with |A| ≤ n − 2, such that the output of Sim is indistinguishable from the output of REAL: REAL^{C,X,Y}_A(X^C, Y^C) ≡ Sim^{C,X,Y}_A(X^A, Y^A).

Theorem 3.2 (A including source client). There exists a PPT simulator Sim for all |C| := n ≥ 3, |X| := f ≥ n, |Y| := m ≥ 1, ⋃_{c∈C} X^(c), ⋃_{c∈C} Y^(c) and A ⊂ C with |A| ≤ n − 2, such that the output of Sim is indistinguishable from the output of REAL: REAL^{C,X,Y}_A(X^C, Y^C) ≡ Sim^{C,X,Y}_A(G, H, X^A, Y^A), where G = Σ_{i∈mIDs} g^(c)_i + N(0, (∆_g σ)²) and H = Σ_{i∈mIDs} h^(c)_i + N(0, (∆_h σ)²).

Theorem 3.3 (Privacy of the Inputs). No A ⊂ C with |A| ≤ n − 2 can retrieve the individual values of the honest clients with probability 1 − Σ_{i=0}^{k̂} C^i_h C^{k̂−i}_{n−2−h} (P_t)^{k̂} (1 − P_t)^{(n−k̂)} · (C^k_{k̂−i} / C^k_{k̂}), where h and k̂ refer to the number of non-colluded clients and the number of clients whose selection score is larger than the threshold, respectively, and P_t is the probability that a selection score is larger than the threshold.

4 EXPERIMENT

We perform evaluations on accuracy, runtime performance and communication cost, and compare our design with two straightforward secure approaches: one based on LDP (for accuracy), and the other built on AHE with GDP (for runtime). These approaches are among the most commonly used components for privacy-preserving FL, and they could be the building blocks for complex mechanisms, e.g., MPC. We note that our protocol should intuitively outperform MPC-based solutions, and one may leverage our source code to make further comparisons if interested. In the experiments, the baseline, which is the pure XGBoost algorithm, follows the training process of Figure 1 without using any privacy-preserving tools (i.e., skipping the key exchange, masking and noise injection steps). LDP does not conduct the DH key exchange, but each client injects noise into the aggregation of gradients and hessians, while AHE follows Figure 1 except for the DH key exchange; in AHE, each client instead sends (additively) encrypted messages to the source client when replying. Here we show the performance of the best case, where there is only one (non-colluded and randomly selected) client adding noise per round (k = 1). For other results (where k ≠ 1), see Appendix H.2. Note that we present the communication cost in Appendix H.5.

4.1 EXPERIMENT SETUP

To present comprehensive results on accuracy, we set ε to 10, 5, 2 and 1, and δ to 10⁻⁵. In terms of accuracy and runtime, we evaluate different situations by varying the number of clients, the number of trees, and the maximum depth of trees (from 2 to 10). Other training parameters follow the suggestions in (Chen & Guestrin (2016)) and the XGBoost library⁵. To deliver fair results, we conduct each test for 20 independent trials and then calculate the average.

⁵ https://xgboost.readthedocs.io/
⁶ https://archive.ics.uci.edu/ml/datasets/banknote+authentication

Datasets. We run the experiments on three datasets - Credit Card (Yeh & Lien (2009)), Bank Marketing (Moro et al. (2014)) and Banknote Authentication⁶ - for classification tasks.
To fairly investigate the model performance in DL-VFL, we make the labels as sparse as possible, distributed uniformly over the clients. We give more details of the experiment setup in Appendix G.

4.2 EVALUATION ON ACCURACY

In Figure 2, we present a clear picture of the accuracy performance based on the #tree and the maximum depth under (2, 10⁻⁵)-DP. We merge the results over #client for each tree structure, meaning that each bar's value is the mean accuracy over the different client numbers. The accuracy of the baseline on credit card (about 0.82) and bank marketing (nearly 0.9) remains unchanged as the #tree and maximum depth increase, while the accuracy on banknote authentication rises from 0.9 to approximately 1.0. To highlight the differences and ensure all results are displayed clearly, we set the accuracy ranges to [0.5, 0.9], [0.5, 1] and [0, 1] for the three datasets, respectively. Note that the performance w.r.t. the #client is given in Appendix H.1.

Compared with the baseline, as shown in the top and middle rows of Figure 2, FEVERLESS and LDP suffer from continuously shrinking accuracy as the tree structure becomes more complex. This is because the injected noise accumulates in the model as the number of queries increases, and the accuracy is easily affected by the depth. In the worst case, where the #tree and maximum depth are both equal to 10, FEVERLESS decreases by 10.37% (resp. 14.98%), and LDP drops by 24.78% (resp. 24.59%) on credit card (resp. bank marketing). But on average, FEVERLESS' accuracy only shrinks by around 0.9% (resp. 3.21%), while LDP suffers roughly 3x (resp. 2x) that accuracy loss. The difference in the degree of deterioration mainly comes from how much noise is added per query. We note that the deterioration of FEVERLESS is independent of the #client; thus, we can maintain high accuracy even when there is a considerable number of clients. Although less noise is added in FEVERLESS, the accuracy still falls to almost the same level as LDP (around 50%, like random guessing in binary classification) in the bottom row of Figure 2. This is because the model is trained on an extremely small dataset, which makes it hard to maintain robustness and relatively sensitive to noise. With a larger ε, our advantage becomes clearer. The experiments conducted on the banknote authentication dataset with larger ε are given in Appendix H.3.

To distinguish the performance between FEVERLESS and LDP more clearly, Figure 3 shows the comparison over different ε, with #depth and #tree set to 10. The performance of the model decays as ε decreases. In the left (resp. middle) panel of Figure 3, the averaged accuracy of FEVERLESS falls from 0.7686 to 0.5967 (resp. from 0.8517 to 0.6831), while that of LDP decreases to 0.5299 (resp. 0.5853). We notice that the highest values of LDP stay at the same level as those of FEVERLESS. This is because, in the case of 2-client training, only one client needs to add noise in LDP (which is identical to our GDP solution). Finally, the worst case can be seen on the right of Figure 3, due to the weak robustness of the model trained on banknote authentication: the results are far from the baseline. But even in this case, FEVERLESS still holds a slight advantage over LDP.

4.3 EVALUATION ON TRAINING TIME

To highlight the runtime complexity, we also average the results over client numbers for each tree structure.
We further set the time ranges to [0s, 9,500s], [0s, 3,500s] and [0s, 110s] for the three datasets to deliver visible results. Note that since the banknote dataset contains the fewest samples, it delivers the best training efficiency here. Figure 4 presents the comparison of training time when varying the maximum depth and the number of trees on the three datasets. The training time increases exponentially with the depth and linearly with the number of trees, which is consistent with our analysis in Section 3.2. In Figure 4, compared with the baseline, the runtime of FEVERLESS increases by at most 110.3s (resp. 50s, 4.3s), while AHE requires around a 70x (resp. 48x, 21x) increase in credit card (resp. bank marketing, banknote authentication), where #depth and #trees are equal to 10. In the average case, FEVERLESS consumes approximately 1% (resp. 6.5%, 13.96%) more training time than the baseline, while AHE requires 351% (resp. 155.1%, 674%) extra, w.r.t. the three datasets. AHE's poor performance is due to the laborious encryption calculations, in which each client has to conduct an encryption per query. By contrast, the maskings in FEVERLESS avoid these excessive costs. We further investigate the runtime performance w.r.t. the #client in Appendix H.

5 CONCLUSION AND FUTURE WORK

We consider a practical scenario where labels are distributedly maintained by different clients for VFL. By leveraging secure aggregation and GDP, we present a novel system, FEVERLESS, to train XGBoost securely. FEVERLESS achieves perfect secrecy for labels and data, and adversaries cannot learn any information about the data if the source client is not corrupted. With DP against differential attacks, the source client knows nothing more than the summation. Our design is also robust to the collusion of n−2 out of n clients. The experimental results show that FEVERLESS is fast and accurate, taking only 1% extra training time and sacrificing 0.9% accuracy, as compared to the pure XGBoost. In Appendix F, we discuss how to reduce noise, hide the distribution of labels and use other security tools. Although our system achieves great performance in terms of security and efficiency, its accuracy still degrades on small-scale datasets; this remains an open problem. We will also consider secure solutions against malicious adversaries.

A NOTATIONS

The frequently used notations are summarized in Table 1.

B PRELIMINARIES

B.1 XGBOOST

XGBoost (Chen & Guestrin (2016)) is a popular tree-based model for tabular data training that can provide better interpretation, easier parameter tuning and faster execution speed than deep learning (Goodfellow et al. (2016); LeCun et al. (2015)). It also outperforms other well-known boosting tree systems in terms of accuracy and efficiency, like Spark MLLib (Meng et al. (2016)) and H2O (Chen & Guestrin (2016)), especially for large-scale datasets. Therefore, in this paper, we consider using XGBoost as a building block for classification tasks. Assume a training set with m data points composed of feature space X = {x1, · · · , xm} and label space Y = {y1, · · · , ym}. Before training starts, every feature is sorted based on its values, and split candidates are set for each feature. XGBoost builds trees based on the determination of the defined split candidates and some pruning conditions.
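As an illustration of this pre-training step, the sketch below sorts a feature column and derives split candidates; the paper does not prescribe a specific candidate-selection rule, so the evenly spaced quantiles used here are an assumption.

```python
import numpy as np

def split_candidates(feature: np.ndarray, n_buckets: int) -> np.ndarray:
    """Sort a feature column and take evenly spaced quantiles as split
    candidates; samples falling between two adjacent candidates form a
    bucket, matching the bucketing described above."""
    qs = np.linspace(0, 1, n_buckets + 1)[1:-1]   # interior quantiles
    return np.quantile(np.sort(feature), qs)

feature = np.random.rand(100)
print(split_candidates(feature, n_buckets=4))     # 3 candidates -> 4 buckets
```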
Specifically, gradients and hessians are first computed according to Eq. (2) and Eq. (3) for each data entry, where $\hat{y}_i^{(t-1)}$ denotes the prediction of the previous tree for the i-th data point, and $y_i$ is the label of the i-th data point:

$g_i = \frac{1}{1+e^{-\hat{y}_i^{(t-1)}}} - y_i = \hat{y}_i - y_i$, (2)

$h_i = \frac{e^{-\hat{y}_i^{(t-1)}}}{(1+e^{-\hat{y}_i^{(t-1)}})^2}$. (3)

For splitting nodes, the XGBoost algorithm determines the best split candidate among all others as the one maximizing $L_{split}$ in Eq. (4), where $\lambda$ and $\gamma$ are regularization parameters:

$L_{split} = \frac{1}{2}\left[\frac{(\sum_{i\in I_L} g_i)^2}{\sum_{i\in I_L} h_i + \lambda} + \frac{(\sum_{i\in I_R} g_i)^2}{\sum_{i\in I_R} h_i + \lambda} - \frac{(\sum_{i\in I} g_i)^2}{\sum_{i\in I} h_i + \lambda}\right] - \gamma$. (4)

The current node becomes a leaf node if either of the following conditions is fulfilled: the maximum depth of the tree is reached, or the maximum impurity value is less than the preset threshold. The calculation of the leaf value follows Eq. (5):

$w = -\frac{\sum_{i\in I} g_i}{\sum_{i\in I} h_i + \lambda}$. (5)

B.2 DIFFIE-HELLMAN KEY EXCHANGE

Based on the Decision Diffie-Hellman (DDH) hard problem (Boneh (1998)) defined below, Diffie-Hellman key exchange (DH) (Diffie & Hellman (1976)) provides a method for exchanging keys across public communication channels. Without loss of generality and correctness, it consists of a tuple of algorithms (Param.Gen, Key.Gen, Key.Exc). The algorithm $(\mathbb{G}, g, q) \leftarrow$ Param.Gen$(1^\alpha)$ generates public parameters (a group $\mathbb{G}$ with prime order q generated by a generator g) based on the security parameter $\alpha$. $(sk_i, pk_i) \leftarrow$ Key.Gen$(\mathbb{G}, g, q)$ allows client i to generate a secret key ($sk_i \xleftarrow{\$} \mathbb{Z}_q$) and compute the public key ($pk_i \leftarrow g^{sk_i}$). The shared key is computed by $(pk_i^{sk_j}, pk_j^{sk_i}) \leftarrow$ Key.Exc$(sk_i, pk_i, sk_j, pk_j)$. Inspired by (Bonawitz et al. (2017); Ács & Castelluccia (2011)), we utilize shared keys as maskings to protect label information against inference attacks during transmission over public channels. Correctness requires $pk_i^{sk_j} = pk_j^{sk_i}$. The security relies on the DDH problem (Boneh (1998)), which is defined as:

Definition 4 (Decision Diffie-Hellman). Let $\mathbb{G}$ be a group with prime order q and g the fixed generator of the group. The Probabilistic Polynomial Time (PPT) adversary $\mathcal{A}$ is given $g^a$ and $g^b$, where a and b are randomly chosen. The probability of $\mathcal{A}$ distinguishing $(g^a, g^b, g^{ab})$ from $(g^a, g^b, g^c)$ for a randomly chosen c is negligible:

$\left| \Pr[a, b \xleftarrow{\$} \mathbb{Z}_q : \mathcal{A}(g, g^a, g^b, g^{ab}) = \text{true}] - \Pr[a, b, c \xleftarrow{\$} \mathbb{Z}_q : \mathcal{A}(g, g^a, g^b, g^c) = \text{true}] \right| < \text{negl}(\alpha)$.

B.3 PSEUDO-RANDOM GENERATOR AND HASH FUNCTION

A Pseudo-Random Generator (PRG) (Håstad et al. (1999)) is an algorithm able to generate random numbers. "Pseudo-random" here means that the generated numbers are not truly random but have properties similar to those of random numbers. Generally, the pseudo-random numbers are determined by given initial values, a.k.a. seeds. In cryptographic applications, a secure PRG requires that attackers who do not know the seed can distinguish a truly random number from a PRG output only with negligible probability. Similar to a PRG, a hash function maps data of arbitrary size to a value of fixed bit-length. To reduce the communication cost of FEVERLESS, we use SHAKE-256 (Sha (2015)), one of the hash functions in the SHA-3 family (Aumasson et al. (2008)), to generate maskings of customized size.

B.4 KEY DERIVATION FUNCTION

A Key Derivation Function (KDF) (Krawczyk & Eronen (2010)) is a kind of hash function that derives multiple secret keys from a main key by utilizing a Pseudo-Random Function (PRF) (Kaliski (2005)). In general, a KDF algorithm $DK \leftarrow \mathrm{KDF}(\text{mainkey}, \text{salt}, \text{rounds})$ derives keys DK based on a main key, a cryptographic salt and the current round of the processing algorithm.
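As a minimal sketch of how such a per-round masking could be derived in FEVERLESS, combining the DH shared key, a salt (0 for gradients, 1 for hessians) and the query counter under SHAKE-256, consider the following; the function name and the exact byte layout are our assumptions.

```python
import hashlib

def derive_mask(shared_key: bytes, salt: int, query: int, n_bytes: int = 16) -> int:
    """Derive a per-query masking in the spirit of H(S_{c,c'} || salt || query)
    with SHAKE-256; salt 0 is used for gradients and 1 for hessians, and the
    query counter makes the masking fresh every round without re-running DH."""
    h = hashlib.shake_256()
    h.update(shared_key + bytes([salt]) + query.to_bytes(8, "big"))
    return int.from_bytes(h.digest(n_bytes), "big")

key = b"example-shared-key"            # stands in for a DH shared key
print(derive_mask(key, salt=0, query=1))  # gradient masking, round 1
print(derive_mask(key, salt=1, query=1))  # hessian masking, round 1
```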
Security requires that a KDF be robust against brute-force and dictionary attacks. Inspired by (Zdziarski (2012)), where key shares generated by DH key exchange are converted to AES keys, in this paper we use a KDF to generate maskings for every round so as to reduce communication cost. The main key we use is generated by DH key exchange.

B.5 VERIFIABLE RANDOM FUNCTION

A Verifiable Random Function (VRF) (Micali et al. (1999)) is a PRF providing verifiable proofs of the correctness of its outputs. It is a tool widely used in cryptocurrencies, smart contracts and leader selection in distributed systems (Micali (2016)). Basically, given an input x, a signature scheme and a hash function, a practical leader selection scheme with VRF (Micali (2016)) works as:

$S_{leader} \leftarrow H(\mathrm{sign}_{sk_i}(x))$, (6)

where $sk_i$ is the secret key of the i-th client, and the maximum leader score $S_{leader}$ is used to determine the leader. The security and unforgeability of the VRF require that the signature scheme has the uniqueness property and that the hash function maps the signature to a random string of fixed size. The correctness of $S_{leader}$ is proved by the signature of x.

B.6 DIFFERENTIAL PRIVACY

Differential Privacy (DP) (Dwork et al. (2006a;b)) is a data protection system targeting the publication of statistical information of datasets while keeping individual data private. Its security requires that adversaries cannot distinguish the statistical change between two datasets differing in an arbitrary data point. The most widely used DP mechanism is $(\epsilon, \delta)$-DP, which requires less injected noise than the originally proposed $\epsilon$-DP while providing the same privacy level. The formal definition is given as follows.

Definition 5 ($(\epsilon, \delta)$-Differential Privacy). Given two real positive numbers $(\epsilon, \delta)$ and a randomized algorithm $\mathcal{A}: D^n \rightarrow \mathcal{Y}$, the algorithm $\mathcal{A}$ provides $(\epsilon, \delta)$-differential privacy if for all datasets $D, D' \in D^n$ differing in only one data sample, and all $S \subseteq \mathcal{Y}$:

$\Pr[\mathcal{A}(D) \in S] \leq e^{\epsilon} \cdot \Pr[\mathcal{A}(D') \in S] + \delta$. (7)

Note the noise $N \sim \mathcal{N}(0, \Delta^2\sigma^2)$ is added to the output of the algorithm, where $\Delta$ is the $l_2$-norm sensitivity of D and $\sigma = \sqrt{2\ln(1.25/\delta)}/\epsilon$ (Abadi et al. (2016)).

C PRIVACY CONCERN

Since we assume feature names are not public information for all clients, and the values of features never leave the clients, the privacy issues are mainly incurred by the leakage of label information.

C.1 INFERENCE ATTACK

During the training process, gradients and hessians are sent to the source client for the $L_{split}$ computation. For binary classification, a single gradient lies in the range $(-1, 0) \cup (0, 1)$. According to Eq. (2), a label can be inferred as 1 or 0 if the gradient is in $(-1, 0)$ or $(0, 1)$, respectively. Besides, the hessian in Eq. (3) can leak the prediction of the corresponding data sample; as training progresses, the prediction gets increasingly closer to the true label, so the source client and outside attackers can infer the true label with high probability. Gradients and hessians therefore cannot be transmitted in plaintext. We thus use a secure aggregation scheme to protect them from inference attacks.

C.2 DIFFERENTIAL ATTACK

A differential attack can happen at any time, and many times, during the calculation of gradients and hessians. Figure 5 describes an example of a differential attack taking place in a single node split. After sorting feature 1, the semi-honest source client defines 2 split candidates and further computes $G_{\{2,5\}} = g_2 + g_5$ and $G_{\{1,2,3,5\}} = g_2 + g_5 + g_1 + g_3$ for candidates 1 and 2, respectively.
Since the source client holds label 2, even if $G_{\{2,5\}}$ is derived by secure aggregation, $g_5$ can still be revealed as $G_{\{2,5\}} - g_2$. Another example of a differential attack is shown in Figure 6. Assume split candidate 1 is the one used to split the root node. In the current tree structure, the source client may split the right node by computing $L_{split}$ of split candidate 2. In this case, $G_{\{1,3\}}$ is aggregated by the source client, and $g_5$ can be revealed as $G_{\{1,2,3,5\}} - G_{\{1,3\}} - g_2$, where $G_{\{1,2,3,5\}}$ was computed in the previous node.

D MORE DETAILS ON FEVERLESS PROTOCOL

D.1 XGBOOST TRAINING OVER DISTRIBUTED LABELS

At the initial stage, all clients agree on a tree structure (maximum depth and the number of trees) and the learning rate for updating predictions. To avoid overfitting, we define regularization parameters. The impurity threshold is another vital parameter, used to identify internal and leaf nodes via the maximum impurity. After that, we choose $\epsilon$, $\delta$ for DP, and the hash function for masking generation and noise leader selection. Besides, we select a multiplicative group $\mathbb{G}$ of order q generated by a generator g and a large prime number p to run DH.

In the initialization process, all clients set the parameters and sort their own features based on values. Then, split candidates can be defined, and data samples between two adjacent candidates are grouped into a bucket. At the end, all entries are assigned initialized values to calculate the derivatives of the loss function. The detailed algorithm is described as follows.

Algorithm 1: Initialization
1 Set parameters: all clients agree on the maximum depth of a tree d, the number of trees (NT), learning rate (η), regularization parameters (λ, γ), the threshold of L_split, ε, δ, p, g, selection portion (p) and hash function
2 for c ∈ [1, n] do
3   for each feature j owned by c do
4     sort(X(c)_j)
5     define buckets: B_jz
6   end
7   set initialized values: ŷ_i(c)
8 end

After initialization, all clients can invoke Algorithm 2 to train the model collaboratively. The inputs are the features $X_j^{(c)}$ and labels $y_i^{(c)}$ distributed over different clients, while the output is a trained XGBoost model that can be used for prediction. Generally, trees are built one by one, and we see from lines 4-10 of Algorithm 2 that each client computes gradients and hessians at the beginning of the construction of a new tree. Following that, clients split the current node. Note that XGBoost training in DL-VFL requires each client to calculate G and H. If the labels in some buckets are incomplete, the corresponding gradients and hessians cannot be computed. Thus, each client should first broadcast the missing data index set mID (see lines 15-17 of Algorithm 2). Based on the predefined bucket $B_{jz}$, mID can be defined if labels in $B_{jz}$ are not held by the client. In each broadcast, the client sending the messages is regarded as a source client. Then the others send the corresponding $g_i^{(c')}$ and $h_i^{(c')}$ back to the source client to compute $L_{split}$ through Algorithms 3-5, depicted in Appendix D.2. After finding a maximum impurity $L^{(c)}_{split\,max}$, the current node is split into "left" and "right" nodes if $L^{(c)}_{split\,max} >$ threshold $L_{split}$, where the value of the split candidate is owned by c.
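For reference, a direct transcription of Eq. (4) and Eq. (5) in Python reads as follows; the function names and example numbers are ours.

```python
def split_gain(g_left, h_left, g_right, h_right, lam=1.0, gamma=0.0):
    """L_split from Eq. (4): gain of splitting a node whose gradient and
    hessian sums are (g_left + g_right, h_left + h_right)."""
    def score(g, h):
        return g * g / (h + lam)
    g_all, h_all = g_left + g_right, h_left + h_right
    return 0.5 * (score(g_left, h_left) + score(g_right, h_right)
                  - score(g_all, h_all)) - gamma

def leaf_weight(g, h, lam=1.0):
    """Leaf value w from Eq. (5)."""
    return -g / (h + lam)

# a toy split: the source client only needs the aggregated sums G and H
print(split_gain(g_left=-2.0, h_left=1.5, g_right=3.0, h_right=2.5))
print(leaf_weight(g=1.0, h=4.0))
```

Notice that only the aggregated sums enter these formulas, which is precisely why secure aggregation of G and H suffices for training.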
In node splitting, clients set a given node as a "leaf" if the current depth reaches the predefined maximum depth or the maximum $L_{split}$ is less than the predefined threshold of $L_{split}$ (see lines 12 and 24-32 of Algorithm 2). The leaf value is derived following Eq. (5), which takes G and H as inputs. Since a leaf node is split as either the "left" or "right" child of its parent node by one of the clients in C, this client knows G and H, and the leaf value can be derived. Finally, this leaf value is broadcast, and the clients who own the corresponding $g_i^{(c)}$ and $h_i^{(c)}$ use it to update their predictions. The details of the above process are shown in Algorithm 2.

Algorithm 2: Protocol overview
1 Input: {X(c)_j | j ∈ f, c ∈ |C|}: features, {y(c)_i | i ∈ m, c ∈ |C|}: labels
2 Output: XGBoost model
3 Building trees:
4 for nt ∈ [1, NT] do
5   for c ∈ [1, n] do
6     for each data entry i owned by c do
7       g(c)_i ← ∂_{ŷi(c)} Loss(ŷi(c), y(c)_i)
8       h(c)_i ← ∂²_{ŷi(c)} Loss(ŷi(c), y(c)_i)
9     end
10  end
11  for each node in the current tree do
12    while current depth < d do
13      for c ∈ [1, n] do
14        for each feature j owned by c do
15          for each B_jz owned by c do
16            broadcast mID = {i | y_i ∉ Y^c}
17          end
18          aggregate G, H by Algorithms 3-5
19          compute L_split according to Eq. (4)
20        end
21        find the maximum L(c)_split and broadcast
22      end
23      L(c)_split_max ← max({L(c)_split | c ∈ [1, n]})
24      if L(c)_split_max ≤ threshold L_split then
25        set current node as leaf node
26        c computes w and broadcasts
27        break
28      else
29        c splits current node into left and right nodes, and broadcasts their data indexes
30      end
31    end
32    set remaining nodes as leaf nodes
33    c computes w and broadcasts
34    clients participating in the calculation of w: update ŷi(c)
35  end
36 end

D.2 SECURE AGGREGATION WITH GLOBAL DIFFERENTIAL PRIVACY

In lines 15-19 of Algorithm 2, the source client is able to compute $L_{split}$ from the requested missing data indexes and the aggregation of the received messages. To prevent inference and differential attacks on labels by the source client and outside adversaries, we propose a privacy-preserving approach, shown in Algorithms 3-5, that "twists" DH key exchange, noise leader selection and secure aggregation together. This method represents a viable alternative for training XGBoost securely in DL-VFL without demanding excessive computational resources or affecting model accuracy.

To generate the secure-but-cancellable maskings, we adopt DH here. In Algorithm 3, all clients randomly select numbers as their secret keys and generate the corresponding public keys. Any two clients in the set C exchange public keys and compute the corresponding shared keys. For simplicity, we do not describe the signature scheme for DH. We assume DH is conducted over authenticated channels, which means the man-in-the-middle attack (Khader & Lai (2015)) is invalid here.

Algorithm 3: Diffie-Hellman key exchange
1 for c ∈ [1, n] do
2   sk_c ←$ Z*_p
3 end
4 for c ∈ [1, n] do
5   pk_c = g^{sk_c} mod p
6   for c' ∈ [1, n] ∧ c' ≠ c do
7     S_{c,c'} = pk_{c'}^{sk_c} mod p
8   end
9 end

If the shared keys were used as maskings directly, our system would not be robust to client collusion unless communication were sacrificed to update the maskings per round. But then the communication complexity grows rapidly with the number of clients for a single node split; considering the tree structure, the overall communication complexity would be $O(2^d \cdot N_T \cdot n^2)$, which may not scale well in practical applications.
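To make the masking mechanism concrete, the toy sketch below shows how the signed pairwise masks cancel in the aggregate, with the hash of the shared key replaced by the key itself for brevity; all names and values are illustrative.

```python
import random

N = 2**32           # modulus; sums of gradients must stay below N
n_clients = 4

# pairwise shared "keys" standing in for the DH outputs S_{c,c'}
S = {(c, cp): random.randrange(N)
     for c in range(n_clients) for cp in range(n_clients) if c < cp}

def pair_key(c, cp):
    return S[(min(c, cp), max(c, cp))]

def mask(c):
    """sum over c' != c of sign(c - c') * key(c, c') mod N; in the protocol
    the key is replaced by H(S_{c,c'} || salt || query)."""
    return sum((1 if c > cp else -1) * pair_key(c, cp)
               for cp in range(n_clients) if cp != c) % N

grads = [3, 5, -2, 7]
masked = [(g + mask(c)) % N for c, g in enumerate(grads)]
# opposite-signed pairwise masks cancel in the modular aggregate:
assert sum(masked) % N == sum(grads) % N
print(sum(masked) % N)   # 13, the true sum, recoverable only in aggregate
```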
To tackle this issue, we use a KDF to update the maskings per round automatically. Specifically, in lines 24-25 of Algorithm 5, the shared keys are taken as main keys, and 0 and 1 are the salt values for gradients and hessians, respectively. Since the query varies in each round, the generated maskings change dynamically accordingly. Besides, the sign of a masking is determined by the indexes of the clients. In this way, we only need to run DH once, and the communication complexity becomes independent of the tree structure.

To enable FEVERLESS to withstand differential attacks, we use a GDP approach that allows a chosen client to inject a global noise into the aggregated values per round. The approach is quite subtle. If the noise leader were selected by the source client, the system would be vulnerable to collusion. Moreover, a client could easily be identified as a target if chosen in advance, e.g., by selecting a list of leaders before the training. To avoid these issues and limit the probability of collusion to the greatest extent, we use a VRF to iteratively select the leader (see Algorithm 4) who securely injects a global noise. The input of the VRF includes mIDs and a fresh random number r (line 4 of Algorithm 4), so that the leader cannot be predicted and fixed beforehand - reducing its chance of being corrupted in advance by outsiders or the source client. All clients broadcast their scores, and the one holding the maximum value becomes the leader. The leader then re-generates a selection score as the score threshold (selec_threshold) and sends it to the rest of the clients (lines 2-6 of Algorithm 5). A client sends its masked noise back to the leader if its re-generated score is larger than the threshold (lines 7-13 of Algorithm 5). Subsequently, the leader selects k of these clients, notifies them and aggregates their masked noise to generate a global noise together with a random number.

Algorithm 4: Noise leader selection
1 count = 1
2 for each time this algorithm runs do
3   for c ∈ [1, n] ∧ c ≠ source client do
4     selec_c ← H(SIGN_{sk_c}(count, mIDs, r))
5     broadcast
6   end
7   selec_max_c ← max({selec_c | c ∈ [1, n]})
8   set c as noise leader
9   count += 1
10 end

In this context, even if the selected clients collude (note at least one does not) with the noise leader and the source client, there is still a noise that cannot be recovered, keeping the training differentially private. Since the noise is masked by the random number, the source client (even colluding with the leader) cannot recover the "pure" global noise to conduct a differential attack. Each client adds noise with probability p. If k out of $\hat{k}$ clients are non-colluded, the probability of collusion is $(1-\frac{k}{n})^h$. To cancel out the randomness, the selected clients subtract the same randomness from their masked messages (lines 28-31 of Algorithm 5).

The source client may procrastinate over the leader selection and noise injection procedure so as to buy time for its colluded clients to prepare sufficiently large VRF values to participate in the competition for selection and noise adding. One may apply a heartbeat protocol (Nikoletseas & Rolim (2011)) to prevent a newly selected leader from intentionally halting the noise-adding stage for a long period, say 1 min: if there is no response from the leader after a short while, a new leader is randomly selected. Furthermore, the heartbeat may help to solve the problem of the leader accidentally dropping from the network. We note that the heartbeat protocol is not our main focus in this paper.
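A minimal sketch of this score-based leader selection is given below; since a real unique signature scheme is out of scope here, HMAC stands in for SIGN purely for illustration.

```python
import hashlib
import hmac

def selection_score(secret_key: bytes, count: int, mids: bytes, r: bytes) -> int:
    """Stand-in for H(SIGN_sk(count, mIDs, r)): HMAC replaces the unique
    signature scheme for illustration only; SHAKE-256 hashes it to a score."""
    sig = hmac.new(secret_key, count.to_bytes(4, "big") + mids + r,
                   hashlib.sha3_256).digest()
    return int.from_bytes(hashlib.shake_256(sig).digest(8), "big")

keys = [bytes([c]) * 16 for c in range(4)]        # toy per-client keys
mids, r = b"missing-ids", b"fresh-randomness"     # fresh r per round
scores = [selection_score(k, count=1, mids=mids, r=r) for k in keys]
leader = max(range(len(keys)), key=lambda c: scores[c])
print("noise leader:", leader)
```

Because the score depends on a fresh r and the client's own secret key, no party can predict or precompute who will win the selection in an upcoming round.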
Before replying to the source client, the clients holding labels add maskings to their gradients and hessians, while those without labels simply generate and send out maskings; among these masking generators, the noise leader additionally injects the noise. In this way, the maskings, which guarantee perfect secrecy of the messages, cancel out after the aggregation of the values, and the differentially private noise consolidates the indistinguishability of individual data entries. Note that in lines 24-34 of Algorithm 5, the maskings and masked values are in the range [0, N−1]; N should be sufficiently large to avoid overflow, and the summation of gradients and hessians should not exceed N.

Algorithm 5: Secure aggregation with global differential privacy
1 Noise injection:
2 if c = leader then
3   selec_threshold_c ← H(SIGN_{sk_c}(count, mIDs, r))
4   broadcast
5   count += 1
6 end
7 for c ∈ [1, n] ∧ c ≠ source client ∧ c ≠ noise leader do
8   selec_c ← H(SIGN_{sk_c}(count, mIDs, r))
9   if selec_c > selec_threshold_c then
10    send ñ(c)_g = N(0, Δ²_g σ²) + r(c)_g and ñ(c)_h = N(0, Δ²_h σ²) + r(c)_h to noise leader
11    count += 1
12  end
13 end
14 if c = leader then
15   c selects k clients from the clients sending noise, k = ⌈|{ñ(c)_g}| · p⌉
16   if k < 1 then
17     redo noise injection
18   end
19   notify the k clients
20   noise aggregation: Ñ_g = k · N(0, Δ²_g σ²) + R_g, Ñ_h = k · N(0, Δ²_h σ²) + R_h
21 end
22 Secure aggregation:
23 for c ∈ [1, n] do
24   mask(c)_g ← (Σ_{c≠c'} (|c−c'| / (c−c')) · (H(S_{c,c'} ‖ 0 ‖ query) mod N)) mod N
25   mask(c)_h ← (Σ_{c≠c'} (|c−c'| / (c−c')) · (H(S_{c,c'} ‖ 1 ‖ query) mod N)) mod N
26   G(c) = Σ_{i∈mIDs} g(c)_i + mask(c)_g mod N
27   H(c) = Σ_{i∈mIDs} h(c)_i + mask(c)_h mod N
28   if selec_c > selec_threshold_c ∧ received notification then
29     G(c) = G(c) − r(c)_g mod N
30     H(c) = H(c) − r(c)_h mod N
31   end
32   if c = leader then
33     G(c) = G(c) + Ñ_g mod N
34     H(c) = H(c) + Ñ_h mod N
35   end
36   send {G(c), H(c)} to source client
37 end

E SECURITY ANALYSIS

We investigate the security and privacy properties of our protocol. First, we define the security model of our setting and its properties. Then, we prove that our protocol satisfies these properties.

Security Model. Our security is based on the random oracle model (ROM) (Smart (2016)), where the hash function outputs a uniformly random value for a new query and the same value for a previously answered query.

Adversarial Model. Our protocol is designed for the semi-honest security model (Smart (2016)), where all parties follow the protocol while trying to obtain information regarding other parties' inputs. We assume that the source client can collude with other clients, but the set of colluding clients has size at most n−2.

E.1 PRIVACY GOALS

Our privacy goals can be summarized as:
• Label privacy: No adversary controlling at most n−2 clients can learn who, among the honest parties, is the owner of a label.
• Data privacy: No adversary controlling at most n−2 clients can extract the data of an honest party.

We first investigate the case where the source client is not part of the adversary. In the following theorem, we show that there exists a simulator Sim that simulates the joint view of the clients in A by only using the inputs belonging to them. This implies that A does not learn more than what they already have.

Theorem E.1 (A not including source client).
There exists a PPT simulator Sim such that for all $|\mathcal{C}| := n \geq 3$, $|\mathcal{X}| := f \geq n$, $|\mathcal{Y}| := m \geq 1$, $\bigcup_{c\in\mathcal{C}}\mathcal{X}^{(c)}$, $\bigcup_{c\in\mathcal{C}}\mathcal{Y}^{(c)}$ and $\mathcal{A} \subset \mathcal{C}$ with $|\mathcal{A}| \leq n-2$, the output of Sim is indistinguishable from the output of REAL:

$\mathrm{REAL}^{\mathcal{C},\mathcal{X},\mathcal{Y}}_{\mathcal{A}}(\mathcal{X}^{\mathcal{C}},\mathcal{Y}^{\mathcal{C}}) \equiv \mathrm{Sim}^{\mathcal{C},\mathcal{X},\mathcal{Y}}_{\mathcal{A}}(\mathcal{X}^{\mathcal{A}},\mathcal{Y}^{\mathcal{A}})$ (8)

Proof. In order to prove that the simulator Sim can simulate the outputs of the honest parties in $\mathcal{H} := \mathcal{C} - \mathcal{A}$, we show that the distribution of the inputs belonging to the rest of the network cannot be distinguished from randomly generated data. In this way, the simulator can use any dummy values as the inputs of the honest parties to simulate their outputs. We simulate the view of A regarding the messages broadcast by the honest clients. A client c first performs a key exchange with the others, then, after some internal operations, outputs its $G^{(c)}$ and $H^{(c)}$ values. Let us investigate the $G^{(c)}$ value, which has the form $\sum_{i\in mIDs} g_i^{(c)} + mask_g^{(c)}$, except for the noise leader, who adds the additional noise $N(0, (\Delta_g\sigma)^2)$. The mask values are computed as $\sum_{c\neq c'} \frac{|c-c'|}{c-c'} \cdot H(S_{c,c'}\|0\|query) \bmod N$. We use a hybrid argument, modifying the protocol in several steps and showing, for each step, that the modification is indistinguishable to the adversary A. In the end, we achieve a hybrid that can be simulated by Sim.

Hybrid1: The first hybrid directly follows the protocol. The distribution of the variables and the view of A are the same as in REAL.

Hybrid2: In the second hybrid, we replace the agreed keys between honest clients $S_{c,c'}$, for all $c, c' \in \mathcal{H}$, with random values $r_{c,c'} \in \mathbb{G}$, where $\mathbb{G}$ is the group of the key exchange protocol. In the original protocol, Diffie-Hellman key exchange is used. The replacement is indistinguishable to the adversary because of the Decision Diffie-Hellman assumption given in Definition 4. Also, note that these random values are only available to the parties involved in the key exchange unless they are corrupted by the adversary.

Hybrid3: In this hybrid, we replace the mask values of honest clients $mask_g^{(c)}$, for all $c \in \mathcal{H}$, with random values $R^{(c)}$. Note that with the replacement in the previous step, the mask values are computed as $\sum_{c\neq c'} \frac{|c-c'|}{c-c'} \cdot H(r_{c,c'}\|0\|query) \bmod N$, where $r_{c,c'}$ is a random value unknown to the adversary (if both c and c' are honest). Because of the random oracle model, the output of the hash function is a uniformly random value that is also unknown to the adversary. Since there are at most n−2 clients in A, there are at least two honest clients c and c' for which the adversary cannot know the uniformly chosen output of $H(r_{c,c'}\|0\|query)$. Then, the modular summation of these outputs includes at least one value that the adversary does not know and that is uniformly random. Thus, it cannot be distinguished from a random value $R^{(c)}$.

Hybrid4: In this hybrid, we replace the gradients of honest clients $g_i^{(c)}$, for all $c \in \mathcal{H}$, with '0's. This is done by replacing the mask values with $R^{(c)} := R^{(c)} - \sum_{i\in mIDs} g_i^{(c)} \bmod N$ to keep the $G^{(c)}$ value the same. From the adversary's perspective, since the $R^{(c)}$ values are unknown and uniformly randomly chosen, the replacement is indistinguishable.

In Hybrid4, we replace the gradients of the honest parties with '0's, and the mask values are replaced by $R^{(c)}$, which is unknown to the adversary and chosen from a uniform distribution. Thus, a simulator Sim can simulate the outputs of the honest parties $G^{(c)}$ without necessarily knowing their inputs. The same analysis applies to the hessian value $H^{(c)}$.
Since the masking values of $G^{(c)}$ and $H^{(c)}$ are different and the hash function is modeled as a random oracle, the randomness in the two parts is independent and indistinguishable to the adversary A. Overall, the simulator Sim can simulate our protocol. Thus, the view of A can be simulated by replacing the inputs of the honest parties with zeros, and the adversary does not learn any information on the inputs of the honest parties.

Now, we analyze the case where the source client is part of A. We show that there exists a simulator Sim that simulates the joint view of the clients in A by only using the inputs belonging to them and the summations G and H. This implies that A does not learn more than what they have and the summation.

Theorem E.2 (A including source client). There exists a PPT simulator Sim such that for all $|\mathcal{C}| := n \geq 3$, $|\mathcal{X}| := f \geq n$, $|\mathcal{Y}| := m \geq 1$, $\bigcup_{c\in\mathcal{C}}\mathcal{X}^{(c)}$, $\bigcup_{c\in\mathcal{C}}\mathcal{Y}^{(c)}$ and $\mathcal{A} \subset \mathcal{C}$ with $|\mathcal{A}| \leq n-2$, the output of Sim is indistinguishable from the output of REAL:

$\mathrm{REAL}^{\mathcal{C},\mathcal{X},\mathcal{Y}}_{\mathcal{A}}(\mathcal{X}^{\mathcal{C}},\mathcal{Y}^{\mathcal{C}}) \equiv \mathrm{Sim}^{\mathcal{C},\mathcal{X},\mathcal{Y}}_{\mathcal{A}}(G, H, \mathcal{X}^{\mathcal{A}},\mathcal{Y}^{\mathcal{A}})$ (9)

where $G = \sum_{i\in mIDs} g_i^{(c)} + N(0,(\Delta_g\sigma)^2)$ and $H = \sum_{i\in mIDs} h_i^{(c)} + N(0,(\Delta_h\sigma)^2)$.

Proof. Here, we again show that Sim can simulate the outputs of the honest parties in $\mathcal{H}$ without knowing their inputs. Unlike Theorem E.1, Sim is also given the summations G and H because the adversary includes the source client. We can use the same hybrids as in Theorem E.1 up to Hybrid4, because the inputs of the honest clients are not required until then. We only need to update Hybrid4 so that it takes the summation into account. The hybrids for the A including the source client are:

Hybrid1, Hybrid2, Hybrid3: The same as in Theorem E.1.

Hybrid4: In this hybrid, we replace the gradients of honest clients $g_i^{(c)}$, for all $c \in \mathcal{H}$, with '0's, except for one client c' whose value is set to $\sum_{i\in mIDs} g_i^{(\mathcal{H})} \bmod N = G - \sum_{i\in mIDs} g_i^{(\mathcal{A})} \bmod N$. The honest client c' is randomly chosen among $\mathcal{H}$. From the adversary's perspective, since the $R^{(c)}$ values are unknown uniformly random values, the replacement is indistinguishable.

Overall, the view of A can be simulated by replacing the inputs of the honest parties with zeros, except for one set to $\sum_{i\in mIDs} g_i^{(\mathcal{H})} \bmod N$. Thus, A does not learn any information from the honest clients except the summation $\sum_{i\in mIDs} g_i^{(\mathcal{H})} \bmod N$.

With Theorem E.2, we show that even an adversary A including the source client cannot learn more than the summations of the gradient and hessian values, G and H. The proof is done via a Sim that does not require the individual data of the honest clients, except for the summation. This implies that the adversary cannot distinguish which party provided which gradient or hessian values. Moreover, the parties who do not have any of the requested g or h values send '0' together with the mask (and the noise, for the leader). This implies that we provide label privacy: the adversary cannot distinguish which label's g or h values come from which honest client.

In the case where the adversary includes the source client, the summations of the gradient and hessian values are known to the adversary. In the following theorem, we show that these summations do not leak any individual data, thanks to differential privacy.

Theorem E.3 (Privacy of the Inputs). No $\mathcal{A} \subset \mathcal{C}$ such that $|\mathcal{A}| \leq n-2$ can retrieve the individual values of the honest clients with probability

$1 - \sum_{i=0}^{\hat{k}} C_h^i C_{n-2-h}^{\hat{k}-i} (P_t)^{\hat{k}} (1-P_t)^{(n-\hat{k})} \frac{C_{\hat{k}-i}^{k}}{C_{\hat{k}}^{k}}$,

where h and $\hat{k}$ refer to the number of non-colluded clients and the number of clients whose selection score is larger than the threshold, respectively,
and $P_t$ is the probability of the selection score being larger than the threshold.

Proof. If the adversary does not include the source client, then, following the previous theorems, the adversary cannot know any of the inputs belonging to the honest parties. Otherwise, it knows the summations G and H. Since we apply differential privacy (Dwork et al. (2006a;b)), the summations cannot leak information regarding the inputs. According to Definition 5, we add differentially private noise guaranteeing the security of individual data points while the summation can still be calculated.

Proof of probability. Note the noise leader selects k clients to add noise from the n−2 clients other than itself and the source client. Suppose that there are h non-colluded clients among these n−2 clients, and that the number of clients whose selection scores are larger than the threshold is $\hat{k}$. The number of events is $C_{n-2-h}^{\hat{k}} + C_h^1 C_{n-2-h}^{\hat{k}-1} + \cdots + C_h^{\hat{k}} C_{n-2-h}^{0}$, where the events are {"there are $\hat{k}$ colluded clients out of $\hat{k}$ clients and 0 non-colluded clients", ..., "there are 0 colluded clients out of $\hat{k}$ clients and $\hat{k}$ non-colluded clients"}. Therefore,

$P(E_i) = C_h^i (P_t)^i (1-P_t)^{h-i} \cdot C_{n-2-h}^{\hat{k}-i} (P_t)^{\hat{k}-i} (1-P_t)^{(n-h-\hat{k}+i)} = C_h^i C_{n-2-h}^{\hat{k}-i} (P_t)^{\hat{k}} (1-P_t)^{(n-\hat{k})}$,

where $P_t$ is the probability that the selection score is larger than the threshold, and $E_i$ is the i-th event. Then, the probability that the noise leader selects k colluded clients from the $\hat{k}$ clients is $P_0 = \frac{C_{\hat{k}-i}^{k}}{C_{\hat{k}}^{k}}$. In the end, the probability that all aggregated noise comes from colluded clients is

$\sum_{i=0}^{\hat{k}} P(E_i) \cdot P_0 = \sum_{i=0}^{\hat{k}} C_h^i (P_t)^i (1-P_t)^{h-i} \cdot C_{n-2-h}^{\hat{k}-i} (P_t)^{\hat{k}-i} (1-P_t)^{(n-h-\hat{k}+i)} = \sum_{i=0}^{\hat{k}} C_h^i C_{n-2-h}^{\hat{k}-i} (P_t)^{\hat{k}} (1-P_t)^{(n-\hat{k})} \frac{C_{\hat{k}-i}^{k}}{C_{\hat{k}}^{k}}$.

Conversely, the probability of at least one non-colluded client participating in the noise injection is

$1 - \sum_{i=0}^{\hat{k}} C_h^i C_{n-2-h}^{\hat{k}-i} (P_t)^{\hat{k}} (1-P_t)^{(n-\hat{k})} \frac{C_{\hat{k}-i}^{k}}{C_{\hat{k}}^{k}}$.

Note that because of the secure aggregation, the adversary cannot learn anything but the summation. Thus, our protocol does not require adding noise to each data point; instead, we only require the noise leader to add the noise, which prevents the retrieval of individual data from the summation. In Theorems E.1 and E.2, we show that A cannot distinguish the individual values from randomly chosen values and can only know the summation if the source client is part of the adversary. In Theorem E.3, we show that A cannot extract the individual values of the users from the summation due to the added noise and differential privacy. Thus, our protocol satisfies data privacy; in other words, the adversary cannot learn the data point of an honest client. It is also important to note that since the noise leader is selected via the VRF, no adversary can guess beforehand whether any honest party will be the leader in the upcoming round. This provides additional security against the manipulation of the noise leader.
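The probability above can be evaluated numerically; the sketch below is a direct transcription of the formula, with parameter values chosen arbitrarily for illustration.

```python
from math import comb

def p_all_noise_colluded(n: int, h: int, k: int, k_hat: int, p_t: float) -> float:
    """Transcription of the Theorem E.3 sum: probability that every
    aggregated noise share comes from colluded clients. Terms with
    impossible index combinations contribute zero (math.comb returns 0
    when the lower index exceeds the upper one)."""
    total = 0.0
    for i in range(k_hat + 1):
        if k > k_hat - i:        # cannot pick k colluded clients here
            continue
        total += (comb(h, i) * comb(n - 2 - h, k_hat - i)
                  * p_t**k_hat * (1 - p_t)**(n - k_hat)
                  * comb(k_hat - i, k) / comb(k_hat, k))
    return total

# toy numbers: 10 clients, 4 non-colluded, leader picks k=2 of k_hat=5
n, h, k, k_hat, p_t = 10, 4, 2, 5, 0.5
print(1 - p_all_noise_colluded(n, h, k, k_hat, p_t))  # >= one honest noise
```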
F DISCUSSION

To reduce the negative impact brought by the noise, according to the infinite divisibility of the Gaussian distribution (Patel & Read (1996)), one may split the global noise $N(0, (\Delta\sigma)^2)$ into n parts $N(0, \frac{(\Delta\sigma)^2}{n})$. A drawback is that the privacy budget then increases linearly with the number of colluded clients: for example, if GDP achieves $\epsilon$-DP, in the worst case where there are n−1 colluded clients, the privacy budget rises to $n \cdot \epsilon$.

Hiding the distribution of labels. In the semi-honest setting, if the source client sends the missing indexes consistently, adversaries may figure out which labels are distributed on the source client by statistical analysis. We show that this issue can be tackled. Recall that in the proposed protocol, the source client broadcasts the missing data indexes mID (line 16 of Algorithm 2), which is the source of this leakage. FEVERLESS can be extended to avoid this type of leakage at the cost of extra communication overhead. Specifically, during the broadcasting period, the source client should send the indexes of a whole bucket instead of mID, and the rest of the protocol remains unchanged. In this way, the others cannot distinguish the distribution of labels because all clients share the same index set I. If we assume labels are uniformly distributed over the clients, the extra overhead is restricted to |I|/|C|. This cost is clearly noticeable in datasets with a large number of data points.

Other security tools. The masking scheme realizing secure aggregation may be replaced with MPC (Damgård et al. (2012); Wu et al. (2020)) or additively homomorphic encryption (Paillier (1999)). However, the major defect of these tools is that they entail labor-intensive encryption calculations, which may not scale well to large datasets. Due to this concern, we only put light-weight computation in FEVERLESS and, further, we enhance the security to "perfect secrecy". In our design, the selection of the noise leader is captured by a VRF. We note that there may be other options to fulfil this goal. For example, Proof of Elapsed Time (PoET) (Chen et al. (2017); Corso (2019)) is an interesting and effective mechanism used to maintain the consensus of distributed peers in Hyperledger Sawtooth. It provides a fair and trusted lottery strategy to select a block winner (per consensus round). Sharing the same philosophy as the VRF, it may be deployed in our protocol to yield a leader. Building a more efficient noise leader selection algorithm could be an interesting open problem.

G MORE DETAILS ON EXPERIMENT SETUP

All the experiments are implemented in Python and conducted on a cluster of machines with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 15GB RAM in a local area network. Intuitively, the smaller we set $\epsilon$, the more secure FEVERLESS will be, but the larger the added noise; this can be seen in the experimental results. As for the cryptographic tools, we set the key sizes of DH and Paillier to 160 bits and 1024 bits, respectively (to save some time in running the experiments). These sizes reach a symmetric security level with an 80-bit key length. Note one may indeed increase the key size to obtain stronger security, at the cost of longer experiment time (a stronger security level will not affect the training accuracy). We use the 1024-bit MODP Group with 160-bit Prime Order Subgroup from RFC 5114 (https://tools.ietf.org/html/rfc5114) for the DH key exchange. SHAKE-256 (Dworkin (2015)), a member of the SHA-3 family, is used as the hash function in leader selection and secure aggregation.

• Credit Card: a commercial dataset used for predicting whether customers will make their payments on time. It provides 30,000 samples, and each sample comprises 23 features.
• Bank Marketing: consisting of 45,211 data points and 17 features; the goal of bank marketing is to predict if a client will subscribe to a term deposit.
• Banknote Authentication: offering 1,372 data points and 4 features, this dataset is used to classify authentic and forged banknotes.
Note that, unlike traditional tabular data, the features in this dataset are extracted, through a Wavelet Transform (Antonini et al. (1992)), from images taken of genuine and forged banknote-like specimens. With such a small-scale dataset, the trained model may not be robust to noise, which negatively impacts accuracy.

H ADDITIONAL EXPERIMENTS AND FIGURES

We present additional experiments, where all the experimental settings follow those defined in Section 4.1. In each presented figure, we show the results on the datasets Credit Card (left), Bank Marketing (middle) and Banknote Authentication (right). Note that the comparison among FEVERLESS, LDP and AHE requires #client ≥ 2; when #client = 1, we can only show the results of the baseline. The average performance of FEVERLESS in these figures is highlighted by the red dotted line. Via the experiments, we elaborate how the accuracy varies with an increasing number of clients among the baseline, FEVERLESS and LDP, w.r.t. different tree structures and $\epsilon$. Figures 7-18 present the best case, where only one non-colluded client adds the noise. Other cases are demonstrated in Figures 19-26 with selection scores 1/2 and 1/3. Beyond those, we also add comparison results for AHE in Tables 2-4 with $\epsilon = 2$.

In general, without any added noise, the baseline reaches the highest accuracy, and the accuracy remains stable as the client number increases. The performance of FEVERLESS is right behind that of the baseline and also keeps stable. Note there are slight fluctuations in some figures (e.g., Figures 10, 12 and 14), especially in the cases where a complex tree structure and a small $\epsilon$ are used. The LDP approach does harm accuracy, which can be seen from the continuously and significantly falling bars in the figures: naturally, when more clients engage in the training, more noise is added to the model, making LDP's performance fall far below the red line. Note that the banknote dataset is composed of 4 features; in the VFL setting, every client should have at least one feature, so we can only allow up to 4 clients to participate in the training. Besides, FEVERLESS does not perform well on the banknote dataset, because the model is trained on a small number of samples, so its robustness is seriously affected by noise.

H.1 BEST CASE: ACCURACY ON CLIENT NUMBER

H.2 OTHER CASES: ACCURACY ON CLIENT NUMBER

H.3 ADDITIONAL RESULTS ON ACCURACY FOR BANKNOTE AUTHENTICATION

H.4 ADDITIONAL RESULTS ON TIME

In Figures 29-33, we show the time performance for various numbers of clients, trees and depths. Besides, we present the concrete results in Tables 5-7. Table 8 also shows the more specific runtime of tree construction with #tree = 4 and depth = 4 among the baseline, FEVERLESS, LDP and AHE. In general, the runtime of FEVERLESS is slightly higher than that of the baseline. Compared to AHE, FEVERLESS significantly reduces training time while preserving privacy. This advantage is clearly seen in the cases using complex tree structures. Note that AHE could be replaced by other more complex cryptographic solutions, such as secure MPC, which can also maintain data/label privacy, but the MPC-based solutions will consume even more runtime.
H.5 RESULTS ON COMMUNICATION COST

In Figures 34-36, we demonstrate the communication cost for varying numbers of clients, trees and depths. For the convenience of comparison, we set #clients = 4, #tree = 4 and depth = 4 as defaults. We use Tables 9-11 to elaborate the concrete costs. To sum up, the communication cost of FEVERLESS is almost the same as those of the baseline and LDP; compared to AHE, FEVERLESS significantly reduces costs while maintaining privacy.
1. What is the focus of the paper regarding federated learning?
2. What are the strengths of the proposed approach, particularly in terms of privacy and security?
3. What are the weaknesses of the paper regarding its contributions and comparisons with other works?
4. How does the reviewer assess the clarity and quality of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
This paper studies vertical federated learning for fast and secure XGBoost training where labels are distributed among multiple parties. Most previous works assume labels are centralized in one party and adapt cryptographic techniques such as homomorphic encryption, multi-party computation, and differential privacy to protect data and label privacy. This paper instead assumes decentralized labels and combines existing secure aggregation and global differential privacy to safeguard data and label privacy.

Review
Pros: The problem addressed in this paper is of practical importance for many real-world applications. The challenges and the proposed solutions are well motivated. The paper is also very well-written and has a nice flow.

Cons: Several typos: for example, "multiply hospitals" should be "multiple". The main contribution of the paper is to relax the assumption of centralized labels to decentralized labels. Adapting existing secure aggregation to outperform homomorphic encryption-based VFL on latency, or differential privacy on accuracy, is straightforward. As such, the paper has limited novelty from the ML perspective. The proposed method has shown improved performance over HE on the aspect of training time and over DP on the aspect of accuracy. The paper, however, fails to compare with HE on the aspect of accuracy and with DP on the aspect of latency.
ICLR
Title
FEVERLESS: Fast and Secure Vertical Federated Learning based on XGBoost for Decentralized Labels

Abstract
Vertical Federated Learning (VFL) enables multiple clients to collaboratively train a global model over vertically partitioned data without revealing private local information. Tree-based models, like XGBoost and LightGBM, have been widely used in VFL to enhance the interpretation and efficiency of training. However, there is a fundamental lack of research on how to conduct VFL securely over distributed labels. This work is the first to fill this gap by designing a novel protocol, called FEVERLESS, based on XGBoost. FEVERLESS leverages secure aggregation via an information masking technique and global differential privacy provided by a fairly and randomly selected noise leader to prevent private information from being leaked in the training process. Furthermore, it provides label and data privacy against an honest-but-curious adversary even in the case of collusion of n−2 out of n clients. We present a comprehensive security and efficiency analysis for our design, and the empirical results from our experiments demonstrate that FEVERLESS is fast and secure. In particular, it outperforms the solution based on additive homomorphic encryption in runtime cost and provides better accuracy than the local differential privacy approach (code is available at: https://github.com/feverless111/vfl).

1 INTRODUCTION

Traditional centralized deep learning models, demanding the collection of a considerable amount of clients' data to maintain high accuracy, may to some degree increase the risk of data breaches. Data may not be easily shared among different entities due to privacy regulations and policies. To tackle this "Data Island" problem (Yang et al. (2019a)), Google proposed Federated Learning (FL) (McMahan et al. (2017)) to allow multiple clients to train a global model without sharing private data. The basic paradigm of FL is that all clients train local models with their own data and then exchange information about the local models, e.g., gradients, to produce a global model. Based on different types of data partition (Yang et al. (2019a)), FL can be mainly categorized into Horizontal Federated Learning (HFL) and Vertical Federated Learning (VFL). The former focuses on training with horizontally partitioned data, where clients share the same feature space but differ in data index set. Several research works (Shokri & Shmatikov (2015); Orekondy et al. (2019); Geiping et al. (2020); Li & Han (2019)) have found that the training data of HFL is still at high risk of leakage although private data is kept locally. Other studies (Phong et al. (2018); Truex et al. (2019); Xu et al. (2019); Zhang et al. (2020); Zhu et al. (2020)) have been dedicated to enhancing the security of HFL. On the contrary, VFL is mainly applied in the scenario of training with vertically partitioned data (Wu et al. (2020); Cheng et al. (2021)), where clients share the same data index set but differ in feature space. In this paper, our principal focus is to achieve privacy-preserving training in VFL. To the best of our knowledge, many existing studies (Hardy et al. (2017); Nock et al. (2018); Liu et al. (2020); Yang et al. (2019b); Cheng et al. (2021); Chen & Guestrin (2016); Wu et al. (2020)) have proposed innovative approaches to prevent private information breaches in the context of VFL. Specifically, (Hardy et al. (2017)) introduced encryption-based privacy-preserving logistic regression to safeguard the information of data indexes. (Nock et al.
(2018)) gave a comprehensive discussion on the impact of ID resolution. (Yang et al. (2019b)) introduced a scheme without a coordinator for a limited number of clients. Recently, (Liu et al. (2020)) proposed an asymmetric VFL scheme for logistic regression tackling privacy concerns on ID alignment. Unlike the training models used in the aforementioned works, XGBoost (Chen & Guestrin (2016)), which is one of the most popular models applied in VFL, can provide better interpretation, easier parameter tuning, and faster execution than deep learning in tabular data training (Goodfellow et al. (2016); LeCun et al. (2015)). These practical features and advantages draw academia's and industry's attention to research on XGBoost, especially in the privacy-preserving context. (Wu et al. (2020)) introduced an approach for tree-based model training through a hybrid method combining homomorphic encryption and secure Multi-Party Computation (MPC) (Goldreich (1998); Bonawitz et al. (2017)). After that, (Cheng et al. (2021)) proposed a similar system to train XGBoost (Chen & Guestrin (2016)) securely over vertically partitioned data by using Additively Homomorphic Encryption (AHE). By applying Differential Privacy (DP) (Dwork (2008)), (Tian et al. (2020)) designed a VFL system to train GBDT without the need for encryption/decryption. However, most of the above solutions based on AHE and MPC do not scale well in terms of efficiency when training XGBoost. Beyond that, all the existing schemes basically assume that training labels are managed and processed by a sole client. In practice, a VFL scheme supporting distributed labels is necessary. For instance, multiple hospitals, clinics and health centers may currently serve as COVID-19 test spots and aim to train a model, e.g., XGBoost, to predict with good interpretation whether citizens (living in various locations) are infected based on their health records and symptoms. In this context, the labels (i.e., the test results) are likely distributed among different health authorities - even for the same group of patients - and the feature space is vertically partitioned. For example, a cardiac hospital only maintains heart data for the patients, while a psychiatric center holds the mental records, and both authorities may collect and manage each of their registered patients' labels locally. Another common scenario could be in the financial sector, where multiple bank branches and e-commerce companies prefer to build a global model to predict if their customers will pay for some service (e.g., a car loan) on time. The banks have part of the features about the customers (e.g., account balance, funding in-and-out records), while the companies may obtain other features (e.g., payment preference). Since the customers may get the same service, e.g., a loan, from different institutions, it is clear that labels must be distributed rather than centralized. In addition to efficiency and functionality, one may also consider capturing stronger security for VFL. Training XGBoost usually involves the computation of the first and second-order derivatives of the loss function (note that gradients and hessians contain labels' information), and their aggregation is required in each round.
In the context where the labels are held by different clients, if the gradients and hessians are transmitted as plaintexts and their summations are known to an aggregator (who could be one of the clients engaged in training), inference and differential attacks (Appendix C) can easily be conducted by the aggregator, resulting in information leakage. To tackle these problems, we propose a fast and secure VFL protocol, FEVERLESS, to train XGBoost (Appendix B.1) on distributed labels without disclosing either feature or label information. In our design, privacy protection is guaranteed by secure aggregation (based on a masking scheme) and Global Differential Privacy (GDP) (Appendix B.6). We leverage masking instead of heavy-cost multi-party computation, and we guarantee a "perfect secrecy" level for the masked data. In GDP, we use a Verifiable Random Function (VRF) (Appendix B.5) to select a noise leader per round (who cannot be predicted or pre-compromised in advance) to aggregate noise from "selected" clients, which significantly helps maintain model accuracy. Our contributions can be summarized as follows. (1) We define VFL in a more practical scenario where training labels are distributed over multiple clients. Beyond that, we develop FEVERLESS to train XGBoost securely and efficiently with an elegant combination of a secure aggregation technique (based on Diffie-Hellman (DH) key exchange (Appendix B.2) and a Key Derivation Function (KDF) (Appendix B.4)) and GDP. (2) We give a comprehensive security analysis to demonstrate that FEVERLESS safeguards label and feature privacy in the semi-honest setting, and also maintains robustness even in the case where n−2 out of n clients collude. (3) We implement FEVERLESS and evaluate training time and accuracy on different real-world datasets. The empirical results show that FEVERLESS can maintain efficiency and accuracy simultaneously, and its performance is comparable to the baseline - a "pure" XGBoost without using any encryption or differential privacy. Specifically, training on the credit card and bank marketing datasets takes just 1% and 6.5% more runtime than the baseline, while the accuracy is only lower than that of the baseline by 0.9% and 3.21%, respectively. (For the banknote authentication dataset, FEVERLESS takes 13.96% more training time than the baseline, and the accuracy is 30.4% lower; this is because the model is trained on a small-scale dataset, so its robustness is seriously affected by noise.)

2 PROBLEM FORMULATION

2.1 SYSTEM MODEL

Before proceeding, we give some assumptions on our model. We suppose that a private set intersection (Kolesnikov et al. (2017); Pinkas et al. (2014)) has been used to align data IDs before the training starts, so that each client shares the same data index space I; the names of features, however, are not allowed to be shared among clients. As for the information of the label distribution (indexes indicating which client a label belongs to, e.g., the label of the i-th data instance is held by client A), we consider the following conditions: (1) this information is revealed to the public in advance; or (2) the information is not allowed to be published, but the training can still be accomplished (with extra cost). We also consider that the training is conducted on a dataset with m samples composed of feature space $\mathcal{X} = \{x_1, \cdots, x_m\}$, each sample containing f features, and label set $\mathcal{Y} = \{y_1, \cdots, y_m\}$. Besides, the features $\{X_j^{(c)} \mid j \in \{1, \cdots, f\}\}$ and labels $\{y_i^{(c)} \mid i \in \{1, \cdots, m\}\}$ are held among n clients, where each client has at least one feature and one label. $X_j^{(c)}$ and $y_i^{(c)}$ refer to the j-th feature and the i-th label owned by the c-th client, respectively.
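Before formalizing this setting, the following toy sketch shows what such a vertically partitioned dataset with distributed labels could look like; the splitting strategy and all names are illustrative assumptions, not part of the protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
m, f, n_clients = 8, 4, 2            # samples, features, clients
X = rng.random((m, f))               # full feature matrix (never materialized
y = rng.integers(0, 2, size=m)       # in practice; shown here for clarity)

# vertical partition: disjoint feature columns per client
feature_split = np.array_split(np.arange(f), n_clients)
# distributed labels: disjoint label index sets per client, same ID space
label_split = np.array_split(rng.permutation(m), n_clients)

clients = [{"features": X[:, cols], "label_ids": ids, "labels": y[ids]}
           for cols, ids in zip(feature_split, label_split)]
for c, d in enumerate(clients):
    print(f"client {c}: features {d['features'].shape}, "
          f"labels at {sorted(d['label_ids'])}")
```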
Considering a practical scenario wherein training labels are distributed among clients, we propose a new variant of VFL, named VFL over Distributed Labels (DL-VFL). The concrete definition is given as follows.

Definition 1 (DL-VFL). Given a training set with m data samples consisting of feature space $\mathcal{X}$, label space $\mathcal{Y}$, index space $\mathcal{I}$ and client set $\mathcal{C}$, we have:

$\mathcal{X}^c \cap \mathcal{X}^{c'} = \emptyset, \quad \mathcal{Y}^c \cap \mathcal{Y}^{c'} = \emptyset, \quad \mathcal{I}^c = \mathcal{I}^{c'}, \quad \forall c, c' \in \mathcal{C}, c \neq c'$. (1)

A client c participating in DL-VFL shares the same sample ID space $\mathcal{I}$ with the corresponding labels, where a single label belongs to only one client, and different clients hold subsets of $\mathcal{X}$ sampled from the feature space. To achieve privacy-preserving XGBoost training, we further define two roles.

Definition 2 (Source client). A source client with split candidates wants to compute the corresponding $L_{split}$ based on Eq. (4), but some labels are missing, so $\sum g_i$ and $\sum h_i$ cannot be derived.

For the case where a source client does not hold all labels of the current split candidates, we propose a solution based on secure aggregation and global differential privacy to help the source client compute $L_{split}$ while safeguarding the other clients' privacy. We consider the two conditions regarding whether the label distribution is publicly known, and we find that if we keep the label distribution hidden, extra communication overhead is needed to perform the training; the detailed explanation is given in Appendix F. Note that each client may have a chance to act as a source client because the labels are distributed: the source client leads the $L_{split}$ computation, and the other clients provide the missing label values to the source client. To achieve GDP, we define the noise leader, who is selected fairly and randomly from all clients (except for the source client) - preventing clients from being compromised beforehand.

Definition 3 (Noise leader). By using the VRF, a noise leader is responsible for generating the maximum leader score, aggregating differentially private noise from a portion of clients and adding the noise to the gradients and hessians.

Note we summarize the main notations in Table 1 (see Appendix A).

2.2 THREAT MODEL

We mainly consider potential threats incurred by participating clients and outside adversaries. We assume that all clients are honest-but-curious, which means they strictly follow the designed algorithms but try to infer private information of other clients from the received messages. Besides, we also consider up to n−2 clients colluding to conduct attacks, and at least one non-colluded client adds noise per round. Through authenticated channels, DH key exchange can be securely executed among clients. Other messages are transmitted over public channels, and outside attackers can eavesdrop on these channels and try to reveal information about clients during the whole DL-VFL process. Note this paper mainly focuses on solving privacy issues in training DL-VFL based on XGBoost; other attacks, like data poisoning and backdoor attacks that deteriorate model performance, are orthogonal to our problem.

3 A PRACTICAL PRIVACY-PRESERVING PROTOCOL

3.1 FEVERLESS PROTOCOL DESCRIPTION

To prevent a source client from knowing the gradients and hessians sent by other clients, one may directly use MPC (Damgård et al.
(2012)) based on AHE (Paillier (1999); Wu et al. (2020)). But this method yields expensive computation costs. Getting rid of complex mechanisms like MPC, we leverage a secure aggregation protocol via a masking scheme based on DH key exchange (Bonawitz et al. (2017); Ács & Castelluccia (2011); Tian et al. (2020)). By further using a KDF and a hash function (see Appendix B.3 & B.4), our maskings (for gradients and hessians) can be derived without exchanging keys in every training round. Our approach significantly reduces the communication cost while maintaining robustness against up to n−2 colluded clients. Meanwhile, the secure aggregation provides "perfect secrecy" for the broadcast messages. After the broadcast messages are received, the maskings cancel out at the source client side. But masking alone cannot defend against differential attacks. One may consider using Local Differential Privacy (LDP) (Kairouz et al. (2014)) so that each client adds noise to every sent-out message, barely consuming any extra computation cost; however, the noise accumulated from all clients may seriously affect model accuracy. To tackle this problem, we use a GDP (Wei et al. (2020)) approach with noise leader selection. A hybrid method is finally formed from the masking scheme and GDP, so that each client's sensitive information is protected by the "masks" while the aggregated values are secured by the noise injected by the chosen clients. We briefly introduce our design here; the detailed algorithms and more explanations are given in Appendix D.

Assume each client c ∈ [1, n] generates its secret key sk_c and computes gradients g_i^{(c)} and hessians h_i^{(c)} locally, where {i | y_i ∈ Y^c}. FEVERLESS works as follows.

1. Broadcast missing indexes. The source client broadcasts mIDs = {i | y_i ∉ Y^c}.

2. Key exchange computation. Each client c computes its public key pk_c = g^{sk_c} from its secret key sk_c, sends pk_c to the other clients, and computes the shared keys³ {S_{c,c'} = pk_{c'}^{sk_c} = g^{sk_c sk_{c'}} | c, c' ∈ C, c ≠ c'} from its secret key sk_c and the received public keys {pk_{c'} | c' ∈ C}.

3. Data masking. Each client c runs the masking generation algorithm to compute the maskings that protect gradients and hessians. Specifically, based on the KDF, the clients' indexes, and the number of queries, the maskings are computed as⁴

mask_g^{(c)} ← Σ_{c'≠c} (|c−c'|/(c−c')) · H(S_{c,c'} ‖ 0 ‖ query),   mask_h^{(c)} ← Σ_{c'≠c} (|c−c'|/(c−c')) · H(S_{c,c'} ‖ 1 ‖ query).

The masked gradients G^{(c)} and hessians H^{(c)} are then generated as G^{(c)} = Σ_{i∈mIDs} g_i^{(c)} + mask_g^{(c)} − r_g^{(c)} and H^{(c)} = Σ_{i∈mIDs} h_i^{(c)} + mask_h^{(c)} − r_h^{(c)}.

4. Noise leader selection. Each client generates a selection score selec_c using the VRF, H(SIGN_{sk_c}(count, mIDs, r)), and broadcasts it, where count is the number of times the clients have conducted the VRF, r is a fresh random number, and SIGN is the signature scheme (see Appendix B.5 for more details). The client with the maximum score becomes the noise leader. For ease of understanding, we assume in Figure 1 that client n, with the largest selection score selec_n^max, is the leader.

³ Shared keys are only generated once, and the KDF is used to generate the remaining maskings.
⁴ For simplicity, we omit the modular computations. The complete calculation processes are elaborated in Algorithms 3-5.

5. Noise injection. a) The noise leader selects k clients to add noise. For the details of the selection, please see Algorithm 5 in Appendix D.
b) The selected clients send {ñ_g^{(c)} = N(0, Δ_g²σ²) + r_g^{(c)}, ñ_h^{(c)} = N(0, Δ_h²σ²) + r_h^{(c)} | c ∈ k} to the noise leader, in which r_g^{(c)} and r_h^{(c)} are two random values that mask the noise. c) The leader aggregates the noise: Ñ_g = k · N(0, Δ_g²σ²) + R_g and Ñ_h = k · N(0, Δ_h²σ²) + R_h, and further adds them to G^{(n)} and H^{(n)}, respectively.

6. Aggregation and computation. All clients send their masked values to the source client. The source client computes Σ_{c=1}^n G^{(c)} + k·N(0, Δ_g²σ²), Σ_{c=1}^n H^{(c)} + k·N(0, Δ_h²σ²) and L_split.

7. Final update. The source client with the maximum L_split updates the model following XGBoost (Chen & Guestrin (2016)) and broadcasts the updated model and the data indexes of the child nodes as step 8.

Figure 1 gives an overview of FEVERLESS. Note that this process is conducted iteratively. For simplicity, only the core calculation processes are shown here; more details are in Appendix D.

3.2 THEORETICAL ANALYSIS

Computation cost: We use B and d to denote the number of buckets and the maximum depth, respectively, and f^{(c)} here represents the number of features held by a client c. For each client c, the computation cost can be divided into 4 parts: (1) performing at most f^{(c)} · B · N_T · (2^d − 1) computations of L_split and w, taking O(f^{(c)} · B · N_T · 2^d) time; (2) creating n−1 shared keys and 1 public key, which is O(n); (3) taking O(f^{(c)} · B · N_T · 2^d) time to compute VRF outputs, select the noise leader and generate noise; (4) generating 2f^{(c)} · B · N_T · (2^d − 1) maskings, which takes O(f^{(c)} · B · N_T · 2^d · n) time. Overall, each client's computation complexity is O(f^{(c)} · B · N_T · 2^d · n).

Communication cost: Each client's communication cost consists of (1) broadcasting at most f^{(c)} · B · N_T · (2^d − 1) sets of missing indexes mID; (2) broadcasting 1 public key and receiving n−1 public keys from other clients; (3) broadcasting 1 leader selection score and sending noise to the noise leader at most f^{(c)} · B · N_T · (2^d − 1) times; (4) sending the source client 2 masked gradients and hessians of size 2⌈log₂ N⌉. Therefore the overall communication cost is f^{(c)} · B · N_T · (2^d − 1) · (‖mID‖ · α_I + α_L + α_N + n · α_K + 2⌈log₂ N⌉), where α_I, α_L, α_N and α_K refer to the number of bits of an index, a leader selection score, a noise value and a public key, respectively. Thus, we have the communication complexity O(f^{(c)} · B · N_T · 2^d).

3.3 SECURITY ANALYSIS

We prove that FEVERLESS provides label and data privacy against an adversary controlling at most n−2 clients in the semi-honest setting (Smart (2016)). Here, we provide a brief summary of our analysis and theorems. The formal proofs, in the random oracle model, are given in Appendix E.

Label Privacy: Label privacy means that the owner of a label among the honest parties should not be leaked to the adversary. We achieve this by using a secure aggregation mechanism where the masks are created via DH key exchange and a KDF. In brief, we show that because of the Decisional DH problem (see Definition 4), the adversary cannot distinguish the individual values from randomly chosen ones. That is why the adversary A cannot learn the owner of a label.

Data Privacy: FEVERLESS provides data privacy, meaning that an adversary A cannot extract the data of any honest party. Individual data values are not separable from random values because of the secure masking. If the source client is not part of the adversary, no data information is leaked.
But we require an additional countermeasure for the case where the source client is part of the adversary, because it can collect the summation of the data values. We use differential privacy (Dwork et al. (2006a;b)) to achieve data privacy: because of the noise added via differential privacy, the adversary cannot learn the individual data of an honest client. Moreover, we select the noise clients via the VRF, which ensures that the noise leader cannot be predicted or compromised in advance.

Theorem 3.1 (A not including the source client). There exists a PPT simulator Sim for all |C| := n ≥ 3, |X| := f ≥ n, |Y| := m ≥ 1, ⋃_{c∈C} X^{(c)}, ⋃_{c∈C} Y^{(c)} and A ⊂ C with |A| ≤ n−2, such that the output of Sim is indistinguishable from the output of REAL: REAL_A^{C,X,Y}(X^C, Y^C) ≡ Sim_A^{C,X,Y}(X^A, Y^A).

Theorem 3.2 (A including the source client). There exists a PPT simulator Sim for all |C| := n ≥ 3, |X| := f ≥ n, |Y| := m ≥ 1, ⋃_{c∈C} X^{(c)}, ⋃_{c∈C} Y^{(c)} and A ⊂ C with |A| ≤ n−2, such that the output of Sim is indistinguishable from the output of REAL: REAL_A^{C,X,Y}(X^C, Y^C) ≡ Sim_A^{C,X,Y}(G, H, X^A, Y^A), where G = Σ_{i∈mIDs} g_i^{(c)} + N(0, (Δ_g σ)²) and H = Σ_{i∈mIDs} h_i^{(c)} + N(0, (Δ_h σ)²).

Theorem 3.3 (Privacy of the Inputs). No A ⊂ C with |A| ≤ n−2 can retrieve the individual values of the honest clients with probability 1 − Σ_{i=0}^{k̂} C_h^i C_{n−2−h}^{k̂−i} (P_t)^{k̂} (1−P_t)^{(n−k̂)} · C_{k̂−i}^k / C_{k̂}^k, where h and k̂ refer to the number of non-colluded clients and the number of clients whose selection score is larger than the threshold, respectively; and P_t is the probability of a selection score being larger than the threshold.

4 EXPERIMENT

We evaluate accuracy, runtime performance and communication cost, and compare our design with two straightforward secure approaches: one based on LDP (for accuracy), and the other built on AHE with GDP (for runtime). These approaches are the most commonly used components for privacy-preserving FL, and they can serve as building blocks for more complex mechanisms, e.g., MPC. We note that the protocol should intuitively outperform MPC-based solutions, and one may leverage our source code to make further comparisons if interested. In the experiments, the baseline, which is the pure XGBoost algorithm, follows the training process of Figure 1 without using any privacy-preserving tools (i.e., skipping the key exchange, masking, leader selection and noise injection steps). LDP does not conduct DH key exchange, but each client injects noise into the aggregation of gradients and hessians, while AHE follows Figure 1 except for executing DH key exchange; in AHE, each client instead sends (additively) encrypted messages to the source client. We show here the performance of the best case, where there is only one (non-colluded and randomly selected) client adding noise per round (k = 1). For other results (where k ≠ 1), see Appendix H.2. Note that we present the communication cost in Appendix H.5.

4.1 EXPERIMENT SETUP

To present comprehensive results on accuracy, we set ε to 10, 5, 2 and 1, and δ is set to 10⁻⁵. In terms of accuracy and runtime, we evaluate different situations by varying the number of clients, the number of trees, and the maximum depth of the trees (from 2 to 10). The other training parameters follow the suggestions in (Chen & Guestrin (2016)) and the XGBoost library⁵. To deliver fair results, we conduct each test for 20 independent trials and report the average.

Datasets. We run the experiments on three datasets, Credit Card (Yeh & Lien (2009)), Bank Marketing (Moro et al. (2014)) and Banknote Authentication⁶, all for classification tasks.
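For reference, each (ε, δ) pair above maps to a Gaussian noise scale via the calibration recalled in Appendix B.6. A small helper (a sketch, assuming the standard Gaussian-mechanism formula from Abadi et al. (2016); not code released with the paper):

```python
import numpy as np

def gaussian_sigma(eps, delta):
    """sigma = sqrt(2 ln(1.25/delta)) / eps, per Abadi et al. (2016)."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) / eps

def gaussian_mechanism(value, l2_sensitivity, eps, delta, rng):
    # Adds N(0, (Delta * sigma)^2) noise to a scalar query answer.
    return value + rng.normal(0.0, l2_sensitivity * gaussian_sigma(eps, delta))

rng = np.random.default_rng(0)
for eps in (10, 5, 2, 1):                 # the epsilon grid of Section 4.1
    print(eps, round(gaussian_sigma(eps, 1e-5), 3))
```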
To fairly investigate model performance in DL-VFL, we make the labels as sparse as possible and distribute them uniformly over the clients. More details of the experiment setup are given in Appendix G.

4.2 EVALUATION ON ACCURACY

In Figure 2, we present a clear picture of the accuracy performance as a function of the #tree and the maximum depth under (2, 10⁻⁵)-DP. We merge the #client into one tree structure, i.e., into one bar, whose value is the mean accuracy over the different client numbers. The accuracy of the baseline on credit card (about 0.82) and bank marketing (nearly 0.9) remains unchanged as the #tree and maximum depth increase, while the accuracy on banknote authentication rises from 0.9 to approximately 1.0. To highlight the differences and ensure all results are displayed clearly, we set the accuracy ranges to [0.5, 0.9], [0.5, 1] and [0, 1] for the three datasets, respectively. Note that the performance as a function of the #client is given in Appendix H.1.

Compared with the baseline, shown in the top and middle rows of Figure 2, FEVERLESS and LDP suffer from continuously shrinking accuracy as the tree structure becomes more complex. This is because the injected noise accumulates in the model as the number of queries increases, and the accuracy is easily affected by the depth. In the worst case, where the #tree and maximum depth both equal 10, FEVERLESS loses 10.37% (resp. 14.98%), and LDP drops 24.78% (resp. 24.59%) on credit card (resp. bank marketing). But on average, FEVERLESS' accuracy only shrinks by around 0.9% (resp. 3.21%), while LDP suffers an estimated 3x (resp. 2x) larger accuracy loss. The difference in the degree of deterioration mainly comes from how much noise is added per query. We note that the deterioration of FEVERLESS is independent of the #client. Thus, we can maintain high accuracy even when a considerable number of clients participate.

Although less noise is added in FEVERLESS, its accuracy still falls to nearly the same level as LDP (around 50%, like random guessing in binary classification) in the bottom row of Figure 2. This happens because the model is trained on an extremely small dataset, which makes it hard to maintain robustness and leaves it relatively sensitive to noise. With a larger ε, our advantage would be clearer. The experiments conducted on the banknote authentication dataset with larger ε are given in Appendix H.3.

To distinguish the performance between FEVERLESS and LDP more clearly, Figure 3 shows the comparison over different ε, with #depth and #tree set to 10. The model performance decays as ε decreases. In the left (resp. middle) panel of Figure 3, the averaged accuracy of FEVERLESS falls from 0.7686 to 0.5967 (resp. from 0.8517 to 0.6831), while that of LDP decreases to 0.5299 (resp. 0.5853). We notice that the highest values of LDP stay at the same level as those of FEVERLESS. This is because, in the case of 2-client training, only one client needs to add noise in LDP (which is identical to our GDP solution). Finally, the worst case can be seen in the right panel of Figure 3, due to the weak robustness of the model trained on banknote authentication: the results are far from the baseline there. But even in this case, FEVERLESS still holds a small advantage over LDP.

4.3 EVALUATION ON TRAINING TIME

To highlight the runtime complexity, we likewise average the results over client number into one tree structure.
We further set the time ranges to [0s, 9,500s], [0s, 3,500s] and [0s, 110s] for the three datasets to deliver visible results. Note that since the banknote dataset contains the fewest samples, it delivers the best training efficiency here. Figure 4 presents the comparison of training time when varying the maximum depth and the number of trees across the datasets. The training time increases exponentially with depth and linearly with the number of trees, which is consistent with our analysis in Section 3.2. In Figure 4, compared with the baseline, the runtime of FEVERLESS increases by at most 110.3s (resp. 50s, 4.3s), while AHE requires around a 70x (resp. 48x, 21x) increase on credit card (resp. bank marketing, banknote authentication), where #depth and #trees are equal to 10. In the average case, FEVERLESS consumes approximately 1% (resp. 6.5%, 13.96%) more training time than the baseline, while AHE requires 351% (resp. 155.1%, 674%) extra, w.r.t. the three datasets. AHE's poor performance is due to the laborious encryption calculations, in which each client has to conduct an encryption per query. By contrast, the maskings in FEVERLESS avoid these excessive costs. We further investigate the runtime performance as a function of the #client in Appendix H.

⁵ https://xgboost.readthedocs.io/
⁶ https://archive.ics.uci.edu/ml/datasets/banknote+authentication

5 CONCLUSION AND FUTURE WORK

We consider a practical VFL scenario where labels are maintained by different clients in a distributed fashion. By leveraging secure aggregation and GDP, we present a novel system, FEVERLESS, to train XGBoost securely. FEVERLESS achieves perfect secrecy for labels and data, and adversaries cannot learn any information about the data if the source client is not corrupted. With DP against differential attacks, the source client learns nothing more than the summation. Our design is also robust to the collusion of n−2 out of n clients. The experimental results show that FEVERLESS is fast and accurate, taking only 1% extra training time and sacrificing 0.9% accuracy, as compared to pure XGBoost. In Appendix F, we discuss how to reduce noise, hide the distribution of labels, and use other security tools. Although our system achieves great performance in terms of security and efficiency, its accuracy still does not hold up well on small-scale datasets; this remains an open problem. We will also consider secure solutions against malicious adversaries.

A NOTATIONS

The frequently used notations are summarized in Table 1.

B PRELIMINARIES

B.1 XGBOOST

XGBoost (Chen & Guestrin (2016)) is a popular tree-based model for tabular data that provides better interpretability, easier parameter tuning and faster execution than deep learning (Goodfellow et al. (2016); LeCun et al. (2015)). It also outperforms other well-known boosting tree systems in terms of accuracy and efficiency, like Spark MLlib (Meng et al. (2016)) and H2O (Chen & Guestrin (2016)), especially for large-scale datasets. Therefore, in this paper, we use XGBoost as a building block for classification tasks. Assume a training set with m data points comprising feature space X = {x_1, ..., x_m} and label space Y = {y_1, ..., y_m}. Before training starts, every feature is sorted based on its values, and split candidates are set for the features. XGBoost builds trees based on the determination of the defined split candidates and some pruning conditions.
Specifically, XGBoost first computes gradients and hessians according to Eq.(2) and Eq.(3) for each data entry, where y_i^{(t−1)} denotes the prediction of the previous trees for the i-th data point, and y_i is the label of the i-th data point:

g_i = 1 / (1 + e^{−y_i^{(t−1)}}) − y_i = ŷ_i − y_i,   (2)

h_i = e^{−y_i^{(t−1)}} / (1 + e^{−y_i^{(t−1)}})².   (3)

For splitting nodes, the XGBoost algorithm determines the best split candidate among all others based on the maximum L_split in Eq.(4), where λ and γ are regularization parameters:

L_split = (1/2) [ (Σ_{i∈I_L} g_i)² / (Σ_{i∈I_L} h_i + λ) + (Σ_{i∈I_R} g_i)² / (Σ_{i∈I_R} h_i + λ) − (Σ_{i∈I} g_i)² / (Σ_{i∈I} h_i + λ) ] − γ.   (4)

The current node becomes a leaf node if either of the following conditions is fulfilled: the maximum depth of the tree is reached, or the maximum impurity value is less than a preset threshold. The leaf value is calculated following Eq.(5):

w = − Σ_{i∈I} g_i / (Σ_{i∈I} h_i + λ).   (5)
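Eqs.(2)-(5) translate directly into a few lines; the following sketch uses toy labels and zero initial predictions purely for illustration:

```python
import numpy as np

def grad_hess(y_raw_prev, y):
    """Eq.(2)-(3) for logistic loss; y_raw_prev is y_i^{(t-1)}."""
    y_hat = 1.0 / (1.0 + np.exp(-y_raw_prev))   # g_i = y_hat_i - y_i
    return y_hat - y, y_hat * (1.0 - y_hat)     # h_i = e^-x / (1 + e^-x)^2

def l_split(g, h, left, lam=1.0, gamma=0.0):
    """Eq.(4); `left` is a boolean mask defining I_L (I_R is its complement)."""
    def term(mask):
        return g[mask].sum() ** 2 / (h[mask].sum() + lam)
    full = np.ones_like(left, dtype=bool)
    return 0.5 * (term(left) + term(~left) - term(full)) - gamma

def leaf_weight(g, h, lam=1.0):                 # Eq.(5)
    return -g.sum() / (h.sum() + lam)

y = np.array([0, 1, 1, 0, 1], dtype=float)
g, h = grad_hess(np.zeros(5), y)                # previous raw predictions = 0
print(l_split(g, h, np.array([1, 1, 0, 0, 0], dtype=bool)), leaf_weight(g, h))
```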
B.2 DIFFIE-HELLMAN KEY EXCHANGE

Based on the Decisional Diffie-Hellman (DDH) hardness assumption (Boneh (1998)) defined below, Diffie-Hellman key exchange (DH) (Diffie & Hellman (1976)) provides a method for exchanging keys across public communication channels. Without loss of generality and correctness, it consists of a tuple of algorithms (Param.Gen, Key.Gen, Key.Exc). The algorithm (G, g, q) ← Param.Gen(1^α) generates public parameters (a group G with prime order q generated by a generator g) based on a security parameter α. (sk_i, pk_i) ← Key.Gen(G, g, q) allows client i to generate a secret key (sk_i ←$ Z_q) and compute the corresponding public key (pk_i ← g^{sk_i}). The shared key is computed by (pk_i^{sk_j}, pk_j^{sk_i}) ← Key.Exc(sk_i, pk_i, sk_j, pk_j). Inspired by (Bonawitz et al. (2017); Ács & Castelluccia (2011)), we utilize shared keys as maskings to protect label information against inference attacks during transmission over public channels. Correctness requires pk_i^{sk_j} = pk_j^{sk_i}. The security relies on the DDH problem (Boneh (1998)), which is defined as:

Definition 4 (Decisional Diffie-Hellman). Let G be a group with prime order q and g the fixed generator of the group. The Probabilistic Polynomial Time (PPT) adversary A is given g^a and g^b, where a and b are randomly chosen. The probability of A distinguishing (g^a, g^b, g^{ab}) from (g^a, g^b, g^c) for a randomly chosen c is negligible:

| Pr[a, b ←$ Z_q : A(g, g^a, g^b, g^{ab}) = true] − Pr[a, b, c ←$ Z_q : A(g, g^a, g^b, g^c) = true] | < negl(α).

B.3 PSEUDO-RANDOM GENERATOR AND HASH FUNCTION

A Pseudo-Random Generator (PRG) (Håstad et al. (1999)) is an algorithm able to generate random numbers. "Pseudo-random" here means that the generated number is not truly random but has similar properties to a random number. Generally, pseudo-random numbers are determined by given initial values, a.k.a. seeds. In cryptographic applications, a secure PRG requires that attackers who do not know the seed can distinguish a truly random number from an output of the PRG with only negligible probability. Similar to a PRG, a hash function maps data of arbitrary size to a value of fixed bit length. To reduce the communication cost of FEVERLESS, we use SHAKE-256 (Sha (2015)), one of the hash functions of the SHA-3 family (Aumasson et al. (2008)), to generate maskings of customized size.

B.4 KEY DERIVATION FUNCTION

A Key Derivation Function (KDF) (Krawczyk & Eronen (2010)) is a kind of hash function that derives multiple secret keys from a main key by utilizing a Pseudo-Random Function (PRF) (Kaliski (2005)). In general, a KDF algorithm DK ← KDF(mainkey, salt, rounds) derives keys DK based on a main key, a cryptographic salt and the current round of the processing algorithm. The security requires that a secure KDF be robust against brute-force and dictionary attacks. Inspired by (Zdziarski (2012)), where key shares generated by DH key exchange are converted to AES keys, in this paper we use the KDF to generate maskings for every round in order to reduce communication cost. The main key we use is generated by DH key exchange.

B.5 VERIFIABLE RANDOM FUNCTION

A Verifiable Random Function (VRF) (Micali et al. (1999)) is a PRF providing verifiable proofs of the correctness of its outputs. It is widely used in cryptocurrencies, smart contracts and leader selection in distributed systems (Micali (2016)). Basically, given an input x, a signature scheme and a hash function, a practical leader selection scheme with a VRF (Micali (2016)) works as:

S_leader ← H(sign_{sk_i}(x)),   (6)

where sk_i is the secret key of the i-th client, and the maximum leader score S_leader is used to determine the leader. The security and unforgeability of the VRF require that the signature scheme has the uniqueness property, and that the hash function maps the signature to a random string of fixed size. The correctness of S_leader is proved by the signature of x.

B.6 DIFFERENTIAL PRIVACY

Differential Privacy (DP) (Dwork et al. (2006a;b)) is a data protection system targeted at publishing statistical information of datasets while keeping individual data private. Its security requires that adversaries cannot distinguish statistical changes between two datasets differing in an arbitrary single data point. The most widely used DP mechanism is (ε, δ)-DP, which requires less injected noise than the originally proposed ε-DP while offering the same privacy level. The formal definition is given as follows.

Definition 5 ((ε, δ)-Differential Privacy). Given two real positive numbers (ε, δ) and a randomized algorithm A: D^n → Y, the algorithm A provides (ε, δ)-differential privacy if for all data sets D, D' ∈ D^n differing in only one data sample, and all S ⊆ Y:

Pr[A(D) ∈ S] ≤ exp(ε) · Pr[A(D') ∈ S] + δ.   (7)

Note that noise N ~ N(0, Δ²σ²) is added to the output of the algorithm, where Δ is the ℓ₂-norm sensitivity of D and σ = √(2 ln(1.25/δ)) / ε (Abadi et al. (2016)).

C PRIVACY CONCERN

Since we assume feature names are not public information for all clients, and the values of features never leave the clients, the privacy issues are mainly incurred by leakage of label information.

C.1 INFERENCE ATTACK

During the training process, gradients and hessians are sent to the source client for the L_split computation. For binary classification, a single gradient lies in the range (−1, 0) ∪ (0, 1). According to Eq.(2), a label can be inferred as 1 or 0 if the gradient lies in (−1, 0) or (0, 1), respectively. Besides, the hessian in Eq.(3) can leak the prediction of the corresponding data sample. As training progresses, the prediction comes increasingly closer to the true label, so the source client and outside attackers can infer the true label with high probability. Gradients and hessians therefore cannot be transmitted in plaintext. We thus use a secure aggregation scheme to protect them against inference attacks.

C.2 DIFFERENTIAL ATTACK

A differential attack can happen anytime and many times during the calculation of gradients and hessians. Figure 5 describes an example of a differential attack taking place in a single node split. After sorting feature 1, the semi-honest source client defines 2 split candidates and further computes G_{2,5} = g_2 + g_5 and G_{1,2,3,5} = g_2 + g_5 + g_1 + g_3 for candidates 1 and 2, respectively. Since the source client holds label 2, even if G_{2,5} is derived by secure aggregation, g_5 can still be revealed as G_{2,5} − g_2. Another example of a differential attack is shown in Figure 6. Assume split candidate 1 is the one chosen for splitting the root node. In the current tree structure, the source client may split the right node by computing L_split of split candidate 2. In this case, G_{1,3} is aggregated by the source client, and g_5 can be revealed as G_{1,2,3,5} − G_{1,3} − g_2, where G_{1,2,3,5} was computed in the previous node.
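The two leakages above reduce to simple subtractions; a numeric version (the gradient values are made up for illustration):

```python
# Per-sample gradients; the source client knows its own g_2.
g = {1: 0.2, 2: -0.7, 3: 0.4, 5: -0.6}

# Attack 1 (Figure 5): only the aggregate G_{2,5} is released, yet g_5 leaks.
G_25 = g[2] + g[5]
print(G_25 - g[2])                     # -0.6 == g_5

# Attack 2 (Figure 6): aggregates from two successive node splits leak g_5 too.
G_1235 = g[1] + g[2] + g[3] + g[5]
G_13 = g[1] + g[3]
print(G_1235 - G_13 - g[2])            # again -0.6 == g_5
```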
D MORE DETAILS ON FEVERLESS PROTOCOL

D.1 XGBOOST TRAINING OVER DISTRIBUTED LABELS

At the initial stage, all clients agree on a tree structure (maximum depth and the number of trees) and the learning rate for updating predictions. To avoid overfitting, we define regularization parameters. The threshold impurity is another vital parameter, used to identify internal and leaf nodes via the maximum impurity. After that, we choose ε and δ for DP, and the hash function for masking generation and noise leader selection. Besides, we select a multiplicative group G of order q generated by a generator g and a large prime number p to run DH. In the initialization process, all clients set parameters and sort their own features by value. Then, split candidates can be defined, and the data samples between two adjacent candidates are grouped into a bucket. At the end, all entries are assigned initialized values from which the derivatives of the loss function are calculated. The detailed procedure is described in Algorithm 1.

Algorithm 1: Initialization
1 Set parameters: all clients agree on the maximum depth of a tree d, the number of trees (N_T), learning rate (η), regularization parameters (λ, γ), the threshold of L_split, ε, δ, p, g, selection portion (p) and hash function
2 for c ∈ [1, n] do
3   for each feature j owned by c do
4     sort(X_j^{(c)})
5     define buckets: B_z^j
6   end
7   set initialized values: ŷ_i^{(c)}
8 end

After initialization, all clients can invoke Algorithm 2 to train the model collaboratively. The inputs are the features X_j^{(c)} and labels y_i^{(c)} distributed over the different clients; the output is a trained XGBoost model that can be used for prediction. Generally, trees are built one by one, and as lines 4-10 of Algorithm 2 show, each client computes gradients and hessians at the beginning of the construction of each new tree.
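Lines 3-6 of Algorithm 1 (sorting a feature and grouping samples into buckets between adjacent split candidates) might look as follows; the evenly spaced candidate rule is an assumption for illustration, since the paper does not fix how candidates are chosen:

```python
import numpy as np

rng = np.random.default_rng(0)
x_j = rng.normal(size=100)        # one feature column X_j^{(c)}
B = 4                             # number of buckets

x_sorted = np.sort(x_j)           # line 4 of Algorithm 1
# One plausible candidate rule (an assumption): evenly spaced order statistics.
cut_ids = np.linspace(0, len(x_j) - 1, B + 1)[1:-1].astype(int)
split_candidates = x_sorted[cut_ids]
bucket_of = np.searchsorted(split_candidates, x_j)    # B_z^j index per sample
buckets = [np.flatnonzero(bucket_of == z) for z in range(B)]
print([len(b) for b in buckets])  # roughly equal-sized buckets
```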
Following that, the clients split the current node. Note that XGBoost training in DL-VFL requires each client to calculate G and H. If the labels in some buckets are incomplete, the corresponding gradients and hessians cannot be computed. Thus, each client first broadcasts the missing data index set mID (lines 15-17 of Algorithm 2). Based on the predefined bucket B_z^j, mID is defined if labels in B_z^j are not held by the client. In each broadcast, the client sending the messages is regarded as a source client. The others then send the corresponding g_i^{(c')} and h_i^{(c')} back to the source client to compute L_split through Algorithms 3-5, depicted in Appendix D.2. After finding the maximum impurity L_split^{(c),max}, the current node is split into "left" and "right" nodes if L_split^{(c),max} > the threshold of L_split, where the value of the split candidate is owned by client c. In node splitting, clients set a given node as a "leaf" if the current depth reaches the predefined maximum depth or the maximum L_split is less than the predefined threshold of L_split (see lines 12 and 24-32 of Algorithm 2). The derivation of the leaf value follows Eq.(5), which takes G and H as input. Since a leaf node is either the "left" or "right" child split by one of the clients in C from its parent node, this client knows G and H, so the leaf value can be derived. Finally, this leaf value is broadcast, and the clients who own the corresponding g_i^{(c)} and h_i^{(c)} use it to update their predictions. The details of the above process are shown in Algorithm 2.

Algorithm 2: Protocol overview
1 Input: {X_j^{(c)} | j ∈ f, c ∈ |C|}: features, {y_i^{(c)} | i ∈ m, c ∈ |C|}: labels
2 Output: XGBoost model
3 Building trees:
4 for n_t ∈ [1, N_T] do
5   for c ∈ [1, n] do
6     for each data entry i owned by c do
7       g_i^{(c)} ← ∂_{ŷ_i^{(c)}} Loss(ŷ_i^{(c)}, y_i^{(c)})
8       h_i^{(c)} ← ∂²_{ŷ_i^{(c)}} Loss(ŷ_i^{(c)}, y_i^{(c)})
9     end
10  end
11  for each node in the current tree do
12    while current depth < d do
13      for c ∈ [1, n] do
14        for each feature j owned by c do
15          for each B_z^j owned by c do
16            broadcast mID = {i | y_i ∉ Y^c}
17          end
18          aggregate G, H by Algorithms 3-5
19          compute L_split according to Eq.(4)
20        end
21        find the maximum L_split^{(c)} and broadcast
22      end
23      L_split^{max} ← max({L_split^{(c)} | c ∈ [1, n]})
24      if L_split^{max} ≤ threshold L_split then
25        set current node as leaf node
26        c computes w and broadcasts
27        break
28      else
29        c splits the current node into a left node and a right node, and broadcasts their data indexes
30      end
31    end
32    set remaining nodes as leaf nodes
33    c computes w and broadcasts
34    clients participating in the calculation of w: update ŷ_i^{(c)}
35  end
36 end

D.2 SECURE AGGREGATION WITH GLOBAL DIFFERENTIAL PRIVACY

In lines 15-19 of Algorithm 2, the source client is able to compute L_split from the requested missing data indexes and the aggregation of the received messages. To prevent inference and differential attacks on labels by the source client and outside adversaries, we propose a privacy-preserving approach, shown in Algorithms 3-5, that "twists" DH key exchange, noise leader selection and secure aggregation together. This method represents a viable alternative for training XGBoost securely in DL-VFL without demanding excessive computational resources or affecting model accuracy.

To generate the secure-but-cancellable maskings, we adopt DH. In Algorithm 3, all clients randomly select numbers as their secret keys and generate the corresponding public keys. Any two clients in the set C exchange public keys and compute the corresponding shared keys. For simplicity, we do not describe the signature scheme for DH. We assume DH is conducted over authenticated channels, which means man-in-the-middle attacks (Khader & Lai (2015)) are invalid here.

Algorithm 3: Diffie-Hellman key exchange
1 for c ∈ [1, n] do
2   sk_c ← Z_p^*
3 end
4 for c ∈ [1, n] do
5   pk_c = g^{sk_c} mod p
6   for c' ∈ [1, n] ∧ c' ≠ c do
7     S_{c,c'} = pk_{c'}^{sk_c} mod p
8   end
9 end

If the shared keys were used as maskings directly, our system would not be robust against client collusion unless communication were sacrificed as the cost of updating the maskings per round; the communication for a single node split would then grow quadratically with the number of clients. Considering the tree structure, the overall communication complexity would be O(2^d · N_T · n²), which may not scale well in practical applications.
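A toy end-to-end sketch of Algorithm 3 plus the signed maskings used later in Algorithm 5 follows; the group parameters, modulus, and gradient sums are made-up toy values, and the per-round KDF update is collapsed into the query argument:

```python
import hashlib
import random

p, g = 2**61 - 1, 5          # toy group parameters; the paper uses RFC 5114
N = 2**32                    # masking modulus
n = 4
sk = [random.randrange(2, p - 1) for _ in range(n)]
pk = [pow(g, s, p) for s in sk]

def shared_key(c, c2):       # S_{c,c'} = pk_{c'}^{sk_c} = g^{sk_c sk_{c'}} mod p
    return pow(pk[c2], sk[c], p)

def H(key, salt, query):     # KDF-style masking share via SHAKE-256
    msg = f"{key}|{salt}|{query}".encode()
    return int.from_bytes(hashlib.shake_256(msg).digest(4), "big")

def mask(c, salt, query):    # sign |c-c'|/(c-c') makes all maskings cancel
    total = 0
    for c2 in range(n):
        if c2 != c:
            sign = 1 if c > c2 else -1
            total += sign * H(shared_key(c, c2), salt, query)
    return total % N

grads = [7, 3, 5, 9]         # toy per-client sums of g_i over mIDs
masked = [(grads[c] + mask(c, salt=0, query=1)) % N for c in range(n)]
assert sum(masked) % N == sum(grads) % N   # maskings cancel at the source client
print(masked, sum(masked) % N)
```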
To tackle this issue, we use the KDF to update the maskings per round automatically. Specifically, in lines 24-25 of Algorithm 5, the shared keys are taken as main keys, and 0 and 1 are the salt values for gradients and hessians, respectively. Since the query changes each round, the generated maskings change dynamically as well. Besides, the sign of a masking is determined by the indexes of the clients. In this way, we only need to run DH once, and the communication complexity becomes independent of the tree structure.

To make FEVERLESS hold against differential attacks, we use a GDP approach allowing a chosen client to inject a global noise into the aggregated values per round. The approach is quite subtle. If the noise leader were selected by the source client, the system would be vulnerable to collusion. Moreover, a client could easily be identified as a target if it were chosen in advance, e.g., by selecting a list of leaders before training. To avoid these issues and limit the probability of collusion to the greatest extent, we use a VRF to iteratively select the leader (see Algorithm 4), who securely injects a global noise. The input of the VRF includes mIDs and a fresh random number r (line 4 of Algorithm 4), so that the leader cannot be predicted and fixed beforehand, reducing its chance of being corrupted in advance by outsiders and the source client. All clients broadcast their scores, and the one holding the maximum value becomes the leader. The leader then re-generates a selection score as the score threshold (selec_threshold) and sends it to the rest of the clients (lines 2-6 of Algorithm 5). The clients send masked noise back to the leader if their re-generated score is larger than the threshold (lines 7-13 of Algorithm 5). Subsequently, the leader selects k of these clients, notifies them, and aggregates their masked noise into a global noise together with a random number. In this context, even if these selected clients collude (note that at least one does not) with the noise leader and the source client, there remains one noise share that cannot be recovered, keeping the training differentially private. Note that since the noise is masked by the random number, the source client (even colluding with the leader) cannot recover the "pure" global noise to conduct a differential attack.

Algorithm 4: Noise leader selection
1 count = 1
2 for each time this algorithm runs do
3   for c ∈ [1, n] ∧ c ≠ source client do
4     selec_c ← H(SIGN_{sk_c}(count, mIDs, r))
5     broadcast
6   end
7   selec_c^max ← max({selec_c | c ∈ [1, n]})
8   set c as noise leader
9   count += 1
10 end

Each client adds noise with probability p. If k out of k̂ clients are non-colluded, the probability of collusion is (1 − k/n)^h. To cancel out the randomness, the selected clients subtract the same randomness from the masked messages (lines 28-31 of Algorithm 5). The source client might procrastinate during the leader selection and noise injection procedure so as to buy time for its colluded clients to prepare sufficiently large VRF values to win the selection and add the noise. One may apply a heartbeat protocol (Nikoletseas & Rolim (2011)) to prevent a newly selected leader from intentionally halting the noise-adding stage for a long period, say 1 min: if there is no response from the leader after a short while, a new leader is randomly selected. Furthermore, the heartbeat may help to handle the case where the leader accidentally drops from the network. We note that the heartbeat protocol is not the main focus of this paper.
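In code, the leader selection of Algorithm 4 can be approximated as follows. A real deployment uses a VRF built from a unique signature scheme plus a hash, as the paper specifies; HMAC-SHA256 below is only a self-contained stand-in so the sketch runs:

```python
import hmac
import hashlib
import os

secret_keys = {c: os.urandom(32) for c in range(4)}   # one key per client

def selection_score(sk, count, m_ids, r):
    # Stand-in for H(SIGN_{sk_c}(count, mIDs, r)); deterministic per client.
    msg = f"{count}|{sorted(m_ids)}|{r}".encode()
    return int.from_bytes(hmac.new(sk, msg, hashlib.sha256).digest(), "big")

count, m_ids, r = 1, [4, 7, 9], os.urandom(8).hex()
scores = {c: selection_score(sk, count, m_ids, r)
          for c, sk in secret_keys.items() if c != 0}  # client 0 = source client
leader = max(scores, key=scores.get)
print("noise leader:", leader)
```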
Before replying to the source client, the clients with labels add maskings to their gradients and hessians; those without labels just generate and send out maskings, and the noise leader (i.e., one of the masking generators) injects the noise. In this way, the maskings, guaranteeing perfect secrecy of the messages, cancel out after the values are aggregated, while the differentially private noise ensures the indistinguishability of individual data entries. Note that in lines 24-34 of Algorithm 5, the maskings and masked values are in the range [0, N−1]. N should be sufficiently large to avoid overflow, and the summation of the gradients and hessians should not exceed N.

Algorithm 5: Secure aggregation with global differential privacy
1 Noise injection:
2 if c = leader then
3   selec_threshold_c ← H(SIGN_{sk_c}(count, mIDs, r))
4   broadcast
5   count += 1
6 end
7 for c ∈ [1, n] ∧ c ≠ source client ∧ c ≠ noise leader do
8   selec_c ← H(SIGN_{sk_c}(count, mIDs, r))
9   if selec_c > selec_threshold_c then
10    send ñ_g^{(c)} = N(0, Δ_g²σ²) + r_g^{(c)} and ñ_h^{(c)} = N(0, Δ_h²σ²) + r_h^{(c)} to the noise leader
11    count += 1
12  end
13 end
14 if c = leader then
15   c selects k clients from the clients sending noise, k = ⌈|{ñ_g^{(c)}}| · p⌉
16   if k < 1 then
17     redo noise injection
18   end
19   notify the k clients
20   noise aggregation: Ñ_g = k · N(0, Δ_g²σ²) + R_g,  Ñ_h = k · N(0, Δ_h²σ²) + R_h
21 end
22 Secure aggregation:
23 for c ∈ [1, n] do
24   mask_g^{(c)} ← ( Σ_{c'≠c} (|c−c'|/(c−c')) · ( H(S_{c,c'} ‖ 0 ‖ query) mod N ) ) mod N
25   mask_h^{(c)} ← ( Σ_{c'≠c} (|c−c'|/(c−c')) · ( H(S_{c,c'} ‖ 1 ‖ query) mod N ) ) mod N
26   G^{(c)} = Σ_{i∈mIDs} g_i^{(c)} + mask_g^{(c)} mod N
27   H^{(c)} = Σ_{i∈mIDs} h_i^{(c)} + mask_h^{(c)} mod N
28   if selec_c > selec_threshold_c ∧ notification received then
29     G^{(c)} = G^{(c)} − r_g^{(c)} mod N
30     H^{(c)} = H^{(c)} − r_h^{(c)} mod N
31   end
32   if c = leader then
33     G^{(c)} = G^{(c)} + Ñ_g mod N
34     H^{(c)} = H^{(c)} + Ñ_h mod N
35   end
36   send {G^{(c)}, H^{(c)}} to the source client
37 end
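The mod-N arithmetic of lines 24-34 presumes that real-valued gradients are first mapped to integers in [0, N−1]. A minimal fixed-point sketch (the scale factor and N are illustrative choices, not values fixed by the paper):

```python
SCALE, N = 10**6, 2**61

def encode(x):
    """Real -> Z_N, two's-complement style; N must exceed any masked sum."""
    return int(round(x * SCALE)) % N

def decode(v):
    """Z_N -> real, mapping the upper half of the range to negatives."""
    if v > N // 2:
        v -= N
    return v / SCALE

vals = [0.31, -0.87, 0.44]
agg = sum(encode(v) for v in vals) % N
print(decode(agg))         # -0.12, matching sum(vals)
```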
E SECURITY ANALYSIS

We investigate the security and privacy properties of our protocol. First, we define the security model of our setting and the desired properties. Then, we prove that our protocol satisfies these properties.

Security Model. Our security analysis is based on the random oracle model (ROM) (Smart (2016)), where the hash function outputs a uniformly random value for each new query and the same value for a previously answered query.

Adversarial Model. Our protocol is designed for the semi-honest security model (Smart (2016)), where all parties follow the protocol while trying to obtain information regarding the other parties' inputs. We assume that the source client can collude with other clients, but the set of colluding clients has size at most n−2.

E.1 PRIVACY GOALS

Our privacy goals can be summarized as:

• Label privacy: No adversary controlling at most n−2 clients can learn who is the owner of a label among the honest parties.
• Data privacy: No adversary controlling at most n−2 clients can extract the data of an honest party.

We first investigate the case where the source client is not part of the adversary. In the following theorem, we show that there exists a simulator Sim that simulates the joint view of the clients in A by only using the inputs belonging to them. This implies that A does not learn more than what they already have.

Theorem E.1 (A not including the source client). There exists a PPT simulator Sim for all |C| := n ≥ 3, |X| := f ≥ n, |Y| := m ≥ 1, ⋃_{c∈C} X^{(c)}, ⋃_{c∈C} Y^{(c)} and A ⊂ C such that |A| ≤ n−2, whose output is indistinguishable from the output of REAL:

REAL_A^{C,X,Y}(X^C, Y^C) ≡ Sim_A^{C,X,Y}(X^A, Y^A).   (8)

Proof. In order to prove that the simulator Sim can simulate the outputs of the honest parties in H := C − A, we show that the distribution of the inputs belonging to the rest of the network cannot be distinguished from randomly generated data. In this way, the simulator can use dummy values as the inputs of the honest parties to simulate their outputs. We simulate the view of A regarding the messages broadcast by the honest clients.

A client c first performs a key exchange with the others, then after some internal operations outputs its G^{(c)} and H^{(c)} values. Consider the G^{(c)} value, which is of the form Σ_{i∈mIDs} g_i^{(c)} + mask_g^{(c)}, except for the noise leader, who has the additional noise N(0, (Δ_g σ)²). The mask values are computed as Σ_{c'≠c} (|c−c'|/(c−c')) · H(S_{c,c'} ‖ 0 ‖ query) mod N. We use a hybrid argument in which we modify the protocol in several steps, and for each step we show that the modification is indistinguishable to the adversary A. In the end, we arrive at a hybrid that can be simulated by Sim.

Hybrid₁: The first hybrid directly follows the protocol. The distribution of the variables and the view of A are the same as in REAL.

Hybrid₂: In the second hybrid, we replace the agreed keys between honest clients, S_{c,c'} for all c, c' ∈ H, with random values r_{c,c'} ∈ G, where G is the group of the key exchange protocol. In the original protocol, Diffie-Hellman key exchange is used. The replacement is indistinguishable to the adversary because of the decisional Diffie-Hellman assumption given in Definition 4. Also, note that these random values are only available to the parties involved in the key exchange, unless they are corrupted by the adversary.

Hybrid₃: In this hybrid, we replace the mask values of honest clients, mask_g^{(c)} for all c ∈ H, with random values R^{(c)}. Note that with the replacement in the previous step, the mask values are computed as Σ_{c'≠c} (|c−c'|/(c−c')) · H(r_{c,c'} ‖ 0 ‖ query) mod N, where r_{c,c'} ∈ Z_N is a random value unknown to the adversary (if both c and c' are honest). Because of the random oracle model, the output of the hash function is a uniformly random value that is also unknown to the adversary. Since there are at most n−2 clients in A, there are at least two honest clients c and c' for which the adversary cannot know the uniformly chosen output of H(r_{c,c'} ‖ 0 ‖ query). Then, the modular summation of these outputs includes at least one uniformly random value that the adversary does not know. Thus, it cannot be distinguished from a random value R^{(c)}.

Hybrid₄: In this hybrid, we replace the gradients of honest clients, g_i^{(c)} for all c ∈ H, with 0s. This is done by replacing the mask values with R^{(c)} := R^{(c)} − Σ_{i∈mIDs} g_i^{(c)} mod N to keep the G^{(c)} value the same. From the adversary's perspective, since the R^{(c)} values are unknown and chosen uniformly at random, the replacement is not distinguishable.

In Hybrid₄, we replace the gradients of the honest parties with 0s, and the mask values are replaced by R^{(c)}, which is unknown to the adversary and chosen from a uniform distribution. Thus, a simulator Sim can simulate the outputs of the honest parties, G^{(c)}, without knowing their inputs. The same analysis applies to the hessian values H^{(c)}.
Since the masking values of G^{(c)} and H^{(c)} are different and the hash function is modeled as a random oracle, the randomness in both parts is independent and indistinguishable to the adversary A. Overall, the simulator Sim can simulate our protocol, and the view of A can be simulated by replacing the inputs of the honest parties with zeros. Thus, the adversary does not learn any information about the inputs of the honest parties.

Now, we analyze the case where the source client is part of A. We show that there exists a simulator Sim that simulates the joint view of the clients in A by only using the inputs belonging to them together with the summations G and H. This implies that A does not learn more than what they already have plus the summations.

Theorem E.2 (A including the source client). There exists a PPT simulator Sim for all |C| := n ≥ 3, |X| := f ≥ n, |Y| := m ≥ 1, ⋃_{c∈C} X^{(c)}, ⋃_{c∈C} Y^{(c)} and A ⊂ C such that |A| ≤ n−2, whose output is indistinguishable from the output of REAL:

REAL_A^{C,X,Y}(X^C, Y^C) ≡ Sim_A^{C,X,Y}(G, H, X^A, Y^A),   (9)

where G = Σ_{i∈mIDs} g_i^{(c)} + N(0, (Δ_g σ)²) and H = Σ_{i∈mIDs} h_i^{(c)} + N(0, (Δ_h σ)²).

Proof. Here, we again show that Sim can simulate the outputs of the honest parties in H without knowing their inputs. Unlike Theorem E.1, Sim is also given the summations G and H, because the adversary includes the source client. We can reuse the hybrids of Theorem E.1 up to Hybrid₄, because the inputs of the honest clients are not required until then. We only need to update Hybrid₄ so that it takes the summations into account. The hybrids for A including the source client are:

Hybrid₁, Hybrid₂, Hybrid₃: The same as in Theorem E.1.

Hybrid₄: In this hybrid, we replace the gradients of honest clients, g_i^{(c)} for all c ∈ H, with 0s, except for one client c', whose value is set to Σ_{i∈mIDs} g_i^{(H)} mod N = G − Σ_{i∈mIDs} g_i^{(A)} mod N. The honest client c' is randomly chosen among H. From the adversary's perspective, since the R^{(c)} are unknown, uniformly random values, the replacement is not distinguishable.

Overall, the view of A can be simulated by replacing the inputs of the honest parties with zeros, except for one set to Σ_{i∈mIDs} g_i^{(H)} mod N. Thus, A does not learn any information from the honest clients except the summation Σ_{i∈mIDs} g_i^{(H)} mod N.

With Theorem E.2, we show that even an adversary A including the source client cannot learn more than the summations of the gradient and hessian values, G and H. The proof proceeds via a Sim that does not require the individual data of the honest clients, only the summation. This implies that the adversary cannot distinguish which party provided which gradient or hessian values. Moreover, the parties who do not have any of the requested g or h values send 0 together with the mask (and the noise, for the leader). This provides label privacy: the adversary cannot distinguish which label's g or h values come from which honest client.

When the adversary includes the source client, the summations of the gradient and hessian values are known to the adversary. In the following theorem, we show that these summations do not leak any individual data, thanks to differential privacy.

Theorem E.3 (Privacy of the Inputs). No A ⊂ C such that |A| ≤ n−2 can retrieve the individual values of the honest clients with probability 1 − Σ_{i=0}^{k̂} C_h^i C_{n−2−h}^{k̂−i} (P_t)^{k̂} (1−P_t)^{(n−k̂)} · C_{k̂−i}^k / C_{k̂}^k, where h and k̂ refer to the number of non-colluded clients and the number of clients whose selection score is larger than the threshold, respectively.
P_t is the probability of a selection score being larger than the threshold.

Proof. If the adversary does not include the source client, then by the previous theorems the adversary cannot learn any of the inputs belonging to the honest parties. Otherwise, it knows the summations G and H. Since we apply differential privacy (Dwork et al. (2006a;b)), the summations cannot leak information about the inputs. According to Definition 5, we add differentially private noise that guarantees the security of individual data points while the summation can still be calculated.

Proof of the probability. Note that the noise leader selects k clients among the other n clients (excluding itself and the source client) to add noise. Suppose that there are h non-colluded clients among these n−2 clients, and that the number of clients whose selection scores are larger than the threshold is k̂. The number of events is C_{n−2−h}^{k̂} + C_h^1 C_{n−2−h}^{k̂−1} + ... + C_h^{k̂} C_{n−2−h}^0, where the events are {"there are k̂ colluded clients out of k̂ clients and 0 non-colluded clients", ..., "there are 0 colluded clients out of k̂ clients and k̂ non-colluded clients"}. Therefore,

P(E_i) = C_h^i (P_t)^i (1−P_t)^{h−i} · C_{n−2−h}^{k̂−i} (P_t)^{k̂−i} (1−P_t)^{(n−h−k̂+i)} = C_h^i C_{n−2−h}^{k̂−i} (P_t)^{k̂} (1−P_t)^{(n−k̂)},

where P_t is the probability that a selection score is larger than the threshold, and E_i is the i-th event. Then, the probability that the noise leader selects k colluded clients from the k̂ clients is P_0 = C_{k̂−i}^k / C_{k̂}^k. In the end, the probability that all aggregated noise comes from colluded clients is

Σ_{i=0}^{k̂} P(E_i) · P_0 = Σ_{i=0}^{k̂} C_h^i (P_t)^i (1−P_t)^{h−i} · C_{n−2−h}^{k̂−i} (P_t)^{k̂−i} (1−P_t)^{(n−h−k̂+i)} · C_{k̂−i}^k / C_{k̂}^k = Σ_{i=0}^{k̂} C_h^i C_{n−2−h}^{k̂−i} (P_t)^{k̂} (1−P_t)^{(n−k̂)} · C_{k̂−i}^k / C_{k̂}^k.

Conversely, the probability that at least one non-colluded client participates in the noise injection is

1 − Σ_{i=0}^{k̂} C_h^i C_{n−2−h}^{k̂−i} (P_t)^{k̂} (1−P_t)^{(n−k̂)} · C_{k̂−i}^k / C_{k̂}^k.

Note that because of the secure aggregation, the adversary cannot learn anything but the summation. Thus, our protocol does not require adding noise to each data point. Instead, we only require the noise leader to add the noise, which prevents the retrieval of individual data from the summation. In Theorems E.1 and E.2, we show that A cannot distinguish the individual values from randomly chosen values and can only learn the summation if the source client is part of the adversary. In Theorem E.3, we show that A cannot extract the individual values of the users from the summation, thanks to the added noise and differential privacy. Thus, our protocol satisfies data privacy; in other words, the adversary cannot learn the data points of an honest client. It is important to note that since the noise leader is selected via the VRF, no adversary can guess beforehand whether an honest party will be the leader in the upcoming round. This provides additional security against manipulation of the noise leader.
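A quick Monte-Carlo sanity check of this collusion probability can be sketched as follows; the parameter values are arbitrary, and trials where fewer than k clients qualify are counted as safe here, whereas the protocol would redo the injection:

```python
import random

n, h, k, P_t, trials = 10, 4, 2, 0.5, 200_000
# n-2 candidate clients (no source client, no leader); True = colluded.
colluded = [False] * h + [True] * (n - 2 - h)

fail = 0
for _ in range(trials):
    # Each candidate clears the score threshold independently with prob P_t.
    qualifiers = [c for c in colluded if random.random() < P_t]
    if len(qualifiers) < k:
        continue                       # leader would redo noise injection
    picked = random.sample(qualifiers, k)
    fail += all(picked)                # all k noise shares from colluded clients
print("P(all noise from colluded clients) ~", fail / trials)
```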
F DISCUSSION

To reduce the negative impact brought by the noise, one may exploit the infinite divisibility of the Gaussian distribution (Patel & Read (1996)) and split the global noise N(0, (Δσ)²) into n parts N(0, (Δσ)²/n). A drawback is that the privacy budget then increases linearly with the number of colluded clients. For example, if GDP achieves ε-DP, then in the worst case, where there are n−1 colluded clients, the privacy budget rises to n × ε.

Hiding the label distribution. In the proposed protocol, the source client broadcasts the missing data indexes mID (line 16 of Algorithm 2). Under the semi-honest setting, if the source client sends missing indexes consistently, adversaries can figure out which labels are distributed on the source client by statistical analysis. We note that FEVERLESS can be extended to avoid this type of leakage at the price of extra communication overhead. Specifically, during the broadcasting period, the source client should send the indexes of a whole bucket instead of mID, with the rest of the protocol remaining unchanged. In this way, others cannot deduce the distribution of labels, because all clients share the same index set I. If we assume labels are uniformly distributed over the clients, the extra overhead is restricted to |I|/|C|. This cost is clearly noticeable for datasets with a large number of data points.

Other security tools. The masking scheme realizing secure aggregation may be replaced with MPC (Damgård et al. (2012); Wu et al. (2020)) or additively homomorphic encryption (Paillier (1999)). However, the major defect of these tools is that they entail labor-intensive computation for encryption, which may not scale well to large datasets. Due to this concern, we only put lightweight computation in FEVERLESS and, further, we enhance the security to "perfect secrecy". In our design, the selection of the noise leader is handled by a VRF. We note that there may be other options to fulfil this goal. For example, Proof of Elapsed Time (PoET) (Chen et al. (2017); Corso (2019)) is an interesting and effective mechanism used to maintain consensus among distributed peers in Hyperledger Sawtooth. It provides a fair and trusted lottery strategy to select a block winner (per consensus round). Sharing the same philosophy as the VRF, it could be deployed in our protocol to choose the leader. Building a more efficient noise leader selection algorithm remains an interesting open problem.

G MORE DETAILS ON EXPERIMENT SETUP

All the experiments are implemented in Python and conducted on a cluster of machines with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 15GB RAM in a local area network. Intuitively, the smaller we set ε, the more secure FEVERLESS will be, but the larger the added noise; this statement is borne out by the experimental results. As for the cryptographic tools, we set the key sizes of DH and Paillier to 160 bits and 1024 bits, respectively (to save some time in running the experiments). These sizes reach a symmetric security level with an 80-bit key length. Note that one may increase the key size to obtain stronger security⁷, but this brings longer experiment time as a side effect. We use the 1024-bit MODP Group with 160-bit Prime Order Subgroup from RFC 5114⁸ for DH key exchange. SHAKE-256 (Dworkin (2015)), a member of the SHA-3 family (Dworkin (2015)), is used as the hash function in leader selection and secure aggregation.

• Credit Card: a commercial dataset used for predicting whether customers will make payments on time. It provides 30,000 samples, each composed of 23 features.
• Bank Marketing: consisting of 45,211 data points and 17 features; the goal is to predict whether a client will subscribe to a term deposit.
• Banknote Authentication: offering 1,372 data points and 4 features; this dataset is used to classify authenticated and unauthenticated banknotes.

⁷ Note that a stronger security level will not affect the training accuracy.
⁸ https://tools.ietf.org/html/rfc5114
Note that, unlike traditional tabular data, the features of this dataset are extracted from images taken from genuine and forged banknote-like specimens through a Wavelet Transform (Antonini et al. (1992)). On such a small-scale dataset, the trained model may not be robust to noise, which has a negative impact on accuracy.

H ADDITIONAL EXPERIMENTS AND FIGURES

We present additional experiments, where all experimental settings follow those defined in Section 4.1. In each presented figure, we show the results executed on the datasets Credit Card (left), Bank Marketing (middle) and Banknote Authentication (right). Note that the comparison among FEVERLESS, LDP and AHE requires #client = 2; when #client = 1, we can only show the results of the baseline. The average performance of FEVERLESS in these figures is highlighted as the red dotted line. Via these experiments, we elaborate how the accuracy varies with an increasing number of clients among the baseline, FEVERLESS and LDP, w.r.t. different tree structures and ε. Figures 7-18 present the best case, where only one non-colluded client adds the noise. Other cases are demonstrated in Figures 19-26, with selection scores 1/2 and 1/3. Beyond those, we also add comparison results for AHE in Tables 2-4 with ε = 2.

In general, without any added noise, the baseline reaches the highest accuracy, and its accuracy remains stable as the client number increases. The performance of FEVERLESS is right behind that of the baseline and also remains stable. Note that there are slight fluctuations in some figures (e.g., Figures 10, 12 and 14), especially for the cases with complex tree structures and small ε. The LDP approach does harm accuracy, which can be seen from the continuously and significantly falling bars in the figures: naturally, as more clients engage in the training, more noise is added to the model. This makes LDP's performance fall far below the red line. Note that the banknote dataset is composed of 4 features; in the VFL setting, every client must have at least one feature, so we can only allow up to 4 clients to participate in the training. Besides, FEVERLESS does not perform well on the banknote dataset. This is because the model is trained on a small number of samples, so its robustness is seriously affected by noise.

H.1 BEST CASE: ACCURACY ON CLIENT NUMBER

H.2 OTHER CASES: ACCURACY ON CLIENT NUMBER

H.3 ADDITIONAL RESULTS ON ACCURACY FOR BANKNOTE AUTHENTICATION

H.4 ADDITIONAL RESULTS ON TIME

In Figures 29-33, we show the time performance for various numbers of clients, trees and depths. Besides, we present the concrete results in Tables 5-7. Table 8 also shows the specific runtime of tree construction with #tree = 4 and depth = 4 among the baseline, FEVERLESS, LDP and AHE. In general, the runtime of FEVERLESS is slightly higher than that of the baseline. Compared to AHE, FEVERLESS significantly reduces training time while preserving privacy. This advantage is clearly seen in the cases using complex tree structures. Note that AHE can be replaced by other more complex cryptographic solutions, such as secure MPC, which can also maintain data/label privacy, but MPC-based solutions will consume even more runtime.
H.5 RESULTS ON COMMUNICATION COST

In Figures 34-36, we demonstrate the communication cost for varying numbers of clients, trees and depths. For the convenience of comparison, we set #clients = 4, #tree = 4 and depth = 4 as defaults. We use Tables 9-11 to detail the concrete costs. To sum up, the communication cost of FEVERLESS is almost the same as those of the baseline and LDP; compared to AHE, FEVERLESS significantly reduces costs while maintaining privacy.
1. What is the focus and contribution of the paper regarding vertical federated learning?
2. What are the strengths of the proposed approach, particularly in terms of data and label privacy?
3. What are the weaknesses of the paper, especially regarding its readability and the examples given?
4. Do you have any concerns about the protocol's functionality when there are missing labels or multiple clients holding the same label?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Review
Summary Of The Paper

The paper proposes a new approach called FEVERLESS to conduct vertical federated learning over distributed labels. In the studied setting, the clients have different features of the same samples, and the labels are distributed over different clients. To get the gradients of the samples with missing labels, the clients utilize Diffie-Hellman key exchange, a key derivation function, and differential privacy to send masked gradients to the source client. Then, the source client computes the gain based on the gradients, and the split candidate with the maximum gain is selected as the split point. The authors prove that FEVERLESS protects label and data privacy against an adversary controlling at most n−2 clients. The experiments show that FEVERLESS achieves accuracy comparable to XGBoost.

Review

The paper provides a comprehensive security analysis of the proposed protocol. However, I have the following concerns.

1. The paper is not easy to read. The preliminaries and many details about the algorithm are put into the appendix, which is not friendly to the readers.
2. The paper provides two examples (i.e., hospitals, bank branches) to demonstrate that the distributed-label setting is common in practice. However, these two examples are not convincing. In these two examples, the training examples are usually distributed across the clients (e.g., customers are registered to distinct branches, as mentioned in the paper), which is more like a horizontal federated learning setting.
3. In the proposed algorithm, the clients send the sum of the gradients G to the source client. However, the source client needs to know G_L and G_R to compute the gain. How does the client split the received G into G_L and G_R?
4. For a missing label of the source client, there may be multiple clients that hold the label. Then, noise is injected into the gradients of the same sample multiple times. In this case, what is the overall privacy budget for the sample? Also, does the algorithm still work in such a case?
5. The datasets used in the experiments are small. The authors should conduct experiments on a dataset with at least a million samples.
6. It is hard to compare the exact performance of the different approaches in Figure 2 and Figure 4. The authors could add a table showing the accuracy and time of the different approaches for a default depth and number of trees.
7. In the computation cost, the time complexity of the generated masks is O(xxx · n), while the overall complexity is O(xxx + n). Is this a typo?
ICLR
Title Unsupervised learning of features and object boundaries from local prediction Abstract The human visual system has to learn both which features to extract from images and how to group locations into (proto-)objects. Those two aspects are usually dealt with separately, although predictability is discussed as a cue for both. To incorporate features and boundaries into the same model, we model a retinotopic visual cortex with a pairwise Markov random field model in which each factor is paired with an additional binary variable, which switches the factor on or off. Using one of two contrastive learning objectives, we can learn both the features and the parameters of the Markov random field factors from images without further supervision signals. The features learned by shallow neural networks based on this loss are local averages, opponent colors, and Gabor-like stripe patterns as observed in early human visual cortices. Furthermore, we can infer connectivity between locations by inferring the switch variables. Contours inferred from this connectivity perform quite well on the Berkeley segmentation database (BSDS500) without any training on contours. Thus, optimizing predictions across space aids both segmentation and feature learning, and models trained this way show similarities to the human visual system. We speculate that retinotopic visual cortex might implement such predictions over space through lateral connections. 1 INTRODUCTION A long-standing question about human vision is how representations that are initially based on parallel processing of retinotopic feature maps can come to represent objects in a useful way. Most research on this topic has focused on computing later object-centered representations from the feature map representations. Psychology and neuroscience identified features that lead to objects being grouped together (Koffka, 1935; Köhler, 1967), established feature integration into coherent objects as a sequential process (Treisman & Gelade, 1980), and developed solutions to the binding problem, i.e., ways in which neurons could signal whether they represent parts of the same object (Finger & König, 2014; Peter et al., 2019; Singer & Gray, 1995; Treisman, 1996). In computer vision, researchers also focused on how feature map representations could be turned into segmentations and object masks. Classically, segmentation algorithms were clustering algorithms operating on extracted feature spaces (Arbeláez et al., 2011; Comaniciu & Meer, 2002; Cour et al., 2005; Felzenszwalb & Huttenlocher, 2004; Shi & Malik, 2000), and this approach is still explored with more complex mixture models today (Vacher et al., 2022). Since the advent of deep neural network models, the focus has shifted towards models that directly map to contour maps or semantic segmentation maps (Girshick et al., 2014; He et al., 2019; Kokkinos, 2016; Liu et al., 2017; Shen et al., 2015; Xie & Tu, 2015), as reviewed by Minaee et al. (2021). Diverse findings suggest that processing within the feature maps takes object boundaries into account. For example, neurons appear to encode border ownership (Jeurissen et al., 2013; Peter et al., 2019; Self et al., 2019) and to fill in information across surfaces (Komatsu, 2006) and along illusory contours (Grosof et al., 1993; von der Heydt et al., 1984). Also, attention spreading through the feature maps seems to respect object boundaries (Baldauf & Desimone, 2014; Roelfsema et al., 1998).
And selecting neurons that correspond to an object takes time, which scales with the distance between the points to be compared (Jeurissen et al., 2016; Korjoukov et al., 2012). Finally, a long history of psychophysical studies showed that changes in spatial frequency and orientation content can define (texture) boundaries (e.g. Beck et al., 1987; Landy & Bergen, 1991; Wolfson & Landy, 1995). In both human vision and computer vision, relatively little attention has been given to these effects of grouping or segmentation on the feature maps themselves. Additionally, most theories for grouping and segmentation take the features in the original feature maps as given. In human vision, these features are traditionally chosen by the experimenter (Koffka, 1935; Treisman & Gelade, 1980; Treisman, 1996) or are inferred based on other research (Peter et al., 2019; Self et al., 2019). Similarly, computer vision algorithms originally used off-the-shelf feature banks (Arbeláez et al., 2011; Comaniciu & Meer, 2002; Cour et al., 2005; Felzenszwalb & Huttenlocher, 2004; Shi & Malik, 2000), and have recently moved towards deep neural network representations trained for other tasks as a source for feature maps (Girshick et al., 2014; He et al., 2019; Kokkinos, 2016; Liu et al., 2017; Shen et al., 2015; Xie & Tu, 2015). Interestingly, predictability of visual inputs over space and time has been discussed as a solution for both these limitations of earlier theories. Predictability has been used as a cue for segmentation since the law of common fate of Gestalt psychology (Koffka, 1935), and both lateral interactions in visual cortices and contour integration respect the statistics of natural scenes (Geisler & Perry, 2009; Geisler et al., 2001). Among other signals like sparsity (Olshausen & Field, 1996) or reconstruction (Kingma & Welling, 2014), predictability is also a well-known signal for self-supervised learning of features (Wiskott & Sejnowski, 2002), which has been exploited by many recent contrastive learning (e.g. Feichtenhofer et al., 2021; Gutmann & Hyvarinen, 2010; Hénaff et al., 2020; van den Oord et al., 2019) and predictive coding schemes (e.g. Lotter et al., 2017; 2018; van den Oord et al., 2019) for self-supervised learning. However, these uses of predictability for feature learning and for segmentation are usually studied separately. Here, we propose a model that learns both features and segmentation without supervision. Predictions between locations provide a self-supervised loss to learn the features, how to perform the prediction, and how to infer which locations should be grouped. Also, this view combines contrastive learning (Gutmann & Hyvarinen, 2010; van den Oord et al., 2019), a Markov random field model for the feature maps (Li, 2012), and segmentation into a coherent framework. We implement our model using shallow architectures. The learned features resemble early cortical responses, and the object boundaries we infer from predictability align well with human object contour reports from the Berkeley segmentation database (BSDS500; Arbeláez et al., 2011). Thus, retinotopic visual cortex might implement similar computational principles as we propose here. 2 MODEL To explain our combined model of feature maps and their local segmentation information, we start with a Gaussian Markov random field model (Li, 2012) with pairwise factors. We then add a variable w ∈ {0, 1} to each factor that governs whether the factor enters the product or not.
This yields a joint distribution for the whole feature map and all w's. Marginalizing out the w's yields a Markov random field with "robust" factors for the feature map, which we can use to predict feature vectors from the vectors at neighboring positions. We find two contrastive losses based on these predictions that can be used to optimize the feature extraction and the factors in the Markov random field model. We model the distribution of k-dimensional feature maps $\mathbf{f} \in \mathbb{R}^{k, m', n'}$ that are computed from input images $I \in \mathbb{R}^{c, m, n}$ with $c = 3$ color channels (see Fig. 1 A & B). We use a Markov random field model with pairwise factors, i.e., we define the probability of encountering a feature map $\mathbf{f}$ with entries $f_i$ at locations $i \in [1 \dots m'] \times [1 \dots n']$ as follows:

$$p(\mathbf{f}) \propto \prod_i \psi_i(f_i) \prod_{(i,j) \in N} \psi_{ij}(f_i, f_j), \quad (1)$$

where $\psi_i$ is the local factor, $N$ is the set of all neighboring pairs, and $\psi_{ij}$ is the pairwise factor between positions $i$ and $j$ ($i$ and $j$ thus have two entries each). We will additionally assume shift invariance, i.e., each point has the same set of nearby relative positions in the map as neighbors, $\psi_i$ is the same factor for each position, and each factor $\psi_{ij}$ depends only on the relative position of $i$ and $j$. We now add a binary variable $w \in \{0, 1\}$ to each pairwise factor that encodes whether the factor is 'active' ($w = 1$) for that particular image (Fig. 1 C). To scale the probabilities of $w = 1$ and $w = 0$ relative to each other, we add a factor that scales them with constants $p_{ij} \in [0, 1]$ and $1 - p_{ij}$ respectively:

$$p(\mathbf{f}, \mathbf{w}) \propto \prod_i \psi_i(f_i) \prod_{(i,j) \in N} p_{ij}^{w_{ij}} (1 - p_{ij})^{1 - w_{ij}} \psi_{ij}(f_i, f_j)^{w_{ij}} \quad (2)$$

Finally, we assume that the factors are Gaussian and the feature vectors are originally normalized to have mean 0 and variance 1:

$$p(\mathbf{f}, \mathbf{w}) = \frac{1}{Z_0} \mathcal{N}(\mathbf{f}; 0, I) \prod_{(i,j) \in N} \frac{p_{ij}^{w_{ij}} (1 - p_{ij})^{1 - w_{ij}}}{Z(w_{ij}, C_{ij})} \exp\left(-\frac{w_{ij}}{2} (f_i - f_j)^T C_{ij} (f_i - f_j)\right), \quad (3)$$

where $Z_0$ is the overall normalization constant, $\mathcal{N}(\mathbf{f}; 0, I)$ is the density of a standard normal distribution with $k \times m' \times n'$ dimensions, $C_{ij}$ governs the strength of the coupling in the form of a precision matrix, which we will assume to be diagonal, and $Z(w_{ij}, C_{ij})$ scales the distributions with $w_{ij} = 0$ and $w_{ij} = 1$ relative to each other. We set $Z(w_{ij}, C_{ij})$ to the normalization constant of the Gaussian with standard Gaussian factors for $f_i$ and $f_j$ respectively. For $w = 0$ this is just $(2\pi)^{-k}$, the normalization constant of a standard Gaussian in $2k$ dimensions. For $w = 1$ we get:

$$Z(w_{ij} = 1, C_{ij}) = \int\!\!\int \exp\left(-\frac{1}{2} f_i^T f_i - \frac{1}{2} f_j^T f_j - \frac{1}{2} (f_i - f_j)^T C_{ij} (f_i - f_j)\right) df_i \, df_j \quad (4)$$

$$= (2\pi)^{-k} \det \begin{pmatrix} I + C_{ij} & C_{ij} \\ C_{ij} & I + C_{ij} \end{pmatrix}^{\frac{1}{2}} \quad (5)$$

$$= (2\pi)^{-k} \prod_l \sqrt{1 + 2 c_{ll}} \quad (6)$$

which we get by computing the normalization constant of a Gaussian with the given precision and then using the assumption that $C_{ij}$ is a diagonal matrix with diagonal entries $c_{ll}$. This normalization depends only on $w$ and the coupling matrix $C$ of the factor $\psi_{ij}$ and thus induces a valid probability distribution on the feature maps. Two points are notable about this normalization though: First, once other factors also constrain $f_i$ and/or $f_j$, this normalization will not guarantee $p(w_{ij} = 1) = p_{ij}$; instead, $p(w_{ij} = 1)$ will be higher, because other factors increase the precision for the feature vectors, which makes the normalization constants more similar. Second, the $w_{ij}$ are not independent in the resulting distribution. For example, if pairwise factors connect $a$ to $b$, $b$ to $c$, and $a$ to $c$, the corresponding $w$ are dependent, because $w_{ab} = 1$ and $w_{bc} = 1$ already imply a smaller difference between $f_a$ and $f_c$ than if these factors were inactive, which increases the probability of $w_{ac} = 1$.
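To make the factor parameterization concrete, here is a minimal PyTorch sketch of the pairwise Gaussian factor and the marginal "robust" factor that remains after summing out w_ij (cf. Eq. (7) below). The function names are ours and hypothetical, the Z(w_ij, C_ij) scaling is omitted for brevity, and C and p are stored as a log-diagonal and a logit respectively, matching the parameterization reported in the supplement.

```python
import torch

def log_pairwise_factor(fi, fj, log_c):
    # log psi_ij(f_i, f_j) = -1/2 (f_i - f_j)^T C_ij (f_i - f_j)
    # for a diagonal precision C_ij = diag(exp(log_c)); storing C in log
    # space keeps the precision positive during gradient descent.
    # fi, fj: tensors of shape (..., k); log_c: tensor of shape (k,).
    diff = fi - fj
    return -0.5 * (log_c.exp() * diff ** 2).sum(dim=-1)

def log_robust_factor(fi, fj, log_c, logit_p):
    # log[p_ij * psi_ij(f_i, f_j) + (1 - p_ij)], the "robust" marginal
    # factor; p_ij is parameterized by a logit (a 0-dim tensor), and the
    # Z(w_ij, C_ij) scaling from Eq. (3) is left out for brevity.
    log_p = torch.nn.functional.logsigmoid(logit_p)
    log_1mp = torch.nn.functional.logsigmoid(-logit_p)
    return torch.logaddexp(log_p + log_pairwise_factor(fi, fj, log_c), log_1mp)
```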
2.1 LEARNING To learn our model from data, we use a contrastive learning objective on the marginal likelihood $p(\mathbf{f})$. To do so, we first need to marginalize out the $w$'s, which is fortunately simple, because each $w$ affects only a single factor:

$$p(\mathbf{f}) = \sum_{\mathbf{w}} p(\mathbf{f}, \mathbf{w}) = \frac{1}{Z_0} \mathcal{N}(\mathbf{f}; 0, I) \prod_{(i,j) \in N} \left[ p_{ij} \psi_{ij}(f_i, f_j) + (1 - p_{ij}) \right] \quad (7)$$

Using this marginal likelihood directly for fitting is infeasible though, because computing $Z_0$, i.e., normalizing this distribution, is not computationally tractable. We resort to contrastive learning to fit the unnormalized probability distribution (Gutmann & Hyvarinen, 2010), i.e., we optimize discrimination from a noise distribution with the same support as the target distribution. Following van den Oord et al. (2019) we do not optimize the Markov random field directly, but optimize predictions based on the model using features from other locations as the noise distribution. For this noise distribution, the factors that depend only on a single location (the first product in (1)) will cancel. We thus ignore the $\mathcal{N}(\mathbf{f}; 0, I)$ in our optimization and instead normalize the feature maps to mean 0 and unit variance across each image. We define two alternative losses that make predictions for positions based on all their neighbors or for a single factor respectively. 2.1.1 POSITION LOSS The position loss optimizes the probability of the feature vector at each location relative to the probability of randomly chosen other feature vectors from different locations and images:

$$l_{\text{pos}}(\mathbf{f}) = \sum_i \log \frac{p(f_i \mid f_j \, \forall j \in N(i))}{\sum_{i'} p(f_{i'} \mid f_j \, \forall j \in N(i))} \quad (8)$$

$$= \sum_i \sum_{j \in N(i)} \log \psi_{ij}(f_i, f_j) - \sum_i \log \sum_{i'} \exp \sum_{j \in N(i)} \log \psi_{ij}(f_{i'}, f_j), \quad (9)$$

where $N(i)$ is the set of neighbors of $i$. 2.1.2 FACTOR LOSS The factor loss instead maximizes each individual factor for the correct feature vectors relative to random pairs of feature vectors sampled from different locations and images:

$$l_{\text{fact}} = \sum_{i,j} \log \frac{\psi_{ij}(f_i, f_j)}{\sum_{i',j'} \psi_{ij}(f_{i'}, f_{j'})} \quad (10)$$

$$= \sum_{i,j} \log \psi_{ij}(f_i, f_j) - \sum_{i,j} \log \sum_{i',j'} \psi_{ij}(f_{i'}, f_{j'}), \quad (11)$$

where $i, j$ index the correct locations and $i', j'$ index randomly drawn locations, in our implementation generated by shuffling the feature maps and taking all pairs that occur in these shuffled maps. 2.1.3 OPTIMIZATION We optimize all weights of the neural network used for feature extraction and the parameters of the random field, i.e., the $C$ and $p_{ij}$ for the different relative spatial locations, simultaneously. As an optimization algorithm, we use stochastic gradient descent with momentum. Both losses succeed in learning the model, but the factor loss is substantially more efficient. We discuss the distinction between the two losses and further details of the optimization in the supplementary materials. 2.2 SEGMENTATION INFERENCE Computing the probability for any individual pair of locations $(i, j)$ to be connected, i.e., computing $p(w_{ij} = 1 \mid \mathbf{f})$, depends only on the two connected feature vectors $f_i$ and $f_j$:

$$\frac{p(w_{ij} = 1 \mid \mathbf{f})}{p(w_{ij} = 0 \mid \mathbf{f})} = \frac{p_{ij}}{1 - p_{ij}} \cdot \frac{Z(w_{ij} = 0, C_{ij})}{Z(w_{ij} = 1, C_{ij})} \cdot \exp\left(-\frac{1}{2} (f_i - f_j)^T C_{ij} (f_i - f_j)\right) \quad (12)$$

This inference effectively yields a connectivity measure for each pair of neighboring locations, i.e., a sparse connectivity matrix. Given that we did not apply any prior information enforcing continuous objects or contours, the inferred $w_{ij}$ do not necessarily correspond to a valid segmentation or set of contours.
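The following sketch, building on the hypothetical log_pairwise_factor above, illustrates the factor loss for a batch of feature maps, the connectivity log-odds of Eq. (12), and the positivity transform used for contour extraction in the next section. It is a simplification under our own assumptions (one nonnegative neighbor offset at a time, negatives drawn by shuffling images within the batch), not the authors' exact implementation.

```python
def factor_loss(fmap, log_c, dy, dx):
    # Contrastive factor loss (Eqs. 10-11) for one relative neighbor
    # offset (dy, dx); fmap: (B, k, H, W) feature maps normalized to
    # mean 0 and variance 1 per image.
    H, W = fmap.shape[-2:]
    fi = fmap[..., : H - dy, : W - dx].permute(0, 2, 3, 1)  # (B, H', W', k)
    fj = fmap[..., dy:, dx:].permute(0, 2, 3, 1)
    pos = log_pairwise_factor(fi, fj, log_c)           # correct pairs
    perm = torch.randperm(fmap.shape[0])
    neg = log_pairwise_factor(fi, fj[perm], log_c)     # noise pairs
    # maximize factors for correct pairs relative to the noise pairs;
    # by shift invariance, one normalizer serves every position.
    return -pos.sum() + pos.numel() * torch.logsumexp(neg.flatten(), dim=0)

def connectivity_logodds(fi, fj, log_c, logit_p):
    # log-odds of w_ij = 1 following Eq. (12): logit(p_ij), plus
    # log Z(w=0, C)/Z(w=1, C) = -1/2 sum_l log(1 + 2 c_ll) per Eq. (6),
    # plus the log pairwise factor itself.
    log_z_ratio = -0.5 * torch.log1p(2.0 * log_c.exp()).sum()
    return logit_p + log_z_ratio + log_pairwise_factor(fi, fj, log_c)

def to_affinities(logodds):
    # Preprocessing for the spectral globalization described next: shift
    # one image's log-odds by its 30% quantile and floor the result at
    # 0.01 so that all graph weights are positive.
    q30 = torch.quantile(logodds.flatten(), 0.30)
    return (logodds - q30).clamp(min=0.01)
```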
Finding the best fitting contours or segmentation for given probabilities for the $w$'s is an additional process, which in humans appears to be an attention-dependent serial process (Jeurissen et al., 2016; Self et al., 2019). To evaluate the detected boundaries in computer vision benchmarks, we nonetheless need to convert the connectivity matrix we extracted into a contour image. To do so, we use the spectral-clustering-based globalization method developed by Arbeláez et al. (2011). This method requires that all connection weights between nodes are positive. To achieve this, we transform the log-probability ratios for the $w_{ij}$ as follows: For each image, we find the 30% quantile of the values, subtract it from all log-probability ratios, and set all values below 0.01 to 0.01. We then compute the smallest eigenvectors of the graph Laplacian as in graph spectral clustering. These eigenvectors are then transformed back into image space and are filtered with simple edge detectors to find the final contours. 3 EVALUATION We implement three model types with feature extraction of increasing complexity in PyTorch (Paszke et al., 2019): Pixel value model. For illustrative purposes, we first apply our ideas to the RGB pixel values of an image as features. This provides us with an example where we can easily show the feature values and connections. Additionally, this model provides an easy benchmark for all evaluations. Linear model. As the simplest kind of model that allows learning features, we use a single convolutional deep neural network layer as our feature model. Here, we use 50 linear features of size 11 × 11. Predseg1: To show that our methods work for more complex architectures with non-linearities, we use a relatively small deep neural network with 4 layers (2 convolutional layers and 2 residual blocks with subsampling layers between them, see supplement for details). For each of these architectures, we train 24 different networks with all combinations of the following settings: 4 different sizes of neighborhoods (4, 8, 12, or 20 neighbors, see Fig. 1D); 3 different noise levels (0, 0.1, 0.2); and the two learning objectives. As a training set, we used the unlabeled image set from MS COCO (Lin et al., 2015), which contains 123,404 color images with varying resolution. To enable batch processing, we randomly crop these images to 256 × 256 pixel resolution, but use no other data augmentation (see supplementary information for further training details). We want to evaluate whether our models learn human-like features and segmentations. To do so, we first analyze the features in the first layers of our networks, where we can judge whether features are representative of biological visual systems. In particular, we extract segmentations from our activations and evaluate those on the Berkeley Segmentation Dataset (Arbeláez et al., 2011, BSDS500). 3.1 LEARNED FEATURES Linear Model We first analyze the weights in our linear models (Fig 2 A-C). All instances learn local averages and Gabor-like striped features, i.e., spatial frequency and orientation tuned features with limited spatial extent. These features clearly resemble receptive fields of neurons in primary visual cortex. Additionally, there appears to be some preference for features that weight the red and green color channels much more strongly than the blue channel, similar to the human luminance channel, which leads to the yellow-blue contrasts in the plots. There is some difference between the two learning objectives though.
The position-based loss generally leads to lower-frequency and somewhat noisier features. This could either be due to the higher learning efficiency of the factor-based loss, i.e., the factor-based loss is closer to convergence, or due to a genuinely different optimization goal. Predseg1 In Predseg1, we first analyze the layer 0 convolution (Fig. 2D), which has only 3 channels with 3 × 3 receptive fields and which we originally introduced as a learnable downsampling. This layer consistently converges to applying near-constant weights over space. Additionally, exactly one of the channels has a non-zero mean (the 3rd, 1st, and 3rd in Fig. 2D) and the other two take balanced differences between two of the channels (red vs. green and green vs. blue in the examples). This parallels the luminance and opponent color channels of human visual perception. In the second convolution, we observe a similar pattern of oriented filters and local averages as in the linear model, albeit in false color as the input channels are rotated by the weighting of the layer 0 convolution (Fig. 2 E & F). 3.2 CONTOUR EXTRACTION To evaluate whether the connectivity information extracted by our model corresponds to human perceived segmentation, we extract contours from our models and compare them to contours reported by humans for the Berkeley Segmentation Database (Arbeláez et al., 2011; Martin et al., 2001). This database contains human-drawn object boundaries for 500 natural images and is accompanied by methods for evaluating segmentation models. Using the methods provided with the database, we compute precision-recall curves for each model and use the best F-value (harmonic mean of precision and recall) as the final evaluation metric. As we had multiple models to choose from, we chose the models from each class that perform best on the training data for our reports. For all models this was one of the models with the largest neighborhood, i.e., using 20 neighbors, and the factor loss. It seems the factor loss performed better simply due to its technical efficiency advantage, as discussed above. Performance increases monotonically with neighborhood size, and Markov random field based approaches to semantic segmentation also increased their performance with larger neighborhoods, up to fully connected Markov random fields (Krähenbühl & Koltun, 2012; Chen et al., 2014; 2017). We thus expect that larger neighborhoods could work even better. Qualitatively, we observe that all our models yield sensible contour maps (see Fig. 3 A). Additionally, we note that the linear model and Layer 1 of the predseg model tend to produce double contours, i.e., they tend to produce two contours on either side of the contour reported by human subjects, with some area between them connected to neither side of the contour. Quantitatively, our models also perform well except for the deeper layers of Predseg 1 (Fig. 3B and Table 1). The other models beat most hand-crafted contour detection algorithms that were tested on this benchmark (Canny, 1986; Comaniciu & Meer, 2002; Cour et al., 2005; Felzenszwalb & Huttenlocher, 2004) and perform close to the gPb-owt-ucm contour detection and segmentation algorithm (Arbeláez et al., 2011) that was the state of the art at the time. Layer 0 of Predseg 1 performs best, followed by the linear feature model and finally the pixel value model. Interestingly, the best performing models seem to be mostly the local averaging models (cf. Fig. 2 C).
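For reference, the summary statistic described above can be sketched as follows. The official BSDS benchmark additionally matches boundary pixels between predictions and human annotations, which this hypothetical snippet glosses over.

```python
def best_f_value(precisions, recalls):
    # F = 2PR / (P + R), the harmonic mean of precision and recall;
    # the benchmark reports the maximum F along the precision-recall
    # curve, i.e., over the binarization thresholds of the contour map.
    best = 0.0
    for p, r in zip(precisions, recalls):
        if p + r > 0:
            best = max(best, 2.0 * p * r / (p + r))
    return best
```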
In particular, the high performance of the first layer of Predseg 1 is surprising, because it uses only 3 × 3 pixel local color averages as features. Since the advent of deep neural network models, networks trained to optimize performance on image segmentation have reached much higher performance on the BSDS500 benchmark, essentially reaching perfect performance up to human inconsistency (e.g. He et al., 2019; Kokkinos, 2016; Linsley et al., 2020; Liu et al., 2017; Shen et al., 2015; Su et al., 2021; Xie & Tu, 2015; see Table 1). However, these models all require direct training on human-reported contours and often use features learned for other tasks. There are also a few deep neural network models that attempt unsupervised segmentation (e.g. Chen et al., 2019; Lin et al., 2021; Xia & Kulis, 2017), but we were unable to find any that were evaluated on the contour task of BSDS500. The closest is perhaps the W-net (Xia & Kulis, 2017), which used an autoencoder structure with additional constraints and was evaluated on the segmentation task on BSDS500, performing slightly better than gPb-owt-ucm. 4 DISCUSSION We present a model that can learn features and local segmentation information from images without further supervision signals. This model integrates the prediction task used for feature learning and the segmentation task into the same coherent probabilistic framework. This framework and the dual use of the connectivity information make it seem sensible to represent this information. Furthermore, the features learned by our models resemble receptive fields in the retina and primary visual cortex, and the contours we extract from connectivity information match contours drawn by human subjects fairly well, both without any training towards making them more human-like. To improve biological plausibility, all computations in our model are local and all units are connected to the same small, local set of other units throughout learning and inference, which matches early visual cortex, in which the lateral connections that follow natural image statistics are implemented anatomically (Buzás et al., 2006; Hunt et al., 2011; Roelfsema et al., 1998; Stettler et al., 2002). This is in contrast to other ideas that require flexible pointers to arbitrary locations and features (as discussed by Shadlen & Movshon, 1999) or capsules that flexibly encode different parts of the input (Doerig et al., 2020; Kosiorek et al., 2019; Sabour et al., 2017; 2021). Nonetheless, we employ contrastive learning objectives and backpropagation here, for which we do not provide biologically plausible implementations. However, there is currently active research towards biologically plausible alternatives to these algorithms (e.g. Illing et al., 2021; Xiong et al., 2020). Selecting the neurons that react to a specific object appears to rely on some central resource (Treisman, 1996; Treisman & Gelade, 1980) and to spread gradually through the feature maps (Jeurissen et al., 2013; 2016; Self et al., 2019). We used a computer vision algorithm for this step, which centrally computes the eigenvectors of the connectivity graph Laplacian (Arbeláez et al., 2011), which does not immediately look biologically plausible. However, a recent theory for hippocampal place and grid cells suggests that these cells compute the same eigenvectors of a graph Laplacian, albeit of a successor representation (Stachenfeld et al., 2014; 2017). Thus, this might be an abstract description of an operation brains are capable of.
In particular, earlier accounts that model the selection as a marker that spreads to related locations (e.g. Finger & König, 2014; Roelfsema, 2006; Singer & Gray, 1995) have some similarities with iterative algorithms to compute eigenvectors. Originally, phase coherence was proposed as a marker (Finger & König, 2014; Peter et al., 2019; Singer & Gray, 1995), but a simple gain increase within attended objects (Roelfsema, 2006) and a random gain modulation (Haimerl et al., 2019; 2021) were also proposed. Regardless of the mechanistic implementation of the marker, connectivity information of the type our model extracts would be extremely helpful to explain the gradual spread of object selection. Our implementation of the model is not fully optimized, as it is meant as a proof of concept. In particular, we did not optimize the architectures or training parameters of our networks for the task, like initialization, optimization algorithm, learning rate, or regularization. Presumably, better performance in all benchmarks could be reached by adjusting any or all of these parameters. One possible next step for our model would be to train deeper architectures, such that the features could be used for complex tasks like object detection and classification. Contrastive losses like the one we use here are successfully applied for pretraining for large-scale tasks such as ImageNet (Russakovsky et al., 2015) or MS COCO (Lin et al., 2015). These large-scale applications often require modifications for better learning (Chen et al., 2020; Feichtenhofer et al., 2021; Grill et al., 2020; He et al., 2020; Hénaff et al., 2020; van den Oord et al., 2019), for example: image augmentations to explicitly train networks to be invariant to some image changes, prediction heads that allow more complex distributions for the predictions, and memory banks or other methods to decrease the reliance on many negative samples. For understanding human vision, this line of reasoning opens the exciting possibility that higher visual cortex could be explained based on similar principles, as representations from contrastive learning also yield high predictive power for these cortices (Zhuang et al., 2021). The model we propose here is a probabilistic model of the feature maps. Based on this model, we could also infer the feature values. Thus, our model implies a pattern for how neurons should combine their bottom-up inputs with predictions from nearby other neurons, once we include some uncertainty for the bottom-up inputs. In particular, the combination ought to take into account which nearby neurons react to the same object and which ones do not. Investigating this pooling could provide insights and predictions for phenomena that are related to local averaging. Crowding, for example (Balas et al., 2009; Freeman & Simoncelli, 2011; Herzog et al., 2015; Wallis et al., 2016; 2017; 2019), is currently captured best by summary statistic models (Balas et al., 2009; Freeman & Simoncelli, 2011; Wallis et al., 2017), but deviations from these predictions suggest that object boundaries change processing (Herzog et al., 2015; Wallis et al., 2016; 2019). Another promising extension of our model would be processing over time, because predictions over time were found to be a potent signal for contrastive learning (Feichtenhofer et al., 2021) and because coherent object motion is among the strongest grouping signals for human observers (Köhler, 1967) and computer vision systems (Yang et al., 2021).
Besides the substantial increase in processing capacity necessary to move to video processing instead of image processing, this step would require some extension of our framework to include object motion into the prediction. Nonetheless, including processing over time seems to be an interesting avenue for future research, especially because segmentation annotations for video are extremely expensive to collect, such that unsupervised learning is particularly advantageous and popular in recent approaches (Araslanov et al., 2021; Jabri et al., 2020; Lai et al., 2020). A SUPPLEMENTARY MATERIAL: TRAINING DETAILS We trained 24 networks of each of the three types. The versions differed in the size of the neighborhood (4, 8, 12, or 20 neighbors), the amount of noise added (α ∈ {0, 0.1, 0.2}), and the used loss (position or factor loss). The parameters we trained were:
• all weights of the underlying network,
• the logit transform of p for each relative position of two neighbors, and
• the logarithms of the diagonal entries of C for each relative position of neighbors.
We trained models using the standard stochastic gradient descent implemented in PyTorch (Paszke et al., 2019) with a learning rate of 0.001, a momentum of 0.9, and a slight weight decay of 0.0001. To speed up convergence, we increased the learning rate by a factor of 10 for the parameters of the prediction, i.e., C and p. For the gradient accumulation for the position-based loss, we accumulate 5 repetitions for the pixel model and 10 for the linear model and for Predseg1. Each repetition contained 10 random negative locations. Batch size was set to fit onto the smaller GPU type used in our local cluster; the resulting sizes are listed in Table 2. A.1 ARCHITECTURE DETAILS The pixel model was implemented as a single Identity layer. The linear model was implemented as a single 50 × 11 × 11 convolutional layer. The Predseg1 model was implemented as a sequential model with 4 processing steps separated by subsampling layers (1 × 1 convolutional layers with a stride > 1). The first processing step was a 3 × 3 convolutional layer with 3 channels followed by subsampling by a factor of 3. The second step was an 11 × 11 convolutional layer with 64 features followed by subsampling by a factor of 2. The third and fourth steps were residual processing blocks, i.e., two convolutional layers with a rectified linear unit non-linearity between them, whose results were added to the inputs. They had 128 and 256 features respectively and were separated by another subsampling by a factor of 2. A.2 ADDED NOISE To prevent individual feature dimensions from becoming perfectly predictive, we added a small amount of Gaussian noise to the feature maps before applying the loss. To yield variables with mean 0 and variance 1 after adding the noise, we implemented this step as:

$$f_{\text{noise}} = \sqrt{1 - \alpha^2} \, f + \alpha \epsilon, \quad (13)$$

where $\alpha \in [0, 1]$ controls the noise variance and $\epsilon$ is a standard normal random variable. Adding this noise did not change any of our results substantially, and the three versions with different amounts of noise (α = 0, 0.1, or 0.2) performed within 1-2% of each other on all performance metrics. A.3 TRAINING DURATION Networks were trained in training jobs that were limited to either 48 hours of computation time or 10 epochs of training. As listed in Table 2, we used a single such job for the pixel models, 7 for the linear models, and 9 for the Predseg1 models. Most larger networks were limited by the 48-hour limit, not by the epoch limit.
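A sketch of the noise injection of Eq. (13) and of the optimizer setup with the 10x learning rate for the prediction parameters might look as follows; this is our paraphrase of the training details above, with hypothetical names.

```python
import torch

def add_noise(f, alpha):
    # Eq. (13): f_noise = sqrt(1 - alpha^2) * f + alpha * eps keeps the
    # feature maps at mean 0 and variance 1 after the noise is added.
    return (1.0 - alpha ** 2) ** 0.5 * f + alpha * torch.randn_like(f)

def make_optimizer(net, mrf_params):
    # SGD with momentum 0.9 and weight decay 1e-4; the Markov random
    # field parameters (log C and logit p, passed as mrf_params) get a
    # 10x higher learning rate than the network weights.
    return torch.optim.SGD(
        [{"params": net.parameters()},
         {"params": mrf_params, "lr": 1e-2}],
        lr=1e-3, momentum=0.9, weight_decay=1e-4)
```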
A.4 USED COMPUTATIONAL RESOURCES The vast majority of the computation time was used for training the network parameters. Computing segmentations for the BSDS500 images and evaluating them took only a few hours of pure CPU processing. Networks were trained on an internal cluster using one GPU at a time and 6 CPUs for data loading. We list the training time per epoch in Table 2. If every job had run for the full 48 hours, we would have used (1 + 7 + 9) × 24 × 2 = 816 days of GPU processing time, which is a relatively close upper bound on the time we actually used. A.5 COMPARISON OF THE TWO LOSSES The position loss is consistent with the prediction made by the whole Markov random field, but is relatively inefficient, because the predicted distribution $p(f_i \mid f_j \, \forall j \in N(i))$ and the normalization constants for these conditional distributions are different for every location $i$. Thus, the second term in equation (9) cannot be reused across locations $i$. Instead, we need to compute the second term for each location separately, which requires a similar amount of memory as the whole feature representation for each negative sample $i'$ and each neighbor. To enable a sufficiently large set of negative points $i'$ with the available memory, we compute this loss multiple times with few negative samples and sum the gradients. This trick saves memory, because we can free the memory for the loss computation after each repetition. As the initial computation of the feature maps is the same for all negative samples, we save some computation for this procedure by computing the feature maps only once. To propagate the gradients through this single computation, we add up the gradients of the loss repetitions with regard to the feature maps and then propagate this summed gradient through the feature map computation. This procedure does not save computation time compared to the loss with many negative samples, as we still need to calculate the evaluation for each position and each sample in the normalization set. The factor loss does not lead to a consistent estimation of the MRF model, because the prediction $p(f_i \mid f_j)$ should not be based only on the factor $\psi_{ij}$, but should include indirect effects, as $f_j$ also constrains the other neighbors of $i$. Optimizing each factor separately will thus overaccount for information that could be implemented in two factors. However, the factor loss has the distinct advantage that the same noise evaluations can be used for all positions and images in a minibatch, which enables a much larger number of noise samples and thus much faster learning.
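The gradient accumulation trick for the position loss can be sketched as follows; names are hypothetical, and loss_fn stands for one evaluation of the position loss with a small set of negative locations.

```python
import torch

def accumulated_position_loss_step(net, images, loss_fn, n_reps=10):
    # Compute the feature maps once, accumulate the gradients of several
    # small loss evaluations with respect to the feature maps, and only
    # then propagate the summed gradient through the network once.
    fmap = net(images)
    leaf = fmap.detach().requires_grad_(True)
    grad_sum = torch.zeros_like(leaf)
    total = 0.0
    for _ in range(n_reps):
        loss = loss_fn(leaf)  # few negative samples per repetition
        (grad,) = torch.autograd.grad(loss, leaf)
        grad_sum += grad
        total += float(loss)
    # single backward pass through the feature extraction network
    fmap.backward(grad_sum)
    return total
```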
1. What is the focus and contribution of the paper on unsupervised learning of features and segmentation? 2. What are the strengths of the proposed approach, particularly in terms of combining contrastive learning and Markov random field modeling? 3. What are the weaknesses of the paper regarding its claims and comparisons with other works in unsupervised segmentation? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. What is the significance of evaluating human-like features and segmentations in the context of this research?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper proposes a model that learns both features and segmentation without supervision. The method combines contrastive learning, a Markov random field model for the feature maps, and segmentation. Learning utilizes two losses: a position loss, which optimizes the probability of the feature vector at each location relative to the probability of randomly chosen other feature vectors from different locations and images, and a factor loss, which maximizes factors for the correct feature vectors relative to random pairs of feature vectors sampled from different locations and images. The goal of the research is to evaluate whether it is possible to learn human-like features and segmentations. Strengths And Weaknesses The paper is very well written and scientifically and technically sound. The method includes novelty, and it has been carefully discussed and evaluated. However, the relevance of the paper is left a bit unclear. Unsupervised segmentation methods exist, so why is it relevant to learn the features simultaneously, or was this not the main contribution here? Clarity, Quality, Novelty And Reproducibility The paper is very clear and easy to follow, the method has been reported in detail, and conclusions are drawn from various viewpoints. However, the relevance of the contribution is left quite open: was the main goal to be able to analyze the similarities of computer vision perception with human perception, or was the main point to achieve simultaneous segmentation and feature extraction?
ICLR
Title Unsupervised learning of features and object boundaries from local prediction Abstract The human visual system has to learn both which features to extract from images and how to group locations into (proto-)objects. Those two aspects are usually dealt with separately, although predictability is discussed as a cue for both. To incorporate features and boundaries into the same model, we model a retinotopic visual cortex with a pairwise Markov random field model in which each factor is paired with an additional binary variable, which switches the factor on or off. Using one of two contrastive learning objectives, we can learn both the features and the parameters of the Markov random field factors from images without further supervision signals. The features learned by shallow neural networks based on this loss are local averages, opponent colors, and Gabor-like stripe patterns as observed in early human visual cortices. Furthermore, we can infer connectivity between locations by inferring the switch variables. Contours inferred from this connectivity perform quite well on the Berkeley segmentation database (BSDS500) without any training on contours. Thus, optimizing predictions across space aids both segmentation and feature learning, and models trained this way show similarities to the human visual system. We speculate that retinotopic visual cortex might implement such predictions over space through lateral connections. 1 INTRODUCTION A long-standing question about human vision is how representations initially be based on parallel processing of retinotopic feature maps can represent objects in a useful way. Most research on this topic has focused on computing later object-centered representations from the feature map representations. Psychology and neuroscience identified features that lead to objects being grouped together (Koffka, 1935; Köhler, 1967), established feature integration into coherent objects as a sequential process (Treisman & Gelade, 1980), and developed solutions to the binding problem, i.e. ways how neurons could signal whether they represent parts of the same object (Finger & König, 2014; Peter et al., 2019; Singer & Gray, 1995; Treisman, 1996). In computer vision, researchers also focused on how feature map representations could be turned into segmentations and object masks. Classically, segmentation algorithm were clustering algorithms operating on extracted feature spaces (Arbeláez et al., 2011; Comaniciu & Meer, 2002; Cour et al., 2005; Felzenszwalb & Huttenlocher, 2004; Shi & Malik, 2000), and this approach is still explored with more complex mixture models today (Vacher et al., 2022). Since the advent of deep neural network models, the focus has shifted towards models that directly map to contour maps or semantic segmentation maps (Girshick et al., 2014; He et al., 2019; Kokkinos, 2016; Liu et al., 2017; Shen et al., 2015; Xie & Tu, 2015), as reviewed by Minaee et al. (2021). Diverse findings suggest that processing within the feature maps take object boundaries into account. For example, neurons appear to encode border ownership (Jeurissen et al., 2013; Peter et al., 2019; Self et al., 2019) and to fill in information across surfaces (Komatsu, 2006) and along illusory contours (Grosof et al., 1993; von der Heydt et al., 1984). Also, attention spreading through the feature maps seems to respect object boundaries (Baldauf & Desimone, 2014; Roelfsema et al., 1998). 
And selecting neurons that correspond to an object takes time, which scales with the distance between the points to be compared (Jeurissen et al., 2016; Korjoukov et al., 2012). Finally, a long history of psychophysical studies showed that changes in spatial frequency and orientation content can define (texture) boundaries (e.g. Beck et al., 1987; Landy & Bergen, 1991; Wolfson & Landy, 1995). In both human vision and computer vision, relatively little attention has been given to these effects of grouping or segmentation on the feature maps themselves. Additionally, most theories for grouping and segmentation take the features in the original feature maps as given. In human vision, these features are traditionally chosen by the experimenter (Koffka, 1935; Treisman & Gelade, 1980; Treisman, 1996) or are inferred based on other research (Peter et al., 2019; Self et al., 2019). Similarly, computer vision algorithms used off-the-shelf feature banks originally (Arbeláez et al., 2011; Comaniciu & Meer, 2002; Cour et al., 2005; Felzenszwalb & Huttenlocher, 2004; Shi & Malik, 2000), and have recently moved towards deep neural network representations trained for other tasks as a source for feature maps (Girshick et al., 2014; He et al., 2019; Kokkinos, 2016; Liu et al., 2017; Shen et al., 2015; Xie & Tu, 2015). Interestingly, predictability of visual inputs over space and time has been discussed as a solution for both these limitations of earlier theories. Predictability has been used as a cue for segmentation since the law of common fate of Gestalt psychology (Koffka, 1935), and both lateral interactions in visual cortices and contour integration respect the statistics of natural scenes (Geisler & Perry, 2009; Geisler et al., 2001). Among other signals like sparsity (Olshausen & Field, 1996) or reconstruction (Kingma & Welling, 2014), predictability is also a well known signal for self-supervised learning of features (Wiskott & Sejnowski, 2002), which has been exploited by many recent contrastive learning (e.g. Feichtenhofer et al., 2021; Gutmann & Hyvarinen, 2010; Hénaff et al., 2020; van den Oord et al., 2019) and predictive coding schemes (e.g. Lotter et al., 2017; 2018; van den Oord et al., 2019) for self-supervised learning. However, these uses of predictability for feature learning and for segmentation are usually studied separately. Here, we propose a model that learns both features and segmentation without supervision. Predictions between locations provide a self-supervised loss to learn the features, how to perform the prediction and how to infer which locations should be grouped. Also, this view combines contrastive learning (Gutmann & Hyvarinen, 2010; van den Oord et al., 2019), a Markov random field model for the feature maps (Li, 2012) and segmentation into a coherent framework. We implement our model using some shallow architectures. The learned features resemble early cortical responses and the object boundaries we infer from predictability align well with human object contour reports from the Berkeley segmentation database (BSDS500 (Arbeláez et al., 2011)). Thus, retinotopic visual cortex might implement similar computational principles as we propose here. 2 MODEL To explain our combined model of feature maps and their local segmentation information, we start with a Gaussian Markov random field model (Li, 2012) with pairwise factors. We then add a variable w ∈ {0, 1} to each factor that governs whether the factor enters the product or not. 
This yields a joint distribution for the whole feature map and all w’s. Marginalizing out the w’s yields a Markov random field with "robust" factors for the feature map, which we can use to predict feature vectors from the vectors at neighboring positions. We find two contrastive losses based on these predictions that can be used to optimize the feature extraction and the factors in the Markov random field model. We model the distribution of k-dimensional feature maps f ∈ Rk,m′,n′ that are computed from input images I ∈ Rc,m,n with c = 3 color channels (see Fig. 1 A & B). We use a Markov random field model with pairwise factors, i.e. we define the probability of encountering a feature map f with entries fi at locations i ∈ [1 . . .m′]× [1 . . . n′] as follows: p(f) ∝ ∏ i ψi(fi) ∏ (i,j)∈N ψij(fi, fj), (1) where ψi is the local factor, N is the set of all neighboring pairs, and ψij is the pairwise factor between positions i and j1. We will additionally assume shift invariance, i.e. each point has the same set of nearby relative positions in the map as neighbors, ψi is the same factor for each position, and each factor ψij depends only on the relative position of i and j. 1i and j thus have two entries each We now add a binary variable w ∈ {0, 1} to each pairwise factor that encodes whether the factor is ’active’ (w = 1) for that particular image (Fig. 1 C). To scale the probability of w = 1 and w = 0 relative to each other, we add a factor that scales them with constants pij ∈ [0, 1] and 1− pij respectively: p(f ,w) ∝ ∏ i ψi(fi) ∏ (i,j)∈N p wij ij (1− pij) 1−wijψij(fi, fj) wij (2) Finally, we assume that the factors are Gaussian and the feature vectors are originally normalized to have mean 0 and variance 1: p(f ,w) = 1 Z0 N (f , 0, I) ∏ (i,j)∈N p wij ij (1− pij)1−wij Z(wij , Cij) exp ( −wij 2 (fi − fj)TCij(fi − fj) ) , (3) where Z0 is the overall normalization constant, N(f , 0, I) is the density of a standard normal distribution with k ×m′ × n′ dimensions, Cij governs the strength of the coupling in the form of a precision matrix, which we will assume to be diagonal, and Z(wij , Cij) scales the distributions with wij = 0 and wij = 1 relative to each other. We set Z(wij , Cij) to the normalization constant of the Gaussian with standard Gaussian factors for fi and fj respectively. For w = 0 this is just (2π)−k, the normalization constant of a standard Gaussian in 2k dimensions. For w = 1 we get: Z(wij = 1, Cij) = ∫ ∫ exp ( −1 2 fTi fi − 1 2 fTj fj − 1 2 (fi − fj)TCij(fi − fj) ) dfidfj (4) = (2π)−k det ∣∣∣∣I + Cij CijCij I + Cij ∣∣∣∣ 12 (5) = (2π)−k ∏ l √ 1 + 2cll (6) which we get by computing the normalization constant of a Gaussian with the given precision and then using the assumption that Cij is a diagonal matrix with diagonal entries cll. This normalization depends only on w and the coupling matrix C of the factor ψij and thus induces a valid probability distribution on the feature maps. Two points are notable about this normalization though: First, once other factors also constrain fi and/or fj , this normalization will not guarantee p(wij = 1) = pij . 2 Second, the wij are not independent in the resulting distribution. For example, if pairwise factors connect a to b, b to c and a to c the corresponding w are dependent, because wab = 1 and wbc = 1 already imply a smaller difference between fa and fc than if these factor were inactive, which increases the probability for wac = 1. 
2.1 LEARNING To learn our model from data, we use a contrastive learning objective on the marginal likelihood p(f). To do so, we first need to marginalize out the w’s, which is fortunately simple, because each w affects only a single factor: p(f) = ∑ w p(f ,w) = 1 Z0 N (f , 0, I) ∏ (i,j)∈N [pijψij(fi, fj) + (1− pij)] (7) Using this marginal likelihood directly for fitting is infeasible though, because computing Z0, i.e. normalizing this distribution is not computationally tractable. We resort to contrastive learning to fit the unnormalized probability distribution (Gutmann & Hyvarinen, 2010), i.e. we optimize discrimination from a noise distribution with the same support as the target distribution. Following van den Oord et al. (2019) we do not optimize the Markov random field directly, but optimize predictions based on the model using features from other locations as the noise distribution. For this noise distribution, the factors that depend only on a single location (the first product in (1)) will cancel. We thus ignore the N(f , 0, I) in our optimization and instead normalize the feature maps to mean 0 and unit variance across each image. We define two alternative losses that make predictions for positions based on all their neighbors or for a single factor respectively. 2.1.1 POSITION LOSS The position loss optimizes the probability of the feature vector at each location relative to the probability of randomly chosen other feature vectors from different locations and images: lpos(f) = ∑ i log p(fi|fj∀j ∈ N(i))∑ i′ p(fi′ |fj∀j ∈ N(i)) (8) = ∑ i ∑ j∈N(i) logψij(fi, fj)− ∑ i log ∑ i′ exp ∑ j∈N(i) logψij(fi′ , fj) , (9) where N(i) is the set of neighbors of i. 2Instead, p(wij = 1) will be higher, because other factors increase the precision for the feature vectors, which makes the normalization constants more similar. 2.1.2 FACTOR LOSS The factor loss instead maximizes each individual factor for the correct feature vectors relative to random pairs of feature vectors sampled from different locations and images: lfact = ∑ i,j log ψij(fi, fj)∑ i′,j′ ψij(fi′ , fj′) (10) = ∑ i,j logψij(fi, fj)− ∑ i,j log ∑ i′,j′ ψij(fi′ , fj′), (11) where i, j index the correct locations and i′, j′ index randomly drawn locations, in our implementation generated by shuffling the feature maps and taking all pairs that occur in these shuffled maps. 2.1.3 OPTIMIZATION We optimize all weights of the neural network used for feature extraction and the parameters of the random field, i.e. the C and pij for the different relative spatial locations simultaneously. As an optimization algorithm, we use stochastic gradient descent with momentum. Both losses succeed to learn the model, but the factor loss is substantially more efficient. We discuss the distinction between the two losses and further details of the optimization in the supplementary materials. 2.2 SEGMENTATION INFERENCE Computing the probability for any individual pair of locations (i, j) to be connected, i.e. computing p(wij = 1|f), depends only on the two connected feature vectors fi and fj : p(wij = 1|f) p(wij = 0|f) = pij (1− pij) Z(wij = 0, Cij) Z(wij = 1, Cij) exp ( −(fi − fj)TCij(fi − fj) ) (12) This inference effectively yields a connectivity measure for each pair of neighboring locations, i.e. a sparse connectivity matrix. Given that we did not apply any prior information enforcing continuous objects or contours, the inferred wij do not necessarily correspond to a valid segmentation or set of contours. 
Finding the best fitting contours or segmentation for given probabilities for the ws is an additional process, which in humans appears to be an attention-dependent serial process (Jeurissen et al., 2016; Self et al., 2019). To evaluate the detected boundaries in computer vision benchmarks, we nonetheless need to convert the connectivity matrix we extracted into a contour image. To do so, we use the spectral-clusteringbased globalization method developed by Arbeláez et al. (2011). This method requires that all connection weights between nodes are positive. To achieve this, we transform the log-probability ratios for the wij as follows: For each image, we find the 30% quantile of the values, subtract it from all log-probability ratios, and set all values below 0.01 to 0.01. We then compute the smallest eigenvectors of the graph Laplacian as in graph spectral clustering. These eigenvectors are then transformed back into image space and are filtered with simple edge detectors to find the final contours. 3 EVALUATION We implement 3 model types implementing feature extractions of increasing complexity in PyTorch (Paszke et al., 2019): Pixel value model. For illustrative purposes, we first apply our ideas to the rgb pixel values of an image as features. This provides us with an example, where we can easily show the feature values and connections. Additionally, this model provides an easy benchmark for all evaluations. Linear model. As the simplest kind of model that allows learning features, we use a single convolutional deep neural network layer as our feature model. Here, we use 50 11× 11 linear features. Predseg1: To show that our methods work for more complex architecture with non-linearities, we use a relatively small deep neural network with 4 layers (2 convolutional layers and 2 residual blocks with subsampling layers between them, see supplement for details). For each of these architectures, we train 24 different networks with all combinations of the following settings: 4 different sizes of neighborhoods (4, 8, 12, or 20 neighbors, see Fig. 1D); 3 different noise levels (0, 0.1, 0.2) and the two learning objectives. As a training set, we used the unlabeled image set from MS COCO (Lin et al., 2015), which contains 123,404 color images with varying resolution. To enable batch processing, we randomly crop these images to 256× 256 pixel resolution, but use no other data augmentation (See supplementary information for further training details). We want to evaluate whether our models learn human-like features and segmentations. To do so, we first analyze the features in the first layers of our networks where we can judge whether features are representative of biological visual systems. In particular, we extract segmentations from our activations and evaluate those on the Berkeley Segmentation Dataset (Arbeláez et al., 2011, BSDS500) 3.1 LEARNED FEATURES Linear Model We first analyze the weights in our linear models (Fig 2 A-C). All instances learn local averages and Gabor-like striped features, i.e. spatial frequency and orientation tuned features with limited spatial extend. These features clearly resemble receptive fields of neurons in primary visual cortex. Additionally, there appears to be some preference for features that weight the red and green color channels much stronger than the blue channel, similar to the human luminance channel, which leads to the yellow-blue contrasts in the plots. There is some difference between the two learning objectives though. 
The position based loss generally leads to lower frequency and somewhat noisier features. This could either be due to the higher learning efficiency of the factor based loss, i.e. the factor based loss is closer to convergence, or due to a genuinely different optimization goal. Predseg1 In Predseg1, we first analyze the layer 0 convolution (Fig. 2D), which has only 3 channels with 3× 3 receptive fields, which we originally introduced as a learnable downsampling. This layer consistently converges to applying near constant weights over space. Additionally, exactly one of the channels has a non-zero mean (the 3rd, 1st and 3rd in Fig. 2D) and the other two take balanced differences between two of the channels (red vs green and green vs. blue in the examples). This parallels the luminance and opponent color channels of human visual perception. In the second convolution, we observe a similar pattern of oriented filters and local averages as in the linear model albeit in false color as the input channels are rotated by the weighting of the layer 0 convolution (Fig. 2 E & F). 3.2 CONTOUR EXTRACTION To evaluate whether the connectivity information extracted by our model corresponds to human perceived segmentation, we extract contours from our models and compare them to contours reported by humans for the Berkeley Segmentation Database (Arbeláez et al., 2011; Martin et al., 2001). This database contains human drawn object boundaries for 500 natural images and is accompanied by methods for evaluating segmentation models. Using the methods provided with the database, we compute precision-recall curves for each model and use the best F-value (geometric mean of precision and recall) as the final evaluation metric. As we had multiple models to choose from, we choose the models from each class that perform best on the training data for our reports. For all models this was one of the models with the largest neighborhood, i.e. using 20 neighbors, and the factor loss. It seems the factor loss performed better simply due to its technical efficiency advantage as discussed above. Performance increases monotonically with neighborhood size and Markov random field based approaches to semantic segmentation also increased their performance with larger neighborhoods up to fully connected Markov random fields (Krähenbühl & Koltun, 2012; Chen et al., 2014; 2017). We thus expect that larger neighborhoods could work even better. Qualitatively, we observe that all our models yield sensible contour maps (see Fig. 3 A). Additionally, we note that the linear model and Layer 1 of the predseg model tend to produce double contours, i.e. they tend to produce two contours on either side of the contour reported by human subjects with some area between them connected to neither side of the contour. Quantitatively, our models also perform well except for the deeper layers of Predseg 1 (Fig. 3B and Table 1). The other models beat most hand-crafted contour detection algorithms that were tested on this benchmark (Canny, 1986; Comaniciu & Meer, 2002; Cour et al., 2005; Felzenszwalb & Huttenlocher, 2004) and perform close to the gPb-owt-ucm contour detection and segmentation algorithm (Arbeláez et al., 2011) that was the state of the art at the time. Layer-0 of Predseg 1 performs best followed by the linear feature model and finally the pixel value model. Interestingly, the best performing models seem to be mostly the local averaging models (cf. Fig. 2 C). 
Since the advent of deep neural network models, networks trained to optimize performance on image segmentation have reached much higher performance on the BSDS500 benchmark, essentially reaching perfect performance up to human inconsistency (e.g. He et al., 2019; Kokkinos, 2016; Linsley et al., 2020; Liu et al., 2017; Shen et al., 2015; Su et al., 2021; Xie & Tu, 2015; see Table 1). However, these models all require direct training on human-reported contours and often use features learned for other tasks. There are also a few deep neural network models that attempt unsupervised segmentation (e.g. Chen et al., 2019; Lin et al., 2021; Xia & Kulis, 2017), but we were unable to find any that were evaluated on the contour task of BSDS500. The closest is perhaps the W-net (Xia & Kulis, 2017), which used an autoencoder structure with additional constraints and was evaluated on the segmentation task on BSDS500, performing slightly better than gPb-owt-ucm.

4 DISCUSSION

We present a model that can learn features and local segmentation information from images without further supervision signals. This model integrates the prediction task used for feature learning and the segmentation task into the same coherent probabilistic framework. This framework and the dual use of the connectivity information make it seem sensible to represent this information. Furthermore, the features learned by our models resemble receptive fields in the retina and primary visual cortex, and the contours we extract from connectivity information match contours drawn by human subjects fairly well, both without any training towards making them more human-like.

To improve biological plausibility, all computations in our model are local, and all units are connected to the same small, local set of other units throughout learning and inference. This matches early visual cortex, in which the lateral connections that follow natural image statistics are implemented anatomically (Buzás et al., 2006; Hunt et al., 2011; Roelfsema et al., 1998; Stettler et al., 2002). This is in contrast to other ideas that require flexible pointers to arbitrary locations and features (as discussed by Shadlen & Movshon, 1999) or capsules that flexibly encode different parts of the input (Doerig et al., 2020; Kosiorek et al., 2019; Sabour et al., 2017; 2021). Nonetheless, we employ contrastive learning objectives and backpropagation here, for which we do not provide biologically plausible implementations. However, there is currently active research towards biologically plausible alternatives to these algorithms (e.g. Illing et al., 2021; Xiong et al., 2020).

Selecting the neurons that react to a specific object appears to rely on some central resource (Treisman, 1996; Treisman & Gelade, 1980) and to spread gradually through the feature maps (Jeurissen et al., 2013; 2016; Self et al., 2019). We used a computer vision algorithm for this step, which centrally computes the eigenvectors of the connectivity graph Laplacian (Arbeláez et al., 2011) and does not immediately look biologically plausible. However, a recent theory for hippocampal place and grid cells suggests that these cells compute the same eigenvectors of a graph Laplacian, albeit of a successor representation (Stachenfeld et al., 2014; 2017). Thus, this might be an abstract description of an operation brains are capable of.
In particular, earlier accounts that model the selection as a marker that spreads to related locations (e.g. Finger & König, 2014; Roelfsema, 2006; Singer & Gray, 1995) have some similarities with iterative algorithms to compute eigenvectors. Originally, phase coherence was proposed as a marker (Finger & König, 2014; Peter et al., 2019; Singer & Gray, 1995), but a simple gain increase within attended objects (Roelfsema, 2006) and a random gain modulation (Haimerl et al., 2019; 2021) were also proposed. Regardless of the mechanistic implementation of the marker, connectivity information of the type our model extracts would be extremely helpful to explain the gradual spread of object selection.

Our implementation of the model is not fully optimized, as it is meant as a proof of concept. In particular, we did not optimize the architectures or training parameters of our networks for the task, such as initialization, optimization algorithm, learning rate, or regularization. Presumably, better performance in all benchmarks could be reached by adjusting any or all of these parameters.

One possible next step for our model would be to train deeper architectures, such that the features could be used for complex tasks like object detection and classification. Contrastive losses like the one we use here are successfully applied for pretraining for large-scale tasks such as ImageNet (Russakovsky et al., 2015) or MS COCO (Lin et al., 2015). These large-scale applications often require modifications for better learning (Chen et al., 2020; Feichtenhofer et al., 2021; Grill et al., 2020; He et al., 2020; Hénaff et al., 2020; van den Oord et al., 2019), for example: image augmentations to explicitly train networks to be invariant to some image changes, prediction heads that allow more complex distributions for the predictions, and memory banks or other methods to decrease the reliance on many negative samples. For understanding human vision, this line of reasoning opens the exciting possibility that higher visual cortex could be explained based on similar principles, as representations from contrastive learning also yield high predictive power for these cortices (Zhuang et al., 2021).

The model we propose here is a probabilistic model of the feature maps. Based on this model, we could also infer the feature values. Thus, our model implies a pattern for how neurons should combine their bottom-up inputs with predictions from nearby neurons, once we include some uncertainty for the bottom-up inputs. In particular, the combination ought to take into account which nearby neurons react to the same object and which ones do not. Investigating this pooling could provide insights and predictions for phenomena that are related to local averaging. Crowding, for example (Balas et al., 2009; Freeman & Simoncelli, 2011; Herzog et al., 2015; Wallis et al., 2016; 2017; 2019), is currently captured best by summary statistic models (Balas et al., 2009; Freeman & Simoncelli, 2011; Wallis et al., 2017), but deviations from these predictions suggest that object boundaries change processing (Herzog et al., 2015; Wallis et al., 2016; 2019).

Another promising extension of our model would be processing over time, because predictions over time were found to be a potent signal for contrastive learning (Feichtenhofer et al., 2021) and because coherent object motion is among the strongest grouping signals for human observers (Köhler, 1967) and computer vision systems (Yang et al., 2021).
Besides the substantial increase in processing capacity necessary to move from image to video processing, this step would require some extension of our framework to include object motion in the prediction. Nonetheless, including processing over time seems to be an interesting avenue for future research, especially because segmentation annotations for video are extremely expensive to collect, such that unsupervised learning is particularly advantageous and popular in recent approaches (Araslanov et al., 2021; Jabri et al., 2020; Lai et al., 2020).

A SUPPLEMENTARY MATERIAL: TRAINING DETAILS

We trained 24 networks of each of the three types. The versions differed in the size of the neighborhood (4, 8, 12, or 20 neighbors), the amount of noise added (α ∈ {0, 0.1, 0.2}), and the loss used (position or factor loss). The parameters we trained were:

• all weights of the underlying network
• the logit transform of p for each relative position of two neighbors
• the logarithms of the diagonal entries of C for each relative position of neighbors

We trained models using the standard stochastic gradient descent implemented in PyTorch (Paszke et al., 2019) with a learning rate of 0.001, a momentum of 0.9, and a slight weight decay of 0.0001. To speed up convergence, we increased the learning rate by a factor of 10 for the parameters of the prediction, i.e. C and p. For the gradient accumulation for the position-based loss, we accumulate 5 repetitions for the pixel model and 10 for the linear model and for Predseg1. Each repetition contained 10 random negative locations. Batch size was set to fit onto the smaller GPU type used in our local cluster. The resulting sizes are listed in Table 2 (see the code sketch below for a summary of this setup).

A.1 ARCHITECTURE DETAILS

The pixel model was implemented as a single identity layer. The linear model was implemented as a single convolutional layer with 50 features of size 11 × 11. The Predseg1 model was implemented as a sequential model with 4 processing steps separated by subsampling layers (1 × 1 convolutional layers with a stride > 1). The first processing step was a 3 × 3 convolutional layer with 3 channels followed by subsampling by a factor of 3. The second step was an 11 × 11 convolutional layer with 64 features followed by subsampling by a factor of 2. The third and fourth steps were residual processing blocks, i.e. two convolutional layers with a rectified linear unit non-linearity between them whose results were added to the inputs. They had 128 and 256 features, respectively, and were separated by another subsampling by a factor of 2.

A.2 ADDED NOISE

To prevent individual feature dimensions from becoming perfectly predictive, we added a small amount of Gaussian noise to the feature maps before applying the loss. To yield variables with mean 0 and variance 1 after adding the noise, we implemented this step as:

fnoise = √(1 − α²) f + αϵ (13)

where α ∈ [0, 1] controls the noise variance and ϵ is a standard normal random variable. Adding this noise did not change any of our results substantially, and the three versions with different amounts of noise (α = 0, 0.1, or 0.2) performed within 1-2% of each other in all performance metrics.

A.3 TRAINING DURATION

Networks were trained in training jobs that were limited to either 48 hours of computation time or 10 epochs of training. As listed in Table 2, we used a single such job for the pixel models, 7 for the linear models, and 9 for the Predseg1 models. Most larger networks were limited by the 48-hour limit, not by the epoch limit.
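To make the configuration above concrete, here is a minimal PyTorch sketch combining the Predseg1 architecture (Section A.1), the noise injection of equation (13) (Section A.2), and the optimizer settings. Where exactly the channel counts change across the subsampling layers, the residual kernel sizes, and how the MRF parameters are stored are our assumptions; all module and variable names are illustrative, not taken from the original code.

```python
# Sketch of the training setup described in the supplement. Assumes the MRF
# parameters are stored as one logit(p) and one diagonal log-precision per
# neighbor offset. Names and some layer details are our own assumptions.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two convolutions with a ReLU in between, added to the input.
    The 3x3 kernel size is our assumption."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(torch.relu(self.conv1(x)))

# Predseg1: four processing steps separated by strided 1x1 subsampling layers.
predseg1 = nn.Sequential(
    nn.Conv2d(3, 3, 3, padding=1),        # step 1: 3x3 conv, 3 channels
    nn.Conv2d(3, 3, 1, stride=3),         # subsample by 3
    nn.Conv2d(3, 64, 11, padding=5),      # step 2: 11x11 conv, 64 features
    nn.Conv2d(64, 128, 1, stride=2),      # subsample by 2 (channel change assumed)
    ResBlock(128),                        # step 3: residual block, 128 features
    nn.Conv2d(128, 256, 1, stride=2),     # subsample by 2 (channel change assumed)
    ResBlock(256),                        # step 4: residual block, 256 features
)

# MRF parameters: one logit(p) and one diagonal of C per neighbor offset
# (here 20 neighbors and k = 256 feature dimensions, matching the last block).
n_neighbors, k = 20, 256
logit_p = nn.Parameter(torch.zeros(n_neighbors))
log_c = nn.Parameter(torch.zeros(n_neighbors, k))

# SGD with momentum and slight weight decay; 10x learning rate for C and p.
opt = torch.optim.SGD(
    [{"params": predseg1.parameters(), "lr": 1e-3},
     {"params": [logit_p, log_c], "lr": 1e-2}],
    momentum=0.9, weight_decay=1e-4,
)

def add_noise(f, alpha=0.1):
    """Eq. (13): inject Gaussian noise while keeping zero mean, unit variance."""
    return (1 - alpha ** 2) ** 0.5 * f + alpha * torch.randn_like(f)
```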
A.4 USED COMPUTATIONAL RESOURCES

The vast majority of the computation time was used for training the network parameters. Computing segmentations for the BSDS500 images and evaluating them took only a few hours of pure CPU processing. Networks were trained on an internal cluster using one GPU at a time and 6 CPUs for data loading. We list the training time per epoch in Table 2. If every job had run for the full 48 hours, we would have used (1 + 7 + 9) jobs per network × 24 networks × 2 days per job = 816 days of GPU processing time, which is a relatively close upper bound on the time we actually used.

A.5 COMPARISON OF THE TWO LOSSES

The position loss is consistent with the prediction made by the whole Markov random field, but it is relatively inefficient, because the predicted distribution p(fi | fj ∀j ∈ N(i)) and the normalization constants for these conditional distributions are different for every location i. Thus, the second term in equation (9) cannot be reused across the locations i. Instead, we need to compute the second term for each location separately, which requires a similar amount of memory as the whole feature representation for each negative sample i′ and each neighbor. To enable a sufficiently large set of negative points i′ with the available memory, we compute this loss multiple times with few negative samples and sum the gradients. This trick saves memory, because we can free the memory for the loss computation after each repetition. As the initial computation of the feature maps is the same for all negative samples, we save some computation by computing the feature maps only once. To propagate the gradients through this single computation, we add up the gradients of the loss repetitions with respect to the feature maps and then propagate this summed gradient through the feature map computation, as sketched in the code example below. This procedure does not save computation time compared to the loss with many negative samples, as we still need to calculate the evaluation for each position and each sample in the normalization set.

The factor loss does not lead to a consistent estimation of the MRF model, because the prediction p(fi|fj) should not be based only on the factor ψij but should also include indirect effects, as fj also constrains the other neighbors of i. Optimizing each factor separately will thus over-account for information that could be implemented in two factors. However, the factor loss has the distinct advantage that the same noise evaluations can be used for all positions and images in a minibatch, which enables a much larger number of noise samples and thus much faster learning.
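A minimal sketch of this memory-saving accumulation scheme follows, assuming a stand-in `position_loss` function that evaluates equation (9) for a small set of negative samples; the function name, its signature, and the repetition counts are illustrative rather than taken from the original code.

```python
# Gradient accumulation for the position loss: compute the feature maps once,
# evaluate the loss repeatedly on small batches of negatives, sum the
# gradients w.r.t. the feature maps, and backpropagate through the feature
# extractor only once at the end.
import torch

def accumulate_position_loss(net, images, position_loss, reps=10, n_neg=10):
    feats = net(images)                         # single forward pass
    feats_leaf = feats.detach().requires_grad_(True)
    grad_sum = torch.zeros_like(feats_leaf)
    total = 0.0
    for _ in range(reps):
        loss = position_loss(feats_leaf, n_negatives=n_neg)  # few negatives
        (g,) = torch.autograd.grad(loss, feats_leaf)  # loss graph freed here
        grad_sum += g
        total += float(loss)
    feats.backward(grad_sum)   # one backward pass through the feature network
    return total / reps
```

After this function returns, the optimizer step would proceed as usual; the savings come purely from never holding more than one small loss graph in memory at a time.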
1. What is the focus and contribution of the paper regarding Markov random field modeling?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its performance and comparisons with other works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Do you have any concerns about the significance of the proposed method, particularly in learning features and connectivity?
5. Are there any questions regarding the paper's experimental validation and comparisons with other deep neural networks for unsupervised segmentation?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

This paper proposes a Markov random field model that learns both features and connectivity (whether two locations belong to the same segment) in an unsupervised manner using contrastive learning. The trained model is able to extract contours and gives good performance on the Berkeley segmentation database.

Strengths And Weaknesses

Strengths:
• Empirical performance: the unsupervised method for contour extraction gives good performance on the Berkeley database, comparable to or better than hand-crafted methods.

Weaknesses:
• Extent of empirical validation: only studied on one benchmark, and did not compare against existing deep neural networks for unsupervised segmentation (as opposed to contour extraction); it would have been nice to get some sense of a comparison, even if some simplifying assumptions had to be made.
• It's strange to me that the pixel-based model performs well relative to the other proposed models, and also that the best performing model seems to be Predseg1-Layer 0, which is, as noted, only using 3x3 local color averages as features. This seems to suggest that the model is good at connectivity/contour extraction, but not necessarily at learning good features.

Clarity, Quality, Novelty And Reproducibility

The main novelty of the proposed method seems to be in the joint unsupervised learning of feature maps and local segmentation information. However, a counter-point to the usefulness of this is the fact that the best empirical results on the contour extraction task, as noted above, either are just using the pixel rgb values as features or doing simple 3x3 averaging. This seems to suggest that, within this model, learning non-trivial/complex feature maps is not needed or is in some way detrimental to learning the segmentation.
ICLR
1. What is the main contribution of the paper regarding image feature learning and segmentation?
2. What are the strengths and weaknesses of the proposed pairwise Markov random field model?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What is the significance of the features learned by the shallow neural networks, and how do they relate to early visual cortex?
5. Can the proposed method outperform current deep-learning based methods for image feature extraction and segmentation?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper

The authors proposed a pairwise Markov random field model to learn both features and segmentation from images without further supervision signals. They showed that the features learned by the shallow neural networks based on the contrastive learning loss are local averages, opponent colors, and Gabor-like stripe patterns as observed in early visual cortex. They can also infer connectivity between locations by inferring the switch variables, and contours inferred from such connectivity perform well on a benchmark.

Strengths And Weaknesses

Strengths: The idea behind the work is interesting, and the authors did a lot of careful empirical study to verify it.

Weaknesses:
1. The introduction is not convincing enough. It is good to do a broad review of visual processing, but it is hard to get the point of what the authors want to claim in the introduction (except the last paragraph). A lot of the material in the introduction seems unnecessary. It would be better to say more about the model in the introduction.
2. It is unclear how the proposed model contributes to the computer vision community. Can this method outperform current deep-learning based methods for image feature extraction and segmentation?
3. Many of the implementation details could be moved to the Appendix.
4. The authors mention "human-like" features at several places in the paper. However, what are human-like features? Gabor-like patterns were first found in cats' visual cortex. It is not enough to claim the features are human-like from Fig. 2, Fig. 3A, and Table 1.

Clarity, Quality, Novelty And Reproducibility

The writing is not clear, and some of the modeling parts seem to have been studied previously.
ICLR
Title Unsupervised learning of features and object boundaries from local prediction Abstract The human visual system has to learn both which features to extract from images and how to group locations into (proto-)objects. Those two aspects are usually dealt with separately, although predictability is discussed as a cue for both. To incorporate features and boundaries into the same model, we model a retinotopic visual cortex with a pairwise Markov random field model in which each factor is paired with an additional binary variable, which switches the factor on or off. Using one of two contrastive learning objectives, we can learn both the features and the parameters of the Markov random field factors from images without further supervision signals. The features learned by shallow neural networks based on this loss are local averages, opponent colors, and Gabor-like stripe patterns as observed in early human visual cortices. Furthermore, we can infer connectivity between locations by inferring the switch variables. Contours inferred from this connectivity perform quite well on the Berkeley segmentation database (BSDS500) without any training on contours. Thus, optimizing predictions across space aids both segmentation and feature learning, and models trained this way show similarities to the human visual system. We speculate that retinotopic visual cortex might implement such predictions over space through lateral connections. 1 INTRODUCTION A long-standing question about human vision is how representations initially be based on parallel processing of retinotopic feature maps can represent objects in a useful way. Most research on this topic has focused on computing later object-centered representations from the feature map representations. Psychology and neuroscience identified features that lead to objects being grouped together (Koffka, 1935; Köhler, 1967), established feature integration into coherent objects as a sequential process (Treisman & Gelade, 1980), and developed solutions to the binding problem, i.e. ways how neurons could signal whether they represent parts of the same object (Finger & König, 2014; Peter et al., 2019; Singer & Gray, 1995; Treisman, 1996). In computer vision, researchers also focused on how feature map representations could be turned into segmentations and object masks. Classically, segmentation algorithm were clustering algorithms operating on extracted feature spaces (Arbeláez et al., 2011; Comaniciu & Meer, 2002; Cour et al., 2005; Felzenszwalb & Huttenlocher, 2004; Shi & Malik, 2000), and this approach is still explored with more complex mixture models today (Vacher et al., 2022). Since the advent of deep neural network models, the focus has shifted towards models that directly map to contour maps or semantic segmentation maps (Girshick et al., 2014; He et al., 2019; Kokkinos, 2016; Liu et al., 2017; Shen et al., 2015; Xie & Tu, 2015), as reviewed by Minaee et al. (2021). Diverse findings suggest that processing within the feature maps take object boundaries into account. For example, neurons appear to encode border ownership (Jeurissen et al., 2013; Peter et al., 2019; Self et al., 2019) and to fill in information across surfaces (Komatsu, 2006) and along illusory contours (Grosof et al., 1993; von der Heydt et al., 1984). Also, attention spreading through the feature maps seems to respect object boundaries (Baldauf & Desimone, 2014; Roelfsema et al., 1998). 
And selecting neurons that correspond to an object takes time, which scales with the distance between the points to be compared (Jeurissen et al., 2016; Korjoukov et al., 2012). Finally, a long history of psychophysical studies showed that changes in spatial frequency and orientation content can define (texture) boundaries (e.g. Beck et al., 1987; Landy & Bergen, 1991; Wolfson & Landy, 1995). In both human vision and computer vision, relatively little attention has been given to these effects of grouping or segmentation on the feature maps themselves. Additionally, most theories for grouping and segmentation take the features in the original feature maps as given. In human vision, these features are traditionally chosen by the experimenter (Koffka, 1935; Treisman & Gelade, 1980; Treisman, 1996) or are inferred based on other research (Peter et al., 2019; Self et al., 2019). Similarly, computer vision algorithms used off-the-shelf feature banks originally (Arbeláez et al., 2011; Comaniciu & Meer, 2002; Cour et al., 2005; Felzenszwalb & Huttenlocher, 2004; Shi & Malik, 2000), and have recently moved towards deep neural network representations trained for other tasks as a source for feature maps (Girshick et al., 2014; He et al., 2019; Kokkinos, 2016; Liu et al., 2017; Shen et al., 2015; Xie & Tu, 2015). Interestingly, predictability of visual inputs over space and time has been discussed as a solution for both these limitations of earlier theories. Predictability has been used as a cue for segmentation since the law of common fate of Gestalt psychology (Koffka, 1935), and both lateral interactions in visual cortices and contour integration respect the statistics of natural scenes (Geisler & Perry, 2009; Geisler et al., 2001). Among other signals like sparsity (Olshausen & Field, 1996) or reconstruction (Kingma & Welling, 2014), predictability is also a well known signal for self-supervised learning of features (Wiskott & Sejnowski, 2002), which has been exploited by many recent contrastive learning (e.g. Feichtenhofer et al., 2021; Gutmann & Hyvarinen, 2010; Hénaff et al., 2020; van den Oord et al., 2019) and predictive coding schemes (e.g. Lotter et al., 2017; 2018; van den Oord et al., 2019) for self-supervised learning. However, these uses of predictability for feature learning and for segmentation are usually studied separately. Here, we propose a model that learns both features and segmentation without supervision. Predictions between locations provide a self-supervised loss to learn the features, how to perform the prediction and how to infer which locations should be grouped. Also, this view combines contrastive learning (Gutmann & Hyvarinen, 2010; van den Oord et al., 2019), a Markov random field model for the feature maps (Li, 2012) and segmentation into a coherent framework. We implement our model using some shallow architectures. The learned features resemble early cortical responses and the object boundaries we infer from predictability align well with human object contour reports from the Berkeley segmentation database (BSDS500 (Arbeláez et al., 2011)). Thus, retinotopic visual cortex might implement similar computational principles as we propose here. 2 MODEL To explain our combined model of feature maps and their local segmentation information, we start with a Gaussian Markov random field model (Li, 2012) with pairwise factors. We then add a variable w ∈ {0, 1} to each factor that governs whether the factor enters the product or not. 
This yields a joint distribution for the whole feature map and all w's. Marginalizing out the w's yields a Markov random field with "robust" factors for the feature map, which we can use to predict feature vectors from the vectors at neighboring positions. We find two contrastive losses based on these predictions that can be used to optimize the feature extraction and the factors in the Markov random field model.

We model the distribution of k-dimensional feature maps f ∈ R^{k×m'×n'} that are computed from input images I ∈ R^{c×m×n} with c = 3 color channels (see Fig. 1 A & B). We use a Markov random field model with pairwise factors, i.e. we define the probability of encountering a feature map f with entries f_i at locations i ∈ [1...m'] × [1...n'] (i and j thus have two entries each) as follows:

$$p(\mathbf{f}) \propto \prod_i \psi_i(\mathbf{f}_i) \prod_{(i,j)\in N} \psi_{ij}(\mathbf{f}_i, \mathbf{f}_j), \qquad (1)$$

where ψ_i is the local factor, N is the set of all neighboring pairs, and ψ_ij is the pairwise factor between positions i and j. We will additionally assume shift invariance, i.e. each point has the same set of nearby relative positions in the map as neighbors, ψ_i is the same factor for each position, and each factor ψ_ij depends only on the relative position of i and j.

We now add a binary variable w ∈ {0, 1} to each pairwise factor that encodes whether the factor is 'active' (w = 1) for that particular image (Fig. 1 C). To scale the probability of w = 1 and w = 0 relative to each other, we add a factor that scales them with constants p_ij ∈ [0, 1] and 1 − p_ij respectively:

$$p(\mathbf{f}, \mathbf{w}) \propto \prod_i \psi_i(\mathbf{f}_i) \prod_{(i,j)\in N} p_{ij}^{w_{ij}} (1-p_{ij})^{1-w_{ij}} \psi_{ij}(\mathbf{f}_i, \mathbf{f}_j)^{w_{ij}}. \qquad (2)$$

Finally, we assume that the factors are Gaussian and the feature vectors are originally normalized to have mean 0 and variance 1:

$$p(\mathbf{f}, \mathbf{w}) = \frac{1}{Z_0}\, \mathcal{N}(\mathbf{f}; \mathbf{0}, I) \prod_{(i,j)\in N} \frac{p_{ij}^{w_{ij}} (1-p_{ij})^{1-w_{ij}}}{Z(w_{ij}, C_{ij})} \exp\left(-\frac{w_{ij}}{2} (\mathbf{f}_i - \mathbf{f}_j)^T C_{ij} (\mathbf{f}_i - \mathbf{f}_j)\right), \qquad (3)$$

where Z_0 is the overall normalization constant, N(f; 0, I) is the density of a standard normal distribution with k × m' × n' dimensions, C_ij governs the strength of the coupling in the form of a precision matrix, which we will assume to be diagonal, and Z(w_ij, C_ij) scales the distributions with w_ij = 0 and w_ij = 1 relative to each other. We set Z(w_ij, C_ij) to the normalization constant of the Gaussian with standard Gaussian factors for f_i and f_j respectively. For w = 0 this is just (2π)^{−k}, the normalization constant of a standard Gaussian in 2k dimensions. For w = 1 we get:

$$Z(w_{ij}=1, C_{ij})^{-1} = \int\!\!\int \exp\left(-\tfrac{1}{2}\mathbf{f}_i^T\mathbf{f}_i - \tfrac{1}{2}\mathbf{f}_j^T\mathbf{f}_j - \tfrac{1}{2}(\mathbf{f}_i-\mathbf{f}_j)^T C_{ij} (\mathbf{f}_i-\mathbf{f}_j)\right) d\mathbf{f}_i\, d\mathbf{f}_j, \qquad (4)$$

so that

$$Z(w_{ij}=1, C_{ij}) = (2\pi)^{-k} \det\begin{pmatrix} I + C_{ij} & -C_{ij} \\ -C_{ij} & I + C_{ij} \end{pmatrix}^{1/2} \qquad (5)$$

$$= (2\pi)^{-k} \prod_l \sqrt{1 + 2 c_{ll}}, \qquad (6)$$

which we get by computing the normalization constant of a Gaussian with the given precision and then using the assumption that C_ij is a diagonal matrix with diagonal entries c_ll. This normalization depends only on w and the coupling matrix C of the factor ψ_ij and thus induces a valid probability distribution on the feature maps. Two points are notable about this normalization though: First, once other factors also constrain f_i and/or f_j, this normalization will not guarantee p(w_ij = 1) = p_ij; instead, p(w_ij = 1) will be higher, because other factors increase the precision for the feature vectors, which makes the normalization constants more similar. Second, the w_ij are not independent in the resulting distribution. For example, if pairwise factors connect a to b, b to c, and a to c, the corresponding w are dependent, because w_ab = 1 and w_bc = 1 already imply a smaller difference between f_a and f_c than if these factors were inactive, which increases the probability for w_ac = 1.
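To make the Gaussian factor parameterization concrete, here is a minimal PyTorch sketch of the quantities above; the function names and the log-parameterization of the diagonal precision entries are our own choices, not from the paper.

```python
import math
import torch

def log_psi(fi, fj, log_c):
    # log psi_ij(f_i, f_j) = -1/2 (f_i - f_j)^T C_ij (f_i - f_j)
    # with diagonal C_ij = diag(exp(log_c)), kept positive by construction
    d = fi - fj                                        # (..., k)
    return -0.5 * (d.pow(2) * log_c.exp()).sum(-1)

def log_Z(w, log_c):
    # Eqs. (4)-(6): log Z(w=0) = -k log(2 pi);
    # log Z(w=1) additionally gets 1/2 * sum_l log(1 + 2 c_ll)
    k = log_c.numel()
    out = torch.tensor(-k * math.log(2 * math.pi))
    if w == 1:
        out = out + 0.5 * torch.log1p(2 * log_c.exp()).sum()
    return out
```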
2.1 LEARNING To learn our model from data, we use a contrastive learning objective on the marginal likelihood p(f). To do so, we first need to marginalize out the w's, which is fortunately simple, because each w affects only a single factor:

$$p(\mathbf{f}) = \sum_{\mathbf{w}} p(\mathbf{f}, \mathbf{w}) = \frac{1}{Z_0}\, \mathcal{N}(\mathbf{f}; \mathbf{0}, I) \prod_{(i,j)\in N} \left[ p_{ij}\, \psi_{ij}(\mathbf{f}_i, \mathbf{f}_j) + (1 - p_{ij}) \right]. \qquad (7)$$

Using this marginal likelihood directly for fitting is infeasible though, because computing Z_0, i.e. normalizing this distribution, is not computationally tractable. We resort to contrastive learning to fit the unnormalized probability distribution (Gutmann & Hyvarinen, 2010), i.e. we optimize discrimination from a noise distribution with the same support as the target distribution. Following van den Oord et al. (2019), we do not optimize the Markov random field directly, but optimize predictions based on the model using features from other locations as the noise distribution. For this noise distribution, the factors that depend only on a single location (the first product in (1)) will cancel. We thus ignore the N(f; 0, I) in our optimization and instead normalize the feature maps to mean 0 and unit variance across each image. We define two alternative losses that make predictions for positions based on all their neighbors or for a single factor respectively.

2.1.1 POSITION LOSS The position loss optimizes the probability of the feature vector at each location relative to the probability of randomly chosen other feature vectors from different locations and images:

$$l_{\text{pos}}(\mathbf{f}) = \sum_i \log \frac{p(\mathbf{f}_i \mid \mathbf{f}_j\ \forall j \in N(i))}{\sum_{i'} p(\mathbf{f}_{i'} \mid \mathbf{f}_j\ \forall j \in N(i))} \qquad (8)$$

$$= \sum_i \sum_{j \in N(i)} \log \psi_{ij}(\mathbf{f}_i, \mathbf{f}_j) - \sum_i \log \sum_{i'} \exp \sum_{j \in N(i)} \log \psi_{ij}(\mathbf{f}_{i'}, \mathbf{f}_j), \qquad (9)$$

where N(i) is the set of neighbors of i.

2.1.2 FACTOR LOSS The factor loss instead maximizes each individual factor for the correct feature vectors relative to random pairs of feature vectors sampled from different locations and images:

$$l_{\text{fact}} = \sum_{i,j} \log \frac{\psi_{ij}(\mathbf{f}_i, \mathbf{f}_j)}{\sum_{i',j'} \psi_{ij}(\mathbf{f}_{i'}, \mathbf{f}_{j'})} \qquad (10)$$

$$= \sum_{i,j} \log \psi_{ij}(\mathbf{f}_i, \mathbf{f}_j) - \sum_{i,j} \log \sum_{i',j'} \psi_{ij}(\mathbf{f}_{i'}, \mathbf{f}_{j'}), \qquad (11)$$

where i, j index the correct locations and i', j' index randomly drawn locations, in our implementation generated by shuffling the feature maps and taking all pairs that occur in these shuffled maps.

2.1.3 OPTIMIZATION We optimize all weights of the neural network used for feature extraction and the parameters of the random field, i.e. the C_ij and p_ij for the different relative spatial locations, simultaneously. As an optimization algorithm, we use stochastic gradient descent with momentum. Both losses succeed in learning the model, but the factor loss is substantially more efficient. We discuss the distinction between the two losses and further details of the optimization in the supplementary materials.

2.2 SEGMENTATION INFERENCE Computing the probability for any individual pair of locations (i, j) to be connected, i.e. computing p(w_ij = 1 | f), depends only on the two connected feature vectors f_i and f_j:

$$\frac{p(w_{ij}=1 \mid \mathbf{f})}{p(w_{ij}=0 \mid \mathbf{f})} = \frac{p_{ij}}{1-p_{ij}}\, \frac{Z(w_{ij}=0, C_{ij})}{Z(w_{ij}=1, C_{ij})} \exp\left(-\frac{1}{2}(\mathbf{f}_i - \mathbf{f}_j)^T C_{ij} (\mathbf{f}_i - \mathbf{f}_j)\right). \qquad (12)$$

This inference effectively yields a connectivity measure for each pair of neighboring locations, i.e. a sparse connectivity matrix. Given that we did not apply any prior information enforcing continuous objects or contours, the inferred w_ij do not necessarily correspond to a valid segmentation or set of contours.
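As an illustration, here is a rough PyTorch sketch of the factor loss (eqs. 10-11) for a single neighbor offset and of the connectivity log-odds (eq. 12); the permute-and-roll shuffling used to build negative pairs and all names are simplifications of ours, not the paper's exact implementation.

```python
import torch

def log_psi_map(fi, fj, log_c):
    # log psi over the channel dimension, diagonal precision C = diag(exp(log_c))
    d = fi - fj
    return -0.5 * (d.pow(2) * log_c.exp().view(1, -1, 1, 1)).sum(1)

def factor_loss(f, log_c, dy=1, dx=0):
    # f: (B, k, H, W) feature maps, normalized to mean 0 / unit variance.
    # Positive pairs: features at locations offset by (dy, dx) in the same map.
    # Negative pairs: the same offsets in maps shuffled across images/positions.
    B, k, H, W = f.shape
    fi, fj = f[:, :, :H - dy, :W - dx], f[:, :, dy:, dx:]
    pos = log_psi_map(fi, fj, log_c)                       # (B, H-dy, W-dx)
    f_shuf = f[torch.randperm(B)].roll(shifts=(5, 9), dims=(2, 3))
    neg = log_psi_map(f_shuf[:, :, :H - dy, :W - dx],
                      f_shuf[:, :, dy:, dx:], log_c)
    # eq. (11): log psi of true pairs against log-sum-exp over negative pairs
    return -(pos.mean() - torch.logsumexp(neg.flatten(), dim=0))

def connectivity_log_odds(fi, fj, logit_p, log_c):
    # eq. (12): logit_p = log(p_ij / (1 - p_ij));
    # log Z(w=0)/Z(w=1) = -1/2 * sum_l log(1 + 2 c_ll)
    log_z_ratio = -0.5 * torch.log1p(2 * log_c.exp()).sum()
    return logit_p + log_z_ratio + log_psi_map(fi, fj, log_c)
```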
Finding the best fitting contours or segmentation for given probabilities for the w's is an additional process, which in humans appears to be an attention-dependent serial process (Jeurissen et al., 2016; Self et al., 2019). To evaluate the detected boundaries on computer vision benchmarks, we nonetheless need to convert the connectivity matrix we extracted into a contour image. To do so, we use the spectral-clustering-based globalization method developed by Arbeláez et al. (2011). This method requires that all connection weights between nodes are positive. To achieve this, we transform the log-probability ratios for the w_ij as follows: for each image, we find the 30% quantile of the values, subtract it from all log-probability ratios, and set all values below 0.01 to 0.01. We then compute the smallest eigenvectors of the graph Laplacian as in graph spectral clustering. These eigenvectors are then transformed back into image space and are filtered with simple edge detectors to find the final contours.

3 EVALUATION We implement 3 model types implementing feature extractions of increasing complexity in PyTorch (Paszke et al., 2019): Pixel value model. For illustrative purposes, we first apply our ideas to the RGB pixel values of an image as features. This provides us with an example where we can easily show the feature values and connections. Additionally, this model provides an easy benchmark for all evaluations. Linear model. As the simplest kind of model that allows learning features, we use a single convolutional deep neural network layer as our feature model. Here, we use 50 linear features of size 11 × 11. Predseg1. To show that our method works for more complex architectures with non-linearities, we use a relatively small deep neural network with 4 layers (2 convolutional layers and 2 residual blocks with subsampling layers between them, see supplement for details). For each of these architectures, we train 24 different networks with all combinations of the following settings: 4 different sizes of neighborhoods (4, 8, 12, or 20 neighbors, see Fig. 1D), 3 different noise levels (0, 0.1, 0.2), and the two learning objectives. As a training set, we used the unlabeled image set from MS COCO (Lin et al., 2015), which contains 123,404 color images with varying resolution. To enable batch processing, we randomly crop these images to 256 × 256 pixel resolution, but use no other data augmentation (see supplementary information for further training details). We want to evaluate whether our models learn human-like features and segmentations. To do so, we first analyze the features in the first layers of our networks, where we can judge whether the features are representative of biological visual systems. We then extract segmentations from our activations and evaluate those on the Berkeley Segmentation Dataset (BSDS500; Arbeláez et al., 2011).

3.1 LEARNED FEATURES Linear model. We first analyze the weights in our linear models (Fig. 2 A-C). All instances learn local averages and Gabor-like striped features, i.e. spatial frequency and orientation tuned features with limited spatial extent. These features clearly resemble receptive fields of neurons in primary visual cortex. Additionally, there appears to be some preference for features that weight the red and green color channels much more strongly than the blue channel, similar to the human luminance channel, which leads to the yellow-blue contrasts in the plots. There is some difference between the two learning objectives though.
The position-based loss generally leads to lower-frequency and somewhat noisier features. This could either be due to the higher learning efficiency of the factor-based loss, i.e. the factor-based loss being closer to convergence, or due to a genuinely different optimization goal.

Predseg1. In Predseg1, we first analyze the layer 0 convolution (Fig. 2D), which has only 3 channels with 3 × 3 receptive fields, and which we originally introduced as a learnable downsampling. This layer consistently converges to applying near-constant weights over space. Additionally, exactly one of the channels has a non-zero mean (the 3rd, 1st and 3rd in Fig. 2D) and the other two take balanced differences between two of the channels (red vs. green and green vs. blue in the examples). This parallels the luminance and opponent color channels of human visual perception. In the second convolution, we observe a similar pattern of oriented filters and local averages as in the linear model, albeit in false color, as the input channels are rotated by the weighting of the layer 0 convolution (Fig. 2 E & F).

3.2 CONTOUR EXTRACTION To evaluate whether the connectivity information extracted by our model corresponds to human-perceived segmentation, we extract contours from our models and compare them to contours reported by humans for the Berkeley Segmentation Database (Arbeláez et al., 2011; Martin et al., 2001). This database contains human-drawn object boundaries for 500 natural images and is accompanied by methods for evaluating segmentation models. Using the methods provided with the database, we compute precision-recall curves for each model and use the best F-value (harmonic mean of precision and recall) as the final evaluation metric. As we had multiple models to choose from, we chose the models from each class that perform best on the training data for our reports. For all models this was one of the models with the largest neighborhood, i.e. using 20 neighbors, and the factor loss. It seems the factor loss performed better simply due to its technical efficiency advantage, as discussed above. Performance increases monotonically with neighborhood size, and Markov random field based approaches to semantic segmentation also increased their performance with larger neighborhoods, up to fully connected Markov random fields (Krähenbühl & Koltun, 2012; Chen et al., 2014; 2017). We thus expect that larger neighborhoods could work even better. Qualitatively, we observe that all our models yield sensible contour maps (see Fig. 3 A). Additionally, we note that the linear model and layer 1 of the Predseg1 model tend to produce double contours, i.e. they tend to produce two contours on either side of the contour reported by human subjects, with some area between them connected to neither side of the contour. Quantitatively, our models also perform well, except for the deeper layers of Predseg1 (Fig. 3B and Table 1). The other models beat most hand-crafted contour detection algorithms that were tested on this benchmark (Canny, 1986; Comaniciu & Meer, 2002; Cour et al., 2005; Felzenszwalb & Huttenlocher, 2004) and perform close to the gPb-owt-ucm contour detection and segmentation algorithm (Arbeláez et al., 2011), which was the state of the art at the time. Layer 0 of Predseg1 performs best, followed by the linear feature model and finally the pixel value model. Interestingly, the best performing models seem to be mostly the local averaging models (cf. Fig. 2 C).
In particular, the high performance of the first layer of Predseg1 is surprising, because it uses only 3 × 3 pixel local color averages as features. Since the advent of deep neural network models, networks trained to optimize performance on image segmentation have reached much higher performance on the BSDS500 benchmark, essentially reaching perfect performance up to human inconsistency (e.g. He et al., 2019; Kokkinos, 2016; Linsley et al., 2020; Liu et al., 2017; Shen et al., 2015; Su et al., 2021; Xie & Tu, 2015; see Table 1). However, these models all require direct training on human-reported contours and often use features learned for other tasks. There are also a few deep neural network models that attempt unsupervised segmentation (e.g. Chen et al., 2019; Lin et al., 2021; Xia & Kulis, 2017), but we were unable to find any that were evaluated on the contour task of BSDS500. The closest is perhaps the W-net (Xia & Kulis, 2017), which used an autoencoder structure with additional constraints and was evaluated on the segmentation task on BSDS500, performing slightly better than gPb-owt-ucm.

4 DISCUSSION We present a model that can learn features and local segmentation information from images without further supervision signals. This model integrates the prediction task used for feature learning and the segmentation task into the same coherent probabilistic framework. This framework and the dual use of the connectivity information make it seem sensible to represent this information. Furthermore, the features learned by our models resemble receptive fields in the retina and primary visual cortex, and the contours we extract from connectivity information match contours drawn by human subjects fairly well, both without any training towards making them more human-like. To improve biological plausibility, all computations in our model are local and all units are connected to the same small, local set of other units throughout learning and inference, which matches early visual cortex, in which the lateral connections that follow natural image statistics are implemented anatomically (Buzás et al., 2006; Hunt et al., 2011; Roelfsema et al., 1998; Stettler et al., 2002). This is in contrast to other ideas that require flexible pointers to arbitrary locations and features (as discussed by Shadlen & Movshon, 1999) or capsules that flexibly encode different parts of the input (Doerig et al., 2020; Kosiorek et al., 2019; Sabour et al., 2017; 2021). Nonetheless, we employ contrastive learning objectives and backpropagation here, for which we do not provide a biologically plausible implementation. However, there is currently active research towards biologically plausible alternatives to these algorithms (e.g. Illing et al., 2021; Xiong et al., 2020). Selecting the neurons that react to a specific object appears to rely on some central resource (Treisman, 1996; Treisman & Gelade, 1980) and to spread gradually through the feature maps (Jeurissen et al., 2013; 2016; Self et al., 2019). We used a computer vision algorithm for this step, which centrally computes the eigenvectors of the connectivity graph Laplacian (Arbeláez et al., 2011) and which does not immediately look biologically plausible. However, a recent theory for hippocampal place and grid cells suggests that these cells compute the same eigenvectors of a graph Laplacian, albeit of a successor representation (Stachenfeld et al., 2014; 2017). Thus, this might be an abstract description of an operation brains are capable of.
In particular, earlier accounts that model the selection as a marker that spreads to related locations (e.g. Finger & König, 2014; Roelfsema, 2006; Singer & Gray, 1995) have some similarities with iterative algorithms to compute eigenvectors. Originally, phase coherence was proposed as a marker (Finger & König, 2014; Peter et al., 2019; Singer & Gray, 1995), but a simple gain increase within attended objects (Roelfsema, 2006) and a random gain modulation (Haimerl et al., 2021; 2019) were also proposed. Regardless of the mechanistic implementation of the marker, connectivity information of the type our model extracts would be extremely helpful to explain the gradual spread of object selection. Our implementation of the model is not fully optimized, as it is meant as a proof of concept. In particular, we did not optimize the architectures or training parameters of our networks for the task, such as initialization, optimization algorithm, learning rate, or regularization. Presumably, better performance in all benchmarks could be reached by adjusting any or all of these parameters. One possible next step for our model would be to train deeper architectures, such that the features could be used for complex tasks like object detection and classification. Contrastive losses like the one we use here are successfully applied for pretraining on large-scale tasks such as ImageNet (Russakovsky et al., 2015) or MS COCO (Lin et al., 2015). These large-scale applications often require modifications for better learning (Chen et al., 2020; Feichtenhofer et al., 2021; Grill et al., 2020; He et al., 2020; Hénaff et al., 2020; van den Oord et al., 2019), for example image augmentations to explicitly train networks to be invariant to some image changes, prediction heads that allow more complex distributions for the predictions, and memory banks or other methods to decrease the reliance on many negative samples. For understanding human vision, this line of reasoning opens the exciting possibility that higher visual cortex could be explained based on similar principles, as representations from contrastive learning also yield high predictive power for these cortices (Zhuang et al., 2021). The model we propose here is a probabilistic model of the feature maps. Based on this model, we could also infer the feature values. Thus, our model implies a pattern for how neurons should combine their bottom-up inputs with predictions from nearby other neurons, once we include some uncertainty for the bottom-up inputs. In particular, the combination ought to take into account which nearby neurons react to the same object and which ones do not. Investigating this pooling could provide insights and predictions for phenomena that are related to local averaging. Crowding, for example (Balas et al., 2009; Freeman & Simoncelli, 2011; Herzog et al., 2015; Wallis et al., 2016; 2017; 2019), is currently captured best by summary statistic models (Balas et al., 2009; Freeman & Simoncelli, 2011; Wallis et al., 2017), but deviations from these predictions suggest that object boundaries change processing (Herzog et al., 2015; Wallis et al., 2016; 2019). Another promising extension of our model would be processing over time, because predictions over time were found to be a potent signal for contrastive learning (Feichtenhofer et al., 2021) and because coherent object motion is among the strongest grouping signals for human observers (Köhler, 1967) and computer vision systems (Yang et al., 2021).
Besides the substantial increase in processing capacity necessary to move to video processing instead of image processing, this step would require some extension of our framework to include object motion in the prediction. Nonetheless, including processing over time seems to be an interesting avenue for future research, especially because segmentation annotations for video are extremely expensive to collect, such that unsupervised learning is particularly advantageous and popular in recent approaches (Araslanov et al., 2021; Jabri et al., 2020; Lai et al., 2020).

A SUPPLEMENTARY MATERIAL: TRAINING DETAILS We trained 24 networks of each of the three types. The versions differed in the size of the neighborhood (4, 8, 12, or 20 neighbors), the amount of noise added (α ∈ {0, 0.1, 0.2}), and the loss used (position or factor loss). The parameters we trained were:
• all weights of the underlying network
• the logit transform of p_ij for each relative position of two neighbors
• the logarithms of the diagonal entries of C_ij for each relative position of neighbors
We trained models using the standard stochastic gradient descent implemented in PyTorch (Paszke et al., 2019) with a learning rate of 0.001, a momentum of 0.9, and a slight weight decay of 0.0001. To speed up convergence, we increased the learning rate by a factor of 10 for the parameters of the prediction, i.e. C_ij and p_ij. For the gradient accumulation for the position-based loss, we accumulate 5 repetitions for the pixel model and 10 for the linear model and for Predseg1. Each repetition contained 10 random negative locations. Batch size was set to fit onto the smaller GPU type used in our local cluster. The resulting sizes are listed in Table 2.

A.1 ARCHITECTURE DETAILS The pixel model was implemented as a single identity layer. The linear model was implemented as a single 50 × 11 × 11 convolutional layer. The Predseg1 model was implemented as a sequential model with 4 processing steps separated by subsampling layers (1 × 1 convolutional layers with a stride > 1). The first processing step was a 3 × 3 convolutional layer with 3 channels, followed by subsampling by a factor of 3. The second step was an 11 × 11 convolutional layer with 64 features, followed by subsampling by a factor of 2. The third and fourth steps were residual processing blocks, i.e. two convolutional layers with a rectified linear unit non-linearity between them, whose results were added to the inputs. They had 128 and 256 features respectively and were separated by another subsampling by a factor of 2.

A.2 ADDED NOISE To prevent individual feature dimensions from becoming perfectly predictive, we added a small amount of Gaussian noise to the feature maps before applying the loss. To yield variables with mean 0 and variance 1 after adding the noise, we implemented this step as:

$$\mathbf{f}_{\text{noise}} = \sqrt{1 - \alpha^2}\, \mathbf{f} + \alpha\, \boldsymbol{\epsilon}, \qquad (13)$$

where α ∈ [0, 1] controls the noise variance and ε is a standard normal random variable. Adding this noise did not change any of our results substantially, and the three versions with different amounts of noise (α = 0, 0.1, or 0.2) performed within 1-2% of each other in all performance metrics (a one-line code sketch of this step follows after A.3 below).

A.3 TRAINING DURATION Networks were trained in training jobs that were limited to either 48 hours of computation time or 10 epochs of training. As listed in Table 2, we used a single such job for the pixel models, 7 for the linear models, and 9 for the Predseg1 models. Most larger networks were limited by the 48-hour limit, not by the epoch limit.
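Referring back to eq. (13) in A.2, the noise injection amounts to a single line (a sketch; `f` stands for a feature map tensor):

```python
import torch

def add_noise(f, alpha):
    # eq. (13): sqrt(1 - alpha^2) * f + alpha * eps keeps mean 0, variance 1
    return (1 - alpha ** 2) ** 0.5 * f + alpha * torch.randn_like(f)
```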
A.4 USED COMPUTATIONAL RESOURCES The vast majority of the computation time was used for training the network parameters. Computing segmentations for the BSDS500 images and evaluating them took only a few hours of pure CPU processing. Networks were trained on an internal cluster using one GPU at a time and 6 CPUs for data loading. We list the training time per epoch in Table 2. If every job had run for the full 48 hours, we would have used (1 + 7 + 9) × 24 × 2 = 816 days of GPU processing time, which is a relatively close upper bound on the time we actually used.

A.5 COMPARISON OF THE TWO LOSSES The position loss is consistent with the prediction made by the whole Markov random field, but is relatively inefficient, because the predicted distribution p(f_i | f_j ∀ j ∈ N(i)) and the normalization constants for these conditional distributions are different for every location i. Thus, the second term in equation (9) cannot be reused across the locations i. Instead, we need to compute the second term for each location separately, which requires a similar amount of memory as the whole feature representation for each negative sample i' and each neighbor. To enable a sufficiently large set of negative points i' with the available memory, we compute this loss multiple times with few negative samples and sum the gradients. This trick saves memory, because we can free the memory for the loss computation after each repetition. As the initial computation of the feature maps is the same for all negative samples, we save some computation in this procedure by computing the feature maps only once. To propagate the gradients through this single computation, we add up the gradients of the loss repetitions with respect to the feature maps and then propagate this summed gradient through the feature map computation. This procedure does not save computation time compared to the loss with many negative samples, as we still need to calculate the evaluation for each position and each sample in the normalization set. The factor loss does not lead to a consistent estimation of the MRF model, because the prediction p(f_i | f_j) should not be based only on the factor ψ_ij, but should include indirect effects, as f_j also constrains the other neighbors of i. Optimizing each factor separately will thus over-account for information that could be implemented in two factors. However, the factor loss has the distinct advantage that the same noise evaluations can be used for all positions and images in a minibatch, which enables a much larger number of noise samples and thus much faster learning.
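The memory-saving gradient accumulation for the position loss described above could be sketched as follows in PyTorch; `net`, `position_loss`, and `sample_negatives` are hypothetical stand-ins for the feature extractor and the computations described in the text.

```python
import torch

f = net(images)                                # single forward pass
f_leaf = f.detach().requires_grad_(True)       # cut the graph at the feature maps
grad_sum = torch.zeros_like(f_leaf)
for _ in range(n_repetitions):                 # e.g. 5 or 10 repetitions, as above
    loss = position_loss(f_leaf, sample_negatives(n_neg=10))
    (g,) = torch.autograd.grad(loss, f_leaf)   # frees each small loss graph
    grad_sum += g
f.backward(grad_sum)                           # one backward pass through `net`
```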
1. What is the main contribution of the paper, and how does it differ from other boundary detection methods?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and clarity?
3. How does the reviewer assess the quality and reproducibility of the paper's content?
4. What questions or concerns does the reviewer have regarding the learning objective, contrastive feature learning goal, and the optimization process?
5. How would you rephrase or rewrite the method section to make the learning objective more apparent?
6. Can you explain why the authors mentioned p_ij in the text but never used it in the objective?
7. How do you respond to the reviewer's confusion about the distinction between marginal distribution vs local definition of the distribution type?
8. Could you clarify whether the optimization is about approximating the marginal distribution P(w_ij) or not?
9. How would you address the reviewer's concern about the conflicting arguments regarding optimizing p_ij and not having it in the objective?
10. Can you provide additional equations as proof to support your answers to these questions?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a local model for boundary detection. At the core of the method, it assumes a local Gaussian Markov model in the feature space, where the features can be jointly learnt or derived from a neural network. The model uses a contrastive learning scheme to optimize features and pixel location connectivity, which gives the final contour prediction results.

Strengths And Weaknesses

Strengths If everything holds up, the strength of this paper is its novelty. It shows that even with local Gaussian Markov models, under the proposed contrastive learning goal, one can recover meaningful boundary maps by looking at the pixel connectivity matrix. (Spectral clustering is needed to generate segments first, followed by edge detectors for contours.)

Weaknesses This paper is poorly written with many missing pieces and handwavy arguments. The learning objective: in the loss section, only ψ_ij appears in both terms, so why is p_ij being optimized as well? ψ_ij is just a Gaussian error term between f_i and f_j, with optimizable diagonal variance C_ij. How is this related to the connectivity w_ij? Also please avoid using p_ij, because you are also using p(⋅) for probability. I would highly recommend the authors rewrite the method section in a way that makes the learning objective obvious. According to the formulation in the text, the final prediction result is p(w|f); then how is this target linked with the proposed objective? This needs a more detailed derivation, and since this is the core of the proposed method, it should be written as a formal theorem where you start from the contrastive objectives and end with an MLE or an approximation of p(w|f). Right now, from the current version of the manuscript, I can only make an educated guess that after optimizing with the contrastive feature learning goal, an MLE estimation of p(w|f) is performed, given the learnt feature and covariance parameters. I don't think this is what happens, because the authors mentioned in the text that p_ij is also being optimized, which never appears. I suspect the learning goal is wrong, where you need the p_ij terms as well; otherwise you are approximating some distribution (not stated clearly) with a different Markov field than the one stated in Eq. 2. Assuming it should not be just ψ_ij in the loss term, but p_ij as well, the authors should make it explicit what exactly they are approximating using the similar formulation of NCE, with equations as proof. I find handwavy arguments like: "we do not optimize the Markov random field directly, but optimize predictions based on the model using features from other locations as the noise distribution. For this noise distribution, the factors that depend only on a single location (the first product in (1)) will cancel." very vague and hard to follow. What do you mean by optimizing a Markov field? Optimizing for an MLE estimate? What are the parameters? MLE of what distribution? What noise distribution? Why did it cancel out? These really should be equations in addition to words. There are weird organization issues and formulation issues. Starting from Eq. 4 to Sec. 2.1, what is the derivation of the normalization factor for? According to text in the later section, it seems you need this for some logit calculation. But it's funny to say that something is a proper distribution after deriving the normalization factor: the normalization factor is exactly what makes it a proper distribution, by definition.
A better way to formulate this is to just let the pairwise potential (factor) be f(w_ij, f_i, f_j), and define this function to be the terms inside the prod operator. For each individual factor, p(w_ij = 1) is p_ij by definition, but not the marginal distribution on the Markov field; a simple fact. The authors make statements about this in a weird way, but it's just a simple distinction between the marginal distribution vs. the local definition of the distribution type. I'm still confused about why this point is stressed, though; are you approximating the marginal distribution P(w_ij) somewhere? Because of the conflicting arguments about optimizing p_ij and not having it in the objective, I can't make sense of what the optimization is about. Even if p_ij is included, it's still not obvious what distribution approximation the authors are making, and how to get a point estimate of the connectivity in the end (is it an MLE over some distribution?).

Clarity, Quality, Novelty And Reproducibility

Clarity This paper lacks clarity. It is hard to follow and requires a reader to guess what the author is trying to say. The quality of the results is acceptable, but the paper's writing prevents readers from understanding what's actually going on. The authors did provide full training and evaluation code in the submission. I believe the method/results are reproducible.
ICLR
Title Truncated Diffusion Probabilistic Models and Diffusion-based Adversarial Auto-Encoders

Abstract Employing a forward diffusion chain to gradually map the data to a noise distribution, diffusion-based generative models learn how to generate the data by inferring a reverse diffusion chain. However, this approach is slow and costly because it needs many forward and reverse steps. We propose a faster and cheaper approach that adds noise not until the data become pure random noise, but until they reach a hidden noisy-data distribution that we can confidently learn. Then, we use fewer reverse steps to generate data by starting from this hidden distribution that is made similar to the noisy data. We reveal that the proposed model can be cast as an adversarial auto-encoder empowered by both the diffusion process and a learnable implicit prior. Experimental results show that even with a significantly smaller number of reverse diffusion steps, the proposed truncated diffusion probabilistic models can provide consistent improvements over the non-truncated ones in terms of performance in both unconditional and text-guided image generations.

1 INTRODUCTION Generating photo-realistic images with probabilistic models is a challenging and important task in machine learning and computer vision, with many potential applications in data augmentation, image editing, style transfer, etc. Recently, a new class of image generative models based on diffusion processes (Sohl-Dickstein et al., 2015) has achieved remarkable results on various commonly used image generation benchmarks (Song & Ermon, 2019; Ho et al., 2020; Song & Ermon, 2020; Song et al., 2021b; Dhariwal & Nichol, 2021), surpassing many existing deep generative models, such as autoregressive models (van den Oord et al., 2016), variational auto-encoders (VAEs) (Kingma & Welling, 2013; Rezende et al., 2014; van den Oord et al., 2017; Razavi et al., 2019), and generative adversarial networks (GANs) (Goodfellow et al., 2014; Radford et al., 2015; Arjovsky et al., 2017; Miyato et al., 2018; Brock et al., 2019; Karras et al., 2019; 2020b). This new modeling class, which includes both score-based and diffusion-based generative models, uses noise injection to gradually corrupt the data distribution into a simple noise distribution that can be easily sampled from, and then uses a denoising network to reverse the noise injection to generate photo-realistic images. From the perspective of score matching (Hyvärinen & Dayan, 2005; Vincent, 2011) and Langevin dynamics (Neal, 2011; Welling & Teh, 2011), the denoising network is trained by matching the score function (the gradient of the log-density) of the corrupted data distribution to that of the generator distribution at different noise levels (Song & Ermon, 2019). This training objective can also be formulated under diffusion-based generative models (Sohl-Dickstein et al., 2015; Ho et al., 2020). These two types of models have been further unified by Song et al. (2021b) under the framework of discretized stochastic differential equations. Despite their impressive performance, diffusion-based (or score-based) generative models suffer from high computational costs, both in training and sampling.
This is because they need to perform a large number of diffusion steps, typically hundreds or thousands, to ensure that the noise injection at each step is small enough for the assumption that both the diffusion and denoising processes have the Gaussian form, which holds only in the limit of a small diffusion rate (Feller, 1949; Sohl-Dickstein et al., 2015). In other words, when the number of diffusion steps is small or when the rate is large, the Gaussian assumption may not hold well, and the model may not be able to capture the true score function of the data. Therefore, previous works have tried to reduce the number of diffusion steps by using non-Markovian reverse processes (Song et al., 2020; Kong & Ping, 2021), adaptive noise scheduling (San-Roman et al., 2021; Kingma et al., 2021), knowledge distillation (Luhman & Luhman, 2021; Salimans & Ho, 2022), diffusing in a lower-dimensional latent space (Rombach et al., 2022), etc., but they still cannot achieve significant speedup without sacrificing generation quality. In this paper, we propose a novel way to shorten the diffusion trajectory by learning an implicit distribution to start the reverse diffusion process, instead of relying on a tractable noise distribution. We call our method truncated diffusion probabilistic modeling (TDPM), which is based on the idea of truncating the forward diffusion chain of an existing diffusion model, such as the denoising diffusion probabilistic model (DDPM) of Ho et al. (2020). To significantly accelerate diffusion-based text-to-image generation, we also introduce the truncated latent diffusion model (TLDM), which truncates the diffusion chain of the latent diffusion model (LDM) of Rombach et al. (2022). We note that LDM is the latent text-to-image diffusion model behind Stable Diffusion, an open-source project that provides state-of-the-art performance in generating photo-realistic images given text input. By truncating the chain, we can reduce the number of diffusion steps to an arbitrary level, but at the same time, we also lose the tractability of the distribution at the end of the chain. Therefore, we need to learn an implicit generative distribution that can approximate this distribution and provide the initial samples for the reverse diffusion process. We show that this implicit generative distribution can be implemented in different ways, such as using a separate generator network or reusing the denoising network. The former option has more flexibility and can improve the generation quality, while the latter option adds no extra parameters and can achieve comparable results. We reveal that DDPM and VAE have a similar relationship as TDPM and the adversarial auto-encoder (AAE; Makhzani et al., 2015). Specifically, DDPM is like a VAE with a fixed encoder and a learnable decoder that both use a diffusion process, and a predefined prior; TDPM is like an AAE with a fixed encoder and a learnable decoder that both use a truncated diffusion process, and a learnable implicit prior. Our truncation method has several advantages when we use it to modify DDPM for generating images without text guidance or LDM for generating images with text guidance. First, it can generate samples much faster by using fewer diffusion steps, without sacrificing, and sometimes even enhancing, the generation quality.
Second, it can exploit the cooperation between the implicit model and the diffusion model, as the diffusion model helps the implicit model train by providing noisy data samples, and the implicit model helps the diffusion model reverse by providing better initial samples. Third, it can adapt the truncation level to balance the generation quality and efficiency, depending on the data complexity and the computational resources. For generating images with text guidance, our method can speed up the generation significantly and make it suitable for real-time processing: in the time that LDM takes to generate one photo-realistic image, our TLDM can generate more than 50 such images. The main contributions of our paper are as follows:
• We introduce TDPM, a new diffusion-based generative model that can shorten the diffusion trajectory by learning an implicit distribution to start the reverse diffusion process, and demonstrate that the learning of the implicit distribution can be achieved in various ways. We further introduce TLDM to significantly accelerate diffusion-based text-to-image generation.
• We show TDPM can be formulated as a diffusion-based AAE.
• We show that the implicit distribution can be realized by reusing the denoising network for the reverse diffusion process, which can reduce the reverse diffusion steps by orders of magnitude without adding any extra parameters and with comparable generation quality.
• We reveal the synergy between the implicit model and the diffusion model, as the diffusion process can simplify the training of the implicit model like GANs, and the implicit model can speed up the reverse diffusion process of the diffusion model.
• We show that both TDPM and TLDM can adapt the truncation level, according to the data complexity and the computational resources, to achieve a good balance between the generation quality and the efficiency.

2 PRELIMINARIES ON DIFFUSION MODELS In Gaussian diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020), starting from the data distribution x_0 ∼ q(x_0), a pre-defined forward diffusion process q_t produces auxiliary variables x_{t=1:T} by gradually adding Gaussian noise, with variance β_t ∈ (0, 1) at time t, as follows:

$$q(x_1, \ldots, x_T \mid x_0) := \prod_{t=1}^{T} q(x_t \mid x_{t-1}), \qquad q(x_t \mid x_{t-1}) := \mathcal{N}\left(x_t; \sqrt{1-\beta_t}\, x_{t-1},\, \beta_t I\right). \qquad (1)$$

In the limit of a small diffusion rate (i.e., β_t kept sufficiently small), the reverse distribution q(x_{t−1} | x_t) also follows a Gaussian distribution (Feller, 1949; Sohl-Dickstein et al., 2015) and can be approximated using a neural network parameterized Gaussian distribution p_θ as:

$$p_\theta(x_{t-1} \mid x_t) := \mathcal{N}\left(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(x_t, t)\right). \qquad (2)$$

Moreover, with a sufficiently large T, the outcome of the diffusion chain x_T will follow an isotropic Gaussian distribution. Thus, with the pre-defined forward (inference) diffusion process and the learned reverse (generative) diffusion process, we can sample from x_T ∼ N(0, I) and run the diffusion process in reverse to get a sample from the data distribution q(x_0). Under the variational inference (Kingma & Welling, 2013; Blei et al., 2017) framework, viewing q(x_1, ..., x_T | x_0) in (1) as the inference network, we can use the evidence lower bound (ELBO) as our learning objective.
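As a concrete illustration, sampling the forward chain in (1) step by step could look like the following minimal PyTorch sketch (our own, not from the paper):

```python
import torch

def forward_diffuse(x0, betas):
    # Eq. (1): x_t ~ N(sqrt(1 - beta_t) x_{t-1}, beta_t I), for t = 1..T
    xs, x = [], x0
    for beta in betas:                    # betas: 1-D tensor of diffusion rates
        x = (1 - beta).sqrt() * x + beta.sqrt() * torch.randn_like(x)
        xs.append(x)
    return xs                             # the full chain x_1, ..., x_T
```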
Following previous works (Sohl-Dickstein et al., 2015; Ho et al., 2020), the negative ELBO of a diffusion probabilistic model, parameterized by θ, can be expressed as

$$\mathcal{L}_{\text{ELBO}}(\theta) := L_0(\theta) + \sum_{t=2}^{T} L_{t-1}(\theta) + L_T, \qquad L_0(\theta) := \mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_1 \mid x_0)}\left[-\log p_\theta(x_0 \mid x_1)\right], \qquad (3)$$

$$L_{t-1}(\theta) := \mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_t \mid x_0)}\left[D_{\text{KL}}\left(q(x_{t-1} \mid x_t, x_0)\,\|\, p_\theta(x_{t-1} \mid x_t)\right)\right], \quad t \in \{2, \ldots, T\}, \qquad (4)$$

$$L_T := \mathbb{E}_{q(x_0)}\left[D_{\text{KL}}\left(q(x_T \mid x_0)\,\|\, p(x_T)\right)\right], \qquad (5)$$

where D_KL(q‖p) = E_q[log q − log p] denotes the Kullback–Leibler (KL) divergence from distribution p to q. Generally speaking, diffusion probabilistic models assume the number of diffusion steps T to be sufficiently large to satisfy two conditions: 1) the reverse distribution at each denoising step can be fitted with a Gaussian denoising generator p_θ(x_{t−1} | x_t); 2) with a sufficiently small diffusion rate β_t, the long forward diffusion process will successfully corrupt the data, making q(x_T | x_0) ≈ N(0, I), so that L_T approximately becomes zero and depends on neither x_0 nor θ.

What happens if T is insufficiently large? Given a non-Gaussian data distribution q(x_0), when the number of denoising steps is reduced, the true posterior q(x_{t−1} | x_t) is not Gaussian and usually intractable (Feller, 1949), resulting in new challenges for current diffusion models. As noted in Xiao et al. (2022), when β_t is not sufficiently small, the diffusion step becomes larger and the denoising distribution can be multi-modal and hence too complex to be well fitted by a Gaussian. The authors propose to define p_θ(x_{t−1} | x_t) with an implicit generator and substitute the ELBO with

$$\min_\theta \sum_{t \geq 1} \mathbb{E}_{q(t)}\left[D_{\text{adv}}\left(q(x_{t-1} \mid x_t)\,\|\, p_\theta(x_{t-1} \mid x_t)\right)\right], \qquad (6)$$

where D_adv represents a statistical distance that relies on an adversarial training setup. This modified objective can be minimized by leveraging the power of conditional GANs in fitting implicit multimodal distributions (Arjovsky et al., 2017; Goodfellow et al., 2014; Nowozin et al., 2016). While the concept of diffusion has been used, the models proposed in Xiao et al. (2022) are shown to work best only when the number of diffusion steps is limited to as few as four, and start to exhibit deteriorated performance when that number is increased further.

3 TRUNCATED DIFFUSION AND ADVERSARIAL AUTO-ENCODING We first introduce the idea of accelerating both the training and generation of diffusion models by truncating the diffusion chains and describe the technical challenges. We then develop the objective function and training algorithm for TDPM. We further reveal that TDPM can also be formulated as an AAE (Makhzani et al., 2015) empowered by diffusion models. While DDPM can be considered a hierarchical version of a variational auto-encoder (VAE) with a fixed multi-stochastic-layer encoder, our derivation shows that TDPM can be considered a hierarchical version of an AAE with a fixed multi-stochastic-layer encoder but a learnable implicit prior.

3.1 MOTIVATION AND TECHNICAL CHALLENGES We propose a novel method called TDPM to speed up the diffusion process and the generative model. The main idea is to shorten the forward diffusion chain that transforms the data into Gaussian noise, and to use a learned implicit distribution to sample the starting point of the reverse diffusion chain that reconstructs the data. To be more precise, we adopt the DDPM framework that defines a variance schedule {β_1, β_2, ..., β_T}, which controls the amount of noise added at each step of the forward diffusion process.
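For reference, each KL term in (4) is available in closed form because the conditional forward posterior is itself Gaussian; this standard identity from Ho et al. (2020), with $\bar{\alpha}_t := \prod_{i=1}^{t}(1-\beta_i)$, is not restated in the text but underlies the tractability of the loss:

```latex
q(x_{t-1} \mid x_t, x_0) = \mathcal{N}\!\big(x_{t-1};\, \tilde{\mu}_t(x_t, x_0),\, \tilde{\beta}_t I\big),
\qquad
\tilde{\mu}_t = \frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1-\bar{\alpha}_t}\, x_0
              + \frac{\sqrt{\alpha_t}\,(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_t}\, x_t,
\qquad
\tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\,\beta_t .
```

The next subsection recalls the corresponding closed-form marginal of the forward process.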
The forward process has a simple analytical form as a Gaussian distribution:

$$q(x_t \mid x_0) = \mathcal{N}\left(x_t;\, \sqrt{\bar{\alpha}_t}\, x_0,\, (1-\bar{\alpha}_t) I\right), \qquad \bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i, \quad \alpha_i = 1 - \beta_i.$$

Here, x_t is the noisy version of the data x_0 at step t, and ᾱ_t is the cumulative product of the diffusion coefficients α_i. The forward chain of length T is designed to be long enough to make the data distribution indistinguishable from Gaussian noise N(0, I). However, a long forward chain also implies a high computational cost for the reverse process, which uses a learned neural network to predict the conditional distribution of the clean data given the noisy one at each step. The proposed TDPM cuts off the last part of the forward chain and keeps only the first T_trunc steps {β_1, β_2, ..., β_{T_trunc}} ⊂ {β_1, β_2, ..., β_T}. We choose T_trunc to be much smaller than T so that we can save a lot of computation time in generation. The benefit of this truncation is illustrated in Figure 1, where the bottom row shows the truncated diffusion chain. We can see that the data are only partially corrupted by noise and still retain some features of the original data. This means that we can recover the data more easily and accurately by applying a few Gaussian denoising steps from the corrupted data. Moreover, we do not change the diffusion rates β_t for the first T_trunc steps, so we do not compromise the quality of the forward and reverse processes between time 0 and T_trunc. However, truncating the forward chain also introduces a new challenge for the reverse process. Unlike the original chain, where the starting point of the reverse process is x_T ∼ N(0, I), the truncated chain ends in an unknown distribution of the corrupted data at step T_trunc. This makes it difficult to sample from this distribution and initiate the reverse process. To overcome this challenge, we introduce an implicit generative model that approximates the distribution of the corrupted data by minimizing a divergence measure between the implicit and the true noisy distributions at step T_trunc. This way, we can use the implicit model to sample the starting point of the reverse process and then apply the learned denoising network to generate the data.

3.2 HAND-CRAFTED TDPM OBJECTIVE FUNCTION Mathematically, recall that the DDPM loss in (3) consists of three terms: L_0, ∑_{t=2}^{T} L_{t−1}, and L_T. The training objective of a conventional diffusion model focuses on the terms ∑_{t=2}^{T} L_{t−1} and L_0. It assumes that L_T does not depend on any parameter and will be close to zero by carefully pre-defining the forward noising process such that q(x_T | x_0) ≈ p(x_T) = N(0, I). When the diffusion chains are truncated at time T_trunc ≪ T, the forward diffusion ends at time T_trunc, where the marginal distribution of the forward diffusion-corrupted data can be expressed as

$$q(x_{T_{\text{trunc}}}) := \int q(x_{T_{\text{trunc}}} \mid x_0)\, p(x_0)\, dx_0, \qquad (7)$$

which takes a semi-implicit form (Yin & Zhou, 2018) whose density function is often intractable. To reverse this truncated forward diffusion chain, we can no longer start the reverse diffusion chain from a known distribution such as N(0, I). To this end, we propose TDPM, which starts the reverse chain at time T_trunc from p_ψ(x_{T_trunc}), an implicit distribution parameterized by ψ. We match p_ψ(x_{T_trunc}) to q(x_{T_trunc}) via the loss term

$$\tilde{L}_{T_{\text{trunc}}} := D\left(q(x_{T_{\text{trunc}}})\,\|\, p_\psi(x_{T_{\text{trunc}}})\right),$$

where D(q‖p) is a statistical distance between distributions q and p, such as the Jensen–Shannon divergence or the Wasserstein distance.
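Although the density in (7) is intractable, sampling from it is trivial given data, thanks to the closed-form marginal above; a small PyTorch sketch (names ours) that we reuse below:

```python
import torch

def q_sample(x0, t, alphas_bar):
    # draw x_t ~ q(x_t | x0) = N(sqrt(abar_t) x0, (1 - abar_t) I) in one step
    a = alphas_bar[t]
    return a.sqrt() * x0 + (1 - a).sqrt() * torch.randn_like(x0)

# Samples from the semi-implicit marginal q(x_Ttrunc) of eq. (7),
# given a minibatch x0 drawn from the data:
x_trunc = q_sample(x0, T_trunc, alphas_bar)
```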
As we keep all the diffusion steps before time T_trunc in TDPM the same as those in DDPM, we combine L̃_{T_trunc} with all the loss terms of DDPM before time T_trunc in (3) to define the TDPM loss as

$$\mathcal{L}_{\text{TDPM}} := \sum_{t=1}^{T_{\text{trunc}}} L_{t-1}(\theta) + \tilde{L}_{T_{\text{trunc}}}(\psi), \qquad \tilde{L}_{T_{\text{trunc}}}(\psi) := D\left(q(x_{T_{\text{trunc}}})\,\|\, p_\psi(x_{T_{\text{trunc}}})\right). \qquad (8)$$

We note that while in general p_ψ(x_{T_trunc}) in TDPM is intractable, we can employ a deep neural network-based generator G_ψ to generate a random sample in a single step via

$$x_{T_{\text{trunc}}} = G_\psi(z), \qquad z \sim \mathcal{N}(0, I). \qquad (9)$$

We will discuss later that we may simply let ψ = θ to avoid adding more parameters.

3.3 TDPM AS DIFFUSION-BASED ADVERSARIAL AUTO-ENCODER Following the terminology of AAE, let us define the prior as p_ψ(x_{T_trunc}) and the decoder (likelihood) as

$$p_\theta(x_0 \mid x_{T_{\text{trunc}}}) := \int \cdots \int \left[\prod_{t=1}^{T_{\text{trunc}}} p_\theta(x_{t-1} \mid x_t)\right] dx_{T_{\text{trunc}}-1} \cdots dx_1, \qquad (10)$$

which is empowered by a reverse diffusion chain of length T_trunc, and the encoder (variational posterior) as q(x_{T_trunc} | x_0). Thus we can view q(x_{T_trunc}) defined in (7) as the aggregated posterior (Hoffman & Johnson, 2016; Tomczak & Welling, 2018). In addition to imposing an auto-encoding data-reconstruction loss, the key idea of the AAE (Makhzani et al., 2015) is to also match the aggregated posterior to a fixed prior. This idea distinguishes an AAE from a VAE, which regularizes the autoencoder by matching the variational posterior to a fixed prior under the KL divergence. To this end, we introduce a diffusion-based AAE (Diffusion-AAE), whose loss function is defined as

$$\mathcal{L}_{\text{Diffusion-AAE}} = -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\text{trunc}}} \mid x_0)} \log p_\theta(x_0 \mid x_{T_{\text{trunc}}}) + D\left(q(x_{T_{\text{trunc}}})\,\|\, p_\psi(x_{T_{\text{trunc}}})\right). \qquad (11)$$

Diffusion-AAE has two notable differences from a vanilla AAE: 1) its encoder is fixed and has no learnable parameters, while its prior is not fixed and is optimized to match the aggregated posterior, and 2) its decoder is a reverse diffusion chain, with T_trunc stochastic layers all parameterized by θ. Note that as the likelihood in (10) is in general intractable, the first loss term in (11) is intractable. However, the loss of Diffusion-AAE is upper bounded by the loss of TDPM, as described below.

Theorem 1. The Diffusion-AAE loss in (11) is upper bounded by the TDPM loss in (8): L_Diffusion-AAE ≤ L_TDPM.

3.4 MATCHING THE PRIOR TO THE AGGREGATED POSTERIOR Via the loss term L̃_{T_trunc} := D(q(x_{T_trunc}) ‖ p_ψ(x_{T_trunc})) in (8), we aim to match the prior p_ψ(x_{T_trunc}) to the aggregated posterior q(x_{T_trunc}) in TDPM. While we have an analytic density function for neither p nor q, we can easily draw random samples from both of them. Thus, we explore the use of two different types of statistical distances that can be estimated from samples of both q and p. We empirically show that TDPM can achieve good performance regardless of which distance is used for optimization. One possible statistical distance is based on the idea of GANs (Goodfellow et al., 2014; Arjovsky et al., 2017; Bińkowski et al., 2018), which are widely used to learn implicit distributions from empirical data. In this setting, we use a generator G_ψ(·) : R^d → R^d to transform samples from an isotropic Gaussian p(z) into samples that approximate the corrupted data, and a discriminator D_ϕ(·) : R^d → [0, 1] to distinguish between samples from the corrupted data distribution q(x_{T_trunc} | x_0) and the implicit generative distribution p_ψ(x_{T_trunc}). The generator and the discriminator are trained with the following objective L^GAN_{T_trunc}:

$$\min_\psi \max_\phi\; \mathbb{E}_{x \sim q(x_{T_{\text{trunc}}})}\left[\log D_\phi(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log\left(1 - D_\phi(G_\psi(z))\right)\right]. \qquad (12)$$
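As an illustration of the matching step in (12), one adversarial update might look as follows; `G_psi` and `D_phi` are assumed network handles, `q_sample` is the helper from the previous sketch, and the non-saturating generator loss is a common substitute we use here for the second term of (12).

```python
import torch
import torch.nn.functional as F

def prior_matching_step(x0, T_trunc, alphas_bar):
    ones = torch.ones(x0.shape[0], 1, device=x0.device)
    x_real = q_sample(x0, T_trunc, alphas_bar)      # samples of q(x_Ttrunc)
    x_fake = G_psi(torch.randn_like(x0))            # samples of p_psi(x_Ttrunc)
    # discriminator ascends eq. (12): real -> 1, fake -> 0
    d_loss = F.binary_cross_entropy_with_logits(D_phi(x_real), ones) \
           + F.binary_cross_entropy_with_logits(D_phi(x_fake.detach()), 0 * ones)
    # generator tries to make fakes look real (non-saturating variant)
    g_loss = F.binary_cross_entropy_with_logits(D_phi(x_fake), ones)
    return d_loss, g_loss   # g_loss is combined with the denoising loss (Sec. 3.5)
```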
3.5 TRAINING ALGORITHM As the objective in Equation (8) is a sum of different terms, following DDPM (Ho et al., 2020) in fixing the terms Σ_θ(x_t, t) = σ_t² I, we can simplify (1/T_trunc) ∑_{t=1}^{T_trunc} L_{t−1} into an expectation defined as

$$\mathcal{L}_{\text{simple\_trunc}} = \mathbb{E}_{t, x_0, \epsilon_t}\left[\left\|\epsilon_t - \epsilon_\theta(x_t, t)\right\|^2\right], \qquad t \sim \text{Unif}(1, 2, \ldots, T_{\text{trunc}}), \quad \epsilon_t \sim \mathcal{N}(0, I), \qquad (13)$$

where ε_t is an injected noise at a uniformly sampled timestep index t, $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon_t$ is a noisy image at time t, and ε_θ is a denoising U-Net that predicts the noise in order to refine the noisy image x_t. Therefore the final simplified version of (8) is constructed as

$$\mathcal{L}^{\text{GAN}}_{\text{TDPM}} = \mathcal{L}_{\text{simple\_trunc}} + \lambda\, \mathcal{L}^{\text{GAN}}_{T_{\text{trunc}}}. \qquad (14)$$

While λ, the weight of L^GAN_{T_trunc}, can be tuned, we fix it as one for simplicity. Here the TDPM objective consists of two parts: the denoising part ε_θ is focused on denoising the truncated chain, getting updated from L_simple_trunc, while the implicit part G_ψ is focused on minimizing D(q(x_{T_trunc}) ‖ p_ψ(x_{T_trunc})), getting updated from L^GAN_{T_trunc}. An interesting finding of this paper is that we do not necessarily need to introduce a separate set of parameters ψ for the generator G_ψ, as we can simply reuse the same parameters θ of the reverse diffusion model (i.e., let ψ = θ) without clearly hurting the empirical performance. This suggests that the reverse diffusion process from T to T_trunc could be effectively approximated by a single step using the same network architecture and parameters as the reverse diffusion steps from T_trunc to 0. Therefore, we provide two configurations to parameterize the implicit distributions. 1) To save parameters, we let the implicit generator and denoising model share the same U-Net parameters but use different time step indices. Specifically, we first use $x_{T_{\text{trunc}}} = G_\psi(x_T) = \epsilon_\theta(x_T, t = T_{\text{trunc}}+1)$, where x_T ∼ N(0, I), to generate a noisy image at time T_trunc. 2) We further explore employing a different model, e.g., StyleGAN2 (Karras et al., 2020a), for the implicit generator, which provides better performance but increases the model size needed to obtain $x_{T_{\text{trunc}}}$. Then, for t = T_trunc, ..., 1, we iteratively refine it as

$$x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, t)\right) + \sqrt{\beta_t}\, z_t,$$

where z_t ∼ N(0, I) when t > 1 and z_1 = 0. This process is depicted in Algorithms 1 and 2 in the Appendix. For the implementation details, please refer to Appendix D.6 and our code at https://github.com/JegZheng/truncated-diffusion-probabilistic-models.

3.6 RELATED WORK In our previous discussions, we have related TDPM to several existing works such as DDPM and AAE. A detailed discussion of other related works is provided in Appendix B.

4 EXPERIMENTS We aim to demonstrate that TDPM can generate good samples faster by using fewer steps of reverse diffusion. We use different image datasets to test our method and follow the same settings as other diffusion models (Ho et al., 2020; Nichol & Dhariwal, 2021; Dhariwal & Nichol, 2021; Rombach et al., 2022) for our backbones. We also have two ways to set up the implicit generator that starts the reverse diffusion: one is to reuse the denoising network, and the other is to use a separate network. We try both ways for generating images without any labels. For generating images from text, we use the first way with the LDM backbone. We provide comprehensive details, toy examples, and additional experimental results in Appendices D.4-D.8. We use FID (lower is better) and Recall (higher is better) to measure the fidelity and diversity, respectively, of the generated images.
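Before presenting the results, the simplified objective (14) and the truncated sampler of Section 3.5, which are used in all experiments below, can be summarized in the following sketch; `eps_theta`, `G_psi`, and the schedule tensors `alphas`, `alphas_bar`, `betas` are assumed to exist, and `g_loss` is the generator-side GAN loss from the earlier sketch.

```python
import torch
import torch.nn.functional as F

def tdpm_loss(x0, T_trunc, alphas_bar, g_loss):
    # eq. (13): denoising loss on t = 1..T_trunc, plus the GAN loss (lambda = 1)
    t = torch.randint(1, T_trunc + 1, (x0.shape[0],), device=x0.device)
    ab = alphas_bar[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps
    return F.mse_loss(eps_theta(x_t, t), eps) + g_loss      # eq. (14)

@torch.no_grad()
def tdpm_sample(shape, T_trunc, alphas, alphas_bar, betas, device='cuda'):
    x = G_psi(torch.randn(shape, device=device))   # single step to x_{T_trunc}
    for t in range(T_trunc, 0, -1):                # T_trunc refinement steps
        z = torch.randn_like(x) if t > 1 else torch.zeros_like(x)
        t_batch = torch.full((shape[0],), t, device=device)
        x = (x - (1 - alphas[t]) / (1 - alphas_bar[t]).sqrt()
             * eps_theta(x, t_batch)) / alphas[t].sqrt() + betas[t].sqrt() * z
    return x
```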
We use the CIFAR-10 (Krizhevsky et al., 2009), LSUN-Bedroom, and LSUN-Church (Yu et al., 2015) datasets in unconditional experiments, and CUB-200 (Welinder et al., 2010) and MS-COCO (Lin et al., 2014) for text-to-image experiments. The images consist of 32 × 32 pixels for CIFAR-10 and 256 × 256 pixels for the other datasets.

4.1 EFFICIENCY IN BOTH TRAINING AND SAMPLING We first look at the results on CIFAR-10. We use DDPM (Ho et al., 2020) or improved DDPM (Nichol & Dhariwal, 2021) as our backbones. We use 4, 49, or 99 steps of reverse diffusion, which correspond to 5, 50, or 100 function evaluations (NFE). For the implicit generator, we either reuse the denoising U-Net or use a StyleGAN2 network (we refer to these as TDPM and TDPM+, respectively). For comparison, we also include DDIM (Song et al., 2020) and DDGAN (Xiao et al., 2022). A comparison with a more diverse set of baselines can be found in Table 9 in Appendix D.7.

Table 1: Results of unconditional generation on CIFAR-10, with the best FID and Recall in each group marked in bold. To compare TDPM (TTrunc=0) with GAN-based methods, we use the DDPM backbone as generator and a StyleGAN2 discriminator.

DDPM backbone
  Method               NFE    FID↓   Recall↑
  DDPM                 1000   3.21   0.57
  TDPM (TTrunc=99)     100    3.10   0.57
  TDPM+ (TTrunc=99)    100    2.88   0.58
  DDIM                 50     4.67   0.53
  TDPM (TTrunc=49)     50     3.30   0.57
  TDPM+ (TTrunc=49)    50     2.94   0.58
  TDPM (TTrunc=4)      5      3.34   0.57
  TDPM+ (TTrunc=4)     5      3.21   0.57
Improved DDPM backbone
  Improved DDPM        4000   2.90   0.58
  TDPM (TTrunc=99)     100    2.97   0.57
  TDPM+ (TTrunc=99)    100    2.83   0.58
  Improved DDPM+DDIM   50     3.92   0.55
  TDPM (TTrunc=49)     50     3.11   0.57
  TDPM+ (TTrunc=49)    50     2.96   0.58
  TDPM (TTrunc=4)      5      3.51   0.55
  TDPM+ (TTrunc=4)     5      3.17   0.57
GAN-based
  DDGAN                4      3.75   0.57
  StyleGAN2            1      8.32   0.41
  StyleGAN2-ADA        1      2.92   0.49
  TDPM (TTrunc=0)      1      7.34   0.46

Table 2: Results on LSUN-Church and LSUN-Bedroom (resolution 256 × 256). Similar to Table 1, TDPM (TTrunc=0) uses the DDPM backbone for the generator.

  Method               NFE    Church FID   Bedroom FID
DDPM backbone
  DDPM                 1000   7.89         4.90
  TDPM (TTrunc=99)     100    4.33         3.95
  TDPM+ (TTrunc=99)    100    3.98         3.67
  DDIM                 50     10.58        6.62
  TDPM (TTrunc=49)     50     5.35         4.10
  TDPM+ (TTrunc=49)    50     4.34         3.98
  TDPM (TTrunc=4)      5      4.98         4.16
  TDPM+ (TTrunc=4)     5      4.89         4.09
ADM backbone
  ADM                  1000   3.49         1.90
  ADM+DDIM             250    6.45         2.31
  TDPM (TTrunc=99)     100    4.41         2.24
  TDPM+ (TTrunc=99)    100    3.61         1.88
  TDPM (TTrunc=49)     50     4.57         2.92
  TDPM+ (TTrunc=49)    50     3.67         1.89
  TDPM (TTrunc=4)      5      5.61         7.92
  TDPM+ (TTrunc=4)     5      4.66         4.01
GAN-based
  DDGAN                4      5.25         -
  StyleGAN2            1      3.93         3.98
  StyleGAN2-ADA        1      4.12         7.89
  TDPM (TTrunc=0)      1      4.77         5.24

Table 3: Results on ImageNet-64×64, evaluated with FID and Recall. TDPM+ is built with a pre-trained ADM and an implicit model trained at TTrunc using StyleGAN-XL.

  Method                NFE    FID↓   Recall↑
  ADM                   1000   2.07   0.63
  TDPM+ (TTrunc=99)     100    1.62   0.63
  TDPM+ (TTrunc=49)     50     1.77   0.58
  TDPM+ (TTrunc=4)      5      1.92   0.53
  StyleGAN-XL (wo PG)   1      3.54   0.51

Figure 2: Random generation results of TDPM+ (TTrunc=4) on ImageNet-64×64.

Table 1 shows that our TDPM can obtain good FID with fewer NFE. TDPM+ obtains even better FID, and is the best when NFE=100. Compared with TDPM with 0 steps of reverse diffusion (a GAN with DDPM's U-Net as generator and StyleGAN2 as discriminator) and StyleGAN2, TDPM with more than 0 steps of reverse diffusion has better recall, and its FID is as good as that of StyleGAN2-ADA (a GAN with data augmentation for better training). This means TDPM can largely avoid the mode-missing problem of GANs. We show some examples of generated images on CIFAR-10 in Figure 13.
We also check how fast TDPM can train and sample. For training, we count how many images TDPM needs to see to fit both the truncated diffusion chain and the implicit prior well. Figure 3 shows that with fewer steps of reverse diffusion, the diffusion part needs less training; the implicit prior, however, needs more training because it has to model a harder distribution, e.g., fitting the implicit prior with 4 diffusion steps takes about as long as fitting it directly on the data. With 99 steps of reverse diffusion, the diffusion chain and the implicit prior need similar training, and the whole model trains faster than both the GAN and DDPM.

[Figure 3: The required iterations (measured in iterated images, log scale in kimgs) to converge during training, for Ttrunc = 0 (GAN), 4, 49, 99, and DDPM. The iterations for t < TTrunc (ϵθ) and t = TTrunc (Gψ) are marked in red and blue, respectively.]

For sampling, we compare TDPM with 0, 1, 4, 49, or 99 steps of reverse diffusion, reporting both FID and the sampling time (s/image) on one NVIDIA V100 GPU in Figure 4.

[Figure 4: Evolution of FID and the corresponding GPU time (s/image) across different numbers of sampling steps. Ttrunc=0 (GAN, 1000× speed-up): FID 7.34 (0.03 s); Ttrunc=1 (500×): 4.47 (0.06 s); Ttrunc=4 (200×): 3.41 (0.15 s); Ttrunc=49 (20×): 3.30 (1.52 s); Ttrunc=99 (10×): 3.10 (3.13 s); DDPM: 3.27 (31.03 s).]

With 4 steps of reverse diffusion, the FID is much lower than with 0 steps, while the sampling time is only slightly longer. With more steps of reverse diffusion, the FID decreases slowly, but the sampling time grows linearly. With 99 steps of reverse diffusion, the FID of TDPM is better than that of DDPM with 1000 steps. Because the FID does not change much beyond a handful of steps, we suggest using a small number of reverse diffusion steps, such as 4, to balance generation quality and speed.

4.2 RESULTS ON HIGHER-RESOLUTION AND MORE DIVERSE IMAGE DATASETS

To test the proposed truncation method on high-resolution images, we train TDPM using two different diffusion models, DDPM (Ho et al., 2020) and ADM (Dhariwal & Nichol, 2021), as backbones on two datasets of 256 × 256 resolution, LSUN-Church and LSUN-Bedroom (Yu et al., 2015). We compare the FIDs of TDPM with those of the backbone models and some state-of-the-art GANs in Table 2. The results show that TDPM can generate images of similar quality with much smaller truncation steps Ttrunc, which means it can produce images significantly faster than the backbone models. We also visualize the samples from the implicit distribution xTtrunc ∼ pθ(xTtrunc) that TDPM generates and the corresponding x0 obtained at the end of the reverse chain in Figure 5.

We further evaluate TDPM on ImageNet-1K (at resolution 64×64), which exhibits high diversity. Here we adopt the TDPM+ configuration: we use a pre-trained ADM (Dhariwal & Nichol, 2021) checkpoint for t < TTrunc and train a StyleGAN-XL (Sauer et al., 2022) based implicit model at t = TTrunc (for simplicity, we choose not to use the progressive growing pipeline of StyleGAN-XL; see Appendix D.6 for more details). We compare both FID and Recall with our backbone models in Table 3 and show example generations in Figure 2. Similar to our observations in Table 1, TDPM achieves good generation quality with small truncation steps Ttrunc.
Moreover, properly training an implicit model at Ttrunc can further improve the performance of the backbone.

Table 4: Numerical results of Figure 6 (FID for text-to-image generation). The GPU time of sampling (s/image) is measured on one NVIDIA A100.

NFE | GPU time | CUB-Bird LDM | CUB-Bird TLDM | MS-COCO LDM | MS-COCO TLDM
5 | 0.15 | 100.81 | 10.59 | 48.41 | 16.7
50 | 1.57 | 30.85 | 7.32 | 18.25 | 7.47
100 | 4.10 | 11.07 | 6.79 | 8.2 | 7.22
250 | 11.21 | 6.82 | 6.72 | 6.3 | 6.29
1000 | 41.09 | 6.68 | - | 6.29 | -

[Figure 7: Example text-to-image generation results of LDM and TLDM (i.e., TDPM with LDM backbone) fine-tuned on CUB-200 (top row; prompt: "A bird with brown wings, black back, and red head.") or MS-COCO (bottom row; prompt: "A green train is coming down the tracks."), with the number of passes through the reverse-diffusion U-Net set to 100 (left column), 50 (middle column), or 5 (right column).]

4.3 TEXT-TO-IMAGE GENERATION

Besides unconditional generation tasks, for text-to-image generation we develop TLDM, a conditional version of TDPM that leverages as its backbone the LDM of Rombach et al. (2022), a state-of-the-art publicly released model with 1.45B parameters pre-trained on LAION-400M (Schuhmann et al., 2021). LDM consists of a fixed auto-encoder for pixel generation and a latent-diffusion module to connect text and image embeddings. Here we fine-tune its latent-diffusion part on the CUB-200 and MS-COCO datasets, with 25K and 100K steps respectively, as the baseline. Similar to the unconditional case, we fine-tune with the LDM loss for t < TTrunc and the GAN loss for t = TTrunc. More details about the setting can be found in Appendix D.6. The results of LDM with different DDIM sampling steps and of TLDM with different truncation steps are summarized in Figure 6 and Table 4. As when diffusing directly in the original pixel space, when the diffusion chain is applied in the latent space we observe that TLDM can achieve comparable or better performance than LDM, even though it shortens the diffusion chain of LDM to far fewer reverse diffusion steps. When NFE is as small as 5, the FID of TLDM becomes higher due to using fewer diffusion steps, yet the images generated by TLDM at NFE=5 are still visually appealing, as shown in Figure 7. Compared with LDM using 50 and 250 steps, the sampling of TLDM with 5 steps is 10 and 50 times faster, respectively, while largely preserving generation quality. We provide additional text-to-image generation results of TLDM in Appendix D.8.

5 CONCLUSION

In this paper, we investigate how to reduce the trajectory length of the diffusion chain to achieve efficient sampling without loss of generation quality. We propose truncated diffusion probabilistic modeling (TDPM), which truncates the length of a diffusion chain. In this way, TDPM can use a much shorter diffusion chain, while being required to start the reverse denoising process from an intractable distribution. We propose to learn such a distribution with an implicit generative model powered by the same U-Net used for denoising diffusion, and validate multiple ways to learn the implicit distribution to ensure the robustness of the proposed TDPM. We reveal that TDPM can be cast as an adversarial auto-encoder with a learnable implicit prior.
We conduct extensive experiments on both synthetic and real image data to demonstrate the effectiveness of TDPM in terms of both sample quality and efficiency, where the diffusion chain can be shortened to have only a few steps.

ACKNOWLEDGMENTS

H. Zheng and M. Zhou acknowledge the support of NSF-IIS 2212418 and IFML.

A PROOF

Proof of Theorem 1. As the last terms in both losses are the same, we only need to show that the first term in (11) is smaller than or equal to $L_0 + \sum_{t=2}^{T_{\mathrm{trunc}}} L_{t-1}$ in (8). Using Jensen's inequality, we have

$$\begin{aligned}
&-\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\mathrm{trunc}}}|x_0)}\log p_\theta(x_0\,|\,x_{T_{\mathrm{trunc}}})\\
&= -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\mathrm{trunc}}}|x_0)}\log \mathbb{E}_{q(x_{1:T_{\mathrm{trunc}}-1}|x_0,\, x_{T_{\mathrm{trunc}}})}\left[\frac{p(x_{0:T_{\mathrm{trunc}}-1}\,|\,x_{T_{\mathrm{trunc}}})}{q(x_{1:T_{\mathrm{trunc}}-1}\,|\,x_0, x_{T_{\mathrm{trunc}}})}\right]\\
&\le -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\mathrm{trunc}}}|x_0)}\mathbb{E}_{q(x_{1:T_{\mathrm{trunc}}-1}|x_0,\, x_{T_{\mathrm{trunc}}})}\log\frac{p(x_{0:T_{\mathrm{trunc}}-1}\,|\,x_{T_{\mathrm{trunc}}})}{q(x_{1:T_{\mathrm{trunc}}-1}\,|\,x_0, x_{T_{\mathrm{trunc}}})}\\
&= -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{1:T_{\mathrm{trunc}}}|x_0)}\log\left[\frac{p(x_{0:T_{\mathrm{trunc}}-1})}{q(x_{1:T_{\mathrm{trunc}}}\,|\,x_0)}\cdot\frac{q(x_{T_{\mathrm{trunc}}}\,|\,x_0)}{p(x_{T_{\mathrm{trunc}}})}\right]\\
&= \left(-\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{1:T_{\mathrm{trunc}}}|x_0)}\log\frac{p(x_{0:T_{\mathrm{trunc}}-1})}{q(x_{1:T_{\mathrm{trunc}}}\,|\,x_0)}\right) - \mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\mathrm{trunc}}}|x_0)}\log\frac{q(x_{T_{\mathrm{trunc}}}\,|\,x_0)}{p(x_{T_{\mathrm{trunc}}})}\\
&= \left(\sum_{t=1}^{T_{\mathrm{trunc}}} L_{t-1} + L_{T_{\mathrm{trunc}}}\right) - L_{T_{\mathrm{trunc}}} = \sum_{t=1}^{T_{\mathrm{trunc}}} L_{t-1},
\end{aligned}\tag{15}$$

where the second-to-last equality follows the same derivation as the ELBO in Ho et al. (2020).

B RELATED WORK

Diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020) employ a forward Markov chain to diffuse the data to noise and learn the reversal of such a diffusion process. Exploiting Markov operations (Goyal et al., 2017; Alain et al., 2016; Bordes et al., 2017), diffusion models achieve great success and inspire a variety of tasks, including image generation and audio generation (Kong et al., 2020; Chen et al., 2020; Jolicoeur-Martineau et al., 2020; Vahdat et al., 2021). Recently, many studies have been proposed to generalize diffusion models to continuous time and to improve their likelihood estimation (Vincent, 2011; Song & Ermon, 2020; 2019; Nichol & Dhariwal, 2021; Song et al., 2021b;a; Kingma et al., 2021). Another mainstream direction is to improve the sampling efficiency of diffusion models, which are known for their enormous number of sampling steps. Luhman & Luhman (2021) improve diffusion processes with knowledge distillation, and San-Roman et al. (2021) propose a learnable adaptive noise schedule. Song et al. (2020) and Kong & Ping (2021) exploit non-Markovian diffusion processes and shorten the denoising segments. Jolicoeur-Martineau et al. (2021) and Huang et al. (2021) use better SDE solvers for continuous-time models. Aside from these works, other types of generative models, such as VAEs (Kingma & Welling, 2013), GANs (Goodfellow et al., 2014), and autoregressive models (van den Oord et al., 2016), have recently been incorporated into diffusion models. They are shown to benefit each other (Xiao et al., 2022; Pandey et al., 2022; Meng et al., 2021) and bear a closer relation to our work. Xiao et al. (2022) consider the use of implicit models (Huszár, 2017; Mohamed & Lakshminarayanan, 2016; Tran et al., 2017; Yin & Zhou, 2018; Li & Malik, 2018) to boost the efficiency of diffusion models, deploying an implicit model in each denoising step, which becomes harder to train as the number of diffusion steps increases. Pandey et al. (2022) build diffusion models on top of the output of VAEs for refinement. Our work is also related if TDPM is viewed as a diffusion model on top of an implicit model, where the implicit model can be parameterized with the U-Net or with a separate network.
C DISCUSSION

Potential societal impacts: This paper proposes the truncated diffusion probabilistic model as a novel type of diffusion-based generative model. The truncated part can be trained as an implicit generative model, such as a GAN, jointly or independently with the diffusion part. The capabilities of truncated diffusion probabilistic models are competitive with existing diffusion-based ones, while efficiency is largely improved. Alongside these positive effects, some negative uses could also be foreseen, depending on how the models are deployed. One major concern is that the truncated diffusion technique proposed in this paper could potentially be used to attack existing diffusion models if implicit models are maliciously fit to the intermediate steps. For example, for some existing diffusion models, out of safety concerns, the model's capacity to generate private data may be locked by hiding the diffusion ending point in an unknown distribution. The technique of TDPM could be used to crack such online diffusion models by providing intermediate noisy images, or by fine-tuning the first few steps with TDPM to unlock that capacity. Besides, the capacity to generate good images can also be misused to generate ill-intentioned images at a much lower cost.

Discussions: In this work, we mainly focus on reducing the length of the diffusion chain of a finite-time diffusion model. Our model has shown its effectiveness in improving finite-time diffusion models, and it would be non-trivial but interesting to further explore our model on continuous-time diffusion models (Song et al., 2021b). Moreover, while DDPM is the primary baseline in this paper, TDPM can also be built on other recent diffusion models. While pθ(xTtrunc) is parameterized as an implicit distribution, it can also be formulated as a semi-implicit distribution (Yin & Zhou, 2018), which allows it to be approximated with a Gaussian generator. Xiao et al. (2022) also present a closely related work. While we share the same spirit of reducing the length of the diffusion chain, the two strategies do not conflict with each other, and in future work we will look into their integration. There also exist plenty of options for approximating pθ(xTtrunc). When the diffusion chain is truncated to be short, the distribution to fit remains multi-modal, and different fitting methods may be preferred depending on the properties we need; for example, in order to capture all modes, a VAE would be preferred, as done in Pandey et al. (2022). Below we provide an alternative method, proposed in Zheng & Zhou (2021), to fit the truncated distribution. Beyond training, it is also an open question whether TDPM can be incorporated into more advanced architectures for further improvements, and we leave this exploration for future work.

D ALGORITHM DETAILS AND COMPLEMENTARY RESULTS

Below we provide additional algorithm details and complementary experimental results.

D.1 ADDITIONAL ANALYSIS ON THE PARAMETERIZATION OF THE IMPLICIT GENERATOR

As shown in Section 3, in general, the objective of TDPM consists of training the diffusion model ϵθ (a U-Net architecture (Ronneberger et al., 2015)) with the simple DDPM loss $L_{\mathrm{simple}}$ and training an implicit prior model $G_\psi$ with the objective $L^{\mathrm{GAN}}_{T_{\mathrm{trunc}}}$.
Without loss of generality, in the main paper we show two configurations to parameterize the implicit part for $t = T_{\mathrm{trunc}}$: 1) the implicit generator shares the same U-Net architecture used for $0 < t < T_{\mathrm{trunc}}$; 2) the implicit generator is instantiated with a separate network (denoted TDPM+ in the main paper). Below we explain these two configurations.

Configuration 1): At $t = T_{\mathrm{trunc}}$, the U-Net generates the noisy image at the truncated step: $x_{T_{\mathrm{trunc}}} = \epsilon_\theta(x_{T_{\mathrm{trunc}}+1}, t = T_{\mathrm{trunc}}+1)$, where $x_{T_{\mathrm{trunc}}+1} \sim \mathcal{N}(0, I)$ is a pure-noise image whose pixels are iid sampled from a standard normal. For $t = T_{\mathrm{trunc}}, T_{\mathrm{trunc}}-1, \ldots, 1$, the same U-Net iteratively refines the noisy images via

$$x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}\,\epsilon_{t-1}\right) + \sqrt{\beta_t}\, z_t,\qquad z_{t>1}\sim\mathcal{N}(0, I),\ z_1 = 0,$$

where $\epsilon_{t-1} = \epsilon_\theta(x_t, t)$ is the noise predicted by the U-Net. Under this setting, the U-Net-based generator plays two roles at the same time, and training is more challenging than with two separate generators. However, as $T_{\mathrm{trunc}}$ gets larger, the distribution $p(x_{T_{\mathrm{trunc}}})$ becomes more similar to a noise distribution, and generating the noisy images becomes more like generating noise; in this case, handling both noisy-image generation and noise prediction becomes easier for the generator.

Configuration 2) (TDPM+): Unlike the previous configuration, where the implicit generator at step $t = T_{\mathrm{trunc}}$ shares the same U-Net architecture as $t < T_{\mathrm{trunc}}$, another option is to parameterize $G_\psi$ with a separate generator. Although this configuration increases the total number of parameters of the generative model, it allows better flexibility during training. For example, the two networks can be trained in parallel, or a pre-trained model can be leveraged. In our paper, we conduct the experiments using the StyleGAN2 generator architecture (Karras et al., 2020b) for $t = T_{\mathrm{trunc}}$, which adds 19M and 28M generator parameters when handling 32 × 32 and 256 × 256 images, respectively.

The processes of training and sampling under these configurations are summarized in Algorithms 1 and 2 (a PyTorch-style sketch of Algorithm 2 accompanies Section 4).

Algorithm 1 Training
1: repeat
2:   x0 ∼ q(x0)
3:   t ∼ Uniform({1, . . . , Ttrunc})
4:   ϵt ∼ N(0, I), z ∼ N(0, I)
5:   Update with (14)
6: until converged

Algorithm 2 Sampling
1: xTtrunc+1 ∼ N(0, I)
2: if Gψ is shared with ϵθ then
3:   xTtrunc = ϵθ(xTtrunc+1, Ttrunc + 1)
4: else
5:   xTtrunc = Gψ(xTtrunc+1)
6: end if
7: for t = Ttrunc, . . . , 1 do
8:   zt ∼ N(0, I) if t > 1, else z1 = 0
9:   xt−1 = (1/√αt) (xt − ((1 − αt)/√(1 − ᾱt)) ϵθ(xt, t)) + √βt zt
10: end for
11: return x0

D.2 ALTERNATIVES FOR LEARNING THE IMPLICIT DISTRIBUTION

Another possible statistical distance is based on conditional transport (CT) (Zheng & Zhou, 2021), which is proposed to balance the mode-seeking and mode-covering behaviors when fitting an empirical data distribution. In this setting, we use the same generator $G_\psi$ as before, but instead of a discriminator, we use a conditional distribution $\pi_\eta$, parameterized by $\eta$, to find an optimized mapping between the samples of $p$ and $q$, and a critic $\phi$ to measure the point-to-point cost $c_\phi$ in the feature space. The generator, the conditional distribution, and the critic are trained with the following objective $L^{\mathrm{CT}}_{T_{\mathrm{trunc}}}$:

$$\min_{\psi,\eta}\max_{\phi}\ \mathbb{E}_{x\sim q(x_{T_{\mathrm{trunc}}})}\left[\mathbb{E}_{G_\psi(z)\sim\pi_\eta(G_\psi(z)\,|\,x_{T_{\mathrm{trunc}}})}\, c_\phi(x_{T_{\mathrm{trunc}}}, G_\psi(z))\right] + \mathbb{E}_{z\sim p(z)}\left[\mathbb{E}_{x_{T_{\mathrm{trunc}}}\sim\pi_\eta(x_{T_{\mathrm{trunc}}}\,|\,G_\psi(z))}\, c_\phi(x_{T_{\mathrm{trunc}}}, G_\psi(z))\right].\tag{16}$$

Similar to (14), we fit TDPM-CT with the following loss:

$$L^{\mathrm{CT}}_{\mathrm{TDPM}} = L_{\mathrm{simple\_trunc}} + \lambda L^{\mathrm{CT}}_{T_{\mathrm{trunc}}}.\tag{17}$$
We empirically find that this objective yields no significant performance difference from the GAN objective in Equation 14, as long as the generator is well trained.

D.3 CONDITIONAL TRUNCATED DIFFUSION PROBABILISTIC MODELS

For conditional generation, we extend (14) and derive a conditional version of TDPM:

$$L_{\mathrm{cTDPM}} = L^{c}_{\mathrm{simple\_trunc}} + \lambda L^{c}_{T_{\mathrm{trunc}}},\tag{18}$$

where $L^{c}_{\mathrm{simple\_trunc}}$ trains the conditional diffusion model via

$$L^{c}_{\mathrm{simple\_trunc}} = \mathbb{E}_{c}\,\mathbb{E}_{t,\, x_0|c,\, \epsilon_t}\left[\|\epsilon_t - \epsilon_\theta(x_t, c, t)\|^2\right],\quad t\sim\mathrm{Unif}(1, 2, \ldots, T_{\mathrm{trunc}}),\ \epsilon_t\sim\mathcal{N}(0, I),\tag{19}$$

and the truncated distribution term $L^{c}_{T_{\mathrm{trunc}}}$ can be fitted with either a GAN objective,

$$\min_\psi\max_\phi\ \mathbb{E}_c\Big[\mathbb{E}_{x\sim q(x_{T_{\mathrm{trunc}}}|c)}[\log D_\phi(x\,|\,c)] + \mathbb{E}_{z\sim p(z)}[\log(1 - D_\phi(G_\psi(z, c)\,|\,c))]\Big],\tag{20}$$

or a CT objective,

$$\min_{\psi,\eta}\max_\phi\ \mathbb{E}_c\Big[\mathbb{E}_{x\sim q(x_{T_{\mathrm{trunc}}}|c)}\big[\mathbb{E}_{G_\psi(z,c)\sim\pi_\eta(G_\psi(z,c)\,|\,x_{T_{\mathrm{trunc}}},c)}\, c_\phi(x_{T_{\mathrm{trunc}}}, G_\psi(z,c))\big] + \mathbb{E}_{z\sim p(z)}\big[\mathbb{E}_{x_{T_{\mathrm{trunc}}}\sim\pi_\eta(x_{T_{\mathrm{trunc}}}\,|\,G_\psi(z,c),c)}\, c_\phi(x_{T_{\mathrm{trunc}}}, G_\psi(z,c))\big]\Big].\tag{21}$$

D.4 ANALYSIS ON TOY EXPERIMENTS

Although we present image experiments in the main paper, we first justified our method on synthetic toy data as a proof of concept. We adopt representative 2D synthetic datasets used in prior works (Gulrajani et al., 2017; Zheng & Zhou, 2021), including Swiss Roll, Double Moons, 8-modal, and 25-modal Gaussian mixtures with equal component weights. We use an empirical sample set X consisting of |X| = 2,000 samples and illustrate the generated samples after 5,000 training epochs. We take 20 grids in the range [−10, 10] for both the x and y axes to approximate the empirical distributions p̂θ and q̂, and report the corresponding forward KL, DKL(q̂||p̂θ), as the quantitative evaluation metric.

Figure 8 shows the results on the Swiss Roll data. We present a short chain with T = 2 and a longer chain with T = 5 to show the impact of the number of diffusion steps. The first row shows that the data distribution is diffused with accumulated noise, and with more steps the diffused distribution gets closer to an isotropic Gaussian distribution. As one can see, truncating the diffusion chain to a short length results in a clear gap between q(xTtrunc) and N(0, I). When DDPM (shown in the second row) samples from the isotropic Gaussian distribution, it becomes hard to recover the original data distribution from pure noise with only a few steps. Although DDPM improves slightly with a few more steps (T = 5), as long as q(xT) is not close to Gaussian, DDPM can hardly recover the data distribution. By contrast, as shown in the third and fourth rows, TDPM successfully approximates the non-Gaussian q(xTtrunc) with its implicit generator, and the remaining part of the truncated chain is gradually recovered by the denoising steps. From both the visualizations and DKL(q̂||p̂θ), we can see that TDPM is able to fit every step of such short chains. TDPM-GAN and TDPM-CT both succeed in fitting pθ(xTtrunc), but the latter fits slightly better when the diffusion length is 2. When the length increases to 5, fitting the implicit distribution with a GAN becomes easier. This observation demonstrates a benefit of combining diffusion models and GANs: if the implicit generator is sufficiently powerful to model q(xTtrunc), then the number of steps needed can be compressed to a small number; on the contrary, if the implicit generator cannot capture the distribution, more steps are needed to facilitate fitting the data distribution.
As shown in Figures 9-11, the 8-modal Gaussian mixture is more similar to an isotropic Gaussian after being diffused, so DDPM can recover a distribution similar to the data with 5 steps. On the 25-modal Gaussian mixture, we observe that the GAN does not suffer from mode collapse and provides a better approximation than CT, which results in better recovery of the data distribution at the final step.

D.5 ADDITIONAL ABLATION STUDIES

Using pre-trained diffusion backbones: Different from the default setting, here we combine the implicit model of TDPM+ trained at t = Ttrunc with a pre-trained DDPM model¹ in the same sampling pipeline. In this case, we do not need to spend any time training the diffusion model and only need to train the implicit model for t = Ttrunc. As shown in Table 5, when combined with a pre-trained DDPM for t < Ttrunc, the generation performance of TDPM trained under this two-step procedure is comparable to that of TDPM trained end-to-end.

¹ The pre-trained checkpoints are provided by https://github.com/pesser/pytorch_diffusion.

Sensitivity to noise schedule: Nichol & Dhariwal (2021) show that the noise schedule affects the training of DDPM. Here we examine whether TDPM is sensitive to the choice of noise schedule. We compare the linear schedule with the cosine schedule, which adds noise in a milder manner. The results on CIFAR-10 are reported in Table 6 and suggest that TDPM is not sensitive to the choice between these two schedules.

On the choice of the truncation step: Since the diffused distribution can facilitate the learning of the implicit generator Gψ (Arjovsky & Bottou, 2017), we observe that increasing the number of diffusion steps consistently improves the FID of TDPM. A natural question is at which step the diffusion chain should be truncated. We study the signal-to-noise ratio (SNR) of different diffusion steps. Based on $q(x_t\,|\,x_0) = \mathcal{N}(\sqrt{\bar\alpha_t}\,x_0, (1-\bar\alpha_t)I)$, we calculate the SNR as

$$\mathrm{SNR} = \frac{\sqrt{\bar\alpha_t}}{\sqrt{1-\bar\alpha_t}},\qquad \bar\alpha_t = \prod_{i=1}^{t}(1-\beta_i).$$

We visualize the evolution of the SNR across time steps t > 0 in Figure 12 (a short computational sketch follows at the end of this subsection), where we observe that the SNR decays rapidly in the first 100 steps. According to previous studies (Arjovsky & Bottou, 2017), injecting noise into the data distribution can smoothen the support of the data distribution and facilitate GAN training. The SNR change in this interval indicates that injecting noise at a level of t ∈ [1, 100] brings the most significant improvement for GAN training. When the step is greater than 200, the SNR change is no longer significant and is close to zero, which indicates that the implicit model might not be too informative there, though it is easier to train. Our experimental observations in Figure 3 also support this conclusion: when training a GAN at TTrunc = 4, the required number of iterations is similar to training it on clean data, whereas training the GAN model at TTrunc = 99 is significantly facilitated. For TTrunc > 100, we empirically find that training a GAN converges faster than training the diffusion model for t < TTrunc.

Comparison of model efficiency: To complement the results in Tables 1-2, we provide detailed model sizes and generation times on a V100 GPU, summarized in Table 7. TDPM has an increased total number of parameters, as it involves a discriminator to help train the implicit model, but its gain in sampling efficiency is also evident.
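As a quick check of the SNR analysis above, the curve can be computed directly from the noise schedule; a short sketch, assuming the DDPM linear schedule (β1 = 10⁻⁴, βT = 0.02, T = 1000) described in Appendix D.6.2:

```python
import torch

# SNR(t) = sqrt(abar_t) / sqrt(1 - abar_t) for the DDPM linear schedule.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)   # \bar{alpha}_t
snr = alpha_bars.sqrt() / (1.0 - alpha_bars).sqrt()

for step in [1, 4, 49, 99, 199, 499, 999]:
    print(f"t = {step:4d}   SNR = {snr[step - 1]:.4f}")  # 0-indexed storage

# Truncating at T_trunc simply keeps the first rates, betas[:T_trunc] (D.6.2).
```

Running this reproduces the qualitative behavior described above: the SNR drops steeply within the first ~100 steps and is close to zero well before t = 200.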
D.6 EXPERIMENTAL SETTINGS

D.6.1 MODEL ARCHITECTURE

Generator: Our generator structure strictly follows the U-Net structure (Ronneberger et al., 2015) used in DDPM, improved DDPM, and ADM (Ho et al., 2020; Nichol & Dhariwal, 2021; Dhariwal & Nichol, 2021), which consists of multiple ResNet blocks (He et al., 2016) with attention blocks (Vaswani et al., 2017) injected in the bottleneck. Please refer to those papers for more details on the architecture. A key difference between our model and previous diffusion models is that our model also trains this U-Net as an extra implicit generator Gθ that takes a latent variable z ∼ N(0, I) and a fixed time index t = Ttrunc + 1 as input; however, this does not change the generator architecture. We parameterize Gθ with the same U-Net architecture for simplicity, and the time embedding at t = Ttrunc + 1 is trained with the implicit loss shown in (12) or (16). We also tested using an all-zero time embedding for t = Ttrunc + 1 and found no clear difference. For our TDPM+ results, the generator Gψ instead takes a StyleGAN2 architecture (Karras et al., 2020b) and uses no time embedding; the increase in generator parameters is caused by separating the implicit model from the denoising U-Net. Note that the generator is trained with the GAN loss and without the specially designed adaptive augmentation of Karras et al. (2020a). For the detailed model architecture, please refer to the corresponding paper or its GitHub repository: https://github.com/NVlabs/stylegan2-ada-pytorch.

Discriminator: Similar to Xiao et al. (2022), we adopt the discriminator architecture used in Karras et al. (2020b), but without the time-step input. The discriminator judges whether xTtrunc comes from the diffused distribution q(xTtrunc) or from the implicit generative distribution pθ(xTtrunc). Please refer to Appendix C of Xiao et al. (2022) for the detailed design.

Navigator: Training with $L^{\mathrm{CT}}_{T_{\mathrm{trunc}}}$ involves an extra module named the navigator (Zheng & Zhou, 2021). We strictly follow the architecture used in Zheng & Zhou (2021), where the navigator is an MLP taking pairwise feature distances as input. No time embedding is used in the navigator, as it is only used for training at t = TTrunc. The features are extracted from the layer before the final scalar output. Please refer to their Appendix D for detailed information.

Architecture for text-to-image experiments: We adopt the 1.45B-parameter LDM model (Rombach et al., 2022) pre-trained on the LAION-400M dataset (Schuhmann et al., 2021). The LDM model consists of a KL-regularized autoencoder with downsampling factor 8 (resolution 256 → 32), a U-Net in the latent space, and a BERT (Devlin et al., 2018) text encoder that transforms raw text into a sequence of 1280-dimensional embeddings. We only fine-tune the latent-diffusion module in our experiments. When training the truncated part, the discriminator takes the first half of the U-Net (the downsampling backbone) with a linear prediction head on top of it.

Architecture for toy experiments: The generator stacks 4 linear layers with 128 hidden units each. Each intermediate layer is equipped with a time-embedding layer and is followed by a softplus activation. The discriminator and navigator use the same architecture, without time-embedding layers and with LeakyReLU as the activation function.
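To make the toy architecture concrete, here is a minimal PyTorch sketch consistent with the description above; the additive way the time embedding enters each hidden layer and the embedding-table size are our assumptions, as they are not specified in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGenerator(nn.Module):
    """Toy 2D generator sketch: 4 linear layers with 128 hidden units,
    time embeddings on intermediate layers, softplus activations."""
    def __init__(self, dim=2, hidden=128, n_steps=8):
        super().__init__()
        self.time_emb = nn.Embedding(n_steps, hidden)  # table size: assumed
        self.fc_in = nn.Linear(dim, hidden)
        self.fc_mid1 = nn.Linear(hidden, hidden)
        self.fc_mid2 = nn.Linear(hidden, hidden)
        self.fc_out = nn.Linear(hidden, dim)

    def forward(self, x, t):
        emb = self.time_emb(t)                  # (B, hidden)
        h = F.softplus(self.fc_in(x) + emb)     # additive conditioning: assumed
        h = F.softplus(self.fc_mid1(h) + emb)
        h = F.softplus(self.fc_mid2(h) + emb)
        return self.fc_out(h)

# e.g., noise prediction for a batch of 2D points at step t = 3
net = ToyGenerator()
x = torch.randn(16, 2)
t = torch.full((16,), 3, dtype=torch.long)
print(net(x, t).shape)  # torch.Size([16, 2])
```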
D.6.2 TRAINING CONFIGURATIONS

Datasets: We use the CIFAR-10 (Krizhevsky et al., 2009), LSUN-Bedroom, and LSUN-Church (Yu et al., 2015) datasets for unconditional generation in the main experiments. Additionally, we apply CelebA (Liu et al., 2015) and CelebA-HQ (Lee et al., 2020) for complementary justification. For text-to-image experiments, we use CUB-200 (Welinder et al., 2010) and MS-COCO (Lin et al., 2014). The images consist of 32 × 32 pixels for CIFAR-10. For the other datasets, we apply a center crop along the short edge and resize to the target resolution (64 × 64 for CelebA; 256 × 256 for the others).

Diffusion schedule: For all datasets, we strictly follow the diffusion process used in our backbone models and instantiate the truncated diffusion schedule by taking the first TTrunc diffusion rates {β1, ..., βTTrunc}. For example, if our goal is to fit a model with NFE = 50 by truncating the diffusion process used in Ho et al. (2020) (β1 = 10⁻⁴, βT = 0.02, T = 1000), we first initialize β1, β2, ..., β1000 and then take the first 49 steps to complete the truncation.

Optimization: We train our models using the Adam optimizer (Kingma & Ba, 2015), where most of the hyperparameters match the settings in Xiao et al. (2022); we slightly modify the generator learning rate to match the setting in Ho et al. (2020), as shown in Table 8. We train our models on V100 GPUs, with CUDA 10.1 and PyTorch 1.7.1. Training takes approximately 2 days on CIFAR-10 with 4 GPUs, and a week on CelebA-HQ and LSUN-Church with 8 GPUs.

Table 8: Optimization hyper-parameters.

 | CIFAR10 | CelebA | CelebA-HQ | LSUN
Initial learning rate for discriminator | 10⁻⁴ | 10⁻⁴ | 10⁻⁴ | 10⁻⁴
Initial learning rate for navigator (if applicable) | 10⁻⁴ | 10⁻⁴ | 10⁻⁴ | 10⁻⁴
Initial learning rate for generator | 1 × 10⁻⁵ | 1 × 10⁻⁵ | 2 × 10⁻⁵ | 2 × 10⁻⁵
Adam optimizer β1 | 0.5 | 0.5 | 0.5 | 0.5
Adam optimizer β2 | 0.9 | 0.9 | 0.9 | 0.9
EMA | 0.9999 | 0.9999 | 0.9999 | 0.9999
Batch size | 128 | 128 | 64 | 64
# of training iterations | 800k | 800k | 0.5M | 2.4M (bedroom) / 1.2M (church)
# of GPUs | 4 | 8 | 8 | 8

For TDPM+, where we use the StyleGAN2 generator as Gψ, we directly use its original training hyper-parameters and train the model in parallel with the diffusion model. For TLDM, we set the base learning rate to 10⁻⁵ and the mini-batch size to 64. For the ImageNet1K-64×64 experiments, we use the StyleGAN-XL generator as Gψ and strictly follow all its default training hyper-parameters. To simplify the implementation and save computation, instead of applying the default progressive-growing pipeline 16 × 16 → 32 × 32 → 64 × 64, we directly train the implicit model on 64 × 64 images corrupted at TTrunc. Without the progressive-growing pipeline, the StyleGAN-XL result shown in Table 3 is clearly worse than the progressive one reported in their paper (FID 1.51). However, when used as the implicit model of TDPM, the final performance of TDPM becomes competitive with that result.

Evaluation: When evaluating the sampling time, we use models trained on CIFAR-10 and generate a batch of 128 samples. When evaluating FID and the Recall score, following convention, we use 50k generated samples for CIFAR-10, LSUN-Bedroom, and LSUN-Church, 30k samples for CelebA-HQ (since the CelebA-HQ dataset contains only 30k samples), and 30k samples for the text-to-image datasets. The Recall scores are calculated with the recipe of Kynkäänniemi et al. (2019). In the sampling stage, we follow our backbones and apply the same guidance in the diffusion part (t < TTrunc) when applicable.
Specifically, for the LDM backbone, we use classifier-free guidance (Ho & Salimans, 2022) with scale 1.5, and no DDIM steps are used for TLDM.

D.7 ADDITIONAL RESULTS ON UNCONDITIONAL GENERATION

[Figure 14: Qualitative results of TDPM on LSUN-Church (256 × 256), with Ttrunc = 99, 49, and 4. Note that NFE = Ttrunc + 1 in TDPM. Each group presents generated samples from pθ(x0) (left) and pθ(xTtrunc) (right).]

[Figure 15: Analogous qualitative results to Figure 14 on LSUN-Bedroom, produced by TDPM.]

[Figure 16: Analogous qualitative results to Figure 14 on CelebA-HQ, produced by TDPM.]

[Figure 18: Analogous qualitative results to Figure 14 on LSUN-Bedroom, produced by TDPM-CT.]

[Figure 19: Analogous qualitative results to Figure 14 on CelebA-HQ, produced by TDPM-CT.]

D.8 ADDITIONAL RESULTS ON TEXT-TO-IMAGE GENERATION

[Figure 20: Additional text-to-image generation results with different text prompts, produced by TLDM with Ttrunc = 49. Prompts include: "A white and gray bird with black wings."; "An airplane flying over a body of water."; "A sign reads 'TDPM'."; "Busy city street at dusk with sun setting."; "The bagel is put in a square plate."; "The bathroom has a big mirror."; "A cluster of flowers on the wooden table."]
1. What is the focus and contribution of the paper on faster synthesis in diffusion models?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its sampling procedure and implicit modeling?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, including comparisons with prior works?
4. Are there any concerns or questions regarding the choice of TTrunc and trade-offs between speed, sample quality, and mode coverage?
Summary Of The Paper

For faster synthesis in diffusion models, this work proposes a sampling procedure that simulates a truncated diffusion process. For example, if a standard diffusion model simulates a diffusion process for t ∈ [0, T], its truncated variant (which the authors call the TDPM) simulates t ∈ [0, T_Trunc], where T_Trunc < T. To then sample from this process, x is drawn from p_{T_Trunc} (rather than p_T), and then the diffusion is reversed to t = 0 as in a standard diffusion model. Of course, p_{T_Trunc} is usually not a simple distribution anymore and cannot be trivially sampled from; the authors thus choose to model it implicitly in a GAN-like fashion.

Strengths And Weaknesses

Strengths:
- TDPMs boast shorter diffusion times.
- There appears to be little sacrifice in sample quality (in terms of FID) and mode coverage (in terms of recall).

Weaknesses:
- The ability to perform likelihood evaluations via the deterministic ODE framework is lost, as the p_{T_Trunc} model is now implicit.
- It is unclear how to choose T_Trunc, and how to control the trade-off between speed versus sample quality and mode coverage.

Clarity, Quality, Novelty And Reproducibility

The central idea in this paper is finding a suitable middle ground between GANs and diffusion models (DMs). In this sense, the novelty of the paper is somewhat limited, as a very similar concept has been explored in [1]. However, quantitatively speaking, the model performs better than [1] in terms of both FID and Recall, and also better than many other competing methods that exist in the space of fast diffusion models (e.g., DDIM, FastDDPM, Distilled Diffusion). Therefore, it may suggest that refining GAN predictions with a score may be more effective than using GANs throughout.

[1] Xiao, Z., Kreis, K. and Vahdat, A., 2021. Tackling the generative learning trilemma with denoising diffusion GANs. arXiv preprint arXiv:2112.07804.
ICLR
Title Truncated Diffusion Probabilistic Models and Diffusion-based Adversarial Auto-Encoders Abstract Employing a forward diffusion chain to gradually map the data to a noise distribution, diffusion-based generative models learn how to generate the data by inferring a reverse diffusion chain. However, this approach is slow and costly because it needs many forward and reverse steps. We propose a faster and cheaper approach that adds noise not until the data become pure random noise, but until they reach a hidden noisy-data distribution that we can confidently learn. Then, we use fewer reverse steps to generate data by starting from this hidden distribution that is made similar to the noisy data. We reveal that the proposed model can be cast as an adversarial auto-encoder empowered by both the diffusion process and a learnable implicit prior. Experimental results show even with a significantly smaller number of reverse diffusion steps, the proposed truncated diffusion probabilistic models can provide consistent improvements over the non-truncated ones in terms of performance in both unconditional and text-guided image generations. 1 INTRODUCTION Generating photo-realistic images with probabilistic models is a challenging and important task in machine learning and computer vision, with many potential applications in data augmentation, image editing, style transfer, etc. Recently, a new class of image generative models based on diffusion processes (Sohl-Dickstein et al., 2015) has achieved remarkable results on various commonly used image generation benchmarks (Song & Ermon, 2019; Ho et al., 2020; Song & Ermon, 2020; Song et al., 2021b; Dhariwal & Nichol, 2021), surpassing many existing deep generative models, such as autoregressive models (van den Oord et al., 2016), variational auto-encoders (VAEs) (Kingma & Welling, 2013; Rezende et al., 2014; van den Oord et al., 2017; Razavi et al., 2019), and generative adversarial networks (GANs) (Goodfellow et al., 2014; Radford et al., 2015; Arjovsky et al., 2017; Miyato et al., 2018; Brock et al., 2019; Karras et al., 2019; 2020b). This new modeling class, which includes both score-based and diffusion-based generative models, uses noise injection to gradually corrupt the data distribution into a simple noise distribution that can be easily sampled from, and then uses a denoising network to reverse the noise injection to generate photo-realistic images. From the perspective of score matching (Hyvärinen & Dayan, 2005; Vincent, 2011) and Langevin dynamics (Neal, 2011; Welling & Teh, 2011), the denoising network is trained by matching the score function, which is the gradient of the log-density of the data, of the corrupted data distribution and that of the generator distribution at different noise levels (Song & Ermon, 2019). This training objective can also be formulated under diffusion-based generative models (Sohl-Dickstein et al., 2015; Ho et al., 2020). These two types of models have been further unified by Song et al. (2021b) under the framework of discretized stochastic differential equations. Despite their impressive performance, diffusion-based (or score-based) generative models suffer from high computational costs, both in training and sampling. 
This is because they need to perform a large number of diffusion steps, typically hundreds or thousands, to ensure that the noise injection at each step is small enough for the assumption that both the diffusion and denoising processes have the Gaussian form, which holds in the limit of small diffusion rates (Feller, 1949; Sohl-Dickstein et al., 2015). In other words, when the number of diffusion steps is small or the rate is large, the Gaussian assumption may not hold well, and the model may fail to capture the true score function of the data. Therefore, previous works have tried to reduce the number of diffusion steps by using non-Markovian reverse processes (Song et al., 2020; Kong & Ping, 2021), adaptive noise scheduling (San-Roman et al., 2021; Kingma et al., 2021), knowledge distillation (Luhman & Luhman, 2021; Salimans & Ho, 2022), diffusing in a lower-dimensional latent space (Rombach et al., 2022), etc., but they still cannot achieve a significant speedup without sacrificing generation quality. In this paper, we propose a novel way to shorten the diffusion trajectory by learning an implicit distribution to start the reverse diffusion process, instead of relying on a tractable noise distribution. We call our method truncated diffusion probabilistic modeling (TDPM), which is based on the idea of truncating the forward diffusion chain of an existing diffusion model, such as the denoising diffusion probabilistic model (DDPM) of Ho et al. (2020). To significantly accelerate diffusion-based text-to-image generation, we also introduce the truncated latent diffusion model (TLDM), which truncates the diffusion chain of the latent diffusion model (LDM) of Rombach et al. (2022). We note LDM is the latent text-to-image diffusion model behind Stable Diffusion, an open-source project that provides state-of-the-art performance in generating photo-realistic images given text input. By truncating the chain, we can reduce the number of diffusion steps to an arbitrary level, but at the same time, we also lose the tractability of the distribution at the end of the chain. Therefore, we need to learn an implicit generative distribution that can approximate this distribution and provide the initial samples for the reverse diffusion process. We show that this implicit generative distribution can be implemented in different ways, such as using a separate generator network or reusing the denoising network. The former option offers more flexibility and can improve the generation quality, while the latter option adds no parameters and achieves comparable results. We reveal that DDPM and VAE have a relationship similar to that of TDPM and the adversarial auto-encoder (AAE, Makhzani et al. (2015)). Specifically, DDPM is like a VAE with a fixed, diffusion-based encoder, a learnable decoder, and a predefined prior, whereas TDPM is like an AAE with a fixed, truncated-diffusion encoder, a learnable decoder, and a learnable implicit prior. Our truncation method has several advantages when we use it to modify DDPM for generating images without text guidance or LDM for generating images with text guidance. First, it can generate samples much faster by using fewer diffusion steps, without sacrificing, and sometimes even enhancing, the generation quality.
Second, it can exploit the cooperation between the implicit model and the diffusion model: the diffusion model helps the implicit model train by providing noisy data samples, and the implicit model helps the diffusion model reverse by providing better initial samples. Third, it can adapt the truncation level to balance generation quality and efficiency, depending on the data complexity and the computational resources. For generating images with text guidance, our method can speed up generation significantly enough to make it suitable for real-time processing: in the time LDM takes to generate one photo-realistic image, our TLDM can generate more than 50 such images. The main contributions of our paper are as follows:

• We introduce TDPM, a new diffusion-based generative model that can shorten the diffusion trajectory by learning an implicit distribution to start the reverse diffusion process, and demonstrate that the learning of the implicit distribution can be achieved in various ways. We further introduce TLDM to significantly accelerate diffusion-based text-to-image generation.
• We show TDPM can be formulated as a diffusion-based AAE.
• We show that the implicit distribution can be realized by reusing the denoising network for the reverse diffusion process, which can reduce the reverse diffusion steps by orders of magnitude without adding any extra parameters and with comparable generation quality.
• We reveal the synergy between the implicit model and the diffusion model: the diffusion process can simplify the training of the implicit model, as it does for GANs, and the implicit model can speed up the reverse diffusion process of the diffusion model.
• We show that both TDPM and TLDM can adapt the truncation level, according to the data complexity and the computational resources, to achieve a good balance between generation quality and efficiency.

2 PRELIMINARIES ON DIFFUSION MODELS

In Gaussian diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020), starting from the data distribution x0 ∼ q(x0), a pre-defined forward diffusion process produces auxiliary variables x_{t=1:T} by gradually adding Gaussian noise, with variance βt ∈ (0, 1) at time t, as follows:

$$q(x_1, \ldots, x_T\,|\,x_0) := \prod_{t=1}^{T} q(x_t\,|\,x_{t-1}),\qquad q(x_t\,|\,x_{t-1}) := \mathcal{N}\big(x_t;\, \sqrt{1-\beta_t}\,x_{t-1},\, \beta_t I\big).\tag{1}$$

In the limit of small diffusion rates (i.e., βt kept sufficiently small), the reverse distribution q(xt−1 | xt) also follows a Gaussian distribution (Feller, 1949; Sohl-Dickstein et al., 2015) and can be approximated using a neural-network-parameterized Gaussian distribution pθ:

$$p_\theta(x_{t-1}\,|\,x_t) := \mathcal{N}\big(x_{t-1};\, \mu_\theta(x_t, t),\, \Sigma_\theta(x_t, t)\big).\tag{2}$$

Moreover, with a sufficiently large T, the outcome of the diffusion chain xT will follow an isotropic Gaussian distribution. Thus, with the pre-defined forward (inference) diffusion process and the learned reverse (generative) diffusion process, we can sample from xT ∼ N(0, I) and run the diffusion process in reverse to get a sample from the data distribution q(x0). Under the variational inference (Kingma & Welling, 2013; Blei et al., 2017) framework, viewing q(x1, ..., xT | x0) in (1) as the inference network, we can use the evidence lower bound (ELBO) as our learning objective.
Following previous works (Sohl-Dickstein et al., 2015; Ho et al., 2020), the negative ELBO of a diffusion probabilistic model, parameterized by θ, can be expressed as

$$L_{\mathrm{ELBO}}(\theta) := L_0(\theta) + \sum_{t=2}^{T} L_{t-1}(\theta) + L_T,\qquad L_0(\theta) := \mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_1|x_0)}\left[-\log p_\theta(x_0\,|\,x_1)\right],\tag{3}$$

$$L_{t-1}(\theta) := \mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_t|x_0)}\left[D_{\mathrm{KL}}\big(q(x_{t-1}\,|\,x_t, x_0)\,\|\,p_\theta(x_{t-1}\,|\,x_t)\big)\right],\quad t\in\{2, \ldots, T\},\tag{4}$$

$$L_T := \mathbb{E}_{q(x_0)}\left[D_{\mathrm{KL}}\big(q(x_T\,|\,x_0)\,\|\,p(x_T)\big)\right],\tag{5}$$

where $D_{\mathrm{KL}}(q\,\|\,p) = \mathbb{E}_q[\log q - \log p]$ denotes the Kullback–Leibler (KL) divergence from distribution p to q. Generally speaking, diffusion probabilistic models assume the number of diffusion steps T to be sufficiently large to satisfy two conditions: 1) the reverse distribution at each denoising step can be fitted with a Gaussian denoising generator pθ(xt−1 | xt); 2) with a sufficiently small diffusion rate βt, the long forward diffusion process successfully corrupts the data, making q(xT | x0) ≈ N(0, I), so that LT approximately becomes zero and depends on neither x0 nor θ.

What happens if T is insufficiently large? Given a non-Gaussian data distribution q(x0), when the number of denoising steps is reduced, the true posterior q(xt−1 | xt) is not Gaussian and usually intractable (Feller, 1949), resulting in new challenges for current diffusion models. As noted in Xiao et al. (2022), when βt is not sufficiently small, the diffusion step becomes larger and the denoising distribution can be multi-modal and hence too complex to be well fitted by a Gaussian. The authors propose to define pθ(xt−1 | xt) with an implicit generator and substitute the ELBO with

$$\min_\theta \sum_{t\ge 1} \mathbb{E}_{q(t)}\left[D_{\mathrm{adv}}\big(q(x_{t-1}\,|\,x_t)\,\|\,p_\theta(x_{t-1}\,|\,x_t)\big)\right],\tag{6}$$

where $D_{\mathrm{adv}}$ represents a statistical distance that relies on an adversarial training setup. This modified objective can be minimized by leveraging the power of conditional GANs in fitting implicit multimodal distributions (Arjovsky et al., 2017; Goodfellow et al., 2014; Nowozin et al., 2016). While the concept of diffusion is retained, the proposed models in Xiao et al. (2022) are shown to work best only when the number of diffusion steps is limited to as few as four, and start to exhibit deteriorated performance when that number is further increased.

3 TRUNCATED DIFFUSION AND ADVERSARIAL AUTO-ENCODING

We first introduce the idea of accelerating both the training and generation of diffusion models by truncating the diffusion chains and describe the technical challenges. We then develop the objective function and training algorithm for TDPM. We further reveal that TDPM can also be formulated as an AAE (Makhzani et al., 2015) empowered by diffusion models. While DDPM can be considered a hierarchical version of a variational auto-encoder (VAE) with a fixed multi-stochastic-layer encoder, our derivation shows that TDPM can be considered a hierarchical version of an AAE with a fixed multi-stochastic-layer encoder but a learnable implicit prior.

3.1 MOTIVATION AND TECHNICAL CHALLENGES

We propose a novel method called TDPM to speed up both the diffusion process and the generative model. The main idea is to shorten the forward diffusion chain that transforms the data into Gaussian noise, and to use a learned implicit distribution to sample the starting point of the reverse diffusion chain that reconstructs the data. To be more precise, we adopt the DDPM framework that defines a variance schedule {β1, β2, ..., βT}, which controls the amount of noise added at each step of the forward diffusion process.
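Referring back to the per-step KL terms in (4): each is available in closed form because the forward-process posterior is itself Gaussian. For completeness, the standard identity from Ho et al. (2020), with $\alpha_t := 1-\beta_t$ and $\bar\alpha_t := \prod_{i=1}^{t}\alpha_i$ (the notation used in the following paragraph), is

$$q(x_{t-1}\,|\,x_t, x_0) = \mathcal{N}\big(x_{t-1};\,\tilde\mu_t(x_t, x_0),\,\tilde\beta_t I\big),\qquad \tilde\mu_t(x_t, x_0) = \frac{\sqrt{\bar\alpha_{t-1}}\,\beta_t}{1-\bar\alpha_t}\,x_0 + \frac{\sqrt{\alpha_t}\,(1-\bar\alpha_{t-1})}{1-\bar\alpha_t}\,x_t,\qquad \tilde\beta_t = \frac{1-\bar\alpha_{t-1}}{1-\bar\alpha_t}\,\beta_t,$$

so each term in (4) reduces to a KL divergence between two Gaussians, which is computable in closed form.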
The forward process then has a simple analytical form as a Gaussian distribution:

$$q(x_t\,|\,x_0) = \mathcal{N}\big(\sqrt{\bar\alpha_t}\,x_0,\,(1-\bar\alpha_t)I\big),\qquad \bar\alpha_t = \prod_{i=1}^{t}\alpha_i,\quad \alpha_i = 1-\beta_i.$$

Here, xt is the noisy version of the data x0 at step t, and ᾱt is the cumulative product of the diffusion coefficients αi. The forward chain of length T is designed to be long enough to make the data distribution indistinguishable from Gaussian noise N(0, I). However, a long forward chain also implies a high computational cost for the reverse process, which uses a learned neural network to predict the conditional distribution of the cleaner data given the noisy one at each step.

The proposed TDPM cuts off the last part of the forward chain and keeps only the first Ttrunc steps, {β1, β2, ..., βTtrunc} ⊂ {β1, β2, ..., βT}. We choose Ttrunc to be much smaller than T so that we can save a lot of computation time in generation. The benefit of this truncation is illustrated in Figure 1, where the bottom row shows the truncated diffusion chain. We can see that the data are only partially corrupted by noise and still retain some features of the original data. This means that we can recover the data more easily and accurately by applying a few Gaussian denoising steps to the corrupted data. Moreover, we do not change the diffusion rates βt for the first Ttrunc steps, so we do not compromise the quality of the forward and reverse processes between time 0 and Ttrunc.

However, truncating the forward chain also introduces a new challenge for the reverse process. Unlike in the original chain, where the starting point of the reverse process is xT ∼ N(0, I), the truncated chain has an unknown distribution of the corrupted data at step Ttrunc. This makes it difficult to sample from this distribution and initiate the reverse process. To overcome this challenge, we introduce an implicit generative model that approximates the distribution of the corrupted data by minimizing a divergence measure between the implicit and the true noisy distributions at step Ttrunc. This way, we can use the implicit model to sample the starting point of the reverse process and then apply the learned denoising network to generate the data.

3.2 HAND-CRAFTED TDPM OBJECTIVE FUNCTION

Mathematically, recall that the DDPM loss in (3) consists of three terms: $L_0$, $\sum_{t=2}^{T} L_{t-1}$, and $L_T$. The training objective of a conventional diffusion model focuses on the terms $\sum_{t=2}^{T} L_{t-1}$ and $L_0$. It assumes $L_T$ does not depend on any parameter and will be close to zero by carefully pre-defining the forward noising process such that q(xT | x0) ≈ p(xT) = N(0, I). When the diffusion chain is truncated at time Ttrunc ≪ T, the forward diffusion ends at time Ttrunc, where the marginal distribution of the forward-diffusion-corrupted data can be expressed as

$$q(x_{T_{\mathrm{trunc}}}) := \int q(x_{T_{\mathrm{trunc}}}\,|\,x_0)\,p(x_0)\,dx_0,\tag{7}$$

which takes a semi-implicit form (Yin & Zhou, 2018) whose density function is often intractable. To reverse this truncated forward diffusion chain, we can no longer start the reverse diffusion chain from a known distribution such as N(0, I). To this end, we propose TDPM, which starts the reverse chain at time Ttrunc from pψ(xTtrunc), an implicit distribution parameterized by ψ. We match pψ(xTtrunc) to q(xTtrunc) via the loss term $\tilde L_{T_{\mathrm{trunc}}} := D\big(q(x_{T_{\mathrm{trunc}}})\,\|\,p_\psi(x_{T_{\mathrm{trunc}}})\big)$, where $D(q\,\|\,p)$ is a statistical distance between distributions q and p, such as the Jensen–Shannon divergence or the Wasserstein distance.
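Although q(xTtrunc) has no tractable density, drawing samples from it is easy via the closed form above; a minimal PyTorch sketch, where `alpha_bars` is a hypothetical precomputed tensor of ᾱ values indexed by timestep:

```python
import torch

def q_sample(x0, t, alpha_bars):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x0, (1 - abar_t) I) in one
    shot, instead of simulating t forward steps."""
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)  # broadcast over image dims
    eps = torch.randn_like(x0)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
```

Calling this with t = Ttrunc is exactly how the "real" samples of q(xTtrunc) are produced when matching the implicit prior below.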
As we keep all the diffusion steps before time Ttrunc in TDPM the same as those in DDPM, we combine $\tilde L_{T_{\mathrm{trunc}}}$ with all the loss terms of DDPM before time Ttrunc in (3) to define the TDPM loss as

$$L_{\mathrm{TDPM}} := \sum_{t=1}^{T_{\mathrm{trunc}}} L_{t-1}(\theta) + \tilde L_{T_{\mathrm{trunc}}}(\psi),\qquad \tilde L_{T_{\mathrm{trunc}}}(\psi) := D\big(q(x_{T_{\mathrm{trunc}}})\,\|\,p_\psi(x_{T_{\mathrm{trunc}}})\big).\tag{8}$$

We note that while pψ(xTtrunc) in TDPM is in general intractable, we can employ a deep neural network-based generator Gψ to generate a random sample in a single step via

$$x_{T_{\mathrm{trunc}}} = G_\psi(z),\quad z\sim\mathcal{N}(0, I).\tag{9}$$

We will discuss later that we may simply let ψ = θ to avoid adding more parameters.

3.3 TDPM AS A DIFFUSION-BASED ADVERSARIAL AUTO-ENCODER

Following the terminology of the AAE, let us define the prior as pψ(xTtrunc) and the decoder (likelihood) as

$$p_\theta(x_0\,|\,x_{T_{\mathrm{trunc}}}) := \int\cdots\int\left[\prod_{t=1}^{T_{\mathrm{trunc}}} p_\theta(x_{t-1}\,|\,x_t)\right] dx_{T_{\mathrm{trunc}}-1}\cdots dx_1,\tag{10}$$

which is empowered by a reverse diffusion chain of length Ttrunc, and the encoder (variational posterior) as q(xTtrunc | x0). Thus we can view q(xTtrunc) defined in (7) as the aggregated posterior (Hoffman & Johnson, 2016; Tomczak & Welling, 2018). In addition to imposing an auto-encoding data-reconstruction loss, the key idea of the AAE (Makhzani et al., 2015) is to also match the aggregated posterior to a fixed prior. This idea distinguishes the AAE from a VAE, which regularizes the auto-encoder by matching the variational posterior to a fixed prior under the KL divergence. To this end, we introduce a diffusion-based AAE (Diffusion-AAE), whose loss function is defined as

$$L_{\mathrm{Diffusion\text{-}AAE}} = -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\mathrm{trunc}}}|x_0)}\log p_\theta(x_0\,|\,x_{T_{\mathrm{trunc}}}) + D\big(q(x_{T_{\mathrm{trunc}}})\,\|\,p_\psi(x_{T_{\mathrm{trunc}}})\big).\tag{11}$$

Diffusion-AAE has two notable differences from a vanilla AAE: 1) its encoder is fixed and has no learnable parameters, while its prior is not fixed and is optimized to match the aggregated posterior, and 2) its decoder is a reverse diffusion chain, with Ttrunc stochastic layers all parameterized by θ. Note that since the likelihood in (10) is in general intractable, the first loss term in (11) is intractable as well. However, the loss of Diffusion-AAE is upper bounded by the loss of TDPM, as described below.

Theorem 1. The Diffusion-AAE loss in (11) is upper bounded by the TDPM loss in (8): LDiffusion-AAE ≤ LTDPM.

3.4 MATCHING THE PRIOR TO THE AGGREGATED POSTERIOR

Via the loss term $\tilde L_{T_{\mathrm{trunc}}} := D\big(q(x_{T_{\mathrm{trunc}}})\,\|\,p_\psi(x_{T_{\mathrm{trunc}}})\big)$ in (8), we aim to match the prior pψ(xTtrunc) to the aggregated posterior q(xTtrunc) in TDPM. While we have an analytic density function for neither p nor q, we can easily draw random samples from both of them. Thus, we explore the use of two different types of statistical distances that can be estimated from samples of both q and p. We empirically show that TDPM can achieve good performance regardless of which distance is used for optimization. One possible statistical distance is based on the idea of GANs (Goodfellow et al., 2014; Arjovsky et al., 2017; Bińkowski et al., 2018), which are widely used to learn implicit distributions from empirical data. In this setting, we use a generator $G_\psi(\cdot): \mathbb{R}^d \to \mathbb{R}^d$ to transform samples from an isotropic Gaussian p(z) into samples that approximate the corrupted data, and a discriminator $D_\phi(\cdot): \mathbb{R}^d \to [0, 1]$ to distinguish between samples from the corrupted-data distribution $q(x_{T_{\mathrm{trunc}}}\,|\,x_0)$ and the implicit generative distribution $p_\psi(x_{T_{\mathrm{trunc}}})$. The generator and the discriminator are trained with the following objective $L^{\mathrm{GAN}}_{T_{\mathrm{trunc}}}$:

$$\min_\psi\max_\phi\ \mathbb{E}_{x\sim q(x_{T_{\mathrm{trunc}}})}[\log D_\phi(x)] + \mathbb{E}_{z\sim p(z)}[\log(1 - D_\phi(G_\psi(z)))].\tag{12}$$
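As an illustration of how (12) is optimized in practice, here is a minimal PyTorch-style sketch of one alternating update; `eps_model`-style schedule access is omitted, the models `G` and `D` (the latter outputting probabilities), the optimizers, and the scalar `a_bar_T` = ᾱ at Ttrunc are hypothetical names, and the generator step uses the common non-saturating variant of (12) rather than the literal min-max form.

```python
import torch

def prior_matching_step(G, D, opt_g, opt_d, x0, a_bar_T):
    """One alternating GAN update for Eq. (12): match the implicit prior
    p_psi(x_{T_trunc}) to the diffused marginal q(x_{T_trunc})."""
    # 'Real' samples: diffuse data to step T_trunc in closed form.
    x_real = a_bar_T.sqrt() * x0 + (1 - a_bar_T).sqrt() * torch.randn_like(x0)
    z = torch.randn_like(x0)

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    loss_d = -(torch.log(D(x_real) + 1e-8)
               + torch.log(1 - D(G(z).detach()) + 1e-8)).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator descent (non-saturating form). In full TDPM training this
    # term is added to the denoising loss L_simple_trunc with weight
    # lambda = 1, as described in Section 3.5.
    loss_g = -torch.log(D(G(z)) + 1e-8).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```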
(12) 3.5 TRAINING ALGORITHM As the objective in Equation 8 is a sum of different terms, following DDPM (Ho et al., 2020) to fix the terms Σθ(xt, t) = σ2t I , we can simplify 1 Ttrunc ∑Ttrunc t=1 Lt−1 as an expectation defined as Lsimple_trunc = Et,x0,ϵt [ ||ϵt − ϵθ(xt, t)||2 ] , t ∼ Unif(1, 2, . . . , Ttrunc), ϵt ∼ N (0, I) (13) where ϵt is an injected noise at a uniformly sampled timestep index t, xt = √ ᾱtx0 + √ 1− ᾱtϵt is a noisy image at time t, and ϵθ is a denoising U-Net that predicts the noise in order to refine the noisy image xt. Therefore the final simplified version of (8) is constructed as LGANTDPM = Lsimple_trunc + λLGANTtrunc , . (14) While λ, the weight of LTtrunc , can be tuned, we fix it as one for simplicity. Here the TDPM objective consists of two parts: the denoising part ϵθ is focused on denoising the truncated chain, getting updated from Lsimple_trunc, while the implicit part Gψ is focused on minimizing Eq[D (q(xTtrunc)||pψ(xTtrunc))], getting updated from LGANTtrunc . An interesting finding of this paper is that we do not necessarily need to introduce a separate set of parameters ψ for the generator Gψ, as we can simply reuse the same parameters θ of the reverse diffusion model (i.e., let ψ = θ) without clearly hurting the empirical performance. This suggests that the reverse diffusion process from T to Ttrunc could be effectively approximated by a single step using the same network architecture and parameters as the reverse diffusion steps from Ttrunc to 0. Therefore, we provide two configurations to parameterize the implicit distributions. 1) To save parameters, we let the implicit generator and denoising model share the same U-Net parameters but using different time step indices. Specifically, we first use xTtrunc =Gψ(xT )= ϵθ(xT , t=Ttrunc+1), where xT ∼ N (0, I), to generate a noisy image at time Ttrunc. 2) We further explore employing a different model, e.g., StyleGAN2 (Karras et al., 2020a), for the implicit generator, which provides better performance but increases the model size to get xTTrunc . Then for t=Ttrunc, . . . , 1, we iteratively refine it as xt−1 = 1√αt (xt − 1−αt√ 1−ᾱt ϵθ(xt, t)) + βtzt, where zt ∼ N(0, I) when t > 1 and z1 = 0. This process is depicted in Algorithms 1 and 2 in the Appendix. For the implementation details, please refer to Appendix D.6 and our code at https://github.com/JegZheng/ truncated-diffusion-probabilistic-models. 3.6 RELATED WORK In our previous discussions, we have related TDPM to several existing works such as DDPM and AAE. A detailed discussion on other related works is provided in Appendix B. 4 EXPERIMENTS We aim to demonstrate that TDPM can generate good samples faster by using fewer steps of reverse diffusion. We use different image datasets to test our method and follow the same setting as other diffusion models (Ho et al., 2020; Nichol & Dhariwal, 2021; Dhariwal & Nichol, 2021; Rombach et al., 2022) for our backbones. We also have two ways to set up the implicit generator that starts the reverse diffusion. One way is to reuse the denoising network, and the other way is to use a separate network. We try both ways for generating images without any labels. For generating images from text, we use the first way with the LDM backbone. We provide comprehensive details, toy examples, and additional experimental results in Appendices D.4-D.8. We use FID (lower is better) and Recall (higher is better) to measure the fidelity and diversity, respectively, of the generated images. 
We use the CIFAR-10 (Krizhevsky et al., 2009), LSUN-Bedroom, and LSUN-Church (Yu et al., 2015) datasets in unconditional experiments, and CUB-200 (Welinder et al., 2010) and MS-COCO (Lin et al., 2014) for text-to-image experiments. The images consist of 32 × 32 pixels for CIFAR-10 and 256 × 256 pixels for the other datasets.

4.1 EFFICIENCY IN BOTH TRAINING AND SAMPLING

We first look at the results on CIFAR-10. We use DDPM (Ho et al., 2020) or improved DDPM (Nichol & Dhariwal, 2021) as our backbones. We use 4, 49, or 99 steps of reverse diffusion, which correspond to 5, 50, or 100 function evaluations (NFE), respectively. For the implicit generator, we either reuse the denoising U-Net or use a StyleGAN2 network (we call these TDPM and TDPM+, respectively). For comparison, we also include DDIM (Song et al., 2020) and DDGAN (Xiao et al., 2022). A comparison with a more diverse set of baselines can be found in Table 9 in Appendix D.7.

Table 1: Results of unconditional generation on CIFAR-10, with the best FID and Recall in each group marked in bold. To compare TDPM (TTrunc=0) with GAN-based methods, we use the DDPM backbone as generator and a StyleGAN2 discriminator.

Method                  NFE    FID↓    Recall↑
DDPM backbone
DDPM                    1000   3.21    0.57
TDPM (TTrunc=99)        100    3.10    0.57
TDPM+ (TTrunc=99)       100    2.88    0.58
DDIM                    50     4.67    0.53
TDPM (TTrunc=49)        50     3.30    0.57
TDPM+ (TTrunc=49)       50     2.94    0.58
TDPM (TTrunc=4)         5      3.34    0.57
TDPM+ (TTrunc=4)        5      3.21    0.57
Improved DDPM backbone
Improved DDPM           4000   2.90    0.58
TDPM (TTrunc=99)        100    2.97    0.57
TDPM+ (TTrunc=99)       100    2.83    0.58
Improved DDPM+DDIM      50     3.92    0.55
TDPM (TTrunc=49)        50     3.11    0.57
TDPM+ (TTrunc=49)       50     2.96    0.58
TDPM (TTrunc=4)         5      3.51    0.55
TDPM+ (TTrunc=4)        5      3.17    0.57
GAN-based
DDGAN                   4      3.75    0.57
StyleGAN2               1      8.32    0.41
StyleGAN2-ADA           1      2.92    0.49
TDPM (TTrunc=0)         1      7.34    0.46

Table 2: Results on LSUN-Church and LSUN-Bedroom (resolution 256 × 256). Similar to Table 1, TDPM (TTrunc=0) uses the DDPM backbone for the generator.

Method                  NFE    Church FID   Bedroom FID
DDPM backbone
DDPM                    1000   7.89         4.90
TDPM (TTrunc=99)        100    4.33         3.95
TDPM+ (TTrunc=99)       100    3.98         3.67
DDIM                    50     10.58        6.62
TDPM (TTrunc=49)        50     5.35         4.10
TDPM+ (TTrunc=49)       50     4.34         3.98
TDPM (TTrunc=4)         5      4.98         4.16
TDPM+ (TTrunc=4)        5      4.89         4.09
ADM backbone
ADM                     1000   3.49         1.90
ADM+DDIM                250    6.45         2.31
TDPM (TTrunc=99)        100    4.41         2.24
TDPM+ (TTrunc=99)       100    3.61         1.88
TDPM (TTrunc=49)        50     4.57         2.92
TDPM+ (TTrunc=49)       50     3.67         1.89
TDPM (TTrunc=4)         5      5.61         7.92
TDPM+ (TTrunc=4)        5      4.66         4.01
GAN-based
DDGAN                   4      5.25         -
StyleGAN2               1      3.93         3.98
StyleGAN2-ADA           1      4.12         7.89
TDPM (TTrunc=0)         1      4.77         5.24

Table 3: Results on ImageNet-64×64, evaluated with FID and Recall. TDPM+ is built with a pre-trained ADM and an implicit model trained at TTrunc using StyleGAN-XL.

Method                  NFE    FID↓    Recall↑
ADM                     1000   2.07    0.63
TDPM+ (TTrunc=99)       100    1.62    0.63
TDPM+ (TTrunc=49)       50     1.77    0.58
TDPM+ (TTrunc=4)        5      1.92    0.53
StyleGAN-XL (w/o PG)    1      3.54    0.51

Figure 2: Random generation results of TDPM+ (TTrunc=4) on ImageNet-64×64.

Table 1 shows that our TDPM obtains good FID with fewer NFE. TDPM+ obtains even better FID, and is the best when NFE=100. Compared with TDPM with 0 steps of reverse diffusion (i.e., a GAN with DDPM's U-Net as generator and a StyleGAN2 discriminator) and StyleGAN2, TDPM with more than 0 steps of reverse diffusion has better Recall and an FID as good as that of StyleGAN2-ADA (a GAN that uses data augmentation for better training). This suggests TDPM can largely avoid the mode-missing problem of GANs. We show examples of generated images on CIFAR-10 in Figure 13.
We also examine how fast TDPM can train and sample. For training, we count how many images TDPM needs to see to fit both the truncated diffusion chain and the implicit prior. Figure 3 shows that with fewer steps of reverse diffusion, the diffusion part needs less time to train, but the implicit prior needs more time, because it has to model a harder distribution; e.g., fitting the implicit prior with 4 diffusion steps takes a similar time to fitting it directly on the data. With 99 steps of reverse diffusion, the diffusion chain and the implicit prior need similar training times, and the whole model trains faster than both a GAN and DDPM. For sampling, we compare TDPM with 0, 1, 4, 49, or 99 steps of reverse diffusion, and report both FID and the sampling time (s/image) on one NVIDIA V100 GPU in Figure 4. With 4 steps of reverse diffusion, the FID is much lower than with 0 steps while the sampling time is only slightly longer. With more steps of reverse diffusion, the FID decreases slowly but the sampling time grows linearly. With 99 steps of reverse diffusion, the FID of TDPM is better than that of DDPM with 1000 steps. Because the FID changes little beyond a few reverse steps, we suggest using a small number of steps, such as 4 or more, to balance the quality and speed of generation.

Figure 3: The required iterations (measured in iterated images, log(kimgs)) to converge in training, for TTrunc ∈ {0 (GAN), 4, 49, 99} and DDPM. The iterations for t < TTrunc (ϵθ) and t = TTrunc (Gψ) are marked in red and blue, respectively.

Figure 4: Evolution of FID and the corresponding GPU time (s/image) across different truncation steps in the sampling stage: TTrunc=0 (GAN, speed-up ×1000): FID 7.34 (0.03 s); TTrunc=1 (×500): 4.47 (0.06 s); TTrunc=4 (×200): 3.41 (0.15 s); TTrunc=49 (×20): 3.30 (1.52 s); TTrunc=99 (×10): 3.10 (3.13 s); DDPM: 3.27 (31.03 s).

4.2 RESULTS ON HIGHER-RESOLUTION AND MORE DIVERSE IMAGE DATASETS

To test the performance of the proposed truncation method on high-resolution images, we train TDPM using two different diffusion models, DDPM (Ho et al., 2020) and ADM (Dhariwal & Nichol, 2021), as backbones on two datasets of 256 × 256 resolution, LSUN-Church and LSUN-Bedroom (Yu et al., 2015). We compare the FIDs of TDPM with those of the backbone models and some state-of-the-art GANs in Table 2. The results show that TDPM can generate images of similar quality with much smaller truncation steps TTrunc, which means it can produce images significantly faster than the backbone models. We also visualize, in Figure 5, the samples from the implicit distribution $x_{T_{\text{trunc}}} \sim p_\theta(x_{T_{\text{trunc}}})$ that TDPM generates and the corresponding $x_0$ obtained at the end of the reverse chain. We further evaluate TDPM on ImageNet-1K (at resolution 64×64), which exhibits high diversity. Here we adopt the TDPM+ configuration, using a pre-trained ADM (Dhariwal & Nichol, 2021) checkpoint for $t < T_{\text{trunc}}$ and training a StyleGAN-XL-based (Sauer et al., 2022) implicit model at $t = T_{\text{trunc}}$ (for simplicity, we choose not to use the progressive growing pipeline of StyleGAN-XL; see Appendix D.6 for more details). We compare both FID and Recall with our backbone models in Table 3 and show example generations in Figure 2. Similar to our observations in Table 1, TDPM achieves good generation quality with small truncation steps TTrunc.
Moreover, properly training an implicit model at TTrunc can further improve the performance of the backbone.

Table 4: Numerical results (FID) of Figure 6. The GPU time of sampling (s/image) is measured on one NVIDIA A100.

                        CUB-Bird           MS-COCO
NFE     GPU time    LDM      TLDM      LDM     TLDM
5       0.15        100.81   10.59     48.41   16.70
50      1.57        30.85    7.32      18.25   7.47
100     4.10        11.07    6.79      8.20    7.22
250     11.21       6.82     6.72      6.30    6.29
1000    41.09       6.68     -         6.29    -

Figure 7: Example text-to-image generation results of LDM and TLDM (i.e., TDPM with the LDM backbone) fine-tuned on CUB-200 (top row, prompt: "A bird with brown wings, black back, and red head.") or MS-COCO (bottom row, prompt: "A green train is coming down the tracks."), setting the number of times iterating through the reverse diffusion U-Net to 100 (left column), 50 (middle column), or 5 (right column).

4.3 TEXT-TO-IMAGE GENERATION

Besides unconditional generation tasks, we develop TLDM for text-to-image generation, a conditional version of TDPM that leverages as its backbone the LDM of Rombach et al. (2022), a state-of-the-art publicly released model with 1.45B parameters pre-trained on LAION-400M (Schuhmann et al., 2021). LDM consists of a fixed auto-encoder for pixel generation and a latent-diffusion module to connect text and image embeddings. Here we fine-tune its latent-diffusion part on the CUB-200 and MS-COCO datasets, with 25K and 100K steps respectively, as the baseline. Similar to the unconditional case, we fine-tune with the LDM loss for t < TTrunc and with the GAN loss for t = TTrunc. More details about the setting can be found in Appendix D.6. The results of LDM with different numbers of DDIM sampling steps and of TLDM with different truncation steps are summarized in Figure 6 and Table 4. Similar to applying diffusion directly in the original image-pixel space, when the diffusion chain is applied in the latent space, we observe that TLDM can achieve comparable or better performance than LDM even though it shortens the diffusion chain of LDM to have far fewer reverse diffusion steps. When NFE is as small as 5, the FID of TLDM becomes higher due to using fewer diffusion steps, but the images generated by TLDM at NFE=5 are still visually appealing, as shown in Figure 7. Compared with LDM at 50 and 250 steps, the sampling speed of TLDM with 5 steps is 10 and 50 times faster, respectively, while largely preserving generation quality. We provide additional text-to-image generation results of TLDM in Appendix D.8.

5 CONCLUSION

In this paper, we investigate how to reduce the trajectory length of the diffusion chain to achieve efficient sampling without loss of generation quality. We propose truncated diffusion probabilistic modeling (TDPM), which truncates the length of a diffusion chain. In this way, TDPM can use a much shorter diffusion chain, at the cost of having to start the reverse denoising process from an intractable distribution. We propose to learn such a distribution with an implicit generative model powered by the same U-Net used for denoising diffusion, and validate multiple ways to learn the implicit distribution to ensure the robustness of the proposed TDPM. We reveal that TDPM can be cast as an adversarial auto-encoder with a learnable implicit prior.
We conduct extensive experiments on both synthetic and real image data to demonstrate the effectiveness of TDPM in terms of both sample quality and efficiency, where the diffusion chain can be shortened to have only a few steps.

ACKNOWLEDGMENTS

H. Zheng and M. Zhou acknowledge the support of NSF-IIS 2212418 and IFML.

A PROOF

Proof of Theorem 1. As the last terms in both losses are the same, we only need to show that the first term in (11) is smaller than or equal to $L_0 + \sum_{t=2}^{T_{\text{trunc}}} L_{t-1}$ in (8). Using Jensen's inequality, we have

$$
\begin{aligned}
-\mathbb{E}_{q(x_0)}&\mathbb{E}_{q(x_{T_{\text{trunc}}}|x_0)} \log p_\theta(x_0\,|\,x_{T_{\text{trunc}}}) \\
&= -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\text{trunc}}}|x_0)} \log \mathbb{E}_{q(x_{1:T_{\text{trunc}}-1}|x_0,x_{T_{\text{trunc}}})}\!\left[\frac{p(x_{0:T_{\text{trunc}}-1}\,|\,x_{T_{\text{trunc}}})}{q(x_{1:T_{\text{trunc}}-1}\,|\,x_0,x_{T_{\text{trunc}}})}\right] \\
&\le -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\text{trunc}}}|x_0)}\mathbb{E}_{q(x_{1:T_{\text{trunc}}-1}|x_0,x_{T_{\text{trunc}}})} \log \frac{p(x_{0:T_{\text{trunc}}-1}\,|\,x_{T_{\text{trunc}}})}{q(x_{1:T_{\text{trunc}}-1}\,|\,x_0,x_{T_{\text{trunc}}})} \\
&= -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{1:T_{\text{trunc}}}|x_0)} \log\!\left[\frac{p(x_{0:T_{\text{trunc}}-1})}{q(x_{1:T_{\text{trunc}}}\,|\,x_0)}\,\frac{q(x_{T_{\text{trunc}}}\,|\,x_0)}{p(x_{T_{\text{trunc}}})}\right] \\
&= \left(-\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{1:T_{\text{trunc}}}|x_0)} \log \frac{p(x_{0:T_{\text{trunc}}-1})}{q(x_{1:T_{\text{trunc}}}\,|\,x_0)}\right) - \mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\text{trunc}}}|x_0)} \log \frac{q(x_{T_{\text{trunc}}}\,|\,x_0)}{p(x_{T_{\text{trunc}}})} \\
&= \Big(\sum\nolimits_{t=1}^{T_{\text{trunc}}} L_{t-1} + L_{T_{\text{trunc}}}\Big) - L_{T_{\text{trunc}}} = \sum\nolimits_{t=1}^{T_{\text{trunc}}} L_{t-1},
\end{aligned} \quad (15)
$$

where the second-to-last equality follows the same derivation of the ELBO in Ho et al. (2020).

B RELATED WORK

Diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020) employ a forward Markov chain to diffuse the data to noise and learn the reversal of such a diffusion process. With the idea of exploiting Markov operations (Goyal et al., 2017; Alain et al., 2016; Bordes et al., 2017), diffusion models achieve great success and inspire a variety of tasks, including image generation and audio generation (Kong et al., 2020; Chen et al., 2020; Jolicoeur-Martineau et al., 2020; Vahdat et al., 2021). Recently, plenty of studies have been proposed to generalize diffusion models to continuous time and to improve their likelihood estimation (Vincent, 2011; Song & Ermon, 2020; 2019; Nichol & Dhariwal, 2021; Song et al., 2021b;a; Kingma et al., 2021). Another mainstream direction is to improve the sampling efficiency of diffusion models, which are known for their enormous number of sampling steps. Luhman & Luhman (2021) improve diffusion processes with knowledge distillation, and San-Roman et al. (2021) propose a learnable adaptive noise schedule. Song et al. (2020) and Kong & Ping (2021) exploit non-Markovian diffusion processes and shorten the denoising segments. Jolicoeur-Martineau et al. (2021) and Huang et al. (2021) use better SDE solvers for continuous-time models. Aside from these works, other types of generative models, such as VAEs (Kingma & Welling, 2013), GANs (Goodfellow et al., 2014), and autoregressive models (van den Oord et al., 2016), have recently been incorporated into diffusion models. They are shown to benefit each other (Xiao et al., 2022; Pandey et al., 2022; Meng et al., 2021) and have a closer relation to our work. Xiao et al. (2022) consider the use of implicit models (Huszár, 2017; Mohamed & Lakshminarayanan, 2016; Tran et al., 2017; Yin & Zhou, 2018; Li & Malik, 2018) to boost the efficiency of diffusion models; they deploy implicit models in each denoising step, which becomes harder to train as the number of diffusion steps increases. Pandey et al. (2022) build diffusion models on top of the output of VAEs for refinement. Our work is also related if one views TDPM as a diffusion model on top of an implicit model, where the implicit model can be parameterized with the U-Net or a separate network.
C DISCUSSION

Potential societal impacts: This paper proposes the truncated diffusion probabilistic model as a novel type of diffusion-based generative model. The truncated part can be trained as an implicit generative model, such as a GAN, jointly or independently with the diffusion part. The capacities of truncated diffusion probabilistic models are competitive with existing diffusion-based ones, and the efficiency is largely improved. Alongside these positive effects, some negative effects could also arise, depending on how the models are used. One major concern is that the truncated diffusion technique proposed in this paper could potentially be a way to hack existing diffusion models if the implicit models are maliciously used to fit the intermediate steps. For example, for some existing diffusion models, for safety concerns, the model's capacity to generate private data needs to be locked by hiding the diffusion ending point in an unknown distribution. The technique of TDPM could be used to crack these existing online diffusion models by providing intermediate noisy images or by fine-tuning the first few steps with TDPM to unlock that capacity. Besides, the capacity to generate good images can also be misused to generate ill-intentioned images at a much lower cost.

Discussions: In this work, we mainly focus on reducing the length of the diffusion chain of a finite-time diffusion model. Our model has shown its effectiveness in improving finite-time diffusion models, and it is non-trivial to further explore our model on continuous-time diffusion models (Song et al., 2021b). Moreover, while in this paper DDPM is the primary baseline, TDPM can also be built on other recent diffusion models. While $p_\theta(x_{T_{\text{trunc}}})$ is parameterized as an implicit distribution, it can also be formulated as a semi-implicit distribution (Yin & Zhou, 2018), which allows it to be approximated with a Gaussian generator. Xiao et al. (2022) present a closely related work; while we share the same spirit of reducing the length of the diffusion chain, the two strategies do not conflict with each other, and in future work we will look into integrating them. There also exist plenty of options for approximating $p_\theta(x_{T_{\text{trunc}}})$. When the diffusion chain is truncated to be short, the implicit distribution is still multi-modal and needs to be fitted with different methods depending on the properties we need. For example, in order to capture all modes, a VAE would be preferred, as done in Pandey et al. (2022). Below we provide an alternative method, proposed in Zheng & Zhou (2021), to fit the truncated distribution. Besides the training, it is also an open question whether TDPM can be incorporated into more advanced architectures for further improvements, and we leave this exploration for future work.

D ALGORITHM DETAILS AND COMPLEMENTARY RESULTS

Below we provide additional algorithm details and complementary experimental results.

D.1 ADDITIONAL ANALYSIS ON THE PARAMETERIZATION OF THE IMPLICIT GENERATOR

As shown in Section 3, in general, the objective of TDPM consists of training the diffusion model $\epsilon_\theta$ (a U-Net architecture (Ronneberger et al., 2015)) with the simple DDPM loss $\mathcal{L}_{\text{simple}}$ and training an implicit prior model $G_\psi$ with the objective $\mathcal{L}^{\text{GAN}}_{T_{\text{trunc}}}$.
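Before detailing the two configurations below, here are minimal PyTorch sketches of one joint training update implementing (14) and of the truncated sampling loop of Algorithm 2. Both are our own illustrations under assumed names (not the released code), shown for configuration 1, in which the U-Net doubles as the implicit generator; the non-saturating GAN surrogate is our choice.

```python
# Minimal sketch of one joint training update for (14), configuration 1: the
# denoising U-Net `eps_model` also acts as G_psi at the fixed index t = T_trunc + 1.
import torch
import torch.nn.functional as F

def tdpm_training_losses(x0, eps_model, discriminator, alpha_bar, t_trunc, lam=1.0):
    b = x0.shape[0]
    # L_simple_trunc in (13): denoise a uniformly sampled step t in {1, ..., T_trunc}.
    t = torch.randint(1, t_trunc + 1, (b,))
    a = alpha_bar[t - 1].view(b, 1, 1, 1)     # assumes alpha_bar[i] stores bar-alpha_{i+1}
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps
    denoise_loss = F.mse_loss(eps_model(x_t, t), eps)
    # L_GAN in (12) at t = T_trunc: match p_psi(x_Ttrunc) to q(x_Ttrunc).
    a_tr = alpha_bar[t_trunc - 1]
    real = a_tr.sqrt() * x0 + (1 - a_tr).sqrt() * torch.randn_like(x0)
    fake = eps_model(torch.randn_like(x0), torch.full((b,), t_trunc + 1))
    g_loss = F.softplus(-discriminator(fake)).mean()            # non-saturating surrogate
    d_loss = F.softplus(-discriminator(real)).mean() + \
             F.softplus(discriminator(fake.detach())).mean()
    return denoise_loss + lam * g_loss, d_loss  # minimize w.r.t. theta; d_loss w.r.t. phi
```

```python
# Minimal sketch of Algorithm 2 (truncated sampling); tensor-valued time indices
# and device handling, used in a real implementation, are omitted for brevity.
import torch

@torch.no_grad()
def tdpm_sample(eps_model, betas, shape, t_trunc, generator=None):
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                       # x_{T_trunc + 1} ~ N(0, I)
    # One implicit step replaces the reverse diffusion from T down to T_trunc.
    x = eps_model(x, t_trunc + 1) if generator is None else generator(x)
    for t in range(t_trunc, 0, -1):              # truncated reverse diffusion
        z = torch.randn(shape) if t > 1 else torch.zeros(shape)
        coef = (1.0 - alphas[t - 1]) / (1.0 - alpha_bar[t - 1]).sqrt()
        x = (x - coef * eps_model(x, t)) / alphas[t - 1].sqrt() + betas[t - 1].sqrt() * z
    return x
```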
Without loss of generality, in our main paper we show two configurations to parameterize the implicit part for $t = T_{\text{trunc}}$: 1) the implicit generator shares the same U-Net architecture used for $0 < t < T_{\text{trunc}}$; 2) the implicit generator is instantiated with a separate network. Below we explain these two configurations (the second of which is denoted TDPM+ in the main paper).

Configuration 1): At $t = T_{\text{trunc}}$, the U-Net generates the noisy image at the truncated step: $x_{T_{\text{trunc}}} = \epsilon_\theta(x_{T_{\text{trunc}}+1}, t = T_{\text{trunc}}+1)$, where $x_{T_{\text{trunc}}+1} \sim \mathcal{N}(0, I)$ is a pure-noise image whose pixels are i.i.d. samples from a standard normal. For $t = T_{\text{trunc}}, T_{\text{trunc}}-1, \ldots, 1$, the same U-Net iteratively refines the noisy images by letting $x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\big(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}\,\epsilon_\theta(x_t, t)\big) + \sqrt{\beta_t}\, z_t$, where $z_{t>1} \sim \mathcal{N}(0, I)$ and $z_1 = 0$, with $\epsilon_\theta(x_t, t)$ the noise predicted by the U-Net. Under this setting, the U-Net-based generator plays two roles at the same time, and training is more challenging than with two separate networks. However, as $T_{\text{trunc}}$ gets larger, the distribution $p(x_{T_{\text{trunc}}})$ becomes more similar to a noise distribution, so generating the noisy images becomes more like generating noise; in this case, handling both tasks, generating noisy images and predicting noise, becomes easier for the generator.

Configuration 2) (TDPM+): Unlike the previous configuration, where the implicit generator at step $t = T_{\text{trunc}}$ shares the same U-Net with the denoising steps $t < T_{\text{trunc}}$, another way is to parameterize $G_\psi$ with a separate generator. Although this configuration increases the total number of parameters of the generative model, it allows better flexibility in the training stage. For example, the two networks can be trained in parallel, or one can leverage a pre-trained model. In our paper, we conduct the experiments using the StyleGAN2 generator architecture (Karras et al., 2020b) for $t = T_{\text{trunc}}$, resulting in increases of 19M and 28M generator parameters when handling 32 × 32 and 256 × 256 images, respectively.

The training and sampling processes of these configurations are summarized in Algorithms 1 and 2.

Algorithm 1 Training
1: repeat
2:   x0 ∼ q(x0)
3:   t ∼ Uniform({1, . . . , Ttrunc})
4:   ϵt ∼ N(0, I), z ∼ N(0, I)
5:   Update with (14)
6: until converged

Algorithm 2 Sampling
1: x_{Ttrunc+1} ∼ N(0, I)
2: if Gψ is shared with ϵθ then
3:   x_{Ttrunc} = ϵθ(x_{Ttrunc+1}, Ttrunc + 1)
4: else
5:   x_{Ttrunc} = Gψ(x_{Ttrunc+1})
6: end if
7: for t = Ttrunc, . . . , 1 do
8:   zt ∼ N(0, I) if t > 1, else z1 = 0
9:   x_{t−1} = (1/√αt) (xt − ((1−αt)/√(1−ᾱt)) ϵθ(xt, t)) + √βt zt
10: end for
11: return x0

D.2 ALTERNATIVES OF LEARNING THE IMPLICIT DISTRIBUTION

Another possible statistical distance is based on conditional transport (CT) (Zheng & Zhou, 2021), which is proposed to balance the mode-seeking and mode-covering behaviors when fitting an empirical data distribution. In this setting, we use the same generator $G_\psi$ as before, but instead of a discriminator, we use a conditional distribution $\pi_\eta$, parameterized by $\eta$, to find an optimized mapping between the samples of $p$ and $q$, and a critic $\phi$ to measure the point-to-point cost $c_\phi$ in the feature space. The generator, the conditional distribution, and the critic are trained by the following objective $\mathcal{L}^{\text{CT}}_{T_{\text{trunc}}}$:

$$\min_{\psi,\eta}\max_{\phi}\; \mathbb{E}_{x\sim q(x_{T_{\text{trunc}}})}\Big[\mathbb{E}_{G_\psi(z)\sim\pi_\eta(G_\psi(z)\,|\,x_{T_{\text{trunc}}})}\, c_\phi\big(x_{T_{\text{trunc}}}, G_\psi(z)\big)\Big] + \mathbb{E}_{z\sim p(z)}\Big[\mathbb{E}_{x_{T_{\text{trunc}}}\sim\pi_\eta(x_{T_{\text{trunc}}}\,|\,G_\psi(z))}\, c_\phi\big(x_{T_{\text{trunc}}}, G_\psi(z)\big)\Big]. \quad (16)$$

Similar to (14), we fit TDPM-CT with the following loss:

$$\mathcal{L}^{\text{CT}}_{\text{TDPM}} = \mathcal{L}_{\text{simple\_trunc}} + \lambda\, \mathcal{L}^{\text{CT}}_{T_{\text{trunc}}}. \quad (17)$$
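As a rough illustration of (16), below is a simplified batch-level sketch. It is our own approximation of Zheng & Zhou (2021), not their exact implementation; in particular, parameterizing $\pi_\eta$ via a softmax over navigator scores and using the squared critic-feature distance as $c_\phi$ are assumptions made for illustration.

```python
# Simplified batch sketch of the CT objective in (16); our own approximation of
# Zheng & Zhou (2021). `critic_feat` maps images to features defining the cost
# c_phi; `navigator` maps pairwise feature differences to scalar scores for pi_eta.
import torch
import torch.nn.functional as F

def ct_loss(real, fake, critic_feat, navigator):
    fr, ff = critic_feat(real), critic_feat(fake)    # [B, D] critic features
    cost = torch.cdist(fr, ff) ** 2                  # pairwise cost c_phi, [B, B]
    scores = navigator(fr.unsqueeze(1) - ff.unsqueeze(0)).squeeze(-1)  # [B, B]
    # Forward CT transports each real point to the fakes; backward CT reverses it.
    forward_ct = (F.softmax(scores, dim=1) * cost).sum(1).mean()
    backward_ct = (F.softmax(scores, dim=0) * cost).sum(0).mean()
    return forward_ct + backward_ct
```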
We empirically find that this objective yields no significant performance difference from the GAN objective in (14), as long as the generator is well trained.

D.3 CONDITIONAL TRUNCATED DIFFUSION PROBABILISTIC MODELS

For conditional generation, we extend (14) and derive a conditional version of TDPM:

$$\mathcal{L}_{\text{cTDPM}} = \mathcal{L}^{c}_{\text{simple\_trunc}} + \lambda\, \mathcal{L}^{c}_{T_{\text{trunc}}}, \quad (18)$$

where $\mathcal{L}^{c}_{\text{simple\_trunc}}$ trains the conditional diffusion model with

$$\mathcal{L}^{c}_{\text{simple\_trunc}} = \mathbb{E}_{c}\,\mathbb{E}_{t, x_0|c, \epsilon_t}\big[\,\|\epsilon_t - \epsilon_\theta(x_t, c, t)\|^2\,\big], \quad t \sim \text{Unif}(1, 2, \ldots, T_{\text{trunc}}), \; \epsilon_t \sim \mathcal{N}(0, I), \quad (19)$$

and the truncated distribution loss $\mathcal{L}^{c}_{T_{\text{trunc}}}$ can be fitted with GAN,

$$\min_\psi \max_\phi\; \mathbb{E}_{c}\Big[\mathbb{E}_{x\sim q(x_{T_{\text{trunc}}}|c)}[\log D_\phi(x\,|\,c)] + \mathbb{E}_{z\sim p(z)}\big[\log\big(1 - D_\phi(G_\psi(z, c)\,|\,c)\big)\big]\Big], \quad (20)$$

or with CT,

$$\min_{\psi,\eta}\max_{\phi}\; \mathbb{E}_{c}\Big[\mathbb{E}_{x\sim q(x_{T_{\text{trunc}}}|c)}\big[\mathbb{E}_{G_\psi(z,c)\sim\pi_\eta(G_\psi(z,c)\,|\,x_{T_{\text{trunc}}},c)}\, c_\phi\big(x_{T_{\text{trunc}}}, G_\psi(z,c)\big)\big] + \mathbb{E}_{z\sim p(z)}\big[\mathbb{E}_{x_{T_{\text{trunc}}}\sim\pi_\eta(x_{T_{\text{trunc}}}\,|\,G_\psi(z,c),c)}\, c_\phi\big(x_{T_{\text{trunc}}}, G_\psi(z,c)\big)\big]\Big]. \quad (21)$$

D.4 ANALYSIS ON TOY EXPERIMENTS

Although we present image experiments in the main paper, our studies first justified our method on synthetic toy data as a proof of concept. We adopt representative 2D synthetic datasets used in prior works (Gulrajani et al., 2017; Zheng & Zhou, 2021), including Swiss Roll, Double Moons, and 8-modal and 25-modal Gaussian mixtures with equal component weights. We use an empirical sample set X consisting of |X| = 2,000 samples and illustrate the generated samples after 5,000 training epochs. We divide the range [−10, 10] into 20 bins along both the x and y axes to approximate the empirical distributions $\hat{p}_\theta$ and $\hat{q}$, and report the corresponding forward KL $D_{\mathrm{KL}}(\hat{q}\,\|\,\hat{p}_\theta)$ as the quantitative evaluation metric (as sketched in the code example below).

Figure 8 shows the results on the Swiss Roll data. We present a short chain with T = 2 and a longer chain with T = 5 to show the impact of the number of diffusion steps. The first row shows that the data distribution is diffused with accumulated noise, and with more steps the diffused distribution becomes closer to an isotropic Gaussian distribution. As one can see, truncating the diffusion chain to a short length results in a clear gap between $q(x_{T_{\text{trunc}}})$ and $\mathcal{N}(0, I)$. When DDPM (shown in the second row) samples from the isotropic Gaussian distribution, it becomes hard to recover the original data distribution from pure noise with only a few steps. Although DDPM improves slightly with a few more steps (T = 5), as long as $q(x_T)$ is not close to Gaussian, DDPM can hardly recover the data distribution. By contrast, as shown in the third and fourth rows, TDPM successfully approximates the non-Gaussian $q(x_{T_{\text{trunc}}})$ with its implicit generator, and the remaining part of the truncated chain is gradually recovered by the denoising steps. From both the visualizations and $D_{\mathrm{KL}}(\hat{q}\,\|\,\hat{p}_\theta)$, we can see that TDPM is able to fit every step in such short chains. TDPM-GAN and TDPM-CT both succeed in fitting $p_\theta(x_{T_{\text{trunc}}})$, but the latter fits slightly better when the diffusion length is 2. When the length increases to 5, fitting the implicit distribution with GAN becomes easier. This observation demonstrates a benefit of combining diffusion models and GANs: if the implicit generator is sufficiently powerful to model $q(x_{T_{\text{trunc}}})$, then the number of steps needed can be compressed to a small number; on the contrary, if the implicit generator cannot capture the distribution, more steps are needed to facilitate the fitting of the data distribution.
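The grid-based forward-KL metric referenced above is straightforward to reproduce; below is a minimal NumPy sketch with our own function and variable names (a small constant smooths empty bins, a choice of ours rather than a detail stated in the text).

```python
# Minimal sketch of the grid-based forward KL for the 2D toy data: histogram both
# sample sets on a 20x20 grid over [-10, 10]^2 and compare the estimates.
import numpy as np

def grid_forward_kl(data_samples, model_samples, bins=20, lo=-10.0, hi=10.0, eps=1e-10):
    edges = np.linspace(lo, hi, bins + 1)
    q, _, _ = np.histogram2d(data_samples[:, 0], data_samples[:, 1], bins=[edges, edges])
    p, _, _ = np.histogram2d(model_samples[:, 0], model_samples[:, 1], bins=[edges, edges])
    q = q / q.sum() + eps   # normalized histogram of q-hat, smoothed
    p = p / p.sum() + eps   # normalized histogram of p-hat-theta, smoothed
    return float((q * np.log(q / p)).sum())
```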
As shown in Figures 9-11, the 8-modal Gaussian mixture is more similar to an isotropic Gaussian after being diffused, so DDPM can recover a distribution similar to the data within 5 steps. On the 25-modal Gaussian mixture, we observe that GAN does not suffer from mode collapse and provides a better approximation than CT, which results in better recovery of the data distribution in the final step.

D.5 ADDITIONAL ABLATION STUDIES

Using pre-trained diffusion backbones: Different from the default setting, here we put the implicit model of TDPM+ trained at $t = T_{\text{trunc}}$ and a pre-trained DDPM model¹ into the same sampling pipeline. In this case we do not need to spend any time training the diffusion part ourselves, and only need to train the implicit model for $t = T_{\text{trunc}}$. As shown in Table 5, when combined with a pre-trained DDPM for $t < T_{\text{trunc}}$, the generation performance of TDPM trained under this two-step procedure is comparable to TDPM trained end-to-end.

¹The pre-trained checkpoints are provided by: https://github.com/pesser/pytorch_diffusion

Sensitivity to noise schedule: Nichol & Dhariwal (2021) show that the noise schedule affects the training of DDPM. Here we examine whether TDPM is sensitive to the choice of noise schedule. We compare the linear schedule with the cosine schedule, which adds noise in a milder manner. The results on CIFAR-10 are reported in Table 6 and suggest that TDPM is not sensitive to the choice between these two schedules.

On the choice of the truncation step: As the diffused distribution facilitates the learning of the implicit generator $G_\psi$ (Arjovsky & Bottou, 2017), we observe that increasing the number of diffusion steps consistently improves the FID of TDPM. A natural question is at which step we should truncate the diffusion chain. We study the signal-to-noise ratio (SNR) of different diffusion steps. Based on $q(x_t\,|\,x_0) = \mathcal{N}(\sqrt{\bar\alpha_t}\,x_0, (1-\bar\alpha_t) I)$, we calculate the SNR as

$$\text{SNR} = \frac{\sqrt{\bar\alpha_t}}{\sqrt{1-\bar\alpha_t}}, \qquad \bar\alpha_t = \prod_{i=1}^{t}(1-\beta_i).$$

We visualize the SNR evolution across time steps $t > 0$ in Figure 12, where we observe that the SNR decays rapidly in the first 100 steps (a code sketch of this diagnostic appears at the end of this subsection). According to previous studies (Arjovsky & Bottou, 2017), injecting noise into the data distribution can smoothen the support of the data distribution and facilitate GAN training. The SNR change in this interval indicates that injecting noise at levels $t \in [1, 100]$ brings the most significant improvement for GAN training. When the step is greater than 200, the change in SNR is no longer significant and the SNR itself is close to zero, which indicates that the implicit model might not be very informative there, though it is easier to train. Our experimental observations in Figure 3 also justify this conclusion: when training a GAN at $T_{\text{trunc}} = 4$, the required number of iterations is similar to training it on clean data, while training the GAN model at $T_{\text{trunc}} = 99$ is significantly facilitated. For $T_{\text{trunc}} > 100$, we empirically find that training a GAN at the truncation point converges faster than training the diffusion model for $t < T_{\text{trunc}}$.

Comparison of model efficiency: To complement the results in Tables 1 and 2, we provide detailed model sizes and generation times on a V100 GPU, summarized in Table 7. TDPM has an increased total number of parameters, as it involves a discriminator to help train the implicit model, but its sampling efficiency gain is also evident.
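The SNR diagnostic referenced above can be reproduced in a few lines; a minimal sketch for the linear DDPM schedule (function and variable names are ours):

```python
# Minimal sketch of the SNR diagnostic: SNR(t) = sqrt(abar_t) / sqrt(1 - abar_t)
# for the linear DDPM schedule (beta_1 = 1e-4, beta_T = 0.02, T = 1000).
import torch

def snr_schedule(T=1000, beta_1=1e-4, beta_T=0.02):
    betas = torch.linspace(beta_1, beta_T, T)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)   # abar_t = prod_{i<=t} (1 - beta_i)
    return alpha_bar.sqrt() / (1.0 - alpha_bar).sqrt()

snr = snr_schedule()  # snr[:100] decays rapidly; beyond t ~ 200 it is near zero
```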
D.6 EXPERIMENTAL SETTINGS

D.6.1 MODEL ARCHITECTURE

Generator: Our generator structure strictly follows the U-Net structure (Ronneberger et al., 2015) used in DDPM, improved DDPM, and ADM (Ho et al., 2020; Nichol & Dhariwal, 2021; Dhariwal & Nichol, 2021), which consists of multiple ResNet blocks (He et al., 2016) with attention blocks (Vaswani et al., 2017) injected in the bottleneck. Please refer to these papers for more details on the architecture. A key difference between our model and previous diffusion models is that our model also trains this U-Net as an extra implicit generator $G_\theta$ that takes a latent variable $z \sim \mathcal{N}(0, I)$ and a fixed time index $t = T_{\text{trunc}}+1$ as input. However, this does not change the generator architecture. We parameterize $G_\theta$ with the same U-Net architecture for simplicity, and the time embedding for $t = T_{\text{trunc}}+1$ is specified to be trained with the implicit loss shown in (12) and (16). We also tested using an all-zero time embedding for $t = T_{\text{trunc}}+1$ and found no clear difference. For our TDPM+ results, the generator $G_\psi$ takes a StyleGAN2 architecture (Karras et al., 2020b), and there is no time embedding in $G_\psi$; the increase in generator parameters is caused by separating the implicit model from the denoising U-Net. Note that the generator is trained with the GAN loss and without the specially designed adaptive augmentation of Karras et al. (2020a). For the detailed model architecture, please refer to the corresponding paper or their GitHub repository: https://github.com/NVlabs/stylegan2-ada-pytorch.

Discriminator: Similar to Xiao et al. (2022), we adopt the discriminator architecture used in Karras et al. (2020b), but without the time-step input. The discriminator judges whether $x_{T_{\text{trunc}}}$ comes from the diffused distribution $q(x_{T_{\text{trunc}}})$ or the implicit generative distribution $p_\theta(x_{T_{\text{trunc}}})$. Please refer to Appendix C of Xiao et al. (2022) for the detailed design.

Navigator: Training with $\mathcal{L}^{\text{CT}}_{T_{\text{trunc}}}$ involves an extra module named the navigator (Zheng & Zhou, 2021). We strictly follow the architecture used in Zheng & Zhou (2021), where the navigator is an MLP taking pairwise feature distances as inputs. No time embedding is used in the navigator, as it is only used for training at $t = T_{\text{trunc}}$. The features are extracted from the layer before the final scalar output. Please refer to their Appendix D for detailed information.

Architecture for text-to-image experiments: We adopt the 1.45B-parameter LDM model (Rombach et al., 2022) that is pre-trained on the LAION-400M dataset (Schuhmann et al., 2021). The LDM model consists of a KL-regularized auto-encoder with downsampling factor 8 (resolution 256 → 32), a U-Net in the latent space, and a BERT (Devlin et al., 2018) text encoder that transforms raw text into a sequence of 1280-dimensional embeddings. We only fine-tune the latent model in our experiments. In the training of the truncated part, the discriminator takes the first half of the U-Net (the downsampling backbone) with a linear prediction head on top of it.

Architecture for toy experiments: The generator stacks 4 linear layers with 128 hidden units each. Each intermediate layer is equipped with a time-embedding layer and is followed by a softplus activation. The discriminator and navigator use the same architecture, but without time-embedding layers and with leaky ReLU as the activation function.
D.6.2 TRAINING CONFIGURATIONS

Datasets: We use the CIFAR-10 (Krizhevsky et al., 2009), LSUN-Bedroom, and LSUN-Church (Yu et al., 2015) datasets for unconditional generation in the main experiments. Additionally, we use CelebA (Liu et al., 2015) and CelebA-HQ (Lee et al., 2020) for complementary justification. For text-to-image experiments, we use CUB-200 (Welinder et al., 2010) and MS-COCO (Lin et al., 2014). The images consist of 32 × 32 pixels for CIFAR-10. For the other datasets, we center-crop along the short edge and resize to the target resolution (64 × 64 for CelebA; 256 × 256 for the others).

Diffusion schedule: For all datasets, we strictly follow the diffusion process used in our backbone models and instantiate the truncated diffusion schedule by taking the first $T_{\text{trunc}}$ diffusion rates $\{\beta_1, \ldots, \beta_{T_{\text{trunc}}}\}$. For example, if our goal is to fit a model with NFE = 50 by truncating the diffusion process used in Ho et al. (2020) ($\beta_1 = 10^{-4}$, $\beta_T = 0.02$, $T = 1000$), we first initialize $\beta_1, \beta_2, \ldots, \beta_{1000}$ and then take the first 49 steps to complete the truncation (one implicit step plus $T_{\text{trunc}} = 49$ denoising steps; a code sketch of this truncation appears at the end of this subsection).

Optimization: We train our models using the Adam optimizer (Kingma & Ba, 2015), where most of the hyper-parameters match the setting in Xiao et al. (2022); we slightly modify the generator learning rate to match the setting in Ho et al. (2020), as shown in Table 8. We train our models on V100 GPUs, with CUDA 10.1 and PyTorch 1.7.1. The training takes approximately 2 days on CIFAR-10 with 4 GPUs, and a week on CelebA-HQ and LSUN-Church with 8 GPUs.

Table 8: Optimization hyper-parameters.

                                                      CIFAR10   CelebA   CelebA-HQ   LSUN
Initial learning rate for discriminator               1e-4      1e-4     1e-4        1e-4
Initial learning rate for navigator (if applicable)   1e-4      1e-4     1e-4        1e-4
Initial learning rate for generator                   1e-5      1e-5     2e-5        2e-5
Adam optimizer β1                                     0.5       0.5      0.5         0.5
Adam optimizer β2                                     0.9       0.9      0.9         0.9
EMA                                                   0.9999    0.9999   0.9999      0.9999
Batch size                                            128       128      64          64
# of training iterations                              800k      800k     0.5M        2.4M (bedroom) / 1.2M (church)
# of GPUs                                             4         8        8           8

For TDPM+, where we use a StyleGAN2 generator as $G_\psi$, we directly use its original training hyper-parameters and train the model in parallel with the diffusion model. For TLDM, we set the base learning rate to $10^{-5}$ and the mini-batch size to 64. For the ImageNet1K-64×64 experiments, we use a StyleGAN-XL generator as $G_\psi$ and strictly follow all the default training hyper-parameters. To simplify the implementation and save computation, instead of applying the default progressive growing pipeline 16×16 → 32×32 → 64×64, we directly train the implicit model on 64×64 images corrupted at $T_{\text{trunc}}$. Without the progressive growing pipeline, the StyleGAN-XL result shown in Table 3 is clearly worse than the progressive one reported in their paper (FID 1.51). However, when used as the implicit model of TDPM, the final performance of TDPM becomes competitive with this result.

Evaluation: When evaluating the sampling time, we use models trained on CIFAR-10 and generate a batch of 128 samples. When evaluating FID and the Recall score, following convention, we use 50k generated samples for CIFAR-10, LSUN-Bedroom, and LSUN-Church; 30k samples for CelebA-HQ (since the CelebA-HQ dataset contains only 30k samples); and 30k samples for the text-to-image datasets. The Recall scores are calculated with the recipe of Kynkäänniemi et al. (2019). In the sampling stage, we follow our backbone to apply the same guidance in the diffusion part ($t < T_{\text{trunc}}$) if applicable.
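Returning to the diffusion-schedule paragraph above, a minimal sketch of the truncation under the linear DDPM schedule (function names are ours):

```python
# Minimal sketch of the schedule truncation: instantiate the backbone's full
# schedule, then keep only its first T_trunc diffusion rates.
import torch

def truncated_betas(t_trunc, T=1000, beta_1=1e-4, beta_T=0.02):
    full = torch.linspace(beta_1, beta_T, T)   # beta_1, ..., beta_1000 (linear schedule)
    return full[:t_trunc]

betas = truncated_betas(49)  # NFE = 50: one implicit step + 49 denoising steps
```

And the sampling-time measurement described in the evaluation paragraph can be done with a simple GPU-timing routine; a minimal sketch assuming a CUDA device, where `sampler` is any callable producing a batch (e.g., the sampler sketched in Appendix D.1):

```python
# Minimal sketch of measuring the sampling time (s/image) with a batch of 128.
import time
import torch

def seconds_per_image(sampler, batch_size=128, image_shape=(3, 32, 32), n_runs=5):
    shape = (batch_size, *image_shape)
    sampler(shape)                    # warm-up run, excludes one-time overhead
    torch.cuda.synchronize()          # wait for queued GPU kernels to finish
    start = time.perf_counter()
    for _ in range(n_runs):
        sampler(shape)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / (n_runs * batch_size)
```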
For the LDM backbone specifically, the guidance applied is classifier-free guidance (Ho & Salimans, 2022) with scale 1.5, and there are no DDIM steps for TLDM (a code sketch of this guidance appears at the end of this appendix).

D.7 ADDITIONAL RESULTS ON UNCONDITIONAL GENERATION

Figure 14: Qualitative results of TDPM on LSUN-Church (256 × 256), with Ttrunc = 99, 49, and 4. Note that NFE = Ttrunc + 1 in TDPM. Each group presents generated samples from pθ(x0) (left) and pθ(xTtrunc) (right).

Figure 15: Analogous qualitative results to Figure 14 on LSUN-Bedroom, produced by TDPM.

Figure 16: Analogous qualitative results to Figure 14 on CelebA-HQ, produced by TDPM.

Figure 18: Analogous qualitative results to Figure 14 on LSUN-Bedroom, produced by TDPM-CT.

Figure 19: Analogous qualitative results to Figure 14 on CelebA-HQ, produced by TDPM-CT.

D.8 ADDITIONAL RESULTS ON TEXT-TO-IMAGE GENERATION

Figure 20: Additional text-to-image generation results with different text prompts, produced by TLDM with Ttrunc = 49. Prompts include: "A white and gray bird with black wings."; "An airplane flying over a body of water."; "A sign reads 'TDPM'."; "Busy city street at dusk with sun setting."; "The bagel is put in a square plate."; "The bathroom has a big mirror."; "A cluster of flower on the wooden table."
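The classifier-free guidance referenced at the start of this appendix combines conditional and unconditional noise predictions at each reverse step; a minimal sketch in our own notation (not the LDM code itself), where `null_cond` denotes the empty-text embedding:

```python
# Minimal sketch of classifier-free guidance (Ho & Salimans, 2022) with scale 1.5.
import torch

def guided_eps(eps_model, x_t, t, cond, null_cond, scale=1.5):
    e_cond = eps_model(x_t, t, cond)         # noise prediction given the text condition
    e_uncond = eps_model(x_t, t, null_cond)  # noise prediction given a null condition
    return e_uncond + scale * (e_cond - e_uncond)
```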
Summary Of The Paper

This paper aims to improve the sampling speed of diffusion-based generative models by minimizing the number of reverse steps. Instead of using a distillation technique, this paper truncates the diffusion process, stopping the noise injection before the samples become pure white noise. Then, an implicit generative model is employed to model this hidden noisy distribution for sampling. Experiments on unconditional and text-conditional generation tasks show that the proposed method performs reasonably well in the case of a limited number of reverse steps.

Strengths And Weaknesses

Strengths:
- The problem tackled in this paper is important in both academic and practical scenarios. The proposed method, combining the diffusion process and implicit generative models, is also reasonable, since the hidden noisy distribution after the truncated diffusion process seems to be a quite unimodal distribution, which will be well estimated by GANs.
- Experiments on unconditional and text-conditional generation tasks show that TDPM performs reasonably well even in the case of a very limited number of reverse steps.

Weaknesses:
- In my opinion, the main weakness of this work is the insufficient comparison to previous work. To improve the sampling speed in the DDPM framework, there are several methods published in ICLR last year. This manuscript briefly discusses the limitations of DD-GAN and progressive distillation, but neither of them is empirically compared to TDPM. This makes it difficult to evaluate the real value of TDPM in practical scenarios.

Detailed comments:
- As shown in Eq. (14), the authors try to train a network by jointly optimizing the denoising objective and the adversarial loss. This may hurt the stability of optimization for large-scale diffusion models. Instead of joint optimization, what about using a two-phase approach? First, the standard denoising objective is used to pre-train the diffusion model. Then, we truncate the diffusion process and train GANs to match the prior to the aggregated posterior.
- I failed to find insights from recasting TDPM as an adversarial auto-encoder, since this reinterpretation seems fairly straightforward. Honestly, even if the derivation is straightforward, it would be great to give more insights through the reinterpretation.
- It would be much better to compare the proposed method with progressive distillation or DD-GAN. In my understanding, DD-GAN was not proven to work well on ImageNet-scale datasets, but progressive distillation works well on many large-scale datasets.

There are some minor comments on the manuscript:
- I couldn't find $\mathcal{L}^{\textrm{CT}}_{T_{\textrm{trunc}}}$ in the main section. It first appears in the appendix.
- Figure 5 shows some samples having artifacts caused by watermarks in the training set. It would be better to include clean samples.

Clarity, Quality, Novelty And Reproducibility

This manuscript is generally well written. I am able to follow many technical details to understand the proposed method. There are some minor issues, summarized in the section above. In terms of novelty, this is not the first work to combine different types of generative models, since DD-GAN already shows the potential that considering implicit generative models in the diffusion process reduces the forward or reverse steps. In terms of reproducibility, I did not find any unusual components needed to implement the proposed method, so the numbers reported in the paper should be reproducible.
As we keep all the diffusion steps before time Ttrunc in TDPM the same as those in DDPM, we combine L̃Ttrunc with all the loss terms of DDPM before time Ttrunc in (3) to define the TDPM loss as LTDPM := ∑Ttrunc t=1 Lt−1(θ) + L̃Ttrunc(ψ), L̃Ttrunc(ψ) := D (q(xTtrunc)||pψ(xTtrunc)) , (8) We note while in general pψ(xTtrunc) in TDPM is intractable, we can employ a deep neural networkbased generator Gψ to generate a random sample in a single step via xTtrunc = Gψ(z), z ∼ N (0, I). (9) We will discuss later that we may simply let ψ = θ to avoid adding more parameters. 3.3 TDPM AS DIFFUSION-BASED ADVERSARIAL AUTO-ENCODER Following the terminology of AAE, let us define the prior as pψ(xTtrunc), the decoder (likelihood) as pθ(x0 |xTtrunc) := ∫ . . . ∫ [∏Ttrunc t=1 pθ(xt−1 |xt) ] dxTtrunc−1 . . . dx1, (10) which is empowered by a reverse diffusion chain of length Ttrunc, and the encoder (variational posterior) as q(xTtrunc |x0). Thus we can view q(xTtrunc) defined in (7) as the aggregated posterior (Hoffman & Johnson, 2016; Tomczak & Welling, 2018). In addition to imposing an auto-encoding data-reconstruction loss, the key idea of the AAE (Makhzani et al., 2015) is to also match the aggregated posterior to a fixed prior. This idea differs AAE from a VAE that regularizes the autoencoder by matching the variational posterior to a fixed prior under the KL divergence. To this end, we introduce a diffusion-based AAE (Diffusion-AAE), whose loss function is defined as LDiffusion-AAE = −Eq(x0)Eq(xTtrunc |x0) log pθ(x0 |xTtrunc) +D(q(xTtrunc))||pψ(xTtrunc)). (11) Diffusion-AAE has two notable differences from a vanilla AAE: 1) its encoder is fixed and has no learnable parameters, while its prior is not fixed and is optimized to match the aggregated posterior, and 2) its decoder is a reverse diffusion chain, with Ttrunc stochastic layers all parameterized by θ. Note in general as the likelihood in (10) is intractable, the first loss term in (11) is intractable. However, the loss of Diffusion-AAE is upper bounded by the loss of TDPM, as described below. Theorem 1. The Diffusion-AAE loss in (11) is upper bounded by the TDPM loss in (8): LDiffusion-AAE ≤ LTDPM. 3.4 MATCHING THE PRIOR TO AGGREGATED POSTERIOR Via the loss term L̃Ttrunc := D (q(xTtrunc)||pψ(xTtrunc)) in (8), we aim to match the prior pψ(xTtrunc) to the aggregated posterior q(xTtrunc) in TDPM. While we have an analytic density function for neither p nor q, we can easily draw random samples from both of them. Thus, we explore the use of two different types of statistical distances that can be estimated from samples of both q and p. We empirically show that TDPM can achieve good performance regardless of which distance is used for optimization. One possible statistical distance is based on the idea of GANs (Goodfellow et al., 2014; Arjovsky et al., 2017; Bińkowski et al., 2018), which are widely used to learn implicit distributions from empirical data. In this setting, we use a generator Gψ(·) : Rd → Rd to transform samples from an isotropic Gaussian p(z) into samples that approximate the corrupted data, and a discriminator Dϕ(·) : Rd → [0, 1] to distinguish between the samples from the corrupted data distribution q(xTtrunc |x0) and the implicit generative distribution pψ(xTtrunc). The generator and the discriminator are trained by the following objective LGANTtrunc : min ψ max ϕ Ex∼q(xTtrunc )[logDϕ(x)] + Ez∼p(z) [log(1−Dϕ(Gψ(z)))]. 
3.5 TRAINING ALGORITHM As the objective in Equation 8 is a sum of different terms, following DDPM (Ho et al., 2020) to fix the terms $\Sigma_\theta(x_t, t) = \sigma_t^2 I$, we can simplify $\frac{1}{T_{\text{trunc}}} \sum_{t=1}^{T_{\text{trunc}}} L_{t-1}$ as an expectation defined as $L_{\text{simple\_trunc}} = \mathbb{E}_{t, x_0, \epsilon_t}\big[\|\epsilon_t - \epsilon_\theta(x_t, t)\|^2\big], \quad t \sim \text{Unif}(1, 2, \ldots, T_{\text{trunc}}), \quad \epsilon_t \sim \mathcal{N}(0, I), \quad (13)$ where $\epsilon_t$ is the noise injected at a uniformly sampled timestep index $t$, $x_t = \sqrt{\bar\alpha_t}\, x_0 + \sqrt{1 - \bar\alpha_t}\, \epsilon_t$ is a noisy image at time $t$, and $\epsilon_\theta$ is a denoising U-Net that predicts the noise in order to refine the noisy image $x_t$. Therefore the final simplified version of (8) is constructed as $L^{\text{GAN}}_{\text{TDPM}} = L_{\text{simple\_trunc}} + \lambda L^{\text{GAN}}_{T_{\text{trunc}}}. \quad (14)$ While $\lambda$, the weight of $L^{\text{GAN}}_{T_{\text{trunc}}}$, can be tuned, we fix it as one for simplicity. Here the TDPM objective consists of two parts: the denoising part $\epsilon_\theta$ is focused on denoising the truncated chain, getting updated from $L_{\text{simple\_trunc}}$, while the implicit part $G_\psi$ is focused on minimizing $D\big(q(x_{T_{\text{trunc}}}) \,\|\, p_\psi(x_{T_{\text{trunc}}})\big)$, getting updated from $L^{\text{GAN}}_{T_{\text{trunc}}}$. An interesting finding of this paper is that we do not necessarily need to introduce a separate set of parameters $\psi$ for the generator $G_\psi$: we can simply reuse the same parameters $\theta$ of the reverse diffusion model (i.e., let $\psi = \theta$) without clearly hurting the empirical performance. This suggests that the reverse diffusion process from $T$ to $T_{\text{trunc}}$ can be effectively approximated by a single step using the same network architecture and parameters as the reverse diffusion steps from $T_{\text{trunc}}$ to $0$. Therefore, we provide two configurations to parameterize the implicit distribution. 1) To save parameters, we let the implicit generator and the denoising model share the same U-Net parameters but use different timestep indices. Specifically, we first use $x_{T_{\text{trunc}}} = G_\psi(x_T) = \epsilon_\theta(x_T, t = T_{\text{trunc}} + 1)$, where $x_T \sim \mathcal{N}(0, I)$, to generate a noisy image at time $T_{\text{trunc}}$. 2) We further explore employing a different model, e.g., StyleGAN2 (Karras et al., 2020a), for the implicit generator, which provides better performance at the cost of a larger model size for obtaining $x_{T_{\text{trunc}}}$. Then, for $t = T_{\text{trunc}}, \ldots, 1$, we iteratively refine it as $x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\big(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}\, \epsilon_\theta(x_t, t)\big) + \beta_t z_t$, where $z_t \sim \mathcal{N}(0, I)$ when $t > 1$ and $z_1 = 0$. This process is depicted in Algorithms 1 and 2 in the Appendix. For the implementation details, please refer to Appendix D.6 and our code at https://github.com/JegZheng/truncated-diffusion-probabilistic-models.
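As a concrete illustration of how (13) and (14) combine, the sketch below performs one joint update, using a common non-saturating variant of the GAN loss in (12). It reuses `q_sample` and `alphas_bar` from the earlier snippet; `unet`, `disc`, the zero-based schedule indexing, and the optimizer setup are our placeholder assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def tdpm_train_step(unet, disc, opt_g, opt_d, x0, T_trunc, lam=1.0):
    """One TDPM update: denoising loss for t < T_trunc plus a GAN loss at t = T_trunc.

    opt_g holds the U-Net (generator/denoiser) parameters; opt_d holds the
    discriminator parameters.
    """
    B = x0.size(0)
    # --- L_simple_trunc: epsilon-prediction at a uniformly sampled truncated step ---
    t = torch.randint(0, T_trunc, (B,))
    eps = torch.randn_like(x0)
    abar = alphas_bar[t].view(-1, 1, 1, 1)
    xt = abar.sqrt() * x0 + (1 - abar).sqrt() * eps
    loss_simple = F.mse_loss(unet(xt, t), eps)

    # --- GAN term at the truncation point: reuse the U-Net as G_psi with t = T_trunc + 1 ---
    z = torch.randn_like(x0)
    x_trunc_fake = unet(z, torch.full((B,), T_trunc + 1))
    x_trunc_real = q_sample(x0, torch.full((B,), T_trunc - 1))

    # Discriminator update (non-saturating logistic losses).
    d_loss = F.softplus(-disc(x_trunc_real)).mean() + F.softplus(disc(x_trunc_fake.detach())).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator/denoiser update: L_simple_trunc + lambda * generator GAN loss.
    g_loss = loss_simple + lam * F.softplus(-disc(x_trunc_fake)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return loss_simple.item(), d_loss.item()
```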
3.6 RELATED WORK In our previous discussions, we have related TDPM to several existing works such as DDPM and AAE. A detailed discussion of other related works is provided in Appendix B. 4 EXPERIMENTS We aim to demonstrate that TDPM can generate good samples faster by using fewer steps of reverse diffusion. We use different image datasets to test our method and follow the same settings as other diffusion models (Ho et al., 2020; Nichol & Dhariwal, 2021; Dhariwal & Nichol, 2021; Rombach et al., 2022) for our backbones. We also have two ways to set up the implicit generator that starts the reverse diffusion: one reuses the denoising network, and the other uses a separate network. We try both ways for generating images without any labels. For generating images from text, we use the first way with the LDM backbone. We provide comprehensive details, toy examples, and additional experimental results in Appendices D.4-D.8. We use FID (lower is better) and Recall (higher is better) to measure the fidelity and diversity, respectively, of the generated images. We use CIFAR-10 (Krizhevsky et al., 2009), LSUN-Bedroom, and LSUN-Church (Yu et al., 2015) datasets in unconditional experiments, and CUB-200 (Welinder et al., 2010) and MS-COCO (Lin et al., 2014) for text-to-image experiments. The images consist of 32 x 32 pixels for CIFAR-10 and 256 x 256 pixels for the other datasets.

4.1 EFFICIENCY IN BOTH TRAINING AND SAMPLING We first look at the results on CIFAR-10. We use DDPM (Ho et al., 2020) or improved DDPM (Nichol & Dhariwal, 2021) as our backbones. We use 4, 49, or 99 steps of reverse diffusion, which correspond to 5, 50, or 100 function evaluations (NFE). For the implicit generator, we either reuse the denoising U-Net or use a StyleGAN2 network (respectively, we call them TDPM and TDPM+). For comparison, we also include DDIM (Song et al., 2020) and DDGAN (Xiao et al., 2022). The comparison with a more diverse set of baselines can be found in Table 9 in Appendix D.7. Table 1 shows that our TDPM can get good FID with fewer NFE. TDPM+ can get even better FID, and is the best when NFE=100. Compared with TDPM with 0 steps of reverse diffusion (a GAN with DDPM's U-Net as generator and StyleGAN2 as discriminator) and StyleGAN2, TDPM with more than 0 steps of reverse diffusion has better recall, and its FID is as good as that of StyleGAN2-ADA (a GAN with data augmentation for better training). This means TDPM can largely avoid the mode-missing problem in GANs. We show some examples of generated images on CIFAR-10 in Figure 13.

Table 1: Results of unconditional generation on CIFAR-10, with the best FID and Recall in each group marked in bold. To compare TDPM (TTrunc=0) with GAN-based methods, we use the DDPM backbone as generator and a StyleGAN2 discriminator.

Method                   NFE    FID(lower)  Recall(higher)
DDPM backbone:
  DDPM                   1000   3.21        0.57
  TDPM (TTrunc=99)       100    3.10        0.57
  TDPM+ (TTrunc=99)      100    2.88        0.58
  DDIM                   50     4.67        0.53
  TDPM (TTrunc=49)       50     3.30        0.57
  TDPM+ (TTrunc=49)      50     2.94        0.58
  TDPM (TTrunc=4)        5      3.34        0.57
  TDPM+ (TTrunc=4)       5      3.21        0.57
Improved DDPM backbone:
  Improved DDPM          4000   2.90        0.58
  TDPM (TTrunc=99)       100    2.97        0.57
  TDPM+ (TTrunc=99)      100    2.83        0.58
  Improved DDPM+DDIM     50     3.92        0.55
  TDPM (TTrunc=49)       50     3.11        0.57
  TDPM+ (TTrunc=49)      50     2.96        0.58
  TDPM (TTrunc=4)        5      3.51        0.55
  TDPM+ (TTrunc=4)       5      3.17        0.57
GAN-based:
  DDGAN                  4      3.75        0.57
  StyleGAN2              1      8.32        0.41
  StyleGAN2-ADA          1      2.92        0.49
  TDPM (TTrunc=0)        1      7.34        0.46

Table 2: Results on LSUN-Church and LSUN-Bedroom (resolution 256 x 256). Similar to Table 1, TDPM (TTrunc=0) uses the DDPM backbone for the generator.

Method                   NFE    Church FID  Bedroom FID
DDPM backbone:
  DDPM                   1000   7.89        4.90
  TDPM (TTrunc=99)       100    4.33        3.95
  TDPM+ (TTrunc=99)      100    3.98        3.67
  DDIM                   50     10.58       6.62
  TDPM (TTrunc=49)       50     5.35        4.10
  TDPM+ (TTrunc=49)      50     4.34        3.98
  TDPM (TTrunc=4)        5      4.98        4.16
  TDPM+ (TTrunc=4)       5      4.89        4.09
ADM backbone:
  ADM                    1000   3.49        1.90
  ADM+DDIM               250    6.45        2.31
  TDPM (TTrunc=99)       100    4.41        2.24
  TDPM+ (TTrunc=99)      100    3.61        1.88
  TDPM (TTrunc=49)       50     4.57        2.92
  TDPM+ (TTrunc=49)      50     3.67        1.89
  TDPM (TTrunc=4)        5      5.61        7.92
  TDPM+ (TTrunc=4)       5      4.66        4.01
GAN-based:
  DDGAN                  4      5.25        -
  StyleGAN2              1      3.93        3.98
  StyleGAN2-ADA          1      4.12        7.89
  TDPM (TTrunc=0)        1      4.77        5.24

Table 3: Results on ImageNet-64x64, evaluated with FID and Recall. TDPM+ is built with a pre-trained ADM and an implicit model trained at TTrunc using StyleGAN-XL.

Method                   NFE    FID(lower)  Recall(higher)
  ADM                    1000   2.07        0.63
  TDPM+ (TTrunc=99)      100    1.62        0.63
  TDPM+ (TTrunc=49)      50     1.77        0.58
  TDPM+ (TTrunc=4)       5      1.92        0.53
  StyleGAN-XL (wo PG)    1      3.54        0.51

Figure 2: Random generation results of TDPM+ (TTrunc=4) on ImageNet-64x64.
We also check how fast TDPM can train and sample. For training, we count how many images TDPM needs to see to fit both the truncated diffusion chain and the implicit prior well. Figure 3 shows that when we use fewer steps of reverse diffusion, the diffusion part needs less time to train, but the implicit prior needs more time to train because it has to model a harder distribution; e.g., fitting the implicit prior with 4 diffusion steps needs a similar time to fitting it directly on the data. When we use 99 steps of reverse diffusion, the diffusion chain and the implicit prior need similar time to train, and the whole model trains faster than both GAN and DDPM. For sampling, we compare TDPM with 0, 1, 4, 49, or 99 steps of reverse diffusion. We report both FID and the sampling time (s/image) on one NVIDIA V100 GPU in Figure 4. When we use 4 steps of reverse diffusion, the FID is much lower than with 0 steps, while the sampling time is only slightly longer. When we use more steps of reverse diffusion, the FID goes down slowly, but the sampling time goes up linearly. When we use 99 steps of reverse diffusion, the FID of TDPM is better than that of DDPM with 1000 steps. Because the FID does not change much when we use more steps of reverse diffusion, we suggest using a small number of steps, such as 4 or more, to balance the quality and speed of generation.

Figure 3: The required iterations (measured with iterated images, log(kimgs)) to converge in training, for TTrunc = 0 (GAN), TTrunc = 4, TTrunc = 49, TTrunc = 99, and DDPM. The iterations for $t < T_{\text{trunc}}$ ($\epsilon_\theta$) and $t = T_{\text{trunc}}$ ($G_\psi$) are marked in red and blue, respectively.

Figure 4: Evolution of FID and corresponding GPU time (s/image) across different timesteps in the sampling stage. Data points: TTrunc=0 (GAN, 1000x speed-up): FID 7.34 (0.03 s); TTrunc=1 (500x): 4.47 (0.06 s); TTrunc=4 (200x): 3.41 (0.15 s); TTrunc=49 (20x): 3.30 (1.52 s); TTrunc=99 (10x): 3.10 (3.13 s); DDPM: 3.27 (31.03 s).

4.2 RESULTS ON HIGHER-RESOLUTION AND MORE DIVERSE IMAGE DATASETS To test the performance of the proposed truncation method on high-resolution images, we train TDPM using two different diffusion models, DDPM (Ho et al., 2020) and ADM (Dhariwal & Nichol, 2021), as backbones on two datasets of 256 x 256 resolution, LSUN-Church and LSUN-Bedroom (Yu et al., 2015). We compare the FIDs of TDPM with those of the backbone models and some state-of-the-art GANs in Table 2. The results show that TDPM can generate images of similar quality with much smaller truncation steps TTrunc, which means that it can produce images significantly faster than the backbone models. We also visualize the samples from the implicit distribution $x_{T_{\text{trunc}}} \sim p_\theta(x_{T_{\text{trunc}}})$ that TDPM generates and the corresponding $x_0$ obtained at the end of the reverse chain in Figure 5. We further evaluate TDPM on ImageNet-1K (at resolution 64x64), which exhibits high diversity. Here we adopt the TDPM+ configuration, where we use a pre-trained ADM (Dhariwal & Nichol, 2021) checkpoint for $t < T_{\text{trunc}}$ and train a StyleGAN-XL (Sauer et al., 2022) based implicit model at $t = T_{\text{trunc}}$ (for simplicity, we choose not to use the progressive growing pipeline of StyleGAN-XL; see Appendix D.6 for more details). We compare both FID and Recall with our backbone models in Table 3 and show example generations in Figure 2. Similar to our observations in Table 1, TDPM has good generation quality with small truncation steps TTrunc.
Moreover, properly training an implicit model at TTrunc can further improve the performance of the backbone.

Table 4: Numerical results of Figure 6. The GPU time of sampling (s/image) is measured on one NVIDIA A100.

                      CUB-Bird           MS-COCO
NFE     GPU time   LDM      TLDM     LDM     TLDM
5       0.15       100.81   10.59    48.41   16.70
50      1.57       30.85    7.32     18.25   7.47
100     4.10       11.07    6.79     8.20    7.22
250     11.21      6.82     6.72     6.30    6.29
1000    41.09      6.68     -        6.29    -

Figure 7: Example text-to-image generation results of LDM and TLDM (i.e., TDPM with LDM backbone) fine-tuned on CUB-200 (top row, prompt "A bird with brown wings, black back, and red head.") or MS-COCO (bottom row, prompt "A green train is coming down the tracks."), setting the number of times iterating through the reverse diffusion U-Net to 100 (left column, TTrunc=99), 50 (middle column, TTrunc=49), or 5 (right column, TTrunc=4).

4.3 TEXT-TO-IMAGE GENERATION Besides unconditional generation tasks, for text-to-image generation we develop TLDM, a conditional version of TDPM that leverages as its backbone the LDM of Rombach et al. (2022), a state-of-the-art publicly released model with 1.45B parameters pre-trained on LAION-400M (Schuhmann et al., 2021). LDM consists of a fixed auto-encoder for pixel generation and a latent-diffusion module to connect text and image embeddings. Here we fine-tune its latent-diffusion part on the CUB-200 and MS-COCO datasets with 25K and 100K steps, respectively, as the baseline. Similar to the unconditional case, we fine-tune with the LDM loss for $t < T_{\text{trunc}}$ and the GAN loss for $t = T_{\text{trunc}}$. More details about the setting can be found in Appendix D.6. The results of LDM with different DDIM sampling steps and TLDM with different truncated steps are summarized in Figure 6 and Table 4. Similar to applying diffusion directly in the original image-pixel space, when the diffusion chain is applied in the latent space, we observe that TLDM can achieve comparable or better performance than LDM even though it has shortened the diffusion chain of LDM to have much fewer reverse diffusion steps. For the case where the NFE is as small as 5, we note that although the FID of TLDM becomes higher due to using fewer diffusion steps, the images generated by TLDM at NFE=5 are still visually appealing, as shown in Figure 7. Compared with 50 and 250 steps using LDM, the sampling speed of TLDM using 5 steps is 10 and 50 times faster, respectively, while largely preserving generation quality. We provide additional text-to-image generation results of TLDM in Appendix D.8.

5 CONCLUSION In this paper, we investigate how to reduce the trajectory length of the diffusion chain to achieve efficient sampling without loss of generation quality. We propose truncated diffusion probabilistic modeling (TDPM), which truncates the length of a diffusion chain. In this way, TDPM can use a much shorter diffusion chain, while being required to start the reverse denoising process from an intractable distribution. We propose to learn such a distribution with an implicit generative model powered by the same U-Net used for denoising diffusion, and validate multiple ways to learn the implicit distribution to ensure the robustness of the proposed TDPM. We reveal that TDPM can be cast as an adversarial auto-encoder with a learnable implicit prior.
We conduct extensive experiments on both synthetic and real image data to demonstrate the effectiveness of TDPM in terms of both sample quality and efficiency, where the diffusion chain can be shortened to have only a few steps. ACKNOWLEDGMENTS H. Zheng and M. Zhou acknowledge the support of NSF-IIS 2212418 and IFML. A PROOF Proof of Theorem 1. As the last terms in both losses are the same, we only need to show that the first term in (11) is smaller than or equal to $L_0 + \sum_{t=2}^{T_{\text{trunc}}} L_{t-1}$ in (8). Using Jensen's inequality, we have

$$
\begin{aligned}
&-\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\text{trunc}}}|x_0)} \log p_\theta(x_0\,|\,x_{T_{\text{trunc}}}) \\
&\quad= -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\text{trunc}}}|x_0)} \log \mathbb{E}_{q(x_{1:T_{\text{trunc}}-1}|x_0,x_{T_{\text{trunc}}})}\left[\frac{p(x_{0:T_{\text{trunc}}-1}\,|\,x_{T_{\text{trunc}}})}{q(x_{1:T_{\text{trunc}}-1}\,|\,x_0,x_{T_{\text{trunc}}})}\right] \\
&\quad\le -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\text{trunc}}}|x_0)}\mathbb{E}_{q(x_{1:T_{\text{trunc}}-1}|x_0,x_{T_{\text{trunc}}})} \log \frac{p(x_{0:T_{\text{trunc}}-1}\,|\,x_{T_{\text{trunc}}})}{q(x_{1:T_{\text{trunc}}-1}\,|\,x_0,x_{T_{\text{trunc}}})} \\
&\quad= -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{1:T_{\text{trunc}}}|x_0)} \log \left[\frac{p(x_{0:T_{\text{trunc}}-1})}{q(x_{1:T_{\text{trunc}}}\,|\,x_0)} \cdot \frac{q(x_{T_{\text{trunc}}}\,|\,x_0)}{p(x_{T_{\text{trunc}}})}\right] \\
&\quad= \left(-\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{1:T_{\text{trunc}}}|x_0)} \log \frac{p(x_{0:T_{\text{trunc}}-1})}{q(x_{1:T_{\text{trunc}}}\,|\,x_0)}\right) - \mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\text{trunc}}}|x_0)} \log \frac{q(x_{T_{\text{trunc}}}\,|\,x_0)}{p(x_{T_{\text{trunc}}})} \\
&\quad= \Big(\textstyle\sum_{t=1}^{T_{\text{trunc}}} L_{t-1} + L_{T_{\text{trunc}}}\Big) - L_{T_{\text{trunc}}} = \sum_{t=1}^{T_{\text{trunc}}} L_{t-1}, \qquad (15)
\end{aligned}
$$

where the second-to-last equality follows the same derivation of the ELBO in Ho et al. (2020). B RELATED WORK Diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020) employ a forward Markov chain to diffuse the data to noise and learn the reversal of such a diffusion process. With the idea of exploiting Markov operations (Goyal et al., 2017; Alain et al., 2016; Bordes et al., 2017), diffusion models achieve great success and inspire a variety of tasks, including image generation and audio generation (Kong et al., 2020; Chen et al., 2020; Jolicoeur-Martineau et al., 2020; Vahdat et al., 2021). Recently, plenty of studies have been proposed to generalize diffusion models to continuous-time diffusion and to improve the likelihood estimation of diffusion models (Vincent, 2011; Song & Ermon, 2020; 2019; Nichol & Dhariwal, 2021; Song et al., 2021b;a; Kingma et al., 2021). Another mainstream direction is to improve the sampling efficiency of diffusion models, which are known for their enormous number of sampling steps. Luhman & Luhman (2021) improve diffusion processes with knowledge distillation, and San-Roman et al. (2021) propose a learnable adaptive noise schedule. Song et al. (2020) and Kong & Ping (2021) exploit non-Markovian diffusion processes and shorten the denoising segments. Jolicoeur-Martineau et al. (2021) and Huang et al. (2021) use better SDE solvers for continuous-time models. Aside from these works, recently other types of generative models, such as VAEs (Kingma & Welling, 2013), GANs (Goodfellow et al., 2014), and autoregressive models (van den Oord et al., 2016), have been incorporated into diffusion models. They are shown to benefit each other (Xiao et al., 2022; Pandey et al., 2022; Meng et al., 2021) and have a closer relation to our work. Xiao et al. (2022) consider the use of implicit models (Huszár, 2017; Mohamed & Lakshminarayanan, 2016; Tran et al., 2017; Yin & Zhou, 2018; Li & Malik, 2018) to boost the efficiency of diffusion models, where they deploy implicit models in each denoising step, which makes training more difficult as the number of diffusion steps increases. Pandey et al. (2022) build diffusion models on top of the output of VAEs for refinement. Our work is also related if one views TDPM as a diffusion model on top of an implicit model, where the implicit model can be parameterized with the U-Net or a separate network.
C DISCUSSION Potential societal impacts: This paper proposes the truncated diffusion probabilistic model as a novel type of diffusion-based generative model. The truncated part can be trained as an implicit generative model, such as a GAN, jointly or independently with the diffusion part. The capacities of truncated diffusion probabilistic models are competitive with existing diffusion-based ones, and their efficiency is largely improved. Alongside these positive effects, some negative aspects could also be seen, depending on how the models are used. One major concern is that the truncated diffusion technique proposed in this paper could potentially be a way to hack existing diffusion models if the implicit models are maliciously used to fit the intermediate steps. For example, for some existing diffusion models, for safety concerns, the model's capacity to generate private data needs to be locked by hiding the diffusion ending point in an unknown distribution. The technique of TDPM could be used to crack these existing online diffusion models by providing intermediate noisy images or by fine-tuning the first few steps with TDPM to unlock that capacity. Besides, the capacity to generate good images can also be misused to generate ill-intentioned images at a much lower cost. Discussions: In this work, we mainly focus on reducing the length of the diffusion chain of a finite-time diffusion model. Our model has shown its effectiveness in improving finite-time diffusion models, and it is non-trivial to further explore our model on continuous-time diffusion models (Song et al., 2021b). Moreover, while in this paper DDPM is the primary baseline, TDPM can also be built on other recent diffusion models. While $p_\theta(x_{T_{\text{trunc}}})$ is parameterized as an implicit distribution, it can also be formulated as a semi-implicit distribution (Yin & Zhou, 2018), which allows it to be approximated with a Gaussian generator. Xiao et al. (2022) also present a closely related work. While we share the same spirit of reducing the length of the diffusion chain, the two strategies are not in conflict with each other; in future work we will look into integrating these different strategies. There also exist plenty of options for approximating $p_\theta(x_{T_{\text{trunc}}})$. When the diffusion chain is truncated to be short, the implicit distribution may still be multi-modal and needs to be fitted with different methods depending on the properties that we need. For example, in order to capture all modes, a VAE would be preferred, as done in Pandey et al. (2022). Below we provide an alternative method, proposed in Zheng & Zhou (2021), to fit the truncated distribution. Besides the training, it is also an open question whether TDPM can be incorporated into more advanced architectures for further improvements, and we leave this exploration for future work. D ALGORITHM DETAILS AND COMPLEMENTARY RESULTS Below we provide additional algorithm details and complementary experimental results. D.1 ADDITIONAL ANALYSIS ON THE PARAMETERIZATION OF THE IMPLICIT GENERATOR As shown in Section 3, in general, the objective of TDPM consists of the training of the diffusion model $\epsilon_\theta$ (a U-Net architecture (Ronneberger et al., 2015)) with the simple DDPM loss $L_{\text{simple}}$ and the training of an implicit prior model $G_\psi$ with the objective $L^{\text{GAN}}_{T_{\text{trunc}}}$.
Without loss of generality, in the main paper we present two configurations to parameterize the implicit part for $t = T_{\text{trunc}}$: 1) the implicit generator shares the same U-Net architecture used for $0 < t < T_{\text{trunc}}$; 2) the implicit generator is instantiated with a separate network. Below we explain these two configurations (the second is denoted TDPM+ in the main paper). Configuration 1): At $t = T_{\text{trunc}}$, the U-Net generates the noisy image at the truncated step: $x_{T_{\text{trunc}}} = \epsilon_\theta(x_{T_{\text{trunc}}+1}, t = T_{\text{trunc}} + 1)$, where $x_{T_{\text{trunc}}+1} \sim \mathcal{N}(0, I)$ is a pure-noise image whose pixels are i.i.d. sampled from a standard normal. For $t = T_{\text{trunc}}, T_{\text{trunc}}-1, \ldots, 1$, the same U-Net iteratively refines the noisy images by letting $x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\big(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}\, \epsilon_{t-1}\big) + \beta_t z_t$, with $z_{t>1} \sim \mathcal{N}(0, I)$ and $z_1 = 0$, where $\epsilon_{t-1} = \epsilon_\theta(x_t, t)$ is the noise predicted by the U-Net. Under this setting, the U-Net-based generator plays two roles at the same time, and the training will be more challenging than when using two different generators. However, as $T_{\text{trunc}}$ gets larger, the distribution $p(x_{T_{\text{trunc}}})$ becomes more similar to a noise distribution, and generating the noisy images becomes more like generating noise. In this case, handling both tasks, generating noisy images and predicting noise, becomes easier for the generator. Configuration 2) (TDPM+): Unlike the previous configuration, where the implicit generator at step $t = T_{\text{trunc}}$ shares the same U-Net architecture with the steps $t < T_{\text{trunc}}$, another way is to parameterize $G_\psi$ with a separate generator. Although this configuration increases the total number of parameters of the generative model, it gives the model better flexibility in the training stage. For example, the two networks can be trained in parallel, or one can leverage a pre-trained model. In our paper, we conduct the experiments using the StyleGAN2 generator architecture (Karras et al., 2020b) for $t = T_{\text{trunc}}$, resulting in an increase of 19M and 28M generator parameters when handling 32 x 32 and 256 x 256 images, respectively. The processes of training and sampling for these configurations are summarized in Algorithms 1 and 2.

Algorithm 1 Training
1: repeat
2:   $x_0 \sim q(x_0)$
3:   $t \sim \text{Uniform}(\{1, \ldots, T_{\text{trunc}}\})$
4:   $\epsilon_t \sim \mathcal{N}(0, I)$, $z \sim \mathcal{N}(0, I)$
5:   Update with (14)
6: until converged

Algorithm 2 Sampling
1: $x_{T_{\text{trunc}}+1} \sim \mathcal{N}(0, I)$
2: if $G_\psi$ shared with $\epsilon_\theta$ then
3:   $x_{T_{\text{trunc}}} = \epsilon_\theta(x_{T_{\text{trunc}}+1}, T_{\text{trunc}} + 1)$
4: else
5:   $x_{T_{\text{trunc}}} = G_\psi(x_{T_{\text{trunc}}+1})$
6: end if
7: for $t = T_{\text{trunc}}, \ldots, 1$ do
8:   $z_t \sim \mathcal{N}(0, I)$ if $t > 1$, else $z_1 = 0$
9:   $x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\big(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}\, \epsilon_\theta(x_t, t)\big) + \beta_t z_t$
10: end for
11: return $x_0$
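For concreteness, a minimal PyTorch sketch of Algorithm 2 is given below, assuming the `betas`/`alphas_bar` schedule from the earlier snippets and zero-based indexing; `eps_model` stands in for $\epsilon_\theta$ and `g_psi` for an optional separate implicit generator. Note the $\beta_t z_t$ term follows Algorithm 2 as written; the common DDPM choice is $\sqrt{\beta_t}\, z_t$.

```python
import torch

@torch.no_grad()
def tdpm_sample(eps_model, shape, T_trunc, g_psi=None):
    """Algorithm 2: one-step implicit prior at T_trunc, then T_trunc denoising steps."""
    z = torch.randn(shape)                                    # x_{T_trunc + 1} ~ N(0, I)
    if g_psi is None:                                         # reuse the U-Net as the generator
        x = eps_model(z, torch.full((shape[0],), T_trunc + 1))
    else:                                                     # or use a separate implicit model
        x = g_psi(z)
    for t in range(T_trunc, 0, -1):                           # t = T_trunc, ..., 1
        alpha_t = 1.0 - betas[t - 1]
        abar_t = alphas_bar[t - 1]
        eps = eps_model(x, torch.full((shape[0],), t))
        mean = (x - (1 - alpha_t) / (1 - abar_t).sqrt() * eps) / alpha_t.sqrt()
        noise = torch.randn_like(x) if t > 1 else torch.zeros_like(x)
        x = mean + betas[t - 1] * noise                       # beta_t * z_t, per Algorithm 2
    return x
```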
D.2 ALTERNATIVES OF LEARNING THE IMPLICIT DISTRIBUTION Another possible statistical distance is based on conditional transport (CT) (Zheng & Zhou, 2021), which is proposed to balance the mode-seeking and mode-covering behaviors when fitting an empirical data distribution. In this setting, we use the same generator $G_\psi$ as before, but instead of a discriminator, we use a conditional distribution $\pi_\eta$, parameterized by $\eta$, to find an optimized mapping between the samples of $p$ and $q$, and a critic $\phi$ to measure the point-to-point cost $c_\phi$ in the feature space. The generator, the conditional distribution, and the critic are trained by the following objective $L^{\text{CT}}_{T_{\text{trunc}}}$: $\min_{\psi,\eta} \max_\phi\; \mathbb{E}_{x \sim q(x_{T_{\text{trunc}}})}\big[\mathbb{E}_{G_\psi(z) \sim \pi_\eta(G_\psi(z)\,|\,x_{T_{\text{trunc}}})}\, c_\phi(x_{T_{\text{trunc}}}, G_\psi(z))\big] + \mathbb{E}_{z \sim p(z)}\big[\mathbb{E}_{x_{T_{\text{trunc}}} \sim \pi_\eta(x_{T_{\text{trunc}}}\,|\,G_\psi(z))}\, c_\phi(x_{T_{\text{trunc}}}, G_\psi(z))\big]. \quad (16)$ Similar to (14), we fit TDPM-CT with the following loss: $L^{\text{CT}}_{\text{TDPM}} = L_{\text{simple\_trunc}} + \lambda L^{\text{CT}}_{T_{\text{trunc}}}. \quad (17)$ We empirically find that this objective shows no significant performance difference from the GAN objective in Equation 14, as long as the generator is well trained. D.3 CONDITIONAL TRUNCATED DIFFUSION PROBABILISTIC MODELS For conditional generation, we extend (14) and derive a conditional version of TDPM: $L^{c}_{\text{TDPM}} = L^{c}_{\text{simple\_trunc}} + \lambda L^{c}_{T_{\text{trunc}}}, \quad (18)$ where $L^{c}_{\text{simple\_trunc}}$ aims to train the conditional diffusion model with $L^{c}_{\text{simple\_trunc}} = \mathbb{E}_c \mathbb{E}_{t, x_0|c, \epsilon_t}\big[\|\epsilon_t - \epsilon_\theta(x_t, c, t)\|^2\big], \quad t \sim \text{Unif}(1, 2, \ldots, T_{\text{trunc}}), \quad \epsilon_t \sim \mathcal{N}(0, I), \quad (19)$ and the truncated-distribution term $L^{c}_{T_{\text{trunc}}}$ can be fitted with GAN: $\min_\psi \max_\phi\; \mathbb{E}_c\big[\mathbb{E}_{x \sim q(x_{T_{\text{trunc}}}|c)}[\log D_\phi(x\,|\,c)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D_\phi(G_\psi(z, c)\,|\,c))]\big], \quad (20)$ or with CT: $\min_{\psi,\eta} \max_\phi\; \mathbb{E}_c\big[\mathbb{E}_{x \sim q(x_{T_{\text{trunc}}}|c)}\big[\mathbb{E}_{G_\psi(z,c) \sim \pi_\eta(G_\psi(z,c)\,|\,x_{T_{\text{trunc}}}, c)}\, c_\phi(x_{T_{\text{trunc}}}, G_\psi(z, c))\big] + \mathbb{E}_{z \sim p(z)}\big[\mathbb{E}_{x_{T_{\text{trunc}}} \sim \pi_\eta(x_{T_{\text{trunc}}}\,|\,G_\psi(z,c), c)}\, c_\phi(x_{T_{\text{trunc}}}, G_\psi(z, c))\big]\big]. \quad (21)$ D.4 ANALYSIS ON TOY EXPERIMENTS Although we present image experiments in the main paper, we first justified our method on synthetic toy data as a proof of concept. We adopt representative 2D synthetic datasets used in prior works (Gulrajani et al., 2017; Zheng & Zhou, 2021), including Swiss Roll, Double Moons, 8-modal, and 25-modal Gaussian mixtures with equal component weights. We use an empirical sample set $\mathcal{X}$, consisting of $|\mathcal{X}| = 2{,}000$ samples, and illustrate the generated samples after 5000 training epochs. We take 20 grids in the range $[-10, 10]$ for both the x and y axes to approximate the empirical distributions $\hat{p}_\theta$ and $\hat{q}$, and report the corresponding forward KL $D_{\text{KL}}(\hat{q}\,\|\,\hat{p}_\theta)$ as the quantitative evaluation metric. Figure 8 shows the results on the Swiss Roll data. We present a short chain with $T = 2$ and a longer chain with $T = 5$ to show the impact of the number of diffusion steps. The first row shows that the data distribution is diffused with accumulated noise, and with more steps the diffused distribution becomes closer to an isotropic Gaussian distribution. As one can see, truncating the diffusion chain to a short length results in a clear gap between $q(x_{T_{\text{trunc}}})$ and $\mathcal{N}(0, I)$. When DDPM (shown in the second row) samples from the isotropic Gaussian distribution, it becomes hard to recover the original data distribution from pure noise with only a few steps. Although DDPM improves slightly with a few more steps ($T = 5$), as long as $q(x_T)$ is not close to Gaussian, DDPM can hardly recover the data distribution. By contrast, as shown in the third and fourth rows, TDPM successfully approximates the non-Gaussian $q(x_{T_{\text{trunc}}})$ with its implicit generator, and the remaining part of the truncated chain is gradually recovered by the denoising steps. From both the visualizations and $D_{\text{KL}}(\hat{q}\,\|\,\hat{p}_\theta)$, we can see that TDPM is able to fit every step in such short chains. TDPM-GAN and TDPM-CT both succeed in fitting $p_\theta(x_{T_{\text{trunc}}})$, but the latter fits slightly better when the diffusion length is 2. When the length increases to 5, fitting the implicit distribution with GAN becomes easier. This observation demonstrates a benefit of combining diffusion models and GANs. If the implicit generator is sufficiently powerful to model $q(x_{T_{\text{trunc}}})$, then the number of steps needed can be compressed to a small number. On the contrary, if the implicit generator cannot capture the distribution, we need more steps to facilitate the fitting of the data distribution.
As shown in Figures 9-11, the 8-modal Gaussian is more similar to an isotropic Gaussian after being diffused; thus DDPM can recover a distribution similar to the data with 5 steps. On the 25-Gaussians, we observe that GAN does not suffer from mode collapse and provides a better approximation than CT, which results in better data-distribution recovery in the final step. D.5 ADDITIONAL ABLATION STUDIES Using pre-trained diffusion backbones: Different from the default setting, here we put the implicit model of TDPM+ trained at $t = T_{\text{trunc}}$ and a pre-trained DDPM model (checkpoints provided at https://github.com/pesser/pytorch_diffusion) in the same sampling pipeline. In this case we do not need to spend any time pre-training the DDPM model and only need to train the implicit model for $t = T_{\text{trunc}}$. As shown in Table 5, when combined with a pre-trained DDPM for $t < T_{\text{trunc}}$, the generation performance of TDPM trained under this two-step procedure is comparable to TDPM trained end-to-end. Sensitivity to noise schedule: Nichol & Dhariwal (2021) show that the noise schedule affects the training of DDPM. Here we examine whether TDPM is sensitive to the choice of noise schedule. We compare the linear schedule with the cosine schedule, which adds noise in a milder manner. The results on CIFAR-10 are reported in Table 6, which suggests that TDPM is not sensitive to the choice between these two schedules. On the choice of truncated step: As the diffused distribution could facilitate the learning of the implicit generator $G_\psi$ (Arjovsky & Bottou, 2017), we observe that by increasing the number of diffusion steps, the FID of TDPM consistently gets better. A natural question is at which step we should truncate the diffusion chain. We study the signal-to-noise ratio (SNR) at different diffusion steps. Based on $q(x_t\,|\,x_0) = \mathcal{N}(\sqrt{\bar\alpha_t}\, x_0, (1-\bar\alpha_t) I)$, we calculate the SNR as $\text{SNR} = \frac{\sqrt{\bar\alpha_t}}{\sqrt{1-\bar\alpha_t}}, \quad \bar\alpha_t = \prod_{i=1}^{t} (1-\beta_i).$ We visualize the SNR evolution across time steps $t > 0$ in Figure 12, where we can observe that the SNR rapidly decays in the first 100 steps (a short snippet computing this curve is given at the end of this subsection). According to previous studies (Arjovsky & Bottou, 2017), injecting noise into the data distribution could smoothen the support of the data distribution and facilitate GAN training. The SNR change in this interval indicates that injecting noise at the level of $t \in [1, 100]$ could bring more significant improvement for GAN training. When the step is greater than 200, the change in SNR is no longer significant and the SNR is close to zero, which indicates the implicit model might not be too informative, though it is easier to train. Our experimental observations in Figure 3 also justify this conclusion: when training a GAN at $T_{\text{trunc}} = 4$, the required number of iterations is similar to training it on clean data, whereas training the GAN model at $T_{\text{trunc}} = 99$ is significantly facilitated. For $T_{\text{trunc}} > 100$, we empirically examine training a GAN and find it would converge faster than training the diffusion model for $t < T_{\text{trunc}}$. Comparison of model efficiency: To complement the results in Tables 1-2, we provide detailed model sizes and generation times on a V100 GPU. The results are summarized in Table 7. We can see TDPM has an increase in the total number of parameters, as it involves a discriminator to help train the implicit model, while its sampling efficiency is also obvious.
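As referenced in the truncated-step paragraph above, the following sketch computes the per-step SNR under a linear schedule (DDPM default endpoints, assumed here for illustration), which makes it easy to inspect where the curve flattens when choosing $T_{\text{trunc}}$.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear schedule, DDPM defaults
alphas_bar = torch.cumprod(1.0 - betas, dim=0)
snr = alphas_bar.sqrt() / (1.0 - alphas_bar).sqrt()

# Inspect candidate truncation points: the SNR decays fast for t in [1, 100].
for t in (4, 49, 99, 199, 499):
    print(f"t={t + 1:4d}  SNR={snr[t].item():.3f}")
```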
D.6 EXPERIMENTAL SETTINGS D.6.1 MODEL ARCHITECTURE Generator: Our generator structure strictly follows the U-Net structure (Ronneberger et al., 2015) used in DDPM, improved DDPM, and ADM (Ho et al., 2020; Nichol & Dhariwal, 2021; Dhariwal & Nichol, 2021), which consists of multiple ResNet blocks (He et al., 2016) with Attention blocks (Vaswani et al., 2017) injected in the bottleneck. Please refer to these papers for more details on the architecture. A key difference between our model and previous diffusion models is that our model also trains the U-Net as an extra implicit generator $G_\theta$ that takes a latent variable $z \sim \mathcal{N}(0, I)$ and a fixed time index $t = T_{\text{trunc}} + 1$ as input. However, this does not result in a difference in the generator architecture. We parameterize $G_\theta$ with the same U-Net architecture for simplicity, and the time embedding $t = T_{\text{trunc}} + 1$ is specified to be trained with the implicit loss shown in (12) and (16). We have also tested using an all-zero time embedding for $t = T_{\text{trunc}} + 1$ and found no clear differences. For our TDPM+ results, the generator $G_\psi$ specifically takes a StyleGAN2 architecture (Karras et al., 2020b), and there is no time embedding in $G_\psi$. The increase in generator parameters is caused by separating the implicit model from the denoising U-Net. Note that the generator is trained with the GAN loss and without the specially designed adaptive augmentation of Karras et al. (2020a). For the detailed model architecture, please refer to the corresponding paper or their GitHub repository: https://github.com/NVlabs/stylegan2-ada-pytorch. Discriminator: Similar to Xiao et al. (2022), we adopt the discriminator architecture used in Karras et al. (2020b), but without the time step input. The discriminator determines whether $x_{T_{\text{trunc}}}$ comes from the diffused distribution $q(x_{T_{\text{trunc}}})$ or the implicit generative distribution $p_\theta(x_{T_{\text{trunc}}})$. Please refer to Appendix C of Xiao et al. (2022) for the detailed design. Navigator: Training with $L^{\text{CT}}_{T_{\text{trunc}}}$ involves an extra module named the navigator (Zheng & Zhou, 2021). We strictly follow the architecture used in Zheng & Zhou (2021), where the navigator is an MLP taking the pairwise feature distances as inputs. There is no time embedding used in the navigator, as it is only used for the training at $t = T_{\text{trunc}}$. The features are extracted from the layer before the final scalar output. Please refer to their Appendix D for detailed information. Architecture for text-to-image experiments: We adopt the 1.45B LDM model (Rombach et al., 2022) that is pre-trained on the LAION-400M dataset (Schuhmann et al., 2021). The LDM model consists of a KL-regularized autoencoder with downsampling factor 8 (resolution 256 -> 32), a U-Net in the latent space, and a BERT (Devlin et al., 2018) text encoder that transforms raw text into a sequence of 1280-dimensional embeddings. We only fine-tune the latent model in our experiments. In the training of the truncated part, the discriminator takes the first half of the U-Net (downsampling backbone) with a linear prediction head on top of it. Architecture for toy experiments: The generator uses an architecture stacked with 4 linear layers with 128 hidden units. Each intermediate layer is equipped with a time-embedding layer and is followed by a softplus activation. The discriminator and navigator have the same architecture, without time-embedding layers, and use LeakyReLU as the activation function.
D.6.2 TRAINING CONFIGURATIONS Datasets: We use the CIFAR-10 (Krizhevsky et al., 2009), LSUN-Bedroom, and LSUN-Church (Yu et al., 2015) datasets for unconditional generation in the main experiments. Additionally, we apply CelebA (Liu et al., 2015) and CelebA-HQ (Lee et al., 2020) for complementary justification. For text-to-image experiments, we use CUB-200 (Welinder et al., 2010) and MS-COCO (Lin et al., 2014). The images consist of 32 x 32 pixels for CIFAR-10. For the other datasets, we apply a center crop along the short edge and resize to the target resolution (64 x 64 for CelebA; 256 x 256 for the others). Diffusion schedule: For all datasets, we strictly follow the diffusion process used in our backbone models and instantiate the truncated diffusion schedule by taking the first $T_{\text{trunc}}$ diffusion rates $\{\beta_1, \ldots, \beta_{T_{\text{trunc}}}\}$. For example, if our goal is to fit a model with NFE = 50 by truncating the diffusion process used in Ho et al. (2020) ($\beta_1 = 10^{-4}$, $\beta_T = 0.02$, $T = 1000$), we first initialize $\beta_1, \beta_2, \ldots, \beta_{1000}$ and then take the first 49 steps to complete the truncation (a short sketch of this procedure is given at the end of this subsection). Optimization: We train our models using the Adam optimizer (Kingma & Ba, 2015), where most of the hyperparameters match the setting in Xiao et al. (2022), and we slightly modify the generator learning rate to match the setting in Ho et al. (2020), as shown in Table 8. We train our models using V100 GPUs, with CUDA 10.1 and PyTorch 1.7.1. The training takes approximately 2 days on CIFAR-10 with 4 GPUs, and a week on CelebA-HQ and LSUN-Church with 8 GPUs.

Table 8: Optimization hyper-parameters.

                                                    CIFAR10   CelebA    CelebA-HQ   LSUN
Initial learning rate for discriminator             10^-4     10^-4     10^-4       10^-4
Initial learning rate for navigator (if applicable) 10^-4     10^-4     10^-4       10^-4
Initial learning rate for generator                 1x10^-5   1x10^-5   2x10^-5     2x10^-5
Adam optimizer beta_1                               0.5       0.5       0.5         0.5
Adam optimizer beta_2                               0.9       0.9       0.9         0.9
EMA                                                 0.9999    0.9999    0.9999      0.9999
Batch size                                          128       128       64          64
# of training iterations                            800k      800k      0.5M        2.4M (bedroom) / 1.2M (church)
# of GPUs                                           4         8         8           8

For TDPM+, where we use the StyleGAN2 generator as $G_\psi$, we directly use its original training hyper-parameters and train the model in parallel with the diffusion model. For TLDM, we set the base learning rate to $10^{-5}$ and the mini-batch size to 64. For the ImageNet-1K 64x64 experiments, we use the StyleGAN-XL generator as $G_\psi$ and strictly follow all the default training hyper-parameters. To simplify the implementation and save computation, instead of applying the default progressive growing pipeline 16x16 -> 32x32 -> 64x64, we directly train the implicit model on 64x64 images corrupted at $T_{\text{trunc}}$. Without the progressive growing pipeline, the result of StyleGAN-XL shown in Table 3 is clearly worse than the progressive one reported in their paper (FID 1.51). However, when used as the implicit model of TDPM, the final performance of TDPM becomes competitive with this result. Evaluation: When evaluating the sampling time, we use models trained on CIFAR-10 and generate a batch of 128 samples. When evaluating the FID and recall score, following convention, we use 50k generated samples for CIFAR-10, LSUN-Bedroom, and LSUN-Church, 30k samples for CelebA-HQ (since the CelebA-HQ dataset contains only 30k samples), and 30k samples for the text-to-image datasets. The recall scores are calculated with the recipe in Kynkäänniemi et al. (2019).
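As referenced in the diffusion-schedule paragraph above, here is a minimal sketch of the truncation procedure, assuming the DDPM linear schedule; the helper name and defaults are ours, not from the released code.

```python
import torch

def truncated_schedule(T=1000, beta_1=1e-4, beta_T=0.02, nfe=50):
    """Build the full DDPM linear schedule, then keep only the first T_trunc rates.

    NFE = T_trunc + 1: one call for the implicit generator at T_trunc,
    plus T_trunc denoising calls.
    """
    t_trunc = nfe - 1
    betas = torch.linspace(beta_1, beta_T, T)
    return betas[:t_trunc]

betas = truncated_schedule(nfe=50)                 # 49 diffusion rates for NFE = 50
alphas_bar = torch.cumprod(1.0 - betas, dim=0)
```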
In the sampling stage, we follow our backbone to apply the same guidance in the diffusion part ($t < T_{\text{trunc}}$) if applicable. Specifically, for the LDM backbone, we use classifier-free guidance (Ho & Salimans, 2022) with scale 1.5, and there are no DDIM steps for TLDM.

D.7 ADDITIONAL RESULTS ON UNCONDITIONAL GENERATION

Figure 14: Qualitative results of TDPM on LSUN-Church (256 x 256), with TTrunc = 99, 49, and 4. Note NFE = TTrunc + 1 in TDPM. Each group presents generated samples from $p_\theta(x_0)$ (left) and $p_\theta(x_{T_{\text{trunc}}})$ (right).
Figure 15: Analogous qualitative results to Figure 14 on LSUN-Bedroom. Produced by TDPM.
Figure 16: Analogous qualitative results to Figure 14 on CelebA-HQ. Produced by TDPM.
Figure 18: Analogous qualitative results to Figure 14 on LSUN-Bedroom. Produced by TDPM-CT.
Figure 19: Analogous qualitative results to Figure 14 on CelebA-HQ. Produced by TDPM-CT.

D.8 ADDITIONAL RESULTS ON TEXT-TO-IMAGE GENERATION

Figure 20: Additional text-to-image generation results with different text prompts, produced by TLDM with TTrunc = 49. Prompts shown: "A white and gray bird with black wings."; "An airplan flying over a body of water."; "A sign reads "TDPM"."; "Busy city street at dusk with sun setting."; "The bagel is put in a squre plate."; "The bathroom has a big mirror."; "A cluster of flower on the wooden table."
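As a side note on the guidance mentioned above, the following generic sketch shows how classifier-free guidance (Ho & Salimans, 2022) is commonly applied at each denoising step; it illustrates the standard technique with the scale of 1.5 used here, and the function and argument names are our assumptions rather than the released implementation.

```python
import torch

def guided_eps(eps_model, x, t, text_emb, null_emb, scale=1.5):
    """Classifier-free guidance: extrapolate from the unconditional prediction
    toward the text-conditional one by the guidance scale."""
    eps_uncond = eps_model(x, t, null_emb)   # prediction with the empty-text embedding
    eps_cond = eps_model(x, t, text_emb)     # prediction conditioned on the prompt
    return eps_uncond + scale * (eps_cond - eps_uncond)
```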
1. What is the focus and contribution of the paper on the truncated diffusion probabilistic model?
2. What are the strengths of the proposed approach, particularly in its motivation and extensive analysis?
3. What are the weaknesses of the paper regarding the choice of T_trunc and the potential training difficulties with GAN?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Can you provide more diverse and higher-quality datasets to support your experiments?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The authors propose TDPM, a truncated diffusion probabilistic model that essentially skips diffusion steps by truncating the start/end of the process, stopping at an implicit non-Gaussian distribution which can be sampled from another generative model. The goal is to reduce the number of steps without compromising image quality. The authors also relate the proposed TDPM to an adversarial autoencoder (AAE) in the same way as DDPM is sort of a VAE. Given T_trunc as the end point instead of T, where T_trunc is a lot less than T, the discriminator is thus trying to discriminate between samples from this implicit distribution at T_trunc and from the generator. Interestingly, this generator can even be the same as the denoising network (by specifying an appropriate t). The results seem convincing.

Strengths And Weaknesses
Strengths
- well-motivated approach
- well-written paper
- extensive analysis and discussion

Weaknesses
- the truncation step T_trunc needs to be decided at training time, and it's not clear what the best T_trunc to set is.
- experiments could have been better, with more diverse and higher-quality datasets. The main paper only shows CIFAR-10, LSUN-Church, and LSUN-Bedroom. Something like ImageNet, for example, could be more convincing.
- introducing an adversarial aspect also means the potential training difficulty that comes with GANs.

Clarity, Quality, Novelty And Reproducibility
The paper is easy to understand and the quality of the paper is generally good. I appreciate a lot of the additional information in the supplementary materials, like the toy example, the alternative generator, etc. I think the reproducibility is also pretty good, considering that this is simply adding a GAN to the (start) end of the (reverse) diffusion process, which is easy to implement. The hyper-parameters used are also mentioned in the supp. As for the novelty, as mentioned in the related work, the idea of trying to speed up diffusion has been studied quite a lot, though I believe not exactly in the way proposed here. Most similar, I think, is the recent work "Accelerating Diffusion Models via Early Stop of the Diffusion Process" from Lyu et al., which proposes a similar idea of sampling from an implicit distribution learned with a GAN or VAE, though they seem to be using a pre-trained generator instead of training them together.
ICLR
Title Truncated Diffusion Probabilistic Models and Diffusion-based Adversarial Auto-Encoders Abstract Employing a forward diffusion chain to gradually map the data to a noise distribution, diffusion-based generative models learn how to generate the data by inferring a reverse diffusion chain. However, this approach is slow and costly because it needs many forward and reverse steps. We propose a faster and cheaper approach that adds noise not until the data become pure random noise, but until they reach a hidden noisy-data distribution that we can confidently learn. Then, we use fewer reverse steps to generate data by starting from this hidden distribution that is made similar to the noisy data. We reveal that the proposed model can be cast as an adversarial auto-encoder empowered by both the diffusion process and a learnable implicit prior. Experimental results show even with a significantly smaller number of reverse diffusion steps, the proposed truncated diffusion probabilistic models can provide consistent improvements over the non-truncated ones in terms of performance in both unconditional and text-guided image generations. 1 INTRODUCTION Generating photo-realistic images with probabilistic models is a challenging and important task in machine learning and computer vision, with many potential applications in data augmentation, image editing, style transfer, etc. Recently, a new class of image generative models based on diffusion processes (Sohl-Dickstein et al., 2015) has achieved remarkable results on various commonly used image generation benchmarks (Song & Ermon, 2019; Ho et al., 2020; Song & Ermon, 2020; Song et al., 2021b; Dhariwal & Nichol, 2021), surpassing many existing deep generative models, such as autoregressive models (van den Oord et al., 2016), variational auto-encoders (VAEs) (Kingma & Welling, 2013; Rezende et al., 2014; van den Oord et al., 2017; Razavi et al., 2019), and generative adversarial networks (GANs) (Goodfellow et al., 2014; Radford et al., 2015; Arjovsky et al., 2017; Miyato et al., 2018; Brock et al., 2019; Karras et al., 2019; 2020b). This new modeling class, which includes both score-based and diffusion-based generative models, uses noise injection to gradually corrupt the data distribution into a simple noise distribution that can be easily sampled from, and then uses a denoising network to reverse the noise injection to generate photo-realistic images. From the perspective of score matching (Hyvärinen & Dayan, 2005; Vincent, 2011) and Langevin dynamics (Neal, 2011; Welling & Teh, 2011), the denoising network is trained by matching the score function, which is the gradient of the log-density of the data, of the corrupted data distribution and that of the generator distribution at different noise levels (Song & Ermon, 2019). This training objective can also be formulated under diffusion-based generative models (Sohl-Dickstein et al., 2015; Ho et al., 2020). These two types of models have been further unified by Song et al. (2021b) under the framework of discretized stochastic differential equations. Despite their impressive performance, diffusion-based (or score-based) generative models suffer from high computational costs, both in training and sampling. 
This is because they need to perform a large number of diffusion steps, typically hundreds or thousands, to ensure that the noise injection at each step is small enough for the assumption to hold that both the diffusion and denoising processes have the Gaussian form in the limit of a small diffusion rate (Feller, 1949; Sohl-Dickstein et al., 2015). In other words, when the number of diffusion steps is small or when the rate is large, the Gaussian assumption may not hold well, and the model may not be able to capture the true score function of the data. Therefore, previous works have tried to reduce the number of diffusion steps by using non-Markovian reverse processes (Song et al., 2020; Kong & Ping, 2021), adaptive noise scheduling (San-Roman et al., 2021; Kingma et al., 2021), knowledge distillation (Luhman & Luhman, 2021; Salimans & Ho, 2022), diffusing in a lower-dimensional latent space (Rombach et al., 2022), etc., but they still cannot achieve significant speedup without sacrificing the generation quality. In this paper, we propose a novel way to shorten the diffusion trajectory by learning an implicit distribution to start the reverse diffusion process, instead of relying on a tractable noise distribution. We call our method truncated diffusion probabilistic modeling (TDPM), which is based on the idea of truncating the forward diffusion chain of an existing diffusion model, such as the denoising diffusion probabilistic model (DDPM) of Ho et al. (2020). To significantly accelerate diffusion-based text-to-image generation, we also introduce the truncated latent diffusion model (TLDM), which truncates the diffusion chain of the latent diffusion model (LDM) of Rombach et al. (2022). We note LDM is the latent text-to-image diffusion model behind Stable Diffusion, an open-source project that provides state-of-the-art performance in generating photo-realistic images given text input. By truncating the chain, we can reduce the number of diffusion steps to an arbitrary level, but at the same time, we also lose the tractability of the distribution at the end of the chain. Therefore, we need to learn an implicit generative distribution that can approximate this distribution and provide the initial samples for the reverse diffusion process. We show that this implicit generative distribution can be implemented in different ways, such as using a separate generator network or reusing the denoising network. The former option has more flexibility and can improve the generation quality, while the latter option adds no parameters and can achieve comparable results. We reveal that DDPM relates to a VAE in a similar way as TDPM relates to an adversarial auto-encoder (AAE, Makhzani et al. (2015)). Specifically, DDPM is like a VAE with a fixed encoder and a learnable decoder that both use a diffusion process, together with a predefined prior. TDPM is like an AAE with a fixed encoder and a learnable decoder that both use a truncated diffusion process, together with a learnable implicit prior. Our truncation method has several advantages when we use it to modify DDPM for generating images without text guidance or LDM for generating images with text guidance. First, it can generate samples much faster by using fewer diffusion steps, without sacrificing the generation quality, and sometimes even enhancing it.
Second, it can exploit the cooperation between the implicit model and the diffusion model, as the diffusion model helps the implicit model train by providing noisy data samples, and the implicit model helps the diffusion model reverse by providing better initial samples. Third, it can adapt the truncation level to balance the generation quality and efficiency, depending on the data complexity and the computational resources. For generating images with text guidance, our method can speed up the generation significantly and make it suitable for real-time processing: in the time LDM takes to generate one photo-realistic image, our TLDM can generate more than 50 such images. The main contributions of our paper are as follows: • We introduce TDPM, a new diffusion-based generative model that can shorten the diffusion trajectory by learning an implicit distribution to start the reverse diffusion process, and demonstrate that the learning of the implicit distribution can be achieved in various ways. We further introduce TLDM to significantly accelerate diffusion-based text-to-image generation. • We show TDPM can be formulated as a diffusion-based AAE. • We show that the implicit distribution can be realized by reusing the denoising network for the reverse diffusion process, which can reduce the reverse diffusion steps by orders of magnitude without adding any extra parameters and with comparable generation quality. • We reveal the synergy between the implicit model and the diffusion model, as the diffusion process can simplify the training of implicit models such as GANs, and the implicit model can speed up the reverse diffusion process of the diffusion model. • We show that both TDPM and TLDM can adapt the truncation level, according to the data complexity and the computational resources, to achieve a good balance between the generation quality and the efficiency. 2 PRELIMINARIES ON DIFFUSION MODELS In Gaussian diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020), starting from the data distribution $x_0 \sim q(x_0)$, a pre-defined forward diffusion process $q_t$ produces auxiliary variables $x_{t=1:T}$ by gradually adding Gaussian noise, with variance $\beta_t \in (0, 1)$ at time $t$, as follows: $q(x_1, \ldots, x_T\,|\,x_0) := \prod_{t=1}^{T} q(x_t\,|\,x_{t-1}), \quad q(x_t\,|\,x_{t-1}) := \mathcal{N}(x_t; \sqrt{1-\beta_t}\, x_{t-1}, \beta_t I). \quad (1)$ In the limit of a small diffusion rate (i.e., $\beta_t$ is kept sufficiently small), the reverse distribution $q(x_{t-1}\,|\,x_t)$ also follows a Gaussian distribution (Feller, 1949; Sohl-Dickstein et al., 2015) and can be approximated using a neural-network-parameterized Gaussian distribution $p_\theta$: $p_\theta(x_{t-1}\,|\,x_t) := \mathcal{N}(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(x_t, t)). \quad (2)$ Moreover, with a sufficiently large $T$, the outcome of the diffusion chain $x_T$ will follow an isotropic Gaussian distribution. Thus, with the pre-defined forward (inference) diffusion process and the learned reverse (generative) diffusion process, we can sample from $x_T \sim \mathcal{N}(0, I)$ and run the diffusion process in reverse to get a sample from the data distribution $q(x_0)$. Under the variational inference (Kingma & Welling, 2013; Blei et al., 2017) framework, viewing $q(x_1, \ldots, x_T\,|\,x_0)$ in (1) as the inference network, we can use the evidence lower bound (ELBO) as our learning objective.
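Before writing down the training objective, a quick sketch to make the forward chain in (1) concrete; the linear schedule and the 2-D toy data are our illustrative assumptions. It corrupts samples step by step and checks the result against the closed-form marginal implied by the chain.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # assumed linear schedule (Ho et al., 2020)

def forward_chain(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Apply q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) x_{t-1}, beta_t I) for t steps."""
    x = x0
    for i in range(t):
        x = (1.0 - betas[i]).sqrt() * x + betas[i].sqrt() * torch.randn_like(x)
    return x

# The marginal after t steps is N(sqrt(abar_t) x0, (1 - abar_t) I):
x0 = torch.randn(4096, 2)
t = 200
xt = forward_chain(x0, t)
abar_t = torch.prod(1.0 - betas[:t])
print(xt.std().item(), (abar_t * x0.var() + 1 - abar_t).sqrt().item())  # should be close
```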
Following previous works (Sohl-Dickstein et al., 2015; Ho et al., 2020), the negative ELBO of a diffusion probabilistic model, parameterized by $\theta$, can be expressed as $L_{\text{ELBO}}(\theta) := L_0(\theta) + \sum_{t=2}^{T} L_{t-1}(\theta) + L_T$, where $L_0(\theta) := \mathbb{E}_{q(x_0)} \mathbb{E}_{q(x_1|x_0)}[-\log p_\theta(x_0\,|\,x_1)], \quad (3)$ $L_{t-1}(\theta) := \mathbb{E}_{q(x_0)} \mathbb{E}_{q(x_t|x_0)}\big[D_{\text{KL}}\big(q(x_{t-1}\,|\,x_t, x_0) \,\|\, p_\theta(x_{t-1}\,|\,x_t)\big)\big], \quad t \in \{2, \ldots, T\}, \quad (4)$ $L_T := \mathbb{E}_{q(x_0)}\big[D_{\text{KL}}\big(q(x_T\,|\,x_0) \,\|\, p(x_T)\big)\big], \quad (5)$ where $D_{\text{KL}}(q\,\|\,p) = \mathbb{E}_q[\log q - \log p]$ denotes the Kullback-Leibler (KL) divergence from distribution $p$ to $q$ (the closed-form Gaussian posterior used in (4) is recalled at the end of this section). Generally speaking, diffusion probabilistic models assume the number of diffusion steps $T$ to be sufficiently large to satisfy two conditions: 1) the reverse distribution at each denoising step can be fitted with a Gaussian denoising generator $p_\theta(x_{t-1}\,|\,x_t)$; 2) with a sufficiently small diffusion rate $\beta_t$, the long forward diffusion process will successfully corrupt the data, making $q(x_T\,|\,x_0) \approx \mathcal{N}(0, I)$, and hence $L_T$ becomes approximately zero and depends on neither $x_0$ nor $\theta$. What happens if $T$ is insufficiently large? Given a non-Gaussian data distribution $q(x_0)$, when the number of denoising steps is reduced, the true posterior $q(x_{t-1}\,|\,x_t)$ is not Gaussian and usually intractable (Feller, 1949), resulting in new challenges for current diffusion models. As noted in Xiao et al. (2022), when $\beta_t$ is not sufficiently small, the diffusion step becomes larger and the denoising distribution can be multi-modal and hence too complex to be well fitted by a Gaussian. The authors propose to define $p_\theta(x_{t-1}\,|\,x_t)$ with an implicit generator and substitute the ELBO with $\min_\theta \sum_{t \ge 1} \mathbb{E}_{q(t)}\big[D_{\text{adv}}\big(q(x_{t-1}\,|\,x_t) \,\|\, p_\theta(x_{t-1}\,|\,x_t)\big)\big], \quad (6)$ where $D_{\text{adv}}$ represents a statistical distance that relies on an adversarial training setup. This modified objective can be minimized by leveraging the power of conditional GANs in fitting implicit multimodal distributions (Arjovsky et al., 2017; Goodfellow et al., 2014; Nowozin et al., 2016). While the concept of diffusion has been used, the proposed models in Xiao et al. (2022) are shown to work best only when the number of diffusion steps is limited to as few as four, and start to exhibit deteriorated performance when that number is further increased.
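For reference, the Gaussian posterior that makes each $L_{t-1}$ in (4) analytic, recalled from Ho et al. (2020) and stated here for completeness:

$$q(x_{t-1} \mid x_t, x_0) = \mathcal{N}\big(x_{t-1};\, \tilde{\mu}_t(x_t, x_0),\, \tilde{\beta}_t I\big), \quad \tilde{\mu}_t(x_t, x_0) = \frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1 - \bar{\alpha}_t}\, x_0 + \frac{\sqrt{\alpha_t}\,(1 - \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_t}\, x_t, \quad \tilde{\beta}_t = \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t}\,\beta_t .$$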
The forward process has a simple analytical form as a Gaussian distribution: q(xt |x0) = N ( √ ᾱtx0, (1− ᾱt)I); ᾱt = ∏t i=1 αi, αi = 1− βi. Here, xt is the noisy version of the data x0 at step t, and ᾱt is the cumulative product of the diffusion coefficients αi. The forward chain of length T is designed to be long enough to make the data distribution indistinguishable from Gaussian noise N (0, I). However, a long forward chain also implies a high computational cost for the reverse process, which uses a learned neural network to predict the conditional distribution of the clean data given the noisy one at each step. The proposed TDPM cuts off the last part of the forward chain and only keeps the first Ttrunc steps {β1, β2, ..., βTtrunc} ⊂ {β1, β2, ..., βT }. We choose Ttrunc to be much smaller than T so that we can save a lot of computation time in generation. The benefit of this truncation is illustrated in Figure 1, where the bottom row shows the truncated diffusion chain. We can see that the data are only partially corrupted by noise and still retain some features of the original data. This means that we can recover the data more easily and accurately by applying a few Gaussian denoising steps from the corrupted data. Moreover, we do not change the diffusion rates βt for the first Ttrunc steps, so we do not compromise the quality of the forward and reverse processes between time 0 and Ttrunc. However, truncating the forward chain also introduces a new challenge for the reverse process. Unlike the original chain, where the starting point of the reverse process is xT ∼ N (0, I), the truncated chain has an unknown distribution of the corrupted data at step Ttrunc. This makes it difficult to sample from this distribution and initiate the reverse process. To overcome this challenge, we introduce an implicit generative model that approximates the distribution of the corrupted data by minimizing a divergence measure between the implicit and the true noisy distributions at step Ttrunc. This way, we can use the implicit model to sample the starting point of the reverse process and then apply the learned denoising network to generate the data. 3.2 HAND-CRAFTED TDPM OBJECTIVE FUNCTION Mathematically, recall that the DDPM loss in (3) consists of three terms: L0, ∑T t=2 Lt−1, and LT . The training objective of a conventional diffusion model focuses on terms ∑T t=2 Lt−1 and L0. It assumes LT does not depend on any parameter and will be close to zero by carefully pre-defining the forward noising process such that q(xT |x0) ≈ p(xT ) = N (0, I). When the diffusion chains are truncated at time Ttrunc ≪ T , the forward diffusion ends at time Ttrunc, where the marginal distribution of the forward diffusion-corrupted data can be expressed as q(xTtrunc) := ∫ q(xTtrunc |x0)p(x0)dx0, (7) which takes a semi-implicit form (Yin & Zhou, 2018) whose density function is often intractable. To reverse this truncated forward diffusion chain, we can no longer start the reverse diffusion chain from a known distribution such as N (0, I). To this end, we propose TDPM that starts the reverse chain at time Ttrunc from pψ(xTtrunc), an implicit distribution parameterized by ψ. We match pψ(xTtrunc) to q(xTtrunc) via a loss term as L̃Ttrunc := D (q(xTtrunc)||pψ(xTtrunc)) , where D(q||p) is a statistical distance between distributions q and p, such as the Jensen–Shannon divergence and Wasserstein distance. 
As we keep all the diffusion steps before time T_trunc in TDPM the same as those in DDPM, we combine \tilde L_{T_trunc} with all the loss terms of DDPM before time T_trunc in (3) to define the TDPM loss as

\mathcal{L}_{\text{TDPM}} := \sum_{t=1}^{T_{\text{trunc}}} L_{t-1}(\theta) + \tilde L_{T_{\text{trunc}}}(\psi), \quad \tilde L_{T_{\text{trunc}}}(\psi) := D\left(q(x_{T_{\text{trunc}}})\,\|\,p_\psi(x_{T_{\text{trunc}}})\right). (8)

We note that while p_ψ(x_{T_trunc}) in TDPM is in general intractable, we can employ a deep neural network-based generator G_ψ to generate a random sample in a single step via

x_{T_{\text{trunc}}} = G_\psi(z), \quad z \sim \mathcal{N}(0, I). (9)

We will discuss later that we may simply let ψ = θ to avoid adding more parameters.

3.3 TDPM AS DIFFUSION-BASED ADVERSARIAL AUTO-ENCODER

Following the terminology of AAE, let us define the prior as p_ψ(x_{T_trunc}) and the decoder (likelihood) as

p_\theta(x_0\,|\,x_{T_{\text{trunc}}}) := \int \cdots \int \left[\prod_{t=1}^{T_{\text{trunc}}} p_\theta(x_{t-1}\,|\,x_t)\right] dx_{T_{\text{trunc}}-1} \cdots dx_1, (10)

which is empowered by a reverse diffusion chain of length T_trunc, and the encoder (variational posterior) as q(x_{T_trunc}|x_0). Thus we can view q(x_{T_trunc}) defined in (7) as the aggregated posterior (Hoffman & Johnson, 2016; Tomczak & Welling, 2018). In addition to imposing an auto-encoding data-reconstruction loss, the key idea of the AAE (Makhzani et al., 2015) is to also match the aggregated posterior to a fixed prior. This idea distinguishes AAE from a VAE, which regularizes the auto-encoder by matching the variational posterior to a fixed prior under the KL divergence. To this end, we introduce a diffusion-based AAE (Diffusion-AAE), whose loss function is defined as

\mathcal{L}_{\text{Diffusion-AAE}} = -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\text{trunc}}}|x_0)} \log p_\theta(x_0\,|\,x_{T_{\text{trunc}}}) + D\left(q(x_{T_{\text{trunc}}})\,\|\,p_\psi(x_{T_{\text{trunc}}})\right). (11)

Diffusion-AAE has two notable differences from a vanilla AAE: 1) its encoder is fixed and has no learnable parameters, while its prior is not fixed and is optimized to match the aggregated posterior, and 2) its decoder is a reverse diffusion chain, with T_trunc stochastic layers all parameterized by θ. Note that as the likelihood in (10) is in general intractable, the first loss term in (11) is intractable. However, the loss of Diffusion-AAE is upper bounded by the loss of TDPM, as described below.

Theorem 1. The Diffusion-AAE loss in (11) is upper bounded by the TDPM loss in (8): \mathcal{L}_{\text{Diffusion-AAE}} \le \mathcal{L}_{\text{TDPM}}.

3.4 MATCHING THE PRIOR TO AGGREGATED POSTERIOR

Via the loss term \tilde L_{T_{\text{trunc}}} := D(q(x_{T_{\text{trunc}}})\,\|\,p_\psi(x_{T_{\text{trunc}}})) in (8), we aim to match the prior p_ψ(x_{T_trunc}) to the aggregated posterior q(x_{T_trunc}) in TDPM. While we have an analytic density function for neither p nor q, we can easily draw random samples from both of them. Thus, we explore the use of two different types of statistical distances that can be estimated from samples of both q and p. We empirically show that TDPM can achieve good performance regardless of which distance is used for optimization.

One possible statistical distance is based on the idea of GANs (Goodfellow et al., 2014; Arjovsky et al., 2017; Bińkowski et al., 2018), which are widely used to learn implicit distributions from empirical data. In this setting, we use a generator G_ψ(·): R^d → R^d to transform samples from an isotropic Gaussian p(z) into samples that approximate the corrupted data, and a discriminator D_ϕ(·): R^d → [0, 1] to distinguish between samples from the corrupted-data distribution q(x_{T_trunc}|x_0) and the implicit generative distribution p_ψ(x_{T_trunc}). The generator and the discriminator are trained with the following objective \mathcal{L}_{T_{\text{trunc}}}^{\text{GAN}}:

\min_\psi \max_\phi \mathbb{E}_{x \sim q(x_{T_{\text{trunc}}})}[\log D_\phi(x)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D_\phi(G_\psi(z)))]. (12)
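As a rough illustration of how (12) can be estimated from samples, the following PyTorch sketch performs one adversarial update that matches p_ψ(x_{T_trunc}) to q(x_{T_trunc}); "real" samples come from diffusing data to step T_trunc in closed form. All names are ours, and we use the common non-saturating generator loss rather than the exact minimax form in (12).

```python
import torch
import torch.nn.functional as F

def prior_matching_step(G, D, x0, abar_trunc, opt_g, opt_d):
    # One update of the prior-matching term in Eq. (12).
    # "Real": data diffused to step T_trunc, i.e., a sample of q(x_Ttrunc).
    x_real = abar_trunc.sqrt() * x0 + (1 - abar_trunc).sqrt() * torch.randn_like(x0)
    # "Fake": single-step implicit generator x_Ttrunc = G_psi(z), Eq. (9).
    x_fake = G(torch.randn_like(x0))

    # Discriminator step: maximize log D(real) + log(1 - D(fake)).
    real_logits, fake_logits = D(x_real), D(x_fake.detach())
    d_loss = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) \
           + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: non-saturating loss, push D(G(z)) toward "real".
    gen_logits = D(x_fake)
    g_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```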
3.5 TRAINING ALGORITHM

As the objective in Equation (8) is a sum of different terms, following DDPM (Ho et al., 2020) to fix the reverse variances as Σ_θ(x_t, t) = σ_t^2 I, we can simplify \frac{1}{T_{\text{trunc}}}\sum_{t=1}^{T_{\text{trunc}}} L_{t-1} as the expectation

L_{\text{simple\_trunc}} = \mathbb{E}_{t, x_0, \epsilon_t}\left[\|\epsilon_t - \epsilon_\theta(x_t, t)\|^2\right], \quad t \sim \text{Unif}(1, 2, \ldots, T_{\text{trunc}}), \quad \epsilon_t \sim \mathcal{N}(0, I), (13)

where ε_t is the noise injected at a uniformly sampled timestep index t, x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon_t is the noisy image at time t, and ε_θ is a denoising U-Net that predicts the noise in order to refine the noisy image x_t. Therefore, the final simplified version of (8) is constructed as

\mathcal{L}_{\text{TDPM}}^{\text{GAN}} = L_{\text{simple\_trunc}} + \lambda \mathcal{L}_{T_{\text{trunc}}}^{\text{GAN}}. (14)

While λ, the weight of \mathcal{L}_{T_{\text{trunc}}}^{\text{GAN}}, can be tuned, we fix it as one for simplicity. Here the TDPM objective consists of two parts: the denoising part ε_θ is focused on denoising the truncated chain, getting updated from L_simple_trunc, while the implicit part G_ψ is focused on minimizing D(q(x_{T_trunc})‖p_ψ(x_{T_trunc})), getting updated from \mathcal{L}_{T_{\text{trunc}}}^{\text{GAN}}.

An interesting finding of this paper is that we do not necessarily need to introduce a separate set of parameters ψ for the generator G_ψ, as we can simply reuse the same parameters θ of the reverse diffusion model (i.e., let ψ = θ) without clearly hurting the empirical performance. This suggests that the reverse diffusion process from T to T_trunc could be effectively approximated by a single step using the same network architecture and parameters as the reverse diffusion steps from T_trunc to 0. Therefore, we provide two configurations to parameterize the implicit distribution. 1) To save parameters, we let the implicit generator and the denoising model share the same U-Net parameters but use different time step indices. Specifically, we first use x_{T_trunc} = G_ψ(x_T) = ε_θ(x_T, t = T_trunc + 1), where x_T ∼ N(0, I), to generate a noisy image at time T_trunc. 2) We further explore employing a different model, e.g., StyleGAN2 (Karras et al., 2020a), for the implicit generator, which provides better performance but increases the model size needed to obtain x_{T_trunc}. Then, for t = T_trunc, ..., 1, we iteratively refine it as x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}\,\epsilon_\theta(x_t, t)\right) + \sqrt{\beta_t}\, z_t, where z_t ∼ N(0, I) when t > 1 and z_1 = 0. This process is depicted in Algorithms 1 and 2 in the Appendix. For implementation details, please refer to Appendix D.6 and our code at https://github.com/JegZheng/truncated-diffusion-probabilistic-models.

3.6 RELATED WORK

In our previous discussions, we have related TDPM to several existing works such as DDPM and AAE. A detailed discussion of other related works is provided in Appendix B.

4 EXPERIMENTS

We aim to demonstrate that TDPM can generate good samples faster by using fewer steps of reverse diffusion. We use different image datasets to test our method and follow the same settings as other diffusion models (Ho et al., 2020; Nichol & Dhariwal, 2021; Dhariwal & Nichol, 2021; Rombach et al., 2022) for our backbones. We also have two ways to set up the implicit generator that starts the reverse diffusion: one reuses the denoising network, and the other uses a separate network. We try both ways for generating images without any labels. For generating images from text, we use the first way with the LDM backbone. We provide comprehensive details, toy examples, and additional experimental results in Appendices D.4-D.8. We use FID (lower is better) and Recall (higher is better) to measure the fidelity and diversity, respectively, of the generated images.
We use CIFAR-10 (Krizhevsky et al., 2009), LSUN-Bedroom, and LSUN-Church (Yu et al., 2015) in the unconditional experiments, and CUB-200 (Welinder et al., 2010) and MS-COCO (Lin et al., 2014) in the text-to-image experiments. The images consist of 32×32 pixels for CIFAR-10 and 256×256 pixels for the other datasets.

4.1 EFFICIENCY IN BOTH TRAINING AND SAMPLING

We first look at the results on CIFAR-10. We use DDPM (Ho et al., 2020) or improved DDPM (Nichol & Dhariwal, 2021) as our backbones. We use 4, 49, or 99 steps of reverse diffusion, which correspond to 5, 50, or 100 numbers of function evaluations (NFE). For the implicit generator, we either reuse the denoising U-Net or use a StyleGAN2 network (we call these TDPM and TDPM+, respectively). For comparison, we also include DDIM (Song et al., 2020) and DDGAN (Xiao et al., 2022). A comparison with a more diverse set of baselines can be found in Table 9 in Appendix D.7.

Table 1: Results of unconditional generation on CIFAR-10, with the best FID and Recall in each group marked in bold. To compare TDPM (TTrunc=0) with GAN-based methods, we use the DDPM backbone as the generator and a StyleGAN2 discriminator.

Method              | NFE  | FID↓  | Recall↑
--- DDPM backbone ---
DDPM                | 1000 | 3.21  | 0.57
TDPM (TTrunc=99)    | 100  | 3.10  | 0.57
TDPM+ (TTrunc=99)   | 100  | 2.88  | 0.58
DDIM                | 50   | 4.67  | 0.53
TDPM (TTrunc=49)    | 50   | 3.30  | 0.57
TDPM+ (TTrunc=49)   | 50   | 2.94  | 0.58
TDPM (TTrunc=4)     | 5    | 3.34  | 0.57
TDPM+ (TTrunc=4)    | 5    | 3.21  | 0.57
--- Improved DDPM backbone ---
Improved DDPM       | 4000 | 2.90  | 0.58
TDPM (TTrunc=99)    | 100  | 2.97  | 0.57
TDPM+ (TTrunc=99)   | 100  | 2.83  | 0.58
Improved DDPM+DDIM  | 50   | 3.92  | 0.55
TDPM (TTrunc=49)    | 50   | 3.11  | 0.57
TDPM+ (TTrunc=49)   | 50   | 2.96  | 0.58
TDPM (TTrunc=4)     | 5    | 3.51  | 0.55
TDPM+ (TTrunc=4)    | 5    | 3.17  | 0.57
--- GAN-based ---
DDGAN               | 4    | 3.75  | 0.57
StyleGAN2           | 1    | 8.32  | 0.41
StyleGAN2-ADA       | 1    | 2.92  | 0.49
TDPM (TTrunc=0)     | 1    | 7.34  | 0.46

Table 2: Results on LSUN-Church and LSUN-Bedroom (resolution 256×256). Similar to Table 1, TDPM (TTrunc=0) uses the DDPM backbone for the generator.

Method              | NFE  | Church FID | Bedroom FID
--- DDPM backbone ---
DDPM                | 1000 | 7.89       | 4.90
TDPM (TTrunc=99)    | 100  | 4.33       | 3.95
TDPM+ (TTrunc=99)   | 100  | 3.98       | 3.67
DDIM                | 50   | 10.58      | 6.62
TDPM (TTrunc=49)    | 50   | 5.35       | 4.10
TDPM+ (TTrunc=49)   | 50   | 4.34       | 3.98
TDPM (TTrunc=4)     | 5    | 4.98       | 4.16
TDPM+ (TTrunc=4)    | 5    | 4.89       | 4.09
--- ADM backbone ---
ADM                 | 1000 | 3.49       | 1.90
ADM+DDIM            | 250  | 6.45       | 2.31
TDPM (TTrunc=99)    | 100  | 4.41       | 2.24
TDPM+ (TTrunc=99)   | 100  | 3.61       | 1.88
TDPM (TTrunc=49)    | 50   | 4.57       | 2.92
TDPM+ (TTrunc=49)   | 50   | 3.67       | 1.89
TDPM (TTrunc=4)     | 5    | 5.61       | 7.92
TDPM+ (TTrunc=4)    | 5    | 4.66       | 4.01
--- GAN-based ---
DDGAN               | 4    | 5.25       | -
StyleGAN2           | 1    | 3.93       | 3.98
StyleGAN2-ADA       | 1    | 4.12       | 7.89
TDPM (TTrunc=0)     | 1    | 4.77       | 5.24

Table 3: Results on ImageNet-64×64, evaluated with FID and Recall. TDPM+ is built with a pre-trained ADM and an implicit model trained at TTrunc using StyleGAN-XL.

Method              | NFE  | FID↓  | Recall↑
ADM                 | 1000 | 2.07  | 0.63
TDPM+ (TTrunc=99)   | 100  | 1.62  | 0.63
TDPM+ (TTrunc=49)   | 50   | 1.77  | 0.58
TDPM+ (TTrunc=4)    | 5    | 1.92  | 0.53
StyleGAN-XL (wo PG) | 1    | 3.54  | 0.51

[Figure 2: Random generation results of TDPM+ (TTrunc=4) on ImageNet-64×64.]

Table 1 shows that our TDPM can obtain good FID with fewer NFE. TDPM+ achieves even better FID, and is the best when NFE=100. Compared with TDPM with 0 steps of reverse diffusion (a GAN with DDPM's U-Net as generator and StyleGAN2 as discriminator) and StyleGAN2, TDPM with more than 0 steps of reverse diffusion has better recall, and its FID is as good as that of StyleGAN2-ADA (a GAN with data augmentation for better training). This means TDPM can largely avoid the mode-missing problem of GANs. We show some examples of generated images on CIFAR-10 in Figure 13.
We also check how fast TDPM can train and sample. For training, we count how many images TDPM needs to see to fit both the truncated diffusion chain and the implicit prior well. Figure 3 shows that when we use fewer steps of reverse diffusion, the diffusion part needs less time to train, but the implicit prior needs more time because it has to model a harder distribution; e.g., fitting the implicit prior with 4 diffusion steps needs a similar amount of time as fitting it directly on the data. When we use 99 steps of reverse diffusion, the diffusion chain and the implicit prior need similar training time, and the whole model trains faster than both GAN and DDPM. For sampling, we compare TDPM with 0, 1, 4, 49, or 99 steps of reverse diffusion, and report both FID and the sampling time (s/image) on one NVIDIA V100 GPU in Figure 4. With 4 steps of reverse diffusion, the FID is much lower than with 0 steps, while the sampling time is only slightly longer. With more steps of reverse diffusion, the FID decreases slowly, but the sampling time grows linearly. With 99 steps of reverse diffusion, the FID of TDPM is better than that of DDPM with 1000 steps. Because the FID does not change much when more steps of reverse diffusion are used, we suggest using a small number of steps, such as 4 or more, to balance the quality and speed of generation.

[Figure 3: The required iterations (measured in iterated images, log scale) to converge in training, for TTrunc ∈ {0 (GAN), 4, 49, 99} and DDPM. The iterations for t < TTrunc (ε_θ) and t = TTrunc (G_ψ) are marked in red and blue, respectively.]

[Figure 4: Evolution of FID and the corresponding GPU time (s/image) across different timesteps in the sampling stage: TTrunc=0 (GAN): FID 7.34, 0.03 s (1000x speed-up); TTrunc=1: 4.47, 0.06 s (500x); TTrunc=4: 3.41, 0.15 s (200x); TTrunc=49: 3.3, 1.52 s (20x); TTrunc=99: 3.1, 3.13 s (10x); DDPM: 3.27, 31.03 s.]

4.2 RESULTS ON HIGHER-RESOLUTION AND MORE DIVERSE IMAGE DATASETS

To test the performance of the proposed truncation method on high-resolution images, we train TDPM using two different diffusion models, DDPM (Ho et al., 2020) and ADM (Dhariwal & Nichol, 2021), as backbones on two datasets of 256×256 resolution, LSUN-Church and LSUN-Bedroom (Yu et al., 2015). We compare the FIDs of TDPM with those of the backbone models and some state-of-the-art GANs in Table 2. The results show that TDPM can generate images of similar quality with much smaller truncation steps TTrunc, which means that it can produce images significantly faster than the backbone models. We also visualize samples from the implicit distribution x_{TTrunc} ∼ p_θ(x_{TTrunc}) generated by TDPM and the corresponding x_0 obtained at the end of the reverse chain in Figure 5. We further evaluate TDPM on ImageNet-1K (at resolution 64×64), which exhibits high diversity. Here we adopt the TDPM+ configuration, where we use a pre-trained ADM (Dhariwal & Nichol, 2021) checkpoint for t < TTrunc and train a StyleGAN-XL (Sauer et al., 2022) based implicit model at t = TTrunc (for simplicity, we choose not to use the progressive-growing pipeline of StyleGAN-XL; see Appendix D.6 for more details). We compare both FID and Recall with our backbone models in Table 3 and show example generations in Figure 2. Similar to our observations in Table 1, TDPM achieves good generation quality with small truncation steps TTrunc.
Moreover, properly training an implicit model at TTrunc can further improve the performance of the backbone.

Table 4: Numerical results of Figure 6. The GPU time of sampling (s/image) is measured on one NVIDIA A100.

NFE  | GPU time | CUB-Bird LDM | CUB-Bird TLDM | MS-COCO LDM | MS-COCO TLDM
5    | 0.15     | 100.81       | 10.59         | 48.41       | 16.7
50   | 1.57     | 30.85        | 7.32          | 18.25       | 7.47
100  | 4.10     | 11.07        | 6.79          | 8.2         | 7.22
250  | 11.21    | 6.82         | 6.72          | 6.3         | 6.29
1000 | 41.09    | 6.68         | -             | 6.29        | -

[Figure 7: Example text-to-image generation results of LDM and TLDM (i.e., TDPM with LDM backbone) fine-tuned on CUB-200 (top row; prompt: "A bird with brown wings, black back, and red head.") or MS-COCO (bottom row; prompt: "A green train is coming down the tracks."), setting the number of times iterating through the reverse diffusion U-Net to 100 (left column), 50 (middle column), or 5 (right column).]

4.3 TEXT-TO-IMAGE GENERATION

Besides unconditional generation tasks, for text-to-image generation we develop TLDM, a conditional version of TDPM that leverages as its backbone the LDM of Rombach et al. (2022), a state-of-the-art publicly released model with 1.45B parameters pre-trained on LAION-400M (Schuhmann et al., 2021). LDM consists of a fixed auto-encoder for pixel generation and a latent-diffusion module to connect text and image embeddings. Here we fine-tune its latent-diffusion part on the CUB-200 and MS-COCO datasets with 25K and 100K steps, respectively, as the baseline. Similar to the unconditional case, we fine-tune with the LDM loss for t < TTrunc and the GAN loss for t = TTrunc. More details about the setting can be found in Appendix D.6. The results of LDM with different DDIM sampling steps and TLDM with different truncation steps are summarized in Figure 6 and Table 4. Similar to applying diffusion directly in the original image-pixel space, when the diffusion chain is applied in the latent space, we observe that TLDM can achieve comparable or better performance than LDM even though it shortens the diffusion chain of LDM to many fewer reverse diffusion steps. For NFE as small as 5, we note that although the FID of TLDM becomes higher due to using fewer diffusion steps, the images generated by TLDM at NFE=5 are still visually appealing, as shown in Figure 7. Compared with 50 and 250 steps using LDM, sampling with TLDM using 5 steps is 10 and 50 times faster, respectively, while largely preserving generation quality. We provide additional text-to-image generation results of TLDM in Appendix D.8.

5 CONCLUSION

In this paper, we investigate how to reduce the trajectory length of the diffusion chain to achieve efficient sampling without loss of generation quality. We propose truncated diffusion probabilistic modeling (TDPM), which truncates the length of a diffusion chain. In this way, TDPM can use a much shorter diffusion chain, while being required to start the reverse denoising process from an intractable distribution. We propose to learn such a distribution with an implicit generative model powered by the same U-Net used for denoising diffusion, and validate multiple ways to learn the implicit distribution to ensure the robustness of the proposed TDPM. We reveal that TDPM can be cast as an adversarial auto-encoder with a learnable implicit prior.
We conduct extensive experiments on both synthetic and real image data to demonstrate the effectiveness of TDPM in terms of both sample quality and efficiency, where the diffusion chain can be shortened to have only a few steps.

ACKNOWLEDGMENTS

H. Zheng and M. Zhou acknowledge the support of NSF-IIS 2212418 and IFML.

A PROOF

Proof of Theorem 1. As the last terms in both losses are the same, we only need to show that the first term in (11) is smaller than or equal to L_0 + \sum_{t=2}^{T_{\text{trunc}}} L_{t-1} in (8). Using Jensen's inequality, we have

-\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\text{trunc}}}|x_0)} \log p_\theta(x_0\,|\,x_{T_{\text{trunc}}})
= -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\text{trunc}}}|x_0)} \log \mathbb{E}_{q(x_{1:T_{\text{trunc}}-1}|x_0, x_{T_{\text{trunc}}})}\left[\frac{p(x_{0:T_{\text{trunc}}-1}\,|\,x_{T_{\text{trunc}}})}{q(x_{1:T_{\text{trunc}}-1}\,|\,x_0, x_{T_{\text{trunc}}})}\right]
\le -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\text{trunc}}}|x_0)}\mathbb{E}_{q(x_{1:T_{\text{trunc}}-1}|x_0, x_{T_{\text{trunc}}})} \log \frac{p(x_{0:T_{\text{trunc}}-1}\,|\,x_{T_{\text{trunc}}})}{q(x_{1:T_{\text{trunc}}-1}\,|\,x_0, x_{T_{\text{trunc}}})}
= -\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{1:T_{\text{trunc}}}|x_0)} \log \left[\frac{p(x_{0:T_{\text{trunc}}-1})}{q(x_{1:T_{\text{trunc}}}\,|\,x_0)} \cdot \frac{q(x_{T_{\text{trunc}}}\,|\,x_0)}{p(x_{T_{\text{trunc}}})}\right]
= \left(-\mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{1:T_{\text{trunc}}}|x_0)} \log \frac{p(x_{0:T_{\text{trunc}}-1})}{q(x_{1:T_{\text{trunc}}}\,|\,x_0)}\right) - \mathbb{E}_{q(x_0)}\mathbb{E}_{q(x_{T_{\text{trunc}}}|x_0)} \log \frac{q(x_{T_{\text{trunc}}}\,|\,x_0)}{p(x_{T_{\text{trunc}}})}
= \left(\sum_{t=1}^{T_{\text{trunc}}} L_{t-1} + L_{T_{\text{trunc}}}\right) - L_{T_{\text{trunc}}} = \sum_{t=1}^{T_{\text{trunc}}} L_{t-1}, (15)

where the second-to-last equality follows the same derivation as the ELBO in Ho et al. (2020).

B RELATED WORK

Diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020) employ a forward Markov chain to diffuse the data to noise and learn the reversal of such a diffusion process. With the idea of exploiting Markov operations (Goyal et al., 2017; Alain et al., 2016; Bordes et al., 2017), diffusion models achieve great success and inspire a variety of tasks, including image generation and audio generation (Kong et al., 2020; Chen et al., 2020; Jolicoeur-Martineau et al., 2020; Vahdat et al., 2021). Recently, plenty of studies have been proposed to generalize diffusion models to continuous-time diffusion and to improve diffusion models in likelihood estimation (Vincent, 2011; Song & Ermon, 2020; 2019; Nichol & Dhariwal, 2021; Song et al., 2021b;a; Kingma et al., 2021). Another mainstream direction is to improve the sampling efficiency of diffusion models, which are known for their enormous number of sampling steps. Luhman & Luhman (2021) improve the diffusion process with knowledge distillation, and San-Roman et al. (2021) propose a learnable adaptive noise schedule. Song et al. (2020) and Kong & Ping (2021) exploit non-Markovian diffusion processes and shorten the denoising segments. Jolicoeur-Martineau et al. (2021) and Huang et al. (2021) use better SDE solvers for continuous-time models. Aside from these works, other types of generative models, such as VAEs (Kingma & Welling, 2013), GANs (Goodfellow et al., 2014), and autoregressive models (van den Oord et al., 2016), have recently been incorporated into diffusion models. They are shown to benefit each other (Xiao et al., 2022; Pandey et al., 2022; Meng et al., 2021) and have a closer relation to our work. Xiao et al. (2022) consider the use of implicit models (Huszár, 2017; Mohamed & Lakshminarayanan, 2016; Tran et al., 2017; Yin & Zhou, 2018; Li & Malik, 2018) to boost the efficiency of diffusion models, deploying an implicit model in each denoising step, which becomes more difficult to train as the number of diffusion steps increases. Pandey et al. (2022) build diffusion models on top of the output of VAEs for refinement. Our work is also related if TDPM is viewed as a diffusion model on top of an implicit model, where the implicit model can be parameterized with the U-Net or with a separate network.
C DISCUSSION

Potential societal impacts: This paper proposes the truncated diffusion probabilistic model as a novel type of diffusion-based generative model. The truncated part can be trained as an implicit generative model, such as a GAN, jointly or independently with the diffusion part. The capacity of truncated diffusion probabilistic models is competitive with existing diffusion-based ones, and efficiency is largely improved. Alongside these positive effects, some negative aspects could also arise, depending on how the models are used. One major concern is that the truncated diffusion technique proposed in this paper could potentially be a way to attack existing diffusion models if implicit models are maliciously used to fit the intermediate steps. For example, for some existing diffusion models, for safety reasons, the model's capacity to generate private data needs to be locked by hiding the diffusion ending point in an unknown distribution. The technique of TDPM could be used to crack such deployed diffusion models by providing intermediate noisy images or by fine-tuning the first few steps with TDPM to unlock that capacity. Besides, the capacity to generate good images can also be misused to generate ill-intentioned images at a much lower cost.

Discussions: In this work, we mainly focus on reducing the length of the diffusion chain of a finite-time diffusion model. Our model has shown its effectiveness in improving finite-time diffusion models, and it would be non-trivial but interesting to further explore our model on continuous-time diffusion models (Song et al., 2021b). Moreover, while DDPM is the primary baseline in this paper, TDPM can also be built on other recent diffusion models. While p_θ(x_{TTrunc}) is parameterized as an implicit distribution, it can also be formulated as a semi-implicit distribution (Yin & Zhou, 2018), which allows it to be approximated with a Gaussian generator. Xiao et al. (2022) also present a closely related work. While we share the same spirit of reducing the length of the diffusion chain, the two strategies do not conflict with each other; in future work we will look into their integration. There also exist plenty of options for approximating p_θ(x_{TTrunc}). When the diffusion chain is truncated to be short, the implicit distribution still faces multi-modality and needs to be fitted with different methods depending on the properties that we need. For example, in order to capture all modes, a VAE would be preferred, as done in Pandey et al. (2022). Below we provide an alternative method, proposed in Zheng & Zhou (2021), to fit the truncated distribution. Besides the training, it is also an open question whether TDPM can be incorporated into more advanced architectures for further improvements, and we leave this exploration for future work.

D ALGORITHM DETAILS AND COMPLEMENTARY RESULTS

Below we provide additional algorithm details and complementary experimental results.

D.1 ADDITIONAL ANALYSIS ON THE PARAMETERIZATION OF THE IMPLICIT GENERATOR

As shown in Section 3, in general, the objective of TDPM consists of training the diffusion model ε_θ (a U-Net architecture (Ronneberger et al., 2015)) with the simple DDPM loss L_simple_trunc and training an implicit prior model G_ψ with the objective \mathcal{L}_{T_{\text{trunc}}}^{\text{GAN}}.
Without loss of generality, in the main paper we show two configurations to parameterize the implicit part at t = TTrunc: 1) the implicit generator shares the same U-Net architecture used for 0 < t < TTrunc; 2) the implicit generator is instantiated with a separate network (denoted TDPM+ in the main paper). Below we explain these two configurations.

Configuration 1): At t = TTrunc, the U-Net generates the noisy image at the truncated step: x_{TTrunc} = ε_θ(x_{TTrunc+1}, t = TTrunc + 1), where x_{TTrunc+1} ∼ N(0, I) is a pure-noise image whose pixels are i.i.d. samples from a standard normal. For t = TTrunc, TTrunc − 1, ..., 1, the same U-Net iteratively refines the noisy images by letting x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}\,\epsilon_{t-1}\right) + \sqrt{\beta_t}\, z_t, with z_{t>1} ∼ N(0, I) and z_1 = 0, where ε_{t−1} = ε_θ(x_t, t) is the noise predicted by the U-Net. Under this setting, the U-Net-based generator plays two roles at the same time, and the training is more challenging than using two different generators. However, as TTrunc gets larger, the distribution p(x_{TTrunc}) becomes more similar to a noise distribution, and generating the noisy images becomes more like generating noise. In this case, being able to both generate noisy images and predict noise becomes easier for the generator.

Configuration 2) (TDPM+): Unlike the previous configuration, where the implicit generator at step t = TTrunc shares the same U-Net architecture with t < TTrunc, another option is to parameterize G_ψ with a separate generator. Although this configuration increases the total number of parameters of the generative model, it gives the model more flexibility in the training stage. For example, the two networks can be trained in parallel, or a pre-trained model can be leveraged. In our paper, we conduct the experiments using the StyleGAN2 generator architecture (Karras et al., 2020b) for t = TTrunc, resulting in an increase of 19M and 28M generator parameters when handling 32×32 and 256×256 images, respectively.

The processes of training and sampling in these configurations are summarized in Algorithms 1 and 2.

Algorithm 1 Training
1: repeat
2:   x_0 ∼ q(x_0)
3:   t ∼ Uniform({1, ..., TTrunc})
4:   ε_t ∼ N(0, I), z ∼ N(0, I)
5:   Update with (14)
6: until converged

Algorithm 2 Sampling
1: x_{TTrunc+1} ∼ N(0, I)
2: if G_ψ shared with ε_θ then
3:   x_{TTrunc} = ε_θ(x_{TTrunc+1}, TTrunc + 1)
4: else
5:   x_{TTrunc} = G_ψ(x_{TTrunc+1})
6: end if
7: for t = TTrunc, ..., 1 do
8:   z_t ∼ N(0, I) if t > 1, else z_1 = 0
9:   x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}\,\epsilon_\theta(x_t, t)\right) + \sqrt{\beta_t}\, z_t
10: end for
11: return x_0
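For concreteness, the sketch below renders Algorithm 2 in PyTorch under our own naming assumptions (e.g., that the ε-network accepts a batch of integer timesteps); the √β_t noise scale follows the standard DDPM posterior choice.

```python
import torch

@torch.no_grad()
def tdpm_sample(eps_model, G, shape, betas, share_params=True):
    # Algorithm 2: one implicit jump to x_{T_trunc}, then T_trunc denoising steps.
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    t_trunc = len(betas)

    x = torch.randn(shape)  # x_{T_trunc + 1} ~ N(0, I)
    if share_params:
        # Configuration 1: reuse the denoising U-Net with time index T_trunc + 1.
        x = eps_model(x, torch.full((shape[0],), t_trunc + 1, dtype=torch.long))
    else:
        # Configuration 2 (TDPM+): a separate generator, e.g., StyleGAN2.
        x = G(x)

    for t in range(t_trunc, 0, -1):
        i = t - 1  # 0-based index into the schedules
        z = torch.randn(shape) if t > 1 else torch.zeros(shape)
        eps = eps_model(x, torch.full((shape[0],), t, dtype=torch.long))
        x = (x - (1 - alphas[i]) / (1 - alpha_bar[i]).sqrt() * eps) / alphas[i].sqrt() \
            + betas[i].sqrt() * z
    return x
```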
D.2 ALTERNATIVES OF LEARNING THE IMPLICIT DISTRIBUTION

Another possible statistical distance is based on conditional transport (Zheng & Zhou, 2021), which is proposed to balance the mode-seeking and mode-covering behaviors when fitting an empirical data distribution. In this setting, we use the same generator G_ψ as before, but instead of a discriminator, we use a conditional distribution π_η, parameterized by η, to find an optimized mapping between the samples of p and q, and a critic ϕ to measure the point-to-point cost c_ϕ in the feature space. The generator, the conditional distribution, and the critic are trained with the following objective \mathcal{L}_{T_{\text{trunc}}}^{\text{CT}}:

\min_{\psi, \eta} \max_{\phi} \mathbb{E}_{x \sim q(x_{T_{\text{trunc}}})}\left[\mathbb{E}_{G_\psi(z) \sim \pi_\eta(G_\psi(z)\,|\,x_{T_{\text{trunc}}})}\, c_\phi(x_{T_{\text{trunc}}}, G_\psi(z))\right] + \mathbb{E}_{z \sim p(z)}\left[\mathbb{E}_{x_{T_{\text{trunc}}} \sim \pi_\eta(x_{T_{\text{trunc}}}\,|\,G_\psi(z))}\, c_\phi(x_{T_{\text{trunc}}}, G_\psi(z))\right]. (16)

Similar to (14), we fit TDPM-CT with the following loss:

\mathcal{L}_{\text{TDPM}}^{\text{CT}} = L_{\text{simple\_trunc}} + \lambda \mathcal{L}_{T_{\text{trunc}}}^{\text{CT}}. (17)

We empirically find that, performance-wise, this objective shows no significant difference from the GAN objective in Equation (14), as long as the generator is well trained.

D.3 CONDITIONAL TRUNCATED DIFFUSION PROBABILISTIC MODELS

For conditional generation, we extend (14) and derive a conditional version of TDPM:

\mathcal{L}_{\text{cTDPM}} = L^{c}_{\text{simple\_trunc}} + \lambda \mathcal{L}^{c}_{T_{\text{trunc}}}, (18)

where L^c_simple_trunc trains the conditional diffusion model via

L^{c}_{\text{simple\_trunc}} = \mathbb{E}_{c}\,\mathbb{E}_{t, x_0|c, \epsilon_t}\left[\|\epsilon_t - \epsilon_\theta(x_t, c, t)\|^2\right], \quad t \sim \text{Unif}(1, 2, \ldots, T_{\text{trunc}}), \quad \epsilon_t \sim \mathcal{N}(0, I), (19)

and the truncated-distribution term \mathcal{L}^{c}_{T_{\text{trunc}}} can be fitted with either GAN or CT:

\min_\psi \max_\phi \mathbb{E}_c\left[\mathbb{E}_{x \sim q(x_{T_{\text{trunc}}}|c)}[\log D_\phi(x\,|\,c)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D_\phi(G_\psi(z, c)\,|\,c))]\right], (20)

\min_{\psi,\eta} \max_\phi \mathbb{E}_c\Big[\mathbb{E}_{x \sim q(x_{T_{\text{trunc}}}|c)}\big[\mathbb{E}_{G_\psi(z,c) \sim \pi_\eta(G_\psi(z,c)\,|\,x_{T_{\text{trunc}}}, c)}\, c_\phi(x_{T_{\text{trunc}}}, G_\psi(z, c))\big] + \mathbb{E}_{z \sim p(z)}\big[\mathbb{E}_{x_{T_{\text{trunc}}} \sim \pi_\eta(x_{T_{\text{trunc}}}\,|\,G_\psi(z,c), c)}\, c_\phi(x_{T_{\text{trunc}}}, G_\psi(z, c))\big]\Big]. (21)

D.4 ANALYSIS ON TOY EXPERIMENTS

Although we present image experiments in the main paper, we first justified our method on synthetic toy data as a proof of concept. We adopt representative 2D synthetic datasets used in prior works (Gulrajani et al., 2017; Zheng & Zhou, 2021), including Swiss Roll, Double Moons, and 8-modal and 25-modal Gaussian mixtures with equal component weights. We use an empirical sample set X consisting of |X| = 2,000 samples and illustrate the generated samples after 5,000 training epochs. We take 20 grids in the range [−10, 10] for both the x and y axes to approximate the empirical distributions p̂_θ and q̂, and report the corresponding forward KL D_KL(q̂‖p̂_θ) as the quantitative evaluation metric.

Figure 8 shows the results on the Swiss Roll data. We present a short chain with T = 2 and a longer chain with T = 5 to show the impact of the number of diffusion steps. The first row shows that the data distribution is diffused with accumulated noise, and with more steps the diffused distribution becomes closer to an isotropic Gaussian distribution. As one can see, truncating the diffusion chain to a short length results in a clear gap between q(x_{Ttrunc}) and N(0, I). When DDPM (shown in the second row) samples from the isotropic Gaussian distribution, it becomes hard to recover the original data distribution from pure noise with only a few steps. Although DDPM improves slightly with a few more steps (T = 5), as long as q(x_T) is not close to Gaussian, DDPM can hardly recover the data distribution. By contrast, as shown in the third and fourth rows, TDPM successfully approximates the non-Gaussian q(x_{Ttrunc}) with its implicit generator, and the remaining part of the truncated chain is gradually recovered by the denoising steps. From both the visualizations and D_KL(q̂‖p̂_θ), we can see that TDPM is able to fit every step in such short chains. TDPM-GAN and TDPM-CT both succeed in fitting p_θ(x_{Ttrunc}), but the latter fits slightly better when the diffusion length is 2. When the length increases to 5, fitting the implicit distribution with GAN becomes easier. This observation demonstrates a benefit of combining diffusion models and GANs: if the implicit generator is sufficiently powerful to model q(x_{Ttrunc}), then the number of steps needed can be compressed to a small number; conversely, if the implicit generator cannot capture the distribution, more steps are needed to facilitate the fitting of the data distribution.
As shown in Figures 9-11, the 8-modal Gaussian mixture is more similar to an isotropic Gaussian after being diffused, so DDPM can recover a distribution similar to the data with 5 steps. On the 25-modal Gaussian mixture, we observe that GAN does not suffer from mode collapse and provides a better approximation than CT, which results in better recovery of the data distribution in the final step.

D.5 ADDITIONAL ABLATION STUDIES

Using pre-trained diffusion backbones: Different from the default setting, here we put the implicit model of TDPM+ trained at t = TTrunc and a pre-trained DDPM model (checkpoints provided by https://github.com/pesser/pytorch_diffusion) in the same sampling pipeline. In this case we do not need to spend any time pre-training the DDPM model and only need to train the implicit model for t = TTrunc. As shown in Table 5, when combined with a pre-trained DDPM for t < TTrunc, the generation performance of TDPM trained under this two-step procedure is comparable to TDPM trained end-to-end.

Sensitivity to noise schedule: Nichol & Dhariwal (2021) show that the noise schedule affects the training of DDPM. Here we examine whether TDPM is sensitive to the choice of noise schedule. We compare the linear schedule with the cosine schedule, which adds noise in a milder manner. The results on CIFAR-10 are reported in Table 6 and suggest that TDPM is not sensitive to the choice between these two schedules.

On the choice of truncated step: As the diffused distribution can facilitate the learning of the implicit generator G_ψ (Arjovsky & Bottou, 2017), we observe that by increasing the number of diffusion steps, the FID of TDPM consistently improves. A natural question is at which step the diffusion chain should be truncated. We study the signal-to-noise ratio (SNR) at different diffusion steps. Based on q(x_t|x_0) = N(\sqrt{\bar\alpha_t}\,x_0, (1-\bar\alpha_t)I), we calculate the SNR as

\text{SNR} = \frac{\sqrt{\bar\alpha_t}}{\sqrt{1-\bar\alpha_t}}; \quad \bar\alpha_t = \prod_{i=1}^{t}(1-\beta_i).

We visualize the SNR evolution across time steps t > 0 in Figure 12, where we can observe that the SNR rapidly decays in the first 100 steps. According to previous studies in Arjovsky & Bottou (2017), injecting noise into the data distribution can smoothen the support of the data distribution and facilitate GAN training. The SNR change in this interval indicates that injecting noise at the level of t ∈ [1, 100] brings the most significant improvement for GAN training. When the step is greater than 200, the SNR change is no longer significant and is close to zero, which indicates that the implicit model might not be much more informative, though it is easier to train. Our experimental observations in Figure 3 also justify this conclusion: when training a GAN at TTrunc = 4, the required number of iterations is similar to training it on clean data; when training the GAN model at TTrunc = 99, the training of the GAN is significantly facilitated. For TTrunc > 100, we empirically find that training a GAN converges faster than training the diffusion model for t < TTrunc.

Comparison of model efficiency: Complementing the results in Tables 1-2, we provide detailed model sizes and generation times on a V100 GPU, summarized in Table 7. We can see that TDPM has an increase in the total number of parameters, as it involves a discriminator to help train the implicit model, while its sampling efficiency is also apparent.
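Complementing the truncation-step ablation above, a few lines of NumPy reproduce the SNR curve for the linear schedule (a sketch; the schedule constants follow Ho et al. (2020)):

```python
import numpy as np

# SNR_t = sqrt(abar_t) / sqrt(1 - abar_t), abar_t = prod_{i<=t}(1 - beta_i).
betas = np.linspace(1e-4, 0.02, 1000)
alpha_bar = np.cumprod(1.0 - betas)
snr = np.sqrt(alpha_bar / (1.0 - alpha_bar))
# The ratio decays rapidly over roughly the first 100-200 steps and then
# flattens toward zero, matching the discussion of where to truncate.
print(snr[[0, 49, 99, 199, 999]])
```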
D.6 EXPERIMENTAL SETTINGS

D.6.1 MODEL ARCHITECTURE

Generator: Our generator structure strictly follows the U-Net structure (Ronneberger et al., 2015) used in DDPM, improved DDPM, and ADM (Ho et al., 2020; Nichol & Dhariwal, 2021; Dhariwal & Nichol, 2021), which consists of multiple ResNet blocks (He et al., 2016) with attention blocks (Vaswani et al., 2017) injected in the bottleneck. Please refer to these papers for more details on the architecture. A key difference between our model and previous diffusion models is that our model also trains this U-Net as an extra implicit generator G_θ that takes a latent variable z ∼ N(0, I) and a fixed time index t = TTrunc + 1 as input. However, this does not result in a difference in the generator architecture. We parameterize G_θ with the same U-Net architecture for simplicity, and the time embedding t = TTrunc + 1 is specified to be trained with the implicit loss shown in (12) and (16). We have also tested using an all-zero time embedding for t = TTrunc + 1 and found no clear differences. For our TDPM+ results, the generator G_ψ adopts a StyleGAN2 architecture (Karras et al., 2020b), and there is no time embedding in G_ψ. The increase in generator parameters is caused by separating the implicit model from the denoising U-Net. Note that the generator is trained with the GAN loss and without the specially designed adaptive augmentation of Karras et al. (2020a). For the detailed model architecture, please refer to the corresponding paper or its GitHub repository: https://github.com/NVlabs/stylegan2-ada-pytorch.

Discriminator: Similar to Xiao et al. (2022), we adopt the discriminator architecture used in Karras et al. (2020b), but without the time-step input. The discriminator determines whether x_{TTrunc} comes from the diffused distribution q(x_{TTrunc}) or the implicit generative distribution p_θ(x_{TTrunc}). Please refer to Appendix C of Xiao et al. (2022) for the detailed design.

Navigator: Training with \mathcal{L}_{T_{\text{trunc}}}^{\text{CT}} involves an extra module named the navigator (Zheng & Zhou, 2021). We strictly follow the architecture used in Zheng & Zhou (2021), where the navigator is an MLP taking pairwise feature distances as inputs. No time embedding is used in the navigator, as it is only used for training at t = TTrunc. The features are extracted from the layer before the final scalar output. Please refer to their Appendix D for detailed information.

Architecture for text-to-image experiments: We adopt the 1.45B LDM model (Rombach et al., 2022) that is pre-trained on the LAION-400M dataset (Schuhmann et al., 2021). The LDM model consists of a KL-regularized U-Net autoencoder with downsampling factor 8 (resolution 256 → 32), a U-Net in the latent space, and a BERT (Devlin et al., 2018) text encoder that transforms raw text into a sequence of 1280-dimensional embeddings. We only fine-tune the latent model in our experiments. In the training of the truncated part, the discriminator takes the first half of the U-Net (the downsampling backbone) with a linear prediction head on top of it.

Architecture for toy experiments: The generator is a stack of 4 linear layers with 128 hidden units. Each intermediate layer is equipped with a time-embedding layer and is followed by a softplus activation. The discriminator and navigator have the same architecture, without time-embedding layers, and use leakyReLU as the activation function.
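The toy network described above is small enough to write out in full; the sketch below is our PyTorch reading of that description (the embedding size and the way the time embedding is added at each layer are our assumptions, not specified in the text):

```python
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    # Four linear layers with 128 hidden units, a time embedding added at each
    # intermediate layer, and softplus activations, per the toy-experiment
    # description. n_steps covers the chain length plus the implicit index.
    def __init__(self, data_dim=2, hidden=128, n_steps=6):
        super().__init__()
        self.embed = nn.Embedding(n_steps, hidden)
        self.inp = nn.Linear(data_dim, hidden)
        self.mid1 = nn.Linear(hidden, hidden)
        self.mid2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, data_dim)
        self.act = nn.Softplus()

    def forward(self, x, t):
        e = self.embed(t)              # per-sample time embedding, shape (B, hidden)
        h = self.act(self.inp(x) + e)
        h = self.act(self.mid1(h) + e)
        h = self.act(self.mid2(h) + e)
        return self.out(h)
```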
D.6.2 TRAINING CONFIGURATIONS

Datasets: We use the CIFAR-10 (Krizhevsky et al., 2009), LSUN-Bedroom, and LSUN-Church (Yu et al., 2015) datasets for unconditional generation in the main experiments. Additionally, we use CelebA (Liu et al., 2015) and CelebA-HQ (Lee et al., 2020) for complementary justification. For the text-to-image experiments, we use CUB-200 (Welinder et al., 2010) and MS-COCO (Lin et al., 2014). The images consist of 32×32 pixels for CIFAR-10. For the other datasets, we apply a center crop along the short edge and resize to the target resolution (64×64 for CelebA; 256×256 for the others).

Diffusion schedule: For all datasets, we strictly follow the diffusion process used in our backbone models and instantiate the truncated diffusion schedule by taking the first TTrunc diffusion rates {β_1, ..., β_{TTrunc}}. For example, if our goal is to fit a model with NFE=50, to truncate the diffusion process used in Ho et al. (2020) (β_1 = 10^{-4}, β_T = 0.02, T = 1000), we first initialize β_1, β_2, ..., β_1000 and then take the first 49 steps to complete the truncation.

Optimization: We train our models using the Adam optimizer (Kingma & Ba, 2015), where most of the hyperparameters match the settings in Xiao et al. (2022); we slightly modify the generator learning rate to match the setting in Ho et al. (2020), as shown in Table 8. We train our models using V100 GPUs, with CUDA 10.1 and PyTorch 1.7.1. Training takes approximately 2 days on CIFAR-10 with 4 GPUs, and a week on CelebA-HQ and LSUN-Church with 8 GPUs.

Table 8: Optimization hyper-parameters.

Hyper-parameter                                     | CIFAR10 | CelebA | CelebA-HQ | LSUN
Initial learning rate for discriminator             | 1e-4    | 1e-4   | 1e-4      | 1e-4
Initial learning rate for navigator (if applicable) | 1e-4    | 1e-4   | 1e-4      | 1e-4
Initial learning rate for generator                 | 1e-5    | 1e-5   | 2e-5      | 2e-5
Adam optimizer β1                                   | 0.5     | 0.5    | 0.5       | 0.5
Adam optimizer β2                                   | 0.9     | 0.9    | 0.9       | 0.9
EMA                                                 | 0.9999  | 0.9999 | 0.9999    | 0.9999
Batch size                                          | 128     | 128    | 64        | 64
# of training iterations                            | 800k    | 800k   | 0.5M      | 2.4M (bedroom) / 1.2M (church)
# of GPUs                                           | 4       | 8      | 8         | 8

For TDPM+, where we use the StyleGAN2 generator as G_ψ, we directly use its original training hyper-parameters and train the model in parallel with the diffusion model. For TLDM, we set the base learning rate to 10^{-5} and the mini-batch size to 64. For the ImageNet1K-64×64 experiments, we use the StyleGAN-XL generator as G_ψ and strictly follow all the default training hyper-parameters. To simplify the implementation and save computation, instead of applying the default progressive-growing pipeline 16×16 → 32×32 → 64×64, we directly train the implicit model on 64×64 images corrupted at TTrunc. Without the progressive-growing pipeline, the StyleGAN-XL result shown in Table 3 is clearly worse than the progressive one reported in their paper (FID 1.51). However, when used as the implicit model of TDPM, the final performance of TDPM becomes competitive with this result.

Evaluation: When evaluating sampling time, we use models trained on CIFAR-10 and generate a batch of 128 samples. When evaluating FID and the Recall score, following convention, we use 50k generated samples for CIFAR-10, LSUN-Bedroom, and LSUN-Church, 30k samples for CelebA-HQ (since the CelebA-HQ dataset contains only 30k samples), and 30k samples for the text-to-image datasets. The Recall scores are calculated with the recipe in Kynkäänniemi et al. (2019). In the sampling stage, we follow our backbone to apply the same guidance in the diffusion part (t < TTrunc) if applicable.
Specifically, for the LDM backbone, we use classifier-free guidance (Ho & Salimans, 2022) with scale 1.5, and there are no DDIM steps for TLDM.

D.7 ADDITIONAL RESULTS ON UNCONDITIONAL GENERATION

[Figure 14: Qualitative results of TDPM on LSUN-Church (256×256), with Ttrunc = 99, 49, and 4. Note NFE = Ttrunc + 1 in TDPM. Each group presents generated samples from p_θ(x_0) (left) and p_θ(x_{Ttrunc}) (right).]
[Figure 15: Analogous qualitative results to Figure 14 on LSUN-Bedroom. Produced by TDPM.]
[Figure 16: Analogous qualitative results to Figure 14 on CelebA-HQ. Produced by TDPM.]
[Figure 18: Analogous qualitative results to Figure 14 on LSUN-Bedroom. Produced by TDPM-CT.]
[Figure 19: Analogous qualitative results to Figure 14 on CelebA-HQ. Produced by TDPM-CT.]

D.8 ADDITIONAL RESULTS ON TEXT-TO-IMAGE GENERATION

[Figure 20: Additional text-to-image generation results with different text prompts, produced by TLDM with Ttrunc = 49. Prompts include: "A white and gray bird with black wings."; "An airplan flying over a body of water."; "A sign reads 'TDPM'."; "Busy city street at dusk with sun setting."; "The bagel is put in a squre plate."; "The bathroom has a big mirror."; "A cluster of flower on the wooden table."]
1. What is the focus and contribution of the paper regarding semantic correspondence?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of neural representation?
3. Do you have any concerns or questions about the representation of semantic correspondence?
4. What are the limitations of the NeMF approach, and how does it compare to other methods in terms of efficiency and performance?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper introduces truncated diffusion models, which truncate the forward process at a certain timestep so that, when sampling, only a small portion of the reverse process is needed, which saves sampling time. To map noise to the noisy data at the truncated timestep, a conditional GAN is trained. The paper also builds up the relation between truncated diffusion models and AAE. Experimental results show that the model can generate similar or better samples than DDPM with a small number of NFEs.

Strengths And Weaknesses
Strengths:
- The idea is quite interesting. Previously, people have observed that when sampling from DDPM and denoising for 1000 steps, the steps close to the clean sample x_0 are more important than the steps close to white noise x_1000. Some noise-scheduling techniques are designed based on this observation, but this method takes an interesting approach to utilizing it. It simply bypasses the early steps with a single-step model such as a GAN, and starts sampling from the diffusion model at a truncated timestep. Although it feels like a combination of denoising diffusion GAN by Xiao et al. and traditional DDPM, the benefit of such a combination is clear. Basically, the model can be interpreted as applying one step of denoising diffusion GAN and using DDPM for the remaining steps. However, it obtains better results, and it maintains the nice properties of DDPM (while DDGAN is more like a GAN).
- Strong experimental results. It is nice to see that we can even surpass the sample quality of DDPM with fewer sampling steps, while many previous speed-up methods suffer a degradation in sample quality.

Weaknesses:
- Just like DDGAN, it loses some properties of DDPM, such as likelihood estimation, due to the GAN.
- Not necessarily a weakness, but I have one question. I am interested in the results truncated at 4 steps. If you truncate at only 4 steps out of 1000, is it almost equivalent to training a GAN (or DDGAN with T = 1)? Because at t = 4, the noisy sample x_4 is almost clean, and you need to train a GAN to map white noise to x_4. However, from the DDGAN paper, it seems like DDGAN at T = 1 does not work very well. I would be interested in knowing more details on this.

Clarity, Quality, Novelty And Reproducibility
The paper is well written and the method is clearly presented. The idea is novel, to the best of my knowledge. The method is straightforward to implement.
ICLR
Title Towards Safe Deep Learning: Unsupervised Defense Against Generic Adversarial Attacks

Abstract
Recent advances in adversarial Deep Learning (DL) have opened up a new and largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems. We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart the potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model. The proposed PCL methodology is unsupervised, meaning that no adversarial sample is leveraged to build/train parallel checkpointing learners. We formalize the goal of preventing adversarial attacks as an optimization problem to minimize the rarely observed regions in the latent feature space spanned by a DL network. To solve the aforementioned minimization problem, a set of complementary but disjoint checkpointing modules are trained and leveraged to validate the victim model execution in parallel. Each checkpointing learner explicitly characterizes the geometry of the input data and the corresponding high-level data abstractions within a particular DL layer. As such, the adversary is required to simultaneously deceive all the defender modules in order to succeed. We extensively evaluate the performance of the PCL methodology against the state-of-the-art attack scenarios, including Fast-Gradient-Sign (FGS), Jacobian Saliency Map Attack (JSMA), Deepfool, and Carlini&WagnerL2. Extensive proof-of-concept evaluations for analyzing various data collections including MNIST, CIFAR10, and ImageNet corroborate the effectiveness of our proposed defense mechanism against adversarial samples.

1 INTRODUCTION

Security and safety considerations are a major obstacle to the wide-scale adoption of emerging learning algorithms in sensitive scenarios, such as intelligent transportation, healthcare, and video surveillance applications (McDaniel et al. (2016); Dahl et al. (2013); Knorr (2015)). While advanced learning technologies are essential for enabling coordination and interaction between autonomous agents and the environment, a careful analysis of their decision reliability in the face of carefully crafted adversarial samples (Goodfellow et al. (2014); Papernot et al. (2016a); Moosavi-Dezfooli et al. (2016); Carlini & Wagner (2017b)), and the thwarting of their vulnerabilities, are still in their infancy. Consider a traffic sign classifier employed in self-driving cars. In this setting, an adversary can carefully add an imperceptible perturbation to a legitimate "stop" sign sample and fool the DL model into classifying it as a "yield" sign, thus jeopardizing the safety of the vehicle, as shown in (McDaniel et al. (2016)). As such, it is highly important to reject risky adversarial samples to ensure the integrity of DL models used in autonomous systems such as unmanned vehicles and drones.

In this paper, we aim to answer two open questions regarding adversarial attacks. (i) Why are machine learning models vulnerable to adversarial samples? Our hypothesis is that the vulnerability of neural networks to adversarial samples originates from the existence of rarely explored sub-spaces in each feature map. This phenomenon is particularly caused by the limited access to labeled data and/or the inefficiency of regularization algorithms (Wang et al. (2016); Denil et al. (2013)). Figure 1 provides a simple illustration of the partially explored space in a two-dimensional setup.
We analytically and empirically back up our hypothesis by extensive evaluations of state-of-the-art attacks, including Fast-Gradient-Sign (Goodfellow et al. (2014)), Jacobian Saliency Map Attack (Papernot et al. (2016a)), Deepfool (Moosavi-Dezfooli et al. (2016)), and Carlini&WagnerL2 (Carlini & Wagner (2017b)).

(ii) How can we characterize and thwart the underlying space for unsupervised model assurance as well as defend against the adversaries? A line of research has shown that there is a trade-off between the robustness of a model and its accuracy (Madry et al. (2017); Papernot et al. (2016b)). Taking this into account, instead of making a single model that is both robust and accurate, we introduce a new defense mechanism called Parallel Checkpointing Learners (PCL). In this setting, the victim model is kept as is while separate defender modules are trained to checkpoint the data abstractions and assess the reliability of the victim's prediction. Each defender module characterizes the explored sub-space in the pertinent layer by learning the probability density function (pdf) of legitimate data points and marking the complement sub-spaces as rarely observed regions. Once such a characterization is obtained, the checkpointing modules (we use the terms "checkpointing module" and "defender module" interchangeably throughout the paper) evaluate the input sample in parallel with the victim model and raise alarm flags for data points that lie within the rarely explored regions (Figure 1c). As we demonstrate in Section 4, adversarial samples created by various attack methods mostly lie within the sub-spaces marked as partially explored sectors.

We consider a white-box attack model in which the attacker knows everything about the victim model, including its architecture, learning algorithm, and parameters. This threat model represents the most powerful attacker that can endanger real-world applications. We validate the security of our proposed approach for different DL benchmarks including MNIST, CIFAR10, and a subset of ImageNet data. Based on the results of our analysis, we provide new insights on the reason behind the existence of adversarial transferability. We open-source our API to ensure ease of use by the users (the link is omitted for blind review purposes) and invite the community to attempt attacks against our provided benchmarks in the form of a challenge.

The explicit contributions of this paper are as follows: (i) Devising an automated end-to-end framework for unsupervised model assurance as well as defending against the adversaries. (ii) Incepting the idea of parallel checkpointing learners to validate the legitimacy of data abstractions at each intermediate DL layer. (iii) Performing extensive proof-of-concept evaluations against state-of-the-art attack methods. (iv) Providing new insights regarding the transferability of adversarial samples between different models.

2 TRAINING CHECKPOINTING MODULES FOR INTERMEDIATE LAYERS

The goal of each defender (checkpointing) module is to learn the pdf of the explored sub-spaces in a particular intermediate DL feature map. The learned density function is then used to identify the rarely observed regions, as depicted in Figure 1b. We consider a Gaussian Mixture Model (GMM) as the prior probability to characterize the data distribution at each checkpoint location. (It is worth noting that our proposed approach is rather generic and is not restricted to the GMM distribution; the GMM prior can be replaced with any other prior depending on the application.)
To effectively characterize the explored sub-space as a GMM distribution, one is required to minimize the entanglement between each pair of Gaussian distributions (corresponding to every two different classes) while decreasing the inner-class diversity. Figure 2 illustrates the high-level block diagram of the training procedure for devising a parallel checkpointing module. Training a defender module is a one-time offline process and is performed in three steps.

1 Replicating the victim neural network and all its feature maps. An L2 normalization layer is inserted in the desired checkpoint location. The normalization layer maps the latent feature variables, φ(x), into the Euclidean space such that the acquired data embeddings live on a d-dimensional hypersphere, i.e., ‖φ(x)‖_2 = 1. This normalization is crucial as it partially removes the effect of over-fitting to particular data samples that are highly correlated with the underlying DL parameters. (The L2 norm is selected to be consistent with our assumption of a GMM prior distribution. This norm can be easily replaced by an arbitrary user-defined norm through our accompanying API.)

2 Fine-tuning the replicated network to enforce disentanglement of data features (at a particular checkpoint location). To do so, we optimize the defender module by incorporating the following loss function with the conventional cross-entropy loss:

L^+ = \gamma \Big[ \underbrace{\|C^{y^*} - \phi(x)\|_2^2}_{loss_1} - \underbrace{\sum_{i \neq y^*} \|C^i - \phi(x)\|_2^2}_{loss_2} + \underbrace{\sum_i (\|C^i\|_2 - 1)^2}_{loss_3} \Big]. (1)

Here, γ is a trade-off parameter that specifies the contribution of the additive loss term, φ(x) is the corresponding feature vector of input sample x at the checkpoint location, y* is the ground-truth label, and C^i denotes the center of all data abstractions (φ(x)) corresponding to class i. The center values C^i and intermediate feature vectors φ(x) are trainable variables that are learned by fine-tuning the defender module. In our experiments, we set the parameter γ to 0.01 and retrain the defender model with the same optimizer used for training the victim model. The learning rate of the optimizer is set to 1/10 of that of the victim model, as the model is already in a relatively good local minimum.

Figure 3a illustrates the optimization goal of each defender module per Eq. (1). The first term (loss_1) in Eq. (1) aims to condense the latent data features φ(x) that belong to the same class. Reducing the inner-class diversity, in turn, yields a sharper Gaussian distribution per class. The second term (loss_2) intends to increase the distance between different categories and promote separability. The composition of the first two terms in Eq. (1) can be made arbitrarily small by pushing the centers to (C^i ← ±∞). We add the term loss_3 to ensure that the underlying centers lie on a unit d-dimensional hypersphere and to avoid divergence in training the defender modules. Figures 3b and 3c demonstrate the distance of legitimate (blue) and adversarial (red) samples from the corresponding centers C^i in a checkpoint module before and after retraining. (The centers C^i before fine-tuning the checkpoint (defender) module are equivalent to the means of the data points in each class.) As shown, fine-tuning the defender module with the proposed objective function can effectively separate the distribution of legitimate samples from malicious data points.
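The additive loss in Eq. (1) is straightforward to express in PyTorch; below is a minimal sketch (function and variable names are ours) of the term that is added to the usual cross-entropy loss during fine-tuning:

```python
import torch

def defender_loss(phi, centers, y, gamma=0.01):
    # Sketch of Eq. (1). phi: (B, d) L2-normalized latent features,
    # centers: (num_classes, d) trainable class centers, y: (B,) labels.
    c_true = centers[y]                               # C^{y*} per sample
    loss1 = (phi - c_true).pow(2).sum(dim=1)          # condense same-class features
    d2 = torch.cdist(phi, centers).pow(2)             # (B, num_classes) squared dists
    loss2 = d2.sum(dim=1) - d2.gather(1, y[:, None]).squeeze(1)  # drop own class
    loss3 = (centers.norm(dim=1) - 1.0).pow(2).sum()  # keep centers on unit sphere
    return gamma * ((loss1 - loss2).mean() + loss3)
```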
Note that training the defender module is carried out in an unsupervised setting, meaning that no adversarial sample is included in the training phase.

3 High-dimensional real-world datasets can be represented as an ensemble of lower-dimensional sub-spaces (Bouveyron et al. (2007); Mirhoseini et al. (2016); Rouhani et al. (2017)). As discussed in (Bouveyron et al. (2007)), under a GMM distribution assumption, the data points belonging to each class can be characterized as a spherical density in two sub-spaces: (i) the sub-space where the data actually lives (E_i) and (ii) its orthogonal complementary space (E_i^⊥). The orthogonal space E_i^⊥ is defined such that E_i^⊥ ⊕ E_i = R^d, where d is the overall dimensionality of the underlying space. We leverage the High Dimensional Discriminant Analysis (HDDA) algorithm (Bouveyron et al. (2007)) to learn the mean and the conditional covariance of each class as a composition of lower-dimensional sub-spaces. Under the Gaussian distribution and our specific assumptions, the conditional covariance matrix contains two different eigenvalues a_i > b_i to be determined, as shown in (Bouveyron et al. (2007)).

The learned pdf variables (i.e., mean and conditional covariance) are used to compute the probability of a feature point φ(x) coming from a specific class. In particular, for each incoming test sample x, the probability p(φ(x)|y^i) is evaluated, where y^i is the predicted class (output of the victim neural network) and φ(x) is the corresponding data abstraction at the checkpoint location. The acquired likelihood is then compared against a user-defined cut-off threshold, which we refer to as the security parameter. The Security Parameter (SP) is a constant number in the range [0%−100%] that determines the hardness of the defender modules. Figure 4 illustrates how the SP can control the hardness of the pertinent decision boundaries. In this example, we have depicted the latent features of one category projected onto the first two Principal Component Analysis (PCA) components in the Euclidean space (each point corresponds to a single input image). The blue and black contours correspond to security parameters of 10% and 20%, respectively. For example, 10% of the legitimate training samples lie outside the contour specified with SP = 10%.

One may speculate that an adversary could add a structured noise to a legitimate sample such that the data point is moved from one cluster to the center of another cluster, thus fooling the defender modules (Figure 5a). The risk of such an attack is significantly reduced in our proposed PCL countermeasure for three main reasons: (i) use of parallel checkpointing modules: the attacker is required to simultaneously deceive all the defender models in order to succeed; (ii) increasing inter-class distances in each checkpointing module: the latent defender modules are trained such that not only is the inner-class diversity decreased, but the distance between each pair of different classes is also increased (see Eq. (1)); (iii) learning a separate defender module in the input space to validate the Peak Signal-to-Noise Ratio (PSNR) level of the incoming samples, as discussed in Section 3. In the remainder of the paper, we refer to the defender modules operating on the input space as input defenders. PCL modules that checkpoint the intermediate data features within the DL network are referred to as latent defenders.
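As a rough stand-in for the HDDA fit, the sketch below fits a full-covariance Gaussian per class to the legitimate latent features and calibrates the SP cut-off as a percentile of the training log-likelihoods. All names are hypothetical, and the full covariance replaces the constrained two-eigenvalue model described above.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_checkpoints(features, labels, num_classes):
    # Fit one Gaussian per class to the latent features phi(x) of
    # legitimate training data (a plain-covariance stand-in for HDDA).
    return [multivariate_normal(mean=features[labels == c].mean(axis=0),
                                cov=np.cov(features[labels == c], rowvar=False),
                                allow_singular=True)
            for c in range(num_classes)]

def calibrate_threshold(gauss, class_features, sp=0.10):
    # Security parameter SP: cut-off such that SP percent of the legitimate
    # training samples fall below it (outside the decision contour).
    return np.percentile(gauss.logpdf(class_features), 100.0 * sp)

def raise_flag(gauss, threshold, phi_x):
    # Flag a test sample as adversarial if its likelihood under the
    # victim's predicted class falls below the SP-calibrated cut-off.
    return gauss.logpdf(phi_x) < threshold
```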
2.1 RISK ANALYSIS

Detecting malicious samples can be cast as a two-category classification task. Let us refer to the category of legitimate samples as W_1 and the category of adversarial samples as W_2. If we define η_{ij} = η(α_i|W_j) as the misclassification penalty incurred for deciding W_i when the true state is W_j (the misclassification penalty is a constant value that determines the cost of each decision), the conditional risks in each of our checkpointing modules are:

R(\alpha_1|\phi(x)) = \eta_{11} P(W_1|\phi(x)) + \eta_{12} P(W_2|\phi(x)),
R(\alpha_2|\phi(x)) = \eta_{21} P(W_1|\phi(x)) + \eta_{22} P(W_2|\phi(x)). \qquad (2)

The fundamental rule to express the minimum-risk decision is to decide W_1 if R(α_1|φ(x)) < R(α_2|φ(x)). In terms of the posterior probabilities, we decide W_1 if:

(\eta_{21} - \eta_{11}) P(W_1|\phi(x)) > (\eta_{12} - \eta_{22}) P(W_2|\phi(x)). \qquad (3)

Generally speaking, the penalty incurred for making an error is greater than the cost incurred for being correct; thus both of the terms η_{21} − η_{11} and η_{12} − η_{22} are positive. Following Bayes' rule, we should select a sample as a legitimate one (W_1) if:

(\eta_{21} - \eta_{11}) P(\phi(x)|W_1) P(W_1) > (\eta_{12} - \eta_{22}) P(\phi(x)|W_2) P(W_2), \qquad (4)

and select W_2 otherwise. By reordering the aforementioned decision criteria we have:

\frac{P(\phi(x)|W_1)}{P(\phi(x)|W_2)} > \frac{\eta_{12} - \eta_{22}}{\eta_{21} - \eta_{11}} \cdot \frac{P(W_2)}{P(W_1)}. \qquad (5)

Note that the right-hand term in Eq. (5) is application specific and is independent of the input data observation φ(x). In other words, the optimal decision criterion particularly relies on the cost of making a mistake in the given task and on the risk of being attacked. This term is tightly correlated with the user-defined cut-off threshold (security parameter) depicted in Figure 4. Under the GMM assumption, the conditional probability P(φ(x)|W_1) in Eq. (5) is computed as:

p(\phi(x)|y_i) = \frac{1}{(2\pi)^{N/2} |\Sigma_i|^{1/2}} \exp\left\{ -\tfrac{1}{2} (\phi(x) - \mu_i)^T \Sigma_i^{-1} (\phi(x) - \mu_i) \right\}, \qquad (6)

where y_i is the output of the victim neural network (predicted class), µ_i and Σ_i are the outputs of the HDDA analysis, and N is the dimension of the latent feature space in the checkpoint module.

3 TRAINING CHECKPOINTING MODULES FOR THE INPUT SPACE

We leverage dictionary learning and sparse signal recovery techniques to measure the PSNR of each incoming sample and automatically filter out atypical samples in the input space. Figure 5b illustrates the high-level block diagram of an input defender module. As shown, devising an input checkpoint model is performed in two main steps: (i) dictionary learning, and (ii) characterizing the typical PSNR per class after sparse recovery.

(1) Dictionary learning; we learn a separate dictionary for each class of data by solving:

\arg\min_{D^i} \tfrac{1}{2} \lVert Z^i - D^i V^i \rVert_2^2 + \beta \lVert V^i \rVert_1 \quad \text{s.t.} \quad \lVert D^i_k \rVert = 1, \; 0 \le k \le k_{max}. \qquad (7)

Here, Z^i is a matrix whose columns are pixel patches extracted from different regions of input images belonging to category i. For instance, if we consider 8 × 8 patches of pixels, each column of Z^i would be a vector of 64 elements. The goal of dictionary learning is to find the matrix D^i that best represents the distribution of pixel patches from images belonging to class i. We denote the number of columns in D^i by k_max. For a given D^i, the image patches Z^i are represented with a sparse matrix V^i, and D^i V^i is the set of reconstructed patches. We leverage the Least Angle Regression (LAR) method to solve the Lasso problem defined in Eq. (7). In our experiments, we learn a dictionary of size k_max = 225 for each class of data points using 150,000 randomly selected patches of training data.
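One compact way to realize Eq. (7) is with scikit-learn's dictionary learner, which fits unit-norm atoms with a LARS-based Lasso solver, matching the ‖D_k‖ = 1 constraint and the LAR solver mentioned above. The sketch below is a simplification under our own choices of patch sampling and β; it is not the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_class_dictionary(images, patch_size=(8, 8), k_max=225, beta=1.0):
    """Fit the per-class dictionary D^i of Eq. (7) on image patches.

    images: iterable of 2-D grayscale arrays belonging to one class.
    """
    patches = np.concatenate(
        [extract_patches_2d(img, patch_size, max_patches=100) for img in images]
    )
    z = patches.reshape(len(patches), -1).astype(np.float64)
    z -= z.mean(axis=1, keepdims=True)  # remove the per-patch DC component

    # LARS-based Lasso fit; sklearn keeps the dictionary atoms at unit norm,
    # consistent with the ||D_k|| = 1 constraint of Eq. (7)
    dico = MiniBatchDictionaryLearning(
        n_components=k_max, alpha=beta,
        fit_algorithm="lars", transform_algorithm="omp",
    )
    dico.fit(z)
    return dico  # dico.components_ has shape (k_max, 64) for 8x8 patches
```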
For an incoming sample, during the execution phase, the input defender module takes the output of the victim DL model (e.g., predicted class i) and uses the Orthogonal Matching Pursuit (OMP) routine (Tropp et al. (2007)) to sparsely reconstruct the input data with the corresponding dictionary D^i. The dictionary matrix D^i contains a set of patterns that commonly appear in the training data belonging to class i; as such, an input sample classified as class i should be well reconstructed as D^i V^* with a high PSNR value, where V^* is the optimal solution obtained by the OMP routine. During the execution phase, all of the non-overlapping patches within the image are denoised by the dictionary to form the reconstructed image.

(2) Characterizing the typical PSNR in each category; we profile the PSNR of legitimate samples within each class and find a threshold that covers all legitimate training samples. If an incoming sample has a PSNR lower than the threshold (i.e., a high perturbation remains after reconstruction by the corresponding dictionary), it is regarded as a malicious data point. In particular, the PSNR is defined as:

PSNR = 20 \log_{10}(MAX_I) - 10 \log_{10}(MSE), \qquad (8)

where the mean squared error (MSE) is computed between the input image and the image reconstructed with the corresponding dictionary, and MAX_I is the maximum possible pixel value of the image (usually 255).

Figure 6 demonstrates the impact of the perturbation level on the pertinent adversarial detection rate for three different security parameters (cut-off thresholds). In this experiment, we have considered the FGS attack with different ε values on the MNIST benchmark. (Table 2 in Appendix A summarizes the DL model topology used in each benchmark; the latent defender module is inserted at the second-to-last layer.) As shown, the use of input dictionaries facilitates automated detection of adversarial samples with relatively high perturbation (e.g., ε > 0.25), while the latent defender module is sufficient to effectively distinguish malicious samples even with very small perturbations. We extensively evaluate the impact of the security parameter on the ultimate system performance for various benchmarks in Section 4.
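Putting the two steps together, a minimal sketch of the execution-phase PSNR test might look as follows, reusing a dictionary fitted as in the previous sketch. The non-overlapping patch grid, the per-patch DC handling, and MAX_I = 255 are our assumptions.

```python
import numpy as np

def psnr_score(image, dico, p=8):
    """Denoise an image with its class dictionary and return the PSNR of Eq. (8).

    `dico` is a fitted dictionary learner whose transform performs OMP,
    e.g., the output of learn_class_dictionary above.
    """
    h, w = image.shape
    img = image[: h - h % p, : w - w % p].astype(np.float64)
    hh, ww = img.shape
    # split the image into a grid of non-overlapping p x p blocks (one per row)
    blocks = img.reshape(hh // p, p, ww // p, p).swapaxes(1, 2).reshape(-1, p * p)
    means = blocks.mean(axis=1, keepdims=True)
    codes = dico.transform(blocks - means)      # sparse codes V* via OMP
    recon = codes @ dico.components_ + means    # reconstructed patches D^i V*
    recon_img = recon.reshape(hh // p, ww // p, p, p).swapaxes(1, 2).reshape(hh, ww)
    mse = np.mean((img - recon_img) ** 2) + 1e-12  # guard against log10(0)
    return 20 * np.log10(255.0) - 10 * np.log10(mse)
```

The per-class threshold is then profiled as the minimum PSNR observed over that class's legitimate training images, per step (2) above.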
4 EXPERIMENTS

We evaluate the proposed PCL methodology on three canonical machine learning datasets: MNIST (LeCun et al. (1998b)), CIFAR10 (Krizhevsky & Hinton (2009)), and a subset of ImageNet (Deng et al. (2009)) consisting of 10 different classes. A detailed summary of the neural network architectures used in each benchmark, along with the specific parameters used for the various attacks, is provided in Appendix A. We leveraged the attack benchmark sets available at (Nicolas Papernot (2017)) for the evaluation of different state-of-the-art attacks, including the FGS, JSMA, Deepfool, and Carlini&WagnerL2 attacks.

In our proposed countermeasure, the input and latent defenders are jointly considered to detect adversarial samples. In particular, we treat an input as an adversarial sample if either the latent or the input checkpointing module raises an alarm signal. Figure 7 demonstrates the impact of the security parameter on the ultimate false positive and true positive rates for the MNIST benchmark. As shown, a higher security parameter results in a higher true positive detection rate, while it also increases the risk of labeling legitimate samples as possibly malicious. To consider the joint decision metric for each application and attack model, we evaluate the false positive and true positive rates and present the pertinent Receiver Operating Characteristic (ROC) curves in Figure 8.

The ROC curves are established as follows: first, we consider a latent defender and sweep the security parameter (SP) over the range [0%, 100%], evaluating the FP and TP rates for each security parameter, which gives us the dashed blue ROC curves. Next, we add an input defender and modify the detection policy: a sample is considered malicious if either the input or the latent defender raises an alarm flag. The ROC curve for this joint defense policy is shown as the green curves in Figure 8. The gap between the dashed blue curve and the green curve indicates the effect of the input defender on the overall decision policy; as can be seen, the input defender has more impact for the FGS attack. This is compatible with our intuition since, compared to the other three attack methods, the FGS algorithm induces more perturbation to generate adversarial samples.

We summarize the performance of the PCL methodology against each of the FGS, JSMA, Deepfool, and Carlini&WagnerL2 attacks for MNIST, CIFAR10, and ImageNet in Table 1. The reported numbers in this table are gathered as follows: we consider a few points on the green ROC curve (marked in Figure 8) that correspond to certain TP rates (i.e., 90%, 95%, 98%, and 99%), and report the FP rates at these points. In all our experiments, the use of only one latent defender module checkpointing the second-to-last layer of the pertinent victim model was enough to detect the adversarial samples generated by the existing state-of-the-art attacks. Please refer to Appendix B for the complete set of ROC curves for the CIFAR10 and ImageNet benchmarks.
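The joint (green) curves can be traced with a simple sweep of the security parameter. The sketch below reflects our reading of the SP definition, calibrating each defender's cut-off on the scores of legitimate samples; all names are illustrative.

```python
import numpy as np

def joint_roc(latent_scores, input_scores, labels, num_points=101):
    """Trace the joint-defense ROC by sweeping the security parameter.

    latent_scores / input_scores: per-sample scores where higher means
    "more legitimate" (e.g., log-likelihood from the latent defender and
    PSNR from the input defender); labels: 1 = adversarial, 0 = legitimate.
    """
    legit = labels == 0
    fprs, tprs = [], []
    for sp in np.linspace(0.0, 100.0, num_points):
        # per-defender cut-offs: SP% of legitimate samples fall below them
        t_latent = np.percentile(latent_scores[legit], sp)
        t_input = np.percentile(input_scores[legit], sp)
        # joint policy: flag a sample if EITHER defender raises an alarm
        alarm = (latent_scores < t_latent) | (input_scores < t_input)
        fprs.append(alarm[legit].mean())
        tprs.append(alarm[~legit].mean())
    return np.array(fprs), np.array(tprs)
```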
5 DISCUSSION

Figure 9 demonstrates an example of the adversarial confusion matrices for victim neural networks with and without the use of parallel checkpointing learners. In this example, we set the security parameter to only 1%. As shown, the adversarial samples generated for the victim model do not transfer to the checkpointing modules. In fact, the proposed PCL approach can effectively remove/detect adversarial samples by characterizing the rarely explored sub-spaces and looking into the statistical density of data points in the pertinent space. Note that the remaining adversarial samples that are not detected in this experiment are crafted from legitimate samples that are inherently hard to classify, even for a human observer, due to the closeness of the decision boundaries of the corresponding classes. For instance, in the MNIST application, such adversarial samples mostly belong to class 5 misclassified as class 3, or class 4 misclassified as class 9. Such misclassifications are indeed part of the model approximation error, which is attributable to the statistical nature of the models. As such, a more precise definition of adversarial samples is needed to distinguish malicious samples from those that simply lie near the decision boundaries.

We emphasize that the PCL defenders are trained in an unsupervised setting independent of the attack strategy, meaning that no adversarial sample is used to train the defender models. This is particularly important as it corroborates the effectiveness of the proposed countermeasure in the face of generic attack scenarios, including possible future adversarial DL algorithms. Nevertheless, one might question the effectiveness of the proposed approach against adaptive attack algorithms that target the defender modules. A comprehensive study of possible adaptive attack algorithms is yet to be performed if such attacks are developed in the future.

We emphasize that, thus far, we have been able to significantly thwart all the existing attacks with only one checkpoint model approximating the data distribution in the second-to-last layer of the corresponding models. Our proposed PCL methodology, however, provides a rather more generic approach that can be adapted/modified against potential future attacks by training parallel disjoint models (with diverse objectives/parameters) to further strengthen the defense. Figure 10 demonstrates how using multiple negatively correlated checkpoints in parallel can effectively reduce the number of false alarms while increasing the detection rate of adversarial samples. In this experiment, we have considered MNIST data classification using a four-layer LeNet model and the FGS attack. The checkpoints are inserted in different layers of the pertinent neural network (from the first layer up to the second-to-last layer). We empirically select the mixing coefficients used to aggregate the confidence of the checkpoint defenders for rejecting an incoming sample; see the sketch at the end of this section. Note that there is a trade-off between the computational complexity (e.g., runtime overhead) of the PCL defenders and the reliability of the overall system. On the one hand, a high number of validation checkpoints increases the reliability of the system, but it also increases the computational load, as each input sample must be validated by more defender networks. On the other hand, a small number of checkpoints may degrade the defense performance by treating adversarial samples as legitimate ones. As future work, we are looking into automated techniques to customize the number of checkpoint modules and their corresponding mixing coefficients based on the application data and physical constraints such as real-time analysis requirements.
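The paper does not spell out the exact aggregation rule, but one plausible reading of the weighted multi-checkpoint decision is sketched below; the threshold-then-vote scheme and the `cutoff` value are our assumptions.

```python
import numpy as np

def ensemble_alarm(defender_scores, thresholds, mixing, cutoff=0.5):
    """Aggregate several latent defenders with mixing coefficients.

    defender_scores: (num_defenders,) per-checkpoint scores for one sample
    thresholds:      per-defender cut-offs profiled on legitimate data
    mixing:          non-negative coefficients summing to 1, selected
                     empirically as described above
    Returns True (reject the sample) if the weighted rejection
    confidence exceeds `cutoff`.
    """
    votes = (np.asarray(defender_scores) < np.asarray(thresholds)).astype(float)
    confidence = float(np.dot(np.asarray(mixing), votes))
    return confidence > cutoff
```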
6 RELATED WORK

In response to the various adversarial attack methodologies proposed in the literature (e.g., Goodfellow et al. (2014); Papernot et al. (2016a); Moosavi-Dezfooli et al. (2016); Carlini & Wagner (2017b)), several research attempts have been made to design DL strategies that are more robust in the face of adversarial examples. The existing countermeasures can be classified into two distinct categories. (i) Supervised strategies, which aim to improve the generalization of the learning models by incorporating noise-corrupted versions of the inputs as training samples (Jin et al. (2015); Gu & Rigazio (2014)) and/or injecting adversarial examples generated by different attacks into the DL training phase (Huang et al. (2015); Shaham et al. (2015); Goodfellow et al. (2014); Szegedy et al. (2013)). The defense mechanisms in this category are tailored to specific perturbation patterns and can only partially prevent adversarial samples generated by other attack scenarios (with different perturbation distributions) from being effective, as shown in (Gu & Rigazio (2014)). (ii) Unsupervised approaches, which aim to smooth out the underlying gradient space (decision boundaries) by incorporating a smoothness penalty (Miyato et al. (2015); Carlini & Wagner (2017b)) as a regularization term in the loss function, or by compressing the neural network to remove nuisance variables (Papernot et al. (2016b)). These works have largely remained oblivious to the pertinent data density in the latent space. In particular, they have been developed under the implicit assumption that the existence of adversarial samples is due to the piece-wise linear behavior of the decision boundaries (obtained by gradient descent) in the high-dimensional space. As such, their integrity can be jeopardized by considering different perturbations in the input space and evaluating the same attack on the various perturbed data points in order to pass even the smoothed decision boundaries, as shown in (Carlini & Wagner (2016)). More recently, Meng & Chen (2017) proposed an unsupervised manifold projection method called MagNet that reforms adversarial samples using autoencoders. Unlike PCL, MagNet is inattentive to the density function of the data in the space. As shown in (Carlini & Wagner (2017a)), manifold projection methods including MagNet are not robust to adversarial samples and increase the distortion required to generate an adversarial sample by only approximately 30%.

To the best of our knowledge, the proposed PCL methodology is the first unsupervised countermeasure developed upon probabilistic density analysis and dictionary learning to effectively characterize and thwart adversarial samples. The PCL method does not assume any particular attack strategy and/or perturbation pattern. This is particularly important as it demonstrates the generalizability of the proposed approach in the face of adversarial attacks.

7 CONCLUSION

This paper proposes a novel end-to-end methodology for characterizing and thwarting adversarial attacks in the DL space. We introduce the concept of parallel checkpointing learners as a viable countermeasure to significantly reduce the risk of integrity attacks. The proposed PCL methodology explicitly characterizes statistical properties of the features within different layers of a neural network by learning a set of complementary dictionaries and corresponding probability density functions. The effectiveness of the PCL approach is evaluated against the state-of-the-art attack models, including FGS, JSMA, Deepfool, and Carlini&WagnerL2. Proof-of-concept experiments on various data collections, including MNIST, CIFAR10, and a subset of the ImageNet dataset, corroborate successful detection of adversarial samples with relatively small false-positive rates. We devise an open-source API for the proposed countermeasure and invite the community to attempt attacks against the provided benchmarks in the form of a challenge.

APPENDIX A

Table 2 presents the neural network architectures of the victim models used in each benchmark. The network for MNIST is the popular LeNet-3 architecture, the CIFAR-10 architecture is taken from (Ciregan et al. (2012)), and the ImageNet model is inspired by the AlexNet architecture (Krizhevsky et al. (2012)). We visually evaluate the perturbed examples to determine the attack parameters (e.g., the perturbation level ε and n_iters) such that the perturbations cannot be recognized by a human observer. Table 3 details the parameters used for the realization of the different attack algorithms. The JSMA attack on the ImageNet benchmark is computationally expensive (e.g., it took more than 20 minutes to generate one adversarial sample on an NVIDIA TITAN Xp GPU). As such, we could not generate the adversarial samples for this attack using the JSMA library provided by (Nicolas Papernot (2017)).

APPENDIX B

The corresponding ROC curves for PCL performance against the FGS, JSMA, Deepfool, and Carlini&WagnerL2 attacks on the CIFAR10 and ImageNet benchmarks are provided here.
1. How does the proposed method detect adversarial examples in deep learning classification? 2. What is the purpose of the checkpointing module, and how does it work? 3. Why did the authors choose a value of γ = 0.01 for the concentration parameter? 4. Does the checkpointing module only detect adversarial examples, or can it also classify them robustly? 5. Can you provide a high-level overview of how all the components of the approach fit together? 6. How do the technical aspects of the approach justify its novelty and potential impact?
Review
Review

This paper presents a method for detecting adversarial examples in a deep learning classification setting. The idea is to characterize the latent feature space (a function of the inputs) as observed vs. unobserved, and to use a module that fits a 'cluster-aware' loss aiming to cluster similar classes more tightly in the latent space.

Questions/Comments:
- How is the checkpointing module represented? Which parameters are fit using the fine-tuning loss described on page 3?
- What is the rationale for setting the gamma (concentration?) parameter to 0.01? Is that a general suggestion or a dataset-specific recommendation?
- Are the checkpointing modules designed to only detect adversarial examples? Or are they designed to still classify adversarial examples in a robust way?

Clarity: I had trouble understanding some of this paper. It would be nice to have a succinct summary of how all of the pieces presented fit together, e.g., the original victim network, the fine-tuning loss, and the per-class dictionary learning w/ OMP.

Technical: It is hard to tell how some of the components of this approach are technically justified.

Novel: I am not familiar enough with adversarial deep learning to assess novelty or impact.
ICLR
Title Towards Safe Deep Learning: Unsupervised Defense Against Generic Adversarial Attacks
1. What is the reviewer's opinion on the significance and novelty of the proposed approach? 2. What are the strengths and weaknesses of the paper regarding its contributions, ideas, and empirical performance? 3. How does the reviewer assess the clarity, quality, and completeness of the paper's content? 4. Are there any concerns or suggestions regarding the methodology, experiments, and results presented in the paper? If so, what are they? 5. Does the reviewer think the paper is suitable for publication after potential revisions?
Review
Review

Summary: The paper presents an unsupervised method for detecting adversarial examples of neural networks. The method includes two independent components: an 'input defender' which tries to inspect the input, and a 'latent defender' trying to inspect a hidden representation. Both are based on the claim that adversarial examples lie outside a certain sub-space occupied by the natural image examples, and modeling this sub-space hence enables their detection. The input defender is based on sparse coding, and the latent defender on modeling the latent activity as a mixture of Gaussians. Experiments are presented on MNIST, CIFAR10, and ImageNet.

- Introduction: The motivation for detecting adversarial examples is not stated clearly enough. How can such examples be used by a malicious agent to cause damage to a system? Sketching some such scenarios would help the reader understand why the issue is practically important. I was not convinced it is.

Page 4:
- Step 3 of the algorithm is not clear:
  o How exactly does HDDA model the data (formally), and how does it estimate the parameters? In the current version, the paper does not explain the HDDA formalism and learning algorithm, which is a main building block in the proposed system (as it provides the density score used for adversarial example detection). Hence the paper cannot be read as a standalone document. I went on to read the relevant HDDA paper, but it is also not clear which of the model variants presented there is used in this paper.
  o What is the relation between the model learned at stage 2 (the centers C^i) and the model learnt by HDDA? Are they completely different models? Or are the C^i used when learning the HDDA model (and how)? If these are separate models, how are they used in conjunction to give a final density score? If I understand correctly, only the HDDA model is used to get the final score, and the C^i are only used to make the \phi(x) representation more class-separable. Is that right?
- Figure 4, b and c: it is not clear what the (x,y,z) measurements plotted in these 3D drawings are (what are the axes?).

Page 5:
- Section 2: the risk analysis is done in a standard Bayesian way and leads to a ratio of PDFs in equation (5). However, this form is not appropriate for the case presented in this paper, since the method presented only models one of these PDFs (specifically p(x|W1); there is no generative model of p(x|W2)).
- The authors claim in the last sentence of the section that p(x|W2) is equivalent to 1-p(x|W1), but this is not true: these are two continuous densities, they do not sum to 1, and a model of p(x|W2) is not available (as far as I understand the method).

Page 6:
- How is equation (7) optimized?
- Which patches are extracted from images, for training and at inference time? Are these patches a dense coverage of the image? Sparsely sampled? Densely sampled with overlaps?
- It is not clear enough what exactly the 'PSNR' value used for adversarial example detection is, and what exactly 'profile the PSNR of legitimate samples within each class' means. A formal definition of PSNR and 'profiling' is missing (does profiling simply mean finding a threshold for filtering?).

Page 7:
- Figure 7 is not very informative. Given the ROC curves in Figure 8 and Table 1, it is redundant.

Page 8:
- The results in general indicate that the method is much better than chance, but it is not clear if it is practical, because the false alarm rates for high detection are quite high.
For example, on ImageNet, 14.2% of the innocent images are mistakenly rejected as malicious to get a 90% detection rate. I do not think this working point is useful for a real application.
- Given the high false alarm rate, it is surprising that experiments with multiple checkpoints are not presented (specifically as this case of multiple checkpoints is discussed explicitly in previous sections of the paper). Experiments with multiple checkpoints are clearly required to complete the picture regarding the empirical performance of this method.
- The experiments show that essentially, the latent defenders are stronger than the input defender in most cases. However, an ablation study of the latent defender is missing: specifically, it is not clear (a) how much stage 2 (model refinement with clusters) contributes to the accuracy (how does the model do without it?), and (b) how important the HDDA and the specific variant used (which is not clear) are: is it important to model the Gaussians using a sub-space? Of which dimension?

Overall:
Pros:
- A nice idea with some novelty, based on a non-trivial observation
- The experimental results show the idea holds some promise
Cons:
- The method is not presented clearly enough: the main component modeling the network activity is not explained (the HDDA module used)
- The results presented show that the method is probably not suitable for a practical application yet (high false alarm rate for a good detection rate)
- Experimental results are partial: results are not presented for multiple defenders, and there are no ablation experiments

After revision: Some of my comments were addressed, and some were not. Specifically, results were presented for multiple defenders and some ablation experiments were highlighted.

Things not addressed:
- The risk analysis is still not relevant. The authors removed a clearly flawed sentence, but the analysis still assumes that two densities (of 'good' and 'bad' examples) are modeled, while in the work presented only one of them is. Hence this analysis does not add anything to the paper: it states a general case which does not fit the current scenario, and its relation to the work is not clear. It would have been better to omit it and use the space to describe HDDA and the specific variant used in this work, as this is the main tool doing the distinction.

I believe the paper should be accepted.
ICLR
Title Towards Safe Deep Learning: Unsupervised Defense Against Generic Adversarial Attacks Abstract Recent advances in adversarial Deep Learning (DL) have opened up a new and largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems. We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart the potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model. The proposed PCL methodology is unsupervised, meaning that no adversarial sample is leveraged to build/train parallel checkpointing learners. We formalize the goal of preventing adversarial attacks as an optimization problem to minimize the rarely observed regions in the latent feature space spanned by a DL network. To solve the aforementioned minimization problem, a set of complementary but disjoint checkpointing modules are trained and leveraged to validate the victim model execution in parallel. Each checkpointing learner explicitly characterizes the geometry of the input data and the corresponding high-level data abstractions within a particular DL layer. As such, the adversary is required to simultaneously deceive all the defender modules in order to succeed. We extensively evaluate the performance of the PCL methodology against the state-of-the-art attack scenarios, including Fast-Gradient-Sign (FGS), Jacobian Saliency Map Attack (JSMA), Deepfool, and Carlini&WagnerL2. Extensive proof-of-concept evaluations for analyzing various data collections including MNIST, CIFAR10, and ImageNet corroborate the effectiveness of our proposed defense mechanism against adversarial samples. 1 INTRODUCTION Security and safety consideration is a major obstacle to the wide-scale adoption of emerging learning algorithms in sensitive scenarios, such as intelligent transportation, healthcare, and video surveillance applications (McDaniel et al. (2016); Dahl et al. (2013); Knorr (2015)). While advanced learning technologies are essential for enabling coordination and interaction between the autonomous agents and the environment, a careful analysis of their decision reliability in the face of carefully crafted adversarial samples (Goodfellow et al. (2014); Papernot et al. (2016a); Moosavi-Dezfooli et al. (2016); Carlini & Wagner (2017b)) and thwarting their vulnerabilities are still in their infancy. Consider a traffic sign classifier employed in self-driving cars. In this setting, an adversary can carefully add imperceptible perturbation to a legitimate “stop” sign sample and fool the DL model to classify it as a “yield” sign; thus, jeopardizes the safety of the vehicle as shown in (McDaniel et al. (2016)). As such, it is highly important to reject risky adversarial samples to ensure the integrity of DL models used in autonomous systems such as unmanned vehicles and drones. In this paper, we aim to answer two open questions regarding the adversarial attacks. (i) Why are machine learning models vulnerable to adversarial samples? Our hypothesis is that the vulnerability of neural networks to adversarial samples originates from the existence of rarely explored sub-spaces in each feature map. This phenomenon is particularly caused by the limited access to the labeled data and/or inefficiency of regularization algorithms (Wang et al. (2016); Denil et al. (2013)). Figure 1 provides a simple illustration of the partially explored space in a twodimensional setup. 
We analytically and empirically back up our hypothesis by extensive evaluations on the state-of-the-art attacks, including Fast-Gradient-Sign (Goodfellow et al. (2014)), Jacobian Saliency Map Attack (Papernot et al. (2016a)), Deepfool (Moosavi-Dezfooli et al. (2016)), and Carlini&WagnerL2 (Carlini & Wagner (2017b)). (ii) How can we characterize and thwart the underlying space for unsupervised model assurance as well as defend against the adversaries? A line of research has shown that there is a trade-off between the robustness of a model and its accuracy (Madry et al. (2017); Papernot et al. (2016b)). Taking this into account, instead of making a single model that is both robust and accurate, we introduce a new defense mechanism called Parallel Checkpointing Learners (PCL). In this setting, the victim model is kept as is while separate defender modules are trained to checkpoint the data abstractions and assess the reliability of the victim's prediction. Each defender module characterizes the explored sub-space in the pertinent layer by learning the probability density function (pdf) of legitimate data points and marking the complement sub-spaces as rarely observed regions. Once such characterization is obtained, the checkpointing modules (we use the terms "checkpointing module" and "defender module" interchangeably throughout the paper) evaluate the input sample in parallel with the victim model and raise alarm flags for data points that lie within the rarely explored regions (Figure 1c). As we demonstrate in Section 4, adversarial samples created by various attack methods mostly lie within the sub-spaces marked as partially explored sectors. We consider a white-box attack model in which the attacker knows everything about the victim model including its architecture, learning algorithm, and parameters. This threat model represents the most powerful attacker that can endanger real-world applications. We validate the security of our proposed approach for different DL benchmarks including MNIST, CIFAR10, and a subset of ImageNet data. Based on the results of our analysis, we provide new insights on the reason behind the existence of adversarial transferability. We open-source our API to ensure ease of use by the users (the link is omitted for blind review purposes) and invite the community to attempt attacks against our provided benchmarks in the form of a challenge. The explicit contributions of this paper are as follows: (i) Devising an automated end-to-end framework for unsupervised model assurance as well as defending against the adversaries. (ii) Incepting the idea of parallel checkpointing learners to validate the legitimacy of data abstractions at each intermediate DL layer. (iii) Performing extensive proof-of-concept evaluations against state-of-the-art attack methods. (iv) Providing new insights regarding the transferability of adversarial samples between different models. 2 TRAINING CHECKPOINTING MODULES FOR INTERMEDIATE LAYERS The goal of each defender (checkpointing) module is to learn the pdf of the explored sub-spaces in a particular intermediate DL feature map. The learned density function is then used to identify the rarely observed regions as depicted in Figure 1b. We consider a Gaussian Mixture Model (GMM) as the prior probability to characterize the data distribution at each checkpoint location. (It is worth noting that our proposed approach is rather generic and is not restricted to the GMM distribution; the GMM can be replaced with any other prior depending on the application.)
To effectively characterize the explored sub-space as a GMM distribution, one is required to minimize the entanglement between every two Gaussian distributions (corresponding to every two different classes) while decreasing the inner-class diversity. Figure 2 illustrates the high-level block diagram of the training procedure for devising a parallel checkpointing module. Training a defender module is a one-time offline process and is performed in three steps. (1) Replicating the victim neural network and all its feature maps. An L2 normalization layer is inserted in the desired checkpoint location. The normalization layer maps the latent feature variables, $\phi(x)$, into the Euclidean space such that the acquired data embeddings live on a $d$-dimensional hypersphere, i.e., $\|\phi(x)\|_2 = 1$. This normalization is crucial as it partially removes the effect of over-fitting to particular data samples that are highly correlated with the underlying DL parameters. (The L2 norm is selected to be consistent with our assumption of a GMM prior distribution; this norm can be easily replaced by an arbitrary user-defined norm through our accompanying API.) (2) Fine-tuning the replicated network to enforce disentanglement of data features (at a particular checkpoint location). To do so, we optimize the defender module by incorporating the following loss function with the conventional cross-entropy loss: $$L^{+} = \gamma \Big[ \underbrace{\|C_{y^*} - \phi(x)\|_2^2}_{loss_1} - \underbrace{\sum_{i \neq y^*} \|C_i - \phi(x)\|_2^2}_{loss_2} + \underbrace{\sum_i \big(\|C_i\|_2 - 1\big)^2}_{loss_3} \Big]. \quad (1)$$ Here, $\gamma$ is a trade-off parameter that specifies the contribution of the additive loss term, $\phi(x)$ is the corresponding feature vector of input sample $x$ at the checkpoint location, $y^*$ is the ground-truth label, and $C_i$ denotes the center of all data abstractions ($\phi(x)$) corresponding to class $i$. The center values $C_i$ and intermediate feature vectors $\phi(x)$ are trainable variables that are learned by fine-tuning the defender module. In our experiments, we set the parameter $\gamma$ to 0.01 and retrain the defender model with the same optimizer used for training the victim model. The learning rate of the optimizer is set to 1/10 of that of the victim model as the model is already in a relatively good local minimum. Figure 3a illustrates the optimization goal of each defender module per Eq. (1). The first term ($loss_1$) in Eq. (1) aims to condense latent data features $\phi(x)$ that belong to the same class. Reducing the inner-class diversity, in turn, yields a sharper Gaussian distribution per class. The second term ($loss_2$) intends to increase the inter-class distance between different categories and promote separability. The composition of the first two terms in Eq. (1) can be made arbitrarily small by pushing the centers to infinity ($C_i \leftarrow \pm\infty$). We add the term $loss_3$ to ensure that the underlying centers lie on a unit $d$-dimensional hypersphere and avoid divergence in training the defender modules. Figures 3b and 3c demonstrate the distance of legitimate (blue) and adversarial (red) samples from the corresponding centers $C_i$ in a checkpoint module before and after retraining. (The centers $C_i$ before fine-tuning the checkpoint (defender) module are equivalent to the mean of the data points in each class.) As shown, fine-tuning the defender module with the proposed objective function can effectively separate the distribution of legitimate samples from malicious data points.
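The additive term of Eq. (1) is only a few lines of PyTorch. Below is a minimal sketch; the function and argument names (e.g., defender_additive_loss, centers) are illustrative assumptions, not taken from the paper's released code:

import torch

def defender_additive_loss(phi_x, labels, centers, gamma=0.01):
    """Additive term of Eq. (1): condense same-class features (loss1),
    push features away from other classes' centers (loss2), and keep
    all centers on the unit hypersphere (loss3).
    phi_x: (B, d) L2-normalized features; labels: (B,) ground-truth
    classes; centers: (K, d) trainable class centers C_i."""
    # loss1: squared distance to the ground-truth class center
    loss1 = ((phi_x - centers[labels]) ** 2).sum(dim=1)                     # (B,)
    # squared distances to all K centers, via broadcasting
    dists = ((phi_x.unsqueeze(1) - centers.unsqueeze(0)) ** 2).sum(dim=2)   # (B, K)
    # loss2: sum of distances to all centers except the ground-truth one
    loss2 = dists.sum(dim=1) - loss1
    # loss3: keep every center on the unit d-dimensional hypersphere
    loss3 = ((centers.norm(dim=1) - 1.0) ** 2).sum()
    return gamma * ((loss1 - loss2).mean() + loss3)

In training, this term would be added to the conventional cross-entropy loss, with centers registered as an nn.Parameter so the $C_i$ are learned jointly with the replicated network, as described above.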
Note that training the defender module is carried out in an unsupervised setting, meaning that no adversarial sample is included in the training phase. (3) High-dimensional real-world datasets can be represented as an ensemble of lower-dimensional sub-spaces (Bouveyron et al. (2007); Mirhoseini et al. (2016); Rouhani et al. (2017)). As discussed in (Bouveyron et al. (2007)), under a GMM distribution assumption, the data points belonging to each class can be characterized as a spherical density in two sub-spaces: (i) the sub-space where the data actually lives ($E_i$) and (ii) its orthogonal complementary space ($E_i^{\perp}$). The orthogonal space ($E_i^{\perp}$) is defined such that $E_i^{\perp} \oplus E_i = \mathbb{R}^d$, where $d$ is the overall dimensionality of the underlying space. We leverage the High Dimensional Discriminant Analysis (HDDA) algorithm (Bouveyron et al. (2007)) to learn the mean and the conditional covariance of each class as a composition of lower-dimensional sub-spaces. Under the Gaussian distribution and our specific assumptions, the conditional covariance matrix contains two different eigenvalues $a_i > b_i$ to be determined as shown in (Bouveyron et al. (2007)). The learned pdf variables (i.e., mean and conditional covariance) are used to compute the probability of a feature point $\phi(x)$ coming from a specific class. In particular, for each incoming test sample $x$, the probability $p(\phi(x)|y_i)$ is evaluated, where $y_i$ is the predicted class (output of the victim neural network) and $\phi(x)$ is the corresponding data abstraction at the checkpoint location. The acquired likelihood is then compared against a user-defined cut-off threshold which we refer to as the security parameter. The Security Parameter (SP) is a constant number in the range of [0%, 100%] that determines the hardness of defender modules. Figure 4 illustrates how the SP can control the hardness of the pertinent decision boundaries. In this example, we have depicted the latent features of one category that are projected onto the first two Principal Component Analysis (PCA) components in the Euclidean space (each point corresponds to a single input image). The blue and black contours correspond to security parameters of 10% and 20%, respectively. For example, 10% of the legitimate training samples lie outside the contour specified with SP = 10%. One may speculate that an adversary can add a structured noise to a legitimate sample such that the data point is moved from one cluster to the center of the other clusters, thus fooling the defender modules (Figure 5a). The risk of such an attack is significantly reduced in our proposed PCL countermeasure for three main reasons: (i) use of parallel checkpointing modules; the attacker is required to simultaneously deceive all the defender models in order to succeed. (ii) Increasing inter-class distances in each checkpointing module; the latent defender modules are trained such that not only is the inner-class diversity decreased, but also the distance between each pair of different classes is increased (see Eq. (1)). (iii) Learning a separate defender module in the input space to validate the Peak Signal-to-Noise Ratio (PSNR) level of the incoming samples as discussed in Section 3. In the remainder of the paper, we refer to the defender modules operating on the input space as the input defenders. PCL modules that checkpoint the intermediate data features within the DL network are referred to as latent defenders. 2.1 RISK ANALYSIS Detecting malicious samples can be cast as a two-category classification task.
Let us refer to the category of the legitimate samples as $W_1$ and the category of adversarial samples as $W_2$. If we define $\eta_{ij} = \eta(\alpha_i|W_j)$ as the misclassification penalty (a constant value which determines the cost of each decision) incurred for deciding $W_i$ when the true state is $W_j$, the conditional risk in each of our checkpointing modules is equal to: $$R(\alpha_1|\phi(x)) = \eta_{11}P(W_1|\phi(x)) + \eta_{12}P(W_2|\phi(x)), \quad R(\alpha_2|\phi(x)) = \eta_{21}P(W_1|\phi(x)) + \eta_{22}P(W_2|\phi(x)). \quad (2)$$ The fundamental rule to express the minimum-risk decision is to decide $W_1$ if $R(\alpha_1|\phi(x)) < R(\alpha_2|\phi(x))$. In terms of the posterior probabilities, we decide $W_1$ if: $$(\eta_{21}-\eta_{11})P(W_1|\phi(x)) > (\eta_{12}-\eta_{22})P(W_2|\phi(x)). \quad (3)$$ Generally speaking, the penalty incurred for making an error is greater than the cost incurred for being correct; thus both of the terms $\eta_{21}-\eta_{11}$ and $\eta_{12}-\eta_{22}$ are positive. Following Bayes' rule, we should select a sample as a legitimate one ($W_1$) if: $$(\eta_{21}-\eta_{11})P(\phi(x)|W_1)P(W_1) > (\eta_{12}-\eta_{22})P(\phi(x)|W_2)P(W_2), \quad (4)$$ and select $W_2$ otherwise. By reordering the aforementioned decision criteria we have: $$\frac{P(\phi(x)|W_1)}{P(\phi(x)|W_2)} > \frac{\eta_{12}-\eta_{22}}{\eta_{21}-\eta_{11}} \cdot \frac{P(W_2)}{P(W_1)}. \quad (5)$$ Note that the right-hand term in Eq. (5) is application specific and is independent of the input data observation $\phi(x)$. In other words, the optimal decision criteria particularly rely on the cost of making a mistake in the given task and the risk of being attacked. This term is tightly correlated with the user-defined cut-off threshold (security parameter) depicted in Figure 4. Under the GMM assumption, the conditional probability $P(\phi(x)|W_1)$ in Eq. (5) is computed as: $$p(\phi(x)|y_i) = \frac{1}{(2\pi)^{N/2}|\Sigma_i|^{1/2}} \exp\Big\{-\frac{1}{2}\big(\phi(x)-\mu_i\big)^T \Sigma_i^{-1} \big(\phi(x)-\mu_i\big)\Big\}, \quad (6)$$ where $y_i$ is the output of the victim neural network (predicted class), $\mu_i$ and $\Sigma_i$ are the output of the HDDA analysis, and $N$ is the dimension of the latent feature space in the checkpoint module. 3 TRAINING CHECKPOINTING MODULES FOR THE INPUT SPACE We leverage dictionary learning and sparse signal recovery techniques to measure the PSNR of each incoming sample and automatically filter out atypical samples in the input space. Figure 5b illustrates the high-level block diagram of an input defender module. As shown, devising an input checkpoint model is performed in two main steps: (i) dictionary learning, and (ii) characterizing the typical PSNR per class after sparse recovery. (1) Dictionary learning: we learn a separate dictionary for each class of data by solving: $$\arg\min_{D^i} \frac{1}{2}\|Z^i - D^i V^i\|_2^2 + \beta \|V^i\|_1 \quad \text{s.t.} \quad \|D^i_k\| = 1, \; 0 \leq k \leq k_{max}. \quad (7)$$ Here, $Z^i$ is a matrix whose columns are pixels extracted from different regions of input images belonging to category $i$. For instance, if we consider $8 \times 8$ patches of pixels, each column of $Z^i$ would be a vector of 64 elements. The goal of dictionary learning is to find a matrix $D^i$ that best represents the distribution of pixel patches from images belonging to class $i$. We denote the number of columns in $D^i$ by $k_{max}$. For a certain $D^i$, the image patches $Z^i$ are represented with a sparse matrix $V^i$, and $D^i V^i$ is the reconstructed patches. We leverage the Least Angle Regression (LAR) method to solve the Lasso problem defined in Eq. (7). In our experiments, we learn a dictionary of size $k_{max} = 225$ for each class of data points using 150,000 randomly selected patches of training data. For an incoming sample, during the execution phase, the input defender module takes the output of the victim DL model (e.g., predicted class $i$) and uses the Orthogonal Matching Pursuit (OMP) routine (Tropp et al. (2007))
to sparsely reconstruct the input data with the corresponding dictionary $D^i$. The dictionary matrix $D^i$ contains a set of samples that commonly appear in the training data belonging to class $i$; as such, an input sample classified as class $i$ should be well-reconstructed as $D^i V^*$ with a high PSNR value, where $V^*$ is the optimal solution obtained by the OMP routine. During the execution phase, all of the non-overlapping patches within the image are denoised by the dictionary to form the reconstructed image. (2) Characterizing the typical PSNR in each category: we profile the PSNR of legitimate samples within each class and find a threshold that covers all legitimate training samples. If an incoming sample has a PSNR lower than the threshold (i.e., high perturbation after reconstruction by the corresponding dictionary), it will be regarded as a malicious data point. In particular, PSNR is defined as: $$PSNR = 20\log_{10}(MAX_I) - 10\log_{10}(MSE), \quad (8)$$ where the mean square error (MSE) is defined as the L2 difference between the input image and the image reconstructed based on the corresponding dictionary, and the $MAX_I$ term is the maximum possible pixel value of the image (usually equivalent to 255). Figure 6 demonstrates the impact of the perturbation level on the pertinent adversarial detection rate for three different security parameters (cut-off thresholds). In this experiment, we have considered the FGS attack with different $\epsilon$ values on the MNIST benchmark. (Table 2 in Appendix A summarizes the DL model topology used in each benchmark; the latent defender module (checkpoint) is inserted at the second-to-last layer.) As shown, the use of input dictionaries facilitates automated detection of adversarial samples with relatively high perturbation (e.g., $\epsilon > 0.25$), while the latent defender module is sufficient to effectively distinguish malicious samples even with very small perturbations. We extensively evaluate the impact of the security parameter on the ultimate system performance for various benchmarks in Section 4. 4 EXPERIMENTS We evaluate the proposed PCL methodology on three canonical machine learning datasets: MNIST (LeCun et al. (1998b)), CIFAR10 (Krizhevsky & Hinton (2009)), and a subset of ImageNet (Deng et al. (2009)) consisting of 10 different classes. A detailed summary of the neural network architectures used in each benchmark along with the specific parameters used for various attacks is provided in Appendix A. We leveraged the attack benchmark sets available at (Nicolas Papernot (2017)) for the evaluation of different state-of-the-art attacks including FGS, JSMA, Deepfool, and Carlini&WagnerL2 attacks. In our proposed countermeasure, the input and latent defenders are jointly considered to detect adversarial samples. In particular, we treat an input as an adversarial sample if either the latent or the input checkpointing module raises an alarm signal. Figure 7 demonstrates the impact of the security parameter on the ultimate false positive and true positive rates for the MNIST benchmark. As shown, a higher security parameter results in a higher true positive detection rate while it also increases the risk of labeling legitimate samples as possibly malicious ones. To consider the joint decision metric for each application and attack model, we evaluate the false positive and true positive rates and present the pertinent Receiver Operating Characteristic (ROC) curves in Figure 8.
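Concretely, the per-class Gaussian scoring of Eq. (6) and the security-parameter sweep that underlies these FP/TP evaluations can be prototyped in a few lines. In the sketch below a plain full-covariance Gaussian fit stands in for the HDDA-fitted model, and all function and variable names are illustrative assumptions rather than the paper's code:

import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(features, labels, num_classes, reg=1e-4):
    """Fit a mean/covariance per class on legitimate training features;
    a simplified stand-in for the HDDA fit of Section 2."""
    models = []
    for c in range(num_classes):
        f = features[labels == c]
        cov = np.cov(f, rowvar=False) + reg * np.eye(f.shape[1])
        models.append(multivariate_normal(mean=f.mean(axis=0), cov=cov))
    return models

def log_likelihood(models, features, predicted):
    """log p(phi(x) | y_i) of Eq. (6), scored under the victim's
    predicted class for each sample."""
    return np.array([models[c].logpdf(f) for f, c in zip(features, predicted)])

def roc_from_security_parameter(train_ll, legit_ll, adv_ll):
    """Sweep SP over [0%, 100%]: SP% of legitimate training samples fall
    below the cutoff; collect (FP, TP) pairs as in Figure 8."""
    points = []
    for sp in np.linspace(0, 100, 101):
        cutoff = np.percentile(train_ll, sp)
        fp = np.mean(legit_ll < cutoff)   # legitimate flagged as malicious
        tp = np.mean(adv_ll < cutoff)     # adversarial correctly flagged
        points.append((fp, tp))
    return points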
The ROC curves are established as follows: first, we consider a latent defender and change the security parameter (SP) in the range of [0%, 100%] and evaluate the FP and TP rates for each security parameter, which gives us the dashed blue ROC curves. Next, we consider an input defender and modify the detection policy: a sample is considered to be malicious if either the input or the latent defender raises an alarm flag. The ROC curve for this joint defense policy is shown as the green curves in Figure 8. The gap between the dashed blue curve and the green curve indicates the effect of the input defender on the overall decision policy; as can be seen, the input defender has more impact for the FGS attack. This is compatible with our intuition since, compared to the other three attack methods, the FGS algorithm induces more perturbation to generate adversarial samples. We summarize the performance of the PCL methodology against each of the FGS, JSMA, Deepfool, and Carlini&WagnerL2 attacks for MNIST, CIFAR10, and ImageNet in Table 1. The reported numbers in this table are gathered as follows: we consider a few points on the green ROC curve (marked on Figure 8), which correspond to certain TP rates (i.e., 90%, 95%, 98%, and 99%), then report the FP rates for these points. In all our experiments, the use of only one latent defender module to checkpoint the second-to-last layer of the pertinent victim model was enough to prevent adversarial samples generated by the existing state-of-the-art attacks. Please refer to Appendix B for the complete set of ROC curves for the CIFAR10 and ImageNet benchmarks. 5 DISCUSSION Figure 9 demonstrates an example of the adversarial confusion matrices for victim neural networks with and without using parallel checkpointing learners. In this example, we set the security parameter to only 1%. As shown, the adversarial samples generated for the victim model are not transferred to the checkpointing modules. In fact, the proposed PCL approach can effectively remove/detect adversarial samples by characterizing the rarely explored sub-spaces and looking into the statistical density of data points in the pertinent space. Note that the remaining adversarial samples that are not detected in this experiment are crafted from legitimate samples that are inherently hard to classify even by a human observer due to the closeness of the decision boundaries corresponding to such classes. For instance, in the MNIST application, such adversarial samples mostly belong to class 5 misclassified as class 3, or class 4 misclassified as class 9. Such misclassifications indeed reflect the model approximation error, which is well understood to stem from the statistical nature of the models. As such, a more precise definition of adversarial samples is sorely needed to distinguish malicious samples from those that simply lie near the decision boundaries. We emphasize that the PCL defenders are trained in an unsupervised setting independent of the attack strategy, meaning that no adversarial sample is used to train the defender models. This is particularly important as it corroborates the effectiveness of the proposed countermeasure in the face of generic attack scenarios including possible future adversarial DL algorithms. Nevertheless, one might question the effectiveness of the proposed approach for adaptive attack algorithms that target the defender modules. A comprehensive study of possible adaptive attack algorithms remains to be performed if such attacks are developed in the future.
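Complementing the latent defender, the input-defender pipeline of Section 3 (Eqs. (7) and (8)) can be prototyped with off-the-shelf sparse-coding tools. The sketch below uses scikit-learn's dictionary learner, whose default fit algorithm is LARS (matching the paper's LAR solver), with OMP for the sparse codes; the patch counts and sparsity level are illustrative assumptions, not the paper's exact settings:

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def fit_class_dictionary(images, patch_size=(8, 8), k_max=225, n_patches=150_000):
    """Learn the per-class dictionary D^i of Eq. (7) from random pixel
    patches of training images belonging to one class."""
    patches = [extract_patches_2d(img, patch_size, max_patches=100, random_state=0)
               for img in images]                         # img: (H, W), values in [0, 255]
    X = np.concatenate([p.reshape(len(p), -1) for p in patches])[:n_patches]
    dico = MiniBatchDictionaryLearning(n_components=k_max, alpha=1.0,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=5)  # sparsity is illustrative
    return dico.fit(X)

def reconstruction_psnr(img, dico, patch_size=(8, 8), max_i=255.0):
    """Reconstruct non-overlapping patches with the predicted class's
    dictionary (V* via OMP) and return the PSNR of Eq. (8)."""
    ph, pw = patch_size
    recon = np.zeros_like(img, dtype=float)
    for r in range(0, img.shape[0] - ph + 1, ph):
        for c in range(0, img.shape[1] - pw + 1, pw):
            patch = img[r:r + ph, c:c + pw].reshape(1, -1).astype(float)
            code = dico.transform(patch)                  # sparse code V*
            recon[r:r + ph, c:c + pw] = (code @ dico.components_).reshape(ph, pw)
    mse = np.mean((img.astype(float) - recon) ** 2)
    return 20 * np.log10(max_i) - 10 * np.log10(mse + 1e-12)

An incoming sample whose PSNR falls below the threshold profiled on legitimate training samples of its predicted class would then be flagged as malicious, and the joint policy ORs this flag with the latent defender's alarm.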
We emphasize that, thus far, we have been able to significantly thwart all the existing attacks with only one checkpoint model approximating the data distribution in the second-to-last layer of the corresponding models. Our proposed PCL methodology, however, provides a rather more generic approach that can be adapted/modified against potential future attacks by training parallel disjoint models (with diverse objectives/parameters) to further strengthen the defense. Figure 10 demonstrates how using multiple checkpoints with a negative correlation in parallel can effectively reduce the number of false alarms while increasing the detection rate of adversarial samples. In this experiment, we have considered MNIST data classification using the LeNet model with 4 layers and the FGS attack. The checkpoints are inserted in different layers of the pertinent neural network (first layer up to the second-to-last layer). We empirically select the mixing coefficients to aggregate the confidence of the checkpoint defenders for rejecting an incoming sample. Note that there is a trade-off between the computational complexity (e.g., runtime overhead) of the PCL defenders and the reliability of the overall system. On the one hand, a high number of validation checkpoints increases the reliability of the system, but it also increases the computational load as each input sample should be validated by more defender networks. On the other hand, a small number of checkpoints may degrade the defense mechanism's performance by treating adversarial samples as legitimate ones. As future work, we are looking into automated techniques to customize the number of checkpoint modules and their corresponding mixing coefficients based on application data and physical constraints such as real-time analysis requirements. 6 RELATED WORK In response to the various adversarial attack methodologies proposed in the literature (e.g., Goodfellow et al. (2014); Papernot et al. (2016a); Moosavi-Dezfooli et al. (2016); Carlini & Wagner (2017b)), several research attempts have been made to design DL strategies that are more robust in the face of adversarial examples. The existing countermeasures can be classified into two distinct categories: (i) Supervised strategies, which aim to improve the generalization of the learning models by incorporating the noise-corrupted version of inputs as training samples (Jin et al. (2015); Gu & Rigazio (2014)) and/or injecting adversarial examples generated by different attacks into the DL training phase (Huang et al. (2015); Shaham et al. (2015); Goodfellow et al. (2014); Szegedy et al. (2013)). The proposed defense mechanisms in this category are particularly tailored for specific perturbation patterns and can only partially prevent adversarial samples generated by other attack scenarios (with different perturbation distributions) from being effective, as shown in (Gu & Rigazio (2014)). (ii) Unsupervised approaches, which aim to smooth out the underlying gradient space (decision boundaries) by incorporating a smoothness penalty (Miyato et al. (2015); Carlini & Wagner (2017b)) as a regularization term in the loss function or by compressing the neural network by removing the nuisance variables (Papernot et al. (2016b)). These works have mainly remained oblivious to the pertinent data density in the latent space.
In particular, these works have been developed based on an implicit assumption that the existence of adversarial samples is due to the piece-wise linear behavior of decision boundaries (obtained by gradient descent) in the high-dimensional space. As such, their integrity can be jeopardized by considering different perturbations at the input space and evaluating the same attack on various perturbed data points to even pass the smoothed decision boundaries, as shown in (Carlini & Wagner (2016)). More recently, Meng & Chen (2017) propose an unsupervised manifold projection method called MagNet to reform adversarial samples using autoencoders. Unlike PCL, MagNet is inattentive to the density function of the data in the space. As shown in Carlini & Wagner (2017a), manifold projection methods including MagNet are not robust to adversarial samples and only increase the required distortion to generate an adversarial sample by approximately 30%. To the best of our knowledge, the proposed PCL methodology is the first unsupervised countermeasure developed based upon probabilistic density analysis and dictionary learning to effectively characterize and thwart adversarial samples. The PCL method does not assume any particular attack strategy and/or perturbation pattern. This is particularly important as it demonstrates the generalizability of the proposed approach in the face of adversarial attacks. 7 CONCLUSION This paper proposes a novel end-to-end methodology for characterizing and thwarting the adversarial DL space. We introduce the concept of parallel checkpointing learners as a viable countermeasure to significantly reduce the risk of integrity attacks. The proposed PCL methodology explicitly characterizes statistical properties of the features within different layers of a neural network by learning a set of complementary dictionaries and corresponding probability density functions. The effectiveness of the PCL approach is evaluated against the state-of-the-art attack models including FGS, JSMA, Deepfool, and Carlini&WagnerL2. Proof-of-concept experiments for analyzing various data collections including MNIST, CIFAR10, and a subset of the ImageNet dataset corroborate successful detection of adversarial samples with relatively small false-positive rates. We devise an open-source API for the proposed countermeasure and invite the community to attempt attacks against the provided benchmarks in the form of a challenge. APPENDIX A Table 2 presents the neural network architectures for the victim models used in each benchmark. The network for MNIST is the popular LeNet-3 architecture, the CIFAR-10 architecture is taken from (Ciregan et al. (2012)), and the ImageNet model is inspired by the AlexNet architecture (Krizhevsky et al. (2012)). We visually evaluate the perturbed examples to determine the attack parameters (e.g., perturbation level $\epsilon$ and $n_{iters}$) such that the perturbations cannot be recognized by a human observer. Table 3 details the parameters used for the realization of different attack algorithms. The JSMA attack for the ImageNet benchmark is computationally expensive (e.g., it took more than 20 minutes to generate one adversarial sample on an NVIDIA TITAN Xp GPU). As such, we could not generate the adversarial samples for this attack using the JSMA library provided by (Nicolas Papernot (2017)). APPENDIX B Corresponding ROC curves for PCL performance against FGS, JSMA, Deepfool, and Carlini&WagnerL2 attacks in the CIFAR10 and ImageNet benchmarks.
1. What is the main contribution of the paper regarding unsupervised defense against adversarial attacks? 2. What are the limitations of the proposed method, particularly in terms of novelty and performance comparison with prior works? 3. How does the reviewer assess the effectiveness of the proposed method in defending against various types of attacks? 4. Do you have any concerns about the experimental setup and the choice of attacks used to evaluate the method's performance?
Review
Review This paper proposes an unsupervised method, called Parallel Checkpointing Learners (PCL), to detect and defend against adversarial examples. The main idea is essentially learning the manifold of the data distribution and using Gaussian mixture models (GMMs) and dictionary learning to train a "reformer" (without seeing adversarial examples) to detect and correct adversarial examples. With PCL, one can use the hypothesis testing framework to analyze the detection rate and false alarm rate of different neural networks against adversarial attacks. Although the motivation is well grounded, there are two major issues with this work: (i) limited novelty - the idea of an unsupervised manifold projection method has been proposed in previous work; and (ii) insufficient attack evaluations - the defender performance is evaluated against weak attacks or attacks with improper parameters. The details are as follows. 1. Limited novelty and performance comparison - the idea of an unsupervised manifold projection method has been proposed and well-studied in "MagNet: a Two-Pronged Defense against Adversarial Examples", which appeared in May 2017. Instead of the GMMs and dictionary learning in PCL, MagNet trains autoencoders for defense and provides sufficient experiments to claim its defense capability. On the other hand, the authors of this paper seem to be unaware of this pioneering work and claim "To the best of our knowledge, our proposed PCL methodology is the first unsupervised countermeasure that is able to detect DL adversarial samples generated by the existing state-of-the-art attacks", which is obviously not true. More importantly, MagNet is able to defend against the adversarial examples very well (almost 100% success) no matter whether the adversarial examples are close to the information manifold or not. As a result, the resulting ROC and AUC scores are expected to be better than PCL's. In addition, the authors of MagNet also compared their performance in white-box (attacker knowing the reformer), gray-box (having multiple independent reformers), and black-box (attacker not knowing the reformer) scenarios, whereas this paper only considers the last case. 2. Insufficient attack evaluations - the attacks used in this paper to evaluate the performance of PCL are either weak (no longer state-of-the-art) or incorrectly implemented. For FGSM, the iterative version proposed by (Kurakin, ICLR 2017) should be used. JSMA and Deepfool are not considered strong attacks now (see Carlini's bypassing 10 detection methods paper). The Carlini-Wagner attack is still strong, but the authors only use 40 iterations (should be at least 500) and set the confidence to 0, which is known to produce non-transferable adversarial examples. In comparison, MagNet has been shown to be effective against different confidence parameters. In summary, this paper has limited novelty, incremental contributions, and lacks convincing experimental results due to weak attack implementation.
ICLR
Title The Traveling Observer Model: Multi-task Learning Through Spatial Variable Embeddings Abstract This paper frames a general prediction system as an observer traveling around a continuous space, measuring values at some locations, and predicting them at others. The observer is completely agnostic about any particular task being solved; it cares only about measurement locations and their values. This perspective leads to a machine learning framework in which seemingly unrelated tasks can be solved by a single model, by embedding their input and output variables into a shared space. An implementation of the framework is developed in which these variable embeddings are learned jointly with internal model parameters. In experiments, the approach is shown to (1) recover intuitive locations of variables in space and time, (2) exploit regularities across related datasets with completely disjoint input and output spaces, and (3) exploit regularities across seemingly unrelated tasks, outperforming task-specific single-task models and multi-task learning alternatives. The results suggest that even seemingly unrelated tasks may originate from similar underlying processes, a fact that the traveling observer model can use to make better predictions. 1 INTRODUCTION Natural organisms benefit from the fact that their sensory inputs and action outputs are all organized in the same space, that is, the physical universe. This consistency makes it easy to apply the same predictive functions across diverse settings. Deep multi-task learning (Deep MTL) has shown a similar ability to adapt knowledge across tasks whose observed variables are embedded in a shared space. Examples include vision, where the input for all tasks (photograph, drawing, or otherwise) is pixels arranged in a 2D plane (Zhang et al., 2014; Misra et al., 2016; Rebuffi et al., 2017); natural language (Collobert & Weston, 2008; Luong et al., 2016; Hashimoto et al., 2017), speech processing (Seltzer & Droppo, 2013; Huang et al., 2015), and genomics (Alipanahi et al., 2015), which exploit the 1D structure of text, waveforms, and nucleotide sequences; and video game-playing (Jaderberg et al., 2017; Teh et al., 2017), where interactions are organized across space and time. Yet, many real-world prediction tasks have no such spatial organization; their input and output variables are simply labeled values, e.g., the height of a tree, the cost of a haircut, or the score on a standardized test. To make matters worse, these sets of variables are often disjoint across a set of tasks. These challenges have led the MTL community to avoid such tasks, despite the fact that general knowledge about how to make good predictions can arise from solving seemingly "unrelated" tasks (Mahmud & Ray, 2008; Mahmud, 2009; Meyerson & Miikkulainen, 2019). This paper proposes a solution: Learn all variable locations in a shared space, while simultaneously training the prediction model itself (Figure 1). To illustrate this idea, Figure 1a gives an example of four tasks whose variable values are measured at different locations in the same underlying 2D embedding space. The shape of each marker (i.e., ◦, □, △, ⋆) denotes the task to which that variable belongs; white markers denote input variables, black markers denote output variables, and the background coloring indicates the variable values in the entire embedding space when the current sample is drawn.
As a concrete example, the color could indicate the air temperature at each point in a geographical region at a given moment in time, and each marker the location of a temperature sensor (however, note that the embedding space is generally more abstract). Figure 1b-c shows a model that can be applied to any task in this universe, using the ◦ task as an example: (b) the function $f$ encodes the value of each observed variable $x_i$ given its 2D location $z_i \in \mathbb{R}^2$, and these encodings are aggregated by elementwise addition $\oplus$; (c) the function $g$ decodes the aggregated encoding to a prediction for $y_j$ at its location $z_j$. Such a predictor can be viewed as a traveling observer model (TOM): It traverses the space of variables, taking a measurement at the location of each input. Given these observations, the model can make a prediction for the value at the location of an output. In general, the embedded locations $z$ are not known a priori (i.e., when input and output variables do not have obvious physical locations), but they can be learned alongside $f$ and $g$ by gradient descent. The input and output spaces of a prediction problem can be standardized so that the measured value of each input and output variable is a scalar. The prediction model can then be completely agnostic about the particular task for which it is making a prediction. By learning variable embeddings (VEs), i.e., the $z$'s, the model can capture variable relationships explicitly and supports joint training of a single architecture across seemingly unrelated tasks with disjoint input and output spaces. TOM thus establishes a new lower bound on the commonalities shared across real-world machine learning problems: They are all drawn from the same space of variables that humans can and do measure. This paper develops a first implementation of TOM, using an encoder-decoder architecture, with variable embeddings incorporated using FiLM (Perez et al., 2018). In the experiments, the implementation is shown to (1) recover the intuitive locations of variables in space and time, (2) exploit regularities across related datasets with disjoint input and output spaces, and (3) exploit regularities across seemingly unrelated tasks to outperform single-task models tuned to each task, as well as current Deep MTL alternatives. The results confirm that TOM is a promising framework for representing and exploiting the underlying processes of seemingly unrelated tasks. 2 BACKGROUND: MULTI-TASK ENCODER-DECODER DECOMPOSITIONS This section reviews Deep MTL methods from the perspective of decomposition into encoders and decoders (Table 1). In MTL, there are $T$ tasks $\{(\mathbf{x}_t, \mathbf{y}_t)\}_{t=1}^{T}$ that can, in general, be drawn from different domains and have varying input and output dimensionality. The $t$-th task has $n_t$ input variables $\mathbf{x}_t = [x_{t1}, \ldots, x_{tn_t}] \in \mathbb{R}^{n_t}$ and $m_t$ output variables $\mathbf{y}_t = [y_{t1}, \ldots, y_{tm_t}] \in \mathbb{R}^{m_t}$. Two tasks $(\mathbf{x}_t, \mathbf{y}_t)$ and $(\mathbf{x}_{t'}, \mathbf{y}_{t'})$ are disjoint if their input and output variables are non-overlapping, i.e., $\big(\{x_{ti}\}_{i=1}^{n_t} \cup \{y_{tj}\}_{j=1}^{m_t}\big) \cap \big(\{x_{t'i}\}_{i=1}^{n_{t'}} \cup \{y_{t'j}\}_{j=1}^{m_{t'}}\big) = \emptyset$. The goal is to exploit regularities across task models $\mathbf{x}_t \mapsto \hat{\mathbf{y}}_t$ by jointly training them with overlapping parameters. The standard intra-domain approach is for all task models to share their encoder $f$, and each to have its own task-specific decoder $g_t$ (Table 1a).
This setup was used in the original introduction of MTL (Caruana, 1998), has been broadly explored in the linear regime (Argyriou et al., 2008; Kang et al., 2011; Kumar & Daumé, 2012), and is the most common approach in Deep MTL (Huang et al., 2013; Zhang et al., 2014; Dong et al., 2015; Liu et al., 2015; Ranjan et al., 2016; Jaderberg et al., 2017). The main limitation of this approach is that it is limited to sets of tasks that are all drawn from the same domain. It also has the risk of the separate decoders doing so much of the learning that there is not much left to be shared, which is why the decoders are usually single affine layers. To address the issue of limited sharing, the task embeddings approach trains a single encoder $f$ and single decoder $g$, with all task-specific parameters learned in embedding vectors $z_t$ that semantically characterize each task, and which are fed into the model as additional input (Yang & Hospedales, 2014; Bilen & Vedaldi, 2017; Zintgraf et al., 2019) (Table 1b). Such methods require that all tasks have the same input and output space, but are flexible in how the embeddings can be used to adapt the model to each task. As a result, they can learn tighter connections between tasks than separate decoders, and these relationships can be analyzed by looking at the learned embeddings. To exploit regularities across tasks from diverse and disjoint domains, cross-domain methods have been introduced. Existing methods address the challenge of disjoint output and input spaces by using separate decoders and encoders for each domain (Table 1c), and thus they require some other method of sharing model parameters across tasks, such as sharing some of their layers (Kaiser et al., 2017; Meyerson & Miikkulainen, 2018) or drawing their parameters from a shared pool (Meyerson & Miikkulainen, 2019). For many datasets, the separate encoders and decoders absorb too much functionality to share optimally, and their complexity makes it difficult to analyze the relationships between tasks. Earlier work prior to deep learning showed that, from an algorithmic learning theory perspective, sharing knowledge across tasks should always be useful (Mahmud & Ray, 2008; Mahmud, 2009), but the accompanying experiments were limited to learning biases in a decision tree generation process, i.e., the learned models themselves were not shared across tasks. TOM extends the notion of task embeddings to variable embeddings in order to apply the idea in the cross-domain setting (Table 1d). The method is described in the next section. 3 THE TRAVELING OBSERVER MODEL Consider the set of all scalar random variables that could possibly be measured $\{v_1, v_2, \ldots\} = V$. Each $v_i \in V$ could be an input or output variable for some prediction task. To characterize each $v_i$ semantically, associate with it a vector $z_i \in \mathbb{R}^C$ that encodes the meaning of $v_i$, e.g., "height of left ear of human adult in inches", "answer to survey question 9 on a scale of 1 to 5", "severity of heart disease", "brightness of top-left pixel of photograph", etc. This vector $z_i$ is called the variable embedding (VE) of $v_i$. Variable embeddings could be handcoded, e.g., based on some featurization of the space of variables, but such a handcoding is usually unavailable, and would likely miss some of the underlying semantic regularities across variables. An alternative approach is to learn variable embeddings based on their utility in solving prediction problems of interest.
A prediction task $(\mathbf{x}, \mathbf{y}) = ([x_1, \ldots, x_n], [y_1, \ldots, y_m])$ is defined by its set of observed variables $\{x_i\}_{i=1}^{n} \subseteq V$ and its set of target variables $\{y_j\}_{j=1}^{m} \subseteq V$ whose values are unknown. The goal is to find a prediction function $\Omega$ that can be applied across any prediction task of interest, so that it can learn to exploit regularities across such problems. Let $z_i$ and $z_j$ be the variable embeddings corresponding to $x_i$ and $y_j$, respectively. Then, this universal prediction model is of the form $$E[y_j \mid \mathbf{x}] = \Omega(\mathbf{x}, \{z_i\}_{i=1}^{n}, z_j). \quad (1)$$ Importantly, for any two tasks $(\mathbf{x}_t, \mathbf{y}_t)$, $(\mathbf{x}_{t'}, \mathbf{y}_{t'})$, their prediction functions (Eq. 1) differ only in their $z$'s, which enforces the constraint that functionality is otherwise completely shared across the models. One can view $\Omega$ as a traveling observer, who visits several locations in the $C$-dimensional variable space, takes measurements at those locations, and uses this information to make predictions of values at other locations. To make $\Omega$ concrete, it must be a function that can be applied to any number of variables, can fit any set of prediction problems, and is invariant to variable ordering, since we cannot in general assume that a meaningful order exists. These requirements lead to the following decomposition: $$E[y_j \mid \mathbf{x}] = \Omega(\mathbf{x}, \{z_i\}_{i=1}^{n}, z_j) = g\Big(\sum_{i=1}^{n} f(x_i, z_i),\; z_j\Big), \quad (2)$$ where $f$ and $g$ are functions called the encoder and decoder, with trainable parameters $\theta_f$ and $\theta_g$, respectively. The variable embeddings $z$ tell $f$ and $g$ which variables they are observing, and these $z$ can be learned by gradient descent alongside $\theta_f$ and $\theta_g$. A depiction of the model is shown in Figure 1. For some integer $M$, $f: \mathbb{R}^{C+1} \to \mathbb{R}^{M}$ and $g: \mathbb{R}^{M+C} \to \mathbb{R}$. In principle, $f$ and $g$ could be any sufficiently expressive functions of this form. A natural choice is to implement them as neural networks. They are called the encoder and decoder because they map variables to and from a latent space of size $M$. This model can then be trained end-to-end with gradient descent. A batch for gradient descent is constructed by sampling a prediction problem, e.g., a task, from the distribution of problems of interest, and then sampling a batch of data from the data set for that problem. Notice that, in addition to supervised training, in this framework it is natural to autoencode, i.e., predict input variables, and to subsample inputs to simulate multiple tasks drawn from the same universe. The question remains: How can $f$ and $g$ be designed so that they can sufficiently capture a broad range of prediction behavior, and be effectively conditioned by variable embeddings? The next section introduces an experimental architecture that satisfies these requirements. 4 INSTANTIATION The experiments in this paper implement TOM using a generic architecture built from standard components (Figure 2). The encoder and decoder are conditioned on VEs via FiLM layers (Perez et al., 2018), which provide a flexible yet inexpensive way to adapt functionality to each variable, and have been previously used to incorporate task embeddings (Vuorio et al., 2019; Zintgraf et al., 2019). For simplicity, the FiLM layers are based on affine transformations of VEs. Specifically, the $\ell$-th FiLM layer $F_\ell$ is parameterized by affine layers $W_\ell^{*}$ and $W_\ell^{+}$, and, given a variable embedding $z$, the hidden state $h$ is modulated by $$F_\ell(h) = W_\ell^{*}(z) \odot h + W_\ell^{+}(z), \quad (3)$$ where $\odot$ is the Hadamard product. A FiLM layer is located alongside each fully-connected layer in the encoder and decoder, both of which consist primarily of residual blocks.
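A FiLM layer of this form amounts to only a few lines of PyTorch. The sketch below assumes the affine generators $W_\ell^{*}$ and $W_\ell^{+}$ are single linear layers, as stated above; the class and argument names are illustrative, not taken from the paper's released code:

import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Eq. (3): F(h) = W*(z) ⊙ h + W+(z), conditioning a hidden state h
    on a variable embedding z via a learned affine transformation."""
    def __init__(self, ve_dim, hidden):
        super().__init__()
        self.scale = nn.Linear(ve_dim, hidden)  # generates W*(z)
        self.shift = nn.Linear(ve_dim, hidden)  # generates W+(z)

    def forward(self, h, z):
        return self.scale(z) * h + self.shift(z)

Because the scale and shift are produced from the VE, the same encoder and decoder weights can be modulated per variable at negligible extra cost, which is what makes this conditioning suitable for sharing one model across many variables.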
To avoid deleterious behavior of batch norm across diverse tasks and small datasets/batches, the recently proposed SkipInit (De & Smith, 2020) is used as a replacement to stabilize training. SkipInit adds a trainable scalar $\alpha$ initialized to 0 at the end of each residual block, and uses dropout for regularization. Finally, for computational efficiency, the decoder is redecomposed into the Core, or $g_1$, which is independent of the output variable, and the Decoder proper, or $g_2$, which is conditioned on the output variable. That way, generic transformations of the summed Encoder output can be learned by the Core and run in a single forward and backward pass each iteration. With this decomposition, Eq. 2 is rewritten as $$E[y_j \mid \mathbf{x}] = g_2\Big(g_1\Big(\sum_{i=1}^{n} f(x_i, z_i)\Big),\; z_j\Big). \quad (4)$$ The complete architecture is depicted in Figure 2. In the following sections, all models are implemented in pytorch (Paszke et al., 2017), use Adam for optimization (Kingma & Ba, 2014), and have a hidden layer size of 128 for all layers. Variable embeddings for TOM are initialized from $\mathcal{N}(0, 10^{-3})$. See Appendix C for additional details of this implementation. 5 EXPERIMENTS This section presents a suite of experiments that evaluate the behavior of the implementation introduced in Section 4. See the Appendix for additional experimental details. 5.1 VALIDATING LEARNED VARIABLE EMBEDDINGS: DISCOVERING SPACE AND TIME The experiments in this section test TOM's ability to learn variable embeddings that reflect our a priori intuition about the domain, in particular, the organization of space and time. CIFAR. The first experiment is based on the CIFAR dataset (Krizhevsky, 2009). The pixels of the 32 × 32 images are converted to grayscale values in [0, 1], yielding 1024 variables. The goal is to predict all variable values, given only a subset of them as input. The model is trained to minimize the binary cross-entropy of each output, and it uses 2D VEs. The a priori, or Oracle, expectation is that the VEs form a 32 × 32 grid corresponding to how pixels are spatially laid out in an image. Daily Temperature. The second experiment is based on the Melbourne minimum daily temperature dataset (Brownlee, 2016), a subset of a larger database for tracking climate change (Della-Marta et al., 2004). As above, the goal is to predict the daily temperature of the previous 10 days, given only some subset of them, by minimizing the MSE of each variable. The a priori, Oracle, expectation is that the VEs are laid out linearly in a single temporal dimension. The goal is to see whether TOM will also learn VEs (in a 2D space) that follow a clear 1D manifold that can be interpreted as time. For both experiments, a subset of the input variables is randomly sampled at each training iteration, which simulates drawing tasks from a limited universe. The resulting learning process for the VEs is illustrated in Figures 3 and 4. The VEs for CIFAR pull apart and unfold, until they reflect the oracle embeddings (Figure 3). The remaining difference is that TOM peels the border of the CIFAR images (the upper loop of VEs at iteration 300K) away from their center (the lower grid). This makes sense, since CIFAR images all feature a central object, which semantically splits the image into foreground (the object itself) and background (the remaining ring of pixels around the object). Similarly, the VEs for daily temperature pull apart until they form a perfect 1D manifold representing the time dimension (Figure 4).
The main difference is that TOM has embedded this 1D structure as a ring in 2D, which is well-suited to the nonlinear encoder and decoder, since it mirrors an isotropic Gaussian distribution. Note that unlike visualization methods like SOM (Kohonen, 1990), PCA (Pearson, 1901), or t-SNE (van der Maaten & Hinton, 2008), TOM learns locations for each variable, not each sample. Furthermore, TOM has no explicit motivation to visualize; learned VEs are simply the locations found to be useful by using gradient descent when solving the prediction problem. To get an idea of how learning VEs affects prediction performance, comparisons were run with three cases of fixed VEs: (1) all VEs set to zero, to address the question of whether differentiating variables with VEs is needed at all in the model; (2) random VEs, to address the question of whether simply having any unique label for variables is sufficient; and (3) oracle VEs, which reflect the human a priori expectation of how the variables should be arranged. The results show that the learned embeddings outperform zero and random embeddings, achieving performance on par with the Oracle (Table 2). The conclusion is that learned VEs in TOM are not only meaningful, but can help make superior predictions, without a priori knowledge of variable meaning. The next section shows how such VEs can be used to exploit regularities across tasks in an MTL setting. 5.2 EXPLOITING REGULARITIES ACROSS DISJOINT TASKS This section considers two synthetic multi-task problems that contain underlying regularities across tasks. These regularities are not known to the model a priori; it can only exploit them via its VEs. The first problem evaluates TOM in a regression setting where input and output variables are drawn from the same continuous space; the second problem evaluates TOM in a classification setting. For classification tasks, each class defines a distinct output variable. Transposed Gaussian Process. In the first problem, the universe is defined by a Gaussian process (GP). The GP is 1D, is zero-mean, and has an RBF kernel with length-scale 1. One task is generated for each (# inputs, # outputs) pair in $\{1, \ldots, 10\} \times \{1, \ldots, 10\}$, for a total of 100 tasks. The "true" location of each variable lies in the single dimension of the GP, and is sampled uniformly from [0, 5]. Samples for the task are generated by sampling from the GP, and measuring the value at each variable location. The dataset for each task contains 10 training samples, 10 validation samples, and 100 test samples. Samples are generated independently for each task. The goal is to minimize the MSE of the outputs. Figure 5 gives two examples of tasks drawn from this universe. This testbed is ideal for TOM, because, by the definition of the GP, it explicitly captures the idea that variables whose VEs are nearby are closely related, and every variable has some effect on all others. Concentric Hyperspheres. In the second problem, each task is defined by a set of concentric hyperspheres. Many areas of human knowledge have been organized abstractly as such hyperspheres, e.g., planets around a star, electrons around an atom, social relationships around an individual, or suburbs around Washington D.C.; the idea is that a model that discovers this common organization could then share general knowledge across such areas more effectively. To test this hypothesis, one task is generated for each (# features $n$, # classes $m$) pair in $\{1, \ldots, 10\} \times \{2, \ldots, 10\}$, for a total of 90 tasks.
For each task, its origin $o_t$ is drawn from $\mathcal{N}(0, I_n)$. Then, for each class $c \in \{1, \ldots, m\}$, samples are drawn from $\mathbb{R}^n$ uniformly at distance $c$ from $o_t$, i.e., each class is defined by a (hyper) annulus. The dataset for each task contains five training samples, five validation samples, and 100 test samples per class. The model has no a priori knowledge that the classes are structured in annuli, or which annulus corresponds to which class, but it is possible to achieve high accuracy by making analogies of annuli across tasks, i.e., discovering the underlying structure of this universe. In these experiments, TOM is compared to five alternative methods: (1) TOM-STL, i.e., TOM trained on each task independently; (2) DR-MTL (Deep Residual MTL), the standard cross-domain (Table 1c) version of TOM, where instead of FiLM layers, each task has its own linear encoder and decoder layers, and all residual blocks are CoreResBlocks; (3) DR-STL, which is like DR-MTL except it is trained on each task independently; (4) SLO (Soft Layer Ordering; Meyerson & Miikkulainen, 2018), which uses a separate encoder and decoder for each task, and which is (as far as we know) the only prior Deep MTL approach that has been applied across disjoint tabular datasets; and (5) Oracle, i.e., TOM with VEs fixed to intuitively correct values. The Oracle is included to give an upper bound on how well the TOM architecture in Section 4 could possibly perform. The oracle VE for each Transposed GP task variable is the location where it is measured in the GP; for Concentric Hyperspheres, the oracle VE for each class $c$ is $c/10$, and for the $i$-th feature it is $o_{ti}$. TOM outperforms the competing methods and achieves performance on par with the Oracle (Table 3). Note that the improvement of TOM over TOM-STL is much greater than that of DR-MTL over DR-STL, indicating that TOM is particularly well-suited to exploiting structure across disjoint data sets (learned VEs are shown in Figure 6a-b). Now that this suitability has been confirmed, the next section evaluates TOM across a suite of disjoint, and seemingly unrelated, real-world problems. 5.3 MULTI-TASK LEARNING ACROSS SEEMINGLY UNRELATED REAL-WORLD DATASETS This section evaluates TOM in the setting for which it was designed: learning a single shared model across seemingly unrelated real-world datasets. The set of tasks used is UCI-121 (Lichman, 2013; Fernández-Delgado et al., 2014), a set of 121 classification tasks that has been previously used to evaluate the overall performance of a variety of deep NN methods (Klambauer et al., 2017). The tasks come from diverse areas such as medicine, geology, engineering, botany, sociology, politics, and game-playing. Prior work has tuned each model to each task individually in the single-task regime; no prior work has undertaken learning of all 121 tasks in a single joint model. The datasets are highly diverse. Each simply defines a classification task that a machine learning practitioner was interested in solving. The number of features for a task ranges from 3 to 262, the number of classes from 2 to 100, and the number of samples from 10 to 130,064. To avoid underfitting to the larger tasks, $C = 128$, and after joint training all model parameters ($\theta_f$, $\theta_{g_1}$, $\theta_{g_2}$, and $z$'s) are finetuned on each task with at least 5K samples.
Note that it is not expected that training any two tasks jointly will improve performance in both tasks, but that training all 121 tasks jointly will improve performance overall, as the model learns general knowledge about how to make good predictions. Results across a suite of metrics are shown in Table 4. Mean Accuracy is the test accuracy averaged across all tasks. Normalized Accuracy scales the accuracy within each task before averaging across tasks, with 0 and 100 corresponding to the lowest and highest accuracies. Mean Rank averages the method’s rank across tasks, where the best method gets a rank of 0. Best % is the percentage of tasks for which the method achieves the top accuracy (with possible ties). Win % is the percentage of tasks for which the method achieves accuracy strictly greater than all other methods. TOM outperforms the alternative approaches across all metrics, showing its ability to learn many seemingly unrelated tasks successfully in a single model (see Figure 6c for a high-level visualization of learned VEs). In other words, TOM can both learn meaningful VEs and use them to improve prediction performance. 6 DISCUSSION AND FUTURE WORK Sections 2 and 3 developed the foundations for the TOM approach; Sections 4 and 5 illustrated its capabilities, demonstrating its value as a general multitask learning system. This section discusses four key areas of future work for increasing the understanding and applicability of the approach. First, there is an opportunity to develop a theoretical framework for understanding when TOM will work best. It is straightforward to extend universal approximation results from approximation of single functions (Cybenko, 1989; Lu et al., 2017; Kidger & Lyons, 2020) to approximation of a set of functions each with any input and output dimensionality via Eq. 2. It is also straightforward to extend convergence bounds for certain model classes, such as PAC bounds (Bartlett & Mendelson, 2002; Neyshabur et al., 2018), to TOM architectures implemented with these classes, if the “true” variable embeddings are fixed a priori, so they can simply be treated as features. However, a more intriguing direction involves understanding how the true locations of variables affects TOM’s ability to learn and exploit them, i.e., what are desirable theoretical properties of the space of variables? Second, in this paper, TOM was evaluated only in the case when the data for all tasks is always available, and the model is trained simultaneously across all tasks. However, it would also be natural to apply TOM in a meta-learning regime (Finn et al., 2017; Zintgraf et al., 2019), in which the model is trained explicitly to generalize to future tasks, and to lifelong learning (Thrun & Pratt, 2012; Brunskill & Li, 2014; Abel et al., 2018), where the model must learn new tasks as they appear over time. Simply freezing the learned parameters of TOM results in a parametric class of ML models with C parameters per variable that can be applied to new tasks. However, in practice, it should be possible to improve upon this approach by taking advantage of more sophisticated fine-tuning and parameter adaptation. For example, in low-data settings, methods can be adapted from meta-learning approaches that modulate model weights in a single forward pass instead of performing supervised backpropagation (Garnelo et al., 2018; Vuorio et al., 2019). 
Interestingly, although such meta-learning approaches are designed to address issues quite different from those motivating TOM, their architectures have a functional decomposition that is similar to that of TOM at a high level (see, e.g., Conditional Neural Processes, or CNPs; Garnelo et al., 2018). In essence, replacing the VEs in Eq. 2 with input samples and the variables with output samples yields a function that generates a prediction model given a dataset. This analogy suggests that it should be possible to extend the benefits of CNPs to TOM, including rich uncertainty information.

Third, to make the foundational case for TOM, this paper focused on the setting where VEs are a priori unknown, but when such knowledge is available, it could be useful to integrate it with learned VEs. Such an approach could eliminate the cost of relearning VEs, and suggest how to take advantage of spatially-customized architectures. E.g., convolution or attention layers could be used instead of dense layers as architectural primitives, as in vision and language tasks. Such specialization could be instrumental in making TOM more broadly applicable and more powerful in practice.

Finally, one interpretation of Fig. 6c is that the learned VEs of classes encode a task-agnostic concept of "normal" vs. "abnormal" system states. TOM could be used to analyze the emergence of such general concepts and as an analogy engine: to describe states of one task in the language of another.

7 CONCLUSION

This paper introduced the traveling observer model (TOM), which enables a single model to be trained across diverse tasks by embedding all task variables into a shared space. The framework was shown to discover intuitive notions of space and time and use them to learn variable embeddings that exploit knowledge across tasks, outperforming single- and multi-task alternatives. Thus, learning a single function that cares only about variable locations and their values is a promising approach to integrating knowledge across data sets that have no a priori connection. The TOM approach thus extends the benefits of multi-task learning to broader sets of tasks.

ACKNOWLEDGEMENTS

Thank you to Babak Hodjat and others in the Evolutionary AI research group for helpful discussions and technical feedback. Thank you also to the reviewers, particularly for their suggestions for improving the organizational structure and clarity of the paper.

A ADDITIONAL EXPERIMENT ON THE EMBEDDING SIZE C

In the experiments in Sections 5.1 and 5.2, the VE dimensionality C for TOM was set to 2 in order to most clearly visualize the VEs that were learned. In the experiment in Section 5.3, C was increased in order to accommodate the scale-up to a large number of highly diverse real-world tasks. In that experiment C was set to 128 in order to match the number of task-specific parameters of the other Deep MTL methods compared in Table 4. To evaluate the sensitivity of TOM to the setting of C, additional experiments were run for TOM on UCI-121 with C = 64 and C = 256. The results are shown in Table 5. Metrics for all settings of C are computed w.r.t. the external comparison methods, i.e., those in Table 4a. TOM with C = 64 produces performance comparable to C = 128, suggesting that optimizing C could be a useful lever for balancing performance and VE interpretability.

B PYTORCH CODE

To give a detailed picture of how the TOM architecture in this paper was implemented, the code for the forward pass of the model implemented in pytorch (Paszke et al., 2017) is given in Figure 7.
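Figure 7 itself is not reproduced in this text. As a rough stand-in, the following is a minimal sketch of the forward-pass structure of Eq. 4, with generic callables for f, g1, and g2; shapes and names are my own, not the paper's actual code, which is in the repository linked below.

    import torch

    def tom_forward(f, g1, g2, x, z_in, z_out):
        # x: (batch, n_in) observed values; z_in: (n_in, C) input VEs;
        # z_out: (n_out, C) output VEs. Implements g2(g1(sum_i f(x_i, z_i)), z_j).
        b = x.shape[0]
        enc_in = torch.cat([x.unsqueeze(-1), z_in.expand(b, -1, -1)], dim=-1)
        h = f(enc_in).sum(dim=1)           # aggregate by elementwise addition
        h = g1(h)                          # Core: output-independent transform
        n_out = z_out.shape[0]
        dec_in = torch.cat([h.unsqueeze(1).expand(-1, n_out, -1),
                            z_out.expand(b, -1, -1)], dim=-1)
        return g2(dec_in).squeeze(-1)      # (batch, n_out) predictions

In the actual implementation, the per-variable application of f and g2 is parallelized with Conv1D layers, as explained next.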
For efficiency, TOM is implemented with Conv1D layers with kernel size 1 instead of Dense layers. This approach enables the model to run the encoder and decoder on all variables in parallel. The fact that Conv layers are so highly optimized in pytorch makes the implementation substantially more efficient than with Dense layers. In this code, input_batch has shape (batch size, # input variables), input_contexts has shape (1, VE dim, # input variables), and output_contexts has shape (1, VE dim, # output variables). Code for TOM will be available at https://github.com/leaf-ai/tom-release.

C ADDITIONAL EXPERIMENTAL DETAILS

A sigmoid layer is applied at the end of the decoder for the CIFAR experiments, to squash the output between 0 and 1. For the CIFAR and Daily Temperature experiments, a subset of the variables is sampled each iteration to be used as input. This subset is sampled in the following way: (1) sample the size k of the subset uniformly from [1, n_t], where n_t is the number of variables in the experiment; (2) sample a subset of variables of size k uniformly from all subsets of size k. This sampling method ensures that every subset size has an equal chance of being selected, so that the universe is not biased towards tasks of a particular size. E.g., if instead the subset were created by sampling each variable independently with probability p, then the subset size would concentrate tightly around p·n_t (a sketch of this procedure, together with the loss described next, is given below). For classification tasks, each class defines a distinct output variable, i.e., a K-class classification task has K output variables. The squared hinge loss was used for classification tasks (Janocha & Czarnecki, 2017). It is preferable to the categorical cross-entropy loss in this setting, because it does not require taking a softmax across output variables, so the outputs are kept separate. Also, the loss becomes exactly zero once a sample is learned strongly, so that the model does not continue to overfit as remaining samples and tasks are learned.

The number of blocks in the encoder, core, and decoder is N = 3 for all problems except UCI-121, for which it is N = 10. All experiments use a hidden size of 128 for all dense layers aside from the final decoder layer that maps to the output space. The batch size was 32 for CIFAR and Daily Temperature, and max(200, # train samples) for all other tasks. At each step, T_o tasks are uniformly sampled from the set of all tasks, and gradients are summed over a batch for each task in the sample. T_o = 1 in all experiments except UCI-121, for which T_o = 32. To allow for multi-task training with datasets of varying numbers of samples, we say the model has completed one epoch each time it is evaluated on the validation set. An epoch is 1K steps for CIFAR, 100 steps for Daily Temperature, 1K steps for Transposed Gaussian Process, 1K steps for Concentric Hyperspheres, and 10K steps for UCI-121. For CIFAR, the official training and test splits are used for training and testing. No validation set is needed for CIFAR, because none of the models can overfit to the training set. For Daily Temperature, the second-to-last year of data is withheld for validation, and the final year is withheld for testing. The UCI-121 experiments use the preprocessed versions of the official train-val-test splits (https://github.com/bioinf-jku/SNNs/tree/master/UCI). Adam is used for all experiments, with all parameters initialized to their default values. In all experiments except UCI-121, the learning rate is kept constant at 0.001 throughout training.
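The two ingredients referenced above, size-first variable subsampling and the squared hinge loss over per-class outputs, can be sketched as follows (a minimal reading of the text; function names are mine):

    import torch

    def sample_variable_subset(n_t, rng):
        # (1) size k uniform on [1, n_t]; (2) uniform subset of that size,
        # so every subset size is equally likely (unlike per-variable
        # Bernoulli sampling). rng is a numpy Generator.
        k = rng.integers(1, n_t + 1)
        return rng.choice(n_t, size=k, replace=False)

    def squared_hinge(outputs, labels):
        # outputs: (batch, K) per-class scores; labels: (batch,) class indices.
        # Target is +1 for the true class, -1 otherwise; no softmax across
        # outputs, and the loss is exactly zero once margins exceed 1.
        targets = -torch.ones_like(outputs)
        targets[torch.arange(len(outputs), device=outputs.device), labels] = 1.0
        return torch.clamp(1.0 - targets * outputs, min=0.0).pow(2).mean()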
In UCI-121, the learning rate is decreased by a factor of two when the mean validation accuracy has not increased in 20 epochs; it is decreased at most five times; model training stops when it would be decreased a sixth time. Models are trained for 500K steps for CIFAR, 100K steps for Daily Temperature, and 250K steps for Transposed Gaussian Process and Concentric Hyperspheres. The test performance for each task is its performance on the test set after the epoch of its best validation performance. Weights are initialized using the default pytorch initialization (aside from the SkipInit α scalars, which are initialized to zero (De & Smith, 2020)). The experiments in Section 5.1 use no weight decay; those in Section 5.2 use weight decay of 10^-4; and those in Section 5.3 use weight decay of 10^-5. Dropout is set to 0.0 for CIFAR, Daily Temperature, and Concentric Hyperspheres; and to 0.5 for Transposed Gaussian Process and UCI-121. In UCI-121, fully-trained MTL models are finetuned to tasks with more than 5,000 samples, using the same optimizer configuration as for joint training, except the steps-per-epoch is set to ⌈# train samples / batch size⌉, the learning rate is initialized to 0.0001, the patience for early stopping is set to 100, and the validation performance is smoothed over every 10 epochs (simple moving average), following the protocol used to train single-task models in prior work (Klambauer et al., 2017). TOM uses a VE size of C = 2 for all experiments, except for UCI-121, where C = 128 in order to accommodate the complexity of such a large and diverse set of tasks. For Figure 6c, t-SNE (van der Maaten & Hinton, 2008) was used to reduce the dimensionality to two. t-SNE was run for 10K iterations with default parameters in the scikit-learn implementation (Pedregosa et al., 2011), after first reducing the dimensionality from 128 to 32 via PCA. Independent runs of t-SNE yielded qualitatively similar results. Autoencoding (i.e., predicting the input variables as well as unseen variables) was used for CIFAR, Daily Temperature, and Transposed Gaussian Process; it was not used for Concentric Hyperspheres or UCI-121. The Soft Layer Ordering architecture follows the original implementation (Meyerson & Miikkulainen, 2018). There are four shared ReLU layers, each of size 128, with dropout after each to ease sharing across different soft combinations of layers. In Tables 2 and 3, means and standard errors for each method are computed over ten runs. The Daily Temperature dataset was downloaded from https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv.

D ADDITIONAL DETAILED RESULTS FOR UCI-121 EXPERIMENT

Table 6 contains test accuracies for each UCI-121 task for all methods run in the experiments in Section 5.3.
Task DR-STL TOM-STL DR-MTL SLO TOM
led-display 75.600 27.200 79.600 73.600 74.000
lenses 83.333 66.667 50.000 50.000 50.000
letter 95.980 97.480 87.220 94.580 94.780
libras 43.333 11.111 78.889 76.667 80.000
low-res-spect 81.955 56.391 83.459 82.707 90.977
lung-cancer 50.000 25.000 62.500 50.000 62.500
lymphography 86.486 56.757 94.595 86.486 86.486
magic 86.982 86.898 81.325 86.877 87.024
mammographic 81.250 82.500 80.833 82.083 83.750
miniboone 92.782 94.630 93.345 94.338 93.532
molec-biol-promoter 88.462 50.000 69.231 61.538 92.308
molec-biol-splice 85.696 92.723 86.324 85.822 93.350
monks-1 65.509 50.000 71.991 86.574 80.787
monks-2 40.509 67.130 62.731 64.583 62.500
monks-3 74.306 52.778 66.898 68.981 58.102
mushroom 99.655 100.000 99.803 100.000 100.000
musk-1 83.193 57.143 92.437 90.756 91.597
musk-2 98.666 98.848 98.787 99.272 99.636
nursery 99.568 99.877 95.926 99.753 99.630
oocytes_merluccius_nucleus_4d 83.922 70.588 77.647 83.529 85.098
oocytes_merluccius_states_2f 89.412 92.549 94.510 92.157 95.294
oocytes_trisopterus_nucleus_2f 73.684 75.877 75.439 78.509 78.947
oocytes_trisopterus_states_5b 94.298 92.544 93.421 94.737 92.982
optical 95.993 95.326 94.658 94.380 95.938
ozone 97.161 97.161 97.161 97.161 97.161
page-blocks 95.468 96.199 94.371 96.272 96.345
parkinsons 89.796 75.510 83.673 87.755 83.673
pendigits 96.855 97.055 97.055 96.884 96.627
pima 71.875 71.875 73.438 75.521 76.562
pittsburg-bridges-MATERIAL 73.077 76.923 88.462 84.615 92.308
pittsburg-bridges-REL-L 69.231 65.385 65.385 73.077 61.538
pittsburg-bridges-SPAN 52.174 56.522 65.217 65.217 60.870
pittsburg-bridges-T-OR-D 84.000 88.000 84.000 84.000 88.000
pittsburg-bridges-TYPE 38.462 50.000 61.538 65.385 53.846
planning 64.444 71.111 71.111 68.889 71.111
plant-margin 76.750 6.750 71.250 69.500 74.000
plant-shape 39.000 20.750 31.500 65.750 70.500
plant-texture 74.250 4.000 69.750 69.000 77.250
post-operative 72.727 72.727 77.273 72.727 72.727
primary-tumor 45.122 30.488 47.561 47.561 51.220
ringnorm 95.027 98.108 84.324 96.054 98.324
seeds 80.769 80.769 86.538 94.231 92.308
semeion 95.729 92.462 94.724 88.693 94.472
soybean 65.426 18.617 89.628 82.979 83.777
spambase 93.826 92.609 92.609 93.478 93.913
spect 61.828 56.989 67.204 65.054 68.280
spectf 49.733 91.979 60.963 60.428 91.979
statlog-australian-credit 66.860 68.023 68.023 63.372 62.209
statlog-german-credit 73.600 76.000 74.400 76.800 74.800
statlog-heart 89.552 79.104 89.552 82.090 83.582
statlog-image 96.360 95.841 90.988 97.054 97.747
statlog-landsat 89.900 91.250 83.450 88.950 90.600
statlog-shuttle 98.621 99.945 98.021 99.910 99.945
statlog-vehicle 73.934 48.341 78.199 79.621 74.882
steel-plates 74.845 64.536 68.041 76.495 77.526
synthetic-control 73.333 69.333 97.333 96.667 99.333
teaching 60.526 36.842 55.263 52.632 47.368
thyroid 98.308 98.775 96.820 97.841 98.804
tic-tac-toe 97.071 97.071 97.071 97.071 96.653
titanic 77.636 77.091 78.364 78.364 78.364
trains 100.000 50.000 100.000 100.000 100.000
twonorm 98.270 98.108 98.162 98.108 98.054
vertebral-column-2clases 83.117 67.532 87.013 87.013 85.714
vertebral-column-3clases 70.130 59.740 84.416 68.831 85.714
wall-following 86.437 98.827 72.507 90.396 97.434
waveform 87.520 87.360 87.760 86.800 87.760
waveform-noise 85.920 85.360 85.360 84.720 85.840
wine 100.000 70.455 100.000 100.000 100.000
wine-quality-red 59.000 57.500 57.750 63.750 61.000
wine-quality-white 56.863 53.758 53.513 57.761 56.944
yeast 60.108 53.908 60.377 59.838 59.838
zoo 96.000 48.000 96.000 96.000 92.000
1. What is the focus and contribution of the paper regarding multi-task learning?
2. What are the strengths of the proposed approach, particularly in terms of the embedding mechanism?
3. What are the weaknesses of the paper, especially regarding the lack of references and analysis of similar works?
4. Do you have any questions or concerns regarding the notation and implementation of the TOM embedding?
5. How does the dimensionality of the manifold affect the approach's performance?
6. Are there any suggestions for improving the error metrics and experimental results presentation?
7. Would it be beneficial to include more discussion and motivation for the circle experiment?
8. Does the paper adequately address the problem of disjoint input domains?
9. What are your overall recommendations for improving the paper's content and structure?
Review
Summary. The authors present a methodology for performing multi-task learning from data with disjoint and heterogeneous input domains. In particular, they introduce an embedding of the inputs, in order to project each pair of input-output observations into a common continuous manifold where exploration is significantly easier. Results show that the approach is valid with both synthetic and real-world data, and they also demonstrate that the model is flexible when increasing/decreasing the dimensionality of the latent manifold.

Strengths. The explanation of the multi-task learning scenario with disjoint input domains is particularly well-written. This description makes it easier to understand the reasons behind the introduction of the embedding between every single input and the latent vectors z. Additionally, the authors made an effort to explain point-by-point the structure of the deep NN transformation behind the embedding. This is valuable. I appreciated the design of the experiments, and the (author-blind) video on YouTube was impressive.

Weaknesses, Questions & Recommendations. The main weaknesses (to me) in the paper are:

[W1] There is likely a lack of references and analysis of similar works on multi-task learning with the particular problem of disjoint inputs. This makes the reader doubt the potential novelty of the model, in particular of the embedding.

[W2] The notation based on subsets V_t is a bit confusing (I think that keeping the (x, y, z) notation all along the paper would be better). Particularly on p. 3, this notation is difficult to follow before the introduction of the TOM embedding.

[W3] The TOM implementation might be better placed before the experiments, a bit better connected with the main section of the manuscript, but this is just an opinion.

[W4] More analysis of the dimensionality D of the manifold could be of interest to the reader. In the last experiment, this dimensionality is pretty high. [Q] Why is this? What is the principal consequence?

[W5] Error metrics in the experiments do not include confidence intervals or variance values from several runs.

[W6] Typically, one chooses either Discussion or Conclusion. The content of the Conclusion is similar to what is said in the previous section.

Recommendations:

[Rec1] Motivating the disjoint input problem even better from the very beginning would make the paper stronger.

[Rec2] An input-output notation all along the paper, and a diagram explaining the projection into a continuous manifold, would help as well.

[Rec3] Details about the implementation could be better placed in the appendix, or at least integrated with the model and the flow of explanations.

[Rec4] Confidence intervals in the tables of error metrics, as well as a bit more motivation for the circle experiment, would improve the presentation of the experiments.

Reasons for score. I understood the idea that the authors presented and the problem of disjoint input domains. However, I feel that the presentation of the model is a bit weak, and the experiments could be improved with a few details. The last paragraph of the manuscript, with the duplicated Discussion + Conclusion, is also a bit odd. For this reason, I cannot recommend an acceptance score for this venue.

Post-rebuttal comments. Thanks to the authors for their response. The updated version of the manuscript addressed my main concerns and recommendations. It is now clearly improved, figures and metrics have been updated, and the proposed methodology is better presented. The authors even made major changes to the structure of the paper, which I recognize as an important revision. Having said this, I raised my score.
ICLR
Title
The Traveling Observer Model: Multi-task Learning Through Spatial Variable Embeddings

Abstract
This paper frames a general prediction system as an observer traveling around a continuous space, measuring values at some locations, and predicting them at others. The observer is completely agnostic about any particular task being solved; it cares only about measurement locations and their values. This perspective leads to a machine learning framework in which seemingly unrelated tasks can be solved by a single model, by embedding their input and output variables into a shared space. An implementation of the framework is developed in which these variable embeddings are learned jointly with internal model parameters. In experiments, the approach is shown to (1) recover intuitive locations of variables in space and time, (2) exploit regularities across related datasets with completely disjoint input and output spaces, and (3) exploit regularities across seemingly unrelated tasks, outperforming task-specific single-task models and multi-task learning alternatives. The results suggest that even seemingly unrelated tasks may originate from similar underlying processes, a fact that the traveling observer model can use to make better predictions.

1 INTRODUCTION

Natural organisms benefit from the fact that their sensory inputs and action outputs are all organized in the same space, that is, the physical universe. This consistency makes it easy to apply the same predictive functions across diverse settings. Deep multi-task learning (Deep MTL) has shown a similar ability to adapt knowledge across tasks whose observed variables are embedded in a shared space. Examples include vision, where the input for all tasks (photograph, drawing, or otherwise) is pixels arranged in a 2D plane (Zhang et al., 2014; Misra et al., 2016; Rebuffi et al., 2017); natural language (Collobert & Weston, 2008; Luong et al., 2016; Hashimoto et al., 2017), speech processing (Seltzer & Droppo, 2013; Huang et al., 2015), and genomics (Alipanahi et al., 2015), which exploit the 1D structure of text, waveforms, and nucleotide sequences; and video game-playing (Jaderberg et al., 2017; Teh et al., 2017), where interactions are organized across space and time. Yet, many real-world prediction tasks have no such spatial organization; their input and output variables are simply labeled values, e.g., the height of a tree, the cost of a haircut, or the score on a standardized test. To make matters worse, these sets of variables are often disjoint across a set of tasks. These challenges have led the MTL community to avoid such tasks, despite the fact that general knowledge about how to make good predictions can arise from solving seemingly "unrelated" tasks (Mahmud & Ray, 2008; Mahmud, 2009; Meyerson & Miikkulainen, 2019).

This paper proposes a solution: Learn all variable locations in a shared space, while simultaneously training the prediction model itself (Figure 1). To illustrate this idea, Figure 1a gives an example of four tasks whose variable values are measured at different locations in the same underlying 2D embedding space. The shape of each marker (i.e., ◦, □, △, ⋆) denotes the task to which that variable belongs; white markers denote input variables, black markers denote output variables, and the background coloring indicates the variable values in the entire embedding space when the current sample is drawn.
As a concrete example, the color could indicate the air temperature at each point in a geographical region at a given moment in time, and each marker the location of a temperature sensor (however, note that the embedding space is generally more abstract). Figure 1b-c shows a model that can be applied to any task in this universe, using the ◦ task as an example: (b) the function f encodes the value of each observed variable x_i given its 2D location z_i ∈ R^2, and these encodings are aggregated by elementwise addition ⊕; (c) the function g decodes the aggregated encoding to a prediction for y_j at its location z_j. Such a predictor can be viewed as a traveling observer model (TOM): It traverses the space of variables, taking a measurement at the location of each input. Given these observations, the model can make a prediction for the value at the location of an output. In general, the embedded locations z are not known a priori (i.e., when input and output variables do not have obvious physical locations), but they can be learned alongside f and g by gradient descent.

The input and output spaces of a prediction problem can be standardized so that the measured value of each input and output variable is a scalar. The prediction model can then be completely agnostic about the particular task for which it is making a prediction. By learning variable embeddings (VEs), i.e., the z's, the model can capture variable relationships explicitly and supports joint training of a single architecture across seemingly unrelated tasks with disjoint input and output spaces. TOM thus establishes a new lower bound on the commonalities shared across real-world machine learning problems: They are all drawn from the same space of variables that humans can and do measure. This paper develops a first implementation of TOM, using an encoder-decoder architecture, with variable embeddings incorporated using FiLM (Perez et al., 2018). In the experiments, the implementation is shown to (1) recover the intuitive locations of variables in space and time, (2) exploit regularities across related datasets with disjoint input and output spaces, and (3) exploit regularities across seemingly unrelated tasks to outperform single-task models tuned to each task, as well as current Deep MTL alternatives. The results confirm that TOM is a promising framework for representing and exploiting the underlying processes of seemingly unrelated tasks.

2 BACKGROUND: MULTI-TASK ENCODER-DECODER DECOMPOSITIONS

This section reviews Deep MTL methods from the perspective of decomposition into encoders and decoders (Table 1). In MTL, there are T tasks {(x_t, y_t)}_{t=1}^T that can, in general, be drawn from different domains and have varying input and output dimensionality. The t-th task has n_t input variables [x_t1, …, x_tn_t] = x_t ∈ R^{n_t} and m_t output variables [y_t1, …, y_tm_t] = y_t ∈ R^{m_t}. Two tasks (x_t, y_t) and (x_t′, y_t′) are disjoint if their input and output variables are non-overlapping, i.e., ({x_ti}_{i=1}^{n_t} ∪ {y_tj}_{j=1}^{m_t}) ∩ ({x_t′i}_{i=1}^{n_t′} ∪ {y_t′j}_{j=1}^{m_t′}) = ∅. The goal is to exploit regularities across task models x_t ↦ ŷ_t by jointly training them with overlapping parameters. The standard intra-domain approach is for all task models to share their encoder f, and each to have its own task-specific decoder g_t (Table 1a).
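As a minimal illustration of this intra-domain pattern (my own toy module, not code from the paper):

    import torch.nn as nn

    class SharedEncoderMTL(nn.Module):
        # Table 1a: one shared encoder f, one task-specific decoder g_t per task.
        def __init__(self, in_dim, hidden_dim, out_dims):
            super().__init__()
            self.f = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                                   nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
            self.g = nn.ModuleList(nn.Linear(hidden_dim, d) for d in out_dims)

        def forward(self, x, t):
            # All tasks share f; only the decoder g_t depends on the task index t.
            return self.g[t](self.f(x))

Keeping each g_t a single affine layer reflects the concern noted below: if the decoders are too expressive, little is left to be shared.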
This setup was used in the original introduction of MTL (Caruana, 1998), has been broadly explored in the linear regime (Argyriou et al., 2008; Kang et al., 2011; Kumar & Daumé, 2012), and is the most common approach in Deep MTL (Huang et al., 2013; Zhang et al., 2014; Dong et al., 2015; Liu et al., 2015; Ranjan et al., 2016; Jaderberg et al., 2017). The main drawback of this approach is that it is limited to sets of tasks that are all drawn from the same domain. It also carries the risk of the separate decoders doing so much of the learning that there is not much left to be shared, which is why the decoders are usually single affine layers. To address the issue of limited sharing, the task embeddings approach trains a single encoder f and a single decoder g, with all task-specific parameters learned in embedding vectors z_t that semantically characterize each task, and which are fed into the model as additional input (Yang & Hospedales, 2014; Bilen & Vedaldi, 2017; Zintgraf et al., 2019) (Table 1b). Such methods require that all tasks have the same input and output space, but are flexible in how the embeddings can be used to adapt the model to each task. As a result, they can learn tighter connections between tasks than separate decoders, and these relationships can be analyzed by looking at the learned embeddings. To exploit regularities across tasks from diverse and disjoint domains, cross-domain methods have been introduced. Existing methods address the challenge of disjoint output and input spaces by using separate decoders and encoders for each domain (Table 1c), and thus they require some other method of sharing model parameters across tasks, such as sharing some of their layers (Kaiser et al., 2017; Meyerson & Miikkulainen, 2018) or drawing their parameters from a shared pool (Meyerson & Miikkulainen, 2019). For many datasets, the separate encoder and decoder absorb too much functionality to share optimally, and their complexity makes it difficult to analyze the relationships between tasks. Earlier work prior to deep learning showed that, from an algorithmic learning theory perspective, sharing knowledge across tasks should always be useful (Mahmud & Ray, 2008; Mahmud, 2009), but the accompanying experiments were limited to learning biases in a decision tree generation process, i.e., the learned models themselves were not shared across tasks. TOM extends the notion of task embeddings to variable embeddings in order to apply the idea in the cross-domain setting (Table 1d). The method is described in the next section.

3 THE TRAVELING OBSERVER MODEL

Consider the set of all scalar random variables that could possibly be measured {v_1, v_2, …} = V. Each v_i ∈ V could be an input or output variable for some prediction task. To characterize each v_i semantically, associate with it a vector z_i ∈ R^C that encodes the meaning of v_i, e.g., "height of left ear of human adult in inches", "answer to survey question 9 on a scale of 1 to 5", "severity of heart disease", "brightness of top-left pixel of photograph", etc. This vector z_i is called the variable embedding (VE) of v_i. Variable embeddings could be hand-coded, e.g., based on some featurization of the space of variables, but such a hand-coding is usually unavailable, and would likely miss some of the underlying semantic regularities across variables. An alternative approach is to learn variable embeddings based on their utility in solving prediction problems of interest. A prediction task (x, y) = ([x_1, …, x_n],
[y_1, …, y_m]) is defined by its set of observed variables {x_i}_{i=1}^n ⊆ V and its set of target variables {y_j}_{j=1}^m ⊆ V whose values are unknown. The goal is to find a prediction function Ω that can be applied across any prediction task of interest, so that it can learn to exploit regularities across such problems. Let z_i and z_j be the variable embeddings corresponding to x_i and y_j, respectively. Then, this universal prediction model is of the form

E[y_j | x] = Ω(x, {z_i}_{i=1}^n, z_j).   (1)

Importantly, for any two tasks (x_t, y_t), (x_t′, y_t′), their prediction functions (Eq. 1) differ only in their z's, which enforces the constraint that functionality is otherwise completely shared across the models. One can view Ω as a traveling observer, who visits several locations in the C-dimensional variable space, takes measurements at those locations, and uses this information to make predictions of values at other locations. To make Ω concrete, it must be a function that can be applied to any number of variables, can fit any set of prediction problems, and is invariant to variable ordering, since we cannot in general assume that a meaningful order exists. These requirements lead to the following decomposition:

E[y_j | x] = Ω(x, {z_i}_{i=1}^n, z_j) = g(∑_{i=1}^n f(x_i, z_i), z_j),   (2)

where f and g are functions called the encoder and decoder, with trainable parameters θ_f and θ_g, respectively. The variable embeddings z tell f and g which variables they are observing, and these z can be learned by gradient descent alongside θ_f and θ_g. A depiction of the model is shown in Figure 1. For some integer M, f : R^(C+1) → R^M and g : R^(M+C) → R. In principle, f and g could be any sufficiently expressive functions of this form. A natural choice is to implement them as neural networks. They are called the encoder and decoder because they map variables to and from a latent space of size M. This model can then be trained end-to-end with gradient descent. A batch for gradient descent is constructed by sampling a prediction problem, e.g., a task, from the distribution of problems of interest, and then sampling a batch of data from the data set for that problem. Notice that, in addition to supervised training, in this framework it is natural to autoencode, i.e., predict input variables, and to subsample inputs to simulate multiple tasks drawn from the same universe. The question remains: How can f and g be designed so that they can sufficiently capture a broad range of prediction behavior, and be effectively conditioned by variable embeddings? The next section introduces an experimental architecture that satisfies these requirements.

4 INSTANTIATION

The experiments in this paper implement TOM using a generic architecture built from standard components (Figure 2). The encoder and decoder are conditioned on VEs via FiLM layers (Perez et al., 2018), which provide a flexible yet inexpensive way to adapt functionality to each variable, and have been previously used to incorporate task embeddings (Vuorio et al., 2019; Zintgraf et al., 2019). For simplicity, the FiLM layers are based on affine transformations of VEs. Specifically, the ℓ-th FiLM layer F_ℓ is parameterized by affine layers W_ℓ^* and W_ℓ^+, and, given a variable embedding z, the hidden state h is modulated by

F_ℓ(h) = W_ℓ^*(z) ⊙ h + W_ℓ^+(z),   (3)

where ⊙ is the Hadamard product. A FiLM layer is located alongside each fully-connected layer in the encoder and decoder, both of which consist primarily of residual blocks.
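A sketch of this modulation as a pytorch module (a simplified stand-in, not the paper's actual implementation):

    import torch.nn as nn

    class FiLM(nn.Module):
        # Eq. 3: h -> W*(z) ⊙ h + W+(z), with W* and W+ affine maps of the VE z.
        def __init__(self, ve_dim, hidden_dim):
            super().__init__()
            self.scale = nn.Linear(ve_dim, hidden_dim)   # W*_l
            self.shift = nn.Linear(ve_dim, hidden_dim)   # W+_l

        def forward(self, h, z):
            return self.scale(z) * h + self.shift(z)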
To avoid deleterious behavior of batch norm across diverse tasks and small datasets/batches, the recently proposed SkipInit (De & Smith, 2020) is used as a replacement to stabilize training. SkipInit adds a trainable scalar α, initialized to 0, at the end of each residual block, and uses dropout for regularization. Finally, for computational efficiency, the decoder is redecomposed into the Core, or g_1, which is independent of the output variable, and the Decoder proper, or g_2, which is conditioned on the output variable. That way, generic transformations of the summed Encoder output can be learned by the Core and run in a single forward and backward pass each iteration. With this decomposition, Eq. 2 is rewritten as

E[y_j | x] = g_2(g_1(∑_{i=1}^n f(x_i, z_i)), z_j).   (4)

The complete architecture is depicted in Figure 2. In the following sections, all models are implemented in pytorch (Paszke et al., 2017), use Adam for optimization (Kingma & Ba, 2014), and have a hidden layer size of 128 for all layers. Variable embeddings for TOM are initialized from N(0, 10^-3). See Appendix C for additional details of this implementation.

5 EXPERIMENTS

This section presents a suite of experiments that evaluate the behavior of the implementation introduced in Section 4. See the Appendix for additional experimental details.

5.1 VALIDATING LEARNED VARIABLE EMBEDDINGS: DISCOVERING SPACE AND TIME

The experiments in this section test TOM's ability to learn variable embeddings that reflect our a priori intuition about the domain, in particular, the organization of space and time.

CIFAR. The first experiment is based on the CIFAR dataset (Krizhevsky, 2009). The pixels of the 32 × 32 images are converted to grayscale values in [0, 1], yielding 1024 variables. The goal is to predict all variable values, given only a subset of them as input. The model is trained to minimize the binary cross-entropy of each output, and it uses 2D VEs. The a priori, or Oracle, expectation is that the VEs form a 32 × 32 grid corresponding to how pixels are spatially laid out in an image.

Daily Temperature. The second experiment is based on the Melbourne minimum daily temperature dataset (Brownlee, 2016), a subset of a larger database for tracking climate change (Della-Marta et al., 2004). As above, the goal is to predict the daily temperature of the previous 10 days, given only some subset of them, by minimizing the MSE of each variable. The a priori, Oracle, expectation is that the VEs are laid out linearly in a single temporal dimension. The goal is to see whether TOM will also learn VEs (in a 2D space) that follow a clear 1D manifold that can be interpreted as time.

For both experiments, a subset of the input variables is randomly sampled at each training iteration, which simulates drawing tasks from a limited universe. The resulting learning process for the VEs is illustrated in Figures 3 and 4. The VEs for CIFAR pull apart and unfold, until they reflect the oracle embeddings (Figure 3). The remaining difference is that TOM peels the border of the CIFAR images (the upper loop of VEs at iteration 300K) away from their center (the lower grid). This makes sense, since CIFAR images all feature a central object, which semantically splits the image into foreground (the object itself) and background (the remaining ring of pixels around the object). Similarly, the VEs for daily temperature pull apart until they form a perfect 1D manifold representing the time dimension (Figure 4).
The main difference is that TOM has embedded this 1D structure as a ring in 2D, which is well-suited to the nonlinear encoder and decoder, since it mirrors an isotropic Gaussian distribution. Note that unlike visualization methods like SOM (Kohonen, 1990), PCA (Pearson, 1901), or t-SNE (van der Maaten & Hinton, 2008), TOM learns locations for each variable, not each sample. Furthermore, TOM has no explicit motivation to visualize; learned VEs are simply the locations found to be useful by gradient descent when solving the prediction problem. To get an idea of how learning VEs affects prediction performance, comparisons were run with three cases of fixed VEs: (1) all VEs set to zero, to address the question of whether differentiating variables with VEs is needed at all in the model; (2) random VEs, to address the question of whether simply having any unique label for variables is sufficient; and (3) oracle VEs, which reflect the human a priori expectation of how the variables should be arranged. The results show that the learned embeddings outperform zero and random embeddings, achieving performance on par with the Oracle (Table 2). The conclusion is that learned VEs in TOM are not only meaningful, but can help make superior predictions, without a priori knowledge of variable meaning. The next section shows how such VEs can be used to exploit regularities across tasks in an MTL setting.

5.2 EXPLOITING REGULARITIES ACROSS DISJOINT TASKS

This section considers two synthetic multi-task problems that contain underlying regularities across tasks. These regularities are not known to the model a priori; it can only exploit them via its VEs. The first problem evaluates TOM in a regression setting where input and output variables are drawn from the same continuous space; the second problem evaluates TOM in a classification setting. For classification tasks, each class defines a distinct output variable.

Transposed Gaussian Process. In the first problem, the universe is defined by a Gaussian process (GP). The GP is 1D, is zero-mean, and has an RBF kernel with length-scale 1. One task is generated for each (# inputs, # outputs) pair in {1, …, 10} × {1, …, 10}, for a total of 100 tasks. The "true" location of each variable lies in the single dimension of the GP, and is sampled uniformly from [0, 5]. Samples for the task are generated by sampling from the GP, and measuring the value at each variable location (a task-sampling sketch is given below). The dataset for each task contains 10 training samples, 10 validation samples, and 100 test samples. Samples are generated independently for each task. The goal is to minimize the MSE of the outputs. Figure 5 gives two examples of tasks drawn from this universe. This testbed is ideal for TOM, because, by the definition of the GP, it explicitly captures the idea that variables whose VEs are nearby are closely related, and every variable has some effect on all others.

Concentric Hyperspheres. In the second problem, each task is defined by a set of concentric hyperspheres. Many areas of human knowledge have been organized abstractly as such hyperspheres, e.g., planets around a star, electrons around an atom, social relationships around an individual, or suburbs around Washington D.C.; the idea is that a model that discovers this common organization could then share general knowledge across such areas more effectively. To test this hypothesis, one task is generated for each (# features n, # classes m) pair in {1, …, 10} × {2, …, 10}, for a total of 90 tasks.
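A minimal sketch of sampling one Transposed GP task as described above (helper names are my own):

    import numpy as np

    def sample_gp_task(n_in, n_out, n_samples, rng):
        # "True" variable locations in the GP's single dimension.
        locs = rng.uniform(0.0, 5.0, size=n_in + n_out)
        # Zero-mean GP with RBF kernel, length-scale 1 (plus jitter for stability).
        K = np.exp(-0.5 * (locs[:, None] - locs[None, :]) ** 2)
        K += 1e-8 * np.eye(len(locs))
        vals = rng.multivariate_normal(np.zeros(len(locs)), K, size=n_samples)
        return vals[:, :n_in], vals[:, n_in:], locs   # inputs, outputs, oracle VEs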
For each task, its origin ot is drawn from N (0, In). Then, for each class c ∈ {1, . . . ,m}, samples are drawn from Rn uniformly at distance c from ot, i.e., each class is defined by a (hyper) annulus. The dataset for each task contains five training samples, five validation samples, and 100 test samples per class. The model has no a priori knowledge that the classes are structured in annuli, or which annulus corresponds to which class, but it is possible to achieve high accuracy by making analogies of annuli across tasks, i.e., discovering the underlying structure of this universe. In these experiments, TOM is compared to five alternative methods: (1) TOM-STL, i.e. TOM trained on each task independently; (2) DR-MTL (Deep Residual MTL), the standard cross-domain (Table 1c) version of TOM, where instead of FiLM layers, each task has its own linear encoder and decoder layers, and all residual blocks are CoreResBlocks; (3) DR-STL, which is like DR-MTL except it is trained on each task independently; (4) SLO (Soft Layer Ordering; Meyerson & Miikkulainen, 2018), which uses a separate encoder and decoder for each task, and which is (as far as we know) the only prior Deep MTL approach that has been applied across disjoint tabular datasets; and (5) Oracle, i.e. TOM with VEs fixed to intuitively correct values. The Oracle is included to give an upper bound on how well the TOM architecture in Section 4 could possibly perform. The oracle VE for each Transposed GP task variable is the location where it is measured in the GP; for Concentric Hyperspheres, the oracle VE for each class c is c/10, and for the ith feature is oti. TOM outperforms the competing methods and achieves performance on par with the Oracle (Table 3). Note that the improvement of TOM over TOM-STL is much greater than that of DR-MTL over DR-STL, indicating that TOM is particularly well-suited to exploiting structure across disjoint data sets (learned VEs are shown in Figure 6a-b). Now that this suitability has been confirmed, the next section evaluates TOM across a suite of disjoint, and seemingly unrelated, real-world problems. 5.3 MULTI-TASK LEARNING ACROSS SEEMINGLY UNRELATED REAL-WORLD DATASETS This section evaluates TOM in the setting for which it was designed: learning a single shared model across seemingly unrelated real-world datasets. The set of tasks used is UCI-121 (Lichman, 2013; Fernández-Delgado et al., 2014), a set of 121 classification tasks that has been previously used to evaluate the overall performance of a variety of deep NN methods (Klambauer et al., 2017). The tasks come from diverse areas such as medicine, geology, engineering, botany, sociology, politics, and game-playing. Prior work has tuned each model to each task individually in the single-task regime; no prior work has undertaken learning of all 121 tasks in a single joint model. The datasets are highly diverse. Each simply defines a classification task that a machine learning practitioner was interested in solving. The number of features for a task range from 3 to 262, the number of classes from 2 to 100, and the number of samples from 10 to 130,064. To avoid underfitting to the larger tasks, C = 128, and after joint training all model parameters (θf , θg1 , θg2 , and z’s) are finetuned on each task with at least 5K samples. 
Note that it is not expected that training any two tasks jointly will improve performance in both tasks, but that training all 121 tasks jointly will improve performance overall, as the model learns general knowledge about how to make good predictions. Results across a suite of metrics are shown in Table 4. Mean Accuracy is the test accuracy averaged across all tasks. Normalized Accuracy scales the accuracy within each task before averaging across tasks, with 0 and 100 corresponding to the lowest and highest accuracies. Mean Rank averages the method’s rank across tasks, where the best method gets a rank of 0. Best % is the percentage of tasks for which the method achieves the top accuracy (with possible ties). Win % is the percentage of tasks for which the method achieves accuracy strictly greater than all other methods. TOM outperforms the alternative approaches across all metrics, showing its ability to learn many seemingly unrelated tasks successfully in a single model (see Figure 6c for a high-level visualization of learned VEs). In other words, TOM can both learn meaningful VEs and use them to improve prediction performance. 6 DISCUSSION AND FUTURE WORK Sections 2 and 3 developed the foundations for the TOM approach; Sections 4 and 5 illustrated its capabilities, demonstrating its value as a general multitask learning system. This section discusses four key areas of future work for increasing the understanding and applicability of the approach. First, there is an opportunity to develop a theoretical framework for understanding when TOM will work best. It is straightforward to extend universal approximation results from approximation of single functions (Cybenko, 1989; Lu et al., 2017; Kidger & Lyons, 2020) to approximation of a set of functions each with any input and output dimensionality via Eq. 2. It is also straightforward to extend convergence bounds for certain model classes, such as PAC bounds (Bartlett & Mendelson, 2002; Neyshabur et al., 2018), to TOM architectures implemented with these classes, if the “true” variable embeddings are fixed a priori, so they can simply be treated as features. However, a more intriguing direction involves understanding how the true locations of variables affects TOM’s ability to learn and exploit them, i.e., what are desirable theoretical properties of the space of variables? Second, in this paper, TOM was evaluated only in the case when the data for all tasks is always available, and the model is trained simultaneously across all tasks. However, it would also be natural to apply TOM in a meta-learning regime (Finn et al., 2017; Zintgraf et al., 2019), in which the model is trained explicitly to generalize to future tasks, and to lifelong learning (Thrun & Pratt, 2012; Brunskill & Li, 2014; Abel et al., 2018), where the model must learn new tasks as they appear over time. Simply freezing the learned parameters of TOM results in a parametric class of ML models with C parameters per variable that can be applied to new tasks. However, in practice, it should be possible to improve upon this approach by taking advantage of more sophisticated fine-tuning and parameter adaptation. For example, in low-data settings, methods can be adapted from meta-learning approaches that modulate model weights in a single forward pass instead of performing supervised backpropagation (Garnelo et al., 2018; Vuorio et al., 2019). 
Interestingly, although they are designed to address issues quite different from those motivating TOM, the architectures of such approaches have a functional decomposition that is similar to that of TOM at a high level (see e.g. Conditional Neural Processes, or CNPs; Garnelo et al., 2018). In essence, replacing the VEs in Eq. 2 with input samples and the variables with output samples yields a function that generates a prediction model given a dataset. This analogy suggests that it should be possible to extend the benefits of CNPs to TOM, including rich uncertainty information. Third, to make the foundational case for TOM, this paper focused on the setting where VEs are a priori unknown, but when such knowledge is available, it could be useful to integrate with learned VEs. Such an approach could eliminate the cost of relearning VEs, and suggest how to take advantage of spatially-customized architectures. E.g., convolution or attention layers could be used instead of dense layers as architectural primitives, as in vision and language tasks. Such specialization could be instrumental in making TOM more broadly applicable and more powerful in practice. Finally, one interpretation of Fig. 6c is that the learned VEs of classes encode a task-agnostic concept of “normal” vs. “abnormal” system states. TOM could be used to analyze the emergence of such general concepts and as an analogy engine: to describe states of one task in the language of another. 7 CONCLUSION This paper introduced the traveling observer model (TOM), which enables a single model to be trained across diverse tasks by embedding all task variables into a shared space. The framework was shown to discover intuitive notions of space and time and use them to learn variable embeddings that exploit knowledge across tasks, outperforming single- and multi-task alternatives. Thus, learning a single function that cares only about variable locations and their values is a promising approach to integrating knowledge across data sets that have no a priori connection. The TOM approach thus extends the benefits of multi-task learning to broader sets of tasks. ACKNOWLEDGEMENTS Thank you to Babak Hodjat and others in the Evolutionary AI research group for helpful discussions and technical feedback. Thank you also to the reviewers, particularly for their suggestions for improving the organizational structure and clarity of the paper. A ADDITIONAL EXPERIMENT ON THE EMBEDDING SIZE C In the experiments in Section 5.1 and 5.2, the VE dimensionality C for TOM was set to 2 in order to most clearly visualize the VEs that were learned. In the experiment in Section 5.3, C was increased in order to accommodate the scale-up to a large number of highly diverse real world tasks. In that experiment C was set to 128 in order to match the number of task-specific parameters of the other Deep MTL methods compared in Table 4. To evaluate the sensitivity of TOM to the setting of C, additional experiments were run for TOM on UCI-121 with C = 64 and C = 256. The results are shown in Table 5. Metrics for all settings of C are computed w.r.t. the external comparison methods, i.e., those in Table 4a. TOM with C = 64 produces performance comparable to C = 128, suggesting that optimizing C could be a useful lever for balancing performance and VE interpretability. B PYTORCH CODE To give a detailed picture of how the TOM architecture in this paper was implemented, the code for the forward pass of the model implemented in pytorch (Paske et al., 2017) is given in Figure 7. 
For efficiency, TOM is implemented with Conv1D layers with kernel size 1 instead of Dense layers. This approach enables the model to run the encoder and decoder on all variables in parallel. The fact that Conv layers are so highly optimized in pytorch makes the implementation substantially more efficient than with Dense layers. In this code, input batch has shape (batch size, input variables), input contexts has shape (1, VE dim, # input variables), and output contexts has shape (1, VE dim, # output variables). Code for TOM will be available at https://github. com/leaf-ai/tom-release. C ADDITIONAL EXPERIMENTAL DETAILS A sigmoid layer is applied at the end of the decoder for the CIFAR experiments, to squash the output between 0 and 1. For the CIFAR and Daily Temperature experiments, a subset of the variables is sampled each iteration to be used as input. This subset is sampled in the following way: (1) Sample the size k of the subset uniformly from [1, nt], where nt is the number of variables in the experiment; (2) Sample a subset of variables of size k uniformly from all subsets of size k. This sampling method ensures that every subset size has an equal chance of getting selected, so that the universe is not biased towards tasks of a particular size. E.g., if instead the subset were created by sampling each variable independently with probability p, then the subset size would concentrate tightly around pnt. For classification tasks, each class defines a distinct output variable, i.e., a K-class classification task has K output variables. The squared hinge loss was used for classification tasks (Janocha & Czarnecki, 2017). It is preferable to categorical cross-entropy loss in this setting, because it does not require taking a softmax across output variables, so the outputs are kept separate. Also, the loss becomes exactly zero once a sample is learned strongly, so that the model does not continue to overfit as remaining samples and tasks are learned. The number of blocks in the encoder, core, and decoder is N = 3 for all problems except UCI-121, for which it is N = 10. All experiments use a hidden size of 128 for all dense layers aside from the final decoder layer that maps to the output space. The batch size was 32 for CIFAR and Daily Temperature, and max(200, # train samples) for all other tasks. At each step, To tasks are uniformly sampled from the set of all tasks, and gradients are summed over a batch for each task in the sample. To = 1 in all experiments except UCI-121, for which To = 32. To allow for multi-task training with datasets of varying numbers of samples, we say the model has completed one epoch each time it is evaluated on the validation set. An epoch is 1000 steps for CIFAR, 100 steps for Daily Temperature, 1K steps for Transposed Gaussian Process, 1K steps for Concentric Hyperspheres, and 10K steps for UCI-121. For CIFAR, the official training and test splits are used for training and testing. No validation set is needed for CIFAR, because none of the models can overfit to the training set. For Daily Temperature, the second-to-last year of data is withheld for validation, and the final year is withheld for testing. The UCI-121 experiments use the preprocessed versions of the official train-val-test splits (https://github.com/bioinf-jku/SNNs/tree/master/UCI). Adam is used for all experiments, with all parameters initialized to their default values. In all experiments except UCI-121, the learning rate is kept constant at 0.001 throughout training. 
In UCI-121, the learning rate is decreased by a factor of two when the mean validation accuracy has not increased in 20 epochs; it is decreased five times; model training stops when it would be decreased a sixth time. Models are trained for 500K steps for CIFAR, 100K steps for Daily Temperature, and 250K for Transposed Gaussian Process and Concentric Hyperspheres. The test performance for each task is its performance on the test set after the epoch of its best validation performance. Weights are initialized using the default pytorch initialization (aside from the SkipInit α scalars, which are initialized to zero (De & Smith, 2020)). The experiments in Section 5.1 use no weight decay; in Section 5.2 use weight decay of 10−4; and in Section 5.3 use weight decay of 10−5. Dropout is set to 0.0 for CIFAR, Daily Temperature, and Concentric Hyperspheres; and 0.5 for Transposed Gaussian Process and UCI-121. In UCI-121, fully-trained MTL models are finetuned to tasks with more than 5,000 samples, using the same optimizer configuration as for joint training, except the steps-per-epoch is set to d# train samples/batch sizee, the learning rate is initialized to 0.0001, the patience for early stopping is set to 100, and the validation performance is smoothed over every 10 epochs (simple moving average), following the protocol used to train single-task models in prior work (Klambauer et al., 2017). TOM uses a VE size of C = 2 for all experiments, except for UCI-121, where C = 128 in order to accommodate the complexity of such a large and diverse set of tasks. For Figure 6c, t-SNE (van der Maaten & Hinton, 2008) was used to reduce the dimensionality to two. t-SNE was run for 10K iterations with default parameters in the scikit-learn implementation (Pedregosa et al., 2011), after first reducing the dimensionality from 128 to 32 via PCA. Independent runs of t-SNE yielded qualitatively similar results. Autoencoding (i.e., predicting the input variables as well as unseen variables) was used for CIFAR, Daily Temperature, and Transposed Guassian Process; it was not used for Concentric Hyperspheres or UCI-121. The Soft Layer Ordering architecture follows the original implementation (Meyerson & Miikkulainen, 2018). There are four shared ReLU layers, each of size 128, with dropout after each to ease sharing across different soft combinations of layers. In Tables 2 and 3 means and standard error for each method are computed over ten runs. The Daily Temperature dataset was downloaded from https://raw.githubusercontent. com/jbrownlee/Datasets/master/daily-min-temperatures.csv. D ADDITIONAL DETAILED RESULTS FOR UCI-121 EXPERIMENT Table 6 contains test accuracies for each UCI-121 task for all methods run in the experiments in Section 5.3. 
Task DR-STL TOM-STL DR-MTL SLO TOM
led-display 75.600 27.200 79.600 73.600 74.000
lenses 83.333 66.667 50.000 50.000 50.000
letter 95.980 97.480 87.220 94.580 94.780
libras 43.333 11.111 78.889 76.667 80.000
low-res-spect 81.955 56.391 83.459 82.707 90.977
lung-cancer 50.000 25.000 62.500 50.000 62.500
lymphography 86.486 56.757 94.595 86.486 86.486
magic 86.982 86.898 81.325 86.877 87.024
mammographic 81.250 82.500 80.833 82.083 83.750
miniboone 92.782 94.630 93.345 94.338 93.532
molec-biol-promoter 88.462 50.000 69.231 61.538 92.308
molec-biol-splice 85.696 92.723 86.324 85.822 93.350
monks-1 65.509 50.000 71.991 86.574 80.787
monks-2 40.509 67.130 62.731 64.583 62.500
monks-3 74.306 52.778 66.898 68.981 58.102
mushroom 99.655 100.000 99.803 100.000 100.000
musk-1 83.193 57.143 92.437 90.756 91.597
musk-2 98.666 98.848 98.787 99.272 99.636
nursery 99.568 99.877 95.926 99.753 99.630
oocytes merluccius nucleus 4d 83.922 70.588 77.647 83.529 85.098
oocytes merluccius states 2f 89.412 92.549 94.510 92.157 95.294
oocytes trisopterus nucleus 2f 73.684 75.877 75.439 78.509 78.947
oocytes trisopterus states 5b 94.298 92.544 93.421 94.737 92.982
optical 95.993 95.326 94.658 94.380 95.938
ozone 97.161 97.161 97.161 97.161 97.161
page-blocks 95.468 96.199 94.371 96.272 96.345
parkinsons 89.796 75.510 83.673 87.755 83.673
pendigits 96.855 97.055 97.055 96.884 96.627
pima 71.875 71.875 73.438 75.521 76.562
pittsburg-bridges-MATERIAL 73.077 76.923 88.462 84.615 92.308
pittsburg-bridges-REL-L 69.231 65.385 65.385 73.077 61.538
pittsburg-bridges-SPAN 52.174 56.522 65.217 65.217 60.870
pittsburg-bridges-T-OR-D 84.000 88.000 84.000 84.000 88.000
pittsburg-bridges-TYPE 38.462 50.000 61.538 65.385 53.846
planning 64.444 71.111 71.111 68.889 71.111
plant-margin 76.750 6.750 71.250 69.500 74.000
plant-shape 39.000 20.750 31.500 65.750 70.500
plant-texture 74.250 4.000 69.750 69.000 77.250
post-operative 72.727 72.727 77.273 72.727 72.727
primary-tumor 45.122 30.488 47.561 47.561 51.220
ringnorm 95.027 98.108 84.324 96.054 98.324
seeds 80.769 80.769 86.538 94.231 92.308
semeion 95.729 92.462 94.724 88.693 94.472
soybean 65.426 18.617 89.628 82.979 83.777
spambase 93.826 92.609 92.609 93.478 93.913
spect 61.828 56.989 67.204 65.054 68.280
spectf 49.733 91.979 60.963 60.428 91.979
statlog-australian-credit 66.860 68.023 68.023 63.372 62.209
statlog-german-credit 73.600 76.000 74.400 76.800 74.800
statlog-heart 89.552 79.104 89.552 82.090 83.582
statlog-image 96.360 95.841 90.988 97.054 97.747
statlog-landsat 89.900 91.250 83.450 88.950 90.600
statlog-shuttle 98.621 99.945 98.021 99.910 99.945
statlog-vehicle 73.934 48.341 78.199 79.621 74.882
steel-plates 74.845 64.536 68.041 76.495 77.526
synthetic-control 73.333 69.333 97.333 96.667 99.333
teaching 60.526 36.842 55.263 52.632 47.368
thyroid 98.308 98.775 96.820 97.841 98.804
tic-tac-toe 97.071 97.071 97.071 97.071 96.653
titanic 77.636 77.091 78.364 78.364 78.364
trains 100.000 50.000 100.000 100.000 100.000
twonorm 98.270 98.108 98.162 98.108 98.054
vertebral-column-2clases 83.117 67.532 87.013 87.013 85.714
vertebral-column-3clases 70.130 59.740 84.416 68.831 85.714
wall-following 86.437 98.827 72.507 90.396 97.434
waveform 87.520 87.360 87.760 86.800 87.760
waveform-noise 85.920 85.360 85.360 84.720 85.840
wine 100.000 70.455 100.000 100.000 100.000
wine-quality-red 59.000 57.500 57.750 63.750 61.000
wine-quality-white 56.863 53.758 53.513 57.761 56.944
yeast 60.108 53.908 60.377 59.838 59.838
zoo 96.000 48.000 96.000 96.000 92.000
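For reference, the UCI-121 learning-rate schedule described in Appendix C above can be sketched as follows. This is a minimal illustration of one reading of the protocol, not the authors' code; it assumes an optimizer exposing PyTorch-style param_groups, and the class name is hypothetical.

```python
class PlateauHalver:
    """Halve the learning rate when mean validation accuracy has not
    improved for `patience` epochs; after `max_decreases` halvings,
    the next trigger stops training (step returns False)."""
    def __init__(self, optimizer, patience=20, max_decreases=5):
        self.opt = optimizer
        self.patience = patience
        self.max_decreases = max_decreases
        self.best = float("-inf")
        self.wait = 0
        self.decreases = 0

    def step(self, val_acc):
        """Call once per epoch; returns True while training should continue."""
        if val_acc > self.best:
            self.best, self.wait = val_acc, 0
            return True
        self.wait += 1
        if self.wait < self.patience:
            return True
        self.wait = 0
        if self.decreases == self.max_decreases:
            return False  # would be the sixth decrease: stop training
        self.decreases += 1
        for group in self.opt.param_groups:
            group["lr"] /= 2.0  # decrease by a factor of two
        return True
```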
1. How does the proposed method handle unrelated tasks with different output dimensions?
2. Are the observed data still used during training, even when building the task embedding?
3. What do x, y, and z represent in each task, particularly in the toy example and other experiments?
4. Can you clarify the sentence about learning z alongside f and g by gradient descent?
5. Is the name "Traveling Observer Model" misleading, given that the model can observe all training data at once?
6. Would it be helpful to provide explanations for the abbreviations used, such as VEs, HW, LN, MS, and SNN?
7. Could the authors compare their method to a meta-learning baseline, such as the approach in [1], which targets multimodal learning and uses task embedding dependent on observable data with a meta loss for training?
Review
Review

This paper tries to solve multi-task learning by building a task embedding from the training data and using a shared decoder to predict new data. The paper is well structured, and the general idea is easy to understand. The idea of mapping the observed inputs and outputs into a shared space as the task embedding is good. However, I got confused when I looked into the details and the intuition behind them. I list some of my puzzles below and hope to hear from the authors:

1. The tasks could be unrelated. Do they need to have the same output dimension? Otherwise, you cannot use the same decoder for all the tasks.

2. When you train g, are you still using the observed data?

3. I am a little puzzled about the meaning of x, y, z in each task. In the toy example (Figure 1), are x, y, and z all positions? Do we need to predict the value given z, or to predict y given z? What does the value mean in Figure 1? Also, can you explicitly introduce what x, y, and z represent in the other experiments?

4. What does this sentence mean: "the z's are not known a priori, but they can be learned alongside f and g by gradient descent" (10th line on page two)? From my perspective, z should be the input or a transformation of the input, right? Because in Equation 4, x_j is shown on the left side but the right side only contains z_j.

5. The name of the model is somewhat misleading: TRAVELING OBSERVER MODEL. "Travel" suggests the flow of time, but it seems that the model can see all the training data (observations) at any time (e.g., in the last experiment).

6. It would be better if the authors explained each abbreviation before using it. For example, VEs: does it mean variable embeddings? What about HW, LN, MS, and SNN in Table 4?

7. Can the proposed method be compared with some meta-learning baseline? For example [1], which is targeted at multimodal learning. They also have a task embedding for each task and borrow the idea from FiLM; their task embedding also depends on the observed data, and they use a meta loss to train the model.

I am sorry if I missed any part that has already been explained in the paper. Looking forward to your reply.

[1] Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation
ICLR
Title
The Traveling Observer Model: Multi-task Learning Through Spatial Variable Embeddings

Abstract
This paper frames a general prediction system as an observer traveling around a continuous space, measuring values at some locations, and predicting them at others. The observer is completely agnostic about any particular task being solved; it cares only about measurement locations and their values. This perspective leads to a machine learning framework in which seemingly unrelated tasks can be solved by a single model, by embedding their input and output variables into a shared space. An implementation of the framework is developed in which these variable embeddings are learned jointly with internal model parameters. In experiments, the approach is shown to (1) recover intuitive locations of variables in space and time, (2) exploit regularities across related datasets with completely disjoint input and output spaces, and (3) exploit regularities across seemingly unrelated tasks, outperforming task-specific single-task models and multi-task learning alternatives. The results suggest that even seemingly unrelated tasks may originate from similar underlying processes, a fact that the traveling observer model can use to make better predictions.

1 INTRODUCTION

Natural organisms benefit from the fact that their sensory inputs and action outputs are all organized in the same space, that is, the physical universe. This consistency makes it easy to apply the same predictive functions across diverse settings. Deep multi-task learning (Deep MTL) has shown a similar ability to adapt knowledge across tasks whose observed variables are embedded in a shared space. Examples include vision, where the input for all tasks (photograph, drawing, or otherwise) is pixels arranged in a 2D plane (Zhang et al., 2014; Misra et al., 2016; Rebuffi et al., 2017); natural language (Collobert & Weston, 2008; Luong et al., 2016; Hashimoto et al., 2017), speech processing (Seltzer & Droppo, 2013; Huang et al., 2015), and genomics (Alipanahi et al., 2015), which exploit the 1D structure of text, waveforms, and nucleotide sequences; and video game-playing (Jaderberg et al., 2017; Teh et al., 2017), where interactions are organized across space and time.

Yet, many real-world prediction tasks have no such spatial organization; their input and output variables are simply labeled values, e.g., the height of a tree, the cost of a haircut, or the score on a standardized test. To make matters worse, these sets of variables are often disjoint across a set of tasks. These challenges have led the MTL community to avoid such tasks, despite the fact that general knowledge about how to make good predictions can arise from solving seemingly "unrelated" tasks (Mahmud & Ray, 2008; Mahmud, 2009; Meyerson & Miikkulainen, 2019).

This paper proposes a solution: Learn all variable locations in a shared space, while simultaneously training the prediction model itself (Figure 1). To illustrate this idea, Figure 1a gives an example of four tasks whose variable values are measured at different locations in the same underlying 2D embedding space. The shape of each marker (i.e., ◦, □, △, ⋆) denotes the task to which that variable belongs; white markers denote input variables, black markers denote output variables, and the background coloring indicates the variable values in the entire embedding space when the current sample is drawn.
As a concrete example, the color could indicate the air temperature at each point in a geographical region at a given moment in time, and each marker the location of a temperature sensor (however, note that the embedding space is generally more abstract). Figure 1b-c shows a model that can be applied to any task in this universe, using the ◦ task as an example: (b) The function f encodes the value of each observed variable $x_i$ given its 2D location $z_i \in \mathbb{R}^2$, and these encodings are aggregated by elementwise addition $\oplus$; (c) The function g decodes the aggregated encoding to a prediction for $y_j$ at its location $z_j$.

Such a predictor can be viewed as a traveling observer model (TOM): It traverses the space of variables, taking a measurement at the location of each input. Given these observations, the model can make a prediction for the value at the location of an output. In general, the embedded locations z are not known a priori (i.e., when input and output variables do not have obvious physical locations), but they can be learned alongside f and g by gradient descent.

The input and output spaces of a prediction problem can be standardized so that the measured value of each input and output variable is a scalar. The prediction model can then be completely agnostic about the particular task for which it is making a prediction. By learning variable embeddings (VEs), i.e., the z's, the model can capture variable relationships explicitly and support joint training of a single architecture across seemingly unrelated tasks with disjoint input and output spaces. TOM thus establishes a new lower bound on the commonalities shared across real-world machine learning problems: They are all drawn from the same space of variables that humans can and do measure.

This paper develops a first implementation of TOM, using an encoder-decoder architecture, with variable embeddings incorporated using FiLM (Perez et al., 2018). In the experiments, the implementation is shown to (1) recover the intuitive locations of variables in space and time, (2) exploit regularities across related datasets with disjoint input and output spaces, and (3) exploit regularities across seemingly unrelated tasks to outperform single-task models tuned to each task, as well as current Deep MTL alternatives. The results confirm that TOM is a promising framework for representing and exploiting the underlying processes of seemingly unrelated tasks.

2 BACKGROUND: MULTI-TASK ENCODER-DECODER DECOMPOSITIONS

This section reviews Deep MTL methods from the perspective of decomposition into encoders and decoders (Table 1). In MTL, there are $T$ tasks $\{(\mathbf{x}_t, \mathbf{y}_t)\}_{t=1}^{T}$ that can, in general, be drawn from different domains and have varying input and output dimensionality. The $t$th task has $n_t$ input variables $[x_{t1}, \ldots, x_{tn_t}] = \mathbf{x}_t \in \mathbb{R}^{n_t}$ and $m_t$ output variables $[y_{t1}, \ldots, y_{tm_t}] = \mathbf{y}_t \in \mathbb{R}^{m_t}$. Two tasks $(\mathbf{x}_t, \mathbf{y}_t)$ and $(\mathbf{x}_{t'}, \mathbf{y}_{t'})$ are disjoint if their input and output variables are non-overlapping, i.e., $\left(\{x_{ti}\}_{i=1}^{n_t} \cup \{y_{tj}\}_{j=1}^{m_t}\right) \cap \left(\{x_{t'i}\}_{i=1}^{n_{t'}} \cup \{y_{t'j}\}_{j=1}^{m_{t'}}\right) = \emptyset$. The goal is to exploit regularities across task models $\mathbf{x}_t \mapsto \hat{\mathbf{y}}_t$ by jointly training them with overlapping parameters. The standard intra-domain approach is for all task models to share their encoder f, and each to have its own task-specific decoder $g_t$ (Table 1a).
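For concreteness, the shared-encoder decomposition of Table 1a can be sketched in PyTorch as follows. This is a generic illustration, not code from the paper; module sizes and names are made up for the example.

```python
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    """Intra-domain Deep MTL (Table 1a): one shared encoder f,
    plus a task-specific affine decoder g_t per task."""
    def __init__(self, input_dim, hidden_dim, output_dims):
        super().__init__()
        self.encoder = nn.Sequential(           # shared across all tasks
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # decoders are kept to single affine layers, as is common,
        # so that most of the learning happens in the shared encoder
        self.decoders = nn.ModuleList(
            nn.Linear(hidden_dim, m_t) for m_t in output_dims
        )

    def forward(self, x, task_id):
        return self.decoders[task_id](self.encoder(x))
```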
This setup was used in the original introduction of MTL (Caruana, 1998), has been broadly explored in the linear regime (Argyriou et al., 2008; Kang et al., 2011; Kumar & Daumé, 2012), and is the most common approach in Deep MTL (Huang et al., 2013; Zhang et al., 2014; Dong et al., 2015; Liu et al., 2015; Ranjan et al., 2016; Jaderberg et al., 2017). The main limitation of this approach is that it applies only to sets of tasks that are all drawn from the same domain. It also runs the risk of the separate decoders doing so much of the learning that there is not much left to be shared, which is why the decoders are usually single affine layers.

To address the issue of limited sharing, the task embeddings approach trains a single encoder f and a single decoder g, with all task-specific parameters learned in embedding vectors $z_t$ that semantically characterize each task, and which are fed into the model as additional input (Yang & Hospedales, 2014; Bilen & Vedaldi, 2017; Zintgraf et al., 2019) (Table 1b). Such methods require that all tasks have the same input and output space, but are flexible in how the embeddings can be used to adapt the model to each task. As a result, they can learn tighter connections between tasks than separate decoders, and these relationships can be analyzed by looking at the learned embeddings.

To exploit regularities across tasks from diverse and disjoint domains, cross-domain methods have been introduced. Existing methods address the challenge of disjoint output and input spaces by using separate decoders and encoders for each domain (Table 1c), and thus they require some other method of sharing model parameters across tasks, such as sharing some of their layers (Kaiser et al., 2017; Meyerson & Miikkulainen, 2018) or drawing their parameters from a shared pool (Meyerson & Miikkulainen, 2019). For many datasets, the separate encoders and decoders absorb too much functionality to share optimally, and their complexity makes it difficult to analyze the relationships between tasks.

Earlier work prior to deep learning showed that, from an algorithmic learning theory perspective, sharing knowledge across tasks should always be useful (Mahmud & Ray, 2008; Mahmud, 2009), but the accompanying experiments were limited to learning biases in a decision tree generation process, i.e., the learned models themselves were not shared across tasks. TOM extends the notion of task embeddings to variable embeddings in order to apply the idea in the cross-domain setting (Table 1d). The method is described in the next section.

3 THE TRAVELING OBSERVER MODEL

Consider the set of all scalar random variables that could possibly be measured, $\{v_1, v_2, \ldots\} = V$. Each $v_i \in V$ could be an input or output variable for some prediction task. To characterize each $v_i$ semantically, associate with it a vector $z_i \in \mathbb{R}^C$ that encodes the meaning of $v_i$, e.g., "height of left ear of human adult in inches", "answer to survey question 9 on a scale of 1 to 5", "severity of heart disease", "brightness of top-left pixel of photograph", etc. This vector $z_i$ is called the variable embedding (VE) of $v_i$.

Variable embeddings could be handcoded, e.g., based on some featurization of the space of variables, but such a handcoding is usually unavailable, and would likely miss some of the underlying semantic regularities across variables. An alternative approach is to learn variable embeddings based on their utility in solving prediction problems of interest.

A prediction task $(\mathbf{x}, \mathbf{y}) = ([x_1, \ldots, x_n], [y_1, \ldots, y_m])$
is defined by its set of observed variables $\{x_i\}_{i=1}^{n} \subseteq V$ and its set of target variables $\{y_j\}_{j=1}^{m} \subseteq V$ whose values are unknown. The goal is to find a prediction function $\Omega$ that can be applied across any prediction task of interest, so that it can learn to exploit regularities across such problems. Let $z_i$ and $z_j$ be the variable embeddings corresponding to $x_i$ and $y_j$, respectively. Then, this universal prediction model is of the form

$\mathbb{E}[y_j \mid \mathbf{x}] = \Omega(\mathbf{x}, \{z_i\}_{i=1}^{n}, z_j).$ (1)

Importantly, for any two tasks $(\mathbf{x}_t, \mathbf{y}_t)$ and $(\mathbf{x}_{t'}, \mathbf{y}_{t'})$, their prediction functions (Eq. 1) differ only in their z's, which enforces the constraint that functionality is otherwise completely shared across the models. One can view $\Omega$ as a traveling observer, who visits several locations in the C-dimensional variable space, takes measurements at those locations, and uses this information to make predictions of values at other locations.

To make $\Omega$ concrete, it must be a function that can be applied to any number of variables, can fit any set of prediction problems, and is invariant to variable ordering, since we cannot in general assume that a meaningful order exists. These requirements lead to the following decomposition:

$\mathbb{E}[y_j \mid \mathbf{x}] = \Omega(\mathbf{x}, \{z_i\}_{i=1}^{n}, z_j) = g\left(\sum_{i=1}^{n} f(x_i, z_i),\, z_j\right),$ (2)

where f and g are functions called the encoder and decoder, with trainable parameters $\theta_f$ and $\theta_g$, respectively. The variable embeddings z tell f and g which variables they are observing, and these z can be learned by gradient descent alongside $\theta_f$ and $\theta_g$. A depiction of the model is shown in Figure 1.

For some integer M, $f: \mathbb{R}^{C+1} \to \mathbb{R}^{M}$ and $g: \mathbb{R}^{M+C} \to \mathbb{R}$. In principle, f and g could be any sufficiently expressive functions of this form. A natural choice is to implement them as neural networks. They are called the encoder and decoder because they map variables to and from a latent space of size M. This model can then be trained end-to-end with gradient descent. A batch for gradient descent is constructed by sampling a prediction problem, e.g., a task, from the distribution of problems of interest, and then sampling a batch of data from the dataset for that problem. Notice that, in addition to supervised training, in this framework it is natural to autoencode, i.e., predict input variables, and to subsample inputs to simulate multiple tasks drawn from the same universe.

The question remains: How can f and g be designed so that they can sufficiently capture a broad range of prediction behavior, and be effectively conditioned by variable embeddings? The next section introduces an experimental architecture that satisfies these requirements.

4 INSTANTIATION

The experiments in this paper implement TOM using a generic architecture built from standard components (Figure 2). The encoder and decoder are conditioned on VEs via FiLM layers (Perez et al., 2018), which provide a flexible yet inexpensive way to adapt functionality to each variable, and have been previously used to incorporate task embeddings (Vuorio et al., 2019; Zintgraf et al., 2019). For simplicity, the FiLM layers are based on affine transformations of VEs. Specifically, the $\ell$th FiLM layer $F_\ell$ is parameterized by affine layers $W^{*}_{\ell}$ and $W^{+}_{\ell}$, and, given a variable embedding z, the hidden state h is modulated by

$F_\ell(h) = W^{*}_{\ell}(z) \odot h + W^{+}_{\ell}(z),$ (3)

where $\odot$ is the Hadamard product. A FiLM layer is located alongside each fully-connected layer in the encoder and decoder, both of which consist primarily of residual blocks.
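The FiLM modulation of Eq. 3 is simple to implement; a minimal PyTorch sketch is given below. Dimensions and names are illustrative (C is the VE size, M the hidden size), and the surrounding residual blocks of Figure 2 are omitted.

```python
import torch.nn as nn

class FiLMLayer(nn.Module):
    """Eq. 3: F(h) = W*(z) ⊙ h + W+(z), where z is a variable
    embedding of size C and h a hidden state of size M."""
    def __init__(self, ve_dim, hidden_dim):
        super().__init__()
        self.scale = nn.Linear(ve_dim, hidden_dim)  # affine W*_l
        self.shift = nn.Linear(ve_dim, hidden_dim)  # affine W+_l

    def forward(self, h, z):
        # elementwise (Hadamard) modulation of the hidden state
        return self.scale(z) * h + self.shift(z)
```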
To avoid deleterious behavior of batch norm across diverse tasks and small datasets/batches, the recently proposed SkipInit (De & Smith, 2020) is used as a replacement to stabilize training. SkipInit adds a trainable scalar α, initialized to 0, at the end of each residual block, and uses dropout for regularization. Finally, for computational efficiency, the decoder is redecomposed into the Core, or $g_1$, which is independent of the output variable, and the Decoder proper, or $g_2$, which is conditioned on the output variable. That way, generic transformations of the summed Encoder output can be learned by the Core and run in a single forward and backward pass each iteration. With this decomposition, Eq. 2 is rewritten as

$\mathbb{E}[y_j \mid \mathbf{x}] = g_2\left(g_1\left(\sum_{i=1}^{n} f(x_i, z_i)\right),\, z_j\right).$ (4)

The complete architecture is depicted in Figure 2. In the following sections, all models are implemented in PyTorch (Paszke et al., 2017), use Adam for optimization (Kingma & Ba, 2014), and have a hidden layer size of 128 for all layers. Variable embeddings for TOM are initialized from $\mathcal{N}(0, 10^{-3})$. See Appendix C for additional details of this implementation.

5 EXPERIMENTS

This section presents a suite of experiments that evaluate the behavior of the implementation introduced in Section 4. See the Appendix for additional experimental details.

5.1 VALIDATING LEARNED VARIABLE EMBEDDINGS: DISCOVERING SPACE AND TIME

The experiments in this section test TOM's ability to learn variable embeddings that reflect our a priori intuition about the domain, in particular, the organization of space and time.

CIFAR. The first experiment is based on the CIFAR dataset (Krizhevsky, 2009). The pixels of the 32 × 32 images are converted to grayscale values in [0, 1], yielding 1024 variables. The goal is to predict all variable values, given only a subset of them as input. The model is trained to minimize the binary cross-entropy of each output, and it uses 2D VEs. The a priori, or Oracle, expectation is that the VEs form a 32 × 32 grid corresponding to how pixels are spatially laid out in an image.

Daily Temperature. The second experiment is based on the Melbourne minimum daily temperature dataset (Brownlee, 2016), a subset of a larger database for tracking climate change (Della-Marta et al., 2004). As above, the goal is to predict the daily temperature of the previous 10 days, given only some subset of them, by minimizing the MSE of each variable. The a priori, Oracle, expectation is that the VEs are laid out linearly in a single temporal dimension. The goal is to see whether TOM will also learn VEs (in a 2D space) that follow a clear 1D manifold that can be interpreted as time.

For both experiments, a subset of the input variables is randomly sampled at each training iteration, which simulates drawing tasks from a limited universe. The resulting learning process for the VEs is illustrated in Figures 3 and 4. The VEs for CIFAR pull apart and unfold, until they reflect the oracle embeddings (Figure 3). The remaining difference is that TOM peels the border of the CIFAR images (the upper loop of VEs at iteration 300K) away from their center (the lower grid). This makes sense, since CIFAR images all feature a central object, which semantically splits the image into foreground (the object itself) and background (the remaining ring of pixels around the object). Similarly, the VEs for daily temperature pull apart until they form a perfect 1D manifold representing the time dimension (Figure 4).
The main difference is that TOM has embedded this 1D structure as a ring in 2D, which is well-suited to the nonlinear encoder and decoder, since it mirrors an isotropic Gaussian distribution. Note that unlike visualization methods such as SOM (Kohonen, 1990), PCA (Pearson, 1901), or t-SNE (van der Maaten & Hinton, 2008), TOM learns locations for each variable, not each sample. Furthermore, TOM has no explicit motivation to visualize; learned VEs are simply the locations found to be useful by gradient descent when solving the prediction problem.

To get an idea of how learning VEs affects prediction performance, comparisons were run with three cases of fixed VEs: (1) all VEs set to zero, to address the question of whether differentiating variables with VEs is needed at all in the model; (2) random VEs, to address the question of whether simply having any unique label for variables is sufficient; and (3) oracle VEs, which reflect the human a priori expectation of how the variables should be arranged. The results show that the learned embeddings outperform zero and random embeddings, achieving performance on par with the Oracle (Table 2). The conclusion is that learned VEs in TOM are not only meaningful, but can help make superior predictions, without a priori knowledge of variable meaning. The next section shows how such VEs can be used to exploit regularities across tasks in an MTL setting.

5.2 EXPLOITING REGULARITIES ACROSS DISJOINT TASKS

This section considers two synthetic multi-task problems that contain underlying regularities across tasks. These regularities are not known to the model a priori; it can only exploit them via its VEs. The first problem evaluates TOM in a regression setting where input and output variables are drawn from the same continuous space; the second problem evaluates TOM in a classification setting. For classification tasks, each class defines a distinct output variable.

Transposed Gaussian Process. In the first problem, the universe is defined by a Gaussian process (GP). The GP is 1D, is zero-mean, and has an RBF kernel with length-scale 1. One task is generated for each (# inputs, # outputs) pair in $\{1, \ldots, 10\} \times \{1, \ldots, 10\}$, for a total of 100 tasks. The "true" location of each variable lies in the single dimension of the GP, and is sampled uniformly from [0, 5]. Samples for the task are generated by sampling from the GP, and measuring the value at each variable location. The dataset for each task contains 10 training samples, 10 validation samples, and 100 test samples. Samples are generated independently for each task. The goal is to minimize the MSE of the outputs. Figure 5 gives two examples of tasks drawn from this universe. This testbed is ideal for TOM, because, by the definition of the GP, it explicitly captures the idea that variables whose VEs are nearby are closely related, and every variable has some effect on all others.

Concentric Hyperspheres. In the second problem, each task is defined by a set of concentric hyperspheres. Many areas of human knowledge have been organized abstractly as such hyperspheres, e.g., planets around a star, electrons around an atom, social relationships around an individual, or suburbs around Washington D.C.; the idea is that a model that discovers this common organization could then share general knowledge across such areas more effectively. To test this hypothesis, one task is generated for each (# features n, # classes m) pair in $\{1, \ldots, 10\} \times \{2, \ldots, 10\}$, for a total of 90 tasks.
For each task, its origin $o_t$ is drawn from $\mathcal{N}(0, I_n)$. Then, for each class $c \in \{1, \ldots, m\}$, samples are drawn from $\mathbb{R}^n$ uniformly at distance c from $o_t$, i.e., each class is defined by a (hyper) annulus. The dataset for each task contains five training samples, five validation samples, and 100 test samples per class. The model has no a priori knowledge that the classes are structured in annuli, or which annulus corresponds to which class, but it is possible to achieve high accuracy by making analogies of annuli across tasks, i.e., discovering the underlying structure of this universe.

In these experiments, TOM is compared to five alternative methods: (1) TOM-STL, i.e., TOM trained on each task independently; (2) DR-MTL (Deep Residual MTL), the standard cross-domain (Table 1c) version of TOM, where instead of FiLM layers, each task has its own linear encoder and decoder layers, and all residual blocks are CoreResBlocks; (3) DR-STL, which is like DR-MTL except it is trained on each task independently; (4) SLO (Soft Layer Ordering; Meyerson & Miikkulainen, 2018), which uses a separate encoder and decoder for each task, and which is (as far as we know) the only prior Deep MTL approach that has been applied across disjoint tabular datasets; and (5) Oracle, i.e., TOM with VEs fixed to intuitively correct values. The Oracle is included to give an upper bound on how well the TOM architecture in Section 4 could possibly perform. The oracle VE for each Transposed GP task variable is the location where it is measured in the GP; for Concentric Hyperspheres, the oracle VE for each class c is c/10, and for the ith feature it is $o_{ti}$.

TOM outperforms the competing methods and achieves performance on par with the Oracle (Table 3). Note that the improvement of TOM over TOM-STL is much greater than that of DR-MTL over DR-STL, indicating that TOM is particularly well-suited to exploiting structure across disjoint datasets (learned VEs are shown in Figure 6a-b). Now that this suitability has been confirmed, the next section evaluates TOM across a suite of disjoint, and seemingly unrelated, real-world problems.

5.3 MULTI-TASK LEARNING ACROSS SEEMINGLY UNRELATED REAL-WORLD DATASETS

This section evaluates TOM in the setting for which it was designed: learning a single shared model across seemingly unrelated real-world datasets. The set of tasks used is UCI-121 (Lichman, 2013; Fernández-Delgado et al., 2014), a set of 121 classification tasks that has been previously used to evaluate the overall performance of a variety of deep NN methods (Klambauer et al., 2017). The tasks come from diverse areas such as medicine, geology, engineering, botany, sociology, politics, and game-playing. Prior work has tuned each model to each task individually in the single-task regime; no prior work has undertaken learning of all 121 tasks in a single joint model. The datasets are highly diverse: each simply defines a classification task that a machine learning practitioner was interested in solving. The number of features per task ranges from 3 to 262, the number of classes from 2 to 100, and the number of samples from 10 to 130,064. To avoid underfitting to the larger tasks, C = 128, and after joint training all model parameters ($\theta_f$, $\theta_{g_1}$, $\theta_{g_2}$, and the z's) are finetuned on each task with at least 5K samples.
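As a concrete illustration of the Concentric Hyperspheres universe just described, one such task can be generated as sketched below. This reflects one reading of the description ("drawn uniformly at distance c from $o_t$"); the authors' exact generator is not given in the text, and the function name is ours.

```python
import numpy as np

def make_hypersphere_task(n_features, n_classes, samples_per_class, rng):
    """One task: origin ~ N(0, I_n); class c's samples lie uniformly
    on the sphere of radius c around the origin."""
    origin = rng.normal(size=n_features)
    X, y = [], []
    for c in range(1, n_classes + 1):
        # uniform directions on the unit sphere via normalized Gaussians
        d = rng.normal(size=(samples_per_class, n_features))
        d /= np.linalg.norm(d, axis=1, keepdims=True)
        X.append(origin + c * d)      # push each sample to radius c
        y.append(np.full(samples_per_class, c - 1))
    return np.concatenate(X), np.concatenate(y)

X, y = make_hypersphere_task(n_features=3, n_classes=4,
                             samples_per_class=5,
                             rng=np.random.default_rng(0))
```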
Note that it is not expected that training any two tasks jointly will improve performance on both, but rather that training all 121 tasks jointly will improve performance overall, as the model learns general knowledge about how to make good predictions.

Results across a suite of metrics are shown in Table 4. Mean Accuracy is the test accuracy averaged across all tasks. Normalized Accuracy scales the accuracy within each task before averaging across tasks, with 0 and 100 corresponding to the lowest and highest accuracies. Mean Rank averages the method's rank across tasks, where the best method gets a rank of 0. Best % is the percentage of tasks for which the method achieves the top accuracy (with possible ties). Win % is the percentage of tasks for which the method achieves accuracy strictly greater than all other methods. TOM outperforms the alternative approaches across all metrics, showing its ability to learn many seemingly unrelated tasks successfully in a single model (see Figure 6c for a high-level visualization of learned VEs). In other words, TOM can both learn meaningful VEs and use them to improve prediction performance.

6 DISCUSSION AND FUTURE WORK

Sections 2 and 3 developed the foundations for the TOM approach; Sections 4 and 5 illustrated its capabilities, demonstrating its value as a general multitask learning system. This section discusses four key areas of future work for increasing the understanding and applicability of the approach.

First, there is an opportunity to develop a theoretical framework for understanding when TOM will work best. It is straightforward to extend universal approximation results from approximation of single functions (Cybenko, 1989; Lu et al., 2017; Kidger & Lyons, 2020) to approximation of a set of functions, each with any input and output dimensionality, via Eq. 2. It is also straightforward to extend convergence bounds for certain model classes, such as PAC bounds (Bartlett & Mendelson, 2002; Neyshabur et al., 2018), to TOM architectures implemented with these classes, if the "true" variable embeddings are fixed a priori, so that they can simply be treated as features. However, a more intriguing direction involves understanding how the true locations of variables affect TOM's ability to learn and exploit them, i.e., what are desirable theoretical properties of the space of variables?

Second, in this paper, TOM was evaluated only in the case where the data for all tasks is always available, and the model is trained simultaneously across all tasks. However, it would also be natural to apply TOM in a meta-learning regime (Finn et al., 2017; Zintgraf et al., 2019), in which the model is trained explicitly to generalize to future tasks, and to lifelong learning (Thrun & Pratt, 2012; Brunskill & Li, 2014; Abel et al., 2018), where the model must learn new tasks as they appear over time. Simply freezing the learned parameters of TOM results in a parametric class of ML models with C parameters per variable that can be applied to new tasks. However, in practice, it should be possible to improve upon this approach by taking advantage of more sophisticated fine-tuning and parameter adaptation. For example, in low-data settings, methods can be adapted from meta-learning approaches that modulate model weights in a single forward pass instead of performing supervised backpropagation (Garnelo et al., 2018; Vuorio et al., 2019).
Interestingly, although they are designed to address issues quite different from those motivating TOM, the architectures of such approaches have a functional decomposition that is similar to that of TOM at a high level (see, e.g., Conditional Neural Processes, or CNPs; Garnelo et al., 2018). In essence, replacing the VEs in Eq. 2 with input samples and the variables with output samples yields a function that generates a prediction model given a dataset. This analogy suggests that it should be possible to extend the benefits of CNPs to TOM, including rich uncertainty information.

Third, to make the foundational case for TOM, this paper focused on the setting where VEs are a priori unknown, but when such knowledge is available, it could be useful to integrate it with learned VEs. Such an approach could eliminate the cost of relearning VEs and suggest how to take advantage of spatially-customized architectures; e.g., convolution or attention layers could be used instead of dense layers as architectural primitives, as in vision and language tasks. Such specialization could be instrumental in making TOM more broadly applicable and more powerful in practice.

Finally, one interpretation of Fig. 6c is that the learned VEs of classes encode a task-agnostic concept of "normal" vs. "abnormal" system states. TOM could be used to analyze the emergence of such general concepts and as an analogy engine: to describe states of one task in the language of another.

7 CONCLUSION

This paper introduced the traveling observer model (TOM), which enables a single model to be trained across diverse tasks by embedding all task variables into a shared space. The framework was shown to discover intuitive notions of space and time and use them to learn variable embeddings that exploit knowledge across tasks, outperforming single- and multi-task alternatives. Thus, learning a single function that cares only about variable locations and their values is a promising approach to integrating knowledge across datasets that have no a priori connection. The TOM approach thereby extends the benefits of multi-task learning to broader sets of tasks.

ACKNOWLEDGEMENTS

Thank you to Babak Hodjat and others in the Evolutionary AI research group for helpful discussions and technical feedback. Thank you also to the reviewers, particularly for their suggestions for improving the organizational structure and clarity of the paper.

A ADDITIONAL EXPERIMENT ON THE EMBEDDING SIZE C

In the experiments in Sections 5.1 and 5.2, the VE dimensionality C for TOM was set to 2 in order to most clearly visualize the VEs that were learned. In the experiment in Section 5.3, C was increased in order to accommodate the scale-up to a large number of highly diverse real-world tasks. In that experiment, C was set to 128 in order to match the number of task-specific parameters of the other Deep MTL methods compared in Table 4. To evaluate the sensitivity of TOM to the setting of C, additional experiments were run for TOM on UCI-121 with C = 64 and C = 256. The results are shown in Table 5. Metrics for all settings of C are computed w.r.t. the external comparison methods, i.e., those in Table 4a. TOM with C = 64 produces performance comparable to C = 128, suggesting that optimizing C could be a useful lever for balancing performance and VE interpretability.

B PYTORCH CODE

To give a detailed picture of how the TOM architecture in this paper was implemented, the code for the forward pass of the model implemented in PyTorch (Paszke et al., 2017) is given in Figure 7.
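Figure 7 itself is not reproduced in this text. As a stand-in, the following is a self-contained sketch of the Eq. 4 forward pass; it is an illustration, not the authors' code. FiLM conditioning is simplified to concatenating the VE with the input, and Linear layers are used for readability (the real model uses FiLM layers, residual blocks, and, as described next, Conv1D layers rather than Dense layers).

```python
import torch
import torch.nn as nn

class TOMSketch(nn.Module):
    """Eq. 4: E[y_j | x] = g2(g1(sum_i f(x_i, z_i)), z_j)."""
    def __init__(self, C=2, M=128):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(1 + C, M), nn.ReLU(), nn.Linear(M, M))
        self.g1 = nn.Sequential(nn.Linear(M, M), nn.ReLU())        # Core
        self.g2 = nn.Sequential(nn.Linear(M + C, M), nn.ReLU(),
                                nn.Linear(M, 1))                   # Decoder

    def forward(self, x, z_in, z_out):
        # x: (batch, n_in); z_in: (n_in, C); z_out: (n_out, C)
        b = x.shape[0]
        z = z_in.unsqueeze(0).expand(b, -1, -1)              # (b, n_in, C)
        enc = self.f(torch.cat([x.unsqueeze(-1), z], -1))    # (b, n_in, M)
        latent = self.g1(enc.sum(dim=1))                     # aggregate by summation
        n_out = z_out.shape[0]
        lat = latent.unsqueeze(1).expand(-1, n_out, -1)      # (b, n_out, M)
        zo = z_out.unsqueeze(0).expand(b, -1, -1)            # (b, n_out, C)
        return self.g2(torch.cat([lat, zo], -1)).squeeze(-1) # (b, n_out)

# usage: 4 samples, 6 observed variables, 3 target variables, C = 2
y_hat = TOMSketch()(torch.randn(4, 6), torch.randn(6, 2), torch.randn(3, 2))
```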
For efficiency, TOM is implemented with Conv1D layers with kernel size 1 instead of Dense layers. This approach enables the model to run the encoder and decoder on all variables in parallel, and the fact that Conv layers are so highly optimized in PyTorch makes the implementation substantially more efficient than with Dense layers. In this code, input_batch has shape (batch size, # input variables), input_contexts has shape (1, VE dim, # input variables), and output_contexts has shape (1, VE dim, # output variables). Code for TOM will be available at https://github.com/leaf-ai/tom-release.

C ADDITIONAL EXPERIMENTAL DETAILS

A sigmoid layer is applied at the end of the decoder for the CIFAR experiments, to squash the output between 0 and 1. For the CIFAR and Daily Temperature experiments, a subset of the variables is sampled each iteration to be used as input. This subset is sampled in the following way: (1) sample the size k of the subset uniformly from $[1, n_t]$, where $n_t$ is the number of variables in the experiment; (2) sample a subset of variables of size k uniformly from all subsets of size k. This sampling method ensures that every subset size has an equal chance of being selected, so that the universe is not biased towards tasks of a particular size. E.g., if instead the subset were created by sampling each variable independently with probability p, then the subset size would concentrate tightly around $p \cdot n_t$.

For classification tasks, each class defines a distinct output variable, i.e., a K-class classification task has K output variables. The squared hinge loss was used for classification tasks (Janocha & Czarnecki, 2017). It is preferable to the categorical cross-entropy loss in this setting, because it does not require taking a softmax across output variables, so the outputs are kept separate. Also, the loss becomes exactly zero once a sample is learned strongly, so that the model does not continue to overfit as remaining samples and tasks are learned.

The number of blocks in the encoder, core, and decoder is N = 3 for all problems except UCI-121, for which it is N = 10. All experiments use a hidden size of 128 for all dense layers aside from the final decoder layer that maps to the output space. The batch size was 32 for CIFAR and Daily Temperature, and max(200, # train samples) for all other tasks. At each step, $T_o$ tasks are uniformly sampled from the set of all tasks, and gradients are summed over a batch for each task in the sample. $T_o = 1$ in all experiments except UCI-121, for which $T_o = 32$.

To allow for multi-task training with datasets of varying numbers of samples, we say the model has completed one epoch each time it is evaluated on the validation set. An epoch is 1K steps for CIFAR, 100 steps for Daily Temperature, 1K steps for Transposed Gaussian Process, 1K steps for Concentric Hyperspheres, and 10K steps for UCI-121.

For CIFAR, the official training and test splits are used for training and testing. No validation set is needed for CIFAR, because none of the models can overfit to the training set. For Daily Temperature, the second-to-last year of data is withheld for validation, and the final year is withheld for testing. The UCI-121 experiments use the preprocessed versions of the official train-val-test splits (https://github.com/bioinf-jku/SNNs/tree/master/UCI).

Adam is used for all experiments, with all parameters initialized to their default values. In all experiments except UCI-121, the learning rate is kept constant at 0.001 throughout training.
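The two-step input-subset sampling procedure described above is easy to implement; a minimal sketch follows (the function name is ours, not from the paper).

```python
import random

def sample_input_subset(variables):
    """Step 1: draw the subset size k uniformly from [1, n_t].
    Step 2: draw a size-k subset uniformly. Every subset size is
    equally likely, unlike independent per-variable inclusion with
    probability p, which concentrates around p * n_t."""
    k = random.randint(1, len(variables))
    return random.sample(variables, k)

inputs = sample_input_subset(list(range(1024)))  # e.g., CIFAR pixel indices
```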
1. What is the focus of the paper in terms of the application of the Traveling Observer Model (TOM)?
2. What are the strengths of the paper regarding its writing quality, experimental analysis, and comparison with other models?
3. How does the reviewer assess the significance and potential impact of the research presented in the paper?
4. Are there any areas where the reviewer suggests improvements or further research?
Review
Review

The paper is very well written and addresses an important topic: using the Traveling Observer Model (TOM) in multi-task learning for tasks that have no spatial organization, unlike, for example, images. Although the paper presents only a first implementation of TOM, it does thorough experimentation and analysis of the model's performance from various aspects, comparing it to many sophisticated models. Future research for improving and testing the algorithm is clearly detailed. The related scientific literature is sufficiently addressed, the mathematical background and the method are clearly presented, and extensive, relevant experiments are carried out and their results analyzed. I didn't even find any typos.
ICLR
Title The Traveling Observer Model: Multi-task Learning Through Spatial Variable Embeddings Abstract This paper frames a general prediction system as an observer traveling around a continuous space, measuring values at some locations, and predicting them at others. The observer is completely agnostic about any particular task being solved; it cares only about measurement locations and their values. This perspective leads to a machine learning framework in which seemingly unrelated tasks can be solved by a single model, by embedding their input and output variables into a shared space. An implementation of the framework is developed in which these variable embeddings are learned jointly with internal model parameters. In experiments, the approach is shown to (1) recover intuitive locations of variables in space and time, (2) exploit regularities across related datasets with completely disjoint input and output spaces, and (3) exploit regularities across seemingly unrelated tasks, outperforming task-specific single-task models and multi-task learning alternatives. The results suggest that even seemingly unrelated tasks may originate from similar underlying processes, a fact that the traveling observer model can use to make better predictions. 1 INTRODUCTION Natural organisms benefit from the fact that their sensory inputs and action outputs are all organized in the same space, that is, the physical universe. This consistency makes it easy to apply the same predictive functions across diverse settings. Deep multi-task learning (Deep MTL) has shown a similar ability to adapt knowledge across tasks whose observed variables are embedded in a shared space. Examples include vision, where the input for all tasks (photograph, drawing, or otherwise) is pixels arranged in a 2D plane (Zhang et al., 2014; Misra et al., 2016; Rebuffi et al., 2017); natural language (Collobert & Weston, 2008; Luong et al., 2016; Hashimoto et al., 2017), speech processing (Seltzer & Droppo, 2013; Huang et al., 2015), and genomics (Alipanahi et al., 2015), which exploit the 1D structure of text, waveforms, and nucleotide sequences; and video game-playing (Jaderberg et al., 2017; Teh et al., 2017), where interactions are organized across space and time. Yet, many real-world prediction tasks have no such spatial organization; their input and output variables are simply labeled values, e.g., the height of a tree, the cost of a haircut, or the score on a standardized test. To make matters worse, these sets of variables are often disjoint across a set of tasks. These challenges have led the MTL community to avoid such tasks, despite the fact that general knowledge about how to make good predictions can arise from solving seemingly “unrelated” tasks (Mahmud & Ray, 2008; Mahmud, 2009; Meyerson & Miikkulainen, 2019). This paper proposes a solution: Learn all variable locations in a shared space, while simultaneously training the prediction model itself (Figure 1). To illustrate this idea, Figure 1a gives an example of four tasks whose variable values are measured at different locations in the same underlying 2D embedding space. The shape of each marker (i.e., ◦, ,4, ?) denotes the task to which that variable belongs; white markers denote input variable, black markers denote output variables, and the background coloring indicates the variable values in the entire embedding space when the current sample is drawn. 
As a concrete example, the color could indicate the air temperature at each point in a geographical region at a given moment in time, and each marker the location of a temperature sensor (however, note that the embedding space is generally more abstract). Figure 1b-c shows a model that can be applied to any task in this universe, using the ◦ task as an example: (b) The function f encodes the value of each observed variable xi given its 2D location zi ∈ R2, and these encodings are aggregated by elementwise addition ⊕ ; (c) The function g decodes the aggregated encoding to a prediction for yj at its location zj . Such a predictor can be viewed as a traveling observer model (TOM): It traverses the space of variables, taking a measurement at the location of each input. Given these observations, the model can make a prediction for the value at the location of an output. In general, the embedded locations z are not known a priori (i.e., when input and output variables do not have obvious physical locations), but they can be learned alongside f and g by gradient descent. The input and output spaces of a prediction problem can be standardized so that the measured value of each input and output variable is a scalar. The prediction model can then be completely agnostic about the particular task for which it is making a prediction. By learning variable embeddings (VEs), i.e., the z’s, the model can capture variable relationships explicitly and supports joint training of a single architecture across seemingly unrelated tasks with disjoint input and output spaces. TOM thus establishes a new lower bound on the commonalities shared across real-world machine learning problems: They are all drawn from the same space of variables that humans can and do measure. This paper develops a first implementation of TOM, using an encoder-decoder architecture, with variable embeddings incorporated using FiLM (Perez et al., 2018). In the experiments, the implementation is shown to (1) recover the intuitive locations of variables in space and time, (2) exploit regularities across related datasets with disjoint input and output spaces, and (3) exploit regularities across seemingly unrelated tasks to outperform single-tasks models tuned to each tasks, as well as current Deep MTL alternatives. The results confirm that TOM is a promising framework for representing and exploiting the underlying processes of seemingly unrelated tasks. 2 BACKGROUND: MULTI-TASK ENCODER-DECODER DECOMPOSITIONS This section reviews Deep MTL methods from the perspective of decomposition into encoders and decoders (Table 1). In MTL, there are T tasks {(xt,yt)}Tt=1 that can, in general, be drawn from different domains and have varying input and output dimensionality. The tth task has nt input variables [xt1, . . . , xtnt ] = xt ∈ Rnt and mt output variables [yt1, . . . , ytmt ] = yt ∈ Rmt . Two tasks (xt,yt) and (xt′ ,yt′) are disjoint if their input and output variables are non-overlapping, i.e.,( {xti}nti=1∪{ytj} mt j=1 ) ∩ ( {xt′i}nt′i=1∪{yt′j} mt′ j=1 ) = ∅. The goal is to exploit regularities across task models xt 7→ ŷt by jointly training them with overlapping parameters. The standard intra-domain approach is for all task models to share their encoder f , and each to have its own task-specific decoder gt (Table 1a). 
This setup was used in the original introduction of MTL (Caruana, 1998), has been broadly explored in the linear regime (Argyriou et al., 2008; Kang et al., 2011; Kumar & Daumé, 2012), and is the most common approach in Deep MTL (Huang et al., 2013; Zhang et al., 2014; Dong et al., 2015; Liu et al., 2015; Ranjan et al., 2016; Jaderberg et al., 2017). The main limitation of this approach is that it is limited to sets of tasks that are all drawn from the same domain. It also has the risk of the separate decoders doing so much of the learning that there is not much left to be shared, which is why the decoders are usually single affine layers. To address the issue of limited sharing, the task embeddings approach trains a single encoder f and single decoder g, with all task-specific parameters learned in embedding vectors zt that semantically characterize each task, and which are fed into the model as additional input (Yang & Hospedales, 2014; Bilen & Vedaldi, 2017; Zintgraf et al., 2019) (Table 1b). Such methods require that all tasks have the same input and output space, but are flexible in how the embeddings can be used to adapt the model to each task. As a result, they can learn tighter connections between tasks than separate decoders, and these relationships can be analyzed by looking at the learned embeddings. To exploit regularities across tasks from diverse and disjoint domains, cross-domain methods have been introduced. Existing methods address the challenge of disjoint output and input spaces by using separate decoders and encoders for each domain (Table 1c), and thus they require some other method of sharing model parameters across tasks, such as sharing some of their layers (Kaiser et al., 2017; Meyerson & Miikkulainen, 2018) or drawing their parameters from a shared pool (Meyerson & Miikkulainen, 2019). For many datasets, the separate encoder and decoder absorbs too much functionality to share optimally, and their complexity makes it difficult to analyze the relationships between tasks. Earlier work prior to deep learning showed that, from an algorithmic learning theory perspective, sharing knowledge across tasks should always be useful (Mahmud & Ray, 2008; Mahmud, 2009), but the accompanying experiments were limited to learning biases in a decision tree generation process, i.e., the learned models themselves were not shared across tasks. TOM extends the notion of task embeddings to variable embeddings in order to apply the idea in the cross-domain setting (Table 1d). The method is described in the next section. 3 THE TRAVELING OBSERVER MODEL Consider the set of all scalar random variables that could possibly be measured {v1, v2, ...} = V . Each vi ∈ V could be an input or output variable for some prediction task. To characterize each vi semantically, associate with it a vector zi ∈ RC that encodes the meaning of vi, e.g., “height of left ear of human adult in inches”, “answer to survey question 9 on a scale of 1 to 5”, “severity of heart disease”, “brightness of top-left pixel of photograph”, etc. This vector zi is called the variable embedding (VE) of vi. Variable embeddings could be handcoded, e.g., based on some featurization of the space of variables, but such a handcoding is usually unavailable, and would likely miss some of the underlying semantic regularities across variables. An alternative approach is to learn variable embeddings based on their utility in solving prediction problems of interest. A prediction task (x,y) = ([x1, . . . , xn], [y1, . . . 
, ym]) is defined by its set of observed variables {xi}ni=1 ⊆ V and its set of target variables {yj}mj=1 ⊆ V whose values are unknown. The goal is to find a prediction function Ω that can be applied across any prediction task of interest, so that it can learn to exploit regularities across such problems. Let zi and zj be the variable embeddings corresponding to xi and yj , respectively. Then, this universal prediction model is of the form E[yj | x] = Ω(x, {zi}ni=1, zj). (1) Importantly, for any two tasks (xt,yt), (xt′ ,yt′), their prediction functions (Eq. 1) differ only in their z’s, which enforces the constraint that functionality is otherwise completely shared across the models. One can view Ω as a traveling observer, who visits several locations in the C-dimensional variable space, takes measurements at those locations, and uses this information to make predictions of values at other locations. To make Ω concrete, it must be a function that can be applied to any number of variables, can fit any set of prediction problems, and is invariant to variable ordering, since we cannot in general assume that a meaningful order exists. These requirements lead to the following decomposition: E[yj | x] = Ω(x, {zi}ni=1, zj) = g ( n∑ i=1 f(xi, zi), zj ) , (2) where f and g are functions called the encoder and decoder, with trainable parameters θf and θg , respectively. The variable embeddings z tell f and g which variables they are observing, and these z can be learned by gradient descent alongside θf and θg . A depiction of the model is shown in Figure 1. For some integer M , f : RC+1 → RM and g : RM+C → R. In principle, f and g could be any sufficiently expressive functions of this form. A natural choice is to implement them as neural networks. They are called the encoder and decoder because they map variables to and from a latent space of sizeM . This model can then be trained end-to-end with gradient descent. A batch for gradient descent is constructed by sampling a prediction problem, e.g., a task, from the distribution of problems of interest, and then sampling a batch of data from the data set for that problem. Notice that, in addition to supervised training, in this framework it is natural to autoencode, i.e., predict input variables, and subsample inputs to simulate multiple tasks drawn from the same universe. The question remains: How can f and g be designed so that they can sufficiently capture a broad range of prediction behavior, and be effectively conditioned by variable embeddings? The next section introduces an experimental architecture that satisfies these requirements. 4 INSTANTIATION The experiments in this paper implement TOM using a generic architecture built from standard components (Figure 2). The encoder and decoder are conditioned on VEs via FiLM layers (Perez et al., 2018), which provide a flexible yet inexpensive way to adapt functionality to each variable, and have been previously used to incorporate task embeddings (Vuorio et al., 2019; Zintgraf et al., 2019). For simplicity, the FiLM layers are based on affine transformations of VEs. Specifically, the `th FiLM layer F` is parameterized by affine layers W ∗` and W + ` , and, given a variable embedding z, the hidden state h is modulated by F`(h) = W ∗ ` (z) h +W+` (z), (3) where is the Hadamard product. A FiLM layer is located alongside each fully-connected layer in the encoder and decoder, both of which consist primarily of residual blocks. 
To avoid deleterious behavior of batch norm across diverse tasks and small datasets/batches, the recently proposed SkipInit (De & Smith, 2020) is used as a replacement to stabilize training. SkipInit adds a trainable scalar α, initialized to 0, at the end of each residual block; dropout is used for regularization. Finally, for computational efficiency, the decoder is redecomposed into the Core, or $g_1$, which is independent of the output variable, and the Decoder proper, or $g_2$, which is conditioned on the output variable. That way, generic transformations of the summed Encoder output can be learned by the Core and run in a single forward and backward pass each iteration. With this decomposition, Eq. 2 is rewritten as

$$\mathbb{E}[y_j \mid \mathbf{x}] = g_2\Big(g_1\Big(\sum_{i=1}^{n} f(x_i, z_i)\Big),\; z_j\Big). \quad (4)$$

The complete architecture is depicted in Figure 2. In the following sections, all models are implemented in pytorch (Paszke et al., 2017), use Adam for optimization (Kingma & Ba, 2014), and use a hidden layer size of 128 for all layers. Variable embeddings for TOM are initialized from $\mathcal{N}(0, 10^{-3})$. See Appendix C for additional details of this implementation.

5 EXPERIMENTS

This section presents a suite of experiments that evaluate the behavior of the implementation introduced in Section 4. See the Appendix for additional experimental details.

5.1 VALIDATING LEARNED VARIABLE EMBEDDINGS: DISCOVERING SPACE AND TIME

The experiments in this section test TOM's ability to learn variable embeddings that reflect our a priori intuition about the domain, in particular, the organization of space and time.

CIFAR. The first experiment is based on the CIFAR dataset (Krizhevsky, 2009). The pixels of the 32 x 32 images are converted to grayscale values in [0, 1], yielding 1024 variables. The goal is to predict all variable values, given only a subset of them as input. The model is trained to minimize the binary cross-entropy of each output, and it uses 2D VEs. The a priori, or Oracle, expectation is that the VEs form a 32 x 32 grid corresponding to how pixels are spatially laid out in an image.

Daily Temperature. The second experiment is based on the Melbourne minimum daily temperature dataset (Brownlee, 2016), a subset of a larger database for tracking climate change (Della-Marta et al., 2004). As above, the goal is to predict the daily temperature of the previous 10 days, given only some subset of them, by minimizing the MSE of each variable. The a priori, Oracle, expectation is that the VEs are laid out linearly in a single temporal dimension. The goal is to see whether TOM will also learn VEs (in a 2D space) that follow a clear 1D manifold that can be interpreted as time.

For both experiments, a subset of the input variables is randomly sampled at each training iteration, which simulates drawing tasks from a limited universe (a sketch of this sampler is shown below). The resulting learning process for the VEs is illustrated in Figures 3 and 4. The VEs for CIFAR pull apart and unfold, until they reflect the oracle embeddings (Figure 3). The remaining difference is that TOM peels the border of the CIFAR images (the upper loop of VEs at iteration 300K) away from their center (the lower grid). This makes sense, since CIFAR images all feature a central object, which semantically splits the image into foreground (the object itself) and background (the remaining ring of pixels around the object). Similarly, the VEs for daily temperature pull apart until they form a perfect 1D manifold representing the time dimension (Figure 4). The main difference from the oracle is that TOM has embedded this 1D structure as a ring in 2D, which is well-suited to the nonlinear encoder and decoder, since it mirrors an isotropic Gaussian distribution.
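The input-subset sampler referenced above can be sketched as follows (PyTorch; the two-step procedure follows Appendix C, while the function name is an illustrative assumption):

    import torch

    def sample_input_subset(num_variables):
        # (1) Sample the subset size k uniformly from {1, ..., num_variables},
        # so the simulated universe is not biased toward tasks of one size.
        k = int(torch.randint(1, num_variables + 1, (1,)))
        # (2) Sample a subset of k variables uniformly from all subsets of size k.
        return torch.randperm(num_variables)[:k]

    input_ids = sample_input_subset(1024)  # e.g., the observed pixels for CIFAR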
Note that unlike visualization methods such as SOM (Kohonen, 1990), PCA (Pearson, 1901), or t-SNE (van der Maaten & Hinton, 2008), TOM learns locations for each variable, not for each sample. Furthermore, TOM has no explicit motivation to visualize; learned VEs are simply the locations found to be useful by gradient descent when solving the prediction problem.

To get an idea of how learning VEs affects prediction performance, comparisons were run with three cases of fixed VEs: (1) all VEs set to zero, to address the question of whether differentiating variables with VEs is needed at all in the model; (2) random VEs, to address the question of whether simply having any unique label for variables is sufficient; and (3) oracle VEs, which reflect the human a priori expectation of how the variables should be arranged. The results show that the learned embeddings outperform zero and random embeddings, achieving performance on par with the Oracle (Table 2). The conclusion is that learned VEs in TOM are not only meaningful, but can help make superior predictions, without a priori knowledge of variable meaning. The next section shows how such VEs can be used to exploit regularities across tasks in an MTL setting.

5.2 EXPLOITING REGULARITIES ACROSS DISJOINT TASKS

This section considers two synthetic multi-task problems that contain underlying regularities across tasks. These regularities are not known to the model a priori; it can only exploit them via its VEs. The first problem evaluates TOM in a regression setting where input and output variables are drawn from the same continuous space; the second problem evaluates TOM in a classification setting. For classification tasks, each class defines a distinct output variable.

Transposed Gaussian Process. In the first problem, the universe is defined by a Gaussian process (GP). The GP is 1D, zero-mean, and has an RBF kernel with length-scale 1. One task is generated for each (# inputs, # outputs) pair in {1, . . . , 10} x {1, . . . , 10}, for a total of 100 tasks. The "true" location of each variable lies in the single dimension of the GP, and is sampled uniformly from [0, 5]. Samples for the task are generated by sampling from the GP, and measuring the value at each variable location. The dataset for each task contains 10 training samples, 10 validation samples, and 100 test samples. Samples are generated independently for each task. The goal is to minimize the MSE of the outputs. Figure 5 gives two examples of tasks drawn from this universe. This testbed is ideal for TOM, because, by the definition of the GP, it explicitly captures the idea that variables whose VEs are nearby are closely related, and every variable has some effect on all others.
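A minimal sketch of how one such task can be generated (NumPy; the RBF kernel and sampling follow the description above, while the function and variable names are illustrative assumptions):

    import numpy as np

    def make_transposed_gp_task(n_inputs, n_outputs, n_samples, rng):
        n_vars = n_inputs + n_outputs
        # "True" variable locations in the single GP dimension.
        locs = rng.uniform(0.0, 5.0, size=n_vars)
        # RBF kernel with length-scale 1; small jitter for numerical stability.
        K = np.exp(-0.5 * (locs[:, None] - locs[None, :]) ** 2) + 1e-9 * np.eye(n_vars)
        # Each sample is one draw from the zero-mean GP, measured at every location.
        data = rng.multivariate_normal(np.zeros(n_vars), K, size=n_samples)
        return locs, data[:, :n_inputs], data[:, n_inputs:]

    rng = np.random.default_rng(0)
    locs, x, y = make_transposed_gp_task(3, 2, n_samples=10, rng=rng)  # a 3-input, 2-output task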
Concentric Hyperspheres. In the second problem, each task is defined by a set of concentric hyperspheres. Many areas of human knowledge have been organized abstractly as such hyperspheres, e.g., planets around a star, electrons around an atom, social relationships around an individual, or suburbs around Washington D.C.; the idea is that a model that discovers this common organization could then share general knowledge across such areas more effectively. To test this hypothesis, one task is generated for each (# features n, # classes m) pair in {1, . . . , 10} x {2, . . . , 10}, for a total of 90 tasks. For each task, its origin $o_t$ is drawn from $\mathcal{N}(0, I_n)$. Then, for each class $c \in \{1, \ldots, m\}$, samples are drawn from $\mathbb{R}^n$ uniformly at distance c from $o_t$, i.e., each class is defined by a (hyper)annulus. The dataset for each task contains five training samples, five validation samples, and 100 test samples per class. The model has no a priori knowledge that the classes are structured in annuli, or which annulus corresponds to which class, but it is possible to achieve high accuracy by making analogies of annuli across tasks, i.e., discovering the underlying structure of this universe.

In these experiments, TOM is compared to five alternative methods: (1) TOM-STL, i.e., TOM trained on each task independently; (2) DR-MTL (Deep Residual MTL), the standard cross-domain (Table 1c) version of TOM, where instead of FiLM layers, each task has its own linear encoder and decoder layers, and all residual blocks are CoreResBlocks; (3) DR-STL, which is like DR-MTL except it is trained on each task independently; (4) SLO (Soft Layer Ordering; Meyerson & Miikkulainen, 2018), which uses a separate encoder and decoder for each task, and which is (as far as we know) the only prior Deep MTL approach that has been applied across disjoint tabular datasets; and (5) Oracle, i.e., TOM with VEs fixed to intuitively correct values. The Oracle is included to give an upper bound on how well the TOM architecture in Section 4 could possibly perform. The oracle VE for each Transposed GP task variable is the location where it is measured in the GP; for Concentric Hyperspheres, the oracle VE for each class c is c/10, and for the i-th feature it is $o_{t,i}$. TOM outperforms the competing methods and achieves performance on par with the Oracle (Table 3). Note that the improvement of TOM over TOM-STL is much greater than that of DR-MTL over DR-STL, indicating that TOM is particularly well-suited to exploiting structure across disjoint datasets (learned VEs are shown in Figure 6a-b). Now that this suitability has been confirmed, the next section evaluates TOM across a suite of disjoint, and seemingly unrelated, real-world problems.

5.3 MULTI-TASK LEARNING ACROSS SEEMINGLY UNRELATED REAL-WORLD DATASETS

This section evaluates TOM in the setting for which it was designed: learning a single shared model across seemingly unrelated real-world datasets. The set of tasks used is UCI-121 (Lichman, 2013; Fernández-Delgado et al., 2014), a set of 121 classification tasks that has been previously used to evaluate the overall performance of a variety of deep NN methods (Klambauer et al., 2017). The tasks come from diverse areas such as medicine, geology, engineering, botany, sociology, politics, and game-playing. Prior work has tuned each model to each task individually in the single-task regime; no prior work has undertaken learning of all 121 tasks in a single joint model. The datasets are highly diverse: each simply defines a classification task that a machine learning practitioner was interested in solving. The number of features per task ranges from 3 to 262, the number of classes from 2 to 100, and the number of samples from 10 to 130,064. To avoid underfitting to the larger tasks, C = 128, and after joint training all model parameters ($\theta_f$, $\theta_{g_1}$, $\theta_{g_2}$, and z's) are fine-tuned on each task that has at least 5K samples.
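As a concrete picture of the joint training used across these tasks, a minimal sketch of one multi-task gradient step (PyTorch; `model.episode_loss`, `task.sample_batch`, the VE attributes, and without-replacement sampling are illustrative assumptions, not names from the released code):

    import random

    def joint_training_step(model, optimizer, tasks, tasks_per_step=32):
        # Sample T_o tasks uniformly and sum gradients over one batch per task
        # (T_o = 32 for UCI-121 and 1 elsewhere; see Appendix C).
        optimizer.zero_grad()
        for task in random.sample(tasks, tasks_per_step):
            x, y = task.sample_batch()
            loss = model.episode_loss(x, y, task.input_ves, task.output_ves)
            loss.backward()  # gradients accumulate across the sampled tasks
        optimizer.step()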
Note that it is not expected that training any two tasks jointly will improve performance on both tasks, but rather that training all 121 tasks jointly will improve performance overall, as the model learns general knowledge about how to make good predictions. Results across a suite of metrics are shown in Table 4. Mean Accuracy is the test accuracy averaged across all tasks. Normalized Accuracy scales the accuracy within each task before averaging across tasks, with 0 and 100 corresponding to the lowest and highest accuracies achieved by any method on that task. Mean Rank averages the method's rank across tasks, where the best method gets a rank of 0. Best % is the percentage of tasks for which the method achieves the top accuracy (with possible ties). Win % is the percentage of tasks for which the method achieves accuracy strictly greater than all other methods. TOM outperforms the alternative approaches across all metrics, showing its ability to learn many seemingly unrelated tasks successfully in a single model (see Figure 6c for a high-level visualization of learned VEs). In other words, TOM can both learn meaningful VEs and use them to improve prediction performance.

6 DISCUSSION AND FUTURE WORK

Sections 2 and 3 developed the foundations for the TOM approach; Sections 4 and 5 illustrated its capabilities, demonstrating its value as a general multitask learning system. This section discusses four key areas of future work for increasing the understanding and applicability of the approach.

First, there is an opportunity to develop a theoretical framework for understanding when TOM will work best. It is straightforward to extend universal approximation results from approximation of single functions (Cybenko, 1989; Lu et al., 2017; Kidger & Lyons, 2020) to approximation of a set of functions, each with any input and output dimensionality, via Eq. 2. It is also straightforward to extend convergence bounds for certain model classes, such as PAC bounds (Bartlett & Mendelson, 2002; Neyshabur et al., 2018), to TOM architectures implemented with these classes, if the "true" variable embeddings are fixed a priori, so they can simply be treated as features. However, a more intriguing direction involves understanding how the true locations of variables affect TOM's ability to learn and exploit them, i.e., what are desirable theoretical properties of the space of variables?

Second, in this paper, TOM was evaluated only in the case when the data for all tasks is always available, and the model is trained simultaneously across all tasks. However, it would also be natural to apply TOM in a meta-learning regime (Finn et al., 2017; Zintgraf et al., 2019), in which the model is trained explicitly to generalize to future tasks, and to lifelong learning (Thrun & Pratt, 2012; Brunskill & Li, 2014; Abel et al., 2018), where the model must learn new tasks as they appear over time. Simply freezing the learned parameters of TOM results in a parametric class of ML models with C parameters per variable that can be applied to new tasks. However, in practice, it should be possible to improve upon this approach by taking advantage of more sophisticated fine-tuning and parameter adaptation. For example, in low-data settings, methods can be adapted from meta-learning approaches that modulate model weights in a single forward pass instead of performing supervised backpropagation (Garnelo et al., 2018; Vuorio et al., 2019).
Interestingly, although they are designed to address issues quite different from those motivating TOM, the architectures of such approaches have a functional decomposition that is similar to that of TOM at a high level (see, e.g., Conditional Neural Processes, or CNPs; Garnelo et al., 2018). In essence, replacing the VEs in Eq. 2 with input samples and the variables with output samples yields a function that generates a prediction model given a dataset. This analogy suggests that it should be possible to extend the benefits of CNPs to TOM, including rich uncertainty information.

Third, to make the foundational case for TOM, this paper focused on the setting where VEs are a priori unknown, but when such knowledge is available, it could be useful to integrate it with learned VEs. Such an approach could eliminate the cost of relearning VEs, and suggest how to take advantage of spatially-customized architectures. E.g., convolution or attention layers could be used instead of dense layers as architectural primitives, as in vision and language tasks. Such specialization could be instrumental in making TOM more broadly applicable and more powerful in practice.

Finally, one interpretation of Figure 6c is that the learned VEs of classes encode a task-agnostic concept of "normal" vs. "abnormal" system states. TOM could be used to analyze the emergence of such general concepts and as an analogy engine: to describe states of one task in the language of another.

7 CONCLUSION

This paper introduced the traveling observer model (TOM), which enables a single model to be trained across diverse tasks by embedding all task variables into a shared space. The framework was shown to discover intuitive notions of space and time and use them to learn variable embeddings that exploit knowledge across tasks, outperforming single- and multi-task alternatives. Thus, learning a single function that cares only about variable locations and their values is a promising approach to integrating knowledge across datasets that have no a priori connection. The TOM approach thus extends the benefits of multi-task learning to broader sets of tasks.

ACKNOWLEDGEMENTS

Thank you to Babak Hodjat and others in the Evolutionary AI research group for helpful discussions and technical feedback. Thank you also to the reviewers, particularly for their suggestions for improving the organizational structure and clarity of the paper.

A ADDITIONAL EXPERIMENT ON THE EMBEDDING SIZE C

In the experiments in Sections 5.1 and 5.2, the VE dimensionality C for TOM was set to 2 in order to most clearly visualize the VEs that were learned. In the experiment in Section 5.3, C was increased in order to accommodate the scale-up to a large number of highly diverse real-world tasks. In that experiment C was set to 128 in order to match the number of task-specific parameters of the other Deep MTL methods compared in Table 4. To evaluate the sensitivity of TOM to the setting of C, additional experiments were run for TOM on UCI-121 with C = 64 and C = 256. The results are shown in Table 5. Metrics for all settings of C are computed w.r.t. the external comparison methods, i.e., those in Table 4a. TOM with C = 64 produces performance comparable to C = 128, suggesting that optimizing C could be a useful lever for balancing performance and VE interpretability.

B PYTORCH CODE

To give a detailed picture of how the TOM architecture in this paper was implemented, the code for the forward pass of the model implemented in pytorch (Paszke et al., 2017) is given in Figure 7.
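Figure 7 itself is not reproduced in this text; the following is a minimal sketch of a forward pass with the same structure (PyTorch; the module interfaces and names are illustrative assumptions based on Section 4, not the released code):

    import torch
    import torch.nn as nn

    class TOMForward(nn.Module):
        """Sketch of Eq. 4: encode, sum, run the Core once, decode per output."""
        def __init__(self, f, g1, g2):
            super().__init__()
            self.f, self.g1, self.g2 = f, g1, g2  # FiLM-conditioned networks

        def forward(self, x, in_ves, out_ves):
            # x: (batch, n) observed values; in_ves: (n, C); out_ves: (m, C).
            batch = x.shape[0]
            # Encode each (value, VE) pair and sum over observed variables (Eq. 2).
            summed = sum(
                self.f(x[:, i:i + 1], in_ves[i].expand(batch, -1))
                for i in range(x.shape[1]))
            core = self.g1(summed)  # one pass, independent of the output variable
            # Decode once per output variable, conditioned on its VE.
            outs = [self.g2(core, out_ves[j].expand(batch, -1))
                    for j in range(out_ves.shape[0])]
            return torch.cat(outs, dim=1)  # (batch, m) predictions

In the released implementation, the per-variable Python loops above are avoided entirely, as described next.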
For efficiency, TOM is implemented with Conv1D layers with kernel size 1 instead of Dense layers. This approach enables the model to run the encoder and decoder on all variables in parallel. The fact that Conv layers are so highly optimized in pytorch makes the implementation substantially more efficient than with Dense layers. In this code, input batch has shape (batch size, # input variables), input contexts has shape (1, VE dim, # input variables), and output contexts has shape (1, VE dim, # output variables). Code for TOM will be available at https://github.com/leaf-ai/tom-release.

C ADDITIONAL EXPERIMENTAL DETAILS

A sigmoid layer is applied at the end of the decoder for the CIFAR experiments, to squash the output between 0 and 1. For the CIFAR and Daily Temperature experiments, a subset of the variables is sampled each iteration to be used as input. This subset is sampled in the following way: (1) sample the size k of the subset uniformly from [1, n_t], where n_t is the number of variables in the experiment; (2) sample a subset of variables of size k uniformly from all subsets of size k. This sampling method ensures that every subset size has an equal chance of being selected, so that the universe is not biased towards tasks of a particular size. E.g., if instead the subset were created by sampling each variable independently with probability p, then the subset size would concentrate tightly around p·n_t.

For classification tasks, each class defines a distinct output variable, i.e., a K-class classification task has K output variables. The squared hinge loss was used for classification tasks (Janocha & Czarnecki, 2017). It is preferable to the categorical cross-entropy loss in this setting, because it does not require taking a softmax across output variables, so the outputs are kept separate. Also, the loss becomes exactly zero once a sample is learned strongly, so that the model does not continue to overfit as remaining samples and tasks are learned. (A sketch of this loss appears at the end of this appendix.)

The number of blocks in the encoder, core, and decoder is N = 3 for all problems except UCI-121, for which it is N = 10. All experiments use a hidden size of 128 for all dense layers aside from the final decoder layer that maps to the output space. The batch size was 32 for CIFAR and Daily Temperature, and min(200, # train samples) for all other tasks. At each step, T_o tasks are uniformly sampled from the set of all tasks, and gradients are summed over a batch for each task in the sample. T_o = 1 in all experiments except UCI-121, for which T_o = 32. To allow for multi-task training with datasets of varying numbers of samples, we say the model has completed one epoch each time it is evaluated on the validation set. An epoch is 1000 steps for CIFAR, 100 steps for Daily Temperature, 1K steps for Transposed Gaussian Process, 1K steps for Concentric Hyperspheres, and 10K steps for UCI-121.

For CIFAR, the official training and test splits are used for training and testing. No validation set is needed for CIFAR, because none of the models can overfit to the training set. For Daily Temperature, the second-to-last year of data is withheld for validation, and the final year is withheld for testing. The UCI-121 experiments use the preprocessed versions of the official train-val-test splits (https://github.com/bioinf-jku/SNNs/tree/master/UCI). Adam is used for all experiments, with all parameters initialized to their default values. In all experiments except UCI-121, the learning rate is kept constant at 0.001 throughout training. In UCI-121, the learning rate is decreased by a factor of two when the mean validation accuracy has not increased in 20 epochs; it is decreased at most five times, and model training stops when it would be decreased a sixth time.
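This validation-plateau schedule maps onto a standard PyTorch scheduler; a minimal sketch (the stop-counting logic is an approximate reading of the rule above, and the parameter stand-in is illustrative):

    import torch

    params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for model parameters
    optimizer = torch.optim.Adam(params, lr=1e-3)
    # mode="max": the monitored quantity is mean validation accuracy.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="max", factor=0.5, patience=20)

    num_decays = 0
    def end_of_epoch(mean_val_acc):
        """Step the schedule; return False once the LR has been cut a sixth time."""
        global num_decays
        before = optimizer.param_groups[0]["lr"]
        scheduler.step(mean_val_acc)
        if optimizer.param_groups[0]["lr"] < before:
            num_decays += 1
        return num_decays < 6  # continue training through at most five decays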
Models are trained for 500K steps for CIFAR, 100K steps for Daily Temperature, and 250K steps for Transposed Gaussian Process and Concentric Hyperspheres. The test performance for each task is its performance on the test set after the epoch of its best validation performance. Weights are initialized using the default pytorch initialization (aside from the SkipInit α scalars, which are initialized to zero (De & Smith, 2020)). The experiments in Section 5.1 use no weight decay; those in Section 5.2 use weight decay of 10^{-4}; and those in Section 5.3 use weight decay of 10^{-5}. Dropout is set to 0.0 for CIFAR, Daily Temperature, and Concentric Hyperspheres; and to 0.5 for Transposed Gaussian Process and UCI-121.

In UCI-121, fully-trained MTL models are fine-tuned on tasks with more than 5,000 samples, using the same optimizer configuration as for joint training, except that the steps-per-epoch is set to ⌈# train samples / batch size⌉, the learning rate is initialized to 0.0001, the patience for early stopping is set to 100, and the validation performance is smoothed over every 10 epochs (simple moving average), following the protocol used to train single-task models in prior work (Klambauer et al., 2017).

TOM uses a VE size of C = 2 for all experiments, except for UCI-121, where C = 128 in order to accommodate the complexity of such a large and diverse set of tasks. For Figure 6c, t-SNE (van der Maaten & Hinton, 2008) was used to reduce the dimensionality to two. t-SNE was run for 10K iterations with default parameters in the scikit-learn implementation (Pedregosa et al., 2011), after first reducing the dimensionality from 128 to 32 via PCA. Independent runs of t-SNE yielded qualitatively similar results. Autoencoding (i.e., predicting the input variables as well as unseen variables) was used for CIFAR, Daily Temperature, and Transposed Gaussian Process; it was not used for Concentric Hyperspheres or UCI-121.

The Soft Layer Ordering architecture follows the original implementation (Meyerson & Miikkulainen, 2018). There are four shared ReLU layers, each of size 128, with dropout after each to ease sharing across different soft combinations of layers. In Tables 2 and 3, means and standard errors for each method are computed over ten runs. The Daily Temperature dataset was downloaded from https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv.
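As a concrete picture of the per-output squared hinge loss described earlier in this appendix, a minimal sketch (PyTorch; the ±1 target encoding is an assumption of this sketch):

    import torch

    def squared_hinge_loss(outputs, labels):
        # outputs: (batch, K) raw scores, one output variable per class.
        # labels: (batch,) integer class indices.
        targets = -torch.ones_like(outputs)
        targets[torch.arange(outputs.shape[0]), labels] = 1.0
        # No softmax across outputs: each output variable is penalized
        # independently, and the loss is exactly zero once the margin is met.
        return torch.clamp(1.0 - targets * outputs, min=0.0).pow(2).mean()

    loss = squared_hinge_loss(torch.randn(32, 10), torch.randint(0, 10, (32,)))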
D ADDITIONAL DETAILED RESULTS FOR UCI-121 EXPERIMENT

Table 6 contains test accuracies for each UCI-121 task for all methods run in the experiments in Section 5.3.

Task DR-STL TOM-STL DR-MTL SLO TOM
led-display 75.600 27.200 79.600 73.600 74.000
lenses 83.333 66.667 50.000 50.000 50.000
letter 95.980 97.480 87.220 94.580 94.780
libras 43.333 11.111 78.889 76.667 80.000
low-res-spect 81.955 56.391 83.459 82.707 90.977
lung-cancer 50.000 25.000 62.500 50.000 62.500
lymphography 86.486 56.757 94.595 86.486 86.486
magic 86.982 86.898 81.325 86.877 87.024
mammographic 81.250 82.500 80.833 82.083 83.750
miniboone 92.782 94.630 93.345 94.338 93.532
molec-biol-promoter 88.462 50.000 69.231 61.538 92.308
molec-biol-splice 85.696 92.723 86.324 85.822 93.350
monks-1 65.509 50.000 71.991 86.574 80.787
monks-2 40.509 67.130 62.731 64.583 62.500
monks-3 74.306 52.778 66.898 68.981 58.102
mushroom 99.655 100.000 99.803 100.000 100.000
musk-1 83.193 57.143 92.437 90.756 91.597
musk-2 98.666 98.848 98.787 99.272 99.636
nursery 99.568 99.877 95.926 99.753 99.630
oocytes merluccius nucleus 4d 83.922 70.588 77.647 83.529 85.098
oocytes merluccius states 2f 89.412 92.549 94.510 92.157 95.294
oocytes trisopterus nucleus 2f 73.684 75.877 75.439 78.509 78.947
oocytes trisopterus states 5b 94.298 92.544 93.421 94.737 92.982
optical 95.993 95.326 94.658 94.380 95.938
ozone 97.161 97.161 97.161 97.161 97.161
page-blocks 95.468 96.199 94.371 96.272 96.345
parkinsons 89.796 75.510 83.673 87.755 83.673
pendigits 96.855 97.055 97.055 96.884 96.627
pima 71.875 71.875 73.438 75.521 76.562
pittsburg-bridges-MATERIAL 73.077 76.923 88.462 84.615 92.308
pittsburg-bridges-REL-L 69.231 65.385 65.385 73.077 61.538
pittsburg-bridges-SPAN 52.174 56.522 65.217 65.217 60.870
pittsburg-bridges-T-OR-D 84.000 88.000 84.000 84.000 88.000
pittsburg-bridges-TYPE 38.462 50.000 61.538 65.385 53.846
planning 64.444 71.111 71.111 68.889 71.111
plant-margin 76.750 6.750 71.250 69.500 74.000
plant-shape 39.000 20.750 31.500 65.750 70.500
plant-texture 74.250 4.000 69.750 69.000 77.250
post-operative 72.727 72.727 77.273 72.727 72.727
primary-tumor 45.122 30.488 47.561 47.561 51.220
ringnorm 95.027 98.108 84.324 96.054 98.324
seeds 80.769 80.769 86.538 94.231 92.308
semeion 95.729 92.462 94.724 88.693 94.472
soybean 65.426 18.617 89.628 82.979 83.777
spambase 93.826 92.609 92.609 93.478 93.913
spect 61.828 56.989 67.204 65.054 68.280
spectf 49.733 91.979 60.963 60.428 91.979
statlog-australian-credit 66.860 68.023 68.023 63.372 62.209
statlog-german-credit 73.600 76.000 74.400 76.800 74.800
statlog-heart 89.552 79.104 89.552 82.090 83.582
statlog-image 96.360 95.841 90.988 97.054 97.747
statlog-landsat 89.900 91.250 83.450 88.950 90.600
statlog-shuttle 98.621 99.945 98.021 99.910 99.945
statlog-vehicle 73.934 48.341 78.199 79.621 74.882
steel-plates 74.845 64.536 68.041 76.495 77.526
synthetic-control 73.333 69.333 97.333 96.667 99.333
teaching 60.526 36.842 55.263 52.632 47.368
thyroid 98.308 98.775 96.820 97.841 98.804
tic-tac-toe 97.071 97.071 97.071 97.071 96.653
titanic 77.636 77.091 78.364 78.364 78.364
trains 100.000 50.000 100.000 100.000 100.000
twonorm 98.270 98.108 98.162 98.108 98.054
vertebral-column-2clases 83.117 67.532 87.013 87.013 85.714
vertebral-column-3clases 70.130 59.740 84.416 68.831 85.714
wall-following 86.437 98.827 72.507 90.396 97.434
waveform 87.520 87.360 87.760 86.800 87.760
waveform-noise 85.920 85.360 85.360 84.720 85.840
wine 100.000 70.455 100.000 100.000 100.000
wine-quality-red 59.000 57.500 57.750 63.750 61.000
wine-quality-white 56.863 53.758 53.513 57.761 56.944
yeast 60.108 53.908 60.377 59.838 59.838
zoo 96.000 48.000 96.000 96.000 92.000
1. What is the focus and contribution of the paper on learning multiple heterogeneous supervised tasks?
2. What are the strengths of the proposed model, particularly in its ability to learn task embeddings and outperform state-of-the-art methods?
3. Are there any minor suggestions for improving the paper, such as clarifying notations and providing more details in certain sections?
4. How does the reviewer assess the novelty and broad interest of the proposed approach?
5. Are there any similarities or equivalencies between the proposed model and other existing works, such as conditional neural processes?
Review
This paper presents the traveling observer model (TOM), a general framework to learn multiple heterogeneous supervised (input, output) tasks, which are indexed by a continuous "variable embedding" that is automatically learned by the system. The authors show on simple problems that the learned task embeddings can recover an intuitive organization of the problems' variables in space or time. They also show that the model simultaneously trained on 121 seemingly unrelated classification tasks can outperform state-of-the-art supervised methods fine-tuned on single tasks.

The proposed model is novel, technically sound, of broad interest, and very promising. The paper is clearly written and easy to follow. The presented experiments convincingly demonstrate the sensibility and usefulness of the approach. The topic perfectly fits the scope of ICLR.

Minor suggestions for improvement:
- Section 2 (first paragraph): the notations are a bit confusing here. First, the sample indices s = 1...S_t are denoted as superscripts while task indices t = 1...T are denoted as subscripts, but then the sample indices are dropped and never used again, while task indices become superscripts and variable dimensions are denoted as subscripts. The definitions of the sets V_t^In and V_t^out are also strange. I think they should denote the union of all the spaces that variables are living in, but instead they are defined as finite sets of specific variables. The definition of "the universe" V in Section 3 is also a bit sketchy. Is that a set of sets? A category?
- I think that when using a pre-defined "oracle" variable embedding, the proposed model becomes very similar or even equivalent to conditional neural processes (Garnelo et al., 2018). It would be interesting to comment on that.
- There is an unfortunate double use of the letter h for two different things in equations (3) and (4).
- Sec. 4.4: "after joint training the model is finetuned on each task with at least 5K samples" -> is the whole model fine-tuned or only the function g? Or g and h? Please clarify.
ICLR
Title
Variadic Learning by Bayesian Nonparametric Deep Embedding

Abstract
Learning at small or large scales of data is addressed by two strong but divided frontiers: few-shot learning and standard supervised learning. Few-shot learning focuses on sample efficiency at small scale, while supervised learning focuses on accuracy at large scale. Ideally they could be reconciled for effective learning at any number of data points (shot) and number of classes (way). To span the full spectrum of shot and way, we frame the variadic learning regime of learning from any number of inputs. We approach variadic learning by meta-learning a novel multi-modal clustering model that connects bayesian nonparametrics and deep metric learning. Our bayesian nonparametric deep embedding (BANDE) method is optimized end-to-end with a single objective, and adaptively adjusts capacity to learn from variable amounts of supervision. We show that multi-modality is critical for learning complex classes such as Omniglot alphabets and carrying out unsupervised clustering. We explore variadic learning by measuring generalization across shot and way between meta-train and meta-test, show the first results for scaling from few-way, few-shot tasks to 1692-way Omniglot classification and 5k-shot CIFAR-10 classification, and find that nonparametric methods generalize better than parametric methods. On the standard few-shot learning benchmarks of Omniglot and mini-ImageNet, BANDE equals or improves on the state-of-the-art for semi-supervised classification.

1 INTRODUCTION

In machine learning, classification problems span two important axes: the number of classes to recognize (the "way" of the problem) and the number of examples provided for each class (the "shots" to learn from). At one extreme, there are large-scale tasks like ImageNet in which there are 1000 classes with roughly 1000 examples each (a 1000-way, ∼1000-shot problem). At the other extreme, there are datasets for learning from few examples, such as Omniglot, which features a 5- or 20-way, 1-shot problem. State-of-the-art methods for these two learning regimes are substantially different, with the former dominated by standard parametric deep networks and the latter by episodic meta-learning techniques. Moreover, as shown in our experiments, many methods degrade when the shot and way vary between training and testing. By contrast, humans recognize both familiar and unfamiliar categories whatever the amount of data, and can even learn a new category from a single example (Lake et al., 2015). To this end, we introduce a learning problem which requires generalization from few-way, few-shot problems to many-way, many-shot problems. We call this regime of variable shot and way the variadic learning regime, after variadic functions. Just as variadic functions are those which can take any number of arguments to produce a result, a good variadic learner must learn from any amount of data, whatever the number of examples and classes, and produce strong results across unknown data distributions during test. Meta-learning provides one potential avenue for pursuing a variadic learner. Meta-learning approaches generally use plentiful supervision from one distribution of tasks to learn an algorithm or metric that can be applied to more sparsely supervised tasks. Ideally, meta-learning approaches do not need knowledge of the specific setting in which they will be used.
However, in practice, meta-learning approaches have commonly been trained and evaluated in constrained circumstances, so their generalization properties are not fully known. Perhaps most significantly, meta-learning is usually carried out independently across settings so that a different learner is specialized to each n-way, k-shot task. This potentially limits the deployment of such learners to the more diverse settings with variable shot and way that we address in this work. As a first step towards a strong variadic learner, we propose a multi-modal (many-to-one) semi-supervised clustering approach which can adapt its capacity to the underlying class representations, and show that this is critical for modeling more complex data distributions. This innovation allows our model to perform inference with any amount of supervision (from totally unsupervised to fully supervised) after training, and adjust better to variable shot and way than existing approaches. Our bayesian nonparametric deep embedding (BANDE) model (see Figure 1) extends prototypical networks to multi-modal clustering. Clustering with multiple modes is critical for complex classes, and multi-modality makes unsupervised clustering possible. BANDE generalizes across any-shot, any-way tasks better than existing methods. At the many-way extreme, when trained with 5-way 1-shot episodes, BANDE achieves 75% accuracy for 1692-way 10-shot classification of Omniglot, improving on both few-shot and supervised learning baselines. At the many-shot extreme, BANDE approaches the accuracy of a standard supervised learner on CIFAR-10/100. On standard few-shot benchmarks BANDE is state-of-the-art in the semi-supervised setting.

2 RELATED WORK

Prototypes and Nonparametrics. Prototypical networks (Snell et al., 2017) and semi-supervised prototypical networks (Ren et al., 2018) are the most closely related to our work. Prototypical networks simply and efficiently represent each class by its mean in a learned embedding. They assume that the data is fully labeled. Ren et al. (2018) extend prototypes to the semi-supervised setting by refining prototypes through soft k-means clustering of the unlabeled data. They assume that the data is at least partially labeled. Snell et al. (2017) and Ren et al. (2018) are limited to one cluster per class. We define a more general and adaptive approach through bayesian nonparametrics that extends prototypical networks to multi-modal clustering, with one or many clusters per class, of labeled and unlabeled data alike. Through multi-modal representation and adaptive inference of the number of modes, our method is significantly more accurate on complex classes, does unsupervised clustering, and improves on standard semi-supervised few-shot learning benchmarks. For multi-modal clustering we incorporate DP-means (Kulis & Jordan, 2012) in our method. DP-means is a scalable, bayesian nonparametric algorithm for unsupervised clustering that creates new clusters when data points are more than a threshold λ away from existing clusters. Our full method handles labeled and unlabeled data, augments the clustering with soft assignments under a normalized Gaussian likelihood, and defines a procedure to choose λ during learning and inference.

Metric Learning. Learning a metric to measure a given notion of distance/similarity addresses recognition by retrieval: given an unlabeled example, find the closest labeled example.
The contrastive loss and siamese network architecture (Chopra et al., 2005; Hadsell et al., 2006) optimize an embedding for metric learning by pushing similar pairs together and pulling dissimilar pairs apart. Of particular note is research in face recognition, where a same/different retrieval metric is used for many-way classification (Schroff et al., 2015). Our approach is more aligned with metric learning by meta-learning (Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Garcia & Bruna, 2018). These approaches meta-learn a distance function by directly optimizing the task loss, such as cross-entropy for classification, through episodic optimization (Vinyals et al., 2016) for each setting of way and shot. While we likewise learn by episodic optimization, we differ from previous meta-learning work in our examination of generalization to variable numbers of examples and classes during testing, and show improvement in this regime. Unlike metric learning on either exemplars (Schroff et al., 2015) or prototypes (Snell et al., 2017; Ren et al., 2018), our method adaptively interpolates between exemplar and uni-modal prototype representations by deciding the number of modes during clustering.

Learning Regimes. Variadic learning is best explained in relation to few-shot learning, low-shot learning, and conventional supervised learning. Few-shot learning (Fei-Fei et al., 2006; Vinyals et al., 2016) handles tasks of fixed, known, and small numbers of data points and classes. In contrast, variadic tasks have variable numbers of data points and classes that can shift across tasks. Low-shot learning (Hariharan & Girshick, 2017; Qi et al., 2018; Qiao et al., 2018) addresses both densely supervised base classes and sparsely supervised novel classes, but presupposes which classes are in which set. Variadic learning also addresses these extremes of supervision, but requires no knowledge of how much or how little supervision each class has. Large-scale supervised learning (Bottou, 2010) parameterizes the model by the number of classes, and is tuned to the amount of data by choosing capacity, optimization schedules, and so forth. Variadic learning requires accuracy without specialization to shot and way. Life-long learning (Thrun, 1996; 1998) concerns variable shot and way for streams of non-stationary problems, while variadic learning is for one problem of unknown dimensions. Bridging life-long and variadic learning is sensible but out of scope for this work.

3 BAYESIAN NONPARAMETRIC DEEP EMBEDDINGS (BANDE)

Our method learns a deep embedding network end-to-end and jointly clusters labeled and unlabeled data points by bayesian nonparametrics. Crucially, our model is able to express a single class as multiple modes, unlike the uni-modal clustering approaches of prior work. Figure 1 gives a schematic view of our multi-modal representation and how it differs from prior prototypical representations. Algorithm 1 expresses one step of model optimization in pseudocode.

Few-shot Meta-learning. In few-shot classification we are given a support set $S = \{(x_1, y_1), \ldots, (x_K, y_K)\}$ of K labeled examples and a query set $Q = \{(x'_1, y'_1), \ldots, (x'_{K'}, y'_{K'})\}$ of K' labeled examples, where each $x_i, x'_i \in \mathbb{R}^D$ is a D-dimensional feature vector and $y_i, y'_i \in \{1, \ldots, N\}$ is the corresponding label. In the semi-supervised setting, $y_i$ may not be provided for every example $x_i$.
The support set is for learning while the query set is for inference: the few-shot classification problem is to recognize the class of the queries given the labeled supports. Meta-learning is carried out by episodic optimization of the model parameters for the task loss. Episodes are comprised of support and query sets, constructed by randomly sampling a subset of classes, sampling examples from these classes, and then partitioning the examples into supports and queries. Optimization iterates by making one episode and one update. The update is defined by the task loss, which for classification could be the softmax cross-entropy loss. For deep metric learning models like ours, the model parameters are those of the embedding function $h_\phi : \mathbb{R}^D \to \mathbb{R}^M$, a deep network with parameters φ. The embedding of an example x is the M-dimensional feature vector taken from the last layer of the network. Meta-training proceeds by optimizing the model parameters φ with respect to a task loss. Meta-testing proceeds episodically like meta-training, but without query labels or further optimization.

Prototypes. Prototypical networks (Snell et al., 2017) take the mean of the embedded support examples of a particular class to form a prototype:

$$\mu_n = \frac{1}{|S_n|} \sum_{(x_i, y_i) \in S_n} h_\phi(x_i),$$

with $S_n$ denoting the set of support examples of class n. In conjunction with a distance function $d(x_i, x_j)$, this provides an inference scheme for a query point x as the softmax over distances to the prototypes:

$$p_\phi(y = n \mid x) = \frac{\exp(-d(h_\phi(x), \mu_n))}{\sum_{n'} \exp(-d(h_\phi(x), \mu_{n'}))}.$$

φ is optimized by minimizing the negative log-probability of the true class of each query point by stochastic gradient descent in each episode. Prototypical networks defined in this way learn to create uni-modal class distributions for fully labeled supports.

Multi-modal Clustering. Our method defines multi-modal prototypes of both labeled and unlabeled data. That is, a single class is represented by a set of cluster modes. By deciding the number of modes, our method interpolates between exemplar and uni-modal prototype representations, in effect adjusting its capacity depending on the data. To create multi-modal prototypes, we extend the nonparametric clustering algorithm DP-means (Kulis & Jordan, 2012) to make it compatible with end-to-end learning. DP-means iterates through all examples in a dataset, computing each example's minimum distance to all existing cluster means. If this distance is greater than a particular threshold λ, a new cluster is created with mean $h_\phi(x_i)$, and the example is assigned to it. If $x_i$ is labeled, the new cluster takes on its label. While we use DP-means for cluster creation, we include cluster variances for reassignment. Labeled clusters are assigned a variance $\sigma_l$ and unlabeled clusters are assigned a variance $\sigma_u$. $\sigma_l$ and $\sigma_u$ are differentiable, and therefore learned along with the embedding parameters φ. (We discuss the probabilistic interpretations of this choice in the next section.)

λ, the threshold for creating a new cluster, is the sole hyperparameter for DP-means clustering. It is non-differentiable, and so it cannot be learned jointly. Instead, we set λ episodically as a function of the data. In Kulis & Jordan (2012), λ is parameterized as

$$\lambda = -2\sigma \log\left(\frac{\alpha}{(1 + \rho/\sigma)^{d/2}}\right).$$

α is the relative probability of forming a new cluster in the Chinese Restaurant Process prior (Aldous, 1985), and ρ is a measure of the standard deviation of the base distribution from which clusters are assumed to be drawn. We estimate ρ as the variance in the labeled cluster means within an episode, while α is treated as a hyperparameter. In our experiments, we found a wide range of α values to give similar results, with the embeddings adjusting their overall magnitudes to match the magnitude of α.
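A minimal sketch of this per-episode threshold computation (PyTorch; reading ρ as the scalar variance of the labeled cluster means, which is one interpretation of the estimate above; names are illustrative):

    import torch

    def dp_means_lambda(labeled_means, sigma, alpha, d):
        """lambda = -2*sigma * log( alpha / (1 + rho/sigma)^(d/2) )."""
        # rho: spread of the base distribution, estimated from the
        # labeled cluster means within the episode.
        rho = labeled_means.var()
        return -2.0 * sigma * (torch.log(torch.as_tensor(alpha))
                               - 0.5 * d * torch.log(1.0 + rho / sigma))

    means = torch.randn(5, 64)  # 5 labeled cluster means in a 64-dim embedding
    lam = dp_means_lambda(means, sigma=1.0, alpha=0.1, d=64)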
3.1 PROBABILISTIC INTERPRETATIONS OF HARD AND SOFT CLUSTERING

The choice of hard or soft clustering has theoretical ramifications. There are three clustering variants to consider: fully hard, fully soft, and hybrid hard-soft. Fully hard clustering corresponds to following DP-means in a theoretically exact manner, with both $\sigma_u$ and $\sigma_l$ set to 0, and the UPDATEASSIGNMENTS function assigning $z_i = \arg\min_c d_{i,c}$ for each example i. This variant is a theoretically exact extension of DP-means to end-to-end learning and simultaneous clustering of labeled and unlabeled data. Fully soft clustering corresponds to an extension and reinterpretation of prior work on semi-supervised prototypical networks (Ren et al., 2018) (see Section A.4). Through the lens of bayesian nonparametrics, we derive this connection to an approximation of the Chinese Restaurant Process (CRP) (Aldous, 1985) in Section A.4 of the appendix. While fully hard and fully soft clustering admit clearer probabilistic interpretations, they are empirically less accurate than hybrid hard-soft clustering.

Table 1: Clustering comparison on 5-way 1-shot semi-supervised Omniglot.
Clustering          Accuracy
Hard-Hard           97.0
Soft-Soft           98.4
BANDE (Hard-Soft)   99.0

Table 1 compares the variants on a standard semi-supervised few-shot learning benchmark (detailed further in Section 4.3). BANDE does hard-soft clustering throughout our experiments. For hard-soft clustering, UPDATEASSIGNMENTS does soft assignment of

$$z_{i,c} = \frac{\mathcal{N}(h_\phi(x_i); \mu_c, \sigma_c)}{\sum_{c'} \mathcal{N}(h_\phi(x_i); \mu_{c'}, \sigma_{c'})}$$

for all examples i.

3.2 CUMULATIVE SUPERVISION

We extend BANDE into a cumulative variant, BANDE-C, that accumulates supervision non-episodically by remembering prototypes across episodes. Concretely, we initialize the cluster means $\mu_c$ by combining a cluster mean from memory, $\phi_{m,c}$, with the current episodic sample mean, i.e.,

$$\mu_c = \frac{1}{|\{i : z_i = c\}| + 1} \Big( \phi_{m,c} + \sum_{i : z_i = c} h_\phi(x_i) \Big).$$

$\phi_{m,c}$ is computed as if c were uni-modal, regardless of whether the clustering was multi-modal in a previous episode. Since the embedding representation rapidly changes early in training, we introduce a discount factor on the stored embedding, $\gamma \phi_{m,c}$, proportional to the current learning rate. Whenever the class is encountered in a future episode, we update the remembered prototype with the cluster mean after episodic inference. We only experiment with BANDE-C in the variadic setting (Section 4.2); everywhere else we keep standard episodic training and testing. Note that standard prototypical networks can likewise be augmented to remember prototypes and non-episodically accumulate supervision in this manner.

Algorithm 1 BANDE: one optimization episode. $n_s$ is the number of labeled classes (way) and $k_s$ is the number of labeled examples of each class (shot). $k_q$ is the number of query examples per class. For a set A, $A_n$ is the subset of A with all examples of class n. $p(x \mid \mu, \sigma)$ is the Gaussian density.
Input: support set S, query set Q, and unlabeled set U.
Output: loss J for the episode.
    C ← n_s                                   ▷ C is the total number of clusters
    for c ∈ {1, ..., C} do
        l_c ← c                               ▷ l_c is the cluster label
        µ_c ← (1/k_s) Σ_{(x_i, y_i) ∈ S_c} h_φ(x_i)   ▷ µ_c is the cluster mean
        σ_c ← σ_l                             ▷ σ_c is the cluster variance
    end for
    ▷ Iterate over the labeled and unlabeled data and create new clusters.
    for each example i ∈ S ∪ U do
        for c ∈ {1, ..., C} do
            d_{i,c} ← ‖h_φ(x_i) − µ_c‖² if (example i is labeled and l_c = y_i) or example i is unlabeled; +∞ otherwise
        end for
        if min_c(d_{i,c}) > λ then
            C ← C + 1
            l_C ← y_i                         ▷ Cluster takes the label of the example
            µ_C ← h_φ(x_i)                    ▷ Cluster mean takes the embedding of the example
            σ_C ← σ_l if y_i ≠ 0, else σ_u
        end if
    end for
    z ← UPDATEASSIGNMENTS({h_φ(x)}, µ, σ)     ▷ Update all cluster-example assignments
    µ_c ← (Σ_i z_{i,c} h_φ(x_i)) / (Σ_i z_{i,c}) for all c ∈ {1, ..., C}   ▷ Update all cluster means
    ▷ Cross-entropy loss on the most probable cluster of the true class and all clusters of other classes
    J ← 0
    for n ∈ {1, ..., n_s} do
        c* ← argmax_{c : l_c = n} log p(x | µ_c, σ_c)
        J ← J + (1/(n_s k_q)) Σ_{(x,y) ∈ Q_n} [ −log p(x | µ_{c*}, σ_{c*}) + log( Σ_{c' : l_{c'} ≠ n} p(x | µ_{c'}, σ_{c'}) + p(x | µ_{c*}, σ_{c*}) ) ]
    end for

4 EXPERIMENTS

We experimentally show that multi-modal prototypes are more accurate and more general than uni-modal prototypes. In our new variadic setting for any-shot, any-way learning, we explore which methods do (and do not) generalize across shot and way. We report the first results for extreme generalization to 1692-way classification and 5000-shot classification from few-shot episodic optimization. For few-shot learning, we show competitive results for few-shot fully-supervised and semi-supervised classification on the standard benchmarks of Omniglot and mini-ImageNet. We control for architecture and optimization by comparing methods with the same base architecture and the same episodic optimization settings. All code for our method and baselines will be released.

For these experiments we make use of standard few-shot and supervised learning datasets and furthermore define new variadic evaluation protocols on these common benchmarks. We consider Omniglot and mini-ImageNet, two widely used datasets for few-shot learning research, and CIFAR-10/CIFAR-100, two popular datasets for supervised learning research with deep networks. Omniglot (Lake et al., 2015) is a dataset of 1,623 handwritten characters from 50 alphabets. There are 20 examples of each character, where the images are resized to 28x28 pixels and each image is rotated by multiples of 90°. This gives 6,492 classes in total, which are then split into 4,112 training classes, 1,692 test classes, and 688 validation classes. mini-ImageNet (Vinyals et al., 2016) is a reduced version of the ILSVRC'12 dataset (Russakovsky et al., 2015), which contains 600 84x84 images for 100 classes randomly selected from the full dataset. We use the split from Ravi & Larochelle (2017) with 64/16/20 classes for train/val/test. CIFAR-10/100 (Krizhevsky & Hinton, 2009) are classification datasets of 32x32 color images drawn from the Tiny Images project (Torralba et al., 2008). CIFAR-10 has 10 classes and CIFAR-100 has 100 classes (plus 20 super-classes). Both have 50k training images and 10k testing images, and both are balanced so that every class has an equal number of images.

4.1 ACCURACY AND GENERALITY OF MULTI-MODAL PROTOTYPES

Our experiments on Omniglot alphabets and characters show that multi-modal prototypes are significantly more accurate than uni-modal prototypes for recognizing complicated classes (alphabets) and recover uni-modal prototypes as a special case for recognizing simple classes (characters). Multi-modal prototypes generalize better for super-class to sub-class transfer learning, improving accuracy when meta-training on alphabets but meta-testing on characters.
By unifying the clustering of labeled and unlabeled data alike, our multi-modal prototypes even address fully unsupervised clustering, unlike prior prototypical network models that are undefined without labels.

We first show the importance of multi-modality for learning representations of multi-modal classes: Omniglot alphabets. For these experiments we meta-train for alphabet classification, using only the super-class labels. Episodes are constructed by sampling 1 example of 200 different random characters in the support set, with 5 examples of each character in the query. For alphabet testing, we provide 100 randomly selected characters with alphabet labels in the support, making this a mixed-shot problem. For character testing, we provide 1 labeled image of 20 different characters as support, and score based on correct character assignments of the queries. As seen in Table 2, in both testing configurations, BANDE substantially outperforms prototypical networks. On 20-way 1-shot character recognition, BANDE achieves 95.3% from alphabet supervision alone, slightly outperforming prototypical networks trained directly on character recognition (94.9%).

Fully Unsupervised Clustering. BANDE is able to do fully unsupervised clustering during meta-test via multi-modality. Prior work on prototypical networks (Snell et al., 2017) and semi-supervised prototypical networks (Ren et al., 2018) cannot address this setting because the models are undefined without labeled data. BANDE handles labeled and unlabeled data by the same clustering rule, inferring the number of clusters as needed, and achieves good accuracy under the standard clustering metrics of normalized mutual information (NMI) and purity. We examine BANDE's clustering performance in Table 3 by randomly sampling 5 examples of n classes from the test set and treating them as unlabeled samples. BANDE maintains remarkably strong performance across a large number of unlabeled clusters, without knowing the number of classes in advance, and without having seen any examples from the classes during training.

4.2 ANY-SHOT, ANY-WAY LEARNING IN THE VARIADIC SETTING

We now move to the any-shot, any-way setting that this paper introduces. We closely examine extreme generalization across shot and way between meta-train and meta-test, unlike previous approaches which only examine small shifts (Munkhdalai & Yu, 2017; Snell et al., 2017). Most notably, we show that nonparametric methods such as BANDE can generalize from few-way training to many-way testing, while parametric methods fail to transfer effectively. We further show that BANDE, a nonparametric method, performs on par with fully parametric methods in the domain of supervised learning; the first demonstration of a meta-learning method evaluated in the many-shot domain without pre-training. These two results cement the suitability of nonparametric meta-learning methods over parametric methods for the variadic setting.

Semi-supervised protocol. We train and test BANDE and other prototypical methods on semi-supervised data to include the number of labeled and unlabeled examples in the scope of the variadic setting. We follow Ren et al. (2018), taking only 40% of the data as labeled for both the support and query, while the rest of the data is included, but as unlabeled examples. The unlabeled data is then incorporated into episodes as (1) within-support examples that allow for semi-supervised refinement of the support classes or (2) distractors which lie in the complement of the support classes.
Semi-supervised episodes augment the fully supervised n-way, k-shot support with 5 unlabeled examples for each of the n classes and include 5 more distractor classes with 5 unlabeled instances each. The query still contains only support classes.

Variable Shot and Way. We first look at generalization by moderately adjusting the shot and way in evaluation from their fixed settings during meta-learning. For variable way, we consider Omniglot, because it has many classes. For variable shot, we consider mini-ImageNet, because it has more examples per class. In both cases, we train on 5-way, 1-shot episodes, and test generalization by varying the number of classes and number of examples during meta-testing. We consider four strong fully-supervised baselines trained on 100% of the data (black lines), as well as prototypical baselines trained on 40% of the data (colored). We compare to three parametric methods, MAML (Finn et al., 2017), Reptile (Nichol & Schulman, 2018), and few-shot graph networks (Garcia & Bruna, 2018), as well as the nonparametric memory-based model of Kaiser et al. (2017). Modifications to these approaches for test-way generalization are discussed in Section A.3. As seen in Figure 2(a), the parametric meta-learning approaches fail to meaningfully generalize to higher way than they were trained for. BANDE is the least sensitive to higher-way meta-testing, although the margin between BANDE and semi-supervised prototypical networks in this regime is small compared to the difference with parametric methods. For shot generalization, we compare to MAML's accuracy after 10 updates vs. accuracy at convergence. We note that MAML is not able to make effective use of more data unless it is allowed to take proportionately larger numbers of updates, while our method improves with more data without taking gradients at test time. Even at convergence, MAML lags BANDE's performance, suggesting that a nonparametric approach is still superior to parametric meta-learning.

Extreme Generalization to Many-Way. We demonstrate that BANDE can learn a full 1692-way classifier for Omniglot from only episodic optimization of 5-way 1-shot tasks. Episodes are composed identically to the few-shot semi-supervised setting with unlabeled examples and distractor classes. Accuracies for our method and a supervised learning baseline are shown in Figure 3. For inference, we run k examples from each test class through our learned embedding network, and then assign the unseen examples the label of the closest prototype. The baseline shares the same training set and architecture, substituting a linear output layer for prototypes by optimizing the softmax cross-entropy loss. We take the last feature layer as the embedding for prototypical inference. Fine-tuning on the test support proved less accurate, as did k-nearest neighbors inference. This result is an example of episodic optimization yielding strong results for many-way classification, motivating the possibility of learning large-scale models cumulatively from small-scale tasks, instead of restricting attention to the adaptation of large-scale models to small-scale, few-shot settings.

Scaling to Many-Shot. We examine the effectiveness of BANDE in the conventional supervised learning regime. To the best of our knowledge this is the first evaluation of meta-training across the spectrum from few-shot to many-shot.
Scaling to Many-Shot We examine the effectiveness of BANDE in the conventional supervised learning regime. To the best of our knowledge, this is the first evaluation of meta-training across the spectrum from few-shot to many-shot. Our base architecture is the Wide ResNet 28-10 of Zagoruyko & Komodakis (2016), which has shown state-of-the-art results on CIFAR-10/100, and has additionally been used as a base architecture for strong low-shot performance on mini-ImageNet (Qiao et al., 2018). We optimize BANDE by meta-training on episodes consisting of 10-way, 2-shot tasks (CIFAR-10) and 20-way tasks (CIFAR-100), for computational reasons. With no knowledge of the total number of classes or number of examples per class, and without pre-training or fine-tuning, we achieve accuracies that rival a well-tuned supervised learning baseline. On CIFAR-10 we achieve 94.4% accuracy compared to the 95.1% accuracy of supervised learning. On CIFAR-100 we achieve 75.6% accuracy, which is > 90% of the 81.2% accuracy of supervised learning. When evaluating both the BANDE and supervised learning embeddings as prototypes, the accuracies are equal, suggesting that both approaches learn equally good representations and differ only in the prototypical vs. parametric form of the classifier.
4.3 FEW-SHOT CLASSIFICATION BENCHMARKS
We evaluate BANDE on the standard few-shot classification benchmarks of Omniglot and mini-ImageNet in the fully-supervised and semi-supervised regimes. BANDE learns to recover uni-modal clustering as a special case, matching or outperforming prototypical networks when the classes are uni-modal, as seen in Table 4. In this setting, we evaluate BANDE in the standard episodic protocol of few-shot learning. In this protocol, shot and way are fixed and classes are balanced within an episode. The results reported in Table 4 are for models trained and tested with n-way episodes, to equalize comparison across methods. Snell et al. (2017) train at higher way than testing and report a boost in accuracy. We find that this boost is illusory, and explained away by controlling for the number of gradients per update. We show this by experiment through the use of gradient accumulation in Section A.2 of the appendix. (For completeness, we confirmed that our implementation of prototypical networks reproduces reported results at higher way.) In the semi-supervised setting we follow Ren et al. (2018), using the setup outlined in the second paragraph of Section 4.2. Our results for this setting are reported in Table 5. Through multi-modality, the clustering of the labeled classes and distractors is decided by the data with a single rule. In particular, this helps with the distractor distribution, which is in fact more diffuse and multi-modal because it comprises several different classes. Our only specialization to this setting is to make distractor clusters more uncertain, via higher cluster variances, to compensate for this diffuseness.
5 CONCLUSION
We framed the variadic regime to shine a light on learning representations that bridge small-scale and large-scale learning and strive toward the any-shot/any-way adaptability of human perception. As a step toward addressing this full span, we introduced BANDE, a multi-modal extension of prototypical networks, that is capable of generalizing across variable amounts of labeled and unlabeled data. Our results have shown BANDE is state-of-the-art in the few-shot regime and scales from few-way, few-shot meta-learning to many-way, many-shot deployment for both sparse and plentiful supervision. Our experiments demonstrate that multi-modality is key for improved semi-supervised and unsupervised clustering.
There is much work to be done to improve variadic generalization, and to connect to life-long learning over non-stationary tasks.
A APPENDIX
A.1 IMPLEMENTATION DETAILS
For all few-shot experiments, we use the same base architecture as prototypical networks for the embedding network. It is composed of four convolutional blocks, each consisting of a 64-filter 3 x 3 convolution, a batch normalization layer, a ReLU nonlinearity, and a 2 x 2 max-pooling layer. This results in a 64-dimensional embedding vector for Omniglot, and a 1600-dimensional embedding vector for mini-ImageNet. Our models were trained via SGD with RMSProp (Tieleman & Hinton, 2012) with an α parameter of 0.9. For Omniglot, the initial learning rate was set to 1e-3, and cut by a factor of two every 2000 iterations, starting at 4000 iterations. We additionally use gradient accumulation and accumulate gradients over eight episodes before making an update when performing 5-way training for Omniglot. For mini-ImageNet, the initial learning rate was set to 1e-3, and further halved every 20000 iterations, starting at 40000 iterations. For the supervised experiments, we use a wide residual network (Zagoruyko & Komodakis, 2016) with depth 28 and widening factor 10, with a dropout value of 0.3. We were not able to perfectly recover published results with our reimplementation, but the numbers are within 1% of their published values.
A.2 CONTROLLING FOR THE NUMBER OF GRADIENTS TAKEN DURING OPTIMIZATION
Consider the gradient of the loss: it has the dimensions of shot × way because every example has a derivative with respect to every class. Thus, by default, the episode size determines the number of gradients in an update. Quantitatively, 20-way episodes accumulate 16 times as many gradients as 5-way episodes. By sampling 16 5-way episodes and accumulating the gradients to make an update, we achieve significantly better results, matching the results obtained with 20-way episodes within statistical significance. Note that agreement across conditions may not be perfectly exact because subtle adjustments to hyperparameters might be necessary.
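A minimal PyTorch-style sketch of this accumulation scheme is below. The `sample_episode` and `episode_loss` helpers are hypothetical, and averaging the loss over episodes is one reasonable choice rather than necessarily the one used here.

```python
def accumulated_update(model, optimizer, sample_episode, episodes_per_update=16):
    """Accumulate gradients over several small episodes before one update, so
    that sixteen 5-way episodes contribute as many gradients as one 20-way
    episode. `sample_episode` and `episode_loss` are hypothetical helpers."""
    optimizer.zero_grad()
    for _ in range(episodes_per_update):
        episode = sample_episode()               # one 5-way, 1-shot episode
        loss = model.episode_loss(episode)       # e.g. the loss J of Algorithm 1
        (loss / episodes_per_update).backward()  # gradients sum into .grad
    optimizer.step()
```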
A.3 EXTENDING COMPARED MODELS TO THE VARIADIC REGIME
The models we compare to were not designed with variadic generalization in mind, and as a result we attempt to make as fair a comparison as possible by extending them as needed. We describe our approaches below.
Semi-supervised prototypical networks In the paper first introducing this semi-supervised setting (Ren et al., 2018), the authors show how to use a distractor cluster centered at 0 to capture samples not belonging to any examples from the support. They additionally introduce length scales $r_c$. In equation 6 of their paper, they use a normalization constant $A(r_c)$ defined as $0.5 \log(2\pi) + \log(r_c)$. However, this is an unscaled normalization constant, and assumes the dimensionality of the embedding space to be 1. The corrected normalization constant is $A(r_c) = d(\log(r_c) + 0.5 \log(2\pi))$, where $d$ is the dimensionality of the embedding. We compare to their method with this corrected normalization constant, but note that it has only a small effect. For space, we did not compare to all methods from their paper, and chose this one because it performed well across their experiments and because it was most amenable to the clustering experiments we were interested in performing.
MAML We used Finn's publicly available github repository (Finn et al., 2017). We trained an initial MAML architecture on the 5-way 1-shot task, using the suggested hyperparameters, for 40,000 iterations. We then removed the classification layer, froze the remaining weights of the network (for optimization across episodes, not for gradient descent within an episode), and retrained the top layer for the testing n-way classification task, using the MAML objective again, for 5000 iterations. We tried two hyperparameter settings for the re-training: the hyperparameters for the 5-way 1-shot setting, and the hyperparameters for the 20-way 1-shot setting. We found that re-training with the 20-way 1-shot hyperparameters gave us better performance. While we attempted to also scale these hyperparameters appropriately for even higher-way testing, this was not more successful than using the 20-way 1-shot hyperparameters. We then reported the accuracy after 10 update steps on the test data. We also tried simply randomly initializing the top-layer weights and allowing MAML to take more update steps to see if it could learn the top layer online. These results were worse than those obtained after the fine-tuning procedure.
Reptile We used the publicly available github repository from OpenAI. We used transductive training for 100,000 iterations on the 5-way 1-shot task, using the suggested hyperparameters. We then removed the classification layer, froze the remaining weights of the network, and retrained the top layer for the testing n-way classification task, using the Reptile training procedure. As in MAML, we tried setting hyperparameters during re-training to be similar to 5-way 1-shot and 20-way 1-shot, but did not notice significant differences. Using random initializations for the top-layer weights, and then applying "fast weight" updates at test time, also worked reasonably well.
Graph Neural Networks Modifying the Graph Neural Network architecture to be applicable for test-way generalization was more difficult, since the approach assumes that labels are represented as a one-hot encoding and concatenated with node features before being fed to the metric network. At training, we padded the one-hot labels to allow for 200 possible classes. At test time, these could then be filled in without needing to completely retrain the metric network. We additionally fine-tuned the classification layer of the metric network. We were unable to achieve greater-than-chance performance for the 200-way task. We expect that this is because the metric network learns to ignore the padded input dimensions during training. One possible fix would be to randomize the labels during training to fall in the full (0, 200) range, but we leave this to future work. Scaling this approach up to full-way classification is impractical with this encoding of the labels, as the computational memory requirements are substantial.
A.4 SOFT-SOFT CLUSTERING BY APPROXIMATING THE CHINESE RESTAURANT PROCESS
Here we discuss an alternative to BANDE which follows Gibbs sampling in an infinite mixture model more closely, in that it incorporates the variances of clusters throughout, instead of only during reassignment as in BANDE. This fully soft variant has a probabilistic interpretation through the Chinese Restaurant Process (CRP) of Aldous (1985), but in our experiments it achieves lower accuracy than BANDE. For a certain setting of its parameters, we can reinterpret it as an infinite mixture model extension of Ren et al. (2018), which did not include this theoretical perspective.
The generative model of the CRP consists of sampling assignments $z_1, \dots, z_J$, which can take on cluster values $c = 1, \dots, C$, from the CRP prior with hyperparameter $\alpha$, which controls the concentration of clusters; $N_c$ denotes the number of members of cluster $c$. Cluster parameters $\mu_c, \sigma_c$ are sampled from a base distribution $H(\theta_0; \mu_0, \sigma_0)$, and instances $x_j$ are then sampled from the associated Gaussian distribution $\mathcal{N}(\mu_{z_j}, \sigma_{z_j})$. $\theta_0$ and $\theta$ consist of the parameters to be estimated, which in this case are the mean $\mu$ and variance $\sigma$ of the Gaussian distributions. The CRP generative model is defined as

$$p(z_{J+1} = c \mid z_{1:J}, \alpha) = \frac{N_c}{N + \alpha} \;\text{ for } c \in \{1, \dots, C\} \quad\text{and}\quad p(z_{J+1} = C + 1 \mid z_{1:J}, \alpha) = \frac{\alpha}{N + \alpha} \quad (1)$$

for assignments $z$ of examples $x$ to clusters $c$, cluster counts $N_c$, and parameter $\alpha$ to control assignments to new clusters. $N$ is the total number of examples observed so far. One popular sampling procedure for parameter estimation is Gibbs sampling (Neal, 2000). In Gibbs sampling, we draw from a conditional distribution on the cluster assignments until convergence. The conditional draws are:

$$p(z_j = c \mid z_{-j}, \alpha) \propto \begin{cases} N_{c,-j} \int P(x_j \mid \theta)\, dH_{-j,c}(\theta) & \text{for } c \le C \\ \alpha \int P(x_j \mid \theta)\, dH_0(\theta) & \text{for } c = C + 1 \end{cases}$$

For the case of a spherical Gaussian likelihood, let us define $\mathcal{N}_c = \mathcal{N}(x_i; \mu_c, \sigma)$ as the likelihood of assigning $x_i$ to cluster $c$ and $\mathcal{N}_0 = \mathcal{N}(x_i; \mu_0, \sigma + \sigma_0)$ as the likelihood of assigning $x_i$ to a new cluster drawn from the base distribution (Gaussian with mean $\mu_0$ and variance $\sigma_0$). We can then write:

$$p(z_i = c \mid \mu) = \frac{N_{c,-i}\, \mathcal{N}_c}{\alpha \mathcal{N}_0 + \sum_{j=1}^{C} N_{j,-i}\, \mathcal{N}_j} \quad (2)$$

$$p(z_i = C + 1 \mid \mu) = \frac{\alpha \mathcal{N}_0}{\alpha \mathcal{N}_0 + \sum_{j=1}^{C} N_{j,-i}\, \mathcal{N}_j} \quad (3)$$

$$p(\sigma_c \mid z) = \frac{\sigma \sigma_0}{\sigma + \sigma_0 N_c} \quad (4)$$

$$p(\mu_c \mid z) = \mathcal{N}\!\left(\mu_c;\; \frac{\sigma \mu_0 + \sigma_0 \sum_{i: z_i = c} x_i}{\sigma + \sigma_0 N_c},\; \frac{\sigma \sigma_0}{\sigma + \sigma_0 N_c}\right) \quad (5)$$

Algorithm 2 Soft-soft clustering: multi-modal clustering with cluster variances for labeled and unlabeled data by approximating the Chinese Restaurant Process (CRP). $n_s$ is the number of labeled classes (way). $q_{i,c}$ is $\log p(i, c)$, the joint log-probability of cluster $c$ and assignment $i$. $\mathcal{N}(x; \mu, \sigma)$ is the Gaussian density. $\alpha$ is the concentration hyperparameter of the CRP. $\epsilon$ is the threshold hyperparameter for creating a new cluster.
initialize $\{\mu_1, \dots, \mu_{n_s}\}$ . Initialize a cluster for each labeled class by taking class-wise means
initialize $\{\sigma_1, \dots, \sigma_{n_s}\}$ . Initialize cluster variances based on equation 4
initialize $\{z_1, \dots, z_I\}$ . Initialize cluster assignments for labeled data points; all unlabeled cluster assignments start at 0
$C \leftarrow n_s$ . Initialize the number of clusters $C$
. Begin clustering pass
for each example $i$ do
  for each cluster $c \in \{1, \dots, C\}$ do
    $N_c \leftarrow \sum_i z_{i,c}$
    $\sigma_c \leftarrow \frac{\sigma \sigma_0}{\sigma + \sigma_0 N_c}$
    $\mu_c \leftarrow \frac{\sigma \mu_0 + \sigma_0 \sum_i z_{i,c} h_\phi(x_i)}{\sigma + \sigma_0 N_c}$
    estimate $q_{i,c} \propto \log(N_{c,-i}) + \log \mathcal{N}(x_i; \mu_c, \sigma_c)$ based on equation 2
  end for
  estimate $q_{i,C+1} \propto \log(\alpha) + \log \mathcal{N}_0(x_i; \mu_0, \sigma_0)$ based on equation 3
  $z_{i,\cdot} \leftarrow \mathrm{softmax}(q_{i,1}, \dots, q_{i,C+1})$
  if $z_{i,C+1} > \epsilon$ then $C \leftarrow C + 1$
end for
Determining the assignment for a query sample is performed after clustering, using the updated means and cluster counts.
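The following numerical sketch implements one pass of Algorithm 2 on precomputed embeddings, with `var` and `var0` playing the roles of $\sigma$ and $\sigma_0$. The handling of leftover new-cluster mass when no cluster is created (renormalizing over existing clusters) is our own choice, not something Algorithm 2 specifies.

```python
import numpy as np

def log_gauss(x, mu, var):
    """Log-density of a spherical Gaussian N(x; mu, var * I)."""
    d = x.shape[-1]
    return -0.5 * d * np.log(2 * np.pi * var) - ((x - mu) ** 2).sum() / (2 * var)

def soft_soft_pass(H, z, mu0, var0, var, alpha, eps):
    """One clustering pass of Algorithm 2 (sketch). H holds the embeddings
    h_phi(x_i) as rows; z is the [I, C] soft assignment matrix."""
    I, C = z.shape
    for i in range(I):
        q = np.empty(C + 1)
        for c in range(C):
            Nc = z[:, c].sum()
            var_c = var * var0 / (var + var0 * Nc)               # equation (4)
            mu_c = (var * mu0 + var0 * (z[:, c, None] * H).sum(0)) \
                   / (var + var0 * Nc)                           # posterior mean
            q[c] = np.log(max(Nc - z[i, c], 1e-8)) \
                   + log_gauss(H[i], mu_c, var_c)                # equation (2)
        q[C] = np.log(alpha) + log_gauss(H[i], mu0, var + var0)  # equation (3)
        p = np.exp(q - q.max()); p /= p.sum()                    # softmax
        if p[-1] > eps:                   # open a new cluster for example i
            z = np.hstack([z, np.zeros((I, 1))])
            z[i] = p
            C += 1
        else:                             # our choice: fold mass back onto old clusters
            z[i] = p[:-1] / p[:-1].sum()
    return z
```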
We connect our fully soft clustering variant to prior work on semi-supervised prototypical networks (Ren et al., 2018) to give it a new probabilistic perspective. Their method clusters labeled examples into a cluster per class by class-wise means, defines a "distractor" cluster for unrelated unlabeled examples, and then refines the labeled clusters by soft k-means. Their distractor cluster is fixed to have a mean of zero and a variance of 100. If we set $\mu_0 = 0$ and $\sigma_0 = 100$ accordingly, and do not update the $\sigma$ parameters, then our fully soft clustering can be seen as the infinite mixture model extension of their method, where the distractor cluster corresponds to a draw from a general base distribution with a CRP prior placed on the cluster assignments.
A.5 SEMI-SUPERVISED CLUSTERING EVALUATION
We show the importance of multi-modality for discovering unlabeled clusters during meta-testing after semi-supervised meta-learning. We randomly sampled n classes from Omniglot's test set, and ran one randomly selected example from each of the n classes through our model to obtain a set of n prototypes. We then presented a new set of examples drawn equally from these n support (known) classes and n out-of-support (unknown) classes, and let each method cluster the examples into either the computed n prototypes or new clusters. Figure 4 shows the (n+1)-way accuracy of either classifying a new example with its correct, known label or correctly identifying it as a distractor. Only BANDE achieved higher accuracy than chance for numbers of clusters greater than 5, suggesting that a multi-modal distribution for unlabeled clusters is paramount for the algorithm's clustering performance. The number of clusters created for the unlabeled examples closely tracked the correct number of unlabeled clusters, with an average relative error in the number of created clusters of 1.87 across the range from 5 to 200.
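For reference, the clustering metrics used in these evaluations (NMI and purity, as in Section 4.1) can be computed as follows; this small sketch uses scikit-learn for NMI and the standard majority-vote definition of purity.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def purity(y_true, y_pred):
    """Cluster purity: each predicted cluster votes for its majority class."""
    correct = 0
    for c in np.unique(y_pred):
        members = y_true[y_pred == c]
        correct += np.bincount(members).max()
    return correct / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])   # ground-truth classes (toy data)
y_pred = np.array([1, 1, 0, 0, 0, 2])   # inferred cluster ids
print(purity(y_true, y_pred))                        # 0.833...
print(normalized_mutual_info_score(y_true, y_pred))
```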
1. What is the focus and contribution of the paper on variadic learning?
2. What are the strengths of the proposed approach, particularly in terms of using multiple clusters for each class?
3. What are the weaknesses of the paper regarding its presentation quality and algorithmic issues?
4. How does the reviewer assess the significance and originality of the paper's content?
5. What are the concerns regarding the episodic learning description and overall algorithm clarity?
6. What are the major technical concerns, such as the mixing of hard and soft assignments and the lack of justification for certain aspects of the algorithm?
7. How does the reviewer suggest improving the paper, such as adding formal step-by-step descriptions and clarifying the procedure for computing losses?
Review
Review Update after Author Rebuttal
--------------
After reading the rebuttal, I'm pleased that the authors have made significant revisions, but I still think more work is needed. The "hard/soft" hybrid approach still lacks justification and perhaps wasn't compared to a soft/soft approach in a fair and fully-correct way (see detailed reply to authors). I also appreciate the efforts on revising clarity, but still find many clarity issues in the newest version that make the method hard to understand, let alone reproduce. I thus stand by my rating of "borderline rejection" and urge the authors to prepare significant revisions for a future venue that avoid hybrids of hard/soft probabilities without justification. (Original review text below. Detailed replies to authors are in posts below their responses.)

Review Summary
--------------
While the focus on variadic learning is interesting, I think the present version of the paper needs far more presentational polish as well as algorithmic improvements before it is ready for ICLR. I think there is the potential for some neat ideas here and I hope the authors prepare stronger versions in the future. However, the current version is unfortunately not comprehensible or reproducible.

Paper Summary
-------------
The paper investigates developing an effective ML method for the "variadic" regime, where the method might be required to perform learning from few or many examples (shots) and few or many classes (ways). The term "variadic" comes from its use in computer science for functions that can take a flexible number of arguments. There may also be unlabeled data available in the few-shot case, creating semi-supervised learning opportunities. The specific method proposed is called BANDE: Bayesian Nonparametric Deep Embedding. The idea is that each data point's feature vector x_i is transformed into an embedding vector h(x_i) using a neural network, and then clustering occurs in the embedding space via a single pass of the DP-means algorithm (Kulis & Jordan 2012). Each cluster is assumed to correspond to one "class" in the eventual classification problem, though each class might have multiple clusters (and thus be multi-modal). Learning occurs in an episodic manner. After each episode (single pass of DP-means), each point in a query set is embedded to its feature vector, then fed into each cluster's Gaussian likelihood to produce a normalized cluster-assignment-probability vector that sums to one. This vector is then fed into a cross-entropy loss, where the true class's nearest cluster (largest probability value) is taken to be the true cluster. This loss is used to perform gradient updates of the embedding neural network. There is also a "cumulative" version of the method called BANDE-C. This version keeps track of cluster means from previous episodes and allows new episodes to be initialized with these. Experiments examine the proposed approach across image categorization tasks on Omniglot, mini-ImageNet, and CIFAR datasets.

Strengths
---------
* I like that many clusters are used for each true class label, which is better than rigid one-to-one assumptions.

Limitations
-----------
* Can only be used for classification, not regression
* The DP-means procedure does not account for the cluster-specific variance information that is used at other steps of the algorithm

Significance and Originality
----------------------------
To me, the method appears original. Any method that could really succeed across various variadic settings would be significant.
Presentation Concerns
---------------------
I have serious concerns about the presentation quality of this paper. Each section needs careful reorganization as well as rewording.

## P1: Algo. 1 contains numerous omissions that make it, as written, not correct.
* The number-of-clusters count variable "n" is not updated anywhere. As written, this algorithm can only create one extra cluster beyond the original n.
* The variable "c" is unbound in the else clause. You need a line that clarifies that c = argmin_{c in 1 ... n} d_ic

Would be careful about saying that "a single pass is sufficient"... you have *chosen* to do only one pass. When doing k-means, we could also make this choice. Certainly the DP-means objective could keep improving with multiple passes.

## P2: Many figures and tables lack appropriate captions/labels
Table 1: What metric is reported? Accuracy percentage? Not obvious from title/caption. Should also make very clear here how much labeled data was used.
Table 2: What metric is reported? Accuracy percentage? Not obvious from title/caption. Should also make it easier to find how many labeled and unlabeled examples were used.

## P3: Descriptions of episodic learning and overall algorithm clarity
Readers unfamiliar with episodic learning are not helped by the limited coverage provided here in 3.1 and 3.2. When exactly is the "support" set used and the "query" set used? How do unlabeled points get used (both support and query appear fully labeled)? What is n? What is k? What is T? Why are some points in Q denoted with apostrophes but not others? Providing a more formal step-by-step description (perhaps with pseudocode) will be crucial.
In Sec. 3.2, the paragraph that starts with "The loss is defined" is very hard to read and parse. I suggest adding math to formally define the loss with equations. What parameters are being optimized? Which ones are fixed?
Additionally, in Sec. 3.2: "computed in the same way as standard prototypical networks"... what is the procedure exactly? If your method relies on a procedure, you should specify it in this paper and not make readers guess or look up a procedure elsewhere.

## P4: Many steps of the algorithm are not detailed
The paper claims to set \lambda using a technique from another paper, but does not summarize this technique. This makes things nearly impossible to reproduce. Please add such details in the appendix.

Major Technical Concerns
------------------------
## Alg. 1 concerns: Requires two (not one) passes and mixes hard and soft assignments and different variance assumptions awkwardly
The BANDE algorithm (Alg. 1) has some unjustified properties. Hard assignment decisions which assume vanishing variances are used to find a closest cluster, but then later soft assignments with non-zero variances are used. This is a bit heuristic and lacks justification... why not use soft assignment throughout? The DP-means procedure is derived from a specific objective function that assumes hard assignment. It seems weird to use it for convenience and then discard it, instead of coming up with the small fix that would make soft assignment consistent throughout.
Furthermore, the authors claim it is a one-pass algorithm, but in fact, as written in Alg. 1, it seems to require two passes: the first pass keeps an original set of cluster centers fixed and then creates new centers whenever an example's distance to the closest center exceeds \lambda.
But then, the *soft* assignment step that updates "z" requires that the distance from each point to all centers be computed again, which requires another pass (since some new clusters may exist which did not when the point was first visited). While the new soft values will be close to zero, they will not be *exactly* zero, and thus they matter.

## Unclear if/how cluster-specific variance parameters are learned
From the text at the top of page 4, it seems that the paper assumes that there exist cluster-specific variances \sigma_c. However, these are not mentioned elsewhere; only a general (not cluster-specific) label variance \sigma and a fixed unlabeled variance \sigma_u are used.

## Experiments lack comparison to internal baselines
The paper doesn't evaluate sensitivity to key fixed hyperparameters (e.g. \alpha, \lambda) or compare variants of their approach (with and without the soft clustering step, with and without multi-modality via DP-means). It is difficult to tell which design choices of the method are most crucial.
ICLR
Title
Variadic Learning by Bayesian Nonparametric Deep Embedding
Abstract
Learning at small or large scales of data is addressed by two strong but divided frontiers: few-shot learning and standard supervised learning. Few-shot learning focuses on sample efficiency at small scale, while supervised learning focuses on accuracy at large scale. Ideally they could be reconciled for effective learning at any number of data points (shot) and number of classes (way). To span the full spectrum of shot and way, we frame the variadic learning regime of learning from any number of inputs. We approach variadic learning by meta-learning a novel multi-modal clustering model that connects bayesian nonparametrics and deep metric learning. Our bayesian nonparametric deep embedding (BANDE) method is optimized end-to-end with a single objective, and adaptively adjusts capacity to learn from variable amounts of supervision. We show that multi-modality is critical for learning complex classes such as Omniglot alphabets and carrying out unsupervised clustering. We explore variadic learning by measuring generalization across shot and way between meta-train and meta-test, show the first results for scaling from few-way, few-shot tasks to 1692-way Omniglot classification and 5k-shot CIFAR-10 classification, and find that nonparametric methods generalize better than parametric methods. On the standard few-shot learning benchmarks of Omniglot and mini-ImageNet, BANDE equals or improves on the state-of-the-art for semi-supervised classification.
1 INTRODUCTION
In machine learning, classification problems span two important axes: the number of classes to recognize (the "way" of the problem) and the number of examples provided for each class (the "shots" to learn from). At one extreme, there are large-scale tasks like ImageNet, in which there are 1000 classes with roughly 1000 examples each (a 1000-way, ∼1000-shot problem). At the other extreme, there are datasets for learning from few examples, such as Omniglot, which features a 5- or 20-way, 1-shot problem. State-of-the-art methods for these two learning regimes are substantially different, with the former dominated by standard parametric deep networks and the latter by episodic meta-learning techniques. Moreover, as shown in our experiments, many methods degrade when the shot and way vary between training and testing. By contrast, humans recognize both familiar and unfamiliar categories whatever the amount of data, and can even learn a new category from a single example (Lake et al., 2015). To this end, we introduce a learning problem which requires generalization from few-way, few-shot problems to many-way, many-shot problems. We call this regime of variable shot and way the variadic learning regime, after variadic functions. Just as variadic functions are those which can take any number of arguments to produce a result, a good variadic learner must learn from any amount of data, whatever the number of examples and classes, and produce strong results across unknown data distributions during test. Meta-learning provides one potential avenue for pursuing a variadic learner. Meta-learning approaches generally use plentiful supervision from one distribution of tasks to learn an algorithm or metric that can be applied to more sparsely supervised tasks. Ideally, meta-learning approaches do not need knowledge of the specific setting in which they will be used.
However, in practice, meta-learning approaches have commonly been trained and evaluated in constrained circumstances, so their generalization properties are not fully known. Perhaps most significantly, meta-learning is usually carried out independently across settings, so that a different learner is specialized to each n-way, k-shot task. This potentially limits their deployment to more diverse settings with variable shot and way, which we address in this work. As a first step towards a strong variadic learner, we propose a multi-modal (many-to-one) semi-supervised clustering approach which can adapt its capacity to the underlying class representations, and show that this is critical for modeling more complex data distributions. This innovation allows our model to perform inference with any amount of supervision (from totally unsupervised to fully supervised) after training, and adjust better to variable shot and way than existing approaches. Our bayesian nonparametric deep embedding (BANDE) model (see Figure 1) extends prototypical networks to multi-modal clustering. Clustering with multiple modes is critical for complex classes, and multi-modality makes unsupervised clustering possible. BANDE generalizes across any-shot, any-way tasks better than existing methods. At the many-way extreme, when trained with 5-way 1-shot episodes, BANDE achieves 75% accuracy for 1692-way 10-shot classification of Omniglot, improving on both few-shot and supervised learning baselines. At the many-shot extreme, BANDE approaches the accuracy of a standard supervised learner on CIFAR-10/100. On standard few-shot benchmarks BANDE is state-of-the-art in the semi-supervised setting.
2 RELATED WORK
Prototypes and Nonparametrics Prototypical networks (Snell et al., 2017) and semi-supervised prototypical networks (Ren et al., 2018) are the most closely related to our work. Prototypical networks simply and efficiently represent each class by its mean in a learned embedding. They assume that the data is fully labeled. Ren et al. (2018) extend prototypes to the semi-supervised setting by refining prototypes through soft k-means clustering of the unlabeled data. They assume that the data is at least partially labeled. Snell et al. (2017) and Ren et al. (2018) are limited to one cluster per class. We define a more general and adaptive approach through bayesian nonparametrics that extends prototypical networks to multi-modal clustering, with one or many clusters per class, of labeled and unlabeled data alike. Through multi-modal representation and adaptive inference of the number of modes, our method is significantly more accurate on complex classes, does unsupervised clustering, and improves on standard semi-supervised few-shot learning benchmarks. For multi-modal clustering we incorporate DP-means (Kulis & Jordan, 2012) in our method. DP-means is a scalable, bayesian nonparametric algorithm for unsupervised clustering that creates new clusters when data points are more than a threshold λ away from existing clusters. Our full method handles labeled and unlabeled data, augments the clustering with soft assignments under a normalized Gaussian likelihood, and defines a procedure to choose λ during learning and inference.
Metric Learning Learning a metric to measure a given notion of distance/similarity addresses recognition by retrieval: given an unlabeled example, find the closest labeled example.
The contrastive loss and siamese network architecture (Chopra et al., 2005; Hadsell et al., 2006) optimize an embedding for metric learning by pushing similar pairs together and pulling dissimilar pairs apart. Of particular note is research in face recognition, where a same/different retrieval metric is used for many-way classification (Schroff et al., 2015). Our approach is more aligned with metric learning by meta-learning (Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Garcia & Bruna, 2018). These approaches meta-learn a distance function by directly optimizing the task loss, such as cross-entropy for classification, through episodic optimization (Vinyals et al., 2016) for each setting of way and shot. While we likewise learn by episodic optimization, we differ from previous meta-learning work in our examination of generalization to variable numbers of examples and classes during testing, and show improvement in this regime. Unlike metric learning on either exemplars (Schroff et al., 2015) or prototypes (Snell et al., 2017; Ren et al., 2018), our method adaptively interpolates between exemplar and uni-modal prototype representation by deciding the number of modes during clustering.
Learning Regimes Variadic learning is best explained in relation to few-shot learning, low-shot learning, and conventional supervised learning. Few-shot learning (Fei-Fei et al., 2006; Vinyals et al., 2016) handles tasks of fixed, known, and small numbers of data points and classes. In contrast, variadic tasks have variable numbers of data points and classes that can shift across tasks. Low-shot learning (Hariharan & Girshick, 2017; Qi et al., 2018; Qiao et al., 2018) addresses both densely supervised base classes and sparsely supervised novel classes, but presupposes which classes are in which set. Variadic learning also addresses these extremes of supervision, but requires no knowledge of how much or how little supervision each class has. Large-scale supervised learning (Bottou, 2010) parameterizes the model by the number of classes, and is tuned to the amount of data by choosing capacity, optimization schedules, and so forth. Variadic learning requires accuracy without specialization to shot and way. Life-long learning (Thrun, 1996; 1998) concerns variable shot and way for streams of non-stationary problems, while variadic learning is for one problem of unknown dimensions. Bridging life-long and variadic learning is sensible but out of scope for this work.
3 BAYESIAN NONPARAMETRIC DEEP EMBEDDINGS (BANDE)
Our method learns a deep embedding network end-to-end and jointly clusters labeled and unlabeled data points by bayesian nonparametrics. Crucially, our model is able to express a single class as multiple modes, unlike the uni-modal clustering approaches of prior work. Figure 1 gives a schematic view of our multi-modal representation and how it differs from prior prototypical representations. Algorithm 1 expresses one step of model optimization in pseudocode.
Few-shot Meta-learning In few-shot classification we are given a support set $S = \{(x_1, y_1), \dots, (x_K, y_K)\}$ of $K$ labeled examples and a query set $Q = \{(x'_1, y'_1), \dots, (x'_{K'}, y'_{K'})\}$ of $K'$ labeled examples, where each $x_i, x'_i \in \mathbb{R}^D$ is a $D$-dimensional feature vector and $y_i, y'_i \in \{1, \dots, N\}$ is the corresponding label. In the semi-supervised setting, $y_i$ may not be provided for every example $x_i$.
The support set is for learning while the query set is for inference: the few-shot classification problem is to recognize the class of the queries given the labeled supports. Meta-learning is carried out by episodic optimization of the model parameters for the task loss. Episodes are comprised of support and query sets, constructed by randomly sampling a subset of classes, sampling examples from these classes, and then partitioning the examples into supports and queries. Optimization iterates by making one episode and one update. The update is defined by the task loss, which for classification could be the softmax cross-entropy loss. For deep metric learning models like ours, the model parameters are those of the embedding function $h_\phi : \mathbb{R}^D \to \mathbb{R}^M$, a deep network with parameters $\phi$. The embedding of an example $x$ is the $M$-dimensional feature vector taken from the last layer of the network. Meta-training proceeds by optimizing the model parameters $\phi$ with respect to a task loss. Meta-testing proceeds episodically like meta-training, but without query labels or further optimization.
Prototypes Prototypical networks (Snell et al., 2017) take the mean of the embedded support examples of a particular class to form a prototype: $\mu_n = \frac{1}{|S_n|} \sum_{(x_i, y_i) \in S_n} h_\phi(x_i)$, with $S_n$ denoting the set of support examples of class $n$. In conjunction with a distance function $d(x_i, x_j)$, this provides an inference scheme for a query point $x$ as the softmax over distances to the prototypes: $p_\phi(y = n \mid x) = \frac{\exp(-d(h_\phi(x), \mu_n))}{\sum_{n'} \exp(-d(h_\phi(x), \mu_{n'}))}$. $\phi$ is optimized by minimizing the negative log-probability of the true class of each query point by stochastic gradient descent in each episode. Prototypical networks defined in this way learn to create uni-modal class distributions for fully labeled supports.
Multi-modal Clustering Our method defines multi-modal prototypes of both labeled and unlabeled data. That is, a single class is represented by a set of cluster modes. By deciding the number of modes, our method interpolates between exemplar and uni-modal prototype representations, in effect adjusting its capacity depending on the data. To create multi-modal prototypes, we extend the non-parametric clustering algorithm DP-means (Kulis & Jordan, 2012) to make it compatible with end-to-end learning. DP-means iterates through all examples in a dataset, computing each example's minimum distance to all existing cluster means. If this distance is greater than a particular threshold $\lambda$, a new cluster is created with mean $h_\phi(x_i)$, and the example is assigned to it. If $x_i$ is labeled, the new cluster takes on its label. While we use DP-means for cluster creation, we include cluster variances for reassignment. Labeled clusters are assigned a variance $\sigma_l$ and unlabeled clusters are assigned a variance $\sigma_u$. $\sigma_l$ and $\sigma_u$ are differentiable, and therefore learned along with the embedding parameters $\phi$. (We discuss the probabilistic interpretations of this choice in the next section.) $\lambda$, the threshold for creating a new cluster, is the sole hyperparameter for DP-means clustering. It is non-differentiable, and so it cannot be learned jointly. Instead, we set $\lambda$ episodically as a function of the data. In Kulis & Jordan (2012), $\lambda$ is parameterized as $\lambda = -2\sigma \log\!\left(\frac{\alpha}{(1 + \rho/\sigma)^{d/2}}\right)$. $\alpha$ is the relative probability of forming a new cluster in the Chinese Restaurant Process prior (Aldous, 1985), and $\rho$ is a measure of the standard deviation of the base distribution from which clusters are assumed to be drawn. We estimate $\rho$ as the variance in the labeled cluster means within an episode, while $\alpha$ is treated as a hyperparameter. In our experiments, we found a wide range of $\alpha$ values to give similar results, with the embeddings adjusting their overall magnitudes to match the magnitude of $\alpha$.
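A small sketch of this parameterization follows, directly transcribing the formula above; the function name is ours, and in practice $\rho$ would be estimated from the labeled cluster means of the current episode.

```python
import math

def dp_means_lambda(alpha, sigma, rho, d):
    """Threshold for creating a new cluster, following the parameterization
    of Kulis & Jordan (2012) given above:
        lambda = -2 * sigma * log(alpha / (1 + rho / sigma) ** (d / 2))."""
    return -2.0 * sigma * math.log(alpha / (1.0 + rho / sigma) ** (d / 2))

# Example usage with placeholder values (d = embedding dimensionality).
lam = dp_means_lambda(alpha=1.0, sigma=1.0, rho=0.5, d=64)
```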
3.1 PROBABILISTIC INTERPRETATIONS OF HARD AND SOFT CLUSTERING
The choice of hard or soft clustering has theoretical ramifications. There are three clustering variants to consider: fully hard, fully soft, and hybrid hard-soft. Fully hard clustering corresponds to following DP-means in a theoretically exact manner, with both $\sigma_u$ and $\sigma_l$ set to 0, and the UPDATEASSIGNMENTS function assigning $z_i = \arg\min_c d_{i,c}$ for each example $i$. This variant is theoretically precise as an extension of DP-means for end-to-end learning and simultaneous clustering of labeled and unlabeled data. Fully soft clustering corresponds to an extension and reinterpretation of prior work on semi-supervised prototypical networks (Ren et al., 2018) (see Section A.4). Through the lens of bayesian nonparametrics, we derive this connection to an approximation of the Chinese Restaurant Process (CRP) (Aldous, 1985) in Section A.4 of the appendix. While fully hard and fully soft clustering admit clearer probabilistic interpretations, they are empirically less accurate than hybrid hard-soft clustering.
Table 1: Clustering comparison on 5-way 1-shot semi-supervised Omniglot.
Clustering          Accuracy
Hard-Hard           97.0
Soft-Soft           98.4
BANDE (Hard-Soft)   99.0
Table 1 compares the variants on a standard semi-supervised few-shot learning benchmark (detailed further in Section 4.3). BANDE does hard-soft clustering throughout our experiments. For hard-soft clustering, UPDATEASSIGNMENTS does soft assignment of $z_{i,c} = \frac{\mathcal{N}(h_\phi(x_i); \mu_c, \sigma_c)}{\sum_{c'} \mathcal{N}(h_\phi(x_i); \mu_{c'}, \sigma_{c'})}$ for all examples $i$.
3.2 CUMULATIVE SUPERVISION
We extend BANDE into a cumulative variant, BANDE-C, that accumulates supervision non-episodically by remembering prototypes across episodes. Concretely, we initialize the cluster means $\mu_c$ by combining a cluster mean from memory, $\phi_{m,c}$, with the current episodic sample mean, i.e. $\mu_c = \frac{1}{|\{i : z_i \in c\}| + 1} \big( \phi_{m,c} + \sum_{i: z_i \in c} h_\phi(x_i) \big)$. $\phi_{m,c}$ is computed as if $c$ were uni-modal, regardless of whether the clustering was multi-modal in a previous episode. Since the embedding representation changes rapidly early in training, we introduce a discount factor on the stored embedding, $\gamma \phi_{m,c}$, proportional to the current learning rate. Whenever the class is encountered in a future episode, we update the remembered prototype with the cluster mean after episodic inference. We only experiment with BANDE-C in the variadic setting (Section 4.2); everywhere else we keep standard episodic training and testing. Note that standard prototypical networks can likewise be augmented to remember prototypes and non-episodically accumulate supervision in this manner.
Algorithm 1 BANDE: one optimization episode. $n_s$ is the number of labeled classes (way) and $k_s$ is the number of labeled examples of each class (shot). $k_q$ is the number of query examples per class. For a set $A$, $A_n$ is the subset of $A$ with all examples of class $n$. $p(x \mid \mu, \sigma)$ is the Gaussian density.
Input: support set $S$, query set $Q$, and unlabeled set $U$. Output: loss $J$ for the episode.
$C \leftarrow n_s$ . $C$ is the total number of clusters
for $c \in \{1, \dots, C\}$ do
  $l_c \leftarrow c$ . $l_c$ is the cluster label
  $\mu_c \leftarrow \frac{1}{k_s} \sum_{(x_i, y_i) \in S_c} h_\phi(x_i)$ . $\mu_c$ is the cluster mean
  $\sigma_c \leftarrow \sigma_l$ . $\sigma_c$ is the cluster variance
end for
. Iterate over the labeled and unlabeled data and create new clusters
for each example $i \in S \cup U$ do
  for $c \in \{1, \dots, C\}$ do
    $d_{i,c} \leftarrow \|h_\phi(x_i) - \mu_c\|^2$ if (example $i$ is labeled and $l_c = y_i$) or example $i$ is unlabeled, else $+\infty$
  end for
  if $\min_c d_{i,c} > \lambda$ then
    $C \leftarrow C + 1$
    $l_C \leftarrow y_i$ . Cluster takes the label of the example
    $\mu_C \leftarrow h_\phi(x_i)$ . Cluster mean takes the embedding of the example
    $\sigma_C \leftarrow \sigma_l$ if $y_i \neq 0$, else $\sigma_u$
  end if
end for
$z \leftarrow$ UPDATEASSIGNMENTS$(\{h_\phi(x)\}, \mu, \sigma)$ . Update all cluster-example assignments
$\mu_c \leftarrow \frac{\sum_i z_{i,c} h_\phi(x_i)}{\sum_i z_{i,c}}$ for $c \in \{1, \dots, C\}$ . Update all cluster means
. Cross-entropy loss on the most probable cluster of the true class and all clusters of other classes
$J \leftarrow 0$
for $n \in \{1, \dots, n_s\}$ do
  $c^* \leftarrow \arg\max_{c: l_c = n} \log p(x \mid \mu_c, \sigma_c)$
  $J \leftarrow J + \frac{1}{n_s k_q} \sum_{(x,y) \in Q_n} \left[ -\log p(x \mid \mu_{c^*}, \sigma_{c^*}) + \log\!\left( \sum_{c': l_{c'} \neq n} p(x \mid \mu_{c'}, \sigma_{c'}) + p(x \mid \mu_{c^*}, \sigma_{c^*}) \right) \right]$
end for
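To make the cluster-creation step of Algorithm 1 concrete, here is a minimal PyTorch-flavored sketch. The `None` encoding for unlabeled points and all function names are our own, standing in for $y_i = 0$ above, and the soft reassignment step is only indicated.

```python
import torch

def bande_create_clusters(z_support, y_support, z_unlabeled, lam,
                          var_labeled, var_unlabeled):
    """Cluster-creation step of Algorithm 1 (illustrative sketch).
    z_* are precomputed embeddings h_phi(x); lam is the threshold lambda."""
    classes = y_support.unique()
    mus = [z_support[y_support == c].mean(dim=0) for c in classes]
    labels = [int(c) for c in classes]          # one initial cluster per class
    variances = [var_labeled] * len(mus)
    points = list(z_support) + list(z_unlabeled)
    point_labels = [int(y) for y in y_support] + [None] * len(z_unlabeled)
    for z, y in zip(points, point_labels):
        # Distances to compatible clusters only: matching label, or unlabeled.
        d = [float(((z - mu) ** 2).sum()) if (y is None or lab == y)
             else float("inf") for mu, lab in zip(mus, labels)]
        if min(d) > lam:                        # open a new cluster at this point
            mus.append(z)
            labels.append(y)
            variances.append(var_unlabeled if y is None else var_labeled)
    # Soft reassignment under spherical Gaussians and the mean update
    # (UPDATEASSIGNMENTS in Algorithm 1) would follow here.
    return torch.stack(mus), labels, variances
```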
4 EXPERIMENTS
We experimentally show that multi-modal prototypes are more accurate and more general than uni-modal prototypes. In our new variadic setting for any-shot, any-way learning, we explore which methods do (and do not) generalize across shot and way. We report the first results for extreme generalization to 1692-way and 5000-shot classification from few-shot episodic optimization. For few-shot learning, we show competitive results for few-shot fully-supervised and semi-supervised classification on the standard benchmarks of Omniglot and mini-ImageNet. We control for architecture and optimization by comparing methods with the same base architecture and same episodic optimization settings. All code for our method and baselines will be released.
For these experiments we make use of standard few-shot and supervised learning datasets, and furthermore define new variadic evaluation protocols on these common benchmarks. We consider Omniglot and mini-ImageNet, two widely used datasets for few-shot learning research, and CIFAR-10/CIFAR-100, two popular datasets for supervised learning research with deep networks. Omniglot (Lake et al., 2015) is a dataset of 1,623 handwritten characters from 50 alphabets. There are 20 examples of each character, where the images are resized to 28x28 pixels and each image is rotated by multiples of 90◦. This gives 6,492 classes in total, which are then split into 4,112 training classes, 1,692 test classes, and 688 validation classes. mini-ImageNet (Vinyals et al., 2016) is a reduced version of the ILSVRC'12 dataset (Russakovsky et al., 2015), which contains 600 84x84 images for each of 100 classes randomly selected from the full dataset. We use the split from Ravi & Larochelle (2017) with 64/16/20 classes for train/val/test. CIFAR-10/100 (Krizhevsky & Hinton, 2009) are classification datasets of 32x32 color images drawn from the Tiny Images project (Torralba et al., 2008). CIFAR-10 has 10 classes and CIFAR-100 has 100 classes (plus 20 super-classes). Both have 50k training images and 10k testing images, and both are balanced so that every class has an equal number of images.
4.1 ACCURACY AND GENERALITY OF MULTI-MODAL PROTOTYPES
Our experiments on Omniglot alphabets and characters show that multi-modal prototypes are significantly more accurate than uni-modal prototypes for recognizing complicated classes (alphabets) and recover uni-modal prototypes as a special case for recognizing simple classes (characters). Multi-modal prototypes generalize better for super-class to sub-class transfer learning, improving accuracy when meta-training on alphabets but meta-testing on characters.
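For reference in the comparisons throughout Section 4, the uni-modal prototypical baseline of Snell et al. (2017) can be sketched in a few lines, directly following the prototype and softmax equations of Section 3; embeddings are assumed precomputed and the names are illustrative.

```python
import torch
import torch.nn.functional as F

def prototypical_log_probs(z_support, y_support, z_query):
    """Uni-modal prototypical-network classifier (Snell et al., 2017): one
    prototype per class (the class mean), softmax over negative squared
    Euclidean distances. Sketch only."""
    classes = y_support.unique()
    protos = torch.stack([z_support[y_support == c].mean(dim=0) for c in classes])
    d = torch.cdist(z_query, protos) ** 2       # [Q, N] squared distances
    return F.log_softmax(-d, dim=1), classes    # log p(y = n | x) per query
```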
By unifying the clustering of labeled and unlabeled data alike, our multi-modal prototypes even address fully unsupervised clustering, unlike prior prototypical network models that are undefined without labels. We first show the importance of multimodality for learning representations of multi-modal classes: Omniglot alphabets. For these experiments we meta-train for alphabet classification, using only the super-class labels. Episodes are constructed by sampling 1 example of 200 different random characters in the support set, with 5 examples of each character in the query. For alphabet testing, we provide 100 randomly selected characters with alphabet labels in the support, making this a mixed-shot problem. For character testing, we provide 1 labeled image of 20 different characters as support, and score based on correct character assignments of the queries. As seen in table 2, in both testing configurations, BANDE substantially outperforms prototypical networks. On 20-way 1-shot character recognition, BANDE achieves 95.3% from alphabet supervision alone, slightly out-performing prototypical networks trained directly on character recognition (94.9%). Fully Unsupervised Clustering BANDE is able to do fully unsupervised clustering during meta-test via multi-modality. Prior work on prototypical networks (Snell et al., 2017) and semi-supervised prototypical networks (Ren et al., 2018) cannot address this setting because the models are undefined without labeled data. BANDE handles labeled and unlabeled data by the same clustering rule, inferring the number of clusters as needed, and achieves good accuracy under the standard clustering metrics of normalized mutual information (NMI) and purity. We examine BANDE’s clustering performance in Table 3 by randomly sampling 5 examples of n classes from the test set and treating them as unlabeled samples. BANDE maintains remarkably strong performance across a large number of unlabeled clusters, without knowing the number of classes in advance, and without having seen any examples from the classes during training. 4.2 ANY-SHOT, ANY-WAY LEARNING IN THE VARIADIC SETTING We now move to the any-shot, any-way setting that this paper introduces. We closely examine extreme generalization across shot and way between meta-train and meta-test, unlike previous approaches which only examine small shifts (Munkhdalai & Yu, 2017; Snell et al., 2017). Most notably, we show that nonparametric methods such as BANDE can generalize from few-way training to many-way testing, while parametric methods fail to transfer effectively. We further show that BANDE, a nonparametric method, performs on par with fully parametric methods in the domain of supervised learning; the first demonstration of a meta-learning method evaluated in the many-shot domain without pre-training. These two results cement the suitability of nonparametric meta-learning methods over parametric methods for the variadic setting. Semi-supervised protocol We train and test BANDE and other prototypical methods on semisupervised data to include the number of labeled and unlabeled examples in the scope of the variadic setting. We follow (Ren et al., 2018), taking only 40% of the data as labeled for both the support and query while the rest of the data is included, but as unlabeled examples. The unlabeled data is then incorporated into episodes as (1) within support examples that allow for semi-supervised refinement of the support classes or (2) distractors which lie in the complement of the support classes. 
Semi-supervised episodes augment the fully supervised n-way, k-shot support with 5 unlabeled examples for each of the n classes and include 5 more distractor classes with 5 unlabeled instances each. The query still contains only support classes. Variable Shot and Way We first look at generalization by moderately adjusting the shot and way in evaluation from their fixed settings during meta-learning. For variable way, we consider Omniglot, because it has many classes. For variable shot, we consider mini-ImageNet, because it has more examples per class. In both cases, we train on 5-way, 1-shot episodes, and test generalization by varying the number of classes and number of examples during meta-testing. We consider four strong fully-supervised baselines trained on 100% of the data (black lines), as well as prototypical baselines trained on 40% of the data (colored). We compare to three parametric methods, MAML (Finn et al., 2017), Reptile (Nichol & Schulman, 2018), and few-shot graph networks (Garcia & Bruna, 2018), as well as the nonparametric memory-based model of Kaiser et al. (2017). Modifications to these approaches for test-way generalization are discussed in Section A.3. As seen in Figure 2 (a), the parametric meta-learning approaches fail to meaningfully generalize to higher way than they were trained for. BANDE is the least sensitive to higher way meta-testing, although the margin between BANDE and semi-supervised prototypical networks in this regime is small compared to the difference with parametric methods. For shot generalization, we compare to MAML’s accuracy after 10 updates vs. accuracy at convergence. We note that MAML is not able to make effective use of more data unless it is allowed to take proportionately larger numbers of updates, while our method improves with more data without taking gradients at test time. Even at convergence, MAML lags BANDE’s performance, suggesting that a nonparametric approach is still superior to parametric meta-learning. Extreme Generalization to Many-Way We demonstrate that BANDE can learn a full 1692- way classifier for Omniglot from only episodic optimization of 5-way 1-shot tasks. Episodes are composed identically to the few-shot semisupervised setting with unlabeled examples and distractor classes. Accuracies for our method and a supervised learning baseline are shown in Figure 3. For inference, we run k examples from each test class through our learned embedding network, and then assign the unseen examples the label of the closest prototype. The baseline shares the same training set and architecture, substituting a linear output layer for prototypes by optimizing the softmax cross-entropy loss. We take the last feature layer as the embedding for prototypical inference. Fine-tuning on the test support proved less accurate, as did k nearest neighbours inference. This result is an example of episodic optimization yielding strong results for many-way classification, motivating the possibility of learning large-scale models cumulatively from small-scale tasks, instead of restricting attention to the adaptation of large-scale models to small-scale, few-shot settings. Scaling to Many-Shot We examine the effectiveness of BANDE in the conventional supervised learning regime. To the best of our knowledge this is the first evaluation of meta-training across the spectrum from few-shot to many-shot. 
Our base architecture is the Wide ResNet 28-10 of Zagoruyko & Komodakis (2016), which has shown state-of-the-art results on CIFAR-10/100, and has been additionally used as a base architecture for strong low-shot performance on mini-ImageNet (Qiao et al., 2018). We optimize BANDE by meta-training on episodes consisting of 10-way (CIFAR-10) 2-shot and 20-way (CIFAR-100) tasks for computational considerations. With no knowledge of the total number of classes or number of examples per class, and without pre-training or fine-tuning, we achieve accuracies that rival a well-tuned supervised learning baseline. On CIFAR-10 we achieve 94.4% accuracy compared to the 95.1% accuracy of supervised learning. On CIFAR-100 we achieve 75.6% accuracywhich is > 90% of the 81.2% accuracy of supervised learning. When evaluating both the BANDE and supervised learning embeddings as prototypes the accuracies are equal, suggesting that both approaches learn equally good representations, and differ only in the prototypical vs. parametric form of the classifier. 4.3 FEW-SHOT CLASSIFICATION BENCHMARKS We evaluate BANDE on the standard few-shot classification benchmarks of Omniglot and miniImageNet in the fully-supervised and semi-supervised regimes. BANDE learns to recover uni-modal clustering as a special case, matching or out-performing prototypical networks when the classes are uni-modal, as seen in Table 4. In this setting, we evaluate BANDE in the standard episodic protocol of few-shot learning. In this protocol, shot and way are fixed and classes are balanced within an episode. The results reported in Table 4 are for models trained and tested with n-way episodes. This is to equalize comparison across methods. Snell et al. (2017) train at higher-way than testing and report a boost in accuracy. We find that this boost is illusory, and explained away by controlling for the number of gradients per update. We show this by experiment through the use of gradient accumulation in Section A.2 of the appendix. (For completeness, we confirmed that our implementation of prototypical networks reproduces reported results at higher way.) In the semi-supervised setting we follow (Ren et al., 2018), using the set-up outlined in the second paragraph of section 4.2. Our results for this setting are reported in Table 5. Through multi-modality, the clustering of the labeled classes and distractors is decided by the data with a single rule. In particular this helps with the distractor distribution, which is in fact more diffuse and multi-modal by comprising several different classes. Our only specialization to this setting is to have more uncertain distractor clusters by higher cluster variances to compensate for this diffuseness. 5 CONCLUSION We framed the variadic regime to shine a light on learning representations that bridge small-scale and large-scale learning and strive toward the any-shot/any-way adaptability of human perception. As a step toward addressing this full span, we introduced BANDE, a multi-modal extension of prototypical networks, that is capable of generalizing across variable amounts of labeled and unlabeled data. Our results have shown BANDE is state-of-the-art in the few-shot regime and scales from fewway, few-shot meta-learning to many-way, many-shot deployment for both sparse and plentiful supervision. Our experiments demonstrate that multi-modality is key for improved semi-supervised and unsupervised clustering. 
There is much work to be done to improve variadic generalization, and to connect to life-long learning over non-stationary tasks. A APPENDIX A.1 IMPLEMENTATION DETAILS For all few-shot experiments, we use the same base architecture as prototypical networks for the embedding network. It is composed of four convolutional blocks consisting of a 64-filter 3 x 3 convolution, a batch normalization layer, a ReLU nonlinearity, and a 2 x 2 max-pooling layer per block. This results in a 64-dimensional embedding vector for omniglot, and a 1600 dimensional embedding vector for mini-imagenet. Our models were trained via SGD with RMSProp (Tieleman & Hinton, 2012) with an α parameter of 0.9. For Omniglot, the initial learning rate was set to 1e-3, and cut by a factor of two every 2000 iterations, starting at 4000 iterations. We additionally use gradient accumulation and accumulate gradients over eight episodes before making an update when performing 5-way training for Omniglot. For mini-ImageNet, the initial learning rate was set to 1e-3, and further halved every 20000 iterations, starting at 40000 iterations. For the supervised experiments, we use a wide residual network (Zagoruyko & Komodakis, 2016) with depth 28 and widening factor 10, with a dropout value of 0.3. We were not able to perfectly recover published results with our reimplementation, but the numbers are within 1% of their published values. A.2 CONTROLLING FOR THE NUMBER OF GRADIENTS TAKEN DURING OPTIMIZATION Consider the gradient of the loss: it has the dimensions of shot × way because every example has a derivative with respect to every class. In this way, by default, the episode size determines the number of gradients in an update. Quantitatively, 20-way episodes accumulate 16 times as many gradients as 5-way episodes. By sampling 16 5-way episodes and accumulating the gradients to make an update, we achieve significantly better results, matching the results obtained with 20-way episodes within statistical significance. Note that agreement across conditions may not be perfectly exact because subtle adjustments to hyperparameters might be necessary. A.3 EXTENDING COMPARED MODELS TO VARIADIC REGIME The models we compare to were not designed with variadic generalization in mind, and as a result we attempt to make as fair a comparison as possible by extending them as needed. We describe our approaches below. Semi-supervised prototypical networks In the paper first introducing this semi-supervised setting (Ren et al., 2018), the authors show how to use a distractor cluster centered at 0 to capture samples not belonging to any examples from the support. They additionally introduce length scales rc. In equation 6 from their paper, they use a normalization constantA(rc) defined as 0.5 log(2π)+log(r). However, this is an unscaled normalization constant, and assumes the dimensionality of the embedding space to be 1. The corrected normalization constant is A(rc) = d(log(rc) + 0.5 log(2π)) where d is the dimensionality of the embedding. We compare to their method with this corrected normalization constant, but note that it has only a small effect. For space, we did not compare to all methods from their paper, and chose this one as it performed well across their experiments, and because it was most amenable to the clustering experiments we were interested in performing. MAML We used Finn’s publicly available github repository (Finn et al., 2017). 
We trained an initial MAML architecture on the 5-way 1-shot task, using the suggested hyperparameters, for 40,000 iterations. We then removed the classification layer, froze the remaining weights of the network (for optimization across episodes, not for gradient descent within an episode), and retrained the top layer for the testing n-way classification task, using the MAML objective again, for 5000 iterations. We tried two hyperparameter settings for the re-training: the hyperparameters for the 5-way 1-shot setting, and the hyperparameters for the 20-way 1-shot setting. We found that re-training with the 20-way 1-shot hyperparameters gave us better performance. While we attempted to also scale these hyperparameters appropriately for even higher way testing, this was not more successful than using the 20-way 1-shot hyperparameters. We then reported the accuracy after 10 update steps on the test data. We also tried simply randomly initializing the top-layer weights, and allowing MAML to take more update steps to see if it could learn the top layer online. These results were worse than those obtained after the fine-tuning procedure. Reptile We used the publicly available github repository from OpenAI. We used transductive training for 100,000 iterations on the 5-way 1-shot task, using the suggested hyperparameters. We then removed the classification layer, froze the remaining weights of the network, and retrained the top layer for the testing n-way classification task, using the Reptile training procedure. As in MAML, we tried setting hyperparameters during re-training to be similar to 5-way 1-shot, and 20-way 1-shot, but did not notice significant differences. Using random initializations for the top-layer weights, and then applying "fast weight" updates at test time also worked reasonably well. Graph Neural Networks Modifying the Graph Neural Network architecture to be applicable for test-way generalization was more difficult, since the approach assumes that labels are represented as a one-hot encoding, and concatenated with node features before being fed to the metric network. At training, we padded the one-hot labels to allow for 200 possible classes. At test time, these could then be filled in without needing to completely retrain the metric network. We additionally fine-tuned the classification layer of the metric network. We were unable to achieve greater than chance performance for the 200-way task. We expect that this is because the metric network learns to ignore the padded input dimensions during training. One possible fix would be to randomize the labels during training to fall in the full (0, 200) range, but we leave this to future work. Scaling this approach up to full-way classification is impossible with this encoding of the labels, as the computational memory requirements are substantial. A.4 SOFT-SOFT CLUSTERING BY APPROXIMATING THE CHINESE RESTAURANT PROCESS Here we discuss an alternative to BANDE which follows Gibbs sampling in an infinite mixture model more closely, in that it incorporates variances of clusters throughout, instead of only during reassignment as in BANDE. This fully soft variant has a probabilistic interpretation through the Chinese Restaurant Process (CRP) of Aldous (1985), but in our experiments it achieves lower accuracy than BANDE. For a certain setting of its parameters we can reinterpret it as an infinite mixture model extension of (Ren et al., 2018), which did not include this theoretical perspective. 
The generative model of the CRP consists of sampling assignments $z_1, \dots, z_J$, which can take on cluster values $c = 1, \dots, C$, from the CRP prior with hyperparameter $\alpha$, which controls the concentration of clusters, and numbers of cluster members $N_c$. Cluster parameters $\mu_c, \sigma_c$ are sampled from a base distribution $H(\theta_0; \mu_0, \sigma_0)$, and instances $x_j$ are then sampled from the associated Gaussian distribution $\mathcal{N}(\mu_{z_j}, \sigma_{z_j})$. $\theta_0$ and $\theta$ consist of the parameters to be estimated, which in this case are the mean $\mu$ and variance $\sigma$ of the Gaussian distributions. The CRP generative model is defined as

$$p(z_{J+1} = c \mid z_{1:J}, \alpha) = \frac{N_c}{N + \alpha} \;\; \text{for } c \in \{1, \dots, C\} \quad \text{and} \quad p(z_{J+1} = C+1 \mid z_{1:J}, \alpha) = \frac{\alpha}{N + \alpha} \qquad (1)$$

for assignments $z$ of examples $x$ to clusters $c$, cluster counts $N_c$, and parameter $\alpha$ to control assignments to new clusters. $N$ is the total number of examples observed so far. One popular sampling procedure for parameter estimation is Gibbs sampling (Neal, 2000). In Gibbs sampling, we draw from a conditional distribution on the cluster assignments until convergence. The conditional draws are:

$$p(z_{J+1} = c \mid z_{1:J}, \alpha) \propto \begin{cases} N_{c,-j} \int P(x_j \mid \theta)\, dH_{-j,c}(\theta) & \text{for } c \leq C \\ \alpha \int P(x_j \mid \theta)\, dH_0(\theta) & \text{for } c = C + 1 \end{cases}$$

For the case of a spherical Gaussian likelihood, let us define $\mathcal{N}_c = \mathcal{N}(x_i; \mu_c, \sigma)$ as the likelihood of assigning $x_i$ to cluster $c$ and $\mathcal{N}_0 = \mathcal{N}(x_i; \mu_0, \sigma + \sigma_0)$ as the likelihood of assigning $x_i$ to a new cluster drawn from the base distribution (Gaussian with mean $\mu_0$ and variance $\sigma_0$). We can then write:

$$p(z_i = c \mid \mu) = \frac{N_{c,-i}\, \mathcal{N}_c}{\alpha \mathcal{N}_0 + \sum_{j=1}^{C} N_{j,-i}\, \mathcal{N}_j} \qquad (2)$$

$$p(z_i = C+1 \mid \mu) = \frac{\alpha \mathcal{N}_0}{\alpha \mathcal{N}_0 + \sum_{j=1}^{C} N_{j,-i}\, \mathcal{N}_j} \qquad (3)$$

$$p(\sigma_c \mid z) = \frac{\sigma \sigma_0}{\sigma + \sigma_0 N_c} \qquad (4)$$

$$p(\mu_c \mid z) = \mathcal{N}\!\left(\mu_c;\; \frac{\sigma \mu_0 + \sigma_0 \sum_{i: z_i = c} x_i}{\sigma + \sigma_0 N_c},\; \frac{\sigma \sigma_0}{\sigma + \sigma_0 N_c}\right) \qquad (5)$$

Algorithm 2 Soft-soft clustering: multi-modal clustering with cluster variances for labeled and unlabeled data by approximating the Chinese Restaurant Process (CRP). $n_s$ is the number of labeled classes (way). $q(i, c)$ is $\log p(i, c)$, the log joint probability of cluster $c$ and assignment $i$. $\mathcal{N}(x; \mu, \sigma)$ is the Gaussian density. $\alpha$ is the concentration hyperparameter of the CRP. $\epsilon$ is the threshold hyperparameter for creating a new cluster.

initialize $\{\mu_1, \dots, \mu_{n_s}\}$   ▷ Initialize a cluster for each labeled class by taking class-wise means
initialize $\{\sigma_1, \dots, \sigma_{n_s}\}$   ▷ Initialize cluster variances based on Equation 4
initialize $\{z_1, \dots, z_I\}$   ▷ Initialize cluster assignments for labeled data points; all unlabeled cluster assignments start at 0
$C = n_s$   ▷ Initialize the number of clusters $C$
▷ Begin clustering pass
for each example $i$ do
    for each cluster $c \in \{1, \dots, C\}$ do
        $N_c \leftarrow \sum_i z_{i,c}$
        $\sigma_c \leftarrow \frac{\sigma \sigma_0}{\sigma + \sigma_0 N_c}$
        $\mu_c \leftarrow \frac{\sigma \mu_0 + \sigma_0 \sum_i z_{i,c} h_\phi(x_i)}{\sigma + \sigma_0 N_c}$
        estimate $q_{i,c} \propto \log(N_{c,-i}) + \log(\mathcal{N}(x_i; \mu_c, \sigma_c))$ based on Equation 2
    end for
    estimate $q_{i,C+1} \propto \log(\alpha) + \log(\mathcal{N}_0(x_i; \mu_0, \sigma_0))$ based on Equation 3
    $z_{i,c} \leftarrow \mathrm{softmax}(q_{i,1}, \dots, q_{i,C+1})$
    if $z_{i,C+1} > \epsilon$ then
        $C \leftarrow C + 1$
    end if
end for

Determining the assignment for a query sample is performed after clustering, using the updated means and cluster counts. We connect our fully soft clustering variant to prior work on semi-supervised prototypical networks (Ren et al., 2018) to give it a new probabilistic perspective. Their method clusters labeled examples into a cluster per class by class-wise means, defines a "distractor" cluster for unrelated unlabeled examples, and then refines the labeled clusters by soft k-means. Their distractor cluster is fixed to have a mean of zero and a variance of 100.
If we set $\mu_0 = 0$ and $\sigma_0 = 100$ accordingly, and do not update the $\sigma$ parameters, then our fully soft clustering can be seen as the infinite mixture model extension of their method, where the distractor cluster corresponds to a draw from a general base distribution with a CRP prior placed on the cluster assignments. A.5 SEMI-SUPERVISED CLUSTERING EVALUATION We show the importance of multi-modality for discovering unlabeled clusters during meta-testing after semi-supervised meta-learning. We randomly sampled n classes from Omniglot's test set, and ran one randomly selected example from each of the n classes through our model to obtain a set of n prototypes. We then presented a new set of examples drawn equally from these n support (known) classes and n out-of-support (unknown) classes, and let each method cluster the examples into either the computed n prototypes or new clusters. Figure 4 shows the n+1 accuracy, i.e., the accuracy of either classifying a new example with its correct, known label or correctly identifying it as a distractor. Only BANDE achieved higher accuracy than chance for numbers of clusters greater than 5, suggesting that a multi-modal distribution for unlabeled clusters is paramount for the algorithm's clustering performance. The number of clusters created for the unlabeled examples closely tracked the correct number of unlabeled clusters, with an average relative error in the number of created clusters of 1.87 across the range from 5 to 200.
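As a minimal sketch of this evaluation, the following Python snippet assigns each new example to its nearest prototype or flags it as a distractor when every prototype is farther than the cluster-creation threshold; the function name, the variable names, and the squared-distance criterion are our own illustrative assumptions rather than the authors' released code.

import numpy as np

def n_plus_one_predict(queries, prototypes, lam):
    # queries: (Q, M) array of embedded examples; prototypes: (n, M) array.
    # Returns a class index per query, or -1 for a predicted distractor.
    preds = []
    for q in queries:
        d2 = np.sum((prototypes - q) ** 2, axis=1)  # squared distances to all prototypes
        preds.append(int(np.argmin(d2)) if d2.min() <= lam else -1)
    return preds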
1. What is the focus of the paper, and how does it relate to previous works in the field? 2. What are the differences between the proposed method and previous approaches, specifically the one by Ren et al.? 3. How does the paper justify the choice of a different clustering algorithm, and what are its implications? 4. What is the significance of "multi-modal clustering" in the context of the paper, and how does it compare to other approaches? 5. How does the reviewer assess the novelty and impact of the proposed method, particularly in light of previous research?
Review
Review The paper proposes a meta-learning method that utilizes unlabeled examples along with labeled examples. The proposed technique is very similar to the one by Ren et al. (2018), differing only in the choice of a different clustering algorithm (Kulis and Jordan, 2012) instead of the soft k-means used by Ren et al. I feel the contrast to Ren et al. is not provided to the degree it should be. Appendix paragraph A.4 is not sufficient in terms of explaining why this method is conceptually different from, or significantly better than, the related approach. It is hard for me to certify the merits of their work, including explaining the experimental results. I also do not understand the significance of "multi-modal clustering" in this context. Also, by their definition of "variadic", how is this more variadic than Ren et al. or Snell et al.?
ICLR
Title Variadic Learning by Bayesian Nonparametric Deep Embedding Abstract Learning at small or large scales of data is addressed by two strong but divided frontiers: few-shot learning and standard supervised learning. Few-shot learning focuses on sample efficiency at small scale, while supervised learning focuses on accuracy at large scale. Ideally they could be reconciled for effective learning at any number of data points (shot) and number of classes (way). To span the full spectrum of shot and way, we frame the variadic learning regime of learning from any number of inputs. We approach variadic learning by meta-learning a novel multi-modal clustering model that connects bayesian nonparametrics and deep metric learning. Our bayesian nonparametric deep embedding (BANDE) method is optimized end-to-end with a single objective, and adaptively adjusts capacity to learn from variable amounts of supervision. We show that multi-modality is critical for learning complex classes such as Omniglot alphabets and carrying out unsupervised clustering. We explore variadic learning by measuring generalization across shot and way between meta-train and meta-test, show the first results for scaling from few-way, few-shot tasks to 1692-way Omniglot classification and 5k-shot CIFAR-10 classification, and find that nonparametric methods generalize better than parametric methods. On the standard few-shot learning benchmarks of Omniglot and mini-ImageNet, BANDE equals or improves on the state-of-the-art for semi-supervised classification. 1 INTRODUCTION In machine learning, classification problems span two important axes: the number of classes to recognize (the "way" of the problem) and the number of examples provided for each class (the "shots" to learn from). At one extreme, there are large-scale tasks like ImageNet in which there are 1000 classes with roughly 1000 examples each (a 1000-way, ∼1000-shot problem). At the other extreme, there are datasets for learning from few examples, such as Omniglot, which features a 5- or 20-way, 1-shot problem. State-of-the-art methods for these two learning regimes are substantially different, with the former dominated by standard parametric deep networks and the latter by episodic meta-learning techniques. Moreover, as shown in our experiments, many methods degrade when the shot and way vary between training and testing. By contrast, humans recognize both familiar and unfamiliar categories whatever the amount of data, and can even learn a new category from a single example (Lake et al., 2015). To this end, we introduce a learning problem which requires generalization from few-way, few-shot problems to many-way, many-shot problems. We call this regime of variable shot and way the variadic learning regime, after variadic functions. Just as variadic functions are those which can take any number of arguments to produce a result, a good variadic learner must learn from any amount of data, whatever the number of examples and classes, and produce strong results across unknown data distributions during test. Meta-learning provides one potential avenue for pursuing a variadic learner. Meta-learning approaches generally use plentiful supervision from one distribution of tasks to learn an algorithm or metric that can be applied to more sparsely supervised tasks. Ideally, meta-learning approaches do not need knowledge of the specific setting in which they will be used. 
However, in practice, meta-learning approaches have commonly been trained and evaluated in constrained circumstances, so their generalization properties are not fully known. Perhaps most significantly, meta-learning is usually carried out independently across settings, so that a different learner is specialized to each n-way, k-shot task. This potentially limits their deployment to the more diverse settings with variable shot and way that we address in this work. As a first step towards a strong variadic learner, we propose a multi-modal (many-to-one) semi-supervised clustering approach which can adapt its capacity to the underlying class representations, and show that this is critical for modeling more complex data distributions. This innovation allows our model to perform inference with any amount of supervision (from totally unsupervised to fully supervised) after training, and to adjust better to variable shot and way than existing approaches. Our bayesian nonparametric deep embedding (BANDE) model (see Figure 1) extends prototypical networks to multi-modal clustering. Clustering with multiple modes is critical for complex classes, and multi-modality makes unsupervised clustering possible. BANDE generalizes across any-shot, any-way tasks better than existing methods. At the many-way extreme, when trained with 5-way 1-shot episodes, BANDE achieves 75% accuracy for 1692-way 10-shot classification of Omniglot, improving on both few-shot and supervised learning baselines. At the many-shot extreme, BANDE approaches the accuracy of a standard supervised learner on CIFAR-10/100. On standard few-shot benchmarks BANDE is state-of-the-art in the semi-supervised setting. 2 RELATED WORK Prototypes and Nonparametrics Prototypical networks (Snell et al., 2017) and semi-supervised prototypical networks (Ren et al., 2018) are the most closely related to our work. Prototypical networks simply and efficiently represent each class by its mean in a learned embedding. They assume that the data is fully labeled. Ren et al. (2018) extend prototypes to the semi-supervised setting by refining prototypes through soft k-means clustering of the unlabeled data. They assume that the data is at least partially labeled. Snell et al. (2017) and Ren et al. (2018) are limited to one cluster per class. We define a more general and adaptive approach through bayesian nonparametrics that extends prototypical networks to multi-modal clustering, with one or many clusters per class, of labeled and unlabeled data alike. Through multi-modal representation and adaptive inference of the number of modes, our method is significantly more accurate on complex classes, performs unsupervised clustering, and improves on standard semi-supervised few-shot learning benchmarks. For multi-modal clustering we incorporate DP-means (Kulis & Jordan, 2012) in our method. DP-means is a scalable, bayesian nonparametric algorithm for unsupervised clustering that creates new clusters when data points are more than a threshold λ away from existing clusters. Our full method handles labeled and unlabeled data, augments the clustering with soft assignments under a normalized Gaussian likelihood, and defines a procedure to choose λ during learning and inference. Metric Learning Learning a metric to measure a given notion of distance/similarity addresses recognition by retrieval: given an unlabeled example, find the closest labeled example.
The contrastive loss and Siamese network architecture (Chopra et al., 2005; Hadsell et al., 2006) optimize an embedding for metric learning by pushing similar pairs together and pulling dissimilar pairs apart. Of particular note is research in face recognition, where a same/different retrieval metric is used for many-way classification (Schroff et al., 2015). Our approach is more aligned with metric learning by meta-learning (Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Garcia & Bruna, 2018). These approaches meta-learn a distance function by directly optimizing the task loss, such as cross-entropy for classification, through episodic optimization (Vinyals et al., 2016) for each setting of way and shot. While we likewise learn by episodic optimization, we differ from previous meta-learning work in our examination of generalization to variable numbers of examples and classes during testing, and we show improvement in this regime. Unlike metric learning on either exemplars (Schroff et al., 2015) or prototypes (Snell et al., 2017; Ren et al., 2018), our method adaptively interpolates between exemplar and uni-modal prototype representations by deciding the number of modes during clustering. Learning Regimes Variadic learning is best explained in relation to few-shot learning, low-shot learning, and conventional supervised learning. Few-shot learning (Fei-Fei et al., 2006; Vinyals et al., 2016) handles tasks of fixed, known, and small numbers of data points and classes. In contrast, variadic tasks have variable numbers of data points and classes that can shift across tasks. Low-shot learning (Hariharan & Girshick, 2017; Qi et al., 2018; Qiao et al., 2018) addresses both densely supervised base classes and sparsely supervised novel classes, but presupposes which classes are in which set. Variadic learning also addresses these extremes of supervision, but requires no knowledge of how much or how little supervision each class has. Large-scale supervised learning (Bottou, 2010) parameterizes the model by the number of classes, and is tuned to the amount of data by choosing capacity, optimization schedules, and so forth. Variadic learning requires accuracy without specialization to shot and way. Life-long learning (Thrun, 1996; 1998) concerns variable shot and way for streams of non-stationary problems, while variadic learning is for one problem of unknown dimensions. Bridging life-long and variadic learning is sensible but out of scope for this work. 3 BAYESIAN NONPARAMETRIC DEEP EMBEDDINGS (BANDE) Our method learns a deep embedding network end-to-end and jointly clusters labeled and unlabeled data points by bayesian nonparametrics. Crucially, our model is able to express a single class as multiple modes, unlike the uni-modal clustering approaches of prior work. Figure 1 gives a schematic view of our multi-modal representation and how it differs from prior prototypical representations. Algorithm 1 expresses one step of model optimization in pseudocode. Few-shot Meta-learning In few-shot classification we are given a support set $S = \{(x_1, y_1), \dots, (x_K, y_K)\}$ of $K$ labeled examples and a query set $Q = \{(x'_1, y'_1), \dots, (x'_{K'}, y'_{K'})\}$ of $K'$ labeled examples, where each $x_i, x'_i \in \mathbb{R}^D$ is a $D$-dimensional feature vector and $y_i, y'_i \in \{1, \dots, N\}$ is the corresponding label. In the semi-supervised setting, $y_i$ may not be provided for every example $x_i$.
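As a concrete illustration of this setup, the following minimal Python sketch assembles one n-way, k-shot episode from a class-indexed dataset; the function and variable names are our own illustrative choices, not the authors' released code.

import random

def make_episode(data_by_class, n_way, k_shot, k_query):
    # data_by_class: dict mapping a class label to its list of examples.
    # Returns support and query sets of (x, y) pairs with episode-local labels.
    classes = random.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for y, c in enumerate(classes):
        examples = random.sample(data_by_class[c], k_shot + k_query)
        support += [(x, y) for x in examples[:k_shot]]
        query += [(x, y) for x in examples[k_shot:]]
    return support, query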
The support set is for learning while the query set is for inference: the few-shot classification problem is to recognize the class of the queries given the labeled supports. Meta-learning is carried out by episodic optimization of the model parameters for the task loss. Episodes are comprised of support and query sets, constructed by randomly sampling a subset of classes, sampling examples from these classes, and then partitioning the examples into supports and queries. Optimization iterates by making one episode and one update. The update is defined by the task loss, which for classification could be the softmax cross-entropy loss. For deep metric learning models like ours, the model parameters are those of the embedding function $h_\phi : \mathbb{R}^D \to \mathbb{R}^M$, a deep network with parameters $\phi$. The embedding of an example $x$ is the $M$-dimensional feature vector taken from the last layer of the network. Meta-training proceeds by optimizing the model parameters $\phi$ with respect to a task loss. Meta-testing proceeds episodically like meta-training, but without query labels or further optimization. Prototypes Prototypical networks (Snell et al., 2017) take the mean of the embedded support examples of a particular class to form a prototype: $\mu_n = \frac{1}{|S_n|} \sum_{(x_i, y_i) \in S_n} h_\phi(x_i)$, with $S_n$ denoting the set of support examples of class $n$. In conjunction with a distance function $d(x_i, x_j)$, this provides an inference scheme for a query point $x$ as the softmax over distances to the prototypes: $p_\phi(y = n \mid x) = \frac{\exp(-d(h_\phi(x), \mu_n))}{\sum_{n'} \exp(-d(h_\phi(x), \mu_{n'}))}$. $\phi$ is optimized by minimizing the negative log-probability of the true class of each query point by stochastic gradient descent in each episode. Prototypical networks defined in this way learn to create uni-modal class distributions for fully labeled supports. Multi-modal Clustering Our method defines multi-modal prototypes of both labeled and unlabeled data. That is, a single class is represented by a set of cluster modes. By deciding the number of modes, our method interpolates between exemplar and uni-modal prototype representations, in effect adjusting its capacity depending on the data. To create multi-modal prototypes, we extend the non-parametric clustering algorithm DP-means (Kulis & Jordan, 2012) to make it compatible with end-to-end learning. DP-means iterates through all examples in a dataset, computing each example's minimum distance to all existing cluster means. If this distance is greater than a particular threshold λ, a new cluster is created with mean $h_\phi(x_i)$ and the example is assigned to it. If $x_i$ is labeled, the new cluster takes on its label. While we use DP-means for cluster creation, we include cluster variances for reassignment. Labeled clusters are assigned a variance $\sigma_l$ and unlabeled clusters are assigned a variance $\sigma_u$. $\sigma_l$ and $\sigma_u$ are differentiable, and therefore learned along with the embedding parameters $\phi$. (We discuss the probabilistic interpretations of this choice in the next section.) $\lambda$, the threshold for creating a new cluster, is the sole hyperparameter for DP-means clustering. It is non-differentiable, and so it cannot be learned jointly. Instead, we set $\lambda$ episodically as a function of the data. In Kulis & Jordan (2012), $\lambda$ is parameterized as $-2\sigma \log\!\left(\frac{\alpha}{(1 + \rho/\sigma)^{d/2}}\right)$. $\alpha$ is the relative probability of forming a new cluster in the Chinese Restaurant Process prior (Aldous, 1985), and $\rho$ is a measure of the standard deviation of the base distribution from which clusters are assumed to be drawn.
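To make the episodic threshold concrete, here is a minimal NumPy sketch of this parameterization; treating $\rho$ as the variance of the labeled cluster means follows the estimate described next, and the function and variable names are our own illustrative assumptions.

import numpy as np

def dp_means_threshold(alpha, sigma, labeled_means):
    # labeled_means: (n_classes, d) array of class-wise embedding means.
    # Returns lambda = -2*sigma*log(alpha / (1 + rho/sigma)^(d/2)).
    d = labeled_means.shape[1]
    rho = labeled_means.var()  # scalar spread of the labeled cluster means
    return -2.0 * sigma * np.log(alpha / (1.0 + rho / sigma) ** (d / 2.0))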
We estimate $\rho$ as the variance in the labeled cluster means within an episode, while $\alpha$ is treated as a hyperparameter. In our experiments, we found a wide range of $\alpha$ values to give similar results, with the embeddings adjusting their overall magnitudes to match the magnitude of $\alpha$. 3.1 PROBABILISTIC INTERPRETATIONS OF HARD AND SOFT CLUSTERING The choice of hard or soft clustering has theoretical ramifications. There are three clustering variants to consider: fully hard, fully soft, and hybrid hard-soft. Fully hard clustering corresponds to following DP-means in a theoretically exact manner, with both $\sigma_u$ and $\sigma_l$ set to 0, and the UPDATEASSIGNMENTS function assigning $z_i = \arg\min_c d_{i,c}$ for each example $i$. This variant is theoretically precise as an extension of DP-means for end-to-end learning and simultaneous clustering of labeled and unlabeled data. Fully soft clustering corresponds to an extension and reinterpretation of prior work on semi-supervised prototypical networks (Ren et al., 2018) (see Section A.4). Through the lens of bayesian nonparametrics, we derive this connection to an approximation of the Chinese Restaurant Process (CRP) (Aldous, 1985) in Section A.4 of the appendix. While fully hard and fully soft clustering admit clearer probabilistic interpretations, they are empirically less accurate than hybrid hard-soft clustering.

Table 1: Clustering comparison on 5-way 1-shot semi-supervised Omniglot.
Clustering          Accuracy
Hard-Hard           97.0
Soft-Soft           98.4
BANDE (Hard-Soft)   99.0

Table 1 compares the variants on a standard semi-supervised few-shot learning benchmark (detailed further in Section 4.3). BANDE does hard-soft clustering throughout our experiments. For hard-soft clustering, UPDATEASSIGNMENTS performs the soft assignment $z_{i,c} = \frac{\mathcal{N}(h_\phi(x_i); \mu_c, \sigma_c)}{\sum_{c'} \mathcal{N}(h_\phi(x_i); \mu_{c'}, \sigma_{c'})}$ for all examples $i$. 3.2 CUMULATIVE SUPERVISION We extend BANDE into a cumulative variant, BANDE-C, that accumulates supervision non-episodically by remembering prototypes across episodes. Concretely, we initialize the cluster means $\mu_c$ by combining a cluster mean from memory, $\phi_{m,c}$, with the current episodic sample mean, i.e., $\mu_c = \frac{1}{|\{i : z_i = c\}| + 1}\left(\phi_{m,c} + \sum_{i: z_i = c} h_\phi(x_i)\right)$. $\phi_{m,c}$ is computed as if $c$ were uni-modal, regardless of whether the clustering was multi-modal in a previous episode. Since the embedding representation rapidly changes early in training, we introduce a discount factor on the stored embedding, $\gamma \phi_{m,c}$, proportional to the current learning rate. Whenever the class is encountered in a future episode, we update the remembered prototype with the cluster mean after episodic inference. We only experiment with BANDE-C in the variadic setting (Section 4.2); everywhere else we keep standard episodic training and testing. Note that standard prototypical networks can likewise be augmented to remember prototypes and non-episodically accumulate supervision in this manner.

Algorithm 1 BANDE: one optimization episode. $n_s$ is the number of labeled classes (way) and $k_s$ is the number of labeled examples of each class (shot). $k_q$ is the number of query examples per class. For a set $A$, $A_n$ is the subset of $A$ with all examples of class $n$. $p(x \mid \mu, \sigma)$ is the Gaussian density. Input: support set $S$, query set $Q$, and unlabeled set $U$. Output: loss $J$ for the episode.

$C \leftarrow n_s$   ▷ $C$ is the total number of clusters
for $c \in \{1, \dots, C\}$ do
    $l_c \leftarrow c$   ▷ $l_c$ is the cluster label
    $\mu_c \leftarrow \frac{1}{k_s} \sum_{(x_i, y_i) \in S_c} h_\phi(x_i)$   ▷ $\mu_c$ is the cluster mean
    $\sigma_c \leftarrow \sigma_l$   ▷ $\sigma_c$ is the cluster variance
end for
▷ Iterate over the labeled and unlabeled data and create new clusters
for each example $i \in S \cup U$ do
    for $c \in \{1, \dots, C\}$ do
        $d_{i,c} \leftarrow \begin{cases} \|h_\phi(x_i) - \mu_c\|^2 & \text{if (example } i \text{ is labeled and } l_c = y_i \text{) or example } i \text{ is unlabeled} \\ +\infty & \text{otherwise} \end{cases}$
    end for
    if $\min_c(d_{i,c}) > \lambda$ then
        $C \leftarrow C + 1$
        $l_C \leftarrow y_i$   ▷ Cluster takes the label of the example
        $\mu_C \leftarrow h_\phi(x_i)$   ▷ Cluster mean takes the embedding of the example
        $\sigma_C \leftarrow \begin{cases} \sigma_l & \text{if } y_i \neq 0 \\ \sigma_u & \text{otherwise} \end{cases}$
    end if
end for
$z \leftarrow \text{UPDATEASSIGNMENTS}(\{h_\phi(x)\}, \mu, \sigma)$   ▷ Update all cluster-example assignments
$\mu \leftarrow \left\{ \frac{\sum_i z_{i,c} h_\phi(x_i)}{\sum_i z_{i,c}} \;\middle|\; c \in \{1, \dots, C\} \right\}$   ▷ Update all cluster means
▷ Cross-entropy loss on the most probable cluster of the true class and all clusters of other classes
$J \leftarrow 0$
for $n \in \{1, \dots, n_s\}$ do
    $c^* \leftarrow \arg\max_{c: l_c = n} \log p(x \mid \mu_c, \sigma_c)$
    $J \leftarrow J + \frac{1}{n_s k_q} \sum_{(x,y) \in Q_n} \left[ -\log p(x \mid \mu_{c^*}, \sigma_{c^*}) + \log\!\left( \sum_{c': l_{c'} \neq n} p(x \mid \mu_{c'}, \sigma_{c'}) + p(x \mid \mu_{c^*}, \sigma_{c^*}) \right) \right]$
end for

4 EXPERIMENTS We experimentally show that multi-modal prototypes are more accurate and more general than uni-modal prototypes. In our new variadic setting for any-shot, any-way learning we explore which methods do (and do not) generalize across shot and way. We report the first results for extreme generalization to 1692-way classification and 5000-shot from few-shot episodic optimization. For few-shot learning, we show competitive results for few-shot fully-supervised and semi-supervised classification on the standard benchmarks of Omniglot and mini-ImageNet. We control for architecture and optimization by comparing methods with the same base architecture and same episodic optimization settings. All code for our method and baselines will be released. For these experiments we make use of standard few-shot and supervised learning datasets and furthermore define new variadic evaluation protocols on these common benchmarks. We consider Omniglot and mini-ImageNet, two widely used datasets for few-shot learning research, and CIFAR-10/CIFAR-100, two popular datasets for supervised learning research with deep networks. Omniglot (Lake et al., 2015) is a dataset of 1,623 handwritten characters from 50 alphabets. There are 20 examples of each character, where the images are resized to 28x28 pixels and each image is rotated by multiples of 90°. This gives 6,492 classes in total, which are then split into 4,112 training classes, 1,692 test classes and 688 validation classes. mini-ImageNet (Vinyals et al., 2016) is a reduced version of the ILSVRC'12 dataset (Russakovsky et al., 2015), which contains 600 84x84 images for 100 classes randomly selected from the full dataset. We use the split from Ravi & Larochelle (2017) with 64/16/20 classes for train/val/test. CIFAR-10/100 (Krizhevsky & Hinton, 2009) are classification datasets of 32x32 color images drawn from the Tiny Images project (Torralba et al., 2008). CIFAR-10 has 10 classes and CIFAR-100 has 100 classes (plus 20 super-classes). Both have 50k training images and 10k testing images, and both are balanced so that every class has an equal number of images. 4.1 ACCURACY AND GENERALITY OF MULTI-MODAL PROTOTYPES Our experiments on Omniglot alphabets and characters show that multi-modal prototypes are significantly more accurate than uni-modal prototypes for recognizing complicated classes (alphabets) and recover uni-modal prototypes as a special case for recognizing simple classes (characters). Multi-modal prototypes generalize better for super-class to sub-class transfer learning, improving accuracy when meta-training on alphabets but meta-testing on characters.
By unifying the clustering of labeled and unlabeled data alike, our multi-modal prototypes even address fully unsupervised clustering, unlike prior prototypical network models that are undefined without labels. We first show the importance of multi-modality for learning representations of multi-modal classes: Omniglot alphabets. For these experiments we meta-train for alphabet classification, using only the super-class labels. Episodes are constructed by sampling 1 example of 200 different random characters in the support set, with 5 examples of each character in the query. For alphabet testing, we provide 100 randomly selected characters with alphabet labels in the support, making this a mixed-shot problem. For character testing, we provide 1 labeled image of 20 different characters as support, and score based on correct character assignments of the queries. As seen in Table 2, in both testing configurations, BANDE substantially outperforms prototypical networks. On 20-way 1-shot character recognition, BANDE achieves 95.3% from alphabet supervision alone, slightly outperforming prototypical networks trained directly on character recognition (94.9%). Fully Unsupervised Clustering BANDE is able to perform fully unsupervised clustering during meta-test via multi-modality. Prior work on prototypical networks (Snell et al., 2017) and semi-supervised prototypical networks (Ren et al., 2018) cannot address this setting because the models are undefined without labeled data. BANDE handles labeled and unlabeled data by the same clustering rule, inferring the number of clusters as needed, and achieves good accuracy under the standard clustering metrics of normalized mutual information (NMI) and purity. We examine BANDE's clustering performance in Table 3 by randomly sampling 5 examples of n classes from the test set and treating them as unlabeled samples. BANDE maintains remarkably strong performance across a large number of unlabeled clusters, without knowing the number of classes in advance, and without having seen any examples from the classes during training. 4.2 ANY-SHOT, ANY-WAY LEARNING IN THE VARIADIC SETTING We now move to the any-shot, any-way setting that this paper introduces. We closely examine extreme generalization across shot and way between meta-train and meta-test, unlike previous approaches which only examine small shifts (Munkhdalai & Yu, 2017; Snell et al., 2017). Most notably, we show that nonparametric methods such as BANDE can generalize from few-way training to many-way testing, while parametric methods fail to transfer effectively. We further show that BANDE, a nonparametric method, performs on par with fully parametric methods in the domain of supervised learning; the first demonstration of a meta-learning method evaluated in the many-shot domain without pre-training. These two results cement the suitability of nonparametric meta-learning methods over parametric methods for the variadic setting. Semi-supervised protocol We train and test BANDE and other prototypical methods on semi-supervised data to include the number of labeled and unlabeled examples in the scope of the variadic setting. We follow Ren et al. (2018), taking only 40% of the data as labeled for both the support and query, while the rest of the data is included, but as unlabeled examples. The unlabeled data is then incorporated into episodes as (1) within-support examples that allow for semi-supervised refinement of the support classes or (2) distractors which lie in the complement of the support classes.
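To illustrate, a semi-supervised episode under this protocol could be augmented with unlabeled data as in the Python sketch below; the helper names and sampling details are our own illustrative assumptions, with the exact counts given in the sentence that follows.

import random

def add_unlabeled(data_by_class, support_classes, k_unlabeled, n_distractor):
    # Augment an episode with unlabeled data: k_unlabeled unlabeled examples
    # per support class, plus n_distractor out-of-support classes with
    # k_unlabeled unlabeled instances each.
    unlabeled = []
    for c in support_classes:
        unlabeled += random.sample(data_by_class[c], k_unlabeled)
    pool = [c for c in data_by_class if c not in support_classes]
    for c in random.sample(pool, n_distractor):
        unlabeled += random.sample(data_by_class[c], k_unlabeled)
    return unlabeled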
Semi-supervised episodes augment the fully supervised n-way, k-shot support with 5 unlabeled examples for each of the n classes and include 5 more distractor classes with 5 unlabeled instances each. The query still contains only support classes. Variable Shot and Way We first look at generalization by moderately adjusting the shot and way in evaluation from their fixed settings during meta-learning. For variable way, we consider Omniglot, because it has many classes. For variable shot, we consider mini-ImageNet, because it has more examples per class. In both cases, we train on 5-way, 1-shot episodes, and test generalization by varying the number of classes and number of examples during meta-testing. We consider four strong fully-supervised baselines trained on 100% of the data (black lines), as well as prototypical baselines trained on 40% of the data (colored). We compare to three parametric methods, MAML (Finn et al., 2017), Reptile (Nichol & Schulman, 2018), and few-shot graph networks (Garcia & Bruna, 2018), as well as the nonparametric memory-based model of Kaiser et al. (2017). Modifications to these approaches for test-way generalization are discussed in Section A.3. As seen in Figure 2(a), the parametric meta-learning approaches fail to meaningfully generalize to higher way than they were trained for. BANDE is the least sensitive to higher-way meta-testing, although the margin between BANDE and semi-supervised prototypical networks in this regime is small compared to the difference with parametric methods. For shot generalization, we compare to MAML's accuracy after 10 updates vs. accuracy at convergence. We note that MAML is not able to make effective use of more data unless it is allowed to take proportionately larger numbers of updates, while our method improves with more data without taking gradients at test time. Even at convergence, MAML lags BANDE's performance, suggesting that a nonparametric approach is still superior to parametric meta-learning. Extreme Generalization to Many-Way We demonstrate that BANDE can learn a full 1692-way classifier for Omniglot from only episodic optimization of 5-way 1-shot tasks. Episodes are composed identically to the few-shot semi-supervised setting with unlabeled examples and distractor classes. Accuracies for our method and a supervised learning baseline are shown in Figure 3. For inference, we run k examples from each test class through our learned embedding network, and then assign the unseen examples the label of the closest prototype. The baseline shares the same training set and architecture, substituting a linear output layer for prototypes by optimizing the softmax cross-entropy loss. We take the last feature layer as the embedding for prototypical inference. Fine-tuning on the test support proved less accurate, as did k-nearest-neighbours inference. This result is an example of episodic optimization yielding strong results for many-way classification, motivating the possibility of learning large-scale models cumulatively from small-scale tasks, instead of restricting attention to the adaptation of large-scale models to small-scale, few-shot settings. Scaling to Many-Shot We examine the effectiveness of BANDE in the conventional supervised learning regime. To the best of our knowledge, this is the first evaluation of meta-training across the spectrum from few-shot to many-shot.
Our base architecture is the Wide ResNet 28-10 of Zagoruyko & Komodakis (2016), which has shown state-of-the-art results on CIFAR-10/100, and has been additionally used as a base architecture for strong low-shot performance on mini-ImageNet (Qiao et al., 2018). We optimize BANDE by meta-training on episodes consisting of 10-way (CIFAR-10) and 20-way (CIFAR-100) 2-shot tasks, for computational considerations. With no knowledge of the total number of classes or number of examples per class, and without pre-training or fine-tuning, we achieve accuracies that rival a well-tuned supervised learning baseline. On CIFAR-10 we achieve 94.4% accuracy compared to the 95.1% accuracy of supervised learning. On CIFAR-100 we achieve 75.6% accuracy, which is > 90% of the 81.2% accuracy of supervised learning. When evaluating both the BANDE and supervised learning embeddings as prototypes, the accuracies are equal, suggesting that both approaches learn equally good representations and differ only in the prototypical vs. parametric form of the classifier. 4.3 FEW-SHOT CLASSIFICATION BENCHMARKS We evaluate BANDE on the standard few-shot classification benchmarks of Omniglot and mini-ImageNet in the fully-supervised and semi-supervised regimes. BANDE learns to recover uni-modal clustering as a special case, matching or outperforming prototypical networks when the classes are uni-modal, as seen in Table 4. In this setting, we evaluate BANDE in the standard episodic protocol of few-shot learning. In this protocol, shot and way are fixed and classes are balanced within an episode. The results reported in Table 4 are for models trained and tested with n-way episodes. This is to equalize comparison across methods. Snell et al. (2017) train at higher way than testing and report a boost in accuracy. We find that this boost is illusory, and is explained away by controlling for the number of gradients per update. We show this by experiment through the use of gradient accumulation in Section A.2 of the appendix. (For completeness, we confirmed that our implementation of prototypical networks reproduces reported results at higher way.) In the semi-supervised setting we follow Ren et al. (2018), using the set-up outlined in the second paragraph of Section 4.2. Our results for this setting are reported in Table 5. Through multi-modality, the clustering of the labeled classes and distractors is decided by the data with a single rule. In particular, this helps with the distractor distribution, which is in fact more diffuse and multi-modal because it comprises several different classes. Our only specialization to this setting is to make the distractor clusters more uncertain through higher cluster variances to compensate for this diffuseness. 5 CONCLUSION We framed the variadic regime to shine a light on learning representations that bridge small-scale and large-scale learning and strive toward the any-shot/any-way adaptability of human perception. As a step toward addressing this full span, we introduced BANDE, a multi-modal extension of prototypical networks that is capable of generalizing across variable amounts of labeled and unlabeled data. Our results have shown that BANDE is state-of-the-art in the few-shot regime and scales from few-way, few-shot meta-learning to many-way, many-shot deployment for both sparse and plentiful supervision. Our experiments demonstrate that multi-modality is key for improved semi-supervised and unsupervised clustering.
There is much work to be done to improve variadic generalization, and to connect to life-long learning over non-stationary tasks. A APPENDIX A.1 IMPLEMENTATION DETAILS For all few-shot experiments, we use the same base architecture as prototypical networks for the embedding network. It is composed of four convolutional blocks, each consisting of a 64-filter 3 x 3 convolution, a batch normalization layer, a ReLU nonlinearity, and a 2 x 2 max-pooling layer. This results in a 64-dimensional embedding vector for Omniglot and a 1600-dimensional embedding vector for mini-ImageNet. Our models were trained via SGD with RMSProp (Tieleman & Hinton, 2012), with an α parameter of 0.9. For Omniglot, the initial learning rate was set to 1e-3 and cut by a factor of two every 2000 iterations, starting at 4000 iterations. We additionally use gradient accumulation and accumulate gradients over eight episodes before making an update when performing 5-way training for Omniglot. For mini-ImageNet, the initial learning rate was set to 1e-3 and further halved every 20000 iterations, starting at 40000 iterations. For the supervised experiments, we use a wide residual network (Zagoruyko & Komodakis, 2016) with depth 28 and widening factor 10, with a dropout value of 0.3. We were not able to perfectly recover published results with our reimplementation, but the numbers are within 1% of their published values. A.2 CONTROLLING FOR THE NUMBER OF GRADIENTS TAKEN DURING OPTIMIZATION Consider the gradient of the loss: it has the dimensions of shot × way because every example has a derivative with respect to every class. In this way, by default, the episode size determines the number of gradients in an update. Quantitatively, 20-way episodes accumulate 16 times as many gradients as 5-way episodes. By sampling 16 5-way episodes and accumulating the gradients to make an update, we achieve significantly better results, matching the results obtained with 20-way episodes within statistical significance. Note that agreement across conditions may not be perfectly exact because subtle adjustments to hyperparameters might be necessary. A.3 EXTENDING COMPARED MODELS TO VARIADIC REGIME The models we compare to were not designed with variadic generalization in mind, and as a result we attempt to make as fair a comparison as possible by extending them as needed. We describe our approaches below. Semi-supervised prototypical networks In the paper first introducing this semi-supervised setting (Ren et al., 2018), the authors show how to use a distractor cluster centered at 0 to capture samples not belonging to any examples from the support. They additionally introduce length scales $r_c$. In Equation 6 of their paper, they use a normalization constant $A(r_c)$ defined as $0.5\log(2\pi) + \log(r_c)$. However, this is an unscaled normalization constant, and it assumes the dimensionality of the embedding space to be 1. The corrected normalization constant is $A(r_c) = d(\log(r_c) + 0.5\log(2\pi))$, where $d$ is the dimensionality of the embedding. We compare to their method with this corrected normalization constant, but note that it has only a small effect. For space, we did not compare to all methods from their paper, and chose this one as it performed well across their experiments and because it was most amenable to the clustering experiments we were interested in performing. MAML We used Finn's publicly available github repository (Finn et al., 2017).
We trained an initial MAML architecture on the 5-way 1-shot task, using the suggested hyperparameters, for 40,000 iterations. We then removed the classification layer, froze the remaining weights of the network (for optimization across episodes, not for gradient descent within an episode), and retrained the top layer for the testing n-way classification task, using the MAML objective again, for 5000 iterations. We tried two hyperparameter settings for the re-training: the hyperparameters for the 5-way 1-shot setting, and the hyperparameters for the 20-way 1-shot setting. We found that re-training with the 20-way 1-shot hyperparameters gave us better performance. While we attempted to also scale these hyperparameters appropriately for even higher way testing, this was not more successful than using the 20-way 1-shot hyperparameters. We then reported the accuracy after 10 update steps on the test data. We also tried simply randomly initializing the top-layer weights, and allowing MAML to take more update steps to see if it could learn the top layer online. These results were worse than those obtained after the fine-tuning procedure. Reptile We used the publicly available github repository from OpenAI. We used transductive training for 100,000 iterations on the 5-way 1-shot task, using the suggested hyperparameters. We then removed the classification layer, froze the remaining weights of the network, and retrained the top layer for the testing n-way classification task, using the Reptile training procedure. As in MAML, we tried setting hyperparameters during re-training to be similar to 5-way 1-shot, and 20-way 1-shot, but did not notice significant differences. Using random initializations for the top-layer weights, and then applying "fast weight" updates at test time also worked reasonably well. Graph Neural Networks Modifying the Graph Neural Network architecture to be applicable for test-way generalization was more difficult, since the approach assumes that labels are represented as a one-hot encoding, and concatenated with node features before being fed to the metric network. At training, we padded the one-hot labels to allow for 200 possible classes. At test time, these could then be filled in without needing to completely retrain the metric network. We additionally fine-tuned the classification layer of the metric network. We were unable to achieve greater than chance performance for the 200-way task. We expect that this is because the metric network learns to ignore the padded input dimensions during training. One possible fix would be to randomize the labels during training to fall in the full (0, 200) range, but we leave this to future work. Scaling this approach up to full-way classification is impossible with this encoding of the labels, as the computational memory requirements are substantial. A.4 SOFT-SOFT CLUSTERING BY APPROXIMATING THE CHINESE RESTAURANT PROCESS Here we discuss an alternative to BANDE which follows Gibbs sampling in an infinite mixture model more closely, in that it incorporates variances of clusters throughout, instead of only during reassignment as in BANDE. This fully soft variant has a probabilistic interpretation through the Chinese Restaurant Process (CRP) of Aldous (1985), but in our experiments it achieves lower accuracy than BANDE. For a certain setting of its parameters we can reinterpret it as an infinite mixture model extension of (Ren et al., 2018), which did not include this theoretical perspective. 
The generative model of the CRP consists of sampling assignments $z_1, \dots, z_J$, which can take on cluster values $c = 1, \dots, C$, from the CRP prior with hyperparameter $\alpha$, which controls the concentration of clusters, and numbers of cluster members $N_c$. Cluster parameters $\mu_c, \sigma_c$ are sampled from a base distribution $H(\theta_0; \mu_0, \sigma_0)$, and instances $x_j$ are then sampled from the associated Gaussian distribution $\mathcal{N}(\mu_{z_j}, \sigma_{z_j})$. $\theta_0$ and $\theta$ consist of the parameters to be estimated, which in this case are the mean $\mu$ and variance $\sigma$ of the Gaussian distributions. The CRP generative model is defined as

$$p(z_{J+1} = c \mid z_{1:J}, \alpha) = \frac{N_c}{N + \alpha} \;\; \text{for } c \in \{1, \dots, C\} \quad \text{and} \quad p(z_{J+1} = C+1 \mid z_{1:J}, \alpha) = \frac{\alpha}{N + \alpha} \qquad (1)$$

for assignments $z$ of examples $x$ to clusters $c$, cluster counts $N_c$, and parameter $\alpha$ to control assignments to new clusters. $N$ is the total number of examples observed so far. One popular sampling procedure for parameter estimation is Gibbs sampling (Neal, 2000). In Gibbs sampling, we draw from a conditional distribution on the cluster assignments until convergence. The conditional draws are:

$$p(z_{J+1} = c \mid z_{1:J}, \alpha) \propto \begin{cases} N_{c,-j} \int P(x_j \mid \theta)\, dH_{-j,c}(\theta) & \text{for } c \leq C \\ \alpha \int P(x_j \mid \theta)\, dH_0(\theta) & \text{for } c = C + 1 \end{cases}$$

For the case of a spherical Gaussian likelihood, let us define $\mathcal{N}_c = \mathcal{N}(x_i; \mu_c, \sigma)$ as the likelihood of assigning $x_i$ to cluster $c$ and $\mathcal{N}_0 = \mathcal{N}(x_i; \mu_0, \sigma + \sigma_0)$ as the likelihood of assigning $x_i$ to a new cluster drawn from the base distribution (Gaussian with mean $\mu_0$ and variance $\sigma_0$). We can then write:

$$p(z_i = c \mid \mu) = \frac{N_{c,-i}\, \mathcal{N}_c}{\alpha \mathcal{N}_0 + \sum_{j=1}^{C} N_{j,-i}\, \mathcal{N}_j} \qquad (2)$$

$$p(z_i = C+1 \mid \mu) = \frac{\alpha \mathcal{N}_0}{\alpha \mathcal{N}_0 + \sum_{j=1}^{C} N_{j,-i}\, \mathcal{N}_j} \qquad (3)$$

$$p(\sigma_c \mid z) = \frac{\sigma \sigma_0}{\sigma + \sigma_0 N_c} \qquad (4)$$

$$p(\mu_c \mid z) = \mathcal{N}\!\left(\mu_c;\; \frac{\sigma \mu_0 + \sigma_0 \sum_{i: z_i = c} x_i}{\sigma + \sigma_0 N_c},\; \frac{\sigma \sigma_0}{\sigma + \sigma_0 N_c}\right) \qquad (5)$$

Algorithm 2 Soft-soft clustering: multi-modal clustering with cluster variances for labeled and unlabeled data by approximating the Chinese Restaurant Process (CRP). $n_s$ is the number of labeled classes (way). $q(i, c)$ is $\log p(i, c)$, the log joint probability of cluster $c$ and assignment $i$. $\mathcal{N}(x; \mu, \sigma)$ is the Gaussian density. $\alpha$ is the concentration hyperparameter of the CRP. $\epsilon$ is the threshold hyperparameter for creating a new cluster.

initialize $\{\mu_1, \dots, \mu_{n_s}\}$   ▷ Initialize a cluster for each labeled class by taking class-wise means
initialize $\{\sigma_1, \dots, \sigma_{n_s}\}$   ▷ Initialize cluster variances based on Equation 4
initialize $\{z_1, \dots, z_I\}$   ▷ Initialize cluster assignments for labeled data points; all unlabeled cluster assignments start at 0
$C = n_s$   ▷ Initialize the number of clusters $C$
▷ Begin clustering pass
for each example $i$ do
    for each cluster $c \in \{1, \dots, C\}$ do
        $N_c \leftarrow \sum_i z_{i,c}$
        $\sigma_c \leftarrow \frac{\sigma \sigma_0}{\sigma + \sigma_0 N_c}$
        $\mu_c \leftarrow \frac{\sigma \mu_0 + \sigma_0 \sum_i z_{i,c} h_\phi(x_i)}{\sigma + \sigma_0 N_c}$
        estimate $q_{i,c} \propto \log(N_{c,-i}) + \log(\mathcal{N}(x_i; \mu_c, \sigma_c))$ based on Equation 2
    end for
    estimate $q_{i,C+1} \propto \log(\alpha) + \log(\mathcal{N}_0(x_i; \mu_0, \sigma_0))$ based on Equation 3
    $z_{i,c} \leftarrow \mathrm{softmax}(q_{i,1}, \dots, q_{i,C+1})$
    if $z_{i,C+1} > \epsilon$ then
        $C \leftarrow C + 1$
    end if
end for

Determining the assignment for a query sample is performed after clustering, using the updated means and cluster counts. We connect our fully soft clustering variant to prior work on semi-supervised prototypical networks (Ren et al., 2018) to give it a new probabilistic perspective. Their method clusters labeled examples into a cluster per class by class-wise means, defines a "distractor" cluster for unrelated unlabeled examples, and then refines the labeled clusters by soft k-means. Their distractor cluster is fixed to have a mean of zero and a variance of 100.
If we set $\mu_0 = 0$ and $\sigma_0 = 100$ accordingly, and do not update the $\sigma$ parameters, then our fully soft clustering can be seen as the infinite mixture model extension of their method, where the distractor cluster corresponds to a draw from a general base distribution with a CRP prior placed on the cluster assignments. A.5 SEMI-SUPERVISED CLUSTERING EVALUATION We show the importance of multi-modality for discovering unlabeled clusters during meta-testing after semi-supervised meta-learning. We randomly sampled n classes from Omniglot's test set, and ran one randomly selected example from each of the n classes through our model to obtain a set of n prototypes. We then presented a new set of examples drawn equally from these n support (known) classes and n out-of-support (unknown) classes, and let each method cluster the examples into either the computed n prototypes or new clusters. Figure 4 shows the n+1 accuracy, i.e., the accuracy of either classifying a new example with its correct, known label or correctly identifying it as a distractor. Only BANDE achieved higher accuracy than chance for numbers of clusters greater than 5, suggesting that a multi-modal distribution for unlabeled clusters is paramount for the algorithm's clustering performance. The number of clusters created for the unlabeled examples closely tracked the correct number of unlabeled clusters, with an average relative error in the number of created clusters of 1.87 across the range from 5 to 200.
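For concreteness, one pass of the soft-soft clustering of Algorithm 2 could be sketched in NumPy as follows; the mean refresh of Equation 5 is omitted for brevity, and all function and variable names are our own illustrative assumptions rather than the released implementation.

import numpy as np

def log_gauss(x, mu, var):
    # Log-density of a spherical Gaussian with scalar variance var.
    d = x.shape[0]
    return -0.5 * (np.sum((x - mu) ** 2) / var + d * np.log(2 * np.pi * var))

def soft_soft_pass(H, class_means, alpha, sigma, sigma0, mu0, eps):
    # H: (I, M) array of embeddings; class_means: list of labeled class-wise means.
    mus = [m.copy() for m in class_means]
    counts = [1.0] * len(mus)  # soft cluster counts N_c
    assignments = []
    for x in H:
        scores = [np.log(n) + log_gauss(x, m, sigma * sigma0 / (sigma + sigma0 * n))  # Eqs. 2 and 4
                  for m, n in zip(mus, counts)]
        scores.append(np.log(alpha) + log_gauss(x, mu0, sigma + sigma0))  # Eq. 3 new-cluster term
        s = np.array(scores)
        z = np.exp(s - s.max())
        z /= z.sum()  # softmax over existing clusters plus a candidate new one
        if z[-1] > eps:  # spawn a new cluster seeded at this example
            mus.append(x.copy())
            counts.append(0.0)
        else:
            z = z[:-1] / z[:-1].sum()
        for c, zc in enumerate(z):
            counts[c] += zc  # accumulate soft counts
        assignments.append(z)
    return mus, counts, assignments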
1. What is the main contribution of the paper regarding deep subspace clustering? 2. What are the strengths and weaknesses of the proposed method, particularly in its formulation and presentation? 3. How does the reviewer assess the novelty and uniqueness of the proposed approach compared to prior works such as [1], [2], [3], [4], and [5]? 4. Do you have any questions or concerns regarding the use of terminology, such as "Bayesian nonparametric," or the lack of clarity and readability in certain parts of the paper?
Review
Review This work proposes a learning method based on deep subspace clustering. The method is formulated by identifying a deep data embedding, where clustering is performed in the latent space by a revised version of k-means, inspired by the work of [1]. In this way, the proposed method can adapt to account for uni-modal distributions. The authors propose some variations of the framework based on soft cluster assignments and on cumulative learning of the cluster means. The method is tested on several scenarios and datasets, showing promising results in prediction accuracy. The idea presented in this work is reasonable and rather intuitive. However, the paper presentation is often unnecessarily convoluted, and fails to clarify the key points of the proposed methodology. The paper often makes use of abstract terms and jargon, which considerably reduces the manuscript's clarity and readability. For this reason, in my opinion, it is very difficult to appreciate the contribution of this work, from both a methodological and an applicative point of view. Related to this latter point, the use of the term "Bayesian nonparametric" is inappropriate. It is completely unclear in which sense the proposed framework is Bayesian, as it doesn't present any element related to parameter inference, uncertainty estimation, etc. Even the fact that the method uses an algorithm illustrated in [1] doesn't justify this terminology, as the clustering procedure used here only corresponds to the limit case of a Dirichlet Process Gibbs sampler when the covariance parameter goes to zero. Moreover, the original procedure requires iteration until convergence, while it is applied here with a single pass only. The procedure is also known to be sensitive to the order in which the data is provided, and this point is not addressed in this work. Finally, the novelty of the proposed contribution is questionable. To my understanding, it may consist of the use of embedding methods based on the approach provided in [1]. However, for the reasons illustrated above, this is not clear. There is also a substantial amount of literature on deep subspace embeddings that proposes very similar methodologies to the one of this paper (e.g. [2-5]). For this reason, the paper would largely benefit from further clarifications and comparisons with respect to these methods. [1] Kulis and Jordan, Revisiting k-means: New Algorithms via Bayesian Nonparametrics, ICML 2012. [2] Xie, Junyuan, Ross Girshick, and Ali Farhadi. "Unsupervised deep embedding for clustering analysis." International Conference on Machine Learning, 2016. [3] Ji, Pan, et al. "Deep subspace clustering networks." Advances in Neural Information Processing Systems, 2017. [4] Jiang, Zhuxi, et al. "Variational deep embedding: An unsupervised and generative approach to clustering." IJCAI 2017. [5] Kodirov, Elyor, Tao Xiang, and Shaogang Gong. "Semantic autoencoder for zero-shot learning." CVPR 2017.
ICLR
Title Spatial Graph Attention and Curiosity-driven Policy for Antiviral Drug Discovery Abstract We developed Distilled Graph Attention Policy Network (DGAPN), a reinforcement learning model to generate novel graph-structured chemical representations that optimize user-defined objectives by efficiently navigating a physically constrained domain. The framework is examined on the task of generating molecules that are designed to bind, noncovalently, to functional sites of SARS-CoV-2 proteins. We present a spatial Graph Attention (sGAT) mechanism that leverages self-attention over both node and edge attributes as well as encoding the spatial structure — this capability is of considerable interest in synthetic biology and drug discovery. An attentional policy network is introduced to learn the decision rules for a dynamic, fragment-based chemical environment, and state-of-the-art policy gradient techniques are employed to train the network with stability. Exploration is driven by the stochasticity of the action space design and the innovation reward bonuses learned and proposed by random network distillation. In experiments, our framework achieved outstanding results compared to state-of-the-art algorithms, while reducing the complexity of paths to chemical synthesis. 1 INTRODUCTION This work aims to address the challenge of establishing an automated process for the design of objects with connected components, such as molecules, that optimize specific properties. Achieving this goal is particularly desirable in drug development and materials science, where manual discovery remains a time-consuming and expensive process (Hughes et al., 2011; Schneider et al., 2020). However, there are two major difficulties that have long impeded rapid progress. Firstly, the chemical space is discrete and massive (Polishchuk et al., 2013), presenting a complicated environment for an Artificial Intelligence (AI) approach to efficiently and effectively explore. Secondly, it is not trivial to compress such connected objects into feature representations that preserve most of the information, while also being highly computable for Deep Learning (DL) methods to exploit. We introduce Distilled Graph Attention Policy Network (DGAPN), a framework that advances prior work in addressing both of these challenges. We present a Reinforcement Learning (RL) architecture that is efficiently encouraged to take innovative actions with an environment that is able to construct a dynamic and chemically valid fragment-based action space. We also propose a hybrid Graph Neural Network (GNN) that comprehensively encodes graph objects' attributes and spatial structures in addition to adjacency structures. The following paragraphs discuss how we addressed limitations of prior work and its relevance to antiviral drug discovery. For more descriptions of key prior methodologies that we used as benchmarks in this paper, see Section 4. Graph Representation Learning Despite their spatial efficiency, string representation of molecules acquired by the simplified molecular-input line-entry system (SMILES) (Weininger, 1988) suffers from significant information loss and poor robustness (Liu et al., 2017).
Graph representations have become predominant and preferable for their ability to efficiently encode an object's scaffold structure and attributes. Graph representations are particularly well suited to RL, since intermediate representations can be decoded and evaluated for reward assignments. While GNNs such as Graph Convolutional Networks (GCN) (Kipf & Welling, 2016) and Graph Attention Networks (GAT) (Veličković et al., 2017) have demonstrated impressive performance on many DL tasks, further exploitation of the richer information contained in graph-structured data is needed to faithfully represent the complexity of chemical space (Morris et al., 2019; Wang et al., 2019; Chen et al., 2020). In this work, we made improvements over previous studies on attribute encoding and structural encoding. For structural encoding, previous studies have covered adjacency distance encoding (Li et al., 2020), spatial cutoffs (Pei et al., 2020), and coordinate encoding (Schütt et al., 2017; Danel et al., 2020). Our work presents an alternative approach to spatial structure encoding that is similar to Gilmer et al. (2017) in that it does not rely on node coordinates, but differs in its embedding and updating scheme. Distinct from Danel et al. (2020) and Chen & Chen (2021), we extended the attentional embedding to be edge-featured, while remaining node-centric for message passing efficiency.

Reinforcement Learning
A variety of graph generative models have been used in prior work, predominantly Variational Autoencoders (VAE) (Simonovsky & Komodakis, 2018; Samanta et al., 2020; Liu et al., 2018; Ma et al., 2018; Jin et al., 2018) and Generative Adversarial Networks (GAN) (De Cao & Kipf, 2018). While some of these have a recurrent structure (Li et al., 2018; You et al., 2018b), RL and other search algorithms that interact dynamically with the environment excel in sequential generation due to their ability to resist overfitting on the training data. Both policy learning (You et al., 2018a) and value function learning (Zhou et al., 2019) have been adopted for molecule generation; however, they generate molecules node-by-node and edge-by-edge. In comparison, an action space consisting of molecular fragments, i.e., a collection of chemically valid components and realizable synthesis paths, is favorable, since different atom types and bonds are defined by the local molecular environment. Furthermore, the chemical space to explore can be greatly reduced. Fragment-by-fragment sequential generation has been used in VAEs (Jin et al., 2018) and search algorithms (Jin et al., 2020; Xie et al., 2021), but has not been utilized in a deep graph RL framework. In this work, we designed our environment with the Chemically Reasonable Mutations (CReM) (Polishchuk, 2020) library to realize a valid fragment-based action space. In addition, we enhanced exploration by employing a simple and efficient technique, adapting Random Network Distillation (RND) (Burda et al., 2018) to GNNs and proposing surrogate innovation rewards for intermediate states during the generating process.

Antiviral Drug Discovery: A Timely Challenge
The severity of the COVID-19 pandemic highlighted the major role of computational workflows in characterizing the viral machinery and identifying druggable targets for the rapid development of novel antivirals.
Particularly, the synergistic use of DL methods and structural knowledge via molecular docking is at the cutting edge of molecular biology; consolidating such integrative protocols to accelerate drug discovery is of paramount importance (Yang et al., 2021; Jeon & Kim, 2020; Thomas et al., 2021). Here we experimentally examined our architecture on the task of discovering novel inhibitors targeting the SARS-CoV-2 non-structural protein endoribonuclease (NSP15), which is critical for viral evasion of host defense systems (Pillon et al., 2021). Structural information about the putative protein-ligand complexes was integrated into this framework with AutoDock-GPU (Santos-Martins et al., 2021), which leverages the GPU resources from leadership-class computing facilities, including the Summit supercomputer, for high-throughput molecular docking (LeGrand et al., 2020). We show that our results outperformed state-of-the-art generation models in finding molecules with high affinity to the target and reasonable synthetic accessibility.

2 PROPOSED METHOD

2.1 ENVIRONMENT SETTINGS

In the case of molecular generation, single-atom or single-bond additions are often not realizable by known biochemical reactions. Rather than employing abstract architectures such as GANs to suggest synthetic accessibility, we use the chemical library CReM (Polishchuk, 2020) to construct our environment such that all possible next molecules can be obtained by one step of interchanging chemical fragments with the current molecule. This explicit approach is considerably more reliable and interpretable compared to DL approaches. A detailed description of the CReM library can be found in Appendix B.1.

The generating process is formulated as a Markov decision problem (details are given in Appendix A). At each time step $t$, we use CReM to sample a set of valid molecules $v_{t+1}$ as the candidates for the next state $s_{t+1}$ based on the current state $s_t$. Under this setting, the transition dynamics are deterministic, the set $A$ of the action space can be defined as equal to the set $S$ of the state space, and the action $a_t$ is induced by the direct selection of $s_{t+1}$. With an abuse of notation, we let $r(s_{t+1}) := r(s_t, a_t)$.

2.2 SPATIAL GRAPH ATTENTION

We introduce a graph embedding mechanism called Spatial Graph Attention (sGAT) in an attempt to faithfully extract feature vectors $h_t \in \mathbb{R}^{d_h}$ representing graph-structured objects such as molecules. Two different types of information graphs constructed from a connected object are heterogeneous and are thus handled differently in forward passes, as described in the following sections. See Figure 1 for an overview.

2.2.1 ATTENTION ON ATTRIBUTION GRAPHS

The attribution graph of a molecule with $n$ atoms and $e$ bonds is given by the triple $(A, N, E)$, where $A \in \{0,1\}^{n\times n}$ is the node adjacency matrix, $N$ is the node attribution matrix of dimension $n \times d_n$, and $E$ is the edge attribution matrix of dimension $e \times d_e$. Each entry $a_{ij}$ of $A$ is 1 if a bond exists between atoms $i$ and $j$, and 0 otherwise. Each row vector $n_i$ of $N$ is a concatenation of the properties of atom $i$, including its atomic number, mass, etc., with the categorical properties being one-hot encoded. $E$ is formed similarly to $N$, but with bond attributes. We denote a row vector of $E$ as $e_{ij}$ if it corresponds to the bond between atoms $i$ and $j$.
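To make the $(A, N, E)$ triple concrete, the following is a minimal sketch of how such an attribution graph could be assembled with RDKit. The two node features and the one-hot bond type shown here are illustrative assumptions, not the paper's exact feature set.

```python
# Hedged sketch: building the (A, N, E) attribution graph of a molecule with
# RDKit. The features shown (atomic number, mass, one-hot bond type) are
# illustrative; the paper's full feature set may differ.
import numpy as np
from rdkit import Chem

BOND_TYPES = [Chem.BondType.SINGLE, Chem.BondType.DOUBLE,
              Chem.BondType.TRIPLE, Chem.BondType.AROMATIC]

def attribution_graph(smiles):
    mol = Chem.MolFromSmiles(smiles)
    n = mol.GetNumAtoms()
    A = np.zeros((n, n), dtype=np.int8)              # node adjacency matrix
    N = np.array([[a.GetAtomicNum(), a.GetMass()]    # node attribute rows n_i
                  for a in mol.GetAtoms()])
    E = []                                           # edge attribute rows e_ij
    for b in mol.GetBonds():
        i, j = b.GetBeginAtomIdx(), b.GetEndAtomIdx()
        A[i, j] = A[j, i] = 1
        E.append([float(b.GetBondType() == t) for t in BOND_TYPES])
    return A, N, np.array(E)

A, N, E = attribution_graph("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print(A.shape, N.shape, E.shape)                       # (13, 13) (13, 2) (13, 4)
```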
We proceed to define a multi-head forward propagation that handles this rich graph information: let $h_{n_k} \in \mathbb{R}^{1\times d_{h_n}}$ denote a given representation for $n_k$ and $h_{e_{ij}} \in \mathbb{R}^{1\times d_{h_e}}$ denote a representation for $e_{ij}$; then the $m$-th head attention $\alpha^m_{ij}$ from node $j$ to node $i$ ($i \neq j$) is given by

$$\alpha^m_{ij} = \operatorname{softmax}_j\Big(\bigcup_{k:\, a_{ik}=1}\big\{\sigma\big([h_{n_i}W_{n,m} \,\|\, h_{e_{ik}}W_{e,m} \,\|\, h_{n_k}W_{n,m}]\cdot \mathrm{att}_m^{T}\big)\big\}\Big) \quad (1)$$

where $\operatorname{softmax}_j$ is the softmax score of node $j$; $\|$ is column concatenation; $\sigma$ is some non-linear activation; $W_{n,m} \in \mathbb{R}^{d_{h_n}\times d_{w_n}}$ and $W_{e,m} \in \mathbb{R}^{d_{h_e}\times d_{w_e}}$ are the $m$-th head weight matrices for nodes and edges respectively; and $\mathrm{att}_m \in \mathbb{R}^{1\times(2d_{w_n}+d_{w_e})}$ is the $m$-th head attention weight. The representations after a feed-forward operation are consequently given as follows:

$$h'_{n_i} = \operatorname*{aggr}_{1\le m\le n_m}\Big\{\sigma\Big(\Big(\sum_{j:\, a_{ij}=1}\alpha^m_{ij}\cdot h_{n_j} + h_{n_i}\Big)W_{n,m}\Big)\Big\} \quad (2)$$

$$h'_{e_{ij}} = \operatorname*{aggr}_{1\le m\le n_m}\Big\{\sigma\Big(\big[h_{n_i}W_{n,m} \,\|\, h_{e_{ij}}W_{e,m} \,\|\, h_{n_j}W_{n,m}\big]\cdot W_{h,m}\Big)\Big\} \quad (3)$$

where $W_{h,m} \in \mathbb{R}^{(2d_{w_n}+d_{w_e})\times d_{w_e}}$; $n_m$ is the total number of attention heads; and $\operatorname{aggr}$ denotes an aggregation method, most commonly mean, sum, or concat (Hamilton et al., 2017). We note that we have not found significant differences across these methods and have used mean for all aggregations in our experiments. In principle, a single-head operation on nodes is essentially graph convolution with the adjacency matrix $\hat{A} = \tilde{A} + I$, where $\tilde{A}$ is attention-regularized according to (1). This approach sufficiently embeds edge attributes while still being a node-centric convolution mechanism, for which efficient frameworks like Pytorch-Geometric (Fey & Lenssen, 2019) have been well established.

2.2.2 SPATIAL CONVOLUTION

In addition to attributions and logical adjacency, one might also wish to exploit the spatial structure of a graph object. In the case of molecular docking, spatial structure informs the molecular volume and the spatial distribution of interaction sites: shape and chemical complementarity to the receptor binding site is essential for an effective association. Let $G = \big(d_{ij}^{-1}\big)_{i,j\le n}$ be the inverse distance matrix, where $d_{ij}$ is the Euclidean distance between nodes $i$ and $j$ for all $i \neq j$, and $d_{ii}^{-1} := 0$. $G$ can then be seen as an adjacency matrix with weighted "edges" indicating the nodes' spatial relations, and the forward propagation is thus given by

$$H''_n = \sigma\Big(\big(\tilde{D}^{-\frac{1}{2}}\tilde{G}\tilde{D}^{-\frac{1}{2}} + I\big)H_n W_n\Big) \quad (4)$$

where $\tilde{G}$ is optionally sparsified and attention-regularized from $G$ as described below; $\tilde{D} = \operatorname{diag}_{1\le i\le n}\big\{\sum_{j=1}^{n}\tilde{G}_{ij}\big\}$; $H_n$ is the row concatenation of $\{h_{n_i}\}_{1\le i\le n}$; and $W_n \in \mathbb{R}^{d_{h_n}\times d_{w_n}}$ is the weight matrix. In practice, $G$ induces $O(n)$ convolution operations on each node and can drastically increase training time when the number of nodes is high. Therefore, one might want to derive $\tilde{G}$ by enforcing a cut-off around each node's neighborhood (Pei et al., 2020), or by preserving an $O(n)$ number of the largest entries in $G$ and dropping out the rest. In our case, although the average number of nodes is low enough for the gather and scatter operations (GS) of Pytorch-Geometric to experience no noticeable difference in runtime as node degrees scale up (Fey & Lenssen, 2019), the latter approach to sparsification was still carried out because we discovered that proper cutoffs improved the validation loss in our supervised learning experiments. If one perceives the relations between chemical properties and spatial information as more abstract, $G$ should be regularized by attention as described in (1), in which case the spatial convolution is principally fully-connected graph attention with the Euclidean distance as a one-dimensional edge attribution.
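As a reading aid for Eqs. (1)-(4), here is a dense, single-head PyTorch sketch of the node attention and the spatial convolution. It is our own simplification (dense matrices, tanh as the non-linearity $\sigma$, one head, the edge update of Eq. (3) omitted), not the paper's PyTorch-Geometric implementation; Eq. (2) is computed in the algebraically equivalent form $(\alpha H + H)W = \alpha(HW) + HW$.

```python
# Dense single-head sketch of Eqs. (1), (2) and (4); illustrative only, not the
# paper's PyTorch-Geometric implementation. sigma is taken to be tanh and the
# edge update of Eq. (3) is omitted for brevity.
import torch

def sgat_node_update(A, Hn, He, Wn, We, att):
    # A: (n, n) adjacency; Hn: (n, dhn); He: (n, n, dhe) with e_ij stored at [i, j]
    n = A.shape[0]
    q = Hn @ Wn                                         # h_{n_i} W_n for all nodes
    e = He @ We                                         # (n, n, dwe)
    trip = torch.cat([q.unsqueeze(1).expand(n, n, -1),  # [h_i W || e_ik W || h_k W]
                      e,
                      q.unsqueeze(0).expand(n, n, -1)], dim=-1)
    logits = torch.tanh(trip @ att)                     # pre-softmax scores, Eq. (1)
    logits = logits.masked_fill(A == 0, float("-inf"))  # attend only where a_ik = 1
    alpha = torch.nan_to_num(torch.softmax(logits, dim=1))
    # Eq. (2): ((sum_j alpha_ij h_j + h_i) W) computed as alpha (H W) + H W
    return torch.tanh(alpha @ q + q)

def spatial_conv(G_tilde, Hn, Wn):
    # Eq. (4): symmetrically normalized (sparsified) inverse-distance propagation
    d = G_tilde.sum(dim=1).clamp(min=1e-8)
    D_inv_sqrt = torch.diag(d.pow(-0.5))
    prop = D_inv_sqrt @ G_tilde @ D_inv_sqrt + torch.eye(len(d))
    return torch.tanh(prop @ Hn @ Wn)
```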
2.3 GRAPH ATTENTION POLICY NETWORK

In this section we introduce the Graph Attention Policy Network (GAPN), which is tailored to environments that possess a dynamic range of actions. Note that $\rho(\cdot|s_t,a_t)$ is a degenerate distribution for deterministic transition dynamics, and the future trajectory $\tau \sim p(s_{t+1}, s_{t+2}, \dots \mid s_t)$ is strictly equal in distribution to $a \sim \pi(a_t, a_{t+1}, \dots \mid s_t)$; it is hence simplified as the latter in the following sections.

To learn the policy more efficiently, we let $s_t$ and $v_t$ share a few mutual embedding layers, and we provide the option to pre-train the first $n_g$ layers with supervised learning. Layers inherited from pre-training are not updated during the training of RL. See Figure 2 for an overview of the architecture.

2.3.1 ACTION SELECTION

At each time step $t$, we sample the next state $s_{t+1}$ from a categorical distribution constructed by applying a retrieval-system-inspired attention mechanism (Vaswani et al., 2017):

$$s_{t+1} \sim \operatorname{OHC}\Big\{\operatorname{softmax}\Big(\bigcup_{g\in g_{t+1}}\big\{L_{\mathrm{final}}\big(E_Q(g_t)\,\|\,E_K(g)\big)\big\}\Big)\Big\}\cdot v_{t+1} \quad (5)$$

where $\operatorname{OHC}\{p_1,\dots,p_{n_v}\}$ is a one-hot categorical distribution with $n_v$ categories; $g_t$, $g_{t+1}$ are the embeddings for $s_t$ and $v_{t+1}$ acquired by the shared encoder; $E_Q$, $E_K$ are two sGAT+MLP graph encoders with output feature dimension $d_k$; and $L_{\mathrm{final}}: \mathbb{R}^{b\times 2d_k}\to\mathbb{R}^{b}$ is the final feed-forward layer. Essentially, each candidate state is assigned a probability based on its 'attention' to the query state. The next state is then sampled categorically according to these probabilities.

There could be a number of ways to determine the stopping time $T$. For instance, an intuitive approach would be to append $s_t$ to $v_{t+1}$ and terminate the process if $s_t$ is selected as $s_{t+1}$. In our experiments, we simply pick $T$ to be constant, i.e., we perform a fixed number of modifications for an input. This design encourages the process not to take meaninglessly long routes or get stuck in a cycle, and it enables episodic docking evaluations in parallelization (further described in Section 2.5). Note that a constant trajectory length is feasible because the maximum limit of time steps can be set significantly lower for a fragment-based action space compared to node-by-node and edge-by-edge action spaces.

2.3.2 ACTOR-CRITIC ALGORITHM

For the purpose of obeying causal logic and reducing variance, the advantage on the discounted reward-to-go is predominantly used instead of raw rewards in policy iterations. The Q-function and advantage function are expressed as

$$Q^{\pi}(s_t,a_t) = \mathbb{E}_{\pi}\Big[\sum_{t'=t}^{T}\gamma^{t'-t}\cdot r(s_{t'},a_{t'})\,\Big|\,s_t,a_t\Big] \quad (6)$$

$$A^{\pi}(s_t,a_t) = Q^{\pi}(s_t,a_t) - \mathbb{E}_{\pi}\big[Q^{\pi}(s_t,a_t)\,\big|\,s_t\big] \quad (7)$$

where $\gamma$ is the rate of time discount. The Advantage Actor-Critic (A2C) algorithm approximates $\mathbb{E}_{\pi}[Q^{\pi}(s_t,a_t)\mid s_t]$ with a value network $V_{\zeta}(s_t)$, and $Q^{\pi}(s_t,a_t)$ with $r(s_t,a_t) + \gamma V_{\zeta}(s_{t+1})$. For a more detailed description of actor-critic algorithms in RL, see Grondman et al. (2012).

2.3.3 PROXIMAL POLICY OPTIMIZATION

We use Proximal Policy Optimization (PPO) (Schulman et al., 2017), a state-of-the-art policy gradient technique, to train our network. PPO constrains policy updates, the necessity of which is elaborated in trust region policy optimization (TRPO) (Schulman et al., 2015), yet in a much simplified manner. It also enables multiple epochs of minibatch updates within one iteration. The objective function is given as follows:

$$J^*(\theta) = \max_{\theta}\,\mathbb{E}_{D,\pi_{\theta}^{\mathrm{old}}}\Big[\sum_{t=1}^{T}\min\big\{r_t(\theta)A^{\pi_{\theta}^{\mathrm{old}}}(s_t,a_t),\ \operatorname{clip}_{\epsilon}(r_t(\theta))A^{\pi_{\theta}^{\mathrm{old}}}(s_t,a_t)\big\}\Big] \quad (8)$$

where $r_t(\theta) = \pi_{\theta}^{\mathrm{new}}(a_t|s_t)\,/\,\pi_{\theta}^{\mathrm{old}}(a_t|s_t)$, $\operatorname{clip}_{\epsilon}(x) = \min\{\max\{1-\epsilon,\, x\},\, 1+\epsilon\}$, and $s_0 \sim D$. During policy iterations, $\pi^{\mathrm{new}}$ is updated each epoch and $\pi^{\mathrm{old}}$ is cloned from $\pi^{\mathrm{new}}$ each iteration.
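To illustrate how Eqs. (5) and (8) fit together in practice, the following PyTorch sketch pairs the categorical action selection with the clipped PPO surrogate. Apart from $\epsilon = 0.1$ (the PPO clipping parameter from Appendix C), the names and wiring are our own assumptions, not the released code.

```python
# Illustrative sketch of the categorical action selection of Eq. (5) and the
# clipped PPO surrogate of Eq. (8); names and wiring are ours, not the released code.
import torch

def select_action(q_emb, cand_embs, l_final):
    # q_emb: (dk,) embedding E_Q(g_t); cand_embs: (nv, dk) rows E_K(g) over v_{t+1};
    # l_final: e.g. torch.nn.Linear(2 * dk, 1), playing the role of L_final.
    pairs = torch.cat([q_emb.expand(cand_embs.shape[0], -1), cand_embs], dim=-1)
    logits = l_final(pairs).squeeze(-1)          # one attention score per candidate
    dist = torch.distributions.Categorical(logits=logits)
    a = dist.sample()                            # index of s_{t+1} within v_{t+1}
    return a, dist.log_prob(a)

def ppo_clip_loss(logp_new, logp_old, adv, eps=0.1):
    ratio = torch.exp(logp_new - logp_old)       # r_t(theta)
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps)
    return -torch.min(ratio * adv, clipped * adv).mean()  # minimize -J*(theta)
```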
2.4 EXPLORATION WITH RANDOM NETWORK DISTILLATION

We seek to employ a simple and efficient exploration technique that can be naturally incorporated into our architecture to enhance the curiosity of our policy. We perform Random Network Distillation (RND) (Burda et al., 2018) on graphs or pre-trained feature graphs to fulfill this need. Two random functions $\hat{f}_{\psi}$, $f^*$ that map input graphs to feature vectors in $\mathbb{R}^{d_r}$ are initialized with neural networks, and $\hat{f}_{\psi}$ is trained to match the output of $f^*$:

$$\psi^* = \arg\min_{\psi}\,\mathbb{E}_{s'\sim \hat{p}_{\mathrm{next}}}\big\|\hat{f}_{\psi}(s') - f^*(s')\big\| \quad (9)$$

where $\hat{p}_{\mathrm{next}}$ is the empirical distribution of all the previously selected next states, i.e., the states that have been explored. We record the running errors in a buffer and construct the surrogate innovation reward as

$$r_i(s') = \operatorname{clip}_{\eta}\Big(\big(\|\hat{f}_{\psi}(s') - f^*(s')\| - m_b\big)\big/\sqrt{v_b}\Big) \quad (10)$$

where $m_b$ and $v_b$ are the first and second central moments inferred from the running buffer, and $\operatorname{clip}_{\eta}(x) = \min\{\max\{-\eta,\, x\},\, \eta\}$.

2.5 PARALLELIZATION AND SYNCHRONIZED EVALUATION

Interacting with the environment and obtaining rewards through external software programs are the two major performance bottlenecks in our framework, as in RL in general. An advantage of our environment settings, as stated in Section 2.3.1, is that a constant trajectory length is feasible. Moreover, the costs of environmental interactions are about the same for different input states. To take advantage of this, we parallelize environments on CPU subprocesses and execute batched operations on one GPU process, which enables synchronized and sparse docking evaluations that reduce the number of calls to the docking program. For future experiments where such conditions might be unrealistic, we also provide options for asynchronous Parallel-GPU and Parallel-CPU samplers (described in Stooke & Abbeel (2019)) in addition to the Parallel-GPU sampler used in our experiments.

3 EXPERIMENTS

3.1 SETUP

Objectives
We evaluated our model against five state-of-the-art models (detailed in Section 4) with the objective of discovering novel inhibitors targeting SARS-CoV-2 NSP15. Molecular docking scores are computed by docking programs that use the three-dimensional structure of the protein to predict the most stable bound conformations of the molecules of interest, targeting a pre-defined functional site. For more details on molecular docking and our GPU implementation of an automated docking tool used in the experiments, see Appendix B.2. In addition, we evaluated our model in the context of optimizing QED and penalized LogP values, two tasks commonly presented in the machine learning literature for molecular design. The results can be found in Appendix D.

Dataset
For the models/settings that do require a dataset, we used a set of SMILES IDs taken from more than six million compounds from the MCULE molecular library, a publicly available dataset of purchasable molecules (Kiss et al., 2012), together with their docking scores for the NSP15 target.

3.2 RESULTS

3.2.1 SINGLE-OBJECTIVE OPTIMIZATION

The raw docking score is a negative value that represents a higher estimated binding affinity when the score is lower. We use the negative docking score as the main reward $r_m$ and assign it to the final state $s_T$ as the single objective.
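Before the innovation bonus is combined with this main reward below, a minimal PyTorch sketch of Eqs. (9)-(10) may be useful. The one-layer networks, Adam optimizer, and Python-list buffer are our own placeholder choices; $\eta = 5$ follows Appendix C.

```python
# Hedged sketch of the RND innovation reward, Eqs. (9)-(10). Network sizes,
# optimizer, and buffer handling are placeholder assumptions.
import torch
import torch.nn as nn

d_in, d_r = 256, 8
f_star = nn.Linear(d_in, d_r)                     # frozen random target f*
for p in f_star.parameters():
    p.requires_grad_(False)
f_hat = nn.Linear(d_in, d_r)                      # predictor trained per Eq. (9)
opt = torch.optim.Adam(f_hat.parameters(), lr=2e-3)
errors = []                                       # running buffer of past errors

def innovation_reward(g, eta=5.0):
    err = (f_hat(g) - f_star(g)).norm()
    opt.zero_grad(); err.backward(); opt.step()   # one distillation step, Eq. (9)
    errors.append(err.item())
    buf = torch.tensor(errors)
    m_b = buf.mean()                              # first central moment
    v_b = buf.var(unbiased=False).clamp(min=1e-8) # second central moment
    r = (err.item() - m_b) / v_b.sqrt()           # standardize with buffer moments
    return float(r.clamp(-eta, eta))              # clip_eta, Eq. (10)
```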
For DGAPN, we also assign an innovation reward to each intermediate state, and the total raw reward for a trajectory $\tau$ is given by

$$r(\tau) = r_m(s_T) + \iota\cdot\sum_{t=1}^{T} r_i(s_t) \quad (11)$$

where $\iota$ is the relative importance of the innovation rewards, for which we chose 0.1 and incorporated them with a 100-episode delay and a 1,000-episode cutoff. Detailed hyperparameter settings for DGAPN can be found in Appendix C.

We sampled 1,000 molecules from each method and show the evaluation results in Table 1. We note that we have a separate approach to evaluate our model that is able to achieve a −7.73 mean and −10.38 best docking score (see the Ablation Study paragraph below), but here we only evaluated the latest molecules found in training in order to maintain consistency with the manner in which GCPN and MolDQN are evaluated. In the result table, ordinary validity is checked by examining atoms' valency and the consistency of bonds in aromatic rings. In addition, we propose an adjusted validity which further deems molecules that fail conformer generation (Riniker & Landrum, 2015) invalid, on top of the ordinary validity criteria. This is required for docking evaluation, and molecules that fail this check are assigned a docking score of 0. We also provide additional summary metrics to help gain perspective on the generated molecules: Uniq. and Div. are the uniqueness and diversity (Polykovskiy et al., 2020); QED (Bickerton et al., 2012) is an indicator of drug-likeness; SA (Ertl & Schuffenhauer, 2009) is the synthetic accessibility. QED is better when the score is higher, and SA is better when it is lower. Definitions of QED and SA can be found in Appendix E.

On this task, DGAPN significantly outperformed state-of-the-art models in terms of top scores and average score, obtaining a high statistical significance over the second-best model (MolDQN) with a p-value of $8.55\times10^{-209}$ under Welch's t-test (Welch, 1947). As anticipated, the molecules generated by fragment-based algorithms (JTVAE, MARS and DGAPN) have significantly better SAs. Yet we note that the additional summary metrics are not of particular interest in single-objective optimization, and obtaining good summary metrics does not always indicate useful results. For example, during model tuning, we found that worse convergence often tends to result in a better diversity score. There also seems to be a trade-off between docking score and QED, which we further examine in Section 3.2.3.

Ablation study
We performed ablation studies to examine the efficacy of each component of our model. Firstly, we segregated spatial graph attention from the RL framework and examined its effect solely in a supervised learning setting with the NSP15 dataset. The loss curves are shown in Figure 3, in which spatial convolution exhibited a strong impact on molecular graph representation learning. Secondly, we ran single-objective optimization with (DGAPN) and without (GAPN) innovation rewards, and thirdly, we compared the results from DGAPN in evaluation against a greedy algorithm with only the CReM environment. These results are shown in Table 2. Note that it is not exactly fair to compare the greedy algorithm to the other approaches, since it has access to more information (the docking reward for each intermediate candidate) when making decisions, yet our model still managed to outperform it in evaluation mode (see Appendix C for more information). From the results generated by the greedy approach, we can see that the environment and the stochasticity design of the action space alone are powerful for the efficacy and exploration of our policies. While the innovation bonus helped discover molecules with better docking scores, it also worsened SA. We further investigate this docking score vs. SA trade-off in Section 3.2.3. To see samples of molecules generated by DGAPN in evaluation, visit our repository†.
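As a small worked example of Eq. (11) and of the significance test quoted above, consider the following sketch; `dock_score` and `innovation_rewards` are hypothetical inputs, and $\iota = 0.1$ follows the text.

```python
# Sketch of the total trajectory reward of Eq. (11); inputs are hypothetical.
def trajectory_reward(dock_score, innovation_rewards, iota=0.1):
    r_main = -dock_score                  # lower docking score is better, so negate
    return r_main + iota * sum(innovation_rewards)

print(trajectory_reward(-7.7, [0.3, -0.1, 0.5]))   # 7.77

# Welch's unequal-variance t-test, as used for the DGAPN vs. MolDQN comparison:
# from scipy.stats import ttest_ind
# t, p = ttest_ind(scores_dgapn, scores_moldqn, equal_var=False)
```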
3.2.2 CONSTRAINED OPTIMIZATION

The goal of constrained optimization is to find molecules that show a large improvement over a given molecule from the dataset while maintaining a certain level of similarity:

$$r_{m'}(s_T) = r_m(s_T) - \lambda\cdot\max\{0,\ \delta - \operatorname{SIM}(s_0, s_T)\} \quad (12)$$

where $\lambda$ is a scaling coefficient, for which we chose 100, and $\operatorname{SIM}(\cdot,\cdot)$ is the Tanimoto similarity between Morgan fingerprints. We used a subset of 100 molecules from our dataset as the starting molecules, chose the two most recent and best-performing benchmark models in single-objective optimization to compete against, and evaluated 100 molecules generated from theirs and ours. The results are shown in Table 3. From the results, it seems that MARS is not capable of performing optimization with a similarity constraint. Compared to MolDQN, DGAPN gave better improvements across all levels of $\delta$, although MolDQN was able to produce molecules with more stable similarity scores.

3.2.3 MULTI-OBJECTIVE OPTIMIZATION

We investigate the balance between the main objective and realism by performing multi-objective optimization, and thus provide another approach to generate useful molecules in practice. We weight $r_m$ with two additional metrics, QED and SA, yielding the new main reward

$$r_{m'}(s_T) = \omega\cdot r_m(s_T) + (1-\omega)\cdot\mu\cdot\big[\operatorname{QED}(s_T) + \operatorname{SA}^*(s_T)\big] \quad (13)$$

where $\operatorname{SA}^*(s_T) = (10 - \operatorname{SA}(s_T))/9$ is an adjustment of SA such that it ranges from 0 to 1 with larger values preferred, and $\mu$ is a scaling coefficient, for which we chose 8. The results obtained by DGAPN under different settings of $\omega$ are shown in Figure 4. With $\omega = 0.6$, DGAPN is able to generate molecules having better average QED (0.72) and SA (2.20) than those of the best model (JTVAE) on these two metrics in Table 1, while still maintaining a mean docking score (−5.69) better than all benchmark models in single-objective optimization.

†https://github.com/yulun-rayn/DGAPN

A trade-off between docking reward and QED/SA was identified. We acknowledge that optimizing docking alone does not guarantee finding practically useful molecules, but our goal is to generate promising chemicals with room for rational hit optimization. We also note that commonly used alternative main objectives such as pLogP and QED are themselves unreliable or undiscerning, as discussed in Appendix D. Hence, for methodological study purposes, we believe that molecular docking provides a more useful and realistic test bed for algorithm development.
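To make the rewards of Eqs. (12) and (13) concrete, here is a hedged RDKit sketch. $\lambda = 100$, $\mu = 8$, and $\omega = 0.6$ follow the text; the SA value is passed in as an input (e.g., from RDKit's contrib sascorer), and the remaining wiring is our own.

```python
# Hedged sketch of the constrained (Eq. (12)) and multi-objective (Eq. (13))
# rewards; the SA score is taken as an input rather than computed here.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, QED

def tanimoto(smi_a, smi_b, radius=2, n_bits=2048):
    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s),
                                                 radius, nBits=n_bits)
           for s in (smi_a, smi_b)]
    return DataStructs.TanimotoSimilarity(*fps)         # SIM(s_0, s_T)

def constrained_reward(r_main, smi_start, smi_final, delta, lam=100.0):
    return r_main - lam * max(0.0, delta - tanimoto(smi_start, smi_final))

def multi_objective_reward(r_main, smi_final, sa_score, omega=0.6, mu=8.0):
    qed = QED.qed(Chem.MolFromSmiles(smi_final))
    sa_star = (10.0 - sa_score) / 9.0                   # SA* rescaled to [0, 1]
    return omega * r_main + (1 - omega) * mu * (qed + sa_star)
```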
4 RELATED WORK

The REINVENT (Olivecrona et al., 2017) architecture consists of two recurrent neural network (RNN) architectures, generating molecules as tokenized SMILES strings. The "Prior network" is trained with maximum likelihood estimation on a set of canonical SMILES strings, while the "Agent network" is trained with policy gradient and rewarded using a combination of task scores and Prior network estimations.

The Junction Tree Variational Autoencoder (JTVAE; Jin et al. (2018)) trains two encoder/decoder networks to build a fixed-dimension latent space representation of molecules, where one network captures the junction tree structure of molecules and the other is responsible for fine-grained connectivity. Novel molecules with desired properties are then generated using Bayesian optimization on the latent space.

Graph Convolutional Policy Network (GCPN; You et al. (2018a)) is a policy gradient RL architecture for de novo molecular generation. The network defines domain-specific modifications on molecular graphs so that chemical validity is maintained at each episode. Additionally, the model optimizes for realism with adversarial training and expert pre-training using trajectories generated from known molecules in the ZINC library.

Molecule Deep Q-Networks (MolDQN; Zhou et al. (2019)) is a Q-learning model using Morgan fingerprints as representations of molecules. To achieve molecular validity, chemical modifications are directly defined for each episode. To enhance exploration of the chemical space, MolDQN learns $H$ independent Q-functions, each of which is trained on separate sub-samples of the training data.

Markov Molecular Sampling (MARS; Xie et al. (2021)) generates molecules by employing an iterative method of editing fragments within a molecular graph, producing high-quality candidates through Markov chain Monte Carlo (MCMC) sampling. MARS then uses the MCMC samples to train a GNN to represent and select candidate edits, further improving sampling efficiency.

5 CONCLUSIONS

In this work, we introduced a spatial graph attention mechanism and a curiosity-driven policy network to discover novel molecules optimized for targeted objectives. We identified candidate antiviral compounds designed to inhibit the SARS-CoV-2 protein NSP15, leveraging extensive molecular docking simulations. Our framework advances the state-of-the-art algorithms in the optimization of molecules with antiviral potential, as measured by molecular docking scores, while maintaining reasonable synthetic accessibility. We note that a valuable extension of our work would be to focus on lead optimization: the refinement of molecules already known to bind the protein of interest through position-constrained modification. Such knowledge-based and iterative refinements may help to work around limitations in the accuracy of molecular docking predictions.

ACKNOWLEDGMENTS

This work was funded via the DOE Office of Science through the National Virtual Biotechnology Laboratory (NVBL), a consortium of DOE national laboratories focused on the response to COVID-19, with funding provided by the Coronavirus CARES Act. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF) at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. This manuscript has been coauthored by UT-Battelle, LLC under contract no. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a nonexclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan, last accessed September 16, 2020).

A DETAILED FORMULATION OF THE PROBLEM

Our goal is to establish a set of decision rules to generate graph-structured data that maximizes compound objectives under certain constraints. Similar to prior formulations, the generating process is defined as a time-homogeneous Markov Decision Process (MDP). We give a formal definition of this process in Appendix A.1. Under this setting, the action policies and state transition dynamics at step $t$ can be factorized according to the Markov property:

$$P(a_t \mid s_0, a_0, s_1, a_1, \dots, s_t) = P(a_t \mid s_t) := \pi(a_t \mid s_t) \quad (14)$$

$$P(s_{t+1} \mid s_0, a_0, s_1, a_1, \dots, s_t, a_t) = P(s_{t+1} \mid s_t, a_t) := \rho(s_{t+1} \mid s_t, a_t) \quad (15)$$

where $\{s_t, a_t\}_t$ are state-action sequences. A reward function $r(s,a)$ is used to assess an action $a$ taken at a given state $s$. The process terminates at an optional stopping time $T$, and $s_T$ is then proposed as the final product of the current generating cycle. We aim to estimate the optimal policy $\pi$ in terms of various objectives to be constructed later in the experiment section.

A.1 MEASURE THEORY CONSTRUCTION OF MARKOV DECISION PROCESS

Let $(S, \mathcal{S})$ and $(A, \mathcal{A})$ be two measurable spaces called the state space and the action space; functions $\Pi : S \times \mathcal{A} \to \mathbb{R}$ and $T : S \times A \times \mathcal{S} \to \mathbb{R}$ are said to be a policy and a transition probability, respectively, if

1. For each $s \in S$, $E \mapsto \Pi(s, E)$ is a probability measure on $(A, \mathcal{A})$; for each $(s,a) \in S \times A$, $F \mapsto T(s, a, F)$ is a probability measure on $(S, \mathcal{S})$.
2. For each $E \in \mathcal{A}$, $s \mapsto \Pi(s, E)$ is a measurable function from $(S, \mathcal{S}) \to (\mathbb{R}, \mathcal{B})$; for each $F \in \mathcal{S}$, $(s,a) \mapsto T(s, a, F)$ is a measurable function from $(S \times A, \mathcal{S} \otimes \mathcal{A}) \to (\mathbb{R}, \mathcal{B})$.

We say a sequence of random variable pairs $(S_t, A_t)$ defined on the two measurable spaces is a Markov decision chain if

$$P(A_t \in E \mid \sigma(S_0, A_0, S_1, A_1, \dots, S_t)) = \Pi(S_t, E) \quad (16)$$

$$P(S_{t+1} \in F \mid \sigma(S_0, A_0, S_1, A_1, \dots, S_t, A_t)) = T(S_t, A_t, F) \quad (17)$$

A function $r : S \times \mathcal{A} \to \mathbb{R}$ is said to be the reward function w.r.t. the Markov decision chain if $r(s_t, E_t) = \mathbb{E}_{\Pi,T}[R(s_{t+1}) \mid S_t = s_t, A_t \in E_t]$, where $R : S \to \mathbb{R}$ is its underlying reward function. With an abuse of notation, we define $\pi(a|s) := \Pi(s, \{a\})$ and $\rho(s'|s,a) := T(s, a, \{s'\})$, and we let $r(s,a)$ denote $r(s, \{a\})$.

B LEARNING ENVIRONMENT AND REWARD EVALUATION

B.1 ENVIRONMENT - CREM

Chemically Reasonable Mutations (CReM) is an open-source fragment-based framework for chemical structure modification. The use of libraries of chemical fragments allows for direct control of the chemical validity of molecular substructures and for consideration of the chemical context of coupled fragments (e.g., resonance effects). Compared to atom-based approaches, CReM explores less of the chemical space but guarantees chemical validity for each modification, because only fragments that are in the same chemical context are interchangeable. Compared to reaction-based frameworks, CReM enables a larger exploration of the chemical space but may explore chemical modifications that are less synthetically feasible. Fragments are generated from the ChEMBL database (Gaulton et al., 2012), and for each fragment, the chemical context is encoded for several context radius sizes in a SMILES string and stored along with the fragment in a separate database. For each query molecule, mutations are enumerated by matching the context of its fragments with those found in the CReM fragment-context database (Polishchuk, 2020). In this work, we use the grow function on a single carbon to generate initial choices if a warm-start dataset is not provided, and the mutate function to enumerate possible modifications with the default context radius size of 3 to find replacements.
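A hedged usage sketch of the two CReM entry points mentioned above follows; it assumes a pre-built fragment-context database file (`replacements.db` is a placeholder name) and the `crem` Python package, whose exact API may differ across versions.

```python
# Hedged sketch of a CReM environment step; `replacements.db` is a placeholder
# for a pre-built fragment-context database, and the crem API may vary by version.
from rdkit import Chem
from crem.crem import grow_mol, mutate_mol

db = "replacements.db"

# grow: seed initial candidates from a single carbon when no warm start is given
seeds = list(grow_mol(Chem.MolFromSmiles("C"), db_name=db))

# mutate: enumerate the candidate next states v_{t+1} with context radius 3
state = Chem.MolFromSmiles("c1ccccc1O")                  # current state s_t
candidates = list(mutate_mol(state, db_name=db, radius=3))
print(len(candidates), candidates[:3])                   # SMILES of valid mutations
```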
B.2 EVALUATION - AUTODOCK-GPU

Docking programs use the three-dimensional structure of the protein (i.e., the receptor) to predict the most stable bound conformations of the small molecules (i.e., its putative ligands) of interest, often targeting a pre-defined functional site, such as the catalytic site. An optimization algorithm, guided by a scoring function, is employed to find the ligand conformations that likely correspond to binding free energy minima. The scoring function is conformation-dependent and typically comprises physics-based empirical or semi-empirical potentials that describe pair-wise atomic terms, such as dispersion, hydrogen bonding, electrostatics, and desolvation (Huang et al., 2010; Huey et al., 2007).

AutoDock is a computational simulated docking program that uses a Lamarckian genetic algorithm to predict native-like conformations of protein-ligand complexes and a semi-empirical scoring function to estimate the corresponding binding affinities. Lower values of docking scores indicate stronger predicted interactions. The negative of the lowest estimated binding affinity obtained for each molecule forms the reward.

AutoDock-GPU (Santos-Martins et al., 2021) is an extension of AutoDock that leverages the highly parallel architecture of GPUs. Within AutoDock-GPU, ADADELTA (Zeiler, 2012), a gradient-based method, is used for local refinement. The structural information of the receptor (here, the NSP15 protein) used by AutoDock-GPU is processed prior to running the framework. In this preparatory step, AutoDockTools (Morris et al., 2009b) was used to define the search space for docking on NSP15 (PDB ID 6W01; Figure 5) and to generate the PDBQT file of the receptor, which contains atomic coordinates, partial charges, and AutoDock atom types. AutoGrid4 (Morris et al., 2009a) was used to pre-calculate grid maps of interaction energy at the binding site for the different atom types defined in CReM.

In evaluation, after applying an initial filter within RDKit to check whether a given SMILES is chemically valid (e.g., hybridization, ring membership, etc.), a 3D conformer of the molecule is generated using AllChem.EmbedMolecule. SMILES that do not correspond to valid compounds are discarded. Next, the molecular geometry is energy-minimized within RDKit using the generalized force field MMFF94. The resulting conformer is used as input for molecular docking via AutoDock-GPU. We also excluded from the final result set any molecules that were both fully rigid and larger than the search box in the receptor. This only occurred for two molecules from the JTVAE evaluation.
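The pre-docking pipeline just described can be summarized in a short RDKit sketch; the helper name and failure handling are ours, but the calls mirror the steps above (validity filter, AllChem.EmbedMolecule, MMFF94 minimization).

```python
# Sketch of the pre-docking validity and geometry pipeline described above.
from rdkit import Chem
from rdkit.Chem import AllChem

def prepare_for_docking(smiles):
    mol = Chem.MolFromSmiles(smiles)       # initial chemical validity filter
    if mol is None:
        return None
    mol = Chem.AddHs(mol)
    if AllChem.EmbedMolecule(mol) == -1:   # 3D conformer generation
        return None                        # fails the "adjusted validity" check
    AllChem.MMFFOptimizeMolecule(mol)      # energy minimization with MMFF94
    return mol                             # written out as input for AutoDock-GPU
```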
C HYPERPARAMETER SETTINGS FOR SINGLE-OBJECTIVE OPTIMIZATION

Based on a parameter sweep, we set the number of GNN layers to 3 and the number of MLP layers to 3, with 3 of the GNN layers and 0 of the MLP layers shared between query and key. The number of layers in RND is set to 1; all hidden layer widths are 256; the learning rate is $2\times10^{-3}$ for the actor, $1\times10^{-4}$ for the critic, and $2\times10^{-3}$ for RND; and the number of update time steps (i.e., the batch size) is 300. The number of epochs per iteration and the clipping parameter $\epsilon$ for PPO are 30 and 0.1. The output dimension and the clipping parameter $\eta$ for RND are 8 and 5. In evaluation mode, we use an arg max policy instead of a sampling policy, expand the number of candidates per step from 15-20 to 128, and expand the maximum number of time steps per episode from 12 to 20 compared to training. For more details regarding hyperparameter settings, see our codebase at https://github.com/yulun-rayn/DGAPN.

D MORE RESULTS ON QED AND PENALIZED LOGP

Although QED and penalized LogP are the most popular objectives for benchmarking ML algorithms for molecule generation, these benchmarks are questionable for both scientific study and practical use, as Xie et al. (2021) pointed out. Most methods can obtain QED scores close or equal to the highest possible value of 0.948, making it hard for the metric to distinguish between different methods. As for pLogP, if we simply construct a large molecule with no ring, such as the molecule from the SMILES ‘CCCCC...CCCCC’ (139 carbons), it will give us a pLogP score of 50.31, which beats all state-of-the-art models in Table 4. Needless to say, we would achieve an even higher pLogP by continuously adding carbons, which was exactly how REINVENT performed in our experiment. We note that we were able to raise our results to around 18 solely by doubling the maximum time steps per episode reported in Appendix C, yet we are not interested in pushing performance on this somewhat meaningless metric by continuously increasing one hyperparameter. The results for REINVENT were produced in our own experiments, while the others were taken directly from the original results reported in the literature.

E DEFINITIONS OF QED AND SA

E.1 QUANTITATIVE ESTIMATE OF DRUG-LIKENESS (QED) is defined as

$$\operatorname{QED} = \exp\left(\frac{1}{n}\sum_{i=1}^{n}\ln d_i\right),$$

where the $d_i$ are desirability functions of eight widely used molecular properties. Specifically, they are molecular weight (MW), octanol-water partition coefficient (ALOGP), number of hydrogen bond donors (HBD), number of hydrogen bond acceptors (HBA), molecular polar surface area (PSA), number of rotatable bonds (ROTB), number of aromatic rings (AROM), and number of structural alerts. Each $d_i$ takes the form

$$d_i(x) = a_i + \frac{b_i}{1+\exp\left(-\frac{x-c_i+d_i/2}{e_i}\right)}\cdot\left[1-\frac{1}{1+\exp\left(-\frac{x-c_i+d_i/2}{f_i}\right)}\right],$$

where the parameters $a_i,\dots,f_i$ are given by a supplementary table in Bickerton et al. (2012).

E.2 SYNTHETIC ACCESSIBILITY (SA) is defined as

$$\operatorname{SA} = \text{fragmentScore} - \text{complexityPenalty}.$$

The fragment score is calculated as a sum of contributions from fragments of 934,046 already-synthesized chemicals from PubChem. The complexity penalty is computed from a combination of the ringComplexityScore, stereoComplexityScore, macroCyclePenalty, and sizePenalty:

$$\text{ringComplexityScore} = \log(\text{nRingBridgeAtoms}+1) + \log(\text{nSpiroAtoms}+1)$$
$$\text{stereoComplexityScore} = \log(\text{nStereoCenters}+1)$$
$$\text{macroCyclePenalty} = \log(\text{nMacroCycles}+1)$$
$$\text{sizePenalty} = \text{nAtoms}^{1.005} - \text{nAtoms}$$
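For readers who want to reproduce the QED definition above, RDKit ships both the eight property values and the final score; the cross-check below against the geometric-mean formula is our own illustration.

```python
# Illustrative cross-check of the QED definition against RDKit's implementation.
import math
from rdkit import Chem
from rdkit.Chem import QED

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
print(QED.properties(mol))   # MW, ALOGP, HBA, HBD, PSA, ROTB, AROM, ALERTS
print(QED.qed(mol))          # geometric mean of the eight desirabilities d_i

def qed_from_desirabilities(d):
    # QED = exp((1/n) * sum_i ln d_i)
    return math.exp(sum(math.log(x) for x in d) / len(d))
```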
1. What is the focus and contribution of the paper on generating molecular graphs?
2. What are the strengths of the proposed approach, particularly in its application and comparison with other works?
3. Do you have any concerns or suggestions regarding the presentation of the results, such as the readability of figures and additional information that could be provided?
4. How does the reviewer assess the novelty and significance of the proposed method in the context of related work?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a method to generate molecular graphs with optimized properties. Molecular graphs are constructed by the iterative addition of molecular fragments in a deep reinforcement learning framework. The method is benchmarked against a set of baselines on the task of generating molecules that maximize the docking score to a protein (NSP15) from the SARS-CoV-2 virus, and it shows good performance.

Review
Strengths:
- Relevant application of designing small molecules with desired properties, e.g., inhibitors of SARS-CoV-2
- Paper is overall well written
- Proposed method compared with quite a large range of relevant baselines
- Some informative ablation studies done
- Code has been provided

Other comments/questions:
- Although the proposed method is apparently the first to utilize a fragment-by-fragment graph construction in a deep reinforcement learning framework, I think the idea of extending the atom-by-atom graph construction of previous RL works with a fragment-based action space (explored in more recent works: Jin 2018, Jin 2020, Xie 2021) is interesting but not particularly novel.
- Parts of Figure 4, such as the molecular structures, are too small and unreadable.
- It would be useful to see the distribution of the properties in Table 1 (dock score, QED, SA, FCD). Also, it would be interesting to see the QED, SA, and FCD that correspond to the top-3 dock score molecules.
ICLR
Title Spatial Graph Attention and Curiosity-driven Policy for Antiviral Drug Discovery Abstract We developed Distilled Graph Attention Policy Network (DGAPN), a reinforcement learning model to generate novel graph-structured chemical representations that optimize user-defined objectives by efficiently navigating a physically constrained domain. The framework is examined on the task of generating molecules that are designed to bind, noncovalently, to functional sites of SARS-CoV-2 proteins. We present a spatial Graph Attention (sGAT) mechanism that leverages self-attention over both node and edge attributes as well as encoding the spatial structure — this capability is of considerable interest in synthetic biology and drug discovery. An attentional policy network is introduced to learn the decision rules for a dynamic, fragment-based chemical environment, and state-of-the-art policy gradient techniques are employed to train the network with stability. Exploration is driven by the stochasticity of the action space design and the innovation reward bonuses learned and proposed by random network distillation. In experiments, our framework achieved outstanding results compared to state-of-the-art algorithms, while reducing the complexity of paths to chemical synthesis. 1 INTRODUCTION This work aims to address the challenge of establishing an automated process for the design of objects with connected components, such as molecules, that optimize specific properties. Achieving this goal is particularly desirable in drug development and materials science, where manual discovery remains a time-consuming and expensive process (Hughes et al., 2011; Schneider et al., 2020). However, there are two major difficulties that have long impeded rapid progress. Firstly, the chemical space is discrete and massive (Polishchuk et al., 2013), presenting a complicated environment for an Artificial Intelligence (AI) approach to efficiently and effectively explore. Secondly, it is not trivial to compress such connected objects into feature representations that preserve most of the information, while also being highly computable for Deep Learning (DL) methods to exploit. We introduce Distilled Graph Attention Policy Network (DGAPN), a framework that advances prior work in addressing both of these challenges. We present a Reinforcement Learning (RL) architecture that is efficiently encouraged to take innovative actions with an environment that is able to construct a † University of California, Berkeley, § National Virtual Biotechnology Laboratory, US Department of Energy, ‡ Lawrence Berkeley National Laboratory, ¶ Oak Ridge National Laboratory, || University of Tennessee, Knoxville, †† University of Chicago, §§ Argonne National Laboratory, ‡‡ Pacific Northwest National Laboratory dynamic and chemically valid fragment-based action space. We also propose a hybrid Graph Neural Network (GNN) that comprehensively encodes graph objects’ attributes and spatial structures in addition to adjacency structures. The following paragraphs discuss how we addressed limitations of prior work and its relevance to antiviral drug discovery. For more descriptions of key prior methodologies that we used as benchmarks in this paper, see Section 4. Graph Representation Learning Despite their spatial efficiency, string representation of molecules acquired by the simplified molecular-input line-entry system (SMILES) (Weininger, 1988) suffers from significant information loss and poor robustness (Liu et al., 2017). 
Graph representations have become predominant and preferable for their ability to efficiently encode an object’s scaffold structure and attributes. Graph representations are particularly ideal for RL since intermediate representations can be decoded and evaluated for reward assignments. While GNNs such as Graph Convolutional Networks (GCN) (Kipf & Welling, 2016) and Graph Attention Networks (GAT) (Veličković et al., 2017) have demonstrated impressive performance on many DL tasks, further exploitation into richer information contained in graph-structured data is needed to faithfully represent the complexity of chemical space (Morris et al., 2019; Wang et al., 2019; Chen et al., 2020). In this work, we made improvements to previous studies on attributes encoding and structural encoding. For structural encoding, previous studies have covered adjacency distance encoding (Li et al., 2020), spatial cutoff (Pei et al., 2020) and coordinates encoding (Schütt et al., 2017; Danel et al., 2020). Our work presents an alternative approach to spatial structure encoding similar to Gilmer et al. (2017) which do not rely on node coordinates, but different in embedding and updating scheme. Distinct from Danel et al. (2020) and Chen & Chen (2021), we extended attentional embedding to be edge-featured, while still node-centric for message passing efficiency. Reinforcement Learning A variety of graph generative models have been used in prior work, predominantly Variational Autoencoders (VAE) (Simonovsky & Komodakis, 2018; Samanta et al., 2020; Liu et al., 2018; Ma et al., 2018; Jin et al., 2018) and Generative Adversarial Networks (GAN) (De Cao & Kipf, 2018). While some of these have a recurrent structure (Li et al., 2018; You et al., 2018b), RL and other search algorithms that interact dynamically with the environment excel in sequential generation due to their ability to resist overfitting on training data. Both policy learning (You et al., 2018a) and value function learning (Zhou et al., 2019) have been adopted for molecule generation: however, they generate molecules node-by-node and edge-by-edge. In comparison, an action space consisting of molecular fragments, i.e., a collection of chemically valid components and realizable synthesis paths, is favorable since different atom types and bonds are defined by the local molecular environment. Furthermore, the chemical space to explore can be largely reduced. Fragment-by-fragment sequential generation has been used in VAE (Jin et al., 2018) and search algorithms (Jin et al., 2020; Xie et al., 2021), but has not been utilized in a deep graph RL framework. In this work, we designed our environment with the Chemically Reasonable Mutations (CReM) (Polishchuk, 2020) library to realize a valid fragment-based action space. In addition, we enhanced exploration by employing a simple and efficient technique, adapting Random Network Distillation (RND) (Burda et al., 2018) to GNNs and proposing surrogate innovation rewards for intermediate states during the generating process. Antiviral Drug Discovery — A Timely Challenge The severity of the COVID-19 pandemic highlighted the major role of computational workflows to characterize the viral machinery and identify druggable targets for the rapid development of novel antivirals. 
Particularly, the synergistic use of DL methods and structural knowledge via molecular docking is at the cutting edge of molecular biology — consolidating such integrative protocols to accelerate drug discovery is of paramount importance (Yang et al., 2021; Jeon & Kim, 2020; Thomas et al., 2021). Here we experimentally examined our architecture on the task of discovering novel inhibitors targeting the SARS-CoV-2 non-structural protein endoribonuclease (NSP15), which is critical for viral evasion of host defense systems (Pillon et al., 2021). Structural information about the putative protein-ligand complexes was integrated into this framework with AutoDock-GPU (Santos-Martins et al., 2021), which leverages the GPU resources from leadership-class computing facilities, including the Summit supercomputer, for high-throughput molecular docking (LeGrand et al., 2020). We show that our results outperformed state-of-the-art generation models in finding molecules with high affinity to the target and reasonable synthetic accessibility. 2 PROPOSED METHOD 2.1 ENVIRONMENT SETTINGS In the case of molecular generation, single-atom or single-bond additions are often not realizable by known biochemical reactions. Rather than employing abstract architectures such as GANs to suggest synthetic accessibility, we use the chemical library CReM (Polishchuk, 2020) to construct our environment such that all next possible molecules can be obtained by one step of interchanging chemical fragments with the current molecule. This explicit approach is considerably more reliable and interpretable compared to DL approaches. A detailed description of the CReM library can be found in Appendix B.1. The generating process is formulated as a Markov decision problem (details are given in Appendix A). At each time step t, we use CReM to sample a set of valid molecules vt+1 as the candidates for the next state st+1 based on current state st. Under this setting, the transition dynamics are deterministic, set A of the action space can be defined as equal to S of the state space, and action at is induced by the direct selection of st+1. With an abuse of notation, we let r(st+1) := r(st, at). 2.2 SPATIAL GRAPH ATTENTION We introduce a graph embedding mechanism called Spatial Graph Attention (sGAT) in an attempt to faithfully extract feature vectors ht ∈ Rdh representing graph-structured objects such as molecules. Two different types of information graphs constructed from a connected object are heterogeneous and thus handled differently in forward passes as described in the following sections. See Figure 1 for an overview. 2.2.1 ATTENTION ON ATTRIBUTION GRAPHS The attribution graph of a molecule with n atoms and e bonds is given by the triple (A,N ,E), where A ∈ {0, 1}n×n is the node adjacency matrix, N is the node attribution matrix of dimension n× dn and E is the edge attribution matrix of dimension e× de. Each entry aij ofA is 1 if a bond exists between atom i and j, and 0 otherwise. Each row vector ni of N is a concatenation of the properties of atom i, including its atomic number, mass, etc., with the categorical properties being one-hot encoded. E is formed similar to N , but with bond attributes. We denote a row vector of E as eij if it corresponds to the bond between atom i and j. 
We proceed to define a multi-head forward propagation that handles these rich graph information: let hnk ∈ R1×dhn denote a given representation for nk, heij ∈ R1×dhe denote a representation for eij , then the m-th head attention αmij from node j to node i (i 6= j) is given by αmij = softmax j ( ⋃ k: aik=1 { σ([hniWn,m ‖ heikWe,m ‖ hnkWn,m] · attmT ) }) (1) where softmax j is the softmax score of node j; ‖ is column concatenation; σ is some non-linear activation;Wn,m ∈ Rdhn×dwn ,We,m ∈ Rdhe×dwe are them-th head weight matrices for nodes and edges respectively; attm ∈ R1×(2dwn+dwe ) is the m-th head attention weight. The representations after a feed-forward operation are consequently given as follow: h′ni = aggr1≤m≤nm σ ∑ j: aij=1 αmij · hnj + hni Wn,m (2) h′eij = aggr1≤m≤nm { σ ([ hniWn,m ‖ heijWe,m ‖ hnjWn,m ] ·Wh,m )} (3) where Wh,m ∈ R(2dwn+dwe )×dwe ; nm is the total number of attention heads and aggr denotes an aggregation method, most commonly mean , sum , or concat (Hamilton et al., 2017). We note that we have not found significant difference across these methods and have used mean for all aggregations in our experiments. In principle, a single-head operation on nodes is essentially graph convolution with the adjacency matrix  = à + I where à is attention-regularized according to (1). This approach sufficiently embeds edge attributes while still being a node-centric convolution mechanism, for which efficient frameworks like Pytorch-Geometric (Fey & Lenssen, 2019) have been well established. 2.2.2 SPATIAL CONVOLUTION In addition to attributions and logical adjacency, one might also wish to exploit the spatial structure of an graph object. In the case of molecular docking, spatial structure informs the molecular volume and the spatial distribution of interaction sites — shape and chemical complementarity to the receptor binding site is essential for an effective association. Let G = ( dij −1) i,j≤n be the inverse distance matrix where dij is the Euclidean distance between node i and j for ∀i 6= j, and dii−1 := 0. G can then be seen as an adjacency matrix with weighted “edge”s indicating nodes’ spatial relations, and the forward propagation is thus given by H ′′n = σ (( D̃− 1 2 G̃D̃− 1 2 + I ) HnWn ) (4) where G̃ is optionally sparsified and attention-regularized from G to be described below; D̃ = diag1≤i≤n {∑n j=1 G̃ij } ; Hn is the row concatenation of {hni}1≤i≤n; Wn ∈ Rdhn×dwn is the weight matrix. In reality,G inducesO(n) of convolution operations on each node and can drastically increase training time when the number of nodes is high. Therefore, one might want to derive G̃ by enforcing a cut-off around each node’s neighborhood (Pei et al., 2020), or preserving an O(n) number of largest entries in G and dropping out the rest. In our case, although the average number of nodes is low enough for the gather and scatter operations (GS) of Pytorch-Geometric to experience no noticeable difference in runtime as node degrees scale up (Fey & Lenssen, 2019), the latter approach of sparsification was still carried out because we have discovered that proper cutoffs improved the validation loss in our supervised learning experiments. If one perceives the relations between chemical properties and spatial information as more abstract, G should be regularized by attention as described in (1), in which case the spatial convolution is principally fully-connected graph attention with the Euclidean distance as a one-dimensional edge attribution. 
2.3 GRAPH ATTENTION POLICY NETWORK In this section we introduce Graph Attention Policy Network (GAPN) that is tailored to environments that possess a dynamic range of actions. Note that ρ(·|st, at) is a degenerate distribution for deterministic transition dynamics and the future trajectory τ ∼ p(st+1, st+2, . . . |st) is strictly equal in distribution to a ∼ π(at, at+1, . . . |st), hence simplified as the latter in the following sections. To learn the policy more efficiently, we let st and vt share a few mutual embedding layers, and provided option to pre-train the first ng layers with supervised learning. Layers inherited from pretraining are not updated during the training of RL. See Figure 2 for an overview of the architecture. 2.3.1 ACTION SELECTION At each time step t, we sample the next state st+1 from a categorical distribution constructed by applying a retrieval-system-inspired attention mechanism (Vaswani et al., 2017): st+1 ∼ OHC softmax ⋃ g∈gt+1 {Lfinal(EQ(gt) ‖ EK(g)} · vt+1 (5) where OHC{p1, . . . , pnv} is a one-hot categorical distribution with nv categories; gt, gt+1 are the embeddings for st and vt+1 acquired by the shared encoder; EQ, EK are two sGAT+MLP graph encoders with output feature dimension dk; Lfinal : Rb×2dk → Rb is the final feed-forward layer. Essentially, each candidate state is predicted a probability based on its ‘attention’ to the query state. The next state is then sampled categorically according to these probabilities. There could be a number of ways to determine stopping time T . For instance, an intuitive approach would be to append st to vt+1 and terminate the process if st is selected as st+1. In our experiments, we simply pick T to be constant, i.e. we perform a fixed number of modifications for an input. This design encourages the process to not take meaningless long routes or get stuck in a cycle, and enables episodic docking evaluations in parallelization (further described in Section 2.5). Note that constant trajectory length is feasible because the maximum limit of time steps can be set significantly lower for fragment-based action space compared to node-by-node and edge-by-edge action spaces. 2.3.2 ACTOR-CRITIC ALGORITHM For the purpose of obeying causal logic and reducing variance, the advantage on discounted rewardto-go are predominantly used instead of raw rewards in policy iterations. The Q-function and advantage function are expressed as Qπ(st, at) = Eπ [ T∑ t′=t γt ′−t · r(st′ , at′) ∣∣∣∣∣st, at ] (6) Aπ(st, at) = Q π(st, at)− Eπ [Qπ(st, at)|st] (7) where γ is the rate of time discount. The Advantage Actor-Critic (A2C) algorithm approximates Eπ [Qπ(st, at)|st] with a value network Vζ(st) and Qπ(st, at) with r(st, at) + γVζ(st+1). For a more detailed description of actor-critic algorithm in RL, see Grondman et al. (2012). 2.3.3 PROXIMAL POLICY OPTIMIZATION We use Proximal Policy Optimization (PPO) (Schulman et al., 2017), a state-of-the-art policy gradient technique, to train our network. PPO holds a leash on policy updates whose necessity is elaborated in trust region policy optimization (TRPO) (Schulman et al., 2015), yet much simplified. It also enables multiple epochs of minibatch updates within one iteration. The objective function is given as follow: J∗(θ) = max θ ED,πoldθ [ T∑ t=1 min { rt(θ)A πoldθ (st, at), clip (rt(θ))A πoldθ (st, at) }] (8) where rt(θ) = πnewθ (at ∣∣st)/πoldθ (at∣∣st) , clip (x) = min {max {1− , x} , 1 + } and s0 ∼ D. 
During policy iterations, πnew is updated each epoch and πold is cloned from πnew each iteration. 2.4 EXPLORATION WITH RANDOM NETWORK DISTILLATION We seek to employ a simple and efficient exploration technique that can be naturally incorporated into our architecture to enhance the curiosity of our policy. We perform Random Network Distillation (RND) (Burda et al., 2018) on graphs or pre-trained feature graphs to fulfill this need. Two random functions f̂ψ, f∗ that map input graphs to feature vectors in Rdr are initialized with neural networks, and f̂ψ is trained to match the output of f∗: ψ∗ = arg min ψ Es′∼p̂next‖f̂ψ(s′)− f∗(s′)‖ (9) where p̂next is the empirical distribution of all the previously selected next states, i.e. the states that have been explored. We record the running errors in a buffer and construct the surrogate innovation reward as: ri(s ′) = clipη (( ‖f̂ψ(s′)− f∗(s′)‖ −mb )/√ vb ) (10) where mb and vb are the first and second central moment inferred from the running buffer, clipη(x) = min {max {−η, x} , η}. 2.5 PARALLELIZATION AND SYNCHRONIZED EVALUATION Interacting with the environment and obtaining rewards through external software programs are the two major performance bottlenecks in ours as well as RL in general. An advantage of our environment settings, as stated in Section 2.3.1, is that a constant trajectory length is feasible. Moreover, the costs for environmental interactions are about the same for different input states. To take advantage of this, we parallelize environments on CPU subprocesses and execute batched operations on one GPU process, which enables synchronized and sparse docking evaluations that reduces the number of calls to the docking program. For future experiments where such conditions might be unrealistic, we also provided options for asynchronous Parallel-GPU and Parallel-CPU samplers (described in Stooke & Abbeel (2019)) in addition to the Parallel-GPU sampler used in our experiments. 3 EXPERIMENTS 3.1 SETUP Objectives We evaluated our model against five state-of-the-art models (detailed in Section 4) with the objective of discovering novel inhibitors targeting SARS-CoV-2 NSP15. Molecular docking scores are computed by docking programs that use the three-dimensional structure of the protein to predict the most stable bound conformations of the molecules of interest, targeting a pre-defined functional site. For more details on molecular docking and our GPU implementation of an automated docking tool used in the experiments, see Appendix B.2. In addition, we evaluated our model in the context of optimizing QED and penalized LogP values, two tasks commonly presented in machine learning literature for molecular design. The results for this can be found in Appendix D. Dataset For the models/settings that do require a dataset, we used a set of SMILES IDs taken from more than six million compounds from the MCULE molecular library — a publicly available dataset of purchasable molecules (Kiss et al., 2012), and their docking scores for the NSP15 target. 3.2 RESULTS 3.2.1 SINGLE-OBJECTIVE OPTIMIZATION The raw docking score is a negative value that represents higher estimated binding affinity when the score is lower. We use the negative docking score as the main reward rm and assign it to the final state sT as the single objective. 
For DGAPN, we also assign an innovation reward to each intermediate state, and the total raw reward for a trajectory τ is given by

$$r(\tau) = r_m(s_T) + \iota \cdot \sum_{t=1}^{T} r_i(s_t) \tag{11}$$

where ι is the relative importance of innovation rewards, for which we chose 0.1 and incorporated them with a 100-episode delay and a 1,000-episode cutoff. Detailed hyperparameter settings for DGAPN can be found in Appendix C.

We sampled 1,000 molecules from each method and show the evaluation results in Table 1. We note that we have a separate approach to evaluate our model that is able to achieve a −7.73 mean and −10.38 best docking score (see the Ablation Study paragraph below), but here we only evaluated the latest molecules found in training in order to maintain consistency with the manner in which GCPN and MolDQN are evaluated. In the result table, ordinary validity is checked by examining atoms' valency and the consistency of bonds in aromatic rings. In addition, we propose adjusted validity, which further deems molecules that fail conformer generation (Riniker & Landrum, 2015) invalid, on top of the ordinary validity criteria. This is required for docking evaluation, and molecules that fail this check are assigned a docking score of 0. We also provide additional summary metrics to help gain perspective on the generated molecules: Uniq. and Div. are the uniqueness and diversity (Polykovskiy et al., 2020); QED (Bickerton et al., 2012) is an indicator of drug-likeness; SA (Ertl & Schuffenhauer, 2009) is the synthetic accessibility. QED is better when the score is higher, and SA is better when lower. Definitions of QED and SA can be found in Appendix E.

On this task, DGAPN significantly outperformed state-of-the-art models in terms of top scores and average score, obtaining high statistical significance over the second best model (MolDQN) with a p-value of 8.55 × 10^{-209} under Welch's t-test (Welch, 1947). As anticipated, the molecules generated by fragment-based algorithms (JTVAE, MARS and DGAPN) have significantly better SAs. Yet we note that the additional summary metrics are not of particular interest in single-objective optimization, and obtaining good summary metrics does not always indicate useful results. For example, during model tuning, we found that worse convergence often tends to result in better diversity scores. There also seems to be a trade-off between docking score and QED, which we further examine in Section 3.2.3.

Ablation study We performed ablation studies to examine the efficacy of each component of our model. Firstly, we segregated spatial graph attention from the RL framework and examined its effect solely in a supervised learning setting with the NSP15 dataset. The loss curves are shown in Figure 3, in which spatial convolution exhibited a strong impact on molecular graph representation learning. Secondly, we ran single-objective optimization with (DGAPN) and without (GAPN) innovation rewards, and thirdly, we compared the results from DGAPN in evaluation against a greedy algorithm with only the CReM environment. These results are shown in Table 2. Note that it is not exactly fair to compare the greedy algorithm to other approaches, since it has access to more information (docking reward for each intermediate candidate) when making decisions, yet our model still managed to outperform it in evaluation mode (see Appendix C for more information).
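As an aside, the Welch's t-test quoted above is straightforward to reproduce once the two score samples are available (a sketch with placeholder score arrays standing in for the 1,000 docking scores per method; the means below are illustrative, not the actual experimental values):

```python
import numpy as np
from scipy import stats

# Placeholder samples standing in for the 1,000 docking scores per method.
scores_dgapn = np.random.normal(loc=-7.7, scale=0.8, size=1000)
scores_moldqn = np.random.normal(loc=-6.5, scale=0.8, size=1000)

# equal_var=False selects Welch's t-test, which does not assume equal variances.
t, p = stats.ttest_ind(scores_dgapn, scores_moldqn, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2e}")
```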
From the results generated by the greedy approach, we can see that the environment and the stochasticity design of the action space alone are powerful for the efficacy and exploration of our policies. While the innovation bonus helped discover molecules with better docking scores, it also worsened SA. We further investigate this docking score vs. SA trade-off in Section 3.2.3. To see samples of molecules generated by DGAPN in evaluation, visit our repository†.

†https://github.com/yulun-rayn/DGAPN

3.2.2 CONSTRAINED OPTIMIZATION

The goal of constrained optimization is to find molecules that have a large improvement over a given molecule from the dataset while maintaining a certain level of similarity:

$$r_{m'}(s_T) = r_m(s_T) - \lambda \cdot \max\{0,\ \delta - \mathrm{SIM}\{s_0, s_T\}\} \tag{12}$$

where λ is a scaling coefficient, for which we chose 100, and SIM{·, ·} is the Tanimoto similarity between Morgan fingerprints. We used a subset of 100 molecules from our dataset as the starting molecules, chose the two most recent and best performing benchmark models in single-objective optimization to compete against, and evaluated 100 molecules generated by each of theirs and ours. The results are shown in Table 3. From the results, it seems that MARS is not capable of performing optimization with a similarity constraint. Compared to MolDQN, DGAPN gave better improvements across all levels of δ, although MolDQN was able to produce molecules with more stable similarity scores.

3.2.3 MULTI-OBJECTIVE OPTIMIZATION

We investigate the balance between the main objective and realism by performing multi-objective optimization, and thus provide another approach to generating useful molecules in practice. We weight r_m with two additional metrics — QED and SA, yielding the new main reward as

$$r_{m'}(s_T) = \omega \cdot r_m(s_T) + (1 - \omega) \cdot \mu \cdot \left[\mathrm{QED}(s_T) + \mathrm{SA}^{*}(s_T)\right] \tag{13}$$

where SA*(s_T) = (10 − SA(s_T))/9 is a rescaling of SA such that it ranges from 0 to 1 with larger values preferred, and μ is a scaling coefficient, for which we chose 8. The results obtained by DGAPN under different settings of ω are shown in Figure 4. With ω = 0.6, DGAPN is able to generate molecules with better average QED (0.72) and SA (2.20) than those of the best model (JTVAE) on these two metrics in Table 1, while still maintaining a mean docking score (−5.69) better than all benchmark models in single-objective optimization.

A trade-off between docking reward and QED/SA was identified. We acknowledge that optimizing docking alone does not guarantee finding practically useful molecules, but our goal is to generate promising chemicals with room for rational hit optimization. We also note that commonly used alternative main objectives such as pLogP and QED are themselves unreliable or undiscerning, as discussed in Appendix D. Hence, for methodological study purposes, we believe that molecular docking provides a more useful and realistic test bed for algorithm development.

4 RELATED WORK

The REINVENT (Olivecrona et al., 2017) architecture consists of two recurrent neural networks (RNNs) that generate molecules as tokenized SMILES strings. The "Prior network" is trained with maximum likelihood estimation on a set of canonical SMILES strings, while the "Agent network" is trained with policy gradient and rewarded using a combination of task scores and Prior network estimations.
The Junction Tree Variational Autoencoder (JTVAE; Jin et al., 2018) trains two encoder/decoder networks to build a fixed-dimension latent-space representation of molecules, where one network captures the junction tree structure of molecules and the other is responsible for fine-grained connectivity. Novel molecules with desired properties are then generated using Bayesian optimization on the latent space. The Graph Convolutional Policy Network (GCPN; You et al., 2018a) is a policy gradient RL architecture for de novo molecular generation. The network defines domain-specific modifications on molecular graphs so that chemical validity is maintained at each episode. Additionally, the model optimizes for realism with adversarial training and expert pre-training using trajectories generated from known molecules in the ZINC library. Molecule Deep Q-Networks (MolDQN; Zhou et al., 2019) is a Q-learning model that uses Morgan fingerprints as representations of molecules. To achieve molecular validity, chemical modifications are directly defined for each episode. To enhance exploration of chemical space, MolDQN learns H independent Q-functions, each of which is trained on separate sub-samples of the training data. Markov Molecular Sampling (MARS; Xie et al., 2021) generates molecules by employing an iterative method of editing fragments within a molecular graph, producing high-quality candidates through Markov chain Monte Carlo (MCMC) sampling. MARS then uses the MCMC samples to train a GNN to represent and select candidate edits, further improving sampling efficiency.

5 CONCLUSIONS

In this work, we introduced a spatial graph attention mechanism and a curiosity-driven policy network to discover novel molecules optimized for targeted objectives. We identified candidate antiviral compounds designed to inhibit the SARS-CoV-2 protein NSP15, leveraging extensive molecular docking simulations. Our framework advances state-of-the-art algorithms in the optimization of molecules with antiviral potential, as measured by molecular docking scores, while maintaining reasonable synthetic accessibility. We note that a valuable extension of our work would be to focus on lead optimization — the refinement of molecules already known to bind the protein of interest through position-constrained modification. Such knowledge-based and iterative refinements may help to work around limitations in the accuracy of molecular docking predictions.

ACKNOWLEDGMENTS

This work was funded via the DOE Office of Science through the National Virtual Biotechnology Laboratory (NVBL), a consortium of DOE national laboratories focused on the response to COVID-19, with funding provided by the Coronavirus CARES Act. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF) at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. This manuscript has been co-authored by UT-Battelle, LLC under contract no. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a nonexclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan, last accessed September 16, 2020).

A DETAILED FORMULATION OF THE PROBLEM

Our goal is to establish a set of decision rules to generate graph-structured data that maximizes compound objectives under certain constraints. Similar to prior formulations, the generating process is defined as a time-homogeneous Markov Decision Process (MDP). We give a formal definition of this process in Appendix A.1. Under this setting, the action policies and state transition dynamics at step t can be factorized according to the Markov property:

$$P(a_t \mid s_0, a_0, s_1, a_1, \ldots, s_t) = P(a_t \mid s_t) := \pi(a_t \mid s_t) \tag{14}$$

$$P(s_{t+1} \mid s_0, a_0, s_1, a_1, \ldots, s_t, a_t) = P(s_{t+1} \mid s_t, a_t) := \rho(s_{t+1} \mid s_t, a_t) \tag{15}$$

where {s_t, a_t}_t are state-action sequences. A reward function r(s, a) is used to assess an action a taken at a given state s. The process terminates at an optional stopping time T, and s_T is then proposed as the final product of the current generating cycle. We aim to estimate the optimal policy π with respect to the various objectives constructed in the experiment section.

A.1 MEASURE THEORY CONSTRUCTION OF MARKOV DECISION PROCESS

Let (S, 𝒮) and (A, 𝒜) be two measurable spaces called the state space and action space; functions Π : S × 𝒜 → R and T : S × A × 𝒮 → R are said to be a policy and a transition probability, respectively, if

1. For each s ∈ S, E ↦ Π(s, E) is a probability measure on (A, 𝒜); for each (s, a) ∈ S × A, F ↦ T(s, a, F) is a probability measure on (S, 𝒮).

2. For each E ∈ 𝒜, s ↦ Π(s, E) is a measurable function from (S, 𝒮) → (R, B); for each F ∈ 𝒮, (s, a) ↦ T(s, a, F) is a measurable function from (S × A, 𝒮 ⊗ 𝒜) → (R, B).

We say a sequence of random variable pairs (S_t, A_t) defined on the two measurable spaces is a Markov decision chain if

$$P(A_t \in E \mid \sigma(S_0, A_0, S_1, A_1, \ldots, S_t)) = \Pi(S_t, E) \tag{16}$$

$$P(S_{t+1} \in F \mid \sigma(S_0, A_0, S_1, A_1, \ldots, S_t, A_t)) = T(S_t, A_t, F) \tag{17}$$

A function r : S × 𝒜 → R is said to be the reward function w.r.t. the Markov decision chain if r(s_t, E_t) = E_{Π,T}[R(s_{t+1}) | S_t = s_t, A_t ∈ E_t], where R : S → R is its underlying reward function. With an abuse of notation, we define π(a|s) := Π(s, {a}), ρ(s'|s, a) := T(s, a, {s'}) and let r(s, a) denote r(s, {a}).

B LEARNING ENVIRONMENT AND REWARD EVALUATION

B.1 ENVIRONMENT - CREM

Chemically Reasonable Mutations (CReM) is an open-source fragment-based framework for chemical structure modification. The use of libraries of chemical fragments allows for direct control of the chemical validity of molecular substructures and for consideration of the chemical context of coupled fragments (e.g., resonance effects). Compared to atom-based approaches, CReM explores less of chemical space but guarantees chemical validity for each modification, because only fragments that are in the same chemical context are interchangeable. Compared to reaction-based frameworks, CReM enables a larger exploration of chemical space but may explore chemical modifications that are less synthetically feasible. Fragments are generated from the ChEMBL database (Gaulton et al., 2012), and for each fragment, the chemical context is encoded for several context radius sizes in a SMILES string and stored along with the fragment in a separate database.
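To illustrate how such a database is queried in practice, the sketch below enumerates candidate states with the open-source crem package (a sketch assuming its documented grow_mol/mutate_mol interface; 'replacements.db' is a placeholder path to a pre-built fragment-context database such as one generated from ChEMBL):

```python
from rdkit import Chem
from crem.crem import grow_mol, mutate_mol

# Placeholder path to a pre-built CReM fragment-context database.
db = 'replacements.db'

# Warm start from a single carbon when no dataset is provided.
seed = Chem.MolFromSmiles('C')
initial_choices = list(grow_mol(seed, db_name=db))

# Enumerate one-step fragment replacements of a current state
# with the default context radius of 3.
state = Chem.MolFromSmiles(initial_choices[0])
candidates = list(mutate_mol(state, db_name=db, radius=3))
```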
For each query molecule, mutations are enumerated by matching the context of its fragments with those that are found in the CReM fragment-context database (Polishchuk, 2020). In this work, we use the grow function on a single carbon to generate initial choices if a warm-start dataset is not provided, and the mutate function, with the default context radius size of 3, to enumerate possible replacements.

B.2 EVALUATION - AUTODOCK-GPU

Docking programs use the three-dimensional structure of the protein (i.e., the receptor) to predict the most stable bound conformations of the small molecules (i.e., its putative ligands) of interest, often targeting a pre-defined functional site, such as the catalytic site. An optimization algorithm paired with a scoring function is employed to find the ligand conformations that likely correspond to binding free energy minima. The scoring function is conformation-dependent and typically comprises physics-based empirical or semi-empirical potentials that describe pair-wise atomic terms, such as dispersion, hydrogen bonding, electrostatics, and desolvation (Huang et al., 2010; Huey et al., 2007). AutoDock is a computational docking program that uses a Lamarckian genetic algorithm to predict native-like conformations of protein-ligand complexes and a semi-empirical scoring function to estimate the corresponding binding affinities. Lower values of docking scores indicate stronger predicted interactions. The negative of the lowest estimated binding affinity energy obtained for each molecule forms the reward.

AutoDock-GPU (Santos-Martins et al., 2021) is an extension of AutoDock that leverages the highly parallel architecture of GPUs. Within AutoDock-GPU, ADADELTA (Zeiler, 2012), a gradient-based method, is used for local refinement. The structural information of the receptor (here, the NSP15 protein) used by AutoDock-GPU is processed prior to running the framework. In this preparatory step, AutoDockTools (Morris et al., 2009b) was used to define the search space for docking on NSP15 (PDB ID 6W01; Figure 5) and to generate the PDBQT file of the receptor, which contains atomic coordinates, partial charges, and AutoDock atom types. AutoGrid4 (Morris et al., 2009a) was used to pre-calculate grid maps of interaction energy at the binding site for the different atom types defined in CReM.

In evaluation, after applying an initial filter within RDKit to check whether a given SMILES is chemically valid (i.e., hybridization, ring membership, etc.), a 3D conformer of the molecule is generated using AllChem.EmbedMolecule. SMILES that do not correspond to valid compounds are discarded. Next, the molecular geometry is energy-minimized within RDKit using the generalized force field MMFF94. The resulting conformer is used as input for molecular docking via AutoDock-GPU. We also excluded from the final result set any molecules that were both fully rigid and larger than the search box in the receptor. This only occurred for two molecules from the JTVAE evaluation.

C HYPERPARAMETER SETTINGS FOR SINGLE-OBJECTIVE OPTIMIZATION

Based on a parameter sweep, we set the number of GNN layers to 3 and the number of MLP layers to 3, with 3 of the GNN layers and 0 of the MLP layers shared between query and key. The number of layers in RND is set to 1; all numbers of hidden neurons are 256; the learning rates are 2 × 10^{-3} for the actor, 1 × 10^{-4} for the critic, and 2 × 10^{-3} for RND; the number of update time steps (i.e. batch size) is 300. The number of epochs per iteration and the clipping parameter for PPO are 30 and 0.1.
The output dimension and clipping parameter η for RND are 8 and 5. In evaluation mode, we use an arg max policy instead of a sampling policy, expand the number of candidates per step from 15-20 to 128, and expand the maximum number of time steps per episode from 12 to 20 compared to training. For more details regarding hyperparameter settings, see our codebase at https://github.com/yulun-rayn/DGAPN.

D MORE RESULTS ON QED AND PENALIZED LOGP

Although QED and penalized LogP are the most popular objectives for benchmarking ML algorithms for molecule generation, these benchmarks are questionable for both scientific study and practical use, as Xie et al. (2021) pointed out. Most methods can obtain QED scores close or equal to the highest possible value of 0.948, making it hard for the metric to distinguish different methods. As for pLogP, if we simply construct a large molecule with no ring, such as the molecule from the SMILES 'CCCCC...CCCCC' (139 carbons), it will give us a pLogP score of 50.31, which beats all state-of-the-art models in Table 4. Needless to say, an even higher pLogP can be achieved by continuously adding carbons, which was exactly how REINVENT performed in our experiment. We note that we were able to raise our results to around 18 solely by doubling the maximum time steps per episode reported in Appendix C, yet we were not interested in pushing the performance on this somewhat meaningless metric by continuously increasing one hyperparameter. The results for REINVENT were produced in our own experiments, while the others were taken directly from the original results reported in the literature.

E DEFINITIONS OF QED AND SA

E.1 QUANTITATIVE ESTIMATE OF DRUGLIKENESS (QED)

QED is defined as

$$\mathrm{QED} = \exp\left(\frac{1}{n}\sum_{i=1}^{n} \ln d_i\right),$$

where the d_i are eight widely used molecular properties. Specifically, they are molecular weight (MW), octanol-water partition coefficient (ALOGP), number of hydrogen bond donors (HBD), number of hydrogen bond acceptors (HBA), molecular polar surface area (PSA), number of rotatable bonds (ROTB), number of aromatic rings (AROM), and number of structural alerts. Each d_i is given by

$$d_i(x) = a_i + \frac{b_i}{1 + \exp\left(-\frac{x - c_i + d_i/2}{e_i}\right)} \cdot \left[1 - \frac{1}{1 + \exp\left(-\frac{x - c_i + d_i/2}{f_i}\right)}\right],$$

where each of the constants a_i, . . . , f_i is given by a supplementary table in Bickerton et al. (2012).

E.2 SYNTHETIC ACCESSIBILITY (SA)

SA is defined as

$$\mathrm{SA} = \mathrm{fragmentScore} - \mathrm{complexityPenalty}.$$

The fragment score is calculated as a sum of contributions from fragments of 934,046 already-synthesized chemicals in PubChem. The complexity penalty is computed from a combination of the ringComplexityScore, stereoComplexityScore, macroCyclePenalty, and sizePenalty:

$$\mathrm{ringComplexityScore} = \log(\mathrm{nRingBridgeAtoms} + 1) + \log(\mathrm{nSpiroAtoms} + 1)$$
$$\mathrm{stereoComplexityScore} = \log(\mathrm{nStereoCenters} + 1)$$
$$\mathrm{macroCyclePenalty} = \log(\mathrm{nMacroCycles} + 1)$$
$$\mathrm{sizePenalty} = \mathrm{nAtoms}^{1.005} - \mathrm{nAtoms}$$
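Both metrics can be computed off the shelf with RDKit (a sketch; the SA scorer ships as an RDKit contrib module, loaded here via the commonly documented path-append idiom):

```python
import os, sys
from rdkit import Chem, RDConfig
from rdkit.Chem import QED

# sascorer lives in RDKit's contrib directory rather than the core package.
sys.path.append(os.path.join(RDConfig.RDContribDir, 'SA_Score'))
import sascorer

mol = Chem.MolFromSmiles('CC(=O)Oc1ccccc1C(=O)O')  # aspirin, as an example
print('QED:', QED.qed(mol))                  # in [0, 1]; higher is better
print('SA :', sascorer.calculateScore(mol))  # roughly 1 (easy) to 10 (hard)
```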
1. What is the focus and contribution of the paper on molecule generation using graph attention networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its extension to edge features and the use of an innovation reward?
3. Do you have any concerns regarding the chemically reasonable mutations sampled by the RL algorithm or the low QED score for DGAPN?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, especially regarding the mathematical notation and figure illustrations?
5. Are there any suggestions for improving the paper, such as including more proteins in the experiments or showing docking poses of generated compounds?
Summary Of The Paper

The authors propose a new fragment-based method for molecule generation. The model, called DGAPN, uses chemical fragments extracted from a public compound library together with their chemical context (the atom neighborhoods to which these fragments are attached). This way, all modifications made by the model should, with high probability, produce synthetically accessible compounds. Additionally, the authors introduce a new class of graph neural networks for predicting chemical properties that employs an attention mechanism over atoms and chemical bonds. These models are used to guide the generative MDP trained with reinforcement learning. Inhibition of a SARS-CoV-2 antiviral target, NSP15, is used as an example task for the proposed model. The experimental section shows the results of single- and multi-objective compound generation, in which DGAPN obtains better docking scores for the generated molecules than other methods.

Review

Pros:
- The authors extend graph attention networks to edge features and notice that spatial information is crucial in activity prediction. The introduced sGAT model is a novel contribution. It would be interesting to see an ablation study comparing this model to simpler graph neural networks, e.g. to the closely related EGAT architecture [1].
- Chemically reasonable mutations (CReM) are sampled by the RL algorithm to ensure that the generated structures are realistic.
- An innovation reward, based on random network distillation, is added to encourage exploration.
- The ablation study shows the usefulness of spatial convolution for the supervised learning task.
- The results of the pretraining are shown in the ablation study even though they do not improve the generated compounds in terms of the main objective (an interesting negative result).

Cons:
- The authors state that "fragment-by-fragment sequential generation [...] has not been utilized in a deep graph RL framework", but I am not sure it is true. For example, Ståhl et al. [2] use deep reinforcement learning on fragments to optimize molecules.
- Recently, more and more publications focus on optimizing molecular docking scores using de novo drug design models, and I think this topic should be briefly summarized in the related work. For example, see [3].
- In Figure 1, the meaning of lines, symbols, and colors is unclear. Even after reading Equations 3-6, the drawing is difficult to process.
- The mathematical notation is difficult to follow in some parts of the methods section. For example, what is the purpose of the OHC operator? Is it there only to sample from the softmax distribution and map the decision onto the set of valid molecules?
- The QED score is very low for DGAPN. Can it be caused by generating big compounds that exceed the typical molecular weight of drug-like compounds? The compounds are generated by adding whole chemical fragments, which can cause fast growth of the molecules. If that is the case, then these structures will be difficult to optimize (they cannot easily be used as hits, which was suggested by the authors). This defeats the purpose of the generative model. The generated structures should be shown, at least in the supplementary material.
- In this paper, only one antiviral target is considered. The results would be more convincing if there were more proteins in the experiments.
- The rewards are based on docking scores, so it would be interesting to see the docking poses of generated compounds,
especially for large compounds like those shown in Figure 4, which probably barely fit in the binding pocket.

Other comments:
- Why is the second column for DGAPN bolded in Figure 1?
- In Section 5, there is a citation that failed to compile: "MARS, xie2021mars".
- In Appendix D.3, the notation p_w is used two times - should it not be just p for the generated molecules?

[1] Chen, Jun, and Haopeng Chen. "Edge-Featured Graph Attention Network." arXiv preprint arXiv:2101.07671 (2021).
[2] Ståhl, Niclas, et al. "Deep reinforcement learning for multiparameter optimization in de novo drug design." Journal of Chemical Information and Modeling 59.7 (2019): 3166-3176.
[3] Thomas, Morgan, et al. "Comparison of structure- and ligand-based scoring functions for deep generative models: a GPCR case study." Journal of Cheminformatics 13.1 (2021): 1-20.
2.3 GRAPH ATTENTION POLICY NETWORK In this section we introduce Graph Attention Policy Network (GAPN) that is tailored to environments that possess a dynamic range of actions. Note that ρ(·|st, at) is a degenerate distribution for deterministic transition dynamics and the future trajectory τ ∼ p(st+1, st+2, . . . |st) is strictly equal in distribution to a ∼ π(at, at+1, . . . |st), hence simplified as the latter in the following sections. To learn the policy more efficiently, we let st and vt share a few mutual embedding layers, and provided option to pre-train the first ng layers with supervised learning. Layers inherited from pretraining are not updated during the training of RL. See Figure 2 for an overview of the architecture. 2.3.1 ACTION SELECTION At each time step t, we sample the next state st+1 from a categorical distribution constructed by applying a retrieval-system-inspired attention mechanism (Vaswani et al., 2017): st+1 ∼ OHC softmax ⋃ g∈gt+1 {Lfinal(EQ(gt) ‖ EK(g)} · vt+1 (5) where OHC{p1, . . . , pnv} is a one-hot categorical distribution with nv categories; gt, gt+1 are the embeddings for st and vt+1 acquired by the shared encoder; EQ, EK are two sGAT+MLP graph encoders with output feature dimension dk; Lfinal : Rb×2dk → Rb is the final feed-forward layer. Essentially, each candidate state is predicted a probability based on its ‘attention’ to the query state. The next state is then sampled categorically according to these probabilities. There could be a number of ways to determine stopping time T . For instance, an intuitive approach would be to append st to vt+1 and terminate the process if st is selected as st+1. In our experiments, we simply pick T to be constant, i.e. we perform a fixed number of modifications for an input. This design encourages the process to not take meaningless long routes or get stuck in a cycle, and enables episodic docking evaluations in parallelization (further described in Section 2.5). Note that constant trajectory length is feasible because the maximum limit of time steps can be set significantly lower for fragment-based action space compared to node-by-node and edge-by-edge action spaces. 2.3.2 ACTOR-CRITIC ALGORITHM For the purpose of obeying causal logic and reducing variance, the advantage on discounted rewardto-go are predominantly used instead of raw rewards in policy iterations. The Q-function and advantage function are expressed as Qπ(st, at) = Eπ [ T∑ t′=t γt ′−t · r(st′ , at′) ∣∣∣∣∣st, at ] (6) Aπ(st, at) = Q π(st, at)− Eπ [Qπ(st, at)|st] (7) where γ is the rate of time discount. The Advantage Actor-Critic (A2C) algorithm approximates Eπ [Qπ(st, at)|st] with a value network Vζ(st) and Qπ(st, at) with r(st, at) + γVζ(st+1). For a more detailed description of actor-critic algorithm in RL, see Grondman et al. (2012). 2.3.3 PROXIMAL POLICY OPTIMIZATION We use Proximal Policy Optimization (PPO) (Schulman et al., 2017), a state-of-the-art policy gradient technique, to train our network. PPO holds a leash on policy updates whose necessity is elaborated in trust region policy optimization (TRPO) (Schulman et al., 2015), yet much simplified. It also enables multiple epochs of minibatch updates within one iteration. The objective function is given as follow: J∗(θ) = max θ ED,πoldθ [ T∑ t=1 min { rt(θ)A πoldθ (st, at), clip (rt(θ))A πoldθ (st, at) }] (8) where rt(θ) = πnewθ (at ∣∣st)/πoldθ (at∣∣st) , clip (x) = min {max {1− , x} , 1 + } and s0 ∼ D. 
During policy iterations, πnew is updated each epoch and πold is cloned from πnew each iteration. 2.4 EXPLORATION WITH RANDOM NETWORK DISTILLATION We seek to employ a simple and efficient exploration technique that can be naturally incorporated into our architecture to enhance the curiosity of our policy. We perform Random Network Distillation (RND) (Burda et al., 2018) on graphs or pre-trained feature graphs to fulfill this need. Two random functions f̂ψ, f∗ that map input graphs to feature vectors in Rdr are initialized with neural networks, and f̂ψ is trained to match the output of f∗: ψ∗ = arg min ψ Es′∼p̂next‖f̂ψ(s′)− f∗(s′)‖ (9) where p̂next is the empirical distribution of all the previously selected next states, i.e. the states that have been explored. We record the running errors in a buffer and construct the surrogate innovation reward as: ri(s ′) = clipη (( ‖f̂ψ(s′)− f∗(s′)‖ −mb )/√ vb ) (10) where mb and vb are the first and second central moment inferred from the running buffer, clipη(x) = min {max {−η, x} , η}. 2.5 PARALLELIZATION AND SYNCHRONIZED EVALUATION Interacting with the environment and obtaining rewards through external software programs are the two major performance bottlenecks in ours as well as RL in general. An advantage of our environment settings, as stated in Section 2.3.1, is that a constant trajectory length is feasible. Moreover, the costs for environmental interactions are about the same for different input states. To take advantage of this, we parallelize environments on CPU subprocesses and execute batched operations on one GPU process, which enables synchronized and sparse docking evaluations that reduces the number of calls to the docking program. For future experiments where such conditions might be unrealistic, we also provided options for asynchronous Parallel-GPU and Parallel-CPU samplers (described in Stooke & Abbeel (2019)) in addition to the Parallel-GPU sampler used in our experiments. 3 EXPERIMENTS 3.1 SETUP Objectives We evaluated our model against five state-of-the-art models (detailed in Section 4) with the objective of discovering novel inhibitors targeting SARS-CoV-2 NSP15. Molecular docking scores are computed by docking programs that use the three-dimensional structure of the protein to predict the most stable bound conformations of the molecules of interest, targeting a pre-defined functional site. For more details on molecular docking and our GPU implementation of an automated docking tool used in the experiments, see Appendix B.2. In addition, we evaluated our model in the context of optimizing QED and penalized LogP values, two tasks commonly presented in machine learning literature for molecular design. The results for this can be found in Appendix D. Dataset For the models/settings that do require a dataset, we used a set of SMILES IDs taken from more than six million compounds from the MCULE molecular library — a publicly available dataset of purchasable molecules (Kiss et al., 2012), and their docking scores for the NSP15 target. 3.2 RESULTS 3.2.1 SINGLE-OBJECTIVE OPTIMIZATION The raw docking score is a negative value that represents higher estimated binding affinity when the score is lower. We use the negative docking score as the main reward rm and assign it to the final state sT as the single objective. 
For DGAPN, we also assign innovation reward to each intermediate state, and the total raw reward for a trajectory τ is given by r(τ ) = rm(sT ) + ι · T∑ t=1 ri(st) (11) where ι is the relative important of innovation rewards, for which we chose 0.1 and incorporated them with a 100 episode delay and 1,000 episode cutoff. Detailed hyperparameter settings for DGAPN can be found in Appendix C. We sampled 1,000 molecules from each method and showed the evaluation results in Table 1. We note that we have a separate approach to evaluate our model that is able to achieve a −7.73 mean and −10.38 best docking score (see the Ablation Study paragraph below), but here we only evaluated the latest molecules found in training in order to maintain consistency with the manner in which GCPN and MolDQN are evaluated. In the result table, ordinary validity is checked by examining atoms’ valency and consistency of bonds in aromatic rings. In addition, we propose adjusted validity which further deems molecules that fail on conformer generation (Riniker & Landrum, 2015) invalid on top of the ordinary validity criteria. This is required for docking evaluation, and molecules that fail this check are assigned a docking score of 0. We also provide additional summary metrics to help gain perspective of the generated molecules: Uniq. and Div. are the uniqueness and diversity (Polykovskiy et al., 2020); QED (Bickerton et al., 2012) is an indicator of drug-likeness, SA (Ertl & Schuffenhauer, 2009) is the synthetic accessibility. QED is better when the score is higher and SA is better when lower. Definitions of QED and SA can be found in Appendix E. On this task, DGAPN significantly outperformed state-of-the-art models in terms of top scores and average score, obtaining a high statistical significance over the second best model (MolDQN) with a p-value of 8.55×10−209 under Welch’s t-test (Welch, 1947). As anticipated, the molecules generated by fragment-based algorithms (JTVAE, MARS and DGAPN) have significantly better SAs. Yet we note that additional summary metrics are not of particular interest in single-objective optimization, and obtaining good summary metrics does not always indicate useful results. For example, during model tuning, we found out that worse convergence often tend to result in better diversity score. There also seems to be a trade-off between docking score and QED which we further examined in Section 3.2.3. Ablation study We performed some ablation studies to examine the efficacy of each component of our model. Firstly, we segregated spatial graph attention from the RL framework and examined its effect solely in a supervised learning setting with the NSP15 dataset. The loss curves are shown in Figure 3, in which spatial convolution exhibited a strong impact on molecular graph representation learning. Secondly, we ran single-objective optimization with (DGAPN) and without (GAPN) innovation rewards, and thirdly, compared the results from DGAPN in evaluation against greedy algorithm with only the CReM environment. These results are shown in Table 2. Note that it is not exactly fair to compare greedy algorithm to other approaches since it has access to more information (docking reward for each intermediate candidate) when making decisions, yet our model still managed to outperform it in evaluation mode (see Appendix C for more information). 
From results generated by the greedy approach, we can see that the environment and the stochasticity design of action space alone are powerful for the efficacy and exploration of our policies. While the innovation bonus helped discover molecules with better docking scores, it also worsened SA. We further investigated this docking score vs. SA trade-off in Section 3.2.3. To see samples of molecules generated by DGAPN in evaluation, visit our repository†. 3.2.2 CONSTRAINED OPTIMIZATION The goal of constrained optimization is to find molecules that have large improvement over a given molecule from the dataset while maintaining a certain level of similarity: rm′(sT ) = rm(sT )− λ ·max{0, δ − SIM {s0, sT }} (12) where λ is a scaling coefficient, for which we chose 100; SIM {·, ·} is the Tanimoto similarity between Morgan fingerprints. We used a subset of 100 molecules from our dataset as the starting molecules, chose the two most recent and best performing benchmark models in single-objective optimization to compete against, and evaluated 100 molecules generated from theirs and ours. The results are shown in Table 3. From the results, it seems that MARS is not capable of performing optimizations with similarity constraint. Compared to MolDQN, DGAPN gave better improvements across all levels of δ, although MolDQN was able to produce molecules with more stable similarity scores. 3.2.3 MULTI-OBJECTIVE OPTIMIZATION We investigate the balancing between main objective and realism by performing multi-objective optimization, and thus provide another approach to generate useful molecules in practice. We weight rm with two additional metrics — QED and SA, yielding the new main reward as rm′(sT ) = ω · rm(sT ) + (1− ω) · µ · [QED(sT ) + SA∗(sT )] (13) where SA∗(sT ) = (10− SA(sT ))/9 is an adjustment of SA such that it ranges from 0 to 1 and is preferred to be larger; µ is a scaling coefficient, for which we chose 8. The results obtained by DGAPN under different settings of ω are shown in Figure 4. With ω = 0.6, DGAPN is able to generate molecules having better average QED (0.72) and SA (2.20) than that of the best model (JTVAE) in terms of these two metrics in Table 1, while still maintaining a mean docking score (−5.69) better than all benchmark models in single-objective optimization. †https://github.com/yulun-rayn/DGAPN A trade-off between docking reward and QED/SA was identified. We acknowledge that optimizing docking alone does not guarantee finding practically useful molecules, but our goal is to generate promising chemicals with room for rational hit optimization. We also note that commonly used alternative main objectives such as pLogP and QED are themselves unreliable or undiscerning as discussed in Appendix D. Hence, for methodological study purposes, we believe that molecular docking provides a more useful and realistic test bed for algorithm development. 4 RELATED WORK The REINVENT (Olivecrona et al., 2017) architecture consists of two recurrent neural network (RNN) architectures, generating molecules as tokenized SMILE strings. The “Prior network” is trained with maximum likelihood estimation on a set of canonical SMILE strings, while the “Agent network” is trained with policy gradient and rewarded using a combination of task scores and Prior network estimations. The Junction Tree Variational Autoencoder (JTVAE, Jin et al. 
(2018)) trains two encoder/decoder networks in building a fixed-dimension latent space representation of molecules, where one network captures junction tree structure of molecules and the other is responsible for fine grain connectivity. Novel molecules with desired properties are then generated using Bayesian optimization on the latent space. Graph Convolutional Policy Network (GCPN, You et al. (2018a)) is a policy gradient RL architecture for de novo molecular generation. The network defines domain-specific modifications on molecular graphs so that chemical validity is maintained at each episode. Additionally, the model optimizes for realism with adversarial training and expert pre-training using trajectories generated from known molecules in the ZINC library. Molecule Deep Q-Networks (MolDQN, Zhou et al. (2019)) is a Q-learning model using Morgan fingerprint as representations of molecules. To achieve molecular validity, chemical modifications are directly defined for each episode. To enhance exploration of chemical space, MolDQN learns H independent Q-functions, each of which is trained on separate sub-samples of the training data. Markov Molecular Sampling (MARS, Xie et al. (2021)) generates molecules by employing an iterative method of editing fragments within a molecular graph, producing high-quality candidates through Markov chain Monte Carlo sampling (MCMC). MARS then uses the MCMC samples in training a GNN to represent and select candidate edits, further improving sampling efficiency. 5 CONCLUSIONS In this work, we introduced a spatial graph attention mechanism and a curiosity-driven policy network to discover novel molecules optimized for targeted objectives. We identified candidate antiviral compounds designed to inhibit the SARS-CoV-2 protein NSP15, leveraging extensive molecular docking simulations. Our framework advances the state-of-the-art algorithms in the optimization of molecules with antiviral potential as measured by molecular docking scores, while maintaining reasonable synthetic accessibility. We note that a valuable extension of our work would be to focus on lead-optimization — the refinement of molecules already known to bind the protein of interest through position-constrained modification. Such knowledge-based and iterative refinements may help to work around limitations of the accuracy of molecular docking predictions. ACKNOWLEDGMENTS This work was funded via the DOE Office of Science through the National Virtual Biotechnology Laboratory (NVBL), a consortium of DOE national laboratories focused on the response to COVID-19, with funding provided by the Coronavirus CARES Act. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF) at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. This manuscript has been coauthored by UT-Battelle, LLC under contract no. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a nonexclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. 
The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-publicaccess-plan, last accessed September 16, 2020). A DETAILED FORMULATION OF THE PROBLEM Our goal is to establish a set of decision rules to generate graph-structured data that maximizes compound objectives under certain constraints. Similar to prior formulations, the generating process is defined as a time homogeneous Markov Decision Process (MDP). We give a formal definition of this process in Appendix A.1. Under this setting, the action policies and state transition dynamics at step t can be factorized according to the Markov property: P (at|s0, a0, s1, a1, . . . , st) = P (at|st) := π(at|st) (14) P (st+1|s0, a0, s1, a1, . . . , st, at) = P (st+1|st, at) := ρ(st+1|st, at) (15) where {st, at}t are state-action sequences. A reward function r(s, a) is used to assess an action a taken at a given state s. The process terminates at an optional stopping time T and sT is then proposed as the final product of the current generating cycle. We aim to estimate the optimal policy π in terms of various objectives to be constructed later in the experiment section. A.1 MEASURE THEORY CONSTRUCTION OF MARKOV DECISION PROCESS Let (S,S) and (A,A) be two measurable spaces called the state space and action space; functions Π : S × A → R and T : S × A × S → R are said to be a policy and a transition probability respectively if 1. For each s ∈ S, E → Π(s, E) is a probability measure on (A,A); for each (s, a) ∈ S×A, F → T (s, a, F ) is a probability measure on (S,S). 2. For each E ∈ A, s → Π(s, E) is a measurable function from (S,S) → (R,B); for each F ∈ S, (s, a)→ T (s, a, F ) is a measurable function from (S ×A,S ⊗A)→ (R,B). We say a sequence of random variable duples (St, At) defined on the two measurable spaces is a Markov decision chain if P (At ∈ E | σ(S0, A0, S1, A1, . . . , St)) = Π(St, E) (16) P (St+1 ∈ F | σ(S0, A0, S1, A1, . . . , St, At)) = T (St, At, F ) (17) A function r : S × A → R is said to be the reward function w.r.t. the Markov decision chain if r(st, Et) = EΠ,T [R(st+1) | St = st, At ∈ Et] whereR : S → R is its underlying reward function. With an abuse of notation, we define π(a|s) := Π(s, {a}), ρ(s′|s, a) := T (s, a, {s′}) and let r(s, a) denote r(s, {a}). B LEARNING ENVIRONMENT AND REWARD EVALUATION B.1 ENVIRONMENT - CREM Chemically Reasonable Mutations (CReM) is an open-source fragment-based framework for chemical structure modification. The use of libraries of chemical fragments allows for a direct control of the chemical validity of molecular substructures and to consider the chemical context of coupled fragments (e.g., resonance effects). Compared to atom-based approaches, CReM explores less of chemical space but guarantees chemical validity for each modification, because only fragments that are in the same chemical context are interchangeable. Compared to reaction-based frameworks, CReM enables a larger exploration of chemical space but may explore chemical modifications that are less synthetically feasible. Fragments are generated from the ChEMBL database (Gaulton et al., 2012) and for each fragment, the chemical context is encoded for several context radius sizes in a SMILES string and stored along with the fragment in a separate database. 
For each query molecule, mutations are enumerated by matching the context of its fragments with those found in the CReM fragment-context database (Polishchuk, 2020). In this work, we use the grow function on a single carbon to generate initial choices if a warm-start dataset is not provided, and the mutate function to enumerate possible modifications, with the default context radius size of 3 for finding replacements. B.2 EVALUATION - AUTODOCK-GPU Docking programs use the three-dimensional structure of the protein (i.e., the receptor) to predict the most stable bound conformations of the small molecules (i.e., its putative ligands) of interest, often targeting a pre-defined functional site such as the catalytic site. An optimization algorithm combined with a scoring function is employed to find the ligand conformations that likely correspond to binding free energy minima. The scoring function is conformation-dependent and typically comprises physics-based empirical or semi-empirical potentials that describe pair-wise atomic terms, such as dispersion, hydrogen bonding, electrostatics, and desolvation (Huang et al., 2010; Huey et al., 2007). AutoDock is a computational simulated docking program that uses a Lamarckian genetic algorithm to predict native-like conformations of protein-ligand complexes and a semiempirical scoring function to estimate the corresponding binding affinities. Lower values of docking scores indicate stronger predicted interactions. The negative of the lowest estimated binding affinity energy obtained for each molecule forms the reward. AutoDock-GPU (Santos-Martins et al., 2021) is an extension of AutoDock that leverages the highly parallel architecture of GPUs. Within AutoDock-GPU, ADADELTA (Zeiler, 2012), a gradient-based method, is used for local refinement. The structural information of the receptor (here, the NSP15 protein) used by AutoDock-GPU is processed prior to running the framework. In this preparatory step, AutoDockTools (Morris et al., 2009b) was used to define the search space for docking on NSP15 (PDB ID 6W01; Figure 5) and to generate the PDBQT file of the receptor, which contains atomic coordinates, partial charges, and AutoDock atom types. AutoGrid4 (Morris et al., 2009a) was used to pre-calculate grid maps of interaction energy at the binding site for the different atom types defined in CReM. In evaluation, after applying an initial filter within RDKit to check whether a given SMILES is chemically valid (i.e., hybridization, ring membership, etc.), a 3D conformer of the molecule is generated using AllChem.EmbedMolecule. SMILES that do not correspond to valid compounds are discarded. Next, the molecular geometry is energy-minimized within RDKit using the generalized force field MMFF94. The resulting conformer is used as input for molecular docking via AutoDock-GPU. We also excluded any molecules from the final result set that were both fully rigid and larger than the search box in the receptor. This only occurred in two molecules from the JTVAE evaluation. C HYPERPARAMETER SETTINGS FOR SINGLE-OBJECTIVE OPTIMIZATION Based on a parameter sweep, we set the number of GNN layers to 3 and the number of MLP layers to 3, with 3 of the GNN layers and 0 of the MLP layers shared between query and key. The number of layers in RND is set to 1; all hidden-layer widths are 256; the learning rate for the actor is 2e-3, for the critic 1e-4, and for RND 2e-3; the number of update time steps (i.e., batch size) is 300. The number of epochs per iteration and the clipping parameter for PPO are 30 and 0.1.
The output dimension and clipping parameter η for RND are 8 and 5. In evaluation mode, we use an arg max policy instead of a sampling policy, expand the number of candidates per step from 15-20 to 128, and expand the maximum time steps per episode from 12 to 20 compared to training. For more details regarding hyperparameter settings, see our codebase at https://github.com/yulun-rayn/DGAPN. D MORE RESULTS ON QED AND PENALIZED LOGP Although QED and penalized LogP are the most popular objectives for benchmarking ML algorithms for molecule generation, these benchmarks are questionable for both scientific study and practical use, as Xie et al. (2021) pointed out. Most methods can obtain QED scores close or equal to the highest possible value of 0.948, so the metric can hardly distinguish between methods. As for pLogP, simply constructing a large acyclic molecule, such as the one given by the SMILES 'CCCCC...CCCCC' (139 carbons), yields a pLogP score of 50.31, which beats all state-of-the-art models in Table 4. Needless to say, an even higher pLogP can be achieved by continuously adding carbons, which is exactly how REINVENT performed in our experiment. We note that we were able to raise our results to around 18 solely by doubling the maximum time steps per episode reported in Appendix C, but we are not interested in pushing the performance on this somewhat meaningless metric by continuously increasing one hyperparameter. The results from REINVENT were produced in our own experiments, while the others were taken directly from the original results reported in the literature. E DEFINITIONS OF QED AND SA E.1 QUANTITATIVE ESTIMATE OF DRUGLIKENESS (QED) is defined as

QED = exp( (1/n) Σ_{i=1}^{n} ln d_i ),

where the d_i are desirability functions of eight widely used molecular properties. Specifically, they are molecular weight (MW), octanol-water partition coefficient (ALOGP), number of hydrogen bond donors (HBD), number of hydrogen bond acceptors (HBA), molecular polar surface area (PSA), number of rotatable bonds (ROTB), number of aromatic rings (AROM), and number of structural alerts. Each d_i is an asymmetric double sigmoid,

d_i(x) = a_i + b_i / (1 + exp(-(x - c_i + d_i/2) / e_i)) · (1 - 1 / (1 + exp(-(x - c_i + d_i/2) / f_i))),

where the constants a_i, ..., f_i are given in a supplementary table of Bickerton et al. (2012). E.2 SYNTHETIC ACCESSIBILITY (SA) is defined as

SA = fragmentScore - complexityPenalty.

The fragment score is calculated as a sum of contributions from fragments of 934,046 already-synthesized PubChem chemicals. The complexity penalty is computed from a combination of the ringComplexityScore, stereoComplexityScore, macroCyclePenalty, and sizePenalty:

ringComplexityScore = log(nRingBridgeAtoms + 1) + log(nSpiroAtoms + 1)
stereoComplexityScore = log(nStereoCenters + 1)
macroCyclePenalty = log(nMacroCycles + 1)
sizePenalty = nAtoms^1.005 - nAtoms
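Both scores can be computed directly with RDKit; the sketch below is a minimal example, assuming an RDKit build that ships the Contrib directory (note that the contrib SA implementation rescales the raw fragment-minus-complexity score to a 1-10 range, where lower means easier to synthesize):

```python
import os
import sys
from rdkit import Chem
from rdkit.Chem import QED, RDConfig

# RDKit ships the Ertl & Schuffenhauer SA score as a contrib module.
sys.path.append(os.path.join(RDConfig.RDContribDir, "SA_Score"))
import sascorer

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, as an arbitrary example
print("QED:", QED.qed(mol))                  # geometric mean of the eight desirabilities
print("SA :", sascorer.calculateScore(mol))  # fragment score minus complexity penalty
```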
1. What is the focus and contribution of the paper regarding generating molecules that bind to functional sites of the SARS-CoV-2 protein? 2. What are the strengths of the proposed reinforcement learning model with a fragment-based framework for chemical structure modification? 3. What are the weaknesses of the paper, especially regarding the model's performance on certain metrics and the lack of innovation in specific modules? 4. Do you have any questions or concerns about the paper's experimental results or the model's architecture?
Summary Of The Paper Review
Summary Of The Paper With the goal of generating molecules that bind to functional sites of the SARS-CoV-2 protein, the paper proposes a reinforcement learning model with a fragment-based framework for chemical structure modification. In the network part, spatial graph attention and spatial convolution are utilized to extract more structural information from the input graph into the representation of nodes or graphs. Based on the actor-critic algorithm, the reinforcement learning part is designed to find the state with the best docking score computed by docking programs, and recent techniques such as PPO and RND are used to train the model more effectively. In the experiments, the model shows strong performance while also reducing the complexity of chemical synthesis. Review Strengths: 1. The model in the paper is designed to address the timely challenge of discovering novel inhibitors targeting the SARS-CoV-2 non-structural protein endoribonuclease, and the experimental results show that molecules generated by the proposed model enjoy better synthetic accessibility according to the SA value, a metric generally used to measure the synthetic difficulty of drugs. Meanwhile, the molecules show higher docking scores compared to other existing algorithms. 2. The reinforcement learning part of the model is based on a fragment-based chemical environment for chemical synthesis and utilizes novel approaches such as Proximal Policy Optimization and Random Network Distillation as an effective attempt. This may inspire model architectures in future research in this field. Weaknesses: 1. Spatial graph attention and spatial convolution indeed improve the model's ability to extract structural information from input graphs, but these modules are not innovative enough. Actually, spatial graph attention is just a multi-head attention mechanism applied to graphs, and spatial convolution is a Euclidean-distance version of GNN's convolution operation. Some new modules are expected to be employed by the model to make it more powerful. 2. As seen from the experimental results, the docking scores and SA value are indeed better than those of other existing algorithms. But the model's performance on other metrics is unsatisfactory; in particular, Diversity is even the worst among the algorithms in Table 1. Although this has been explained in the paper as a general drawback of fragment-based algorithms, further optimization may be needed. 3. Some statements in the paper may confuse readers, as listed below. Equations 3 and 4 do not share a consistent representation of the elements of the adjacency matrix. Table 1 lacks some necessary annotations, such as the definition of Diversity and the meaning of the values in bold. The right plot in Figure 4 also lacks some necessary explanations, and this plot seems redundant as it stands.
ICLR
Title Towards Robust Neural Networks via Close-loop Control Abstract Despite their success in massive engineering applications, deep neural networks are vulnerable to various perturbations due to their black-box nature. Recent studies have shown that a deep neural network can misclassify the data even if the input data is perturbed by an imperceptible amount. In this paper, we address the robustness issue of neural networks by a novel close-loop control method from the perspective of dynamic systems. Instead of modifying the parameters in a fixed neural network architecture, a close-loop control process is added to generate control signals adaptively for the perturbed or corrupted data. We connect the robustness of neural networks with optimal control using the geometrical information of underlying data to design the control objective. The detailed analysis shows how the embedding manifolds of the state trajectory affect the error estimation of the proposed method. Our approach can simultaneously maintain the performance on clean data and improve the robustness against many types of data perturbations. It can also further improve the performance of robustly trained neural networks against different perturbations. To the best of our knowledge, this is the first work that improves the robustness of neural networks with close-loop control.¹ 1 INTRODUCTION Due to the increasing data and computing power, deep neural networks have achieved state-of-the-art performance in many applications such as computer vision, natural language processing and recommendation systems. However, many deep neural networks are vulnerable to various malicious perturbations due to their black-box nature: a small (even imperceptible) perturbation of input data may lead to completely wrong predictions (Szegedy et al., 2013; Nguyen et al., 2015). This has been a major concern in some safety-critical applications such as autonomous driving (Grigorescu et al., 2020) and medical image analysis (Lundervold & Lundervold, 2019). Various perturbations have been reported, including the ℓ_p-norm based attack (Madry et al., 2017; Moosavi-Dezfooli et al., 2016; Carlini & Wagner, 2017), semantic perturbation (Engstrom et al., 2017), etc. On the other side, some algorithms to improve the robustness against those perturbations have shown great success (Madry et al., 2017). However, most robustly trained models are tailored for certain types of perturbations, and they do not work well for other types of perturbations. Khoury & Hadfield-Menell (2018) showed the non-existence of an optimal decision boundary for any ℓ_p-norm perturbation. Recent works (E, 2017; Haber & Ruthotto, 2017) have shown the connection between dynamical systems and neural networks. This dynamical-systems perspective provides some interesting theoretical insights about the robustness issue. Given a set of data x_0 ∈ R^d and its labels y ∈ R^l with a joint distribution D, training a neural network can be considered as

min_θ E_{(x_0,y)∼D}[Φ(x_T, y)],  s.t.  x_{t+1} = f(x_t, θ_t),

where θ are the unknown parameters to train, and f, Φ represent the forward propagation rule and loss function (e.g., cross-entropy), respectively. §Equal contributing authors. ¹A PyTorch implementation can be found at: https://github.com/zhuotongchen/Towards-Robust-Neural-Networks-via-Close-loop-Control.git
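To make this dynamical-systems reading concrete, the sketch below rolls a residual network forward as a discrete-time trajectory; the dimensions and block structure are illustrative placeholders, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One step of the discrete dynamics x_{t+1} = x_t + g(x_t, theta_t)."""
    def __init__(self, dim):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.g(x)

T, dim = 4, 32
layers = nn.ModuleList(ResidualBlock(dim) for _ in range(T))
x = torch.randn(8, dim)      # x_0: a batch of initial conditions
for block in layers:         # forward propagation = rolling out the state trajectory
    x = block(x)             # x_t -> x_{t+1}
print(x.shape)
```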
The dynamical-systems perspective interprets the vulnerability of neural networks as a system instability issue, which concerns the variation of the state trajectory under small perturbations applied to the initial conditions. Optimal control theory focuses on developing a control model to adjust the system state trajectory in an optimal manner. The first work that links and extends the classical back-propagation algorithm using optimal control theory was presented in Li et al. (2017), where the direct relationship between the Pontryagin's Maximum Principle (Kirk, 1970) and gradient-based network training was established. Ye et al. (2019) used control theory to adjust the hyperparameters in the adversarial training algorithm. Han et al. (2018) established the mathematical basis of the optimal control viewpoint of deep learning. These existing works on algorithm development are open-loop control methods, since they commonly treat the network weights θ as control parameters and keep them fixed once training is done. The fixed control parameters θ operate optimally for data sampled from the data distribution D. However, various perturbation methods cause data distributions to deviate from the true distribution D (Song et al., 2017) and cause poor performance with the fixed open-loop control parameters. 1.1 PAPER CONTRIBUTIONS To address the limitation of open-loop control methods, we propose the Close-Loop Control Neural Network (CLC-NN), the first close-loop control method to improve the robustness of neural networks. As shown in Fig. 1, our method adds additional blocks to a given T-layer neural network: embedding functions E_t, which induce running losses in all layers that measure the discrepancies between true features and observed features under input perturbation; control processes then generate control variables u_t to minimize the total running loss under various data perturbations. The original neural network can be designed by either standard training or robust training. In the latter case, our CLC-NN framework can achieve extra robustness against different perturbations. The forward propagation rule is thus modified with an extra control parameter u_t ∈ R^{d′}:

x_{t+1} = f(x_t, θ_t, u_t).

Fig. 1 should not be misunderstood as an open-loop control. From the perspective of dynamic systems, x_0 is an initial condition, and the excitation input signal is u_t (which is 0 in a standard feed-forward network). Therefore, the forward signal path is from u_t to the internal states x_t and then to the output label y. The path from x_t to the embedding function E_t(x_t) and then to the excitation signal u_t forms a feedback and closes the whole loop. The technical contributions of this paper are summarized below: • The proposed method relies on the well-accepted assumption that the data and hidden-state manifolds are low dimensional compared to the ambient dimension (Fefferman et al., 2016). We study the geometrical information of the data and hidden layers to define the objective function for control. Given a trained T-layer neural network, a set of embedding functions E_t are trained off-line by minimizing the reconstruction loss ‖E_t(x_t) − x_t‖ over some clean data from D only. The embedding functions support defining the running loss required in our control method. • We define the control problem by dynamic programming and implement the online iterative solver based on the Pontryagin's Maximum Principle to avoid the curse of dimensionality.
The proposed close-loop control formulation does not require prior information about the perturbation. • We provide a theoretical error bound of the controlled system for the simplified case with linear activation functions and linear embedding. This error bound reveals how the close-loop control improves neural network robustness in the simplest setting. 2 RELATED WORKS Many techniques have been reported to improve the robustness of neural networks, such as data augmentation (Shorten & Khoshgoftaar, 2019), gradient masking (Liu et al., 2018), etc. We review adversarial training and reactive defense, which are most relevant to this work. Adversarial Training. Adversarial training is (possibly) the most popular robust training method; it solves a min-max robust optimization problem to minimize the worst-case loss on perturbed data. Adversarial training effectively regularizes the network's local Lipschitz constants of the loss surface around the data manifold (Liu et al., 2018). Zhang et al. (2019) formulated robust training using the Pontryagin's Maximum Principle; such open-loop control methods result in a set of fixed parameters that operates optimally on the considered perturbation. Liu et al. (2020a;b) considered a close-loop formulation from the differential dynamic programming perspective; this algorithm is still categorized as an open-loop control method because it utilizes state feedback information only to boost training convergence and results in a set of fixed controls for any unseen data. On the contrary, the proposed CLC-NN adaptively targets different inputs with different control parameters and is capable of distinguishing clean data by generating no control. Reactive Defense. A reactive defense method tries to reject or pre-process input data that may cause misclassifications. Metzen et al. (2017) rejected perturbed data by using adversarial detectors that are trained with adversarial data to detect abnormal data during forward propagation. Song et al. (2017) estimated the input data distribution D with a generative model (Oord et al., 2016) to detect data that does not belong to D; it applies a greedy method to search the local neighborhood of input data for a more statistically plausible counterpart. This purification process has shown improved accuracy on adversarial data contaminated by various types of perturbations. Purification can be considered as a one-step method to solve the optimal control problem whose objective function is defined over the initial condition only. On the contrary, the proposed CLC-NN solves the control problem by the dynamic programming principle, and its objective function is defined over the entire state trajectory, which guarantees the optimality of the resulting controls. 3 THE CLOSE-LOOP CONTROL FRAMEWORK FOR NEURAL NETWORKS Now we present a close-loop optimal control formulation to address the robustness issue of deep learning. Consider a neural network consisting of model parameters θ equipped with an external control policy π, where π ∈ Π is a collection of functions R^d → R^{d′} acting on the state and outputting the control signal. The feed-forward propagation in a T-layer neural network can be represented as

x_{t+1} = f(x_t, θ_t, π_t(x_t)),  t = 0, ..., T − 1.    (1)

Given a trained network, we solve the following optimization problem:

min_π E_{(x_0,y)∼D}[J(x_0, y, π)] := min_π E_{(x_0,y)∼D}[ Φ(x_T, y) + Σ_{s=0}^{T−1} L(x_s, π_s(x_s)) ],  s.t. Eq. (1),    (2)

where π collects the control policies π_0, ..., π_{T−1} for all layers.
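A hedged PyTorch sketch of Eqs. (1)-(2) follows. The additive coupling of u_t into f, the toy layer maps, and the crude coordinate-projection embedding are all illustrative assumptions; the paper leaves f(x, θ, u) generic and constructs the embeddings later:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
blocks = nn.ModuleList(nn.Sequential(nn.Linear(32, 32), nn.Tanh()) for _ in range(4))

def controlled_rollout(blocks, x0, controls):
    # Eq. (1): x_{t+1} = f(x_t, theta_t, u_t); here the control enters additively.
    xs = [x0]
    for block, u in zip(blocks, controls):
        xs.append(block(xs[-1] + u))
    return xs

def running_loss(x, u, embed, R=1e-2):
    # Running loss in the style of Eq. (3): reconstruction error + control penalty.
    return ((embed(x) - x) ** 2).sum() + R * (u ** 2).sum()

embed = lambda x: torch.cat([x[:, :16], torch.zeros_like(x[:, 16:])], dim=1)  # placeholder
x0 = torch.randn(8, 32)
controls = [torch.zeros(8, 32, requires_grad=True) for _ in blocks]
xs = controlled_rollout(blocks, x0, controls)
J = sum(running_loss(x, u, embed) for x, u in zip(xs[:-1], controls))  # Eq. (2), Phi = 0
J.backward()
with torch.no_grad():                     # one descent step on J w.r.t. the controls,
    for u in controls:                    # mimicking the per-input updates of Alg. 1
        u -= 0.1 * u.grad
```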
Note that (2) differs from the open-loop control used in standard training. An open-loop control that treats the network parameters as control variables seeks a set of fixed parameters θ that matches the output with the true label y by minimizing the terminal loss Φ, while the running loss L defines a regularization for θ. However, the terminal and running losses play different roles when our goal is to improve the robustness of a neural network by generating adaptive controls for different inputs. Challenge of Close-loop Control for Neural Networks. Optimal control has been well studied in the control community for trajectory optimization, where one defines the running loss as the error between the actual state x_t and a reference state x_{t,ref} over the time interval [0, T]. The resulting control policy adjusts x_t and makes it approach x_{t,ref}. In this paper, we apply the idea of trajectory optimization to improve the robustness of a neural network by adjusting the undesired state x_t. However, the formulation is more challenging for neural networks: we do not have a "reference" state during the inference process, so it is unclear how to define the running loss L. In the following, we investigate manifold embedding of the state trajectory to precisely define the loss functions Φ and L of Eq. (2) required for the control objective function of a neural network. 3.1 MANIFOLD LEARNING FOR STATE TRAJECTORIES State Manifold. Our controller design is based on the "manifold hypothesis": real-world high-dimensional data can often be embedded in a lower-dimensional manifold M (Fefferman et al., 2016). Indeed, neural networks extract the embedded features from M. To fool a well-trained neural network, the perturbed data often stays away from the data manifold M (Khoury & Hadfield-Menell, 2018). We consider the data space Z (x ∈ Z, ∀x ∼ D) as Z = Z_∥ ⊕ Z_⊥, where Z_∥ contains the embedded manifold M and Z_⊥ is the orthogonal complement of Z_∥. During forward propagation, the state manifold embedded in Z_∥ varies at different layers due to both the nonlinear activation function f and state dimensionality variation. Therefore, we denote Z_t = Z_t^∥ ⊕ Z_t^⊥ as the state-space decomposition at layer t, with M_t ⊂ Z_t^∥. Once an input data sample is perturbed, the main effects causing misclassification lie in Z_⊥. Therefore, it is important to measure how far the possibly perturbed state x_t deviates from the state manifold M_t. Embedding Function. Given an embedding function E_t that encodes x_t onto the lower-dimensional manifold M_t and decodes the result back to the full state space Z_t, the reconstruction loss ‖E_t(x_t) − x_t‖ measures the deviation of the possibly perturbed state x_t from the manifold M_t. The reconstruction loss is nonzero as long as x_t has components in Z_t^⊥. The embedding functions are constructed offline by minimizing the total reconstruction losses over a clean training data set. • Linear Case: E_t(·) can be taken as V_t^r (V_t^r)^T, where V_t^r forms an orthonormal basis for Z_t^∥. Specifically, one can first perform a principal component analysis over a collection of hidden states at layer t; then V_t^r is obtained as the first r columns of the resulting eigenvectors. • Nonlinear Case: we choose a convolutional auto-encoder (detailed in Appendix B) to obtain a representative manifold embedding function E_t due to its ease of implementation.
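For the linear case, the sketch below fits E_t = V_t^r (V_t^r)^T by PCA over collected clean states and evaluates the reconstruction loss; mean-centering is an implementation choice made here, not spelled out above:

```python
import torch

def fit_linear_embedding(states, r):
    # Fit E_t(x) = Vr Vr^T x by PCA over clean hidden states collected at layer t
    # (rows of `states`).
    mean = states.mean(dim=0, keepdim=True)
    _, _, Vh = torch.linalg.svd(states - mean, full_matrices=False)
    Vr = Vh[:r].T                                      # d x r orthonormal basis for Z_t
    return lambda x: (x - mean) @ Vr @ Vr.T + mean

torch.manual_seed(0)
states = torch.randn(5000, 10) @ torch.randn(10, 64)   # states on a 10-dim subspace
embed = fit_linear_embedding(states, r=10)
x_clean = states[:1]
x_pert = x_clean + 0.5 * torch.randn(1, 64)             # a possibly perturbed state
print(((embed(x_clean) - x_clean) ** 2).sum().item())   # ~0: on the manifold
print(((embed(x_pert) - x_pert) ** 2).sum().item())     # > 0: off-manifold component
```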
Based on the assumption that most perturbations lie in the Z_⊥ subspace, the embeddings are effective for detecting perturbations as long as the target manifold is of low dimension. Alternative manifold learning methods such as Izenman (2012) may also be employed. 3.2 FORMULATION FOR THE CLOSE-LOOP CONTROL OF NEURAL NETWORKS Control Objectives. The above embedding function allows us to define a running loss L:

L(x_t, π_t(x_t), E_t(·)) = ‖E_t(x_t) − x_t‖_2^2 + (π_t(x_t))^T R π_t(x_t).    (3)

Here the matrix R defines a regularization term promoting controls of small magnitudes. In practical implementations, using a diagonal matrix R with small elements often helps to improve the performance. Now we are ready to design the control objective function of CLC-NN. Different from a standard open-loop control, this work sets the terminal loss Φ to zero because no true label is given during inference. Consequently, the close-loop control formulation in Eq. (2) becomes

min_π E_{(x_0,y)∼D}[J(x_0, y, π)] := min_π E_{(x_0,y)∼D} Σ_{t=0}^{T−1} L(x_t, π_t(x_t), E_t(·)),  s.t. Eq. (1).    (4)

Assume that the input data is perturbed by a bounded and small amount, i.e., x_{ε,0} = x_0 + ε·z, where z can be either random or adversarial. The proposed CLC-NN adjusts the perturbed state trajectory x_{ε,t} such that it stays at a minimum distance from the desired manifold M_t while promoting controls of small magnitude. Intuition. We use an intuitive example to show how CLC-NN controls the state trajectory of unseen data samples. We create a synthetic binary classification data set with 1500 samples. We train a residual neural network with one hidden layer of dimension 2, and adopt the fast gradient sign method (Goodfellow et al., 2014) to generate adversarial data. Fig. 2 (a) and (b) show the states of clean data (red and blue) and of perturbed data (black and gray) at t = 0 and t = 1, respectively. The CLC-NN adjusts the state trajectory to reduce the reconstruction loss as shown in Fig. 2 (c) and (d), where a lighter background color represents a lower reconstruction loss. Comparing Fig. 2 (a) with (c), and Fig. 2 (b) with (d), we see that the perturbed states in Fig. 2 (a) and (b) deviate from the desired state manifold (light green region) and have a high reconstruction loss. Running 1000 iterations of Alg. 1 adjusts the perturbed states and improves the classification accuracy from 86% to 100%. 4 IMPLEMENTATION VIA THE PONTRYAGIN'S MAXIMUM PRINCIPLE Dynamic Programming for Close-Loop Control (4). The control problem in Eq. (4) can be solved by the dynamic programming principle (Bellman, 1952). For simplicity we consider one input data sample, and define a value function V : T × R^d → R (where T := {0, 1, ..., T − 1}). Here V(t, x) represents the optimal cost-to-go function of Eq. (4) incurred from time t at state x. One can show that V(t, x) satisfies the dynamic programming principle

V(t, x) = inf_{π∈Π}[V(t + 1, x + f(x, θ_t, π(x))) + L(x, π(x), E_t(·))].    (5)

Eq. (5) gives a necessary and sufficient condition for the optimality of Eq. (4), and it is often solved backward in time by discretizing the entire state space. The state dimension of a modern neural network is on the order of thousands or even higher; therefore, discretizing the state space and directly solving Eq. (5) is intractable for real-world applications due to the curse of dimensionality. Solving (5) via the Pontryagin's Maximum Principle.
To overcome the computational challenge, the Pontryagin's Maximum Principle (Kirk, 1970) converts the intractable dynamic programming into two ordinary differential equations and a maximization condition. Instead of computing the control policy π of Eq. (5), the Pontryagin's Maximum Principle provides a necessary condition for optimality with a set of control parameters [u_0^*, ..., u_T^*]. The mean-field Pontryagin's Maximum Principle can be considered when the initial condition is a batch of i.i.d. samples drawn from D. Specifically, we trade the intractable computational complexity for the processing time of solving the Hamilton equations and their maximization condition for every newly observed data sample. To begin with, we define the Hamiltonian H : T × R^d × R^d × R^l × R^m → R as

H(t, x_t, p_{t+1}, θ_t, u_t) := p_{t+1}^T · f(x_t, θ_t, u_t) − L(x_t, u_t, E_t(·)).    (6)

Let x^* denote the corresponding optimally controlled state trajectory. There exists a co-state process p^* : [0, T] → R^d such that the Hamilton's equations

x_{t+1}^* = ∇_p H(t, x_t^*, p_t^*, θ_t, u_t^*),  (x_0^*, y) ∼ D,    (7)
p_t^* = ∇_x H(t, x_t^*, p_{t+1}^*, θ_t, u_t^*),  p_T^* = 0,    (8)

are satisfied. The terminal co-state p_T = 0, since we do not consider the terminal loss Φ(x_T, y). Moreover, we have the Hamiltonian maximization condition

H(t, x_t^*, p_t^*, θ_t, u_t^*) ≥ H(t, x_t^*, p_t^*, θ_t, u_t),  ∀u ∈ R^{d′} and ∀t ∈ T.    (9)

Instead of solving Eq. (5) for the optimal control policy π^*(x_t), for a given initial condition the Pontryagin's Maximum Principle seeks an open-loop optimal solution such that the global optimum of Eq. (5) is satisfied. The limitation of using the maximum principle is that the control parameters u_t^* need to be solved for every unseen data sample to achieve the optimal solution. Algorithm Flow. The numerical implementation of CLC-NN is summarized in Alg. 1. Given a trained network (either from standard or adversarial training) and a set of embedding functions, the controls are initialized as u_t = 0, ∀t ∈ T, because random initialization generally weakens the robustness performance, and a clean trajectory often does not produce any running loss for the gradient update on the control parameters.

Algorithm 1: CLC-NN with the Pontryagin's Maximum Principle.
Input: Possibly perturbed data x_ε, a trained neural network, embedding functions [E_1, ..., E_{T−1}], maxItr (maximum number of iterations).
Output: A set of optimal control parameters u_0^*, ..., u_{T−1}^*.
1 for k = 0 to maxItr do
2   J_k = 0,
3   for t = 0 to T − 1 do
4     x_{t+1,k} = f(x_{t,k}, θ_t, u_{t,k}), where x_{0,k} = x_ε,   (Forward propagation, Eq. (7))
5     J_k = J_k + L(x_{t,k}, u_{t,k}, E_t(x_{t,k})),   (Objective function, Eq. (4))
6   end for
7   for t = T to 1 do
8     p_{t,k} = p_{t+1}^T · ∇_{x_t} f(x_{t,k}, θ_t, u_{t,k}) − ∇_{x_t} L(x_{t,k}, u_{t,k}, E_t(x_{t,k})), where p_{T,k} = 0,   (Backward propagation, Eq. (8))
9   end for
10  for t = 0 to T − 1 do
11    u_{t,k+1} = u_{t,k} + (p_{t+1,k}^T · ∇_{u_t} f(x_{t,k}, θ_t, u_{t,k}) − ∇_{u_t} L(x_{t,k}, u_{t,k}, E_t(x_{t,k}))),   (Maximization of the Hamiltonian, Eq. (9), by gradient ascent on Eq. (6))
12  end for
13 end for

In every iteration, a given input x_0 is propagated forward with Eq. (7) to obtain all the intermediate hidden states x_t and to accumulate the cost J. Eq. (8) backward-propagates the co-state p_t, and Eq. (9) maximizes the t-th Hamiltonian with the current x_t and p_t to compute the optimal control parameters u_t^*. 5 ERROR ANALYSIS FOR SIMPLIFIED LINEAR CASES For ease of analysis, we consider a simplified neural network with linear activation functions, x_{t+1} = θ_t(x_t + u_t), and reveal why our proposed method can improve robustness in the simplest setting.
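A toy NumPy check of this simplified linear setting, using the greedy per-layer feedback u_t = −K_t x_{ε,t} derived in Appendix A; with orthogonal θ_t, the printed errors should match the α^{2t}‖z_⊥‖² + ‖z_∥‖² behaviour of Theorem 1 below (dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, c, T = 8, 3, 0.1, 6
alpha = c / (1 + c)
Vr = np.linalg.qr(rng.standard_normal((d, r)))[0]    # basis of the input-state manifold
Q0 = np.eye(d) - Vr @ Vr.T                           # projector onto its orthogonal complement

theta = np.linalg.qr(rng.standard_normal((d, d)))[0]  # one orthogonal layer, reused per step
z_par = Vr @ rng.standard_normal(r)                   # in-manifold perturbation z_par
z_perp = Q0 @ rng.standard_normal(d)                  # off-manifold perturbation z_perp

x = Vr @ rng.standard_normal(r)     # clean input lies on the manifold
xe = x + z_par + z_perp             # perturbed input x_{eps,0}
for t in range(1, T + 1):
    Q = np.eye(d) - Vr @ Vr.T
    K = np.linalg.solve(c * np.eye(d) + Q.T @ Q, Q.T @ Q)  # greedy gain, Eq. (14), App. A
    x, xe = theta @ x, theta @ (xe - K @ xe)               # x_{t+1} = theta_t (x_t + u_t)
    Vr = theta @ Vr                                        # the manifold propagates with theta
    pred = alpha ** (2 * t) * (z_perp @ z_perp) + z_par @ z_par  # Theorem 1, orthogonal case
    print(t, np.sum((xe - x) ** 2), pred)
```

The clean trajectory needs no explicit control here, since K_t x_t = 0 for on-manifold states.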
Given a perturbed data sample x_{ε,0}, we denote its perturbation-free counterpart as x_0, so that z = x_{ε,0} − x_0. We consider a general perturbation where z is the direct sum of two orthogonal contributions: z_∥, a perturbation within the data manifold (subspace), and z_⊥, a perturbation in the orthogonal complement of the data manifold. This case is general: for adversarial attacks, the perturbation along the orthogonal complement dominates; in contrast, for random perturbations, the two components are on the same scale. Our formulation covers both extreme scenarios, together with intermediate cases. We use an orthogonal projection as the embedding function, E_t = V_t^r (V_t^r)^T, where V_t^r is the first r columns of the eigenvectors computed by principal component analysis on a collection of states x_t. The proposed CLC-NN minimizes ‖x_{ε,t} − x_t‖_2^2 by reducing the components of x_{ε,t} that lie in the orthogonal complement of Z_t^∥. The following theorem provides an error estimate between x_{ε,t} and x_t. Theorem 1. For t ≥ 1, we have the error estimate

‖x_{ε,t} − x_t‖_2^2 ≤ ‖θ_{t−1} ··· θ_0‖_2^2 · ( α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + γ_t‖z‖_2^2 ( γ_t α^2 (1 − α^{t−1})^2 + 2(α − α^t) ) ),    (10)

where γ_t := max_{s≤t} (1 + κ(θ̄_s)^2)‖I − θ̄_s^T θ̄_s‖_2 with θ̄_s := θ_{s−1} ··· θ_0, and α = c/(1+c), where c is the control regularization. In particular, the equality

‖x_{ε,t} − x_t‖_2^2 = α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2    (11)

holds when all θ_t are orthogonal. The detailed derivation is presented in Appendix A. Let us summarize the insights from Theorem 1. • The above error estimate holds for any input perturbation. It shows the working principle of the proposed CLC-NN in controlling the perturbation component that lies in the orthogonal complement of the input subspace (z_⊥). • The error estimate improves as the control regularization c goes to 0 (so α → 0). It is not the sharpest possible, since it relies on a greedily optimal control at each layer. The globally optimal control defined by the Riccati equation may achieve a lower loss when c ≠ 0. • When the dimension r of the embedding subspace decreases, our control becomes more effective at reducing ‖x_{ε,t} − x_t‖_2^2. This means that the control approach works best when the data is constrained to a low-dimensional manifold, which is consistent with the manifold hypothesis. In particular, observe that as r → 0, ‖z_∥‖_2^2 → 0. • The obtained upper bound is tight: the estimated upper bound becomes the actual error if all forward propagation layers are orthogonal matrices. 6 NUMERICAL EXPERIMENTS We test our proposed CLC-NN framework under various input data perturbations. Here we briefly summarize our experimental settings, and we refer readers to Appendix B for the details. • Original Networks without Close-Loop Control. We choose residual neural networks (He et al., 2016) with ReLU activation functions as our target for close-loop control. To show that CLC-NN can improve robustness in various settings, we consider networks from both standard and adversarial training. We consider multiple adversarial training methods: the fast gradient sign method (FGSM) (Goodfellow et al., 2014), projected gradient descent (PGD) (Madry et al., 2017), and label smoothing training (Label Smooth) (Hazan et al., 2017). • Input Perturbations. To test our CLC-NN framework, we perturb the input data within a radius of ε, with ε = 2, 4 and 8, respectively.
We consider various perturbations, including non-adversarial perturbations with the manifold-based attack (Jalal et al., 2017) (Manifold), as well as adversarial attacks such as FGSM, PGD and the CW method (Carlini & Wagner, 2017). • CLC-NN Implementations. We consider both linear and nonlinear embedding in our close-loop control. Specifically, we employ a principal component analysis with a 1% truncation error for linear embedding, and convolutional auto-encoders for nonlinear embedding. We use Adam (Kingma & Ba, 2014) to maximize the Hamiltonian function (9) and keep the same hyperparameters (learning rate, maximum iterations) for each model against all perturbations. Result Summary: Table 1 and Table 2 show the results for both the CIFAR-10 and CIFAR-100 datasets on neural networks from standard training and adversarial training, respectively. • CLC-NN significantly improves the robustness of neural networks from standard training. Table 1 shows that the baseline network trained on a clean data set becomes completely vulnerable (with almost 0% accuracy) under PGD and CW attacks. Our CLC-NN improves its accuracy to nearly 40% and 80% under PGD and CW attacks, respectively. The accuracy under FGSM attacks is almost doubled by our CLC-NN method. The accuracy on clean data is slightly decreased because the lower-dimensional embedding functions cannot exactly capture Z_∥ or M. • CLC-NN further improves the robustness of adversarially trained networks. Table 2 shows that while an adversarially trained network is inherently robust against certain types of perturbations, CLC-NN strengthens its robustness significantly against various perturbations. • The robustness improvement on adversarially trained networks is less significant. This is expected because the trajectory of perturbed data lies on the embedding subspace Z_∥ if that data sample has been used in adversarial training. However, our experiments show that applying CLC-NN to adversarially trained networks achieves the best performance under most attacks. Comparison with PixelDefend (Song et al., 2017). Our method achieves similar performance on CIFAR-10 under a slightly different experimental setting. Specifically, PixelDefend improved the robustness of a normally trained 62-layer ResNet from 0% to 78% against the CW attack. Our proposed CLC-NN improves the robustness of a 20-layer ResNet from 0% to 81% against CW attacks. Furthermore, we show that CLC-NN is robust against the manifold-based attack. No result was reported for CIFAR-100 in Song et al. (2017). Comparison with Reactive Defense. Reactive defenses can be understood as applying a control only at the initial condition of a dynamical system. Specifically, a reactive defense equipped with linear embedding admits the following dynamics:

x_{t+1} = f(x_t, θ_t),  s.t.  x_0 = V_0^r (V_0^r)^T x_{ε,0}.    (12)

By contrast, CLC-NN controls all hidden states and results in a decreasing error as the number of layers T increases (cf. Theorem 1). To quantitatively compare CLC-NN with reactive defense, we implement both with the same linear embedding functions and evaluate against all perturbations. In Table 3, CLC-NN outperforms reactive defense in almost all cases, except that their relative performance on clean data is case-dependent. 7 CONCLUSION We have proposed a close-loop control formulation to improve the robustness of neural networks. We have studied the embedding of the state trajectory during forward propagation to define the optimal control objective function.
The numerical experiments have shown that our method can improve the robustness of a trained neural network against various perturbations. We have provided an error estimate for the proposed method in the linear case. Our current implementation uses the Pontryagin's Maximum Principle and an online iterative algorithm to overcome the intractability of solving a dynamic programming problem. This online process adds extra inference time. In the future, we plan to extend the theoretical analysis to the nonlinear embedding case. Acknowledgement Zhuotong Chen and Zheng Zhang are supported by NSF CAREER Award No. 1846476 and NSF CCF No. 1817037. Qianxiao Li is supported by the start-up grant under the NUS PYP programme. A APPENDIX A ERROR ESTIMATION FOR THE PROPOSED CLC-NN Preliminaries. We define the performance index at time t as

J(x_t, u_t) = (1/2)‖Q_t(x_t + u_t)‖_2^2 + (c/2)‖u_t‖_2^2,    (13)

where Q_t = I − V_t^r (V_t^r)^T, and V_t^r is the linear projection matrix at time t containing only the first r principal components corresponding to the largest r eigenvalues. The optimal feedback control is defined as u_t^*(x_t) = argmin_{u_t} J(x_t, u_t); due to the linear system and quadratic performance index, the optimal feedback control admits an analytic solution, obtained by taking the gradient of the performance index (Eq. (13)) and setting it to 0:

∇_u J(x_t, u_t) = ∇_u( (1/2)‖Q_t(x_t + u_t)‖_2^2 + (c/2)‖u_t‖_2^2 ) = Q_t^T Q_t x_t + Q_t^T Q_t u_t + c·u_t,

which leads to the analytic solution

u_t^*(x_t) = −(c·I + Q_t^T Q_t)^{−1} Q_t^T Q_t x_t.    (14)

The above analytic control solution u_t^* optimizes the performance index instantaneously at time step t; the error measured by Eq. (13) for the dynamic programming solution x_{ε,t} must be smaller than or equal to that of the state trajectory equipped with the u_t^* defined by Eq. (14), which gives a guaranteed upper bound for the error estimate of the dynamic programming solution. We define the feedback gain matrix K_t = (c·I + Q_t^T Q_t)^{−1} Q_t^T Q_t, so that the one-step optimal feedback control can be represented as u_t^* = −K_t x_t. The difference between the controlled system with perturbation applied at the initial condition and the uncontrolled system without perturbation is

x_{ε,t+1} − x_{t+1} = θ_t(x_{ε,t} + u_t − x_t) = θ_t(x_{ε,t} − K_t x_{ε,t} − x_t).    (15)

The control objective is to minimize the state components that span the orthogonal complement of the data manifold (I − V_t^r (V_t^r)^T); when the input to the feedback control stays entirely in the state manifold, so that ‖(I − V_t^r (V_t^r)^T) x_t‖_2^2 = 0, the feedback control K_t x_t = 0. The state difference of Eq. (15) can be rewritten by adding the zero term θ_t K_t x_t:

x_{ε,t+1} − x_{t+1} = θ_t(I − K_t) x_{ε,t} − θ_t x_t + θ_t K_t x_t = θ_t(I − K_t)(x_{ε,t} − x_t).    (16)

In the following, we show a transformation of the control dynamics term (I − K_t) based on its definition. Lemma 1. For t ≥ 0, we have I − K_t = α·I + (1 − α)·P_t, where P_t := V_t^r (V_t^r)^T is the orthogonal projection onto Z_t^∥, and α := c/(1+c), so that α ∈ [0, 1]. Proof. Recall that K_t = (c·I + Q_t^T Q_t)^{−1} Q_t^T Q_t with Q_t = I − V_t^r (V_t^r)^T. Q_t can be diagonalized as Q_t = V_t diag(0, ..., 0, 1, ..., 1) V_t^T, where the first r diagonal elements have the common value 0 and the last (d − r) diagonal elements have the common value 1. Consequently, the feedback gain matrix K_t can be diagonalized as K_t = V_t diag(0, ..., 0, 1/(1+c), ..., 1/(1+c)) V_t^T, where the last (d − r) diagonal elements have the common value 1/(1+c).
The control term (I − K_t) can thus be represented as I − K_t = V_t diag(1, ..., 1, c/(1+c), ..., c/(1+c)) V_t^T, where the first r diagonal elements have the common value 1 and the last (d − r) diagonal elements have the common value c/(1+c). Denoting the first r columns of V_t by V_t^r and the last (d − r) columns by V̂_t^r, this can be written as

I − K_t = V_t^r (V_t^r)^T + (c/(1+c))·V̂_t^r (V̂_t^r)^T = P_t + α(I − P_t) = α·I + (1 − α)·P_t.

Oblique Projections. Let P be a linear operator on R^d. • We say that P is a projection if P² = P. • P is an orthogonal projection if P = P^T = P². • If P² = P but P ≠ P^T, it is called an oblique projection. Proposition 2. For a projection P: 1. If P is an orthogonal projection, then ‖P‖_2 = 1. 2. If P is an oblique projection, then ‖P‖_2 > 1. 3. If P, Q are two projections such that range(P) = range(Q), then PQ = Q and QP = P. 4. If P is a projection, then rank(P) = Tr(P); furthermore, if P is an orthogonal projection, then rank(P) = ‖P‖_F² = Tr(P P^T). Define, for t ≥ 0, P_t^0 := P_t and P_t^{s+1} := θ_{t−s−1}^{−1} P_t^s θ_{t−s−1}, s = 0, 1, ..., t − 1. Lemma 3. Let P_t^s be defined as above for 0 ≤ s ≤ t. Then: 1. P_t^s is a projection. 2. P_t^s is a projection onto Z_{t−s}^∥, i.e., range(P_t^s) = Z_{t−s}^∥. 3. ‖P_t^s‖_F² ≤ κ(θ_{t−1} θ_{t−2} ··· θ_{t−s})² · r, where κ(A) is the condition number of A, i.e., κ(A) = ‖A‖_2 · ‖A^{−1}‖_2, and r = rank(Z_0^∥) = rank(Z_1^∥) = ... = rank(Z_t^∥). Proof. 1. We prove it by induction on s for each t. For s = 0, P_t^0 = P_t, which is a projection by definition. Suppose the claim is true for s, so that P_t^s = P_t^s P_t^s; then, for s + 1,

(P_t^{s+1})² = (θ_{t−s−1}^{−1} P_t^s θ_{t−s−1})² = θ_{t−s−1}^{−1} (P_t^s)² θ_{t−s−1} = θ_{t−s−1}^{−1} P_t^s θ_{t−s−1} = P_t^{s+1}.

2. We prove it by induction on s for each t. For s = 0, P_t^0 = P_t, which is the orthogonal projection onto Z_t^∥. Suppose the claim is true for s, so that P_t^s is a projection onto Z_{t−s}^∥; then, for s + 1, P_t^{s+1} = θ_{t−s−1}^{−1} P_t^s θ_{t−s−1}, which implies

range(P_t^{s+1}) = range(θ_{t−s−1}^{−1} P_t^s) = {θ_{t−s−1}^{−1} x : x ∈ Z_{t−s}^∥} = Z_{t−s−1}^∥.

3. We use the inequalities ‖AB‖_F ≤ ‖A‖_2 ‖B‖_F and ‖AB‖_F ≤ ‖A‖_F ‖B‖_2. By the definition of P_t^s, P_t^s = (θ_{t−1} θ_{t−2} ··· θ_{t−s})^{−1} P_t^0 (θ_{t−1} θ_{t−2} ··· θ_{t−s}), so we have

‖P_t^s‖_F² ≤ ‖(θ_{t−1} θ_{t−2} ··· θ_{t−s})^{−1}‖_2² · ‖θ_{t−1} θ_{t−2} ··· θ_{t−s}‖_2² · ‖P_t^0‖_F² ≤ κ(θ_{t−1} θ_{t−2} ··· θ_{t−s})² · r,

where the last step uses Proposition 2(4). The following lemma uses the concept of oblique projection to show a recursive relationship that projects the t-th state space of Eq. (16) back to the input data space. Lemma 4. Define, for 0 ≤ s ≤ t, G_t^s := α·I + (1 − α) P_t^s. Then Eq. (16) can be written as

x_{ε,t} − x_t = (θ_{t−1} θ_{t−2} ··· θ_0)(G_{t−1}^{t−1} G_{t−2}^{t−2} ··· G_0^0)(x_{ε,0} − x_0),  t ≥ 1.

Proof. We prove it by induction on t. For t = 1, by the definition of G_t^s and the transformation from Lemma 1,

x_{ε,1} − x_1 = θ_0(I − K_0)(x_{ε,0} − x_0) = θ_0(α·I + (1 − α)·P_0)(x_{ε,0} − x_0) = θ_0 G_0^0 (x_{ε,0} − x_0).

Suppose the claim is true for (x_{ε,t} − x_t); then, using Eq. (16) and Lemma 1, we have

x_{ε,t+1} − x_{t+1} = θ_t(I − K_t)(x_{ε,t} − x_t) = θ_t(α·I + (1 − α)·P_t)(x_{ε,t} − x_t) = θ_t G_t^0 (θ_{t−1} θ_{t−2} ··· θ_0)(G_{t−1}^{t−1} G_{t−2}^{t−2} ··· G_0^0)(x_{ε,0} − x_0).    (17)

Recalling the definitions P_t^{s+1} := θ_{t−s−1}^{−1} P_t^s θ_{t−s−1} and G_t^s := α·I + (1 − α) P_t^s, we have

G_t^{s+1} = α·I + (1 − α)·θ_{t−s−1}^{−1} P_t^s θ_{t−s−1} = θ_{t−s−1}^{−1}(α·I + (1 − α)·P_t^s)θ_{t−s−1} = θ_{t−s−1}^{−1} G_t^s θ_{t−s−1},

which gives the corresponding equality for the oblique projections; in particular, θ_{t−s−1} G_t^{s+1} = G_t^s θ_{t−s−1}. Applying the above to Eq.
(17) results in

x_{ε,t+1} − x_{t+1} = θ_t G_t^0 (θ_{t−1} θ_{t−2} ··· θ_0)(G_{t−1}^{t−1} ··· G_0^0)(x_{ε,0} − x_0)
 = (θ_t θ_{t−1}) G_t^1 (θ_{t−2} θ_{t−3} ··· θ_0)(G_{t−1}^{t−1} ··· G_0^0)(x_{ε,0} − x_0)
 = (θ_t θ_{t−1} θ_{t−2}) G_t^2 (θ_{t−3} θ_{t−4} ··· θ_0)(G_{t−1}^{t−1} ··· G_0^0)(x_{ε,0} − x_0)
 = (θ_t θ_{t−1} ··· θ_0)(G_t^t G_{t−1}^{t−1} ··· G_0^0)(x_{ε,0} − x_0).

Lemma 5. Let F_t := G_{t−1}^{t−1} G_{t−2}^{t−2} ··· G_0^0, t ≥ 1. Then

F_t = α^t·I + (1 − α) Σ_{s=0}^{t−1} α^s P_s^s.

Proof. We prove it by induction on t. Recall the definition G_t^s := α·I + (1 − α)·P_t^s. For t = 1, F_1 = G_0^0 = α·I + (1 − α)·P_0^0. Suppose the claim is true for t, so that F_t = G_{t−1}^{t−1} ··· G_0^0 = α^t·I + (1 − α) Σ_{s=0}^{t−1} α^s P_s^s; then, for t + 1,

F_{t+1} = G_t^t F_t = (α·I + (1 − α)·P_t^t)(α^t·I + (1 − α) Σ_{s=0}^{t−1} α^s P_s^s)
 = α^{t+1}·I + α^t(1 − α) P_t^t + (1 − α)² Σ_{s=0}^{t−1} α^s·P_t^t P_s^s + α(1 − α) Σ_{s=0}^{t−1} α^s·P_s^s.

Recall from Lemma 3 that range(P_t^t) = range(P_s^s) = Z_0^∥. By Proposition 2(3), P_t^t P_s^s = P_s^s. Hence

F_{t+1} = α^{t+1}·I + α^t(1 − α)·P_t^t + (1 − α) Σ_{s=0}^{t−1} α^s·P_s^s = α^{t+1}·I + (1 − α) Σ_{s=0}^{t} α^s·P_s^s.

Lemma 6. Let V ∈ R^{d×r} be a matrix whose columns form an orthonormal basis for a subspace D, and let θ ∈ R^{d×d} be invertible. Let P = V V^T be the orthogonal projection onto D. Denote by P̂ the orthogonal projection onto θD := {θx : x ∈ D}. Then: 1. θ^{−1} P̂ θ is an oblique projection onto D. 2. ‖θ^{−1} P̂ θ − P‖_2 ≤ (1 + κ(θ)²)·‖I − θ^T θ‖_2. In particular, the last inequality shows that θ^{−1} P̂ θ = P if θ is orthogonal. Proof. 1. (θ^{−1} P̂ θ)² = θ^{−1} P̂² θ = θ^{−1} P̂ θ; therefore θ^{−1} P̂ θ is a projection. 2. Since P̂ is the orthogonal projection onto the column space of θV,

P̂ = θV[(θV)^T(θV)]^{−1}(θV)^T = θV[V^T θ^T θ V]^{−1} V^T θ^T,  so  θ^{−1} P̂ θ = V[V^T θ^T θ V]^{−1} V^T θ^T θ.

Furthermore,

‖θ^{−1} P̂ θ − P‖_2 = ‖V[V^T θ^T θ V]^{−1} V^T θ^T θ − V V^T‖_2
 ≤ ‖V[V^T θ^T θ V]^{−1} V^T θ^T θ − V V^T θ^T θ‖_2 + ‖V V^T θ^T θ − V V^T‖_2
 ≤ ‖V([V^T θ^T θ V]^{−1} − I)V^T‖_2 · ‖θ^T θ‖_2 + ‖θ^T θ − I‖_2
 ≤ ‖[V^T θ^T θ V]^{−1}‖_2 · ‖I − V^T θ^T θ V‖_2 · ‖θ^T θ‖_2 + ‖θ^T θ − I‖_2
 ≤ ‖[V^T θ^T θ V]^{−1}‖_2 · ‖I − θ^T θ‖_2 · ‖θ^T θ‖_2 + ‖θ^T θ − I‖_2.

We further bound ‖[V^T θ^T θ V]^{−1}‖_2:

‖[V^T θ^T θ V]^{−1}‖_2 = (λ_min(V^T θ^T θ V))^{−1} = ( inf_{‖x‖_2=1} x^T V^T θ^T θ V x )^{−1} ≤ ( inf_{‖x′‖_2=1} (x′)^T θ^T θ x′ )^{−1} = (λ_min(θ^T θ))^{−1} = ‖(θ^T θ)^{−1}‖_2.

Hence

‖θ^{−1} P̂ θ − P‖_2 ≤ (1 + ‖θ^T θ‖_2 · ‖(θ^T θ)^{−1}‖_2)·‖I − θ^T θ‖_2 = (1 + κ(θ)²)·‖I − θ^T θ‖_2.

Corollary 1. Let t ≥ 1. Then, for each s = 0, 1, ..., t, we have ‖P_s^s − P_0‖_2 ≤ (1 + κ(θ̄_s)²)·‖I − θ̄_s^T θ̄_s‖_2, where θ̄_s := θ_{s−1} ··· θ_0 for s ≥ 1 and θ̄_0 := I. Observe that P_s^s = (θ̄_s)^{−1} P_s θ̄_s; using Lemma 6, we arrive at the main theorem. Theorem 1. For t ≥ 1, we have the error estimate

‖x_{ε,t} − x_t‖_2^2 ≤ ‖θ_{t−1} ··· θ_0‖_2^2 · ( α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + γ_t‖z‖_2^2 ( γ_t α²(1 − α^{t−1})² + 2(α − α^t) ) ),

where γ_t := max_{s≤t} (1 + κ(θ̄_s)²)‖I − θ̄_s^T θ̄_s‖_2 and α = c/(1+c), with c the control regularization. In particular, the equality ‖x_{ε,t} − x_t‖_2^2 = α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 holds when all θ_t are orthogonal. Proof. The input perturbation z = x_{ε,0} − x_0 can be written as z = z_∥ + z_⊥, where z_∥ and z_⊥ are vectors such that • z_∥ · z_⊥ = 0 almost surely. • z_∥, z_⊥ have uncorrelated components. • z_∥ ∈ D and z_⊥ ∈ D^⊥.
Since z_∥ and z_⊥ are orthogonal almost surely, recalling Lemma 4,

‖x_{ε,t} − x_t‖_2^2 = ‖(θ_{t−1} θ_{t−2} ··· θ_0)(G_{t−1}^{t−1} ··· G_0^0) z‖_2^2 ≤ ‖θ_{t−1} θ_{t−2} ··· θ_0‖_2^2 · ‖(G_{t−1}^{t−1} ··· G_0^0) z‖_2^2.    (18)

For the term ‖(G_{t−1}^{t−1} ··· G_0^0) z‖_2^2, recalling Lemma 5,

‖(G_{t−1}^{t−1} ··· G_0^0) z‖_2^2 = ‖(α^t·I + (1 − α) Σ_{s=0}^{t−1} α^s·P_s^s) z‖_2^2
 = ‖α^t z + (1 − α) Σ_{s=0}^{t−1} α^s P_0 z + (1 − α) Σ_{s=0}^{t−1} α^s (P_s^s − P_0) z‖_2^2
 = ‖α^t z + (1 − α^t) z_∥ + (1 − α) Σ_{s=0}^{t−1} α^s (P_s^s − P_0) z‖_2^2;

here P_0 is the orthogonal projection at t = 0 (the input data space), so P_0 z = z_∥; moreover, for s = 0, P_s^s − P_0 = 0. Expanding the square, and using α^{2t}‖z‖_2^2 = α^{2t}‖z_⊥‖_2^2 + α^{2t}‖z_∥‖_2^2 together with (α^{2t} + 2α^t(1 − α^t) + (1 − α^t)²) = 1 to collect the ‖z_∥‖_2^2 coefficients,

‖(G_{t−1}^{t−1} ··· G_0^0) z‖_2^2 = α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + (1 − α)² Σ_{s,q=1}^{t−1} α^s α^q z^T (P_s^s − P_0)^T (P_q^q − P_0) z + 2α^t(1 − α) Σ_{s=1}^{t−1} α^s z^T (P_s^s − P_0) z + 2(1 − α^t)(1 − α) Σ_{s=1}^{t−1} α^s (z_∥)^T (P_s^s − P_0) z.

Using Corollary 1, we have • z^T (P_s^s − P_0) z ≤ ‖z‖_2^2·‖P_s^s − P_0‖_2 ≤ γ_t‖z‖_2^2; • z^T (P_s^s − P_0)^T (P_q^q − P_0) z ≤ ‖z‖_2^2·‖P_s^s − P_0‖_2·‖P_q^q − P_0‖_2 ≤ γ_t²‖z‖_2^2; • (z_∥)^T (P_s^s − P_0) z ≤ γ_t‖z_∥‖_2·‖z‖_2 ≤ γ_t‖z‖_2^2. Thus, using (1 − α) Σ_{s=1}^{t−1} α^s = α(1 − α^{t−1}),

‖(G_{t−1}^{t−1} ··· G_0^0) z‖_2^2 ≤ α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + α²(1 − α^{t−1})² γ_t²‖z‖_2^2 + 2α^{t+1}(1 − α^{t−1}) γ_t‖z‖_2^2 + 2α(1 − α^t)(1 − α^{t−1}) γ_t‖z‖_2^2
 = α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + γ_t‖z‖_2^2 ( γ_t α²(1 − α^{t−1})² + 2(α − α^t) ).

Recalling the error estimate in Eq. (18),

‖x_{ε,t} − x_t‖_2^2 ≤ ‖θ_{t−1} θ_{t−2} ··· θ_0‖_2^2 · ‖(G_{t−1}^{t−1} ··· G_0^0) z‖_2^2 ≤ ‖θ_{t−1} ··· θ_0‖_2^2 · ( α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + γ_t‖z‖_2^2 ( γ_t α²(1 − α^{t−1})² + 2(α − α^t) ) ).

In the special case when all θ_t are orthogonal, γ_t := max_{s≤t} (1 + κ(θ̄_s)²)‖I − θ̄_s^T θ̄_s‖_2 = 0; thus ‖x_{ε,t} − x_t‖_2^2 = α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2. B APPENDIX B DETAILS OF EXPERIMENTAL SETTING B.1 NETWORK CONFIGURATIONS Since the proposed CLC-NN optimizes the entire state trajectory, it is important to have a relatively smooth state trajectory, in which case, when the reconstruction loss ‖E_t(x_t) − x_t‖_2^2 at layer t is small, the reconstruction losses at its adjacent layers should also be small. For this reason, we use a residual neural network (He et al., 2016) as the network candidate to retain smoother dynamics. The configuration of the residual neural network used for both CIFAR-10 and CIFAR-100 is shown in Tab. 4. Based on this configuration, we construct 4 embedding functions applied at the input space and at the outputs of the initial layer, residual block 1, and residual block 2. The output of residual block 3 is embedded with a linear orthogonal projection. We randomly select 5000 clean training data samples to collect state trajectories at all 5 locations. • For the linear orthogonal projections: we apply principal component analysis to each of the state collections. We retain the first r columns of the resulting basis, with r = argmin{ i : (λ_1 + ... + λ_i)/(λ_1 + ... + λ_d) ≥ 1 − δ }, where δ = 0.1. • For the nonlinear embedding: we train 4 convolutional auto-encoders for the input space and for the outputs of the initial layer and residual blocks 1 and 2. All of the embedding functions are trained individually. We adopt a shallow convolutional auto-encoder structure to gain fast inference speed, in which case CLC-NN equipped with linear embedding often outperforms the nonlinear embedding, as shown in Tab. 1. The configuration of all 4 convolutional auto-encoders is shown in Tab. 5.
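A minimal PyTorch stand-in for one such shallow convolutional auto-encoder follows; the channel counts are placeholders (the actual configurations are in Tab. 5), and the random tensors stand in for collected clean hidden states:

```python
import torch
import torch.nn as nn

class ShallowConvAE(nn.Module):
    """A minimal stand-in for an embedding function E_t."""
    def __init__(self, ch):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(ch, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.ConvTranspose2d(16, ch, 3, stride=2, padding=1, output_padding=1)

    def forward(self, x):
        return self.dec(self.enc(x))

E = ShallowConvAE(ch=16)
opt = torch.optim.Adam(E.parameters(), lr=1e-3)
clean_states = torch.randn(64, 16, 32, 32)   # stand-in for collected hidden states x_t
for _ in range(5):                           # offline training on clean data only
    opt.zero_grad()
    loss = ((E(clean_states) - clean_states) ** 2).mean()  # reconstruction loss
    loss.backward()
    opt.step()
```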
B.2 PERTURBATIONS AND DEFENSIVE TRAINING In this section, we give details about the perturbations and robust networks considered in this work. The adversarial training objective function is

min_{θ∈Θ} max_{x_{ε,0}=∆(x_0,ε)} E_{(x_0,y)∼D} [(1 − λ)·Φ_i(x_{ε,T}, y, θ) + λ·Φ_i(x_T, y, θ)],

where ∆(x_0, ε) generates a perturbed data sample from a given input x_0 within the range of ε, and λ balances between standard accuracy and robustness. We choose λ = 0.5 in all adversarial training. For robust networks, we consider both perturbation-agnostic and non-agnostic methods. For a perturbation-agnostic adversarial training algorithm equipped with ∆(x_0, ε), the resulting network is the most robust against the ∆(x_0, ε) perturbation. On the contrary, perturbation non-agnostic robust training methods are often robust against many types of perturbations. • Adversarial training with the fast gradient sign method (FGSM) (Goodfellow et al., 2014) considers perturbed data of the form

x_{ε,0} = x_0 + ε·sign(∇_{x_0} Φ(x_T, y)),  (x_0, y) ∼ D,

where sign(·) outputs the sign of the input. FGSM thus considers the worst case within the range of ε along the direction of increasing gradient ∇_{x_0} Φ(x_T, y). Due to this worst-case consideration, it does not scale well to deep networks; for this reason, we adversarially train the network with FGSM using ε = 4, which is half of the maximum perturbation considered in this paper. • The label smoothing training (Label Smooth) (Hazan et al., 2017) does not utilize any perturbation information ∆(x_0, ε). It converts one-hot labels into soft targets by setting the correct class to 1 − ε, while the other classes receive the value ε/(N − 1), where ε is a small constant and N is the number of classes. Specifically, we choose ε = 0.9 in this paper. • Adversarial training with projected gradient descent (PGD) (Madry et al., 2017) generates adversarial data by iteratively running FGSM with a small step size, which results in stronger perturbations than FGSM within the same range ε. We use 7 steps with ε = 2 to generate adversarial data for robust training. For perturbations, we consider the maximum ranges ε = 2, 4, 8 to test the network robustness against both strong and weak perturbations. In this work, we test network robustness with the manifold-based attack (Jalal et al., 2017), FGSM (Goodfellow et al., 2014), 20-step PGD (Madry et al., 2017), and the CW attack (Carlini & Wagner, 2017). B.3 ONLINE OPTIMIZATION Optimization Methods. We use Adam (Kingma & Ba, 2014) with default settings to maximize the Hamiltonian Eq. (9). Solving the PMP therefore brings extra computational cost at inference time. Each online iteration of solving the PMP requires a combination of forward propagation (Eq. (7)), backward propagation (Eq. (8)), and a maximization w.r.t. the control parameters (Eq. (9)), which has a computational cost approximately the same as performing one gradient-descent iteration of neural network training. For the numerical results presented in the paper, we choose the maximum number of iterations that gives the best performance from among [5, 10, 20, 30, 50]. C MORE NUMERICAL EXPERIMENTS The proposed CLC-NN is designed to be compatible with existing open-loop trained networks. We show extra experiments employing the proposed CLC-NN on two baseline models, DenseNet-40 (Table 6). The layer-wise projection performs an orthogonal projection on the hidden state.
We define the local cost function at the t-th layer as

J(x_t, u_t) = (1/2)‖Q_t(x_t + u_t)‖_2^2 + (c/2)‖u_t‖_2^2,

and the layer-wise projection achieves the optimal solution at local time t, u_t^*(x_t) = argmin_{u_t} J(x_t, u_t). However, the layer-wise optimal control solution does not guarantee optimality across all layers. In Table 7, we compare the proposed CLC-NN with the layer-wise projection. Under all perturbations, the proposed CLC-NN outperforms the layer-wise projection. D ROBUSTNESS AGAINST MANIFOLD-BASED ATTACK The manifold-based attack (Jalal et al., 2017) (denoted as Manifold) has shown great success in breaking down manifold-based defenses (Samangouei et al., 2018). The proposed CLC-NN can successfully defend against this adversarial attack, which is specifically designed to defeat manifold-based defenses, and improves the robust accuracy from 1% to 81% for the standard trained model on CIFAR-10, and from 2% to 52% on CIFAR-100. We provide a detailed explanation for the successful defense of the proposed CLC-NN against such a strong adversarial attack. The existing manifold-based defense (Samangouei et al., 2018) focuses on detecting and de-noising the input components that do not lie within the underlying manifold. The overpowered attack proposed in Jalal et al. (2017) searches for adversarial attacks within the embedded latent space, which is undetectable by manifold-based defenses and caused a complete defense failure. In a real implementation, the manifold-based attack (Jalal et al., 2017) is detectable and controllable under the proposed framework for the following reason. The numerically generated manifold embedding functions are not ideal. The error sources of non-ideal embedding functions are mainly the algorithm used to compute the manifold, the architecture of the embedding function, and the distribution shift between training and testing data (embedding functions fit on training data do not perfectly agree with testing data). In this case, even if the perturbation is undetectable and uncontrollable at the initial layer, as it propagates into the hidden layers, each layer amplifies it; therefore, the perturbation becomes detectable and controllable in the hidden layers. We randomly select a batch of testing data to generate the manifold-based attack following the same procedure proposed in Jalal et al. (2017). The proposed method improves the attacked accuracy from 1% to 78%. More specifically, we compare the differences of all hidden states spanning the orthogonal complement between a perturbed testing sample and its unperturbed counterpart, ‖P_t^⊥ x_{ε,t} − P_t^⊥ x_t‖, where P_t^⊥ is the projection onto the orthogonal complement. The difference grows overall, taking the values 0, 0.016, 0.0438, 0.0107, 0.0552 for the hidden states at layers 0, 1, 2, 3, 4, respectively. This validates the argument for how the proposed method is able to detect such a perturbation and control it in the hidden layers. Furthermore, we provide some insights into the reasons behind the success of such an adversarial attack. This follows the same concept as the existence of adversarial attacks in neural networks. The highly nonlinear behaviour of neural networks preserves complex representative ability; meanwhile, this powerful representation results in vulnerability. For example, a constant function has a 50% chance of making a correct prediction in a binary classification problem under any perturbation, but its performance is limited.
E DEFINITION OF THREAT MODEL Generally, an attacker should not have access to the hidden states during inference; in particular, an attacker is not allowed to inject extra noise during inference. To define the threat model of the proposed method in the white-box setting, an attacker has access to both the network and all embedding functions. The condition under which a perturbation ε · z makes our method vulnerable is
Σ_{t=0}^{T−1} ‖E_t(x_{ε,t}) − x_{ε,t}‖_2^2 = 0, with x_{ε,0} = x_0 + ε · z.
In words, the perturbation ε · z applied to the input data must result in zero reconstruction loss across all hidden layers, which means that its corresponding state trajectory does not span any of the orthogonal complements of the hidden state spaces. Conventional gradient-based attackers cannot guarantee to find a perfect attack satisfying the above equation. A possible way is to perform a grid search backward through the layers for an adversarial attack satisfying the threat-model condition, which is extremely costly.
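As an illustration only, the threat-model condition above could be checked as in the sketch below; `layers` and `embeds` are assumed lists of per-layer forward maps f_t and embedding functions E_t, and the tolerance is a free choice.

```python
import numpy as np

def satisfies_threat_model(x_pert, layers, embeds, tol=1e-8):
    # the attack is undetectable only if every layer's reconstruction
    # loss ||E_t(x_{eps,t}) - x_{eps,t}||_2^2 vanishes along the trajectory
    x, total = x_pert, 0.0
    for f_t, E_t in zip(layers, embeds):
        total += np.sum((E_t(x) - x) ** 2)
        x = f_t(x)
    return total <= tol
```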
1. What is the main contribution of the paper regarding the performance of deep neural networks?
2. What are the strengths of the proposed approach, particularly in terms of intuition and efficiency?
3. What are the concerns regarding the choice of loss function and its impact on the method?
4. How does the reviewer assess the controllability and observability of the controlled system in the proposed method?
5. What are the suggestions for improving the empirical study, including comparisons with baselines?
Review
Review Keeping the performance of deep neural networks under data perturbations is an important and open problem. The authors propose an optimal control-based approach by taking a dynamical systems perspective. The proposed method sounds intuitive and efficient. The authors supply theoretical analysis and a (small) experimental evaluation. Overall, I believe the paper is a good one. However, I would like to get some points clarified: a) The authors used the manifold assumption (which is a reasonable assumption for many problems) to define the running loss (eq 3). (If I am not mistaken) they choose a quadratic loss to have a tractable optimization problem. However, under these assumptions, one may choose many different losses. Would you please comment on the form of the loss and its impact on the method? b) Let’s assume the dynamical systems perspective is the right perspective for analysing deep neural networks (to be honest, I don’t have any criticism about this). To use control-theoretic tools, one needs to comment on the controllability and observability of the controlled system. I suspect these properties are a function of the neural network architecture, or do the authors think the proposed method (as shown in Figure 1) makes each and every deep neural network architecture controllable and/or observable? I would like to hear the authors’ perspective on these issues. c) As I mentioned before, the empirical study is quite small, and I didn’t see any baseline (am I missing something here?). Would the authors consider extending their empirical study and comparing their method with some baselines? I would like to emphasize one more time that I am positive about the paper. However, I would like to note that I am not an expert in the field and I am open to changing my view in either direction.
ICLR
Title Towards Robust Neural Networks via Close-loop Control Abstract Despite their success in massive engineering applications, deep neural networks are vulnerable to various perturbations due to their black-box nature. Recent studies have shown that a deep neural network can misclassify the data even if the input data is perturbed by an imperceptible amount. In this paper, we address the robustness issue of neural networks by a novel close-loop control method from the perspective of dynamic systems. Instead of modifying the parameters in a fixed neural network architecture, a close-loop control process is added to generate control signals adaptively for the perturbed or corrupted data. We connect the robustness of neural networks with optimal control, using the geometrical information of the underlying data to design the control objective. The detailed analysis shows how the embedding manifolds of the state trajectory affect the error estimation of the proposed method. Our approach can simultaneously maintain the performance on clean data and improve the robustness against many types of data perturbations. It can also further improve the performance of robustly trained neural networks against different perturbations. To the best of our knowledge, this is the first work that improves the robustness of neural networks with close-loop control.¹
1 INTRODUCTION Due to the increasing data and computing power, deep neural networks have achieved state-of-the-art performance in many applications such as computer vision, natural language processing and recommendation systems. However, many deep neural networks are vulnerable to various malicious perturbations due to their black-box nature: a small (even imperceptible) perturbation of the input data may lead to completely wrong predictions (Szegedy et al., 2013; Nguyen et al., 2015). This has been a major concern in some safety-critical applications such as autonomous driving (Grigorescu et al., 2020) and medical image analysis (Lundervold & Lundervold, 2019). Various perturbations have been reported, including ℓp-norm based attacks (Madry et al., 2017; Moosavi-Dezfooli et al., 2016; Carlini & Wagner, 2017), semantic perturbations (Engstrom et al., 2017), etc. On the other side, some algorithms to improve the robustness against those perturbations have shown great success (Madry et al., 2017). However, most robustly trained models are tailored for certain types of perturbations, and they do not work well for other types of perturbations. Khoury & Hadfield-Menell (2018) showed the non-existence of an optimal decision boundary for any ℓp-norm perturbation.
Recent works (E, 2017; Haber & Ruthotto, 2017) have shown the connection between dynamical systems and neural networks. This dynamical system perspective provides some interesting theoretical insights about the robustness issue. Given a set of data x_0 ∈ R^d and its labels y ∈ R^l with a joint distribution D, training a neural network can be considered as solving
min_θ E_{(x_0,y)∼D} [Φ(x_T, y)], s.t. x_{t+1} = f(x_t, θ_t),
§Equal contributing authors. ¹A Pytorch implementation can be found at: https://github.com/zhuotongchen/Towards-Robust-Neural-Networks-via-Close-loop-Control.git
where θ are the unknown parameters to train, and f and Φ represent the forward propagation rule and the loss function (e.g., cross-entropy), respectively.
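To make this dynamical-system view concrete, the forward pass can be rolled out as the discrete dynamics x_{t+1} = f(x_t, θ_t); the toy residual dynamics below are an illustrative assumption, not the architecture used later in the paper.

```python
import numpy as np

def forward_trajectory(x0, thetas, f):
    # roll out x_{t+1} = f(x_t, theta_t) and keep the whole state trajectory
    traj = [x0]
    for theta_t in thetas:
        traj.append(f(traj[-1], theta_t))
    return traj

f = lambda x, th: x + np.tanh(th @ x)                 # a toy residual layer
traj = forward_trajectory(np.ones(4), [0.1 * np.eye(4)] * 3, f)
```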
The dynamical system perspective interprets the vulnerability of neural networks as a system instability issue, which concerns the state trajectory variation under small perturbations applied to the initial conditions. Optimal control theory focuses on developing a control model to adjust the system state trajectory in an optimal manner. The first work that links and extends the classical back-propagation algorithm using optimal control theory was presented in Li et al. (2017), where the direct relationship between the Pontryagin’s Maximum Principle (Kirk, 1970) and gradient-based network training was established. Ye et al. (2019) used control theory to adjust the hyperparameters in the adversarial training algorithm. Han et al. (2018) established the mathematical basis of the optimal control viewpoint of deep learning. These existing works on algorithm development are open-loop control methods, since they commonly treat the network weights θ as control parameters and keep them fixed once the training is done. The fixed control parameters θ operate optimally for data sampled from the data distribution D. However, various perturbation methods cause data distributions to deviate from the true distribution D (Song et al., 2017) and cause poor performance with the fixed open-loop control parameters.
1.1 PAPER CONTRIBUTIONS To address the limitation of open-loop control methods, we propose the Close-Loop Control Neural Network (CLC-NN), the first close-loop control method to improve the robustness of neural networks. As shown in Fig. 1, our method adds additional blocks to a given T-layer neural network: embedding functions E_t, which induce running losses in all layers that measure the discrepancies between true features and observed features under input perturbation; control processes then generate control variables u_t to minimize the total running loss under various data perturbations. The original neural network can be designed by either standard training or robust training. In the latter case, our CLC-NN framework can achieve extra robustness against different perturbations. The forward propagation rule is thus modified with an extra control parameter u_t ∈ R^{d′}:
x_{t+1} = f(x_t, θ_t, u_t).
Fig. 1 should not be misunderstood as an open-loop control. From the perspective of dynamic systems, x_0 is an initial condition, and the excitation input signal is u_t (which is 0 in a standard feed-forward network). Therefore, the forward signal path is from u_t to the internal states x_t and then to the output label y. The path from x_t to the embedding function E_t(x_t) and then to the excitation signal u_t forms a feedback and closes the whole loop. The technical contributions of this paper are summarized below:
• The proposed method relies on the well-accepted assumption that the data and hidden-state manifolds are low dimensional compared to the ambient dimension (Fefferman et al., 2016). We study the geometrical information of the data and hidden layers to define the objective function for control. Given a trained T-layer neural network, a set of embedding functions E_t are trained off-line by minimizing the reconstruction loss ‖E_t(x_t) − x_t‖ over some clean data from D only. The embedding functions support defining the running loss required in our control method.
• We define the control problem by dynamic programming and implement the online iterative solver based on the Pontryagin’s Maximum Principle to avoid the curse of dimensionality.
The proposed close-loop control formulation does not require prior information about the perturbation.
• We provide a theoretical error bound of the controlled system for the simplified case with linear activation functions and linear embedding. This error bound reveals how the close-loop control improves neural network robustness in the simplest setting.
2 RELATED WORKS Many techniques have been reported to improve the robustness of neural networks, such as data augmentation (Shorten & Khoshgoftaar, 2019), gradient masking (Liu et al., 2018), etc. We review adversarial training and reactive defense, which are most relevant to this work.
Adversarial Training. Adversarial training is (possibly) the most popular robust training method, and it solves a min-max robust optimization problem to minimize the worst-case loss with perturbed data. Adversarial training effectively regularizes the network’s local Lipschitz constants of the loss surface around the data manifold (Liu et al., 2018). Zhang et al. (2019) formulated robust training using the Pontryagin’s Maximum Principle; such open-loop control methods result in a set of fixed parameters that operate optimally only on the considered perturbation. Liu et al. (2020a;b) considered a close-loop formulation from the differential dynamic programming perspective; this algorithm is nevertheless categorized as an open-loop control method, because it utilizes the state feedback information only to boost the training convergence and results in a set of fixed controls for any unseen data. On the contrary, the proposed CLC-NN formulation adaptively targets the inputs with different control parameters and is capable of distinguishing clean data by generating no control.
Reactive Defense. A reactive defense method tries to reject or pre-process the input data that may cause mis-classifications. Metzen et al. (2017) rejected perturbed data by using adversarial detectors that are trained with adversarial data to detect abnormal data during forward propagation. Song et al. (2017) estimated the input data distribution D with a generative model (Oord et al., 2016) to detect data that does not belong to D; a greedy method is then applied to search the local neighbourhood of the input data for a more statistically plausible counterpart. This purification process has shown improved accuracy with adversarial data contaminated by various types of perturbations. Purification can be considered as a one-step method to solve the optimal control problem whose objective function is defined over the initial condition only. On the contrary, the proposed CLC-NN solves the control problem by the dynamic programming principle, and its objective function is defined over the entire state trajectory, which guarantees the optimality of the resulting controls.
3 THE CLOSE-LOOP CONTROL FRAMEWORK FOR NEURAL NETWORKS Now we present a close-loop optimal control formulation to address the robustness issue of deep learning. Consider a neural network consisting of model parameters θ equipped with an external control policy π, where π ∈ Π is a collection of functions R^d → R^{d′} acting on the state and outputting the control signal. The feed-forward propagation in a T-layer neural network can be represented as
x_{t+1} = f(x_t, θ_t, π_t(x_t)), t = 0, · · · , T − 1. (1)
Given a trained network, we solve the following optimization problem:
min_π E_{(x_0,y)∼D} [J(x_0, y, π)] := min_π E_{(x_0,y)∼D} [ Φ(x_T, y) + Σ_{s=0}^{T−1} L(x_s, π_s(x_s)) ], s.t. Eq. (1), (2)
where π collects the control policies π_0, · · · , π_{T−1} for all layers.
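A minimal sketch of the controlled propagation of Eq. (1) and the accumulation of the running losses in Eq. (2) is given below; the terminal loss Φ is left to the caller, and all names are placeholders rather than the authors' implementation.

```python
def controlled_forward(x0, thetas, policies, f, running_loss):
    # propagate x_{t+1} = f(x_t, theta_t, pi_t(x_t))  (Eq. (1)) while
    # accumulating sum_s L(x_s, pi_s(x_s)) from the objective (Eq. (2))
    x, J = x0, 0.0
    for theta_t, pi_t in zip(thetas, policies):
        u_t = pi_t(x)
        J += running_loss(x, u_t)
        x = f(x, theta_t, u_t)
    return x, J
```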
Note that (2) differs from the open-loop control used in standard training. An open-loop control that treats the network parameters as control variables seeks a set of fixed parameters θ to match the output with the true label y by minimizing the terminal loss Φ, while the running loss L defines a regularization for θ. However, the terminal and running losses play different roles when our goal is to improve the robustness of a neural network by generating adaptive controls for different inputs.
Challenge of Close-loop Control for Neural Networks. Optimal control has been well studied in the control community for trajectory optimization, where one defines the running loss as the error between the actual state x_t and a reference state x_{t,ref} over the time interval [0, T]. The resulting control policy adjusts x_t and makes it approach x_{t,ref}. In this paper, we apply the idea of trajectory optimization to improve the robustness of a neural network by adjusting the undesired states x_t. However, the formulation is more challenging for neural networks: we do not have a “reference” state during the inference process; therefore, it is unclear how to define the running loss L. In the following, we investigate manifold embeddings of the state trajectory to precisely define the loss functions Φ and L of Eq. (2) required for the control objective function of a neural network.
3.1 MANIFOLD LEARNING FOR STATE TRAJECTORIES State Manifold. Our controller design is based on the “manifold hypothesis”: real-world high-dimensional data can often be embedded in a lower-dimensional manifold M (Fefferman et al., 2016). Indeed, neural networks extract the embedded features from M. To fool a well-trained neural network, the perturbed data often stay away from the data manifold M (Khoury & Hadfield-Menell, 2018). We consider the data space Z (x ∈ Z, ∀x ∼ D) as Z = Z^∥ ⊕ Z^⊥, where Z^∥ contains the embedded manifold M and Z^⊥ is the orthogonal complement of Z^∥. During forward propagation, the state manifold embedded in Z^∥ varies at different layers due to both the nonlinear activation function f and the state dimensionality variation. Therefore, we denote Z_t = Z_t^∥ ⊕ Z_t^⊥ as the state space decomposition at layer t, with M_t ⊂ Z_t^∥. Once an input is perturbed, the main effects causing misclassification lie in Z^⊥. Therefore, it is important to measure how far the possibly perturbed state x_t deviates from the state manifold M_t.
Embedding Function. Given an embedding function E_t that encodes x_t onto the lower-dimensional manifold M_t and decodes the result back to the full state space Z_t, the reconstruction loss ‖E_t(x_t) − x_t‖ measures the deviation of the possibly perturbed state x_t from the manifold M_t. The reconstruction loss is nonzero as long as x_t has components in Z_t^⊥. The embedding functions are constructed offline by minimizing the total reconstruction losses over a clean training data set.
• Linear Case: E_t(·) can be taken as V_t^r(V_t^r)ᵀ, where the columns of V_t^r form an orthonormal basis for Z_t^∥. Specifically, one can first perform a principal component analysis over a collection of hidden states at layer t; then V_t^r is obtained as the first r columns of the resulting eigenvectors (see the sketch below).
• Nonlinear Case: we choose a convolutional auto-encoder (detailed in Appendix B) to obtain a representative manifold embedding function E_t, due to its ease of implementation.
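For the linear case, the embedding can be fit offline exactly as described; here is a small NumPy sketch, assuming the hidden states of clean data are collected row-wise in a matrix H (the centering step is an illustrative choice, not prescribed by the text).

```python
import numpy as np

def fit_linear_embedding(H, r):
    # keep the first r principal directions of the collected states
    H0 = H - H.mean(axis=0)
    _, _, Vh = np.linalg.svd(H0, full_matrices=False)
    V_r = Vh[:r].T                       # orthonormal basis of Z_t_par
    return lambda x: V_r @ (V_r.T @ x)   # E_t(x) = V_r V_r^T x

def reconstruction_loss(E_t, x):
    return np.sum((E_t(x) - x) ** 2)     # ||E_t(x) - x||_2^2
```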
Based on the assumption that most perturbations lie in the Z^⊥ subspace, the embeddings are effective in detecting the perturbations as long as the target manifold is of low dimension. Alternative manifold learning methods such as Izenman (2012) may also be employed.
3.2 FORMULATION FOR THE CLOSE-LOOP CONTROL OF NEURAL NETWORKS Control Objectives. The above embedding function allows us to define a running loss L:
L(x_t, π_t(x_t), E_t(·)) = ‖E_t(x_t) − x_t‖_2^2 + (π_t(x_t))ᵀ R π_t(x_t). (3)
Here the matrix R defines a regularization term promoting controls of small magnitudes. In practical implementations, using a diagonal matrix R with small elements often helps to improve the performance. Now we are ready to design the control objective function of CLC-NN. Different from a standard open-loop control, this work sets the terminal loss Φ to zero because no true label is given during inference. Consequently, the close-loop control formulation in Eq. (2) becomes
min_π E_{(x_0,y)∼D} [J(x_0, y, π)] := min_π E_{(x_0,y)∼D} Σ_{t=0}^{T−1} [L(x_t, π_t(x_t), E_t(·))], s.t. Eq. (1). (4)
Assume that the input data is perturbed by a bounded and small amount, i.e., x_{ε,0} = x_0 + ε · z, where z can be either random or adversarial. The proposed CLC-NN adjusts the perturbed state trajectory x_{ε,t} such that it stays at a minimum distance from the desired manifold M_t while promoting controls of small magnitude.
Intuition. We use an intuitive example to show how CLC-NN controls the state trajectory of unseen data samples. We create a synthetic binary classification data set with 1500 samples. We train a residual neural network with one hidden layer of dimension 2, and adopt the fast gradient sign method (Goodfellow et al., 2014) to generate adversarial data. Fig. 2 (a) and (b) show the states of clean data (red and blue) and of perturbed data (black and gray) at t = 0 and t = 1, respectively. The CLC-NN adjusts the state trajectory to reduce the reconstruction loss, as shown in Fig. 2 (c) and (d), where a lighter background color represents a lower reconstruction loss. Comparing Fig. 2 (a) with (c), and Fig. 2 (b) with (d), we see that the perturbed states in Fig. 2 (a) and (b) deviate from the desired state manifold (light green region) and have a high reconstruction loss. Running 1000 iterations of Alg. 1 adjusts the perturbed states and improves the classification accuracy from 86% to 100%.
4 IMPLEMENTATION VIA THE PONTRYAGIN’S MAXIMUM PRINCIPLE Dynamic Programming for Close-Loop Control (4). The control problem in Eq. (4) can be solved by the dynamic programming principle (Bellman, 1952). For simplicity, we consider one input data sample and define a value function V : T × R^d → R (where T := {0, 1, . . . , T − 1}). Here V(t, x) represents the optimal cost-to-go of Eq. (4) incurred from time t at state x. One can show that V(t, x) satisfies the dynamic programming principle
V(t, x) = inf_{π∈Π} [V(t + 1, x + f(x, θ_t, π(x))) + L(x, π(x), E_t(·))]. (5)
Eq. (5) gives a necessary and sufficient condition for the optimality of Eq. (4), and it is often solved backward in time by discretizing the entire state space. The state dimension of a modern neural network is on the order of thousands or even higher; therefore, discretizing the state space and directly solving Eq. (5) is intractable for real-world applications due to the curse of dimensionality.
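Returning briefly to the control objective, the running loss of Eq. (3) can be written directly; the sketch below instantiates the running_loss placeholder used earlier (E_t and R are assumed placeholders, with R the small diagonal regularizer mentioned above).

```python
import numpy as np

def running_loss(x_t, u_t, E_t, R):
    # Eq. (3): reconstruction loss plus a quadratic control penalty
    resid = E_t(x_t) - x_t
    return resid @ resid + u_t @ (R @ u_t)

# e.g., R = 1e-3 * np.eye(d) promotes controls of small magnitude
```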
Solving (5) via the Pontryagin’s Maximum Principle. To overcome the computational challenge, the Pontryagin’s Maximum Principle (Kirk, 1970) converts the intractable dynamic programming into two ordinary differential equations and a maximization condition. Instead of computing the control policy π of Eq. (5), the Pontryagin’s Maximum Principle provides a necessary condition for optimality with a set of control parameters [u*_0, · · · , u*_{T−1}]. The mean-field Pontryagin’s Maximum Principle can be considered when the initial condition is a batch of i.i.d. samples drawn from D. Specifically, we trade the intractable computational complexity for the processing time of solving the Hamilton equations and the associated maximization condition for every newly observed data sample. To begin with, we define the Hamiltonian H : T × R^d × R^d × R^l × R^m → R as
H(t, x_t, p_{t+1}, θ_t, u_t) := p_{t+1}ᵀ · f(x_t, θ_t, u_t) − L(x_t, u_t, E_t(·)). (6)
Let x* denote the corresponding optimally controlled state trajectory. There exists a co-state process p* : [0, T] → R^d such that the Hamilton equations
x*_{t+1} = ∇_p H(t, x*_t, p*_t, θ_t, u*_t), (x*_0, y) ∼ D, (7)
p*_t = ∇_x H(t, x*_t, p*_{t+1}, θ_t, u*_t), p*_T = 0, (8)
are satisfied. The terminal co-state is p_T = 0, since we do not consider the terminal loss Φ(x_T, y). Moreover, we have the Hamiltonian maximization condition
H(t, x*_t, p*_t, θ_t, u*_t) ≥ H(t, x*_t, p*_t, θ_t, u_t), ∀u ∈ R^{d′} and ∀t ∈ T. (9)
Instead of solving Eq. (5) for the optimal control policy π*(x_t), the Pontryagin’s Maximum Principle seeks, for a given initial condition, an open-loop optimal solution such that the global optimum of Eq. (5) is attained. The limitation of using the maximum principle is that the control parameters u*_t need to be solved for every unseen data sample to achieve the optimal solution.
Algorithm Flow. The numerical implementation of CLC-NN is summarized in Alg. 1:
Algorithm 1: CLC-NN with the Pontryagin’s Maximum Principle.
Input: possibly perturbed data x_ε, a trained neural network, embedding functions [E_1, · · · , E_{T−1}], maxItr (maximum number of iterations).
Output: a set of optimal control parameters u*_0, · · · , u*_{T−1}.
for k = 0 to maxItr do
  J_k = 0
  for t = 0 to T − 1 do
    x_{t+1,k} = f(x_{t,k}, θ_t, u_{t,k}), where x_{0,k} = x_ε (forward propagation, Eq. (7))
    J_k = J_k + L(x_{t,k}, u_{t,k}, E_t(x_{t,k})) (objective function, Eq. (4))
  end for
  for t = T to 1 do
    p_{t,k} = p_{t+1,k}ᵀ · ∇_{x_t} f(x_{t,k}, θ_t, u_{t,k}) − ∇_{x_t} L(x_{t,k}, u_{t,k}, E_t(x_{t,k})), where p_{T,k} = 0 (backward propagation, Eq. (8))
  end for
  for t = 0 to T − 1 do
    u_{t,k+1} = u_{t,k} + ( p_{t+1,k}ᵀ · ∇_{u_t} f(x_{t,k}, θ_t, u_{t,k}) − ∇_{u_t} L(x_{t,k}, u_{t,k}, E_t(x_{t,k})) ) (maximization of the Hamiltonian, Eq. (9), via gradient ascent on Eq. (6))
  end for
end for
Given a trained network (from either standard or adversarial training) and a set of embedding functions, the controls are initialized as u_t = 0, ∀t ∈ T, because a random initialization generally weakens the robustness performance, and a clean trajectory often does not produce any running loss for the gradient update on the control parameters. In every iteration, a given input x_0 is propagated forward with Eq. (7) to obtain all the intermediate hidden states x_t and to accumulate the cost J. Eq. (8) backward propagates the co-state p_t, and Eq. (9) maximizes the t-th Hamiltonian with the current x_t and p_t to compute the optimal control parameters u*_t.
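A compact PyTorch sketch of Alg. 1 is given below. It assumes additive, dimension-preserving controls (consistent with the linear model of Section 5) and omits the R-penalty for brevity; automatic differentiation of the accumulated running loss plays the role of the co-state recursion (Eq. (8)), and Adam performs the Hamiltonian maximization step (Eq. (9)), as in Appendix B.3. All names are placeholders, not the authors' implementation.

```python
import torch

def clc_control(x_eps, layers, embeds, n_itr=30, lr=0.01):
    # controls are initialized to zero, as in Alg. 1
    us = [torch.zeros_like(x_eps, requires_grad=True) for _ in layers]
    opt = torch.optim.Adam(us, lr=lr)
    for _ in range(n_itr):
        opt.zero_grad()
        x, J = x_eps, 0.0
        for f_t, E_t, u_t in zip(layers, embeds, us):
            xc = x + u_t                          # controlled state (assumed additive)
            J = J + ((E_t(xc) - xc) ** 2).sum()   # running loss, Eq. (3), R omitted
            x = f_t(xc)                           # forward propagation, Eq. (7)
        J.backward()                              # implicit co-states, Eq. (8)
        opt.step()                                # control update, Eq. (9)
    return [u.detach() for u in us]
```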
5 ERROR ANALYSIS FOR SIMPLIFIED LINEAR CASES For ease of analysis, we consider a simplified neural network with linear activation functions, x_{t+1} = θ_t(x_t + u_t), and reveal why our proposed method can improve robustness in this simplest setting. Given a perturbed data sample x_{ε,0}, we denote its perturbation-free counterpart as x_0, so that z = x_{ε,0} − x_0. We consider a general perturbation where z is the direct sum of two orthogonal contributions: z_∥, which is a perturbation within the data manifold (subspace), and z_⊥, which is a perturbation in the orthogonal complement of the data manifold. This setting is general: if we consider adversarial attacks, then the perturbation along the orthogonal complement dominates; in contrast, if we consider random perturbations, then the two perturbations are on the same scale. Our formulation covers both extreme scenarios, together with intermediate cases.
We use an orthogonal projection as the embedding function, E_t = V_t^r(V_t^r)ᵀ, where V_t^r holds the first r columns of the eigenvectors computed by principal component analysis on a collection of states x_t. The proposed CLC-NN minimizes ‖x_{ε,t} − x_t‖_2^2 by reducing the components of x_{ε,t} that lie in the orthogonal complement of Z_t^∥. The following theorem provides an error estimation between x_{ε,t} and x_t.
Theorem 1. For t ≥ 1, we have the error estimation
‖x_{ε,t} − x_t‖_2^2 ≤ ‖θ_{t−1} · · · θ_0‖_2^2 · ( α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + γ_t‖z‖_2^2 ( γ_t α²(1 − α^{t−1})² + 2(α − α^t) ) ), (10)
where γ_t := max_{s≤t} (1 + κ(θ̄_s)²) ‖I − θ̄_sᵀθ̄_s‖_2 with θ̄_s := θ_{s−1} · · · θ_0, and α = c/(1 + c), where c is the control regularization. In particular, the equality
‖x_{ε,t} − x_t‖_2^2 = α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 (11)
holds when all θ_t are orthogonal.
The detailed derivation is presented in Appendix A. Let us summarize the insights from Theorem 1.
• The above error estimation is general for any input perturbation. It shows the working principle behind the proposed CLC-NN in controlling the perturbation that lies in the orthogonal complement of the input subspace (z_⊥).
• The above error estimation improves as the control regularization c goes to 0 (so α → 0). It is not the sharpest possible, as it relies on a greedily optimal control at each layer. The globally optimal control defined by the Riccati equation may achieve a lower loss when c ≠ 0.
• When the dimension r of the embedding subspace decreases, our control becomes more effective in reducing ‖x_{ε,t} − x_t‖_2^2. This means that the control approach works best when the data is constrained to a low-dimensional manifold, which is consistent with the manifold hypothesis. In particular, observe that as r → 0, ‖z_∥‖_2^2 → 0.
• The obtained upper bound is tight: the estimated upper bound becomes the actual error if all the forward propagation layers are orthogonal matrices.
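The orthogonal case of Theorem 1 can be checked numerically; the script below simulates the error recursion of Eq. (16) with the greedy control of Eq. (14) and random orthogonal layers (all sizes and the random seed are toy choices).

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, c, T = 6, 2, 0.5, 4
alpha = c / (1.0 + c)
I = np.eye(d)
V_r = np.linalg.qr(rng.standard_normal((d, r)))[0]   # manifold basis at t = 0
P = V_r @ V_r.T                                      # orthogonal projection P_0
z = rng.standard_normal(d)
z_par, z_perp = P @ z, (I - P) @ z
e = z_par + z_perp                                   # initial error e_0 = z
for t in range(T):
    Q = I - P
    K = np.linalg.solve(c * I + Q.T @ Q, Q.T @ Q)    # feedback gain, Eq. (14)
    theta = np.linalg.qr(rng.standard_normal((d, d)))[0]  # orthogonal layer
    e = theta @ (I - K) @ e                          # error recursion, Eq. (16)
    P = theta @ P @ theta.T                          # the manifold rotates with theta
pred = alpha ** (2 * T) * (z_perp @ z_perp) + z_par @ z_par
assert np.isclose(e @ e, pred)                       # Eq. (11) holds exactly
```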
6 NUMERICAL EXPERIMENTS We test our proposed CLC-NN framework under various input data perturbations. Here we briefly summarize our experimental settings, and we refer readers to Appendix B for the details.
• Original Networks without Close-Loop Control. We choose residual neural networks (He et al., 2016) with ReLU activation functions as our target for close-loop control. In order to show that CLC-NN can improve the robustness in various settings, we consider networks from both standard and adversarial training. We consider multiple adversarial training methods: the fast gradient sign method (FGSM) (Goodfellow et al., 2014), projected gradient descent (PGD) (Madry et al., 2017), and the label smoothing training (Label Smooth) (Hazan et al., 2017).
• Input Perturbations. In order to test our CLC-NN framework, we perturb the input data within a radius of ε, with ε = 2, 4, and 8, respectively. We consider various perturbations, including the manifold-based attack (Jalal et al., 2017) (Manifold) as well as adversarial attacks such as the FGSM, PGD, and CW methods (Carlini & Wagner, 2017).
• CLC-NN Implementations. We consider both linear and nonlinear embeddings in our close-loop control. Specifically, we employ a principal component analysis with a 1% truncation error for the linear embedding, and convolutional auto-encoders for the nonlinear embedding. We use Adam (Kingma & Ba, 2014) to maximize the Hamiltonian function (9) and keep the same hyperparameters (learning rate, maximum iterations) for each model against all perturbations.
Result Summary: Table 1 and Table 2 show the results on both the CIFAR-10 and CIFAR-100 datasets for neural networks from standard training and adversarial training, respectively.
• CLC-NN significantly improves the robustness of neural networks from standard training. Table 1 shows that the baseline network trained on a clean data set becomes completely vulnerable (with almost 0% accuracy) under PGD and CW attacks. Our CLC-NN improves its accuracy to nearly 40% and 80% under PGD and CW attacks, respectively. The accuracy under FGSM attacks has almost been doubled by our CLC-NN method. The accuracy on clean data is slightly decreased, because the lower-dimensional embedding functions cannot exactly capture Z^∥ or M.
• CLC-NN further improves the robustness of adversarially trained networks. Table 2 shows that while an adversarially trained network is inherently robust against certain types of perturbations, CLC-NN strengthens its robustness significantly against various perturbations.
• The robustness improvement for adversarially trained networks is less significant. This is expected, because the trajectory of perturbed data lies in the embedding subspace Z^∥ if that data sample has been used in adversarial training. However, our experiments show that applying CLC-NN to adversarially trained networks achieves the best performance under most attacks.
Comparison with PixelDefend (Song et al., 2017). Our method achieves similar performance on CIFAR-10 under a slightly different experimental setting. Specifically, PixelDefend improved the robustness of a normally trained 62-layer ResNet from 0% to 78% against the CW attack. Our proposed CLC-NN improves the robustness of a 20-layer ResNet from 0% to 81% against CW attacks. Furthermore, we show that CLC-NN is robust against the manifold-based attack. No result was reported for CIFAR-100 in Song et al. (2017).
Comparison with Reactive Defense. Reactive defenses can be understood as applying a control only at the initial condition of a dynamical system. Specifically, a reactive defense equipped with a linear embedding admits the following dynamics:
x_{t+1} = f(x_t, θ_t), s.t. x_0 = V_0^r(V_0^r)ᵀ x_{ε,0}. (12)
By contrast, CLC-NN controls all hidden states and results in a decreasing error as the number of layers T increases (cf. Theorem 1). To quantitatively compare CLC-NN with reactive defense, we implement them with the same linear embedding functions and against all perturbations. In Table 3, CLC-NN outperforms reactive defense in almost all cases, except that their performances are case-dependent on clean data.
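For comparison, the reactive defense of Eq. (12) purifies only the input and then runs the uncontrolled network; a minimal sketch with placeholder names follows.

```python
import numpy as np

def reactive_defense(x_eps, V_r0, layers):
    # Eq. (12): one-shot projection of the input onto the data manifold
    x = V_r0 @ (V_r0.T @ x_eps)
    for f_t in layers:        # then an ordinary, uncontrolled forward pass
        x = f_t(x)
    return x
```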
7 CONCLUSION We have proposed a close-loop control formulation to improve the robustness of neural networks. We have studied the embedding of the state trajectory during forward propagation to define the optimal control objective function. The numerical experiments have shown that our method can improve the robustness of a trained neural network against various perturbations. We have provided an error estimation for the proposed method in the linear case. Our current implementation uses the Pontryagin’s Maximum Principle and an online iterative algorithm to overcome the intractability of solving the dynamic programming problem. This online process adds extra inference time. In the future, we plan to extend the theoretical analysis to the nonlinear embedding case.
Acknowledgement Zhuotong Chen and Zheng Zhang are supported by NSF CAREER Award No. 1846476 and NSF CCF No. 1817037. Qianxiao Li is supported by the start-up grant under the NUS PYP programme.
A APPENDIX A: ERROR ESTIMATION FOR THE PROPOSED CLC-NN
Preliminaries. We define the performance index at time t as
J(x_t, u_t) = (1/2)‖Q_t(x_t + u_t)‖_2^2 + (c/2)‖u_t‖_2^2, (13)
where Q_t = I − V_t^r(V_t^r)ᵀ and V_t^r is the linear projection matrix at time t with only its first r principal components, corresponding to the largest r eigenvalues. The optimal feedback control is defined as u*_t(x_t) = argmin_{u_t} J(x_t, u_t). Due to the linear system and quadratic performance index, the optimal feedback control admits an analytic solution obtained by taking the gradient of the performance index (Eq. (13)) and setting it to 0:
∇_u J(x_t, u_t) = ∇_u ( (1/2)‖Q_t(x_t + u_t)‖_2^2 + (c/2)‖u_t‖_2^2 ) = Q_tᵀQ_t x_t + Q_tᵀQ_t u_t + c · u_t,
which leads to the analytic solution
u*_t(x_t) = −(c · I + Q_tᵀQ_t)⁻¹ Q_tᵀQ_t x_t. (14)
The analytic control u*_t optimizes the performance index greedily at time step t; the error measured by Eq. (13) for the dynamic programming solution x_{ε,t} must be smaller than or equal to that of the state trajectory equipped with the u*_t defined by Eq. (14), which gives a guaranteed upper bound for the error estimation of the dynamic programming solution. We define the feedback gain matrix K_t = (c · I + Q_tᵀQ_t)⁻¹ Q_tᵀQ_t; thus, the one-step optimal feedback control can be represented as u*_t = −K_t x_t. The difference between the controlled system with a perturbed initial condition and the uncontrolled system without perturbation is
x_{ε,t+1} − x_{t+1} = θ_t(x_{ε,t} + u_t − x_t) = θ_t(x_{ε,t} − K_t x_{ε,t} − x_t). (15)
The control objective is to minimize the state components that span the orthogonal complement of the data manifold, via I − V_t^r(V_t^r)ᵀ. When the input to the feedback control stays entirely in the state manifold, so that ‖(I − V_t^r(V_t^r)ᵀ)x_t‖_2^2 = 0, the feedback control satisfies K_t x_t = 0. The state difference in Eq. (15) can then be rewritten by adding the zero term θ_t K_t x_t:
x_{ε,t+1} − x_{t+1} = θ_t(I − K_t)x_{ε,t} − θ_t x_t + θ_t K_t x_t = θ_t(I − K_t)(x_{ε,t} − x_t). (16)
In the following, we show a transformation of the control dynamics term (I − K_t) based on its definition.
Lemma 1. For t ≥ 0, we have I − K_t = α · I + (1 − α) · P_t, where P_t := V_t^r(V_t^r)ᵀ is the orthogonal projection onto Z_t^∥ and α := c/(1 + c), so that α ∈ [0, 1].
Proof. Recall that K_t = (c · I + Q_tᵀQ_t)⁻¹ Q_tᵀQ_t with Q_t = I − V_t^r(V_t^r)ᵀ. Q_t can be diagonalized as Q_t = V_t D V_tᵀ, where D is diagonal with its first r diagonal entries equal to 0 and its last (d − r) diagonal entries equal to 1. Furthermore, the feedback gain matrix K_t can be diagonalized as K_t = V_t D′ V_tᵀ, where D′ is diagonal with its first r diagonal entries equal to 0 and its last (d − r) diagonal entries equal to 1/(1 + c).
The control term (I − K_t) can thus be represented as I − K_t = V_t D″ V_tᵀ, where D″ is diagonal with its first r diagonal entries equal to 1 and its last (d − r) diagonal entries equal to c/(1 + c). Denoting the first r columns of V_t by V_t^r and the last (d − r) columns by V̂_t^r, it can further be written as
I − K_t = V_t^r(V_t^r)ᵀ + (c/(1 + c)) V̂_t^r(V̂_t^r)ᵀ = P_t + α(I − P_t) = α · I + (1 − α) · P_t.
Oblique Projections. Let P be a linear operator on R^d.
• We say that P is a projection if P² = P.
• P is an orthogonal projection if P = Pᵀ = P².
• If P² = P but P ≠ Pᵀ, it is called an oblique projection.
Proposition 2. For a projection P:
1. If P is an orthogonal projection, then ‖P‖_2 = 1.
2. If P is an oblique projection, then ‖P‖_2 > 1.
3. If P, Q are two projections such that range(P) = range(Q), then PQ = Q and QP = P.
4. If P is a projection, then rank(P) = Tr(P). Furthermore, if P is an orthogonal projection, then rank(P) = ‖P‖_F^2 = Tr(PPᵀ).
Define, for t ≥ 0, P_t^0 := P_t and P_t^{s+1} := θ_{t−s−1}⁻¹ P_t^s θ_{t−s−1} for s = 0, 1, . . . , t − 1.
Lemma 3. Let P_t^s be defined as above for 0 ≤ s ≤ t. Then:
1. P_t^s is a projection.
2. P_t^s is a projection onto Z_{t−s}^∥, i.e., range(P_t^s) = Z_{t−s}^∥.
3. ‖P_t^s‖_F^2 ≤ κ(θ_{t−1}θ_{t−2} . . . θ_{t−s})² · r, where κ(A) is the condition number of A, i.e., κ(A) = ‖A‖_2 · ‖A⁻¹‖_2, and r = rank(Z_0^∥) = rank(Z_1^∥) = . . . = rank(Z_t^∥).
Proof. 1. We prove it by induction on s for each t. For s = 0, P_t^0 = P_t, which is a projection by definition. Suppose the claim holds for s, so that P_t^s = P_t^s P_t^s; then, for (s + 1),
(P_t^{s+1})² = (θ_{t−s−1}⁻¹ P_t^s θ_{t−s−1})² = θ_{t−s−1}⁻¹ (P_t^s)² θ_{t−s−1} = θ_{t−s−1}⁻¹ P_t^s θ_{t−s−1} = P_t^{s+1}.
2. We prove it by induction on s for each t. For s = 0, P_t^0 = P_t, which is the orthogonal projection onto Z_t^∥. Suppose that P_t^s is a projection onto Z_{t−s}^∥; then, for (s + 1), P_t^{s+1} = θ_{t−s−1}⁻¹ P_t^s θ_{t−s−1}, which implies
range(P_t^{s+1}) = range(θ_{t−s−1}⁻¹ P_t^s) = {θ_{t−s−1}⁻¹ x : x ∈ Z_{t−s}^∥} = Z_{t−s−1}^∥.
3. We use the inequalities ‖AB‖_F ≤ ‖A‖_2 ‖B‖_F and ‖AB‖_F ≤ ‖A‖_F ‖B‖_2. By the definition of P_t^s,
P_t^s = (θ_{t−1}θ_{t−2} · · · θ_{t−s})⁻¹ P_t^0 (θ_{t−1}θ_{t−2} · · · θ_{t−s}),
we have
‖P_t^s‖_F^2 ≤ ‖(θ_{t−1}θ_{t−2} · · · θ_{t−s})⁻¹‖_2^2 · ‖θ_{t−1}θ_{t−2} · · · θ_{t−s}‖_2^2 · ‖P_t^0‖_F^2 ≤ κ(θ_{t−1}θ_{t−2} · · · θ_{t−s})² · r,
where the last step uses Proposition 2(4).
The following lemma uses the concept of oblique projection to show a recursive relationship that projects any t-th state space of Eq. (16) back to the input data space.
Lemma 4. Define, for 0 ≤ s ≤ t, G_t^s := α · I + (1 − α)P_t^s. Then, Eq. (16) can be written as
x_{ε,t} − x_t = (θ_{t−1}θ_{t−2} · · · θ_0)(G_{t−1}^{t−1}G_{t−2}^{t−2} · · · G_0^0)(x_{ε,0} − x_0), t ≥ 1.
Proof. We prove it by induction on t. For t = 1, by the definition of G_t^s and the transformation from Lemma 1,
x_{ε,1} − x_1 = θ_0(I − K_0)(x_{ε,0} − x_0) (Eq. (16)) = θ_0(α · I + (1 − α) · P_0)(x_{ε,0} − x_0) (Lemma 1) = θ_0 G_0^0 (x_{ε,0} − x_0).
Suppose the claim holds for (x_{ε,t} − x_t). By using Eq. (16) and Lemma 1, we have
x_{ε,t+1} − x_{t+1} = θ_t(I − K_t)(x_{ε,t} − x_t) = θ_t(α · I + (1 − α) · P_t)(x_{ε,t} − x_t) = θ_t G_t^0 (θ_{t−1}θ_{t−2} · · · θ_0)(G_{t−1}^{t−1}G_{t−2}^{t−2} · · · G_0^0)(x_{ε,0} − x_0). (17)
Recall the definitions P_t^{s+1} := θ_{t−s−1}⁻¹ P_t^s θ_{t−s−1} and G_t^s := α · I + (1 − α)P_t^s; we have
G_t^{s+1} = α · I + (1 − α) · P_t^{s+1} = α · I + (1 − α) · θ_{t−s−1}⁻¹ P_t^s θ_{t−s−1} = θ_{t−s−1}⁻¹ (α · I + (1 − α) · P_t^s) θ_{t−s−1} = θ_{t−s−1}⁻¹ G_t^s θ_{t−s−1},
which is the corresponding equality for the oblique projections. Furthermore, θ_{t−s−1} G_t^{s+1} = G_t^s θ_{t−s−1}. Applying the above to Eq.
(17) results in
x_{ε,t+1} − x_{t+1} = θ_t G_t^0 (θ_{t−1}θ_{t−2} · · · θ_0)(G_{t−1}^{t−1}G_{t−2}^{t−2} · · · G_0^0)(x_{ε,0} − x_0)
= (θ_t θ_{t−1}) G_t^1 (θ_{t−2}θ_{t−3} · · · θ_0)(G_{t−1}^{t−1}G_{t−2}^{t−2} · · · G_0^0)(x_{ε,0} − x_0)
= (θ_t θ_{t−1} θ_{t−2}) G_t^2 (θ_{t−3}θ_{t−4} · · · θ_0)(G_{t−1}^{t−1}G_{t−2}^{t−2} · · · G_0^0)(x_{ε,0} − x_0)
= (θ_t θ_{t−1} · · · θ_0)(G_t^t G_{t−1}^{t−1} · · · G_0^0)(x_{ε,0} − x_0).
Lemma 5. Let F_t := G_{t−1}^{t−1} G_{t−2}^{t−2} · · · G_0^0, t ≥ 1. Then
F_t = α^t · I + (1 − α) Σ_{s=0}^{t−1} α^s P_s^s.
Proof. We prove it by induction on t. Recall the definition G_t^s := α · I + (1 − α) · P_t^s. When t = 1, F_1 = G_0^0 = α · I + (1 − α) · P_0^0. Suppose the claim holds for t, so that
F_t = G_{t−1}^{t−1} G_{t−2}^{t−2} · · · G_0^0 = α^t · I + (1 − α) Σ_{s=0}^{t−1} α^s P_s^s;
then, for (t + 1),
F_{t+1} = G_t^t F_t = (α · I + (1 − α) · P_t^t) F_t = (α · I + (1 − α) · P_t^t)(α^t · I + (1 − α) Σ_{s=0}^{t−1} α^s P_s^s)
= α^{t+1} · I + α^t(1 − α) P_t^t + (1 − α)² Σ_{s=0}^{t−1} α^s · P_t^t P_s^s + α(1 − α) Σ_{s=0}^{t−1} α^s · P_s^s.
Recall from Lemma 3 that range(P_t^t) = range(P_s^s) = Z_0^∥. According to Proposition 2(3), P_t^t P_s^s = P_s^s. Hence,
F_{t+1} = α^{t+1} · I + α^t(1 − α) · P_t^t + (1 − α) Σ_{s=0}^{t−1} α^s · P_s^s = α^{t+1} · I + (1 − α) Σ_{s=0}^{t} α^s · P_s^s.
Lemma 6. Let V ∈ R^{d×r} be a matrix whose columns form an orthonormal basis for a subspace D, and let θ ∈ R^{d×d} be invertible. Let P = VVᵀ be the orthogonal projection onto D. Denote by P̂ the orthogonal projection onto θD := {θx : x ∈ D}. Then:
1. θ⁻¹P̂θ is an oblique projection onto D.
2. ‖θ⁻¹P̂θ − P‖_2 ≤ (1 + κ(θ)²) · ‖I − θᵀθ‖_2.
In particular, the last inequality shows that θ⁻¹P̂θ = P if θ is orthogonal.
Proof. 1. (θ⁻¹P̂θ)² = θ⁻¹P̂²θ = θ⁻¹P̂θ; therefore, θ⁻¹P̂θ is a projection.
2. Since P̂ is the orthogonal projection onto the column space of θV,
P̂ = θV[(θV)ᵀ(θV)]⁻¹(θV)ᵀ = θV[VᵀθᵀθV]⁻¹Vᵀθᵀ, and hence θ⁻¹P̂θ = V[VᵀθᵀθV]⁻¹Vᵀθᵀθ.
Furthermore,
‖θ⁻¹P̂θ − P‖_2 = ‖V[VᵀθᵀθV]⁻¹Vᵀθᵀθ − VVᵀ‖_2
≤ ‖V[VᵀθᵀθV]⁻¹Vᵀθᵀθ − VVᵀθᵀθ‖_2 + ‖VVᵀθᵀθ − VVᵀ‖_2
≤ ‖V([VᵀθᵀθV]⁻¹ − I)Vᵀ‖_2 · ‖θᵀθ‖_2 + ‖θᵀθ − I‖_2
≤ ‖[VᵀθᵀθV]⁻¹‖_2 · ‖I − VᵀθᵀθV‖_2 · ‖θᵀθ‖_2 + ‖θᵀθ − I‖_2
≤ ‖[VᵀθᵀθV]⁻¹‖_2 · ‖I − θᵀθ‖_2 · ‖θᵀθ‖_2 + ‖θᵀθ − I‖_2.
We further bound ‖[VᵀθᵀθV]⁻¹‖_2:
‖[VᵀθᵀθV]⁻¹‖_2 = (λ_min(VᵀθᵀθV))⁻¹ = (inf_{‖x‖_2=1} xᵀVᵀθᵀθVx)⁻¹ ≤ (inf_{‖x′‖_2=1} (x′)ᵀθᵀθx′)⁻¹ = (λ_min(θᵀθ))⁻¹ = ‖(θᵀθ)⁻¹‖_2.
Hence, we have
‖θ⁻¹P̂θ − P‖_2 ≤ (1 + ‖θᵀθ‖_2 · ‖(θᵀθ)⁻¹‖_2) · ‖I − θᵀθ‖_2 = (1 + κ(θ)²) · ‖I − θᵀθ‖_2.
Corollary 1. Let t ≥ 1. Then for each s = 0, 1, · · · , t, we have
‖P_s^s − P_0‖_2 ≤ (1 + κ(θ̄_s)²) · ‖I − θ̄_sᵀθ̄_s‖_2,
where θ̄_s := θ_{s−1} · · · θ_0 for s ≥ 1 and θ̄_0 := I. This follows by observing that P_s^s = (θ̄_s)⁻¹ P_s θ̄_s and using Lemma 6. We now arrive at the main theorem.
Theorem 1. For t ≥ 1, we have the error estimation
‖x_{ε,t} − x_t‖_2^2 ≤ ‖θ_{t−1} · · · θ_0‖_2^2 · ( α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + γ_t‖z‖_2^2 ( γ_t α²(1 − α^{t−1})² + 2(α − α^t) ) ),
where γ_t := max_{s≤t} (1 + κ(θ̄_s)²) ‖I − θ̄_sᵀθ̄_s‖_2 and α = c/(1 + c), with c the control regularization. In particular, the equality
‖x_{ε,t} − x_t‖_2^2 = α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2
holds when all θ_t are orthogonal.
Proof. The input perturbation z = x_{ε,0} − x_0 can be written as z = z_∥ + z_⊥, where z_∥ and z_⊥ are vectors such that
• z_∥ · z_⊥ = 0 almost surely,
• z_∥ and z_⊥ have uncorrelated components,
• z_∥ ∈ D and z_⊥ ∈ D⊥.
Since z_∥ and z_⊥ are orthogonal almost surely, recalling Lemma 4,
‖x_{ε,t} − x_t‖_2^2 = ‖(θ_{t−1}θ_{t−2} · · · θ_0)(G_{t−1}^{t−1} · · · G_0^0)z‖_2^2 ≤ ‖θ_{t−1}θ_{t−2} · · · θ_0‖_2^2 · ‖(G_{t−1}^{t−1} · · · G_0^0)z‖_2^2. (18)
For the term ‖(G_{t−1}^{t−1} · · · G_0^0)z‖_2^2, recalling Lemma 5,
‖(G_{t−1}^{t−1} · · · G_0^0)z‖_2^2 = ‖(α^t · I + (1 − α) Σ_{s=0}^{t−1} α^s · P_s^s) z‖_2^2
= ‖α^t z + (1 − α) Σ_{s=0}^{t−1} α^s P_0 z + (1 − α) Σ_{s=0}^{t−1} α^s (P_s^s − P_0) z‖_2^2
= ‖α^t z + (1 − α^t) z_∥ + (1 − α) Σ_{s=0}^{t−1} α^s (P_s^s − P_0) z‖_2^2,
where P_0 is the orthogonal projection at t = 0 (the input data space), so that P_0 z = z_∥; furthermore, for s = 0, P_s^s − P_0 = 0. Thus,
‖(G_{t−1}^{t−1} · · · G_0^0)z‖_2^2 = α^{2t}‖z‖_2^2 + (1 − α^t)²‖z_∥‖_2^2 + (1 − α)² Σ_{s,q=1}^{t−1} α^s α^q zᵀ(P_s^s − P_0)ᵀ(P_q^q − P_0)z + 2α^t(1 − α^t)‖z_∥‖_2^2 + 2α^t(1 − α) Σ_{s=1}^{t−1} α^s zᵀ(P_s^s − P_0)z + 2(1 − α^t)(1 − α) Σ_{s=1}^{t−1} α^s (z_∥)ᵀ(P_s^s − P_0)z
= α^{2t}‖z_⊥‖_2^2 + (α^{2t} + 2α^t(1 − α^t) + (1 − α^t)²)‖z_∥‖_2^2 + (1 − α)² Σ_{s,q=1}^{t−1} α^s α^q zᵀ(P_s^s − P_0)ᵀ(P_q^q − P_0)z + 2α^t(1 − α) Σ_{s=1}^{t−1} α^s zᵀ(P_s^s − P_0)z + 2(1 − α^t)(1 − α) Σ_{s=1}^{t−1} α^s (z_∥)ᵀ(P_s^s − P_0)z
= α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + (1 − α)² Σ_{s,q=1}^{t−1} α^s α^q zᵀ(P_s^s − P_0)ᵀ(P_q^q − P_0)z + 2α^t(1 − α) Σ_{s=1}^{t−1} α^s zᵀ(P_s^s − P_0)z + 2(1 − α^t)(1 − α) Σ_{s=1}^{t−1} α^s (z_∥)ᵀ(P_s^s − P_0)z.
Using Corollary 1, we have
• zᵀ(P_s^s − P_0)z ≤ ‖z‖_2^2 · ‖P_s^s − P_0‖_2 ≤ γ_t‖z‖_2^2,
• zᵀ(P_s^s − P_0)ᵀ(P_q^q − P_0)z ≤ ‖z‖_2^2 · ‖P_s^s − P_0‖_2 · ‖P_q^q − P_0‖_2 ≤ γ_t²‖z‖_2^2,
• (z_∥)ᵀ(P_s^s − P_0)z ≤ γ_t‖z_∥‖_2 · ‖z‖_2 ≤ γ_t‖z‖_2^2.
Thus, we have
‖(G_{t−1}^{t−1} · · · G_0^0)z‖_2^2 ≤ α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + α²(1 − α^{t−1})²γ_t²‖z‖_2^2 + 2α^{t+1}(1 − α^{t−1})γ_t‖z‖_2^2 + 2α(1 − α^t)(1 − α^{t−1})γ_t‖z‖_2^2 = α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + γ_t‖z‖_2^2 ( γ_t α²(1 − α^{t−1})² + 2(α − α^t) ).
Recalling the error estimation in Eq. (18),
‖x_{ε,t} − x_t‖_2^2 ≤ ‖θ_{t−1}θ_{t−2} · · · θ_0‖_2^2 · ‖(G_{t−1}^{t−1} · · · G_0^0)z‖_2^2 ≤ ‖θ_{t−1} · · · θ_0‖_2^2 · ( α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + γ_t‖z‖_2^2 ( γ_t α²(1 − α^{t−1})² + 2(α − α^t) ) ).
In the specific case when all θ_t are orthogonal, γ_t := max_{s≤t} (1 + κ(θ̄_s)²) ‖I − θ̄_sᵀθ̄_s‖_2 = 0, and thus ‖x_{ε,t} − x_t‖_2^2 = α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2.
B APPENDIX B: DETAILS OF EXPERIMENTAL SETTING
B.1 NETWORK CONFIGURATIONS Since the proposed CLC-NN optimizes the entire state trajectory, it is important to have a relatively smooth state trajectory, so that when the reconstruction loss ‖E_t(x_t) − x_t‖_2^2 at layer t is small, the reconstruction losses at its adjacent layers are small as well. For this reason, we use a residual neural network (He et al., 2016) as the network candidate to retain smoother dynamics. The configuration of the residual neural network used for both CIFAR-10 and CIFAR-100 is shown in Tab. 4. Based on the configuration of the residual neural network shown in Tab. 4, we construct 4 embedding functions applied at the input space and at the outputs of the initial layer and residual blocks 1 and 2. The output of residual block 3 is embedded with a linear orthogonal projection. We randomly select 5000 clean training data to collect state trajectories at all 5 locations.
• For the linear orthogonal projections: we apply principal component analysis to each of the state collections. We retain the first r columns of the resulting basis, such that r = argmin{i : (λ_1 + . . . + λ_i)/(λ_1 + . . . + λ_d) ≥ 1 − δ}, where δ = 0.1.
• For the nonlinear embedding: we train 4 convolutional auto-encoders for the input space and the outputs of the initial layer and residual blocks 1 and 2. All of the embedding functions are trained individually. We adopt a shallow convolutional auto-encoder structure to gain fast inference speed, in which case CLC-NN equipped with the linear embedding often outperforms the nonlinear embedding, as shown in Tab. 1. The configuration of all 4 convolutional auto-encoders is shown in Tab. 5.
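For the nonlinear embeddings of B.1, a shallow convolutional auto-encoder of the following shape could be used; the channel sizes are illustrative guesses, not the exact Tab. 5 configuration.

```python
import torch.nn as nn

class ShallowConvAE(nn.Module):
    # a shallow auto-encoder usable as a nonlinear embedding E_t,
    # trained offline to minimize ||E_t(x) - x||_2^2 on clean states
    def __init__(self, in_ch, hid_ch=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, hid_ch, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.ConvTranspose2d(hid_ch, in_ch, 3, stride=2,
                                      padding=1, output_padding=1)
    def forward(self, x):            # E_t(x): encode, then decode
        return self.dec(self.enc(x))
```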
B.2 PERTURBATIONS AND DEFENSIVE TRAINING In this section, we give details about the perturbations and robust networks considered in this work. The adversarial training objective function is
min_{θ∈Θ} max_{x_{ε,0}=∆(x_0,ε)} E_{(x_0,y)∼D} [(1 − λ) · Φ(x_{ε,T}, y, θ) + λ · Φ(x_T, y, θ)],
where ∆(x_0, ε) generates perturbed data from a given input x_0 within the range ε, and λ balances standard accuracy against robustness. We choose λ = 0.5 in all adversarial training. For robust networks, we consider both perturbation-agnostic and non-agnostic methods. A perturbation non-agnostic (i.e., perturbation-specific) adversarial training algorithm equipped with ∆(x_0, ε) yields a network that is most robust against that particular ∆(x_0, ε) perturbation. On the contrary, perturbation-agnostic robust training methods are often robust against many types of perturbations.
• Adversarial training with the fast gradient sign method (FGSM) (Goodfellow et al., 2014) considers perturbed data of the form
x_{ε,0} = x_0 + ε · sign(∇_{x_0}Φ(x_T, y)), (x_0, y) ∼ D,
where sign(·) outputs the sign of its input. FGSM thus considers the worst case within the range ε along the increasing direction of the gradient ∇_{x_0}Φ(x_T, y). Due to this worst-case consideration, it does not scale well to deep networks; for this reason, we adversarially train the network with FGSM at ε = 4, which is half of the maximum perturbation considered in this paper.
• The label smoothing training (Label Smooth) (Hazan et al., 2017) does not utilize any perturbation information ∆(x_0, ε). It converts one-hot labels into soft targets by setting the correct class to 1 − ε, while each other class receives ε/(N − 1), where ε is a small constant and N is the number of classes. Specifically, we set the correct-class target to 0.9 in this paper; a sketch of these soft targets is given at the end of B.3 below.
• Adversarial training with projected gradient descent (PGD) (Madry et al., 2017) generates adversarial data by iteratively running FGSM with a small step size, which results in stronger perturbations than FGSM within the same range ε. We use 7 steps with step size ε = 2 to generate adversarial data for robust training.
For the perturbations, we consider the maximum ranges ε = 2, 4, 8 to test the network robustness against both strong and weak perturbations. In this work, we test network robustness with the manifold-based attack (Jalal et al., 2017), FGSM (Goodfellow et al., 2014), 20-step PGD (Madry et al., 2017), and the CW attack (Carlini & Wagner, 2017).
B.3 ONLINE OPTIMIZATION Optimization Methods. We use Adam (Kingma & Ba, 2014) with its default setting to maximize the Hamiltonian in Eq. (9). Solving the PMP therefore brings extra computational cost at inference time. Each online iteration of solving the PMP requires a combination of a forward propagation (Eq. (7)), a backward propagation (Eq. (8)), and a maximization w.r.t. the control parameters (Eq. (9)), whose computational cost is approximately the same as one iteration of gradient descent in network training. For the numerical results presented in the paper, we choose the maximum number of iterations that gives the best performance among [5, 10, 20, 30, 50].
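As an illustration of the Label Smooth targets described above, the sketch below spreads the residual probability mass uniformly over the incorrect classes; the correct-class mass of 0.9 matches the value used in this paper, while the function name is a placeholder.

```python
import torch

def smooth_targets(y, num_classes, correct=0.9):
    # soft targets: `correct` on the true class, the rest spread uniformly
    t = torch.full((y.size(0), num_classes),
                   (1.0 - correct) / (num_classes - 1))
    t[torch.arange(y.size(0)), y] = correct
    return t
```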
C MORE NUMERICAL EXPERIMENTS The proposed CLC-NN is designed to be compatible with existing open-loop trained networks. We show extra experiments by employing the proposed CLC-NN on two baseline models, including DenseNet-40 (Table 6). The layer-wise projection performs an orthogonal projection on the hidden state. We define the local cost function at the t-th layer as
J(x_t, u_t) = (1/2)‖Q_t(x_t + u_t)‖_2^2 + (c/2)‖u_t‖_2^2,
and the layer-wise projection achieves the optimal solution at the local time t, u*_t(x_t) = argmin_{u_t} J(x_t, u_t); a code sketch of this closed-form layer-wise control is given after Appendix D below. However, the layer-wise optimal control solution does not guarantee optimality across all layers. In Table 7, we compare the proposed CLC-NN with the layer-wise projection; under all perturbations, the proposed CLC-NN outperforms the layer-wise projection.
D ROBUSTNESS AGAINST MANIFOLD-BASED ATTACK The manifold-based attack (Jalal et al., 2017) (denoted as Manifold) has shown great success in breaking down manifold-based defenses (Samangouei et al., 2018). The proposed CLC-NN can successfully defend against this adversarial attack, which was specifically designed for manifold-based defenses, and improves the robust accuracy from 1% to 81% for the standard trained model on Cifar-10, and from 2% to 52% on Cifar-100. We provide a detailed explanation for the successful defense of the proposed CLC-NN against such a strong adversarial attack.
Existing manifold-based defenses (Samangouei et al., 2018) focus on detecting and de-noising the input components that do not lie within the underlying manifold. The overpowered attack proposed in Jalal et al. (2017) searches for adversarial examples within the embedded latent space, which is undetectable for manifold-based defenses and causes a complete failure of the defense. In a real implementation, however, the manifold-based attack (Jalal et al., 2017) is detectable and controllable under the proposed framework for the following reason. The numerically generated manifold embedding functions are not ideal. The error sources of non-ideal embedding functions are mainly the algorithm used to compute the manifold, the architecture of the embedding function, and the distribution shift between training and testing data (embedding functions fitted on training data do not perfectly agree with testing data). Consequently, even if the perturbation is undetectable and uncontrollable at the initial layer, each hidden layer amplifies the perturbation as it propagates; therefore, the perturbation becomes detectable and controllable in the hidden layers.
We randomly select a batch of testing data to generate the manifold-based attack following the same procedure proposed in Jalal et al. (2017). The proposed method improves the attacked accuracy from 1% to 78%. More specifically, we compare the components of all hidden states that span the orthogonal complement for a perturbed testing sample and its unperturbed counterpart, ‖P_t^⊥ x_{ε,t} − P_t^⊥ x_t‖, where P_t^⊥ is the projection onto the orthogonal complement. The difference generally grows with depth: 0, 0.016, 0.0438, 0.0107, 0.0552 for the hidden states at layers 0, 1, 2, 3, 4, respectively. This validates the argument for how the proposed method is able to detect such perturbations and control them in the hidden layers.
Furthermore, we provide some insight into the reasons behind the success of such an adversarial attack, which follows from the same concept as the existence of adversarial attacks on neural networks in general. The highly nonlinear behaviour of a neural network gives it a complex representative ability; at the same time, this powerful representation is the source of its vulnerability. For example, a constant function has a 50% chance of making a correct prediction in a binary classification problem under any perturbation, but its performance is limited. Therefore, we propose to use a linear embedding function that trades off embedding accuracy against robustness.
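With such a linear embedding, the layer-wise control of Appendix C admits the closed-form minimizer of Eq. (14); a minimal NumPy sketch follows (V_r and c are placeholders).

```python
import numpy as np

def layerwise_control(x_t, V_r, c):
    # u_t* = -(c I + Q^T Q)^{-1} Q^T Q x_t  with  Q = I - V_r V_r^T
    d = x_t.shape[0]
    Q = np.eye(d) - V_r @ V_r.T
    return -np.linalg.solve(c * np.eye(d) + Q.T @ Q, Q.T @ Q @ x_t)
```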
E DEFINITION OF THREAT MODEL Generally, an attacker should not have access to the hidden states during inference; in particular, an attacker is not allowed to inject extra noise during inference. To define the threat model of the proposed method in the white-box setting, an attacker has access to both the network and all embedding functions. The condition under which a perturbation ε · z makes our method vulnerable is
Σ_{t=0}^{T−1} ‖E_t(x_{ε,t}) − x_{ε,t}‖_2^2 = 0, with x_{ε,0} = x_0 + ε · z.
In words, the perturbation ε · z applied to the input data must result in zero reconstruction loss across all hidden layers, which means that its corresponding state trajectory does not span any of the orthogonal complements of the hidden state spaces. Conventional gradient-based attackers cannot guarantee to find a perfect attack satisfying the above equation. A possible way is to perform a grid search backward through the layers for an adversarial attack satisfying the threat-model condition, which is extremely costly.
1. What is the focus of the paper regarding neural networks?
2. What are the strengths of the proposed closed-loop control strategy?
3. Are there any concerns or suggestions regarding the paper's content, such as discussing additional computational costs or providing more information on specific topics?
4. How does the reviewer assess the originality and significance of the work?
5. Are there any minor issues with the manuscript, such as typos or phrasing?
Review
Review Summary This study develops a closed-loop control strategy to improve the robustness of neural networks to adversarial attacks. The study is technically sound and the empirical results on classification tasks are convincing. Quality The paper is technically sound and the claims are appropriately backed by empirical evaluation. However, I would recommend that the authors discuss in a bit more detail the additional computational cost of running the closed-loop method. Clarity The manuscript is clearly written and provides enough information for an expert reader to understand all the steps needed to reproduce the results. Originality The novelty of the study resides in the development of a closed-loop control method for increasing the robustness of neural networks. The strategy is devised to scale to the typically high-dimensional nature of neural network activations. Significance of the work The results suggest that the developed approach is a solid step towards developing robust neural networks. Some typos:
- instead of "cause different data distribution deviating", "cause data distributions to deviate";
- instead of "The resulting control policy [...] make it", "The resulting control policy [...] makes it";
- instead of "the embedding are effective", "the embeddings are effective";
- instead of "the perturbed states in Fig.2 [...] has", "the perturbed states in Fig.2 [...] have";
- instead of "to obtain all the intermediate hidden states [...] and accumulates", "to obtain all the intermediate hidden states [...] and to accumulate";
- issue with reference "E. 2017".
ICLR
Title Towards Robust Neural Networks via Close-loop Control Abstract Despite their success in massive engineering applications, deep neural networks are vulnerable to various perturbations due to their black-box nature. Recent studies have shown that a deep neural network can misclassify the data even if the input data is perturbed by an imperceptible amount. In this paper, we address the robustness issue of neural networks by a novel close-loop control method from the perspective of dynamic systems. Instead of modifying the parameters in a fixed neural network architecture, a close-loop control process is added to generate control signals adaptively for the perturbed or corrupted data. We connect the robustness of neural networks with optimal control, using the geometrical information of the underlying data to design the control objective. The detailed analysis shows how the embedding manifolds of the state trajectory affect the error estimation of the proposed method. Our approach can simultaneously maintain the performance on clean data and improve the robustness against many types of data perturbations. It can also further improve the performance of robustly trained neural networks against different perturbations. To the best of our knowledge, this is the first work that improves the robustness of neural networks with close-loop control.¹
1 INTRODUCTION Due to the increasing data and computing power, deep neural networks have achieved state-of-the-art performance in many applications such as computer vision, natural language processing and recommendation systems. However, many deep neural networks are vulnerable to various malicious perturbations due to their black-box nature: a small (even imperceptible) perturbation of the input data may lead to completely wrong predictions (Szegedy et al., 2013; Nguyen et al., 2015). This has been a major concern in some safety-critical applications such as autonomous driving (Grigorescu et al., 2020) and medical image analysis (Lundervold & Lundervold, 2019). Various perturbations have been reported, including ℓp-norm based attacks (Madry et al., 2017; Moosavi-Dezfooli et al., 2016; Carlini & Wagner, 2017), semantic perturbations (Engstrom et al., 2017), etc. On the other side, some algorithms to improve the robustness against those perturbations have shown great success (Madry et al., 2017). However, most robustly trained models are tailored for certain types of perturbations, and they do not work well for other types of perturbations. Khoury & Hadfield-Menell (2018) showed the non-existence of an optimal decision boundary for any ℓp-norm perturbation.
Recent works (E, 2017; Haber & Ruthotto, 2017) have shown the connection between dynamical systems and neural networks. This dynamical system perspective provides some interesting theoretical insights about the robustness issue. Given a set of data x_0 ∈ R^d and its labels y ∈ R^l with a joint distribution D, training a neural network can be considered as solving
min_θ E_{(x_0,y)∼D} [Φ(x_T, y)], s.t. x_{t+1} = f(x_t, θ_t),
§Equal contributing authors. ¹A Pytorch implementation can be found at: https://github.com/zhuotongchen/Towards-Robust-Neural-Networks-via-Close-loop-Control.git
where θ are the unknown parameters to train, and f and Φ represent the forward propagation rule and the loss function (e.g., cross-entropy), respectively.
The dynamical system perspective interprets the vulnerability of neural networks as a system instability issue, which concerns the state trajectory variation under small perturbations applied to initial conditions. Optimal control theory focuses on developing a control model to adjust the system state trajectory in an optimal manner. The first work that links and extends the classical back-propagation algorithm using optimal control theory was presented in Li et al. (2017), where the direct relationship between Pontryagin's Maximum Principle (Kirk, 1970) and gradient-based network training was established. Ye et al. (2019) used control theory to adjust the hyperparameters in the adversarial training algorithm. Han et al. (2018) established the mathematical basis of the optimal control viewpoint of deep learning. These existing works on algorithm development are open-loop control methods, since they commonly treat the network weights $\theta$ as control parameters and keep them fixed once the training is done. The fixed control parameters $\theta$ operate optimally for data sampled from the data distribution $\mathcal{D}$. However, various perturbation methods cause data distributions to deviate from the true distribution $\mathcal{D}$ (Song et al., 2017) and cause poor performance with the fixed open-loop control parameters.

1.1 PAPER CONTRIBUTIONS

To address the limitation of open-loop control methods, we propose the Close-Loop Control Neural Network (CLC-NN), the first close-loop control method to improve the robustness of neural networks. As shown in Fig. 1, our method adds additional blocks to a given $T$-layer neural network: embedding functions $E_t$, which induce running losses in all layers that measure the discrepancies between true features and observed features under input perturbation; control processes then generate control variables $u_t$ to minimize the total running loss under various data perturbations. The original neural network can be designed by either standard training or robust training. In the latter case, our CLC-NN framework can achieve extra robustness against different perturbations. The forward propagation rule is thus modified with an extra control parameter $u_t \in \mathbb{R}^{d'}$:

$$x_{t+1} = f(x_t, \theta_t, u_t).$$

Fig. 1 should not be misunderstood as an open-loop control. From the perspective of dynamic systems, $x_0$ is an initial condition, and the excitation input signal is $u_t$ (which is 0 in a standard feed-forward network). Therefore, the forward signal path is from $u_t$ to the internal states $x_t$ and then to the output label $y$. The path from $x_t$ to the embedding function $E_t(x_t)$ and then to the excitation signal $u_t$ forms a feedback and closes the whole loop.

The technical contributions of this paper are summarized below:

• The proposed method relies on the well-accepted assumption that the data and hidden-state manifolds are low dimensional compared to the ambient dimension (Fefferman et al., 2016). We study the geometrical information of the data and hidden layers to define the objective function for control. Given a trained $T$-layer neural network, a set of embedding functions $E_t$ are trained off-line by minimizing the reconstruction loss $\|E_t(x_t) - x_t\|$ over some clean data from $\mathcal{D}$ only. The embedding functions support defining the running loss required in our control method.

• We define the control problem by dynamic programming and implement an online iterative solver based on Pontryagin's Maximum Principle to avoid the curse of dimensionality.
The proposed close-loop control formulation does not require prior information about the perturbation.

• We provide a theoretical error bound of the controlled system for the simplified case with linear activation functions and linear embedding. This error bound reveals how the close-loop control improves neural network robustness in the simplest setting.

2 RELATED WORKS

Many techniques have been reported to improve the robustness of neural networks, such as data augmentation (Shorten & Khoshgoftaar, 2019), gradient masking (Liu et al., 2018), etc. We review adversarial training and reactive defense, which are most relevant to this work.

Adversarial Training. Adversarial training is (possibly) the most popular robust training method; it solves a min-max robust optimization problem to minimize the worst-case loss with perturbed data. Adversarial training effectively regularizes the network's local Lipschitz constants of the loss surface around the data manifold (Liu et al., 2018). Zhang et al. (2019) formulated robust training using Pontryagin's Maximum Principle; such open-loop control methods result in a set of fixed parameters that operate optimally on the considered perturbation. Liu et al. (2020a;b) considered a close-loop formulation from the differential dynamic programming perspective; this algorithm is nonetheless categorized as an open-loop control method, because it utilizes the state feedback information only to boost the training convergence and results in a set of fixed controls for any unseen data. On the contrary, the proposed CLC-NN formulation adaptively targets each input with different control parameters and is capable of distinguishing clean data by generating no control.

Reactive Defense. A reactive defense method tries to reject or pre-process the input data that may cause misclassifications. Metzen et al. (2017) rejected perturbed data by using adversarial detectors that are trained with adversarial data to detect abnormal data during forward propagation. Song et al. (2017) estimated the input data distribution $\mathcal{D}$ with a generative model (Oord et al., 2016) to detect data that does not belong to $\mathcal{D}$; it applies a greedy method to search the local neighborhood of input data for a more statistically plausible counterpart. This purification process has shown improved accuracy with adversarial data contaminated by various types of perturbations. Purification can be considered as a one-step method to solve the optimal control problem whose objective function is defined over the initial condition only. On the contrary, the proposed CLC-NN solves the control problem by the dynamic programming principle, and its objective function is defined over the entire state trajectory, which guarantees the optimality of the resulting controls.

3 THE CLOSE-LOOP CONTROL FRAMEWORK FOR NEURAL NETWORKS

Now we present a close-loop optimal control formulation to address the robustness issue of deep learning. Consider a neural network consisting of model parameters $\theta$ equipped with an external control policy $\pi$, where $\pi \in \Pi$ is a collection of functions $\mathbb{R}^d \to \mathbb{R}^{d'}$ acting on the state and outputting the control signal. The feed-forward propagation in a $T$-layer neural network can be represented as

$$x_{t+1} = f(x_t, \theta_t, \pi_t(x_t)), \quad t = 0, \cdots, T-1. \quad (1)$$

Given a trained network, we solve the following optimization problem:

$$\min_{\pi} \mathbb{E}_{(x_0,y)\sim\mathcal{D}} [J(x_0, y, \pi)] := \min_{\pi} \mathbb{E}_{(x_0,y)\sim\mathcal{D}} \Big[ \Phi(x_T, y) + \sum_{s=0}^{T-1} L(x_s, \pi_s(x_s)) \Big], \quad \text{s.t. Eq. (1)}, \quad (2)$$

where $\pi$ collects the control policies $\pi_0, \cdots, \pi_{T-1}$ for all layers.
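As a concrete illustration of Eq. (1), the sketch below threads a per-layer control signal through a residual forward pass. The additive form $f(x, \theta, u) = \text{layer}(x + u)$ is an illustrative assumption on our part (it matches the simplified linear case analyzed in Section 5), and the controls here are precomputed variables rather than a learned policy.

```python
# Sketch of the controlled forward pass, Eq. (1): x_{t+1} = f(x_t, theta_t, u_t).
# Assumption: controls enter additively before each layer, mirroring the
# simplified linear case x_{t+1} = theta_t (x_t + u_t) studied in Section 5.
import torch
import torch.nn as nn

def controlled_forward(layers, x0, controls):
    """Propagate x0 through `layers`, injecting one control u_t per layer.

    Returns the full state trajectory [x_0, x_1, ..., x_T]."""
    states = [x0]
    x = x0
    for layer, u in zip(layers, controls):
        x = layer(x + u)            # f(x_t, theta_t, u_t)
        states.append(x)
    return states

# Usage with zero controls (reduces to the standard feed-forward network).
layers = nn.ModuleList(nn.Linear(32, 32) for _ in range(4))
x0 = torch.randn(8, 32)
controls = [torch.zeros(8, 32) for _ in range(4)]
trajectory = controlled_forward(layers, x0, controls)
```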
Note that (2) differs from the open-loop control used in standard training. An open-loop control that treats the network parameters as control variables seeks a set of fixed parameters $\theta$ to match the output with the true label $y$ by minimizing the terminal loss $\Phi$, and the running loss $L$ defines a regularization for $\theta$. However, the terminal and running losses play different roles when our goal is to improve the robustness of a neural network by generating adaptive controls for different inputs.

Challenge of Close-loop Control for Neural Networks. Optimal control has been well studied in the control community for trajectory optimization, where one defines the running loss as the error between the actual state $x_t$ and a reference state $x_{t,\text{ref}}$ over the time interval $[0, T]$. The resulting control policy adjusts $x_t$ and makes it approach $x_{t,\text{ref}}$. In this paper, we apply the idea of trajectory optimization to improve the robustness of a neural network via adjusting the undesired state $x_t$. However, the formulation is more challenging in neural networks: we do not have a "reference" state during the inference process, therefore it is unclear how to define the running loss $L$. In the following, we investigate manifold embedding of the state trajectory to precisely define the loss functions $\Phi$ and $L$ of Eq. (2) required for the control objective function of a neural network.

3.1 MANIFOLD LEARNING FOR STATE TRAJECTORIES

State Manifold. Our controller design is based on the "manifold hypothesis": real-world high-dimensional data can often be embedded in a lower-dimensional manifold $\mathcal{M}$ (Fefferman et al., 2016). Indeed, neural networks extract the embedded features from $\mathcal{M}$. To fool a well-trained neural network, the perturbed data often stays away from the data manifold $\mathcal{M}$ (Khoury & Hadfield-Menell, 2018). We consider the data space $\mathcal{Z}$ ($x \in \mathcal{Z}, \forall x \sim \mathcal{D}$) as $\mathcal{Z} = \mathcal{Z}_{\|} \oplus \mathcal{Z}_{\perp}$, where $\mathcal{Z}_{\|}$ contains the embedded manifold $\mathcal{M}$ and $\mathcal{Z}_{\perp}$ is the orthogonal complement of $\mathcal{Z}_{\|}$. During forward propagation, the state manifold embedded in $\mathcal{Z}_{\|}$ varies at different layers due to both the nonlinear activation function $f$ and state dimensionality variation. Therefore, we denote $\mathcal{Z}^t = \mathcal{Z}^t_{\|} \oplus \mathcal{Z}^t_{\perp}$ as the state space decomposition at layer $t$, with $\mathcal{M}_t \subset \mathcal{Z}^t_{\|}$. Once an input is perturbed, the main effects causing misclassifications lie in $\mathcal{Z}_{\perp}$. Therefore, it is important to measure how far the possibly perturbed state $x_t$ deviates from the state manifold $\mathcal{M}_t$.

Embedding Function. Given an embedding function $E_t$ that encodes $x_t$ onto the lower-dimensional manifold $\mathcal{M}_t$ and decodes the result back to the full state space $\mathcal{Z}^t$, the reconstruction loss $\|E_t(x_t) - x_t\|$ measures the deviation of the possibly perturbed state $x_t$ from the manifold $\mathcal{M}_t$. The reconstruction loss is nonzero as long as $x_t$ has components in $\mathcal{Z}^t_{\perp}$. The embedding functions are constructed offline by minimizing the total reconstruction losses over a clean training data set.

• Linear Case: $E_t(\cdot)$ can be taken as $V^r_t (V^r_t)^T$, where $V^r_t$ forms an orthonormal basis for $\mathcal{Z}^t_{\|}$. Specifically, one can first perform a principal component analysis over a collection of hidden states at layer $t$; then $V^r_t$ is obtained as the first $r$ columns of the resulting eigenvectors (see the sketch after this list).

• Nonlinear Case: we choose a convolutional auto-encoder (detailed in Appendix B) to obtain a representative manifold embedding function $E_t$, due to its ease of implementation.
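The linear embedding above can be sketched in a few lines: fit PCA offline on clean hidden states at layer $t$, keep the top $r$ directions, and use the reconstruction loss as the manifold-deviation measure. Mean-centering is an implementation assumption on our part; the paper does not specify it.

```python
# Sketch of the linear embedding E_t = V_t^r (V_t^r)^T fitted by PCA on clean
# hidden states (assumptions: states flattened to vectors; mean-centering is
# our implementation choice).
import torch

def fit_linear_embedding(states, r):
    """states: (N, d) tensor of clean hidden states at layer t.

    Returns E_t with E_t(x) = mean + V_r V_r^T (x - mean)."""
    mean = states.mean(dim=0)
    X = states - mean
    # Right singular vectors of the centered data = PCA eigenvectors.
    _, _, Vh = torch.linalg.svd(X, full_matrices=False)
    Vr = Vh[:r].T                                  # (d, r) principal directions
    def E(x):
        return mean + (x - mean) @ Vr @ Vr.T
    return E

def reconstruction_loss(E, x):
    """||E_t(x_t) - x_t||_2^2, the deviation of x from the layer-t manifold."""
    return ((E(x) - x) ** 2).sum(dim=-1)

# Usage: fit on clean states; a perturbed state should then score higher.
clean = torch.randn(5000, 64) @ torch.randn(64, 64)   # correlated toy states
E = fit_linear_embedding(clean, r=10)
print(reconstruction_loss(E, clean[:4] + 0.5 * torch.randn(4, 64)))
```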
Based on the assumption that most perturbations lie in the $\mathcal{Z}_{\perp}$ subspace, the embeddings are effective in detecting the perturbations as long as the target manifold is of a low dimension. Alternative manifold learning methods such as Izenman (2012) may also be employed.

3.2 FORMULATION FOR THE CLOSE-LOOP CONTROL OF NEURAL NETWORKS

Control Objectives. The above embedding function allows us to define a running loss $L$:

$$L(x_t, \pi_t(x_t), E_t(\cdot)) = \|E_t(x_t) - x_t\|_2^2 + (\pi_t(x_t))^T \mathbf{R} \, \pi_t(x_t). \quad (3)$$

Here the matrix $\mathbf{R}$ defines a regularization term promoting controls of small magnitudes. In practical implementations, using a diagonal matrix $\mathbf{R}$ with small elements often helps to improve the performance. Now we are ready to design the control objective function of CLC-NN. Different from standard open-loop control, this work sets the terminal loss $\Phi$ to zero, because no true label is given during inference. Consequently, the close-loop control formulation in Eq. (2) becomes

$$\min_{\pi} \mathbb{E}_{(x_0,y)\sim\mathcal{D}} [J(x_0, y, \pi)] := \min_{\pi} \mathbb{E}_{(x_0,y)\sim\mathcal{D}} \sum_{t=0}^{T-1} [L(x_t, \pi_t(x_t), E_t(\cdot))], \quad \text{s.t. Eq. (1)}. \quad (4)$$

Assume that the input data is perturbed by a bounded and small amount, i.e., $x_{\epsilon,0} = x_0 + \epsilon \cdot z$, where $z$ can be either random or adversarial. The proposed CLC-NN adjusts the perturbed state trajectory $x_{\epsilon,t}$ such that it stays at a minimum distance from the desired manifold $\mathcal{M}_t$ while promoting small magnitudes of controls.

Intuition. We use an intuitive example to show how CLC-NN controls the state trajectory of unseen data samples. We create a synthetic binary classification data set with 1500 samples. We train a residual neural network with one hidden layer of dimension 2, and adopt the fast gradient sign method (Goodfellow et al., 2014) to generate adversarial data. Fig. 2 (a) and (b) show the states of clean data (red and blue) and of perturbed data (black and gray) at $t = 0$ and $t = 1$, respectively. CLC-NN adjusts the state trajectory to reduce the reconstruction loss, as shown in Fig. 2 (c) and (d), where a lighter background color represents a lower reconstruction loss. Comparing Fig. 2 (a) with (c), and Fig. 2 (b) with (d), we see that the perturbed states in Fig. 2 (a) and (b) deviate from the desired state manifold (light green region) and have a high reconstruction loss. Running 1000 iterations of Alg. 1 adjusts the perturbed states and improves the classification accuracy from 86% to 100%.

4 IMPLEMENTATION VIA PONTRYAGIN'S MAXIMUM PRINCIPLE

Dynamic Programming for Close-Loop Control (4). The control problem in Eq. (4) can be solved by the dynamic programming principle (Bellman, 1952). For simplicity, we consider one input data sample and define a value function $V : \mathcal{T} \times \mathbb{R}^d \to \mathbb{R}$ (where $\mathcal{T} := \{0, 1, \ldots, T-1\}$). Here $V(t, x)$ represents the optimal cost-to-go function of Eq. (4) incurred from time $t$ at state $x$. One can show that $V(t, x)$ satisfies the dynamic programming principle

$$V(t, x) = \inf_{\pi \in \Pi} \big[ V(t+1, \, x + f(x, \theta_t, \pi(x))) + L(x, \pi(x), E_t(\cdot)) \big]. \quad (5)$$

Eq. (5) gives a necessary and sufficient condition for optimality of Eq. (4), and it is often solved backward in time by discretizing the entire state space. The state dimension of a modern neural network is on the order of thousands or even higher; therefore, discretizing the state space and directly solving Eq. (5) is intractable for real-world applications due to the curse of dimensionality.

Solving (5) via Pontryagin's Maximum Principle.
To overcome the computational challenge, Pontryagin's Maximum Principle (Kirk, 1970) converts the intractable dynamic programming into two ordinary differential equations and a maximization condition. Instead of computing the control policy $\pi$ of Eq. (5), Pontryagin's Maximum Principle provides a necessary condition for optimality with a set of control parameters $[u^*_0, \cdots, u^*_{T-1}]$. The mean-field Pontryagin's Maximum Principle can be considered when the initial condition is a batch of i.i.d. samples drawn from $\mathcal{D}$. Specifically, we trade the intractable computational complexity for the processing time of solving the Hamilton equations and their maximization condition for every newly observed data point. To begin with, we define the Hamiltonian $H : \mathcal{T} \times \mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R}^l \times \mathbb{R}^m \to \mathbb{R}$ as

$$H(t, x_t, p_{t+1}, \theta_t, u_t) := p_{t+1}^T \cdot f(x_t, \theta_t, u_t) - L(x_t, u_t, E_t(\cdot)). \quad (6)$$

Let $x^*$ denote the corresponding optimally controlled state trajectory. There exists a co-state process $p^* : [0, T] \to \mathbb{R}^d$ such that the Hamilton equations

$$x^*_{t+1} = \nabla_p H(t, x^*_t, p^*_t, \theta_t, u^*_t), \quad (x^*_0, y) \sim \mathcal{D}, \quad (7)$$
$$p^*_t = \nabla_x H(t, x^*_t, p^*_{t+1}, \theta_t, u^*_t), \quad p^*_T = 0, \quad (8)$$

are satisfied. The terminal co-state is $p_T = 0$, since we do not consider the terminal loss $\Phi(x_T, y)$. Moreover, we have the Hamiltonian maximization condition

$$H(t, x^*_t, p^*_t, \theta_t, u^*_t) \geq H(t, x^*_t, p^*_t, \theta_t, u_t), \quad \forall u \in \mathbb{R}^{d'} \text{ and } \forall t \in \mathcal{T}. \quad (9)$$

Instead of solving Eq. (5) for the optimal control policy $\pi^*(x_t)$, for a given initial condition, Pontryagin's Maximum Principle seeks an open-loop optimal solution such that the global optimum of Eq. (5) is satisfied. The limitation of using the maximum principle is that the control parameters $u^*_t$ need to be solved for every unseen data point to achieve the optimal solution.

Algorithm Flow. The numerical implementation of CLC-NN is summarized in Alg. 1. Given a trained network (from either standard or adversarial training) and a set of embedding functions, the controls are initialized as $u_t = 0, \forall t \in \mathcal{T}$, because random initialization generally weakens the robustness performance, and a clean trajectory often does not incur any running loss for the gradient update on the control parameters. In every iteration, a given input $x_0$ is propagated forward with Eq. (7) to obtain all the intermediate hidden states $x_t$ for all $t$ and to accumulate the cost $J$. Eq. (8) backward-propagates the co-state $p_t$, and Eq. (9) maximizes the $t$-th Hamiltonian with the current $x_t$ and $p_t$ to compute the optimal control parameters $u^*_t$.

Algorithm 1: CLC-NN with Pontryagin's Maximum Principle.
Input: Possibly perturbed data $x_\epsilon$, a trained neural network, embedding functions $[E_1, \cdots, E_{T-1}]$, maxItr (maximum number of iterations).
Output: A set of optimal control parameters $u^*_0, \cdots, u^*_{T-1}$.
for $k = 0$ to maxItr do
    $J_k = 0$
    for $t = 0$ to $T-1$ do
        $x_{t+1,k} = f(x_{t,k}, \theta_t, u_{t,k})$, where $x_{0,k} = x_\epsilon$    ▷ forward propagation, Eq. (7)
        $J_k = J_k + L(x_{t,k}, u_{t,k}, E_t(x_{t,k}))$    ▷ objective function, Eq. (4)
    end for
    for $t = T$ to $1$ do
        $p_{t,k} = p_{t+1,k}^T \cdot \nabla_{x_t} f(x_{t,k}, \theta_t, u_{t,k}) - \nabla_{x_t} L(x_{t,k}, u_{t,k}, E_t(x_{t,k}))$, where $p_{T,k} = 0$    ▷ backward propagation, Eq. (8)
    end for
    for $t = 0$ to $T-1$ do
        $u_{t,k+1} = u_{t,k} + \big( p_{t+1,k}^T \cdot \nabla_{u_t} f(x_{t,k}, \theta_t, u_{t,k}) - \nabla_{u_t} L(x_{t,k}, u_{t,k}, E_t(x_{t,k})) \big)$    ▷ maximization of the Hamiltonian, Eq. (9), based on Eq. (6) and gradient ascent
    end for
end for

5 ERROR ANALYSIS FOR SIMPLIFIED LINEAR CASES

For ease of analysis, we consider a simplified neural network with linear activation functions, $x_{t+1} = \theta_t(x_t + u_t)$, and reveal why our proposed method can improve robustness in this simplest setting.
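For this simplified linear setting, Alg. 1 can be sketched compactly: autograd stands in for the explicit co-state recursion (8) (back-propagating the accumulated running loss yields the same gradients), and Adam performs the Hamiltonian ascent step (9), as the paper reports using in Appendix B.3. Function names and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch of Alg. 1 for the simplified linear dynamics
# x_{t+1} = theta_t (x_t + u_t). Assumptions: autograd replaces the explicit
# co-state recursion (8), and Adam performs the Hamiltonian ascent (9).
import torch

def clc_control(thetas, embeddings, x_eps, c=0.01, lr=0.1, max_itr=50):
    """thetas: list of (d, d) weight tensors; embeddings: list of callables E_t.

    Returns the optimized control parameters u_0, ..., u_{T-1} for input x_eps."""
    T = len(thetas)
    controls = [torch.zeros_like(x_eps, requires_grad=True) for _ in range(T)]
    opt = torch.optim.Adam(controls, lr=lr)
    for _ in range(max_itr):
        opt.zero_grad()
        x, J = x_eps, 0.0
        for t in range(T):                          # forward pass, Eq. (7)
            # Running loss, Eq. (3)/(4), with R = c * I.
            J = J + ((embeddings[t](x) - x) ** 2).sum() + c * (controls[t] ** 2).sum()
            x = (x + controls[t]) @ thetas[t].T     # x_{t+1} = theta_t (x_t + u_t)
        J.backward()                                # co-states, Eq. (8), via autograd
        opt.step()                                  # ascent on the Hamiltonian, Eq. (9)
    return [u.detach() for u in controls]
```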
Given a perturbed data sample $x_{\epsilon,0}$, we denote its perturbation-free counterpart as $x_0$, so that $z = x_{\epsilon,0} - x_0$. We consider a general perturbation where $z$ is the direct sum of two orthogonal contributions: $z_{\|}$, which is a perturbation within the data manifold (subspace), and $z_{\perp}$, which is a perturbation in the orthogonal complement of the data manifold. This case is general: if we consider adversarial attacks, then the perturbation along the orthogonal complement dominates. In contrast, if we consider random perturbations, then the two perturbations are on the same scale. Our formulation covers both extreme scenarios, together with intermediate cases. We use an orthogonal projection as the embedding function, $E_t = V^r_t (V^r_t)^T$, where $V^r_t$ consists of the first $r$ columns of the eigenvectors computed by principal component analysis on a collection of states $x_t$. The proposed CLC-NN minimizes $\|x_{\epsilon,t} - x_t\|_2^2$ by reducing the components of $x_{\epsilon,t}$ that lie in the orthogonal complement of $\mathcal{Z}^t_{\|}$. The following theorem provides an error estimation between $x_{\epsilon,t}$ and $x_t$.

Theorem 1. For $t \geq 1$, we have the error estimation

$$\|x_{\epsilon,t} - x_t\|_2^2 \leq \|\theta_{t-1} \cdots \theta_0\|_2^2 \cdot \Big( \alpha^{2t} \|z_{\perp}\|_2^2 + \|z_{\|}\|_2^2 + \gamma_t \|z\|_2^2 \big( \gamma_t \alpha^2 (1 - \alpha^{t-1})^2 + 2(\alpha - \alpha^t) \big) \Big), \quad (10)$$

where $\gamma_t := \max_{s \leq t} \big(1 + \kappa(\theta^s)^2\big) \|I - (\theta^s)^T \theta^s\|_2$ with $\theta^s := \theta_{s-1}\cdots\theta_0$ (and $\theta^0 := I$), and $\alpha = \frac{c}{1+c}$, where $c$ is the control regularization. In particular, the equality

$$\|x_{\epsilon,t} - x_t\|_2^2 = \alpha^{2t} \|z_{\perp}\|_2^2 + \|z_{\|}\|_2^2 \quad (11)$$

holds when all $\theta_t$ are orthogonal.

The detailed derivation is presented in Appendix A. Let us summarize the insights from Theorem 1.

• The above error estimation is general for any input perturbation. It shows the working principle behind the proposed CLC-NN: controlling the perturbation that lies in the orthogonal complement of the input subspace ($z_{\perp}$).

• The above error estimation improves as the control regularization $c$ goes to 0 (so $\alpha \to 0$). It is not the sharpest possible, as it relies on a greedily optimal control at each layer. The globally optimal control defined by the Riccati equation may achieve a lower loss when $c \neq 0$.

• When the dimension $r$ of the embedding subspace decreases, our control becomes more effective in reducing $\|x_{\epsilon,t} - x_t\|_2^2$. This means that the control approach works best when the data is constrained on a low-dimensional manifold, which is consistent with the manifold hypothesis. In particular, observe that as $r \to 0$, $\|z_{\|}\|_2^2 \to 0$.

• The obtained upper bound is tight: the estimated upper bound becomes the actual error if all the forward propagation layers are orthogonal matrices.

6 NUMERICAL EXPERIMENTS

We test our proposed CLC-NN framework under various input data perturbations. Here we briefly summarize our experimental settings, and we refer readers to Appendix B for the details.

• Original Networks without Close-Loop Control. We choose residual neural networks (He et al., 2016) with ReLU activation functions as our target for close-loop control. In order to show that CLC-NN can improve the robustness in various settings, we consider networks from both standard and adversarial training. We consider multiple adversarial training methods: the fast gradient sign method (FGSM) (Goodfellow et al., 2014), projected gradient descent (PGD) (Madry et al., 2017), and label smoothing training (Label Smooth) (Hazan et al., 2017).

• Input Perturbations. In order to test our CLC-NN framework, we perturb the input data within a radius of $\epsilon$, with $\epsilon = 2, 4$, and $8$, respectively.
We consider various perturbations, including non-adversarial perturbations with the manifold-based attack (Jalal et al., 2017) (Manifold), as well as adversarial attacks such as FGSM, PGD, and the CW method (Carlini & Wagner, 2017).

• CLC-NN Implementations. We consider both linear and nonlinear embedding in our close-loop control. Specifically, we employ a principal component analysis with a 1% truncation error for linear embedding, and convolutional auto-encoders for nonlinear embedding. We use Adam (Kingma & Ba, 2014) to maximize the Hamiltonian function (9) and keep the same hyperparameters (learning rate, maximum iterations) for each model against all perturbations.

Result Summary: Table 1 and Table 2 show the results for both the CIFAR-10 and CIFAR-100 datasets on neural networks from standard training and adversarial training, respectively.

• CLC-NN significantly improves the robustness of neural networks from standard training. Table 1 shows that the baseline network trained on a clean data set becomes completely vulnerable (with almost 0% accuracy) under PGD and CW attacks. Our CLC-NN improves its accuracy to nearly 40% and 80% under PGD and CW attacks, respectively. The accuracy under FGSM attacks is almost doubled by our CLC-NN method. The accuracy on clean data is slightly decreased, because the lower-dimensional embedding functions cannot exactly capture $\mathcal{Z}_{\|}$ or $\mathcal{M}$.

• CLC-NN further improves the robustness of adversarially trained networks. Table 2 shows that while an adversarially trained network is inherently robust against certain types of perturbations, CLC-NN strengthens its robustness significantly against various perturbations.

• The robustness improvement of adversarially trained networks is less significant. This is expected, because the trajectory of perturbed data lies on the embedding subspace $\mathcal{Z}_{\|}$ if that data sample has been used in adversarial training. However, our experiments show that applying CLC-NN to adversarially trained networks achieves the best performance under most attacks.

Comparison with PixelDefend (Song et al., 2017). Our method achieves similar performance on CIFAR-10 with a slightly different experimental setting. Specifically, PixelDefend improved the robustness of a normally trained 62-layer ResNet from 0% to 78% against the CW attack. Our proposed CLC-NN improves the robustness of a 20-layer ResNet from 0% to 81% against CW attacks. Furthermore, we show that CLC-NN is robust against the manifold-based attack. No result was reported for CIFAR-100 in Song et al. (2017).

Comparison with Reactive Defense. Reactive defenses can be understood as applying a control only at the initial condition of a dynamical system. Specifically, a reactive defense equipped with linear embedding admits the following dynamics:

$$x_{t+1} = f(x_t, \theta_t), \quad \text{s.t.} \;\; x_0 = V^r_0 (V^r_0)^T x_{\epsilon,0}. \quad (12)$$

By contrast, CLC-NN controls all hidden states and results in a decreasing error as the number of layers $T$ increases (cf. Theorem 1). To quantitatively compare CLC-NN with reactive defense, we implement both with the same linear embedding functions and against all perturbations. In Table 3, CLC-NN outperforms reactive defense in almost all cases, while their relative performance on clean data is case-dependent.

7 CONCLUSION

We have proposed a close-loop control formulation to improve the robustness of neural networks. We have studied the embedding of the state trajectory during forward propagation to define the optimal control objective function.
The numerical experiments have shown that our method can improve the robustness of a trained neural network against various perturbations. We have provided an error estimation for the proposed method in the linear case. Our current implementation uses Pontryagin's Maximum Principle and an online iterative algorithm to overcome the intractability of solving a dynamic program. This online process adds extra inference time. In the future, we plan to extend the theoretical analysis to the nonlinear embedding case.

Acknowledgement Zhuotong Chen and Zheng Zhang are supported by NSF CAREER Award No. 1846476 and NSF CCF No. 1817037. Qianxiao Li is supported by the start-up grant under the NUS PYP programme.

A APPENDIX A ERROR ESTIMATION FOR THE PROPOSED CLC-NN

Preliminaries. We define the performance index at time $t$ as

$$J(x_t, u_t) = \frac{1}{2} \|Q_t(x_t + u_t)\|_2^2 + \frac{c}{2} \|u_t\|_2^2, \quad (13)$$

where $Q_t = I - V^r_t (V^r_t)^T$, and $V^r_t$ is the linear projection matrix at time $t$ whose columns are the first $r$ principal components corresponding to the largest $r$ eigenvalues. The optimal feedback control is defined as $u^*_t(x_t) = \arg\min_{u_t} J(x_t, u_t)$. Due to the linear system and quadratic performance index, the optimal feedback control admits an analytic solution, obtained by taking the gradient of the performance index (Eq. (13)) and setting it to 0:

$$\nabla_u J(x_t, u_t) = \nabla_u \Big( \frac{1}{2} \|Q_t(x_t + u_t)\|_2^2 + \frac{c}{2} \|u_t\|_2^2 \Big) = Q_t^T Q_t x_t + Q_t^T Q_t u_t + c \cdot u_t,$$

which leads to the analytic solution

$$u^*_t(x_t) = -(c \cdot I + Q_t^T Q_t)^{-1} Q_t^T Q_t x_t. \quad (14)$$

The analytic control solution $u^*_t$ optimizes the performance index instantly at time step $t$; the error measured by Eq. (13) for the dynamic programming solution $x_{\epsilon,t}$ must therefore be smaller than or equal to that of the state trajectory equipped with the $u^*_t$ defined by Eq. (14), which gives a guaranteed upper bound for the error estimation of the dynamic programming solution. We define the feedback gain matrix $K_t = (c \cdot I + Q_t^T Q_t)^{-1} Q_t^T Q_t$, so that the one-step optimal feedback control can be represented as $u^*_t = -K_t x_t$. The difference between the controlled system with a perturbation applied at the initial condition and the uncontrolled system without perturbation is

$$x_{\epsilon,t+1} - x_{t+1} = \theta_t(x_{\epsilon,t} + u_t - x_t) = \theta_t(x_{\epsilon,t} - K_t x_{\epsilon,t} - x_t). \quad (15)$$

The control objective is to minimize the state components that span the orthogonal complement of the data manifold, $(I - V^r_t (V^r_t)^T)$. When the input to the feedback control stays entirely in the state manifold, so that $\|(I - V^r_t (V^r_t)^T) x_t\|_2^2 = 0$, the feedback control satisfies $K_t x_t = 0$. The state difference of Eq. (15) can be further rewritten by adding a zero term $(\theta_t K_t x_t)$:

$$x_{\epsilon,t+1} - x_{t+1} = \theta_t (I - K_t) x_{\epsilon,t} - \theta_t x_t + \theta_t K_t x_t = \theta_t (I - K_t)(x_{\epsilon,t} - x_t). \quad (16)$$

In the following, we show a transformation of the control dynamics term $(I - K_t)$ based on its definition.

Lemma 1. For $t \geq 0$, we have $I - K_t = \alpha \cdot I + (1 - \alpha) \cdot P_t$, where $P_t := V^r_t (V^r_t)^T$ is the orthogonal projection onto $\mathcal{Z}^t_{\|}$, and $\alpha := \frac{c}{1+c}$, so that $\alpha \in [0, 1]$.

Proof. Recall that $K_t = (c \cdot I + Q_t^T Q_t)^{-1} Q_t^T Q_t$ and $Q_t = I - V^r_t (V^r_t)^T$. $Q_t$ can be diagonalized as $Q_t = V_t \, \mathrm{diag}(0, \ldots, 0, 1, \ldots, 1) \, V_t^T$, where the first $r$ diagonal elements have the common value 0 and the last $(d - r)$ diagonal elements have the common value 1. Furthermore, the feedback gain matrix can be diagonalized as $K_t = V_t \, \mathrm{diag}(0, \ldots, 0, \tfrac{1}{1+c}, \ldots, \tfrac{1}{1+c}) \, V_t^T$, where the last $(d - r)$ diagonal elements have the common value $\tfrac{1}{1+c}$.
The control term $(I - K_t)$ can thus be represented as

$$I - K_t = V_t \, \mathrm{diag}\Big(1, \ldots, 1, \frac{c}{1+c}, \ldots, \frac{c}{1+c}\Big) V_t^T,$$

where the first $r$ diagonal elements have the common value 1 and the last $(d - r)$ diagonal elements have the common value $\frac{c}{1+c}$. Denoting the first $r$ columns of $V_t$ as $V^r_t$ and the last $(d - r)$ columns as $\hat{V}^r_t$, this can be further written as

$$I - K_t = V^r_t (V^r_t)^T + \frac{c}{1+c} \, \hat{V}^r_t (\hat{V}^r_t)^T = P_t + \alpha (I - P_t) = \alpha \cdot I + (1 - \alpha) \cdot P_t. \qquad \square$$

Oblique Projections. Let $P$ be a linear operator on $\mathbb{R}^d$.
• We say that $P$ is a projection if $P^2 = P$.
• $P$ is an orthogonal projection if $P = P^T = P^2$.
• If $P^2 = P$ but $P \neq P^T$, it is called an oblique projection.

Proposition 2. For a projection $P$:
1. If $P$ is an orthogonal projection, then $\|P\|_2 = 1$.
2. If $P$ is an oblique projection, then $\|P\|_2 > 1$.
3. If $P$, $Q$ are two projections such that $\mathrm{range}(P) = \mathrm{range}(Q)$, then $PQ = Q$ and $QP = P$.
4. If $P$ is a projection, then $\mathrm{rank}(P) = \mathrm{Tr}(P)$. Furthermore, if $P$ is an orthogonal projection, then $\mathrm{rank}(P) = \|P\|_F^2 = \mathrm{Tr}(P P^T)$.

Define, for $t \geq 0$,

$$P^0_t := P_t, \qquad P^{s+1}_t := \theta_{t-s-1}^{-1} P^s_t \theta_{t-s-1}, \quad s = 0, 1, \ldots, t-1.$$

Lemma 3. Let $P^s_t$ be defined as above for $0 \leq s \leq t$. Then:
1. $P^s_t$ is a projection.
2. $P^s_t$ is a projection onto $\mathcal{Z}^{t-s}_{\|}$, i.e., $\mathrm{range}(P^s_t) = \mathcal{Z}^{t-s}_{\|}$.
3. $\|P^s_t\|_F^2 \leq \kappa(\theta_{t-1}\theta_{t-2}\cdots\theta_{t-s})^2 \cdot r$, where $\kappa(A)$ is the condition number of $A$, i.e., $\kappa(A) = \|A\|_2 \cdot \|A^{-1}\|_2$, and $r = \mathrm{rank}(\mathcal{Z}^0_{\|}) = \mathrm{rank}(\mathcal{Z}^1_{\|}) = \ldots = \mathrm{rank}(\mathcal{Z}^t_{\|})$.

Proof.
1. We prove it by induction on $s$ for each $t$. For $s = 0$, $P^0_t = P_t$, which is a projection by definition. Suppose it is true for $s$, so that $P^s_t = P^s_t P^s_t$; then for $s + 1$,
$$(P^{s+1}_t)^2 = (\theta_{t-s-1}^{-1} P^s_t \theta_{t-s-1})^2 = \theta_{t-s-1}^{-1} (P^s_t)^2 \theta_{t-s-1} = \theta_{t-s-1}^{-1} P^s_t \theta_{t-s-1} = P^{s+1}_t.$$
2. We prove it by induction on $s$ for each $t$. For $s = 0$, $P^0_t = P_t$, which is the orthogonal projection onto $\mathcal{Z}^t_{\|}$. Suppose it is true for $s$, so that $P^s_t$ is a projection onto $\mathcal{Z}^{t-s}_{\|}$; then for $s + 1$, $P^{s+1}_t = \theta_{t-s-1}^{-1} P^s_t \theta_{t-s-1}$, which implies
$$\mathrm{range}(P^{s+1}_t) = \mathrm{range}(\theta_{t-s-1}^{-1} P^s_t) = \{\theta_{t-s-1}^{-1} x : x \in \mathcal{Z}^{t-s}_{\|}\} = \mathcal{Z}^{t-s-1}_{\|}.$$
3. We use the inequalities $\|AB\|_F \leq \|A\|_2 \|B\|_F$ and $\|AB\|_F \leq \|A\|_F \|B\|_2$. By the definition of $P^s_t$,
$$P^s_t = (\theta_{t-1}\theta_{t-2}\cdots\theta_{t-s})^{-1} P^0_t (\theta_{t-1}\theta_{t-2}\cdots\theta_{t-s}),$$
we have
$$\|P^s_t\|_F^2 \leq \|(\theta_{t-1}\theta_{t-2}\cdots\theta_{t-s})^{-1}\|_2^2 \cdot \|\theta_{t-1}\theta_{t-2}\cdots\theta_{t-s}\|_2^2 \cdot \|P^0_t\|_F^2 \leq \kappa(\theta_{t-1}\theta_{t-2}\cdots\theta_{t-s})^2 \cdot r,$$
where the last step uses Proposition 2(4). $\square$

The following lemma uses the concept of oblique projection to show a recursive relationship that projects any $t$-th state space of Eq. (16) back to the input data space.

Lemma 4. Define, for $0 \leq s \leq t$, $G^s_t := \alpha \cdot I + (1 - \alpha) P^s_t$. Then Eq. (16) can be written as

$$x_{\epsilon,t} - x_t = (\theta_{t-1}\theta_{t-2}\cdots\theta_0)(G^{t-1}_{t-1} G^{t-2}_{t-2} \cdots G^0_0)(x_{\epsilon,0} - x_0), \quad t \geq 1.$$

Proof. We prove it by induction on $t$. For $t = 1$, by the definition of $G^s_t$ and the transformation from Lemma 1,

$$x_{\epsilon,1} - x_1 = \theta_0 (I - K_0)(x_{\epsilon,0} - x_0) = \theta_0 (\alpha \cdot I + (1 - \alpha) \cdot P_0)(x_{\epsilon,0} - x_0) = \theta_0 G^0_0 (x_{\epsilon,0} - x_0).$$

Suppose it is true for $(x_{\epsilon,t} - x_t)$. Using Eq. (16) and Lemma 1, we have

$$x_{\epsilon,t+1} - x_{t+1} = \theta_t (I - K_t)(x_{\epsilon,t} - x_t) = \theta_t G^0_t (\theta_{t-1}\theta_{t-2}\cdots\theta_0)(G^{t-1}_{t-1} G^{t-2}_{t-2} \cdots G^0_0)(x_{\epsilon,0} - x_0). \quad (17)$$

Recall the definitions $P^{s+1}_t := \theta_{t-s-1}^{-1} P^s_t \theta_{t-s-1}$ and $G^s_t := \alpha \cdot I + (1 - \alpha) P^s_t$; we have

$$G^{s+1}_t = \alpha \cdot I + (1 - \alpha) \cdot \theta_{t-s-1}^{-1} P^s_t \theta_{t-s-1} = \theta_{t-s-1}^{-1} \big(\alpha \cdot I + (1 - \alpha) \cdot P^s_t\big) \theta_{t-s-1} = \theta_{t-s-1}^{-1} G^s_t \theta_{t-s-1},$$

which yields the equality for the oblique projections; furthermore, $\theta_{t-s-1} G^{s+1}_t = G^s_t \theta_{t-s-1}$. Applying the above to Eq.
(17) results in

$$x_{\epsilon,t+1} - x_{t+1} = \theta_t G^0_t (\theta_{t-1}\theta_{t-2}\cdots\theta_0)(G^{t-1}_{t-1}\cdots G^0_0)(x_{\epsilon,0} - x_0) = (\theta_t \theta_{t-1}) G^1_t (\theta_{t-2}\theta_{t-3}\cdots\theta_0)(G^{t-1}_{t-1}\cdots G^0_0)(x_{\epsilon,0} - x_0) = \cdots = (\theta_t \theta_{t-1}\cdots\theta_0)(G^t_t G^{t-1}_{t-1}\cdots G^0_0)(x_{\epsilon,0} - x_0). \qquad \square$$

Lemma 5. Let $F_t := G^{t-1}_{t-1} G^{t-2}_{t-2} \cdots G^0_0$, $t \geq 1$. Then

$$F_t = \alpha^t \cdot I + (1 - \alpha) \sum_{s=0}^{t-1} \alpha^s P^s_s.$$

Proof. We prove it by induction on $t$. Recall the definition $G^s_t := \alpha \cdot I + (1 - \alpha) \cdot P^s_t$. When $t = 1$, $F_1 = G^0_0 = \alpha \cdot I + (1 - \alpha) \cdot P^0_0$. Suppose it is true for $t$, so that

$$F_t = G^{t-1}_{t-1} G^{t-2}_{t-2} \cdots G^0_0 = \alpha^t \cdot I + (1 - \alpha) \sum_{s=0}^{t-1} \alpha^s P^s_s.$$

For $t + 1$,

$$F_{t+1} = G^t_t F_t = (\alpha \cdot I + (1 - \alpha) \cdot P^t_t)\Big(\alpha^t \cdot I + (1 - \alpha) \sum_{s=0}^{t-1} \alpha^s P^s_s\Big) = \alpha^{t+1} \cdot I + \alpha^t (1 - \alpha) P^t_t + (1 - \alpha)^2 \sum_{s=0}^{t-1} \alpha^s \, P^t_t P^s_s + \alpha(1 - \alpha) \sum_{s=0}^{t-1} \alpha^s \, P^s_s.$$

Recall from Lemma 3 that $\mathrm{range}(P^t_t) = \mathrm{range}(P^s_s) = \mathcal{Z}^0_{\|}$. According to Proposition 2(3), $P^t_t P^s_s = P^s_s$. Hence,

$$F_{t+1} = \alpha^{t+1} \cdot I + \alpha^t (1 - \alpha) \cdot P^t_t + (1 - \alpha) \sum_{s=0}^{t-1} \alpha^s \cdot P^s_s = \alpha^{t+1} \cdot I + (1 - \alpha) \sum_{s=0}^{t} \alpha^s \cdot P^s_s. \qquad \square$$

Lemma 6. Let $V \in \mathbb{R}^{d \times r}$ be a matrix whose columns are an orthogonal basis for a subspace $\mathcal{D}$, and let $\theta \in \mathbb{R}^{d \times d}$ be invertible. Let $P = V V^T$ be the orthogonal projection onto $\mathcal{D}$. Denote by $\hat{P}$ the orthogonal projection onto $\theta\mathcal{D} := \{\theta x : x \in \mathcal{D}\}$. Then:
1. $\theta^{-1} \hat{P} \theta$ is an oblique projection onto $\mathcal{D}$.
2. $\|\theta^{-1} \hat{P} \theta - P\|_2 \leq (1 + \kappa(\theta)^2) \cdot \|I - \theta^T \theta\|_2$.

In particular, the last inequality shows that $\theta^{-1} \hat{P} \theta = P$ if $\theta$ is orthogonal.

Proof.
1. $(\theta^{-1} \hat{P} \theta)^2 = \theta^{-1} \hat{P}^2 \theta = \theta^{-1} \hat{P} \theta$; therefore, $\theta^{-1} \hat{P} \theta$ is a projection.
2. Since $\hat{P}$ is the orthogonal projection onto the column space of $\theta V$,

$$\hat{P} = \theta V \big[(\theta V)^T (\theta V)\big]^{-1} (\theta V)^T = \theta V \big[V^T \theta^T \theta V\big]^{-1} V^T \theta^T, \qquad \theta^{-1} \hat{P} \theta = V \big[V^T \theta^T \theta V\big]^{-1} V^T \theta^T \theta.$$

Furthermore,

$$\|\theta^{-1}\hat{P}\theta - P\|_2 = \|V [V^T\theta^T\theta V]^{-1} V^T\theta^T\theta - VV^T\|_2 \leq \|V [V^T\theta^T\theta V]^{-1} V^T\theta^T\theta - VV^T\theta^T\theta\|_2 + \|VV^T\theta^T\theta - VV^T\|_2 \leq \|V\big([V^T\theta^T\theta V]^{-1} - I\big)V^T\|_2 \cdot \|\theta^T\theta\|_2 + \|\theta^T\theta - I\|_2 \leq \|[V^T\theta^T\theta V]^{-1}\|_2 \cdot \|I - V^T\theta^T\theta V\|_2 \cdot \|\theta^T\theta\|_2 + \|\theta^T\theta - I\|_2 \leq \|[V^T\theta^T\theta V]^{-1}\|_2 \cdot \|I - \theta^T\theta\|_2 \cdot \|\theta^T\theta\|_2 + \|\theta^T\theta - I\|_2.$$

We further bound $\|[V^T\theta^T\theta V]^{-1}\|_2$:

$$\|[V^T\theta^T\theta V]^{-1}\|_2 = \big(\lambda_{\min}(V^T\theta^T\theta V)\big)^{-1} = \Big(\inf_{\|x\|_2=1} x^T V^T\theta^T\theta V x\Big)^{-1} \leq \Big(\inf_{\|x'\|_2=1} (x')^T\theta^T\theta x'\Big)^{-1} = \big(\lambda_{\min}(\theta^T\theta)\big)^{-1} = \|(\theta^T\theta)^{-1}\|_2.$$

Hence, we have

$$\|\theta^{-1}\hat{P}\theta - P\|_2 \leq \big(1 + \|\theta^T\theta\|_2 \cdot \|(\theta^T\theta)^{-1}\|_2\big) \cdot \|I - \theta^T\theta\|_2 = (1 + \kappa(\theta)^2) \cdot \|I - \theta^T\theta\|_2. \qquad \square$$

Corollary 1. Let $t \geq 1$. Then for each $s = 0, 1, \cdots, t$, we have

$$\|P^s_s - P_0\|_2 \leq (1 + \kappa(\theta^s)^2) \cdot \|I - (\theta^s)^T \theta^s\|_2,$$

where
• $\theta^s := \theta_{s-1}\cdots\theta_0$ for $s \geq 1$;
• $\theta^0 := I$ for $s = 0$.

Observe that $P^s_s = (\theta^s)^{-1} P_s \theta^s$. Using Lemma 6, we arrive at the main theorem.

Theorem 1. For $t \geq 1$, we have the error estimation

$$\|x_{\epsilon,t} - x_t\|_2^2 \leq \|\theta_{t-1}\cdots\theta_0\|_2^2 \cdot \Big( \alpha^{2t}\|z_{\perp}\|_2^2 + \|z_{\|}\|_2^2 + \gamma_t\|z\|_2^2 \big( \gamma_t \alpha^2 (1 - \alpha^{t-1})^2 + 2(\alpha - \alpha^t) \big) \Big),$$

where $\gamma_t := \max_{s \leq t} (1 + \kappa(\theta^s)^2)\|I - (\theta^s)^T\theta^s\|_2$ and $\alpha = \frac{c}{1+c}$, with $c$ the control regularization. In particular, the equality

$$\|x_{\epsilon,t} - x_t\|_2^2 = \alpha^{2t}\|z_{\perp}\|_2^2 + \|z_{\|}\|_2^2$$

holds when all $\theta_t$ are orthogonal.

Proof. The input perturbation $z = x_{\epsilon,0} - x_0$ can be written as $z = z_{\|} + z_{\perp}$, where $z_{\|}$ and $z_{\perp}$ are vectors such that
• $z_{\|} \cdot z_{\perp} = 0$ almost surely;
• $z_{\|}$ and $z_{\perp}$ have uncorrelated components;
• $z_{\|} \in \mathcal{D}$ and $z_{\perp} \in \mathcal{D}^{\perp}$.
Since $z_{\|}$ and $z_{\perp}$ are orthogonal almost surely, recalling Lemma 4,

$$\|x_{\epsilon,t} - x_t\|_2^2 = \|(\theta_{t-1}\theta_{t-2}\cdots\theta_0)(G^{t-1}_{t-1}\cdots G^0_0) z\|_2^2 \leq \|\theta_{t-1}\theta_{t-2}\cdots\theta_0\|_2^2 \cdot \|(G^{t-1}_{t-1}\cdots G^0_0) z\|_2^2. \quad (18)$$

For the term $\|(G^{t-1}_{t-1}\cdots G^0_0) z\|_2^2$, recalling Lemma 5,

$$\|(G^{t-1}_{t-1}\cdots G^0_0) z\|_2^2 = \Big\|\Big(\alpha^t \cdot I + (1-\alpha)\sum_{s=0}^{t-1}\alpha^s \cdot P^s_s\Big) z\Big\|_2^2 = \Big\|\alpha^t z + (1-\alpha)\sum_{s=0}^{t-1}\alpha^s P_0 z + (1-\alpha)\sum_{s=0}^{t-1}\alpha^s (P^s_s - P_0) z\Big\|_2^2 = \Big\|\alpha^t z + (1-\alpha^t) z_{\|} + (1-\alpha)\sum_{s=0}^{t-1}\alpha^s (P^s_s - P_0) z\Big\|_2^2,$$

where $P_0$ is the orthogonal projection at $t = 0$ (the input data space), so $P_0 z = z_{\|}$; furthermore, for $s = 0$, $P^s_s - P_0 = 0$. Thus,

$$\|(G^{t-1}_{t-1}\cdots G^0_0) z\|_2^2 = \alpha^{2t}\|z\|_2^2 + (1-\alpha^t)^2\|z_{\|}\|_2^2 + (1-\alpha)^2 \sum_{s,q=1}^{t-1}\alpha^s\alpha^q z^T (P^s_s - P_0)^T (P^q_q - P_0) z + 2\alpha^t(1-\alpha^t)\|z_{\|}\|_2^2 + 2\alpha^t(1-\alpha)\sum_{s=1}^{t-1}\alpha^s z^T (P^s_s - P_0) z + 2(1-\alpha^t)(1-\alpha)\sum_{s=1}^{t-1}\alpha^s (z_{\|})^T (P^s_s - P_0) z$$

$$= \alpha^{2t}\|z_{\perp}\|_2^2 + \big(\alpha^{2t} + 2\alpha^t(1-\alpha^t) + (1-\alpha^t)^2\big)\|z_{\|}\|_2^2 + (1-\alpha)^2\sum_{s,q=1}^{t-1}\alpha^s\alpha^q z^T(P^s_s - P_0)^T(P^q_q - P_0)z + 2\alpha^t(1-\alpha)\sum_{s=1}^{t-1}\alpha^s z^T(P^s_s - P_0)z + 2(1-\alpha^t)(1-\alpha)\sum_{s=1}^{t-1}\alpha^s(z_{\|})^T(P^s_s - P_0)z$$

$$= \alpha^{2t}\|z_{\perp}\|_2^2 + \|z_{\|}\|_2^2 + (1-\alpha)^2\sum_{s,q=1}^{t-1}\alpha^s\alpha^q z^T(P^s_s - P_0)^T(P^q_q - P_0)z + 2\alpha^t(1-\alpha)\sum_{s=1}^{t-1}\alpha^s z^T(P^s_s - P_0)z + 2(1-\alpha^t)(1-\alpha)\sum_{s=1}^{t-1}\alpha^s(z_{\|})^T(P^s_s - P_0)z.$$

Using Corollary 1, we have
• $z^T(P^s_s - P_0)z \leq \|z\|_2^2 \cdot \|P^s_s - P_0\|_2 \leq \gamma_t\|z\|_2^2$;
• $z^T(P^s_s - P_0)^T(P^q_q - P_0)z \leq \|z\|_2^2 \cdot \|P^s_s - P_0\|_2 \cdot \|P^q_q - P_0\|_2 \leq \gamma_t^2\|z\|_2^2$;
• $(z_{\|})^T(P^s_s - P_0)z \leq \gamma_t\|z_{\|}\|_2 \cdot \|z\|_2 \leq \gamma_t\|z\|_2^2$.

Thus, we have

$$\|(G^{t-1}_{t-1}\cdots G^0_0) z\|_2^2 \leq \alpha^{2t}\|z_{\perp}\|_2^2 + \|z_{\|}\|_2^2 + \alpha^2(1-\alpha^{t-1})^2\gamma_t^2\|z\|_2^2 + 2\alpha^{t+1}(1-\alpha^{t-1})\gamma_t\|z\|_2^2 + 2\alpha(1-\alpha^t)(1-\alpha^{t-1})\gamma_t\|z\|_2^2 = \alpha^{2t}\|z_{\perp}\|_2^2 + \|z_{\|}\|_2^2 + \gamma_t\|z\|_2^2\big(\gamma_t\alpha^2(1-\alpha^{t-1})^2 + 2(\alpha - \alpha^t)\big).$$

Recalling the error estimation in Eq. (18),

$$\|x_{\epsilon,t} - x_t\|_2^2 \leq \|\theta_{t-1}\theta_{t-2}\cdots\theta_0\|_2^2 \cdot \|(G^{t-1}_{t-1}\cdots G^0_0)z\|_2^2 \leq \|\theta_{t-1}\cdots\theta_0\|_2^2 \cdot \Big(\alpha^{2t}\|z_{\perp}\|_2^2 + \|z_{\|}\|_2^2 + \gamma_t\|z\|_2^2\big(\gamma_t\alpha^2(1-\alpha^{t-1})^2 + 2(\alpha-\alpha^t)\big)\Big).$$

In the specific case when all $\theta_t$ are orthogonal,

$$\gamma_t := \max_{s\leq t}(1 + \kappa(\theta^s)^2)\|I - (\theta^s)^T\theta^s\|_2 = 0,$$

and thus $\|x_{\epsilon,t} - x_t\|_2^2 = \alpha^{2t}\|z_{\perp}\|_2^2 + \|z_{\|}\|_2^2$. $\square$

B APPENDIX B DETAILS OF EXPERIMENTAL SETTING

B.1 NETWORK CONFIGURATIONS

Since the proposed CLC-NN optimizes the entire state trajectory, it is important to have a relatively smooth state trajectory: when the reconstruction loss $\|E_t(x_t) - x_t\|_2^2$ at layer $t$ is small, the reconstruction losses at its adjacent layers should also be small. For this reason, we use a residual neural network (He et al., 2016) as the network candidate to retain smoother dynamics. The configuration of the residual neural network used for both CIFAR-10 and CIFAR-100 is shown in Tab. 4. Based on this configuration, we construct 4 embedding functions applied at the input space and at the outputs of the initial layer and residual blocks 1 and 2. The output of residual block 3 is embedded with a linear orthogonal projection. We randomly select 5000 clean training data to collect state trajectories at all 5 locations.

• For the linear orthogonal projections: we apply principal component analysis to each of the state collections. We retain the first $r$ columns of the resulting basis, such that $r = \arg\min\{i : \frac{\lambda_1 + \ldots + \lambda_i}{\lambda_1 + \ldots + \lambda_d} \geq 1 - \delta\}$, where $\delta = 0.1$.

• For the nonlinear embedding: we train 4 convolutional auto-encoders for the input space and the outputs of the initial layer and residual blocks 1 and 2. All of the embedding functions are trained individually. We adopt a shallow convolutional auto-encoder structure to gain fast inference speed, in which case CLC-NN equipped with linear embedding often outperforms the nonlinear embedding, as shown in Tab. 1. The configuration of all 4 convolutional auto-encoders is shown in Tab. 5.
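The truncation rule above translates directly into code. A small sketch follows, assuming flattened hidden states and $\delta = 0.1$ as stated; the function name is illustrative.

```python
# Sketch of the truncation rule
# r = argmin{ i : (l_1 + ... + l_i) / (l_1 + ... + l_d) >= 1 - delta }.
import torch

def select_rank(states, delta=0.1):
    """states: (N, d) tensor of clean hidden states collected at one layer."""
    X = states - states.mean(dim=0)
    # Squared singular values of X are proportional to the PCA eigenvalues.
    lam = torch.linalg.svdvals(X) ** 2
    ratio = torch.cumsum(lam, dim=0) / lam.sum()
    # First index whose cumulative energy reaches 1 - delta (0-based), plus one.
    return int(torch.searchsorted(ratio, torch.tensor(1.0 - delta)).item()) + 1
```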
B.2 PERTURBATIONS AND DEFENSIVE TRAINING

In this section, we give details about the perturbations and robust networks considered in this work. The adversarial training objective function is

$$\min_{\theta\in\Theta} \; \max_{x_{\epsilon,0} = \Delta(x_0,\epsilon)} \; \mathbb{E}_{(x_0,y)\sim\mathcal{D}} \big[(1-\lambda)\cdot\Phi(x_{\epsilon,T}, y, \theta) + \lambda\cdot\Phi(x_T, y, \theta)\big],$$

where $\Delta(x_0, \epsilon)$ generates a perturbed data point from a given input $x_0$ within the range of $\epsilon$, and $\lambda$ balances standard accuracy against robustness. We choose $\lambda = 0.5$ in all adversarial training. For robust networks, we consider both perturbation-agnostic and perturbation-non-agnostic methods. A perturbation-agnostic adversarial training algorithm equipped with $\Delta(x_0, \epsilon)$ yields a network that is most robust against the $\Delta(x_0, \epsilon)$ perturbation used in training. On the contrary, perturbation-non-agnostic robust training methods are often robust against many types of perturbations.

• Adversarial training with the fast gradient sign method (FGSM) (Goodfellow et al., 2014) considers perturbed data as follows:

$$x_{\epsilon,0} = x_0 + \epsilon\,\mathrm{sign}(\nabla_{x_0}\Phi(x_T, y)), \quad (x_0, y)\sim\mathcal{D},$$

where $\mathrm{sign}(\cdot)$ outputs the sign of the input. FGSM thus considers the worst case within the range of $\epsilon$ along the direction of increasing gradient $\nabla_{x_0}\Phi(x_T, y)$. Due to this worst-case consideration, it does not scale well to deep networks; for this reason, we adversarially train the network with FGSM using $\epsilon = 4$, which is half of the maximum perturbation considered in this paper.

• Label smoothing training (Label Smooth) (Hazan et al., 2017) does not utilize any perturbation information $\Delta(x_0, \epsilon)$. It converts one-hot labels into soft targets by setting the correct class to $1 - \epsilon$, while the other classes have the value $\frac{\epsilon}{N-1}$, where $\epsilon$ is a smoothing constant and $N$ is the number of classes. Specifically, we set the correct-class target to 0.9 in this paper.

• Adversarial training with projected gradient descent (PGD) (Madry et al., 2017) generates adversarial data by iteratively running FGSM with a small step size, which results in stronger perturbations than FGSM within the same range $\epsilon$. We use 7-step PGD with $\epsilon = 2$ to generate adversarial data for robust training.

For perturbations, we consider the maximum ranges $\epsilon = 2, 4, 8$ to test the network robustness against both strong and weak perturbations. In this work, we test network robustness with the manifold-based attack (Jalal et al., 2017), FGSM (Goodfellow et al., 2014), 20-step PGD (Madry et al., 2017), and the CW attack (Carlini & Wagner, 2017).

B.3 ONLINE OPTIMIZATION

Optimization Methods. We use Adam (Kingma & Ba, 2014) with its default setting to maximize the Hamiltonian, Eq. (9). Solving the PMP in this way brings extra computational cost at inference. Each online iteration of solving the PMP requires a combination of a forward propagation (Eq. (7)), a backward propagation (Eq. (8)), and a maximization with respect to the control parameters (Eq. (9)), which has a computational cost approximately the same as performing one gradient-descent iteration when training a neural network. For the numerical results presented in the paper, we choose the maximum iteration count that gives the best performance from one of [5, 10, 20, 30, 50].

C MORE NUMERICAL EXPERIMENTS

The proposed CLC-NN is designed to be compatible with existing open-loop trained models. We show extra experiments by employing the proposed CLC-NN on two baseline models, including DenseNet-40 (Table 6). The layer-wise projection performs an orthogonal projection on the hidden state.
We define the local cost function at the $t$-th layer as

$$J(x_t, u_t) = \frac{1}{2}\|Q_t(x_t + u_t)\|_2^2 + \frac{c}{2}\|u_t\|_2^2,$$

and the layer-wise projection achieves the optimal solution at the local time $t$, $u^*_t(x_t) = \arg\min_{u_t} J(x_t, u_t)$. However, the layer-wise optimal control solution does not guarantee the optimum across all layers. In Table 7, we compare the proposed CLC-NN with the layer-wise projection: under all perturbations, the proposed CLC-NN outperforms the layer-wise projection.

D ROBUSTNESS AGAINST THE MANIFOLD-BASED ATTACK

The manifold-based attack (Jalal et al., 2017) (denoted as Manifold) has shown great success in breaking manifold-based defenses (Samangouei et al., 2018). The proposed CLC-NN can successfully defend against this adversarial attack, which is specifically designed against manifold-based defenses, and improves the robust accuracy from 1% to 81% for the standard trained model on CIFAR-10, and from 2% to 52% on CIFAR-100. We provide a detailed explanation for the successful defense of the proposed CLC-NN against such a strong adversarial attack. The existing manifold-based defense (Samangouei et al., 2018) focuses on detecting and de-noising the input components that do not lie within the underlying manifold. The overpowered attack proposed in Jalal et al. (2017) searches for adversarial perturbations within the embedded latent space, which is undetectable by manifold-based defenses and causes a complete failure of such defenses.

In a real implementation, the manifold-based attack (Jalal et al., 2017) is detectable and controllable under the proposed framework for the following reason. The numerically generated manifold embedding functions are not ideal. The error sources of non-ideal embedding functions are mainly the algorithm used to compute the manifold, the architecture of the embedding function, and the distribution shift between training and testing data (embedding functions fitted on training data do not perfectly agree with testing data). Consequently, even if the perturbation is undetectable and uncontrollable at the initial layer, as it propagates into the hidden layers, each layer amplifies it, and the perturbation becomes detectable and controllable in the hidden layers.

We randomly select a batch of testing data to generate the manifold-based attack following the same procedure proposed in Jalal et al. (2017). The proposed method improves the attacked accuracy from 1% to 78%. More specifically, we compare the differences of all hidden states spanning the orthogonal complement between a perturbed testing data point and its unperturbed counterpart, $\|P^{\perp}_t x_{\epsilon,t} - P^{\perp}_t x_t\|$, where $P^{\perp}_t$ is a projection onto the orthogonal complement. The difference grows as 0, 0.016, 0.0438, 0.0107, 0.0552 for the hidden states at layers 0, 1, 2, 3, 4, respectively. This supports the argument for how the proposed method is able to detect such a perturbation and control it in the hidden layers.

Furthermore, we provide some insight into the reasons behind the success of such an adversarial attack. This follows the same concept as the existence of adversarial attacks in neural networks: the highly nonlinear behavior of neural networks provides complex representative ability, but this powerful representation is also the source of their vulnerability. For example, a constant function has a 50% chance of making a correct prediction in a binary classification problem under any perturbation, but its performance is limited.
Therefore, we propose to use a linear embedding function that trades off embedding accuracy against robustness.

E DEFINITION OF THE THREAT MODEL

Generally, an attacker should not have access to the hidden states during inference; in particular, an attacker is not allowed to inject extra noise during inference. To define the threat model of the proposed method in the white-box setting, an attacker has access to both the network and all embedding functions. The condition under which a perturbation $\epsilon \cdot z$ makes our method vulnerable is

$$\sum_{t=0}^{T-1}\|E_t(x_{\epsilon,t}) - x_{\epsilon,t}\|_2^2 = 0, \quad x_{\epsilon,0} = x_0 + \epsilon\cdot z.$$

In words, the perturbation $\epsilon \cdot z$ applied to the input data must result in zero reconstruction loss across all hidden layers, which means its corresponding state trajectory does not span any of the orthogonal complements of the hidden state spaces. Conventional gradient-based attackers are not guaranteed to find an effective attack satisfying the above equation. A possible way is to perform a grid search, backward through the layers, for an adversarial attack satisfying the threat model condition, which is extremely costly.
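The condition above also suggests a simple detection heuristic: accumulate the layer-wise reconstruction losses along the uncontrolled trajectory and flag inputs whose total exceeds a small threshold. The sketch below assumes the linear dynamics and PCA embeddings from the earlier sketches; the threshold value is an illustrative assumption, since an exactly zero total is attained only by the idealized attack above.

```python
# Sketch: flag inputs whose accumulated reconstruction loss is nonzero, i.e.,
# whose trajectory leaves the hidden-state manifolds. Assumptions: linear
# dynamics x_{t+1} = theta_t x_t and PCA embeddings; tau is illustrative.
import torch

def total_reconstruction_loss(thetas, embeddings, x):
    """Sum of ||E_t(x_t) - x_t||_2^2 along the uncontrolled trajectory."""
    total = 0.0
    for theta, E in zip(thetas, embeddings):
        total = total + ((E(x) - x) ** 2).sum()
        x = x @ theta.T                     # uncontrolled step x_{t+1} = theta_t x_t
    return total

def is_suspicious(thetas, embeddings, x, tau=1e-3):
    """An attack evading CLC-NN must drive the total loss to exactly zero."""
    return total_reconstruction_loss(thetas, embeddings, x) > tau
```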
1. What is the main contribution of the paper regarding adaptive controllers in deep neural networks? 2. What are the strengths and weaknesses of the proposed approach, particularly in comparison with existing defense methods? 3. How does the reviewer assess the effectiveness of the approach in terms of practicality and robustness against adversarial examples? 4. What are some suggestions provided by the reviewer for improving the paper, especially regarding experimental evaluation and comparisons with other works?
Review
Review The paper builds on the recent revival of control-theoretic approaches to deep neural networks by proposing an adaptive controller that projects intermediate representations in the network to their "manifolds" and consequently makes the neural network robust to input perturbations.

Pros: Active controller-based projection of intermediate features is an interesting idea, and the use of Pontryagin's maximum principle to address the challenge of the high dimensionality of the state (features) is a good observation.

Cons: The manifold-based defense has been shown to be broken previously; for example, see Section 4 of https://arxiv.org/pdf/1712.09196.pdf . Manifold/GAN/VAE based defenses can be easily broken by just attacking the projection network and the original network together. The paper considers manifold-based defense at all layers using an active controller doing the projection. While the comparison with PixelDefend is useful, it would be good to launch an attack similar to the reference above and then observe the effectiveness of the approach. The arguments in the paper are not sufficient to convince the reviewer that this defense is practical. The paper is severely lacking in comparison with existing defense approaches. The state of the art for the dataset used in the paper is significantly better than the effectiveness of the presented approach; for example, see https://github.com/MadryLab/robustness

In summary, the paper is a good effort to exploit the use of active controllers in deep learning. But firstly, the use of manifold-based projection as a cost function is itself a non-robust defense against adversarial examples. Second, the experimental evaluation in the paper is significantly lacking and does not meet the standards of a venue such as ICLR. The reviewer strongly recommends reviewing the advice in https://arxiv.org/abs/1902.06705 on this topic. At this point, the paper is interesting, but it needs significant development and is not yet ready for publication.

Questions for the author: How critical is Pontryagin's Maximum Principle to the presented approach? What prevents one from using projection to a lower-dimensional embedding space followed by a state-space control method? In particular, if one is building on the manifold assumption, then isn't it reasonable to not worry about high dimensionality when designing the controller too? Is it realistic to assume the "input perturbation to be a random vector" in Section 5 for theoretical analysis when we are considering adversarial attacks such as PGD and CW? If not, then isn't Theorem 1 irrelevant to the primary topic of the paper?

------ After author's response: The response of the authors identifies the problem of using the running loss in the projected space. While one can try to get around it by projecting the loss function as well, that would be a convoluted way to solve the problem and, in any case, is not a strong criticism of the presented approach. The updated document has generalized the derivation to take general perturbations into account. The updated Tables 1-3 resolve the empirical-analysis concerns of the reviewer. With these improvements, the reviewer is happy to recommend acceptance of the paper.
ICLR
Title Towards Robust Neural Networks via Close-loop Control Abstract Despite their success in massive engineering applications, deep neural networks are vulnerable to various perturbations due to their black-box nature. Recent study has shown that a deep neural network can misclassify the data even if the input data is perturbed by an imperceptible amount. In this paper, we address the robustness issue of neural networks by a novel close-loop control method from the perspective of dynamic systems. Instead of modifying the parameters in a fixed neural network architecture, a close-loop control process is added to generate control signals adaptively for the perturbed or corrupted data. We connect the robustness of neural networks with optimal control using the geometrical information of underlying data to design the control objective. The detailed analysis shows how the embedding manifolds of state trajectory affect error estimation of the proposed method. Our approach can simultaneously maintain the performance on clean data and improve the robustness against many types of data perturbations. It can also further improve the performance of robustly trained neural networks against different perturbations. To the best of our knowledge, this is the first work that improves the robustness of neural networks with close-loop control 1. 1 INTRODUCTION Due to the increasing data and computing power, deep neural networks have achieved state-of-theart performance in many applications such as computer vision, natural language processing and recommendation systems. However, many deep neural networks are vulnerable to various malicious perturbations due to their black-box nature: a small (even imperceptible) perturbation of input data may lead to completely wrong predictions (Szegedy et al., 2013; Nguyen et al., 2015). This has been a major concern in some safety-critical applications such as autonomous driving (Grigorescu et al., 2020) and medical image analysis (Lundervold & Lundervold, 2019). Various perturbations have been reported, including the `p norm based attack (Madry et al., 2017; Moosavi-Dezfooli et al., 2016; Carlini & Wagner, 2017), semantic perturbation (Engstrom et al., 2017) etc. On the other side, some algorithms to improve the robustness against those perturbations have shown great success (Madry et al., 2017). However, most robustly trained models are tailored for certain types of perturbations, and they do not work well for other types of perturbations. Khoury & Hadfield-Menell (2018) showed the non-existence of optimal decision boundary for any `p-norm perturbation. Recent works (E, 2017; Haber & Ruthotto, 2017) have shown the connection between dynamical systems and neural networks. This dynamic system perspective provides some interesting theoretical insights about the robustness issue. Given a set of data x0 ∈ Rd and its labels y ∈ Rl with a joint distribution D, training a neural network can be considered as following min θ E (x0,y)∼D [Φ(xT ,y)], s.t. xt+1 = f(xt,θt), §Equal contributing authors. 1A Pytorch implementation can be found in:https://github.com/zhuotongchen/ Towards-Robust-Neural-Networks-via-Close-loop-Control.git where θ are the unknown parameters to train, and f , Φ represent the forward propagation rule and loss function (e.g. cross-entropy) respectively. 
The dynamical system perspective interprets the vulnerability of neural networks as a system instability issue, which addresses the state trajectory variation under small perturbations applied on initial conditions. The optimal control theory focuses on developing a control model to adjust the system state trajectory in an optimal manner. The first work that links and extends the classical back-propagation algorithm using optimal control theory was presented in Li et al. (2017), where the direct relationship between the Pontryagin’s Maximum Principle (Kirk, 1970) and the gradient based network training was established. Ye et al. (2019) used control theory to adjust the hyperparameters in the adversarial training algorithm. Han et al. (2018) established the mathematical basis of the optimal control viewpoint of deep learning. These existing works on algorithm development are open-loop control methods since they commonly treat the network weights θ as control parameters and keep them fixed once the training is done. The fixed control parameters θ operate optimally for data sampled from the data distribution D. However, various perturbation methods cause data distributions to deviate from the true distribution D (Song et al., 2017) and cause poor performance with the fixed open-loop control parameters. 1.1 PAPER CONTRIBUTIONS To address the limitation of using open-loop control methods, we propose the Close- Loop Control Neural Network (CLC-NN), the first close-loop control method to improve the robustness of neural networks. As shown in Fig. 1, our method adds additional blocks to a given T -layer neural network: embedding functions Et, which induce running losses in all layers that measure the discrepancies between true features and observed features under input perturbation, then control processes generate control variables ut to minimize the total running loss under various data perturbations. The original neural network can be designed by either standard training or robust training. In the latter case, our CLC-NN framework can achieve extra robustness against different perturbations. The forward propagation rule is thus modified with an extra control parameter ut ∈ Rd ′ xt+1 = f(xt,θt,ut). Fig. 1 should not be misunderstood as an open-loop control. From the perspective of dynamic systems, x0 is an initial condition, and the excitation input signal is ut (which is 0 in a standard feed-forward network). Therefore, the forward signal path is from ut to the internal states xt and then to the output label y. The path from xt to the embedding function Et(xt) and then to the excitation signal ut forms a feedback and closes the whole loop. The technical contributions of this paper are summarized below: • The proposed method relies on the well accepted assumption that the data and hidden state manifolds are low dimensional compared to the ambient dimension (Fefferman et al., 2016). We study the geometrical information of the data and hidden layers to define the objective function for control. Given a trained T -layer neural network, a set of embedding functions Et are trained off-line by minimizing the reconstruction loss ‖E(xt) − xt‖ over some clean data from D only. The embedding functions support defining a running loss required in our control method. • We define the control problem by dynamic programming and implement the online iterative solver based on the Pontryagin’s Maximum Principle to avoid the curse of dimensionality. 
The proposed close-loop control formulation does not require prior information of the perturbation. • We provide a theoretical error bound of the controlled system for the simplified case with linear activation functions and linear embedding. This error bound reveals how the close-loop control improves neural network robustness in the simplest setting. 2 RELATED WORKS Many techniques have been reported to improve the robustness of neural networks, such as data augmentation (Shorten & Khoshgoftaar, 2019), gradient masking (Liu et al., 2018), etc. We review adversarial training and reactive defense which are most relevant to this work. Adversarial Training. Adversarial training is (possibly) the most popular robust training method, and it solves a min-max robust optimization problem to minimize the worse-case loss with perturbed data. Adversarial training effectively regularizes the network’s local Lipschitz constants of the loss surface around the data manifold (Liu et al., 2018). Zhang et al. (2019) formulated the robustness training using the Pontryagon’s Maximum Principle, such open-loop control methods result in a set of fixed parameters that operates optimally on the considered perturbation. Liu et al. (2020a;b) considered a close-loop formulation from the differential dynamic programming perspective, this algorithm is categorized as a open-loop control method because it utilizes the state feedback information to boost the training convergence and results in a set of fixed controls for any unseen data. On the contrary, the proposed CLC-NN formulation adaptively targets on the inputs with different control parameters and is capable of distinguishing clean data by generating no control. Reactive Defense. A reactive defense method tries to reject or pre-process the input data that may cause mis-classifications. Metzen et al. (2017) rejected perturbed data by using adversarial detectors that are trained with adversarial data to detect abnormal data during forward propagation. Song et al. (2017) estimated the input data distribution D with a generative model (Oord et al., 2016) to detect data that does not belong to D, it applies a greedy method to search the local neighbour of input data for a more statistically plausible counterpart. This purification process has shown improved accuracy with adversarial data contaminated by various types of perturbations. Purification can be considered as a one-step method to solve the optimal control problem that has the objective function defined over the initial condition only. On the contrary, the proposed CLC-NN solves the control problem by the dynamic programming principle and its objective function is defined over the entire state trajectory, which guarantees the optimality for the resulted controls. 3 THE CLOSE-LOOP CONTROL FRAMEWORK FOR NEURAL NETWORKS Now we present a close-loop optimal control formulation to address the robustness issue of deep learning. Consider a neural network consisting of model parameters θ equipped with external control policy π, where π ∈ Π is a collection of functions Rd → Rd′ acting on the state and outputting the control signal. The feed-forward propagation in a T -layer neural network can be represented as xt+1 = f(xt,θt,πt(xt)), t = 0, · · · , T − 1. (1) Given a trained network, we solve the following optimization problem min π E(x0,y)∼D [J(x0,y,π)] := min π E(x0,y)∼D [ Φ(xT ,y) + T−1∑ s=0 L(xs,πs(xs)) ] , s.t. Eq. (1), (2) where π collects the control policies π0, · · · ,πT−1 for all layers. 
Note that (2) differs from the open-loop control used in standard training. An open-loop control that treats the network parameters as control variables seeks a set of fixed parameters θ that match the output with the true label y by minimizing the terminal loss Φ, with the running loss L acting as a regularizer for θ. However, the terminal and running losses play different roles when our goal is to improve the robustness of a neural network by generating adaptive controls for different inputs.

Challenge of Close-loop Control for Neural Networks. Optimal control has been well studied in the control community for trajectory optimization, where one defines the running loss as the error between the actual state x_t and a reference state x_{t,ref} over a time interval [0, T]. The resulting control policy adjusts x_t so that it approaches x_{t,ref}. In this paper, we apply the idea of trajectory optimization to improve the robustness of a neural network by adjusting undesired states x_t. However, the formulation is more challenging for neural networks: there is no "reference" state during inference, so it is unclear how to define the running loss L. In the following, we investigate manifold embeddings of the state trajectory to precisely define the losses Φ and L of Eq. (2) required for the control objective of a neural network.

3.1 MANIFOLD LEARNING FOR STATE TRAJECTORIES

State Manifold. Our controller design is based on the "manifold hypothesis": real-world high-dimensional data can often be embedded in a lower-dimensional manifold M (Fefferman et al., 2016). Indeed, neural networks extract the embedded features from M. To fool a well-trained neural network, perturbed data often stays away from the data manifold M (Khoury & Hadfield-Menell, 2018). We decompose the data space Z (x ∈ Z, ∀x ∼ D) as Z = Z_∥ ⊕ Z_⊥, where Z_∥ contains the embedded manifold M and Z_⊥ is the orthogonal complement of Z_∥. During forward propagation, the state manifold embedded in Z_∥ varies across layers due to both the nonlinear activation function f and changes in state dimensionality. Therefore, we denote Z^t = Z^t_∥ ⊕ Z^t_⊥ as the state-space decomposition at layer t, with M_t ⊂ Z^t_∥. Once an input is perturbed, the components causing misclassification mainly lie in Z_⊥. Therefore, it is important to measure how far a possibly perturbed state x_t deviates from the state manifold M_t.

Embedding Function. Given an embedding function E_t that encodes x_t onto the lower-dimensional manifold M_t and decodes the result back to the full state space Z^t, the reconstruction loss ‖E_t(x_t) − x_t‖ measures the deviation of the possibly perturbed state x_t from the manifold M_t. The reconstruction loss is nonzero as long as x_t has components in Z^t_⊥. The embedding functions are constructed offline by minimizing the total reconstruction loss over a clean training data set (see the sketch after this list).

• Linear Case: E_t(·) can be taken as V^r_t (V^r_t)^T, where V^r_t forms an orthonormal basis for Z^t_∥. Specifically, one can first perform a principal component analysis over a collection of hidden states at layer t; V^r_t is then given by the first r columns of the resulting eigenvectors.

• Nonlinear Case: we choose a convolutional auto-encoder (detailed in Appendix B) to obtain a representative manifold embedding function E_t, due to its ease of implementation.
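The linear case can be sketched in a few lines of numpy: fit V^r_t by PCA on clean hidden states and use the projection V^r_t (V^r_t)^T as E_t. Mean-centering is a practical addition not spelled out in the text, and all shapes are illustrative assumptions.

```python
import numpy as np

def fit_linear_embedding(states, r):
    """states: (n_samples, d) matrix of clean hidden states at layer t.
    Returns E_t(x) = mean + V_r V_r^T (x - mean), the orthogonal projection
    onto the first r principal directions."""
    mean = states.mean(axis=0)
    _, _, vt = np.linalg.svd(states - mean, full_matrices=False)
    V_r = vt[:r].T  # (d, r) orthonormal basis for the subspace Z_t_par

    def E(x):
        return mean + V_r @ (V_r.T @ (x - mean))
    return E

def reconstruction_loss(E, x):
    # ||E_t(x) - x||_2^2: nonzero iff x has components outside the manifold.
    return float(np.sum((E(x) - x) ** 2))
```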
Based on the assumption that most perturbations lie in the Z_⊥ subspace, the embeddings are effective at detecting perturbations as long as the target manifold has low dimension. Alternative manifold learning methods such as Izenman (2012) may also be employed.

3.2 FORMULATION FOR THE CLOSE-LOOP CONTROL OF NEURAL NETWORKS

Control Objectives. The above embedding function allows us to define a running loss L:

L(x_t, π_t(x_t), E_t(·)) = ‖E_t(x_t) − x_t‖_2^2 + (π_t(x_t))^T R π_t(x_t). (3)

Here the matrix R defines a regularization term promoting controls of small magnitude. In practical implementations, a diagonal R with small elements often helps to improve performance. We are now ready to design the control objective of CLC-NN. Different from standard open-loop control, this work sets the terminal loss Φ to zero because no true label is available during inference. Consequently, the close-loop control formulation of Eq. (2) becomes

min_π E_{(x_0,y)∼D} [J(x_0, y, π)] := min_π E_{(x_0,y)∼D} Σ_{t=0}^{T−1} [L(x_t, π_t(x_t), E_t(·))], s.t. Eq. (1). (4)

Assume that the input data is perturbed by a bounded and small amount, i.e., x_{ε,0} = x_0 + ε·z, where z can be either random or adversarial. The proposed CLC-NN adjusts the perturbed state trajectory x_{ε,t} so that it stays at a minimum distance from the desired manifold M_t while promoting controls of small magnitude.

Intuition. We use an intuitive example to show how CLC-NN controls the state trajectory of unseen data samples. We create a synthetic binary classification data set with 1500 samples. We train a residual neural network with one hidden layer of dimension 2, and adopt the fast gradient sign method (Goodfellow et al., 2014) to generate adversarial data. Fig. 2 (a) and (b) show the states of clean data (red and blue) and of perturbed data (black and gray) at t = 0 and t = 1, respectively. CLC-NN adjusts the state trajectory to reduce the reconstruction loss, as shown in Fig. 2 (c) and (d), where a lighter background color represents a lower reconstruction loss. Comparing Fig. 2 (a) with (c), and Fig. 2 (b) with (d), we see that the perturbed states in Fig. 2 (a) and (b) deviate from the desired state manifold (light green region) and have a high reconstruction loss. Running 1000 iterations of Alg. 1 adjusts the perturbed states and improves the classification accuracy from 86% to 100%.

4 IMPLEMENTATION VIA THE PONTRYAGIN'S MAXIMUM PRINCIPLE

Dynamic Programming for Close-Loop Control (4). The control problem in Eq. (4) can be solved by the dynamic programming principle (Bellman, 1952). For simplicity we consider one input data sample and define a value function V : T × R^d → R (where T := {0, 1, ..., T − 1}). Here V(t, x) represents the optimal cost-to-go of Eq. (4) incurred from time t at state x. One can show that V(t, x) satisfies the dynamic programming principle

V(t, x) = inf_{π∈Π} [V(t + 1, x + f(x, θ_t, π(x))) + L(x, π(x), E_t(·))]. (5)

Eq. (5) gives a necessary and sufficient condition for optimality of Eq. (4), and it is often solved backward in time by discretizing the entire state space. The state dimension of a modern neural network is on the order of thousands or higher, so discretizing the state space and directly solving Eq. (5) is intractable for real-world applications due to the curse of dimensionality.

Solving (5) via the Pontryagin's Maximum Principle.
To overcome the computational challenge, the Pontryagin's Maximum Principle (Kirk, 1970) converts the intractable dynamic program into two difference equations and a maximization condition. Instead of computing the control policy π of Eq. (5), the Pontryagin's Maximum Principle provides a necessary condition for optimality through a set of control parameters [u*_0, ..., u*_T]. The mean-field Pontryagin's Maximum Principle can be considered when the initial condition is a batch of i.i.d. samples drawn from D. In effect, we trade the intractable computational complexity for the processing time of solving the Hamilton equations and their maximization condition for every newly observed input. To begin with, we define the Hamiltonian H : T × R^d × R^d × R^l × R^m → R as

H(t, x_t, p_{t+1}, θ_t, u_t) := p_{t+1}^T · f(x_t, θ_t, u_t) − L(x_t, u_t, E_t(·)). (6)

Let x* denote the optimally controlled state trajectory. There exists a co-state process p* : [0, T] → R^d such that the Hamilton's equations

x*_{t+1} = ∇_p H(t, x*_t, p*_t, θ_t, u*_t), (x*_0, y) ∼ D, (7)
p*_t = ∇_x H(t, x*_t, p*_{t+1}, θ_t, u*_t), p*_T = 0, (8)

are satisfied. The terminal co-state is p_T = 0 because we do not consider the terminal loss Φ(x_T, y). Moreover, the Hamiltonian maximization condition

H(t, x*_t, p*_t, θ_t, u*_t) ≥ H(t, x*_t, p*_t, θ_t, u_t), ∀u ∈ R^{d'} and ∀t ∈ T (9)

holds. Instead of solving Eq. (5) for the optimal control policy π*(x_t), the Pontryagin's Maximum Principle seeks, for a given initial condition, an open-loop optimal solution satisfying the global optimum of Eq. (5). The limitation of using the maximum principle is that the control parameters u*_t need to be solved anew for every unseen input to achieve the optimal solution.

Algorithm Flow. The numerical implementation of CLC-NN is summarized in Alg. 1 below. Given a trained network (from either standard or adversarial training) and a set of embedding functions, the controls are initialized as u_t = 0, ∀t ∈ T, because random initialization generally weakens robustness, and a clean trajectory often incurs no running loss for the gradient update of the control parameters. In every iteration, a given input x_0 is propagated forward with Eq. (7) to obtain all intermediate hidden states x_t and to accumulate the cost J. Eq. (8) backward-propagates the co-state p_t, and Eq. (9) maximizes the t-th Hamiltonian with the current x_t and p_t to compute the optimal control parameters u*_t.

Algorithm 1: CLC-NN with the Pontryagin's Maximum Principle.
Input: Possibly perturbed data x_ε, a trained neural network, embedding functions [E_1, ..., E_{T−1}], maxItr (maximum number of iterations).
Output: A set of optimal control parameters u*_0, ..., u*_{T−1}.
1  for k = 0 to maxItr do
2      J_k = 0
3      for t = 0 to T − 1 do
4          x_{t+1,k} = f(x_{t,k}, θ_t, u_{t,k}), where x_{0,k} = x_ε        // forward propagation, Eq. (7)
5          J_k = J_k + L(x_{t,k}, u_{t,k}, E_t(x_{t,k}))                   // objective function, Eq. (4)
6      end for
7      for t = T to 1 do
8          p_{t,k} = p_{t+1}^T · ∇_{x_t} f(x_{t,k}, θ_t, u_{t,k}) − ∇_{x_t} L(x_{t,k}, u_{t,k}, E_t(x_{t,k})),
9              with p_{T,k} = 0                                            // backward propagation, Eq. (8)
10     end for
11     for t = 0 to T − 1 do
12         u_{t,k+1} = u_{t,k} + (p_{t+1,k}^T · ∇_{u_t} f(x_{t,k}, θ_t, u_{t,k}) − ∇_{u_t} L(x_{t,k}, u_{t,k}, E_t(x_{t,k})))
13             // maximization of the Hamiltonian, Eq. (9), based on Eq. (6) and gradient ascent
14     end for
15 end for
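Below is a minimal PyTorch sketch of Alg. 1. Because the Hamiltonian of Eq. (6) is built from f and L, gradient ascent on H with the co-state of Eq. (8) coincides, up to sign conventions, with gradient descent of the total running loss J with respect to the controls, which automatic differentiation computes directly; the sketch exploits that equivalence rather than forming p_t explicitly. Layer callables, embeddings and hyperparameters are placeholders, and R is approximated as a scalar.

```python
import torch

def clc_control(x0, layers, embeddings, R=1e-2, lr=0.01, max_itr=20):
    """layers[t] maps x_t + u_t to x_{t+1}; embeddings[t] is E_t."""
    # Infer per-layer state shapes with one uncontrolled dry run.
    shapes, x = [], x0
    with torch.no_grad():
        for f_t in layers:
            shapes.append(x.shape)
            x = f_t(x)
    controls = [torch.zeros(s, requires_grad=True) for s in shapes]
    opt = torch.optim.Adam(controls, lr=lr)  # Adam, as in Appendix B.3
    for _ in range(max_itr):
        x, J = x0, 0.0
        for f_t, E_t, u_t in zip(layers, embeddings, controls):
            # Running loss of Eq. (3) accumulated along the trajectory.
            J = J + ((E_t(x) - x) ** 2).sum() + R * (u_t ** 2).sum()
            x = f_t(x + u_t)
        opt.zero_grad()
        J.backward()  # backpropagation plays the role of the co-state p_t
        opt.step()    # one Hamiltonian-maximization step on every u_t
    return [u.detach() for u in controls]
```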
5 ERROR ANALYSIS FOR SIMPLIFIED LINEAR CASES

For ease of analysis, we consider a simplified neural network with linear activation functions, x_{t+1} = θ_t(x_t + u_t), and reveal why the proposed method can improve robustness in this simplest setting. Given a perturbed data sample x_{ε,0}, we denote its perturbation-free counterpart by x_0, so that z = x_{ε,0} − x_0. We consider a general perturbation where z is the direct sum of two orthogonal contributions: z_∥, a perturbation within the data manifold (subspace), and z_⊥, a perturbation in the orthogonal complement of the data manifold. This setting is general: for adversarial attacks, the perturbation along the orthogonal complement dominates; for random perturbations, the two components are on the same scale. Our formulation covers both extreme scenarios, together with intermediate cases. We use an orthogonal projection as the embedding function, E_t = V^r_t (V^r_t)^T, where V^r_t consists of the first r columns of the eigenvectors computed by principal component analysis on a collection of states x_t. The proposed CLC-NN minimizes ‖x_{ε,t} − x_t‖_2^2 by reducing the components of x_{ε,t} that lie in the orthogonal complement of Z^t_∥. The following theorem provides an error estimate between x_{ε,t} and x_t.

Theorem 1. For t ≥ 1, we have the error estimate

‖x_{ε,t} − x_t‖_2^2 ≤ ‖θ_{t−1} ··· θ_0‖_2^2 · ( α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + γ_t‖z‖_2^2 ( γ_t α^2 (1 − α^{t−1})^2 + 2(α − α^t) ) ), (10)

where γ_t := max_{s≤t} (1 + κ(θ^{(s)})^2) ‖I − (θ^{(s)})^T θ^{(s)}‖_2 with θ^{(s)} := θ_{s−1} ··· θ_0 (and θ^{(0)} := I, see Corollary 1 in Appendix A), and α = c/(1+c), where c is the control regularization. In particular, the equality

‖x_{ε,t} − x_t‖_2^2 = α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 (11)

holds when all θ_t are orthogonal.

The detailed derivation is presented in Appendix A. Theorem 1 yields the following insights:

• The error estimate holds for any input perturbation. It shows the working principle behind the proposed CLC-NN in controlling the perturbation component that lies in the orthogonal complement of the input subspace (z_⊥).

• The error estimate improves as the control regularization c goes to 0 (so α → 0). It is not the sharpest possible, as it relies on a greedily optimal control at each layer. The globally optimal control defined by the Riccati equation may achieve a lower loss when c ≠ 0.

• When the dimension r of the embedding subspace decreases, the control becomes more effective in reducing ‖x_{ε,t} − x_t‖_2^2. This means the control approach works best when the data is constrained to a low-dimensional manifold, consistent with the manifold hypothesis. In particular, observe that as r → 0, ‖z_∥‖_2^2 → 0.

• The upper bound is tight: it becomes the actual error when all forward propagation layers are orthogonal matrices.

6 NUMERICAL EXPERIMENTS

We test the proposed CLC-NN framework under various input data perturbations. Here we briefly summarize the experimental settings; we refer readers to Appendix B for details.

• Original Networks without Close-Loop Control. We choose residual neural networks (He et al., 2016) with ReLU activation functions as our targets for close-loop control. To show that CLC-NN can improve robustness in various settings, we consider networks from both standard and adversarial training. We consider multiple adversarial training methods: the fast gradient sign method (FGSM) (Goodfellow et al., 2014), projected gradient descent (PGD) (Madry et al., 2017), and label smoothing training (Label Smooth) (Hazan et al., 2017).

• Input Perturbations. To test our CLC-NN framework, we perturb the input data within a radius of ε, with ε = 2, 4 and 8, respectively.
We consider various perturbations, including a non-adversarial manifold-based attack (Jalal et al., 2017) (Manifold), as well as adversarial attacks such as FGSM, PGD and the CW method (Carlini & Wagner, 2017).

• CLC-NN Implementations. We consider both linear and nonlinear embeddings in our close-loop control. Specifically, we employ principal component analysis with a 1% truncation error for the linear embedding, and convolutional auto-encoders for the nonlinear embedding. We use Adam (Kingma & Ba, 2014) to maximize the Hamiltonian function (9) and keep the same hyperparameters (learning rate, maximum iterations) for each model against all perturbations.

Result Summary. Table 1 and Table 2 show the results on CIFAR-10 and CIFAR-100 for neural networks from standard training and adversarial training, respectively.

• CLC-NN significantly improves the robustness of neural networks from standard training. Table 1 shows that the baseline network trained on clean data becomes completely vulnerable (with almost 0% accuracy) under PGD and CW attacks. CLC-NN improves its accuracy to nearly 40% and 80% under PGD and CW attacks, respectively. The accuracy under FGSM attacks is almost doubled by CLC-NN. The accuracy on clean data decreases slightly because the lower-dimensional embedding functions cannot exactly capture Z_∥ or M.

• CLC-NN further improves the robustness of adversarially trained networks. Table 2 shows that while an adversarially trained network is inherently robust against certain types of perturbations, CLC-NN strengthens its robustness significantly against various perturbations.

• The robustness improvement on adversarially trained networks is less significant. This is expected, because the trajectory of perturbed data lies on the embedding subspace Z_∥ if that data sample has been used in adversarial training. Nevertheless, our experiments show that applying CLC-NN to adversarially trained networks achieves the best performance under most attacks.

Comparison with PixelDefend (Song et al., 2017). Our method achieves similar performance on CIFAR-10 under a slightly different experimental setting. Specifically, PixelDefend improved the robustness of a normally trained 62-layer ResNet from 0% to 78% against the CW attack. The proposed CLC-NN improves the robustness of a 20-layer ResNet from 0% to 81% against CW attacks. Furthermore, we show that CLC-NN is robust against the manifold-based attack. No result was reported for CIFAR-100 in Song et al. (2017).

Comparison with Reactive Defense. Reactive defenses can be understood as applying a control only at the initial condition of a dynamical system. Specifically, a reactive defense equipped with a linear embedding admits the following dynamics:

x_{t+1} = f(x_t, θ_t), s.t. x_0 = V^r_0 (V^r_0)^T x_{ε,0}. (12)

By contrast, CLC-NN controls all hidden states and yields an error that decreases as the number of layers T increases (cf. Theorem 1). To quantitatively compare CLC-NN with reactive defense, we implement both with the same linear embedding functions and against all perturbations. In Table 3, CLC-NN outperforms reactive defense in almost all cases; on clean data their relative performance is case-dependent.
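The contrast with Eq. (12) can be shown in a few lines of numpy. In the limit c → 0 of Lemma 1 (Appendix A), the greedy per-layer control reduces to an orthogonal projection, so the two defenses differ only in whether the projection is applied once at the input or at every layer. The linear dynamics here are an illustrative assumption.

```python
import numpy as np

def reactive_defense(x_eps, thetas, P0):
    # Eq. (12): project only the (possibly perturbed) input, then run the net.
    x = P0 @ x_eps
    for theta in thetas:
        x = theta @ x
    return x

def layerwise_controlled(x_eps, thetas, projectors):
    # Close-loop variant in the c -> 0 limit: re-project before every layer.
    x = x_eps
    for theta, P in zip(thetas, projectors):
        x = theta @ (P @ x)
    return x
```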
7 CONCLUSION

We have proposed a close-loop control formulation to improve the robustness of neural networks. We have studied the embedding of the state trajectory during forward propagation to define the optimal control objective function. The numerical experiments have shown that our method can improve the robustness of a trained neural network against various perturbations. We have provided an error estimate for the proposed method in the linear case. Our current implementation uses the Pontryagin's Maximum Principle and an online iterative algorithm to overcome the intractability of dynamic programming. This online process adds extra inference time. In the future, we plan to extend the theoretical analysis to the nonlinear embedding case.

Acknowledgement. Zhuotong Chen and Zheng Zhang are supported by NSF CAREER Award No. 1846476 and NSF CCF No. 1817037. Qianxiao Li is supported by the start-up grant under the NUS PYP programme.

A APPENDIX A: ERROR ESTIMATION FOR THE PROPOSED CLC-NN

Preliminaries. We define the performance index at time t as

J(x_t, u_t) = (1/2)‖Q_t(x_t + u_t)‖_2^2 + (c/2)‖u_t‖_2^2, (13)

where Q_t = I − V^r_t (V^r_t)^T and V^r_t is the linear projection matrix at time t containing only the first r principal components, corresponding to the largest r eigenvalues. The optimal feedback control is defined as u*_t(x_t) = argmin_{u_t} J(x_t, u_t). Due to the linear system and quadratic performance index, the optimal feedback control admits an analytic solution, obtained by setting the gradient of the performance index (Eq. (13)) to 0:

∇_u J(x_t, u_t) = Q_t^T Q_t x_t + Q_t^T Q_t u_t + c·u_t = 0,

which leads to the analytic solution

u*_t(x_t) = −(c·I + Q_t^T Q_t)^{−1} Q_t^T Q_t x_t. (14)

The analytic control u*_t optimizes the performance index instantaneously at time step t; the error measured by Eq. (13) for the dynamic programming solution x_{ε,t} must therefore be smaller than or equal to that of the state trajectory equipped with the greedy control of Eq. (14), which gives a guaranteed upper bound for the error of the dynamic programming solution. We define the feedback gain matrix K_t = (c·I + Q_t^T Q_t)^{−1} Q_t^T Q_t, so the one-step optimal feedback control can be represented as u*_t = −K_t x_t. The difference between the controlled system with a perturbed initial condition and the uncontrolled system without perturbation evolves as

x_{ε,t+1} − x_{t+1} = θ_t(x_{ε,t} + u_t − x_t) = θ_t(x_{ε,t} − K_t x_{ε,t} − x_t). (15)

The control objective is to suppress the state components spanning the orthogonal complement of the data manifold, I − V^r_t (V^r_t)^T. When the input to the feedback control lies entirely in the state manifold, so that ‖(I − V^r_t (V^r_t)^T) x_t‖_2^2 = 0, the feedback control satisfies K_t x_t = 0. The state difference of Eq. (15) can then be rewritten by adding the zero term θ_t K_t x_t:

x_{ε,t+1} − x_{t+1} = θ_t(I − K_t) x_{ε,t} − θ_t x_t + θ_t K_t x_t = θ_t(I − K_t)(x_{ε,t} − x_t). (16)

In the following, we derive a transformation of the control term (I − K_t) based on its definition.

Lemma 1. For t ≥ 0, we have I − K_t = α·I + (1 − α)·P_t, where P_t := V^r_t (V^r_t)^T is the orthogonal projection onto Z^t_∥ and α := c/(1+c), so that α ∈ [0, 1].

Proof. Recall that K_t = (c·I + Q_t^T Q_t)^{−1} Q_t^T Q_t and Q_t = I − V^r_t (V^r_t)^T. Q_t can be diagonalized as Q_t = V_t diag(0, ..., 0, 1, ..., 1) V_t^T, where the first r diagonal elements are 0 and the last (d − r) diagonal elements are 1. Consequently, the feedback gain matrix can be diagonalized as K_t = V_t diag(0, ..., 0, 1/(1+c), ..., 1/(1+c)) V_t^T, where the last (d − r) diagonal elements share the common value 1/(1+c).
The control term (I − K_t) can thus be represented as I − K_t = V_t diag(1, ..., 1, c/(1+c), ..., c/(1+c)) V_t^T, where the first r diagonal elements are 1 and the last (d − r) diagonal elements are c/(1+c). Denoting the first r columns of V_t by V^r_t and the last (d − r) columns by V̂^r_t, this can be written as

I − K_t = V^r_t (V^r_t)^T + (c/(1+c)) V̂^r_t (V̂^r_t)^T = P_t + α(I − P_t) = α·I + (1 − α)·P_t. ∎

Oblique Projections. Let P be a linear operator on R^d.
• P is a projection if P^2 = P.
• P is an orthogonal projection if P = P^T = P^2.
• If P^2 = P but P ≠ P^T, P is called an oblique projection.

Proposition 2. For a projection P:
1. If P is an orthogonal projection, then ‖P‖_2 = 1.
2. If P is an oblique projection, then ‖P‖_2 > 1.
3. If P, Q are two projections such that range(P) = range(Q), then PQ = Q and QP = P.
4. If P is a projection, then rank(P) = Tr(P). Furthermore, if P is an orthogonal projection, then rank(P) = ‖P‖_F^2 = Tr(P P^T).

Define, for t ≥ 0,

P^0_t := P_t, P^{s+1}_t := θ^{−1}_{t−s−1} P^s_t θ_{t−s−1}, s = 0, 1, ..., t − 1.

Lemma 3. Let P^s_t be defined as above for 0 ≤ s ≤ t. Then:
1. P^s_t is a projection.
2. P^s_t is a projection onto Z^{t−s}_∥, i.e., range(P^s_t) = Z^{t−s}_∥.
3. ‖P^s_t‖_F^2 ≤ κ(θ_{t−1} θ_{t−2} ··· θ_{t−s})^2 · r, where κ(A) = ‖A‖_2 · ‖A^{−1}‖_2 is the condition number of A and r = rank(Z^0_∥) = rank(Z^1_∥) = ... = rank(Z^t_∥).

Proof. 1. We prove this by induction on s for each t. For s = 0, P^0_t = P_t, which is a projection by definition. Suppose P^s_t = P^s_t P^s_t; then for s + 1,

(P^{s+1}_t)^2 = (θ^{−1}_{t−s−1} P^s_t θ_{t−s−1})^2 = θ^{−1}_{t−s−1} (P^s_t)^2 θ_{t−s−1} = θ^{−1}_{t−s−1} P^s_t θ_{t−s−1} = P^{s+1}_t.

2. By induction on s for each t. For s = 0, P^0_t = P_t, the orthogonal projection onto Z^t_∥. Suppose P^s_t is a projection onto Z^{t−s}_∥; then for s + 1, P^{s+1}_t = θ^{−1}_{t−s−1} P^s_t θ_{t−s−1}, which implies

range(P^{s+1}_t) = range(θ^{−1}_{t−s−1} P^s_t) = {θ^{−1}_{t−s−1} x : x ∈ Z^{t−s}_∥} = Z^{t−s−1}_∥.

3. We use the inequalities ‖AB‖_F ≤ ‖A‖_2 ‖B‖_F and ‖AB‖_F ≤ ‖A‖_F ‖B‖_2. By definition, P^s_t = (θ_{t−1} θ_{t−2} ··· θ_{t−s})^{−1} P^0_t (θ_{t−1} θ_{t−2} ··· θ_{t−s}), so

‖P^s_t‖_F^2 ≤ ‖(θ_{t−1} ··· θ_{t−s})^{−1}‖_2^2 · ‖θ_{t−1} ··· θ_{t−s}‖_2^2 · ‖P^0_t‖_F^2 ≤ κ(θ_{t−1} ··· θ_{t−s})^2 · r,

where the last step uses Proposition 2(4). ∎

The following lemma uses the concept of oblique projection to derive a recursion that maps the state difference at layer t of Eq. (16) back to the input data space.

Lemma 4. Define, for 0 ≤ s ≤ t, G^s_t := α·I + (1 − α) P^s_t. Then Eq. (16) can be written as

x_{ε,t} − x_t = (θ_{t−1} θ_{t−2} ··· θ_0)(G^{t−1}_{t−1} G^{t−2}_{t−2} ··· G^0_0)(x_{ε,0} − x_0), t ≥ 1.

Proof. We prove this by induction on t. For t = 1, by the definition of G^s_t and the transformation of Lemma 1,

x_{ε,1} − x_1 = θ_0(I − K_0)(x_{ε,0} − x_0) = θ_0(α·I + (1 − α)·P_0)(x_{ε,0} − x_0) = θ_0 G^0_0 (x_{ε,0} − x_0).

Suppose the claim holds for x_{ε,t} − x_t. Using Eq. (16) and Lemma 1,

x_{ε,t+1} − x_{t+1} = θ_t(I − K_t)(x_{ε,t} − x_t) = θ_t G^0_t (θ_{t−1} ··· θ_0)(G^{t−1}_{t−1} ··· G^0_0)(x_{ε,0} − x_0). (17)

Recalling the definitions P^{s+1}_t := θ^{−1}_{t−s−1} P^s_t θ_{t−s−1} and G^s_t := α·I + (1 − α) P^s_t, we have

G^{s+1}_t = α·I + (1 − α)·θ^{−1}_{t−s−1} P^s_t θ_{t−s−1} = θ^{−1}_{t−s−1} (α·I + (1 − α)·P^s_t) θ_{t−s−1} = θ^{−1}_{t−s−1} G^s_t θ_{t−s−1},

which gives the commutation relation θ_{t−s−1} G^{s+1}_t = G^s_t θ_{t−s−1}. Applying this relation repeatedly to Eq. (17)
results in

x_{ε,t+1} − x_{t+1} = θ_t G^0_t (θ_{t−1} ··· θ_0)(G^{t−1}_{t−1} ··· G^0_0)(x_{ε,0} − x_0)
= (θ_t θ_{t−1}) G^1_t (θ_{t−2} ··· θ_0)(G^{t−1}_{t−1} ··· G^0_0)(x_{ε,0} − x_0)
= (θ_t θ_{t−1} θ_{t−2}) G^2_t (θ_{t−3} ··· θ_0)(G^{t−1}_{t−1} ··· G^0_0)(x_{ε,0} − x_0)
= (θ_t θ_{t−1} ··· θ_0)(G^t_t G^{t−1}_{t−1} ··· G^0_0)(x_{ε,0} − x_0). ∎

Lemma 5. Let F_t := G^{t−1}_{t−1} G^{t−2}_{t−2} ··· G^0_0, t ≥ 1. Then

F_t = α^t·I + (1 − α) Σ_{s=0}^{t−1} α^s P^s_s.

Proof. By induction on t. Recall the definition G^s_t := α·I + (1 − α)·P^s_t. For t = 1, F_1 = G^0_0 = α·I + (1 − α)·P^0_0. Suppose the claim holds for t; then for t + 1,

F_{t+1} = G^t_t F_t = (α·I + (1 − α)·P^t_t)(α^t·I + (1 − α) Σ_{s=0}^{t−1} α^s P^s_s)
= α^{t+1}·I + α^t(1 − α) P^t_t + (1 − α)^2 Σ_{s=0}^{t−1} α^s P^t_t P^s_s + α(1 − α) Σ_{s=0}^{t−1} α^s P^s_s.

By Lemma 3, range(P^t_t) = range(P^s_s) = Z^0_∥, so Proposition 2(3) gives P^t_t P^s_s = P^s_s. Hence

F_{t+1} = α^{t+1}·I + α^t(1 − α)·P^t_t + (1 − α) Σ_{s=0}^{t−1} α^s·P^s_s = α^{t+1}·I + (1 − α) Σ_{s=0}^{t} α^s·P^s_s. ∎

Lemma 6. Let V ∈ R^{d×r} be a matrix whose columns form an orthonormal basis for a subspace D, and let θ ∈ R^{d×d} be invertible. Let P = V V^T be the orthogonal projection onto D, and let P̂ denote the orthogonal projection onto θD := {θx : x ∈ D}. Then:
1. θ^{−1} P̂ θ is an oblique projection onto D.
2. ‖θ^{−1} P̂ θ − P‖_2 ≤ (1 + κ(θ)^2)·‖I − θ^T θ‖_2.

In particular, the last inequality shows that θ^{−1} P̂ θ = P if θ is orthogonal.

Proof. 1. (θ^{−1} P̂ θ)^2 = θ^{−1} P̂^2 θ = θ^{−1} P̂ θ, so θ^{−1} P̂ θ is a projection.

2. Since P̂ is the orthogonal projection onto the column space of θV,

P̂ = θV [(θV)^T(θV)]^{−1} (θV)^T = θV [V^T θ^T θ V]^{−1} V^T θ^T, so θ^{−1} P̂ θ = V [V^T θ^T θ V]^{−1} V^T θ^T θ.

Therefore,

‖θ^{−1} P̂ θ − P‖_2 = ‖V [V^T θ^T θ V]^{−1} V^T θ^T θ − V V^T‖_2
≤ ‖V [V^T θ^T θ V]^{−1} V^T θ^T θ − V V^T θ^T θ‖_2 + ‖V V^T θ^T θ − V V^T‖_2
≤ ‖V ([V^T θ^T θ V]^{−1} − I) V^T‖_2 · ‖θ^T θ‖_2 + ‖θ^T θ − I‖_2
≤ ‖[V^T θ^T θ V]^{−1}‖_2 · ‖I − V^T θ^T θ V‖_2 · ‖θ^T θ‖_2 + ‖θ^T θ − I‖_2
≤ ‖[V^T θ^T θ V]^{−1}‖_2 · ‖I − θ^T θ‖_2 · ‖θ^T θ‖_2 + ‖θ^T θ − I‖_2.

We further bound ‖[V^T θ^T θ V]^{−1}‖_2:

‖[V^T θ^T θ V]^{−1}‖_2 = (λ_min(V^T θ^T θ V))^{−1} = (inf_{‖x‖_2=1} x^T V^T θ^T θ V x)^{−1} ≤ (inf_{‖x'‖_2=1} (x')^T θ^T θ x')^{−1} = (λ_min(θ^T θ))^{−1} = ‖(θ^T θ)^{−1}‖_2.

Hence,

‖θ^{−1} P̂ θ − P‖_2 ≤ (1 + ‖θ^T θ‖_2 · ‖(θ^T θ)^{−1}‖_2)·‖I − θ^T θ‖_2 = (1 + κ(θ)^2)·‖I − θ^T θ‖_2. ∎

Corollary 1. Let t ≥ 1. Then for each s = 0, 1, ..., t,

‖P^s_s − P_0‖_2 ≤ (1 + κ(θ^{(s)})^2)·‖I − (θ^{(s)})^T θ^{(s)}‖_2,

where θ^{(s)} := θ_{s−1} ··· θ_0 for s ≥ 1 and θ^{(0)} := I. Observe that P^s_s = (θ^{(s)})^{−1} P_s θ^{(s)}; the claim then follows from Lemma 6.

Using these results, we arrive at the main theorem.

Theorem 1. For t ≥ 1, we have the error estimate

‖x_{ε,t} − x_t‖_2^2 ≤ ‖θ_{t−1} ··· θ_0‖_2^2 · ( α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + γ_t‖z‖_2^2 ( γ_t α^2 (1 − α^{t−1})^2 + 2(α − α^t) ) ),

where γ_t := max_{s≤t} (1 + κ(θ^{(s)})^2) ‖I − (θ^{(s)})^T θ^{(s)}‖_2 and α = c/(1+c), with c the control regularization. In particular, the equality

‖x_{ε,t} − x_t‖_2^2 = α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2

holds when all θ_t are orthogonal.

Proof. The input perturbation z = x_{ε,0} − x_0 can be decomposed as z = z_∥ + z_⊥, where z_∥ ∈ Z_∥ and z_⊥ ∈ Z_⊥ are orthogonal components, i.e., z_∥ · z_⊥ = 0, z_∥ ∈ D and z_⊥ ∈ D^⊥.
Since z_∥ and z_⊥ are orthogonal, Lemma 4 gives

‖x_{ε,t} − x_t‖_2^2 = ‖(θ_{t−1} θ_{t−2} ··· θ_0)(G^{t−1}_{t−1} ··· G^0_0) z‖_2^2 ≤ ‖θ_{t−1} θ_{t−2} ··· θ_0‖_2^2 · ‖(G^{t−1}_{t−1} ··· G^0_0) z‖_2^2. (18)

For the term ‖(G^{t−1}_{t−1} ··· G^0_0) z‖_2^2, Lemma 5 gives

‖(G^{t−1}_{t−1} ··· G^0_0) z‖_2^2 = ‖(α^t·I + (1 − α) Σ_{s=0}^{t−1} α^s·P^s_s) z‖_2^2
= ‖α^t z + (1 − α) Σ_{s=0}^{t−1} α^s P_0 z + (1 − α) Σ_{s=0}^{t−1} α^s (P^s_s − P_0) z‖_2^2
= ‖α^t z + (1 − α^t) z_∥ + (1 − α) Σ_{s=0}^{t−1} α^s (P^s_s − P_0) z‖_2^2,

where P_0 is the orthogonal projection at t = 0 (the input data space), so P_0 z = z_∥; moreover, for s = 0 we have P^s_s − P_0 = 0. Expanding the square and using α^t z = α^t z_⊥ + α^t z_∥,

‖(G^{t−1}_{t−1} ··· G^0_0) z‖_2^2 = α^{2t}‖z_⊥‖_2^2 + (α^{2t} + 2α^t(1 − α^t) + (1 − α^t)^2)‖z_∥‖_2^2
+ (1 − α)^2 Σ_{s,q=1}^{t−1} α^s α^q z^T (P^s_s − P_0)^T (P^q_q − P_0) z
+ 2α^t(1 − α) Σ_{s=1}^{t−1} α^s z^T (P^s_s − P_0) z + 2(1 − α^t)(1 − α) Σ_{s=1}^{t−1} α^s z_∥^T (P^s_s − P_0) z
= α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + (1 − α)^2 Σ_{s,q=1}^{t−1} α^s α^q z^T (P^s_s − P_0)^T (P^q_q − P_0) z
+ 2α^t(1 − α) Σ_{s=1}^{t−1} α^s z^T (P^s_s − P_0) z + 2(1 − α^t)(1 − α) Σ_{s=1}^{t−1} α^s z_∥^T (P^s_s − P_0) z.

Using Corollary 1, we bound the cross terms:

• z^T (P^s_s − P_0) z ≤ ‖z‖_2^2 · ‖P^s_s − P_0‖_2 ≤ γ_t ‖z‖_2^2;
• z^T (P^s_s − P_0)^T (P^q_q − P_0) z ≤ ‖z‖_2^2 · ‖P^s_s − P_0‖_2 · ‖P^q_q − P_0‖_2 ≤ γ_t^2 ‖z‖_2^2;
• z_∥^T (P^s_s − P_0) z ≤ γ_t ‖z_∥‖_2 · ‖z‖_2 ≤ γ_t ‖z‖_2^2.

Thus,

‖(G^{t−1}_{t−1} ··· G^0_0) z‖_2^2 ≤ α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + α^2(1 − α^{t−1})^2 γ_t^2 ‖z‖_2^2 + 2α^{t+1}(1 − α^{t−1}) γ_t ‖z‖_2^2 + 2α(1 − α^t)(1 − α^{t−1}) γ_t ‖z‖_2^2
= α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + γ_t ‖z‖_2^2 ( γ_t α^2 (1 − α^{t−1})^2 + 2(α − α^t) ).

Combining this with the error estimate of Eq. (18),

‖x_{ε,t} − x_t‖_2^2 ≤ ‖θ_{t−1} ··· θ_0‖_2^2 · ( α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2 + γ_t ‖z‖_2^2 ( γ_t α^2 (1 − α^{t−1})^2 + 2(α − α^t) ) ).

In the special case where all θ_t are orthogonal, γ_t := max_{s≤t} (1 + κ(θ^{(s)})^2) ‖I − (θ^{(s)})^T θ^{(s)}‖_2 = 0, and thus

‖x_{ε,t} − x_t‖_2^2 = α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2. ∎
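The equality case of Theorem 1 can be verified numerically. The following numpy check simulates the greedy control of Eq. (14) with random orthogonal layers and confirms ‖x_{ε,t} − x_t‖_2^2 = α^{2t}‖z_⊥‖_2^2 + ‖z_∥‖_2^2. The dimensions, the value of c and the choice x_0 = 0 are arbitrary test settings, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, T, c = 8, 3, 5, 0.5
alpha = c / (1 + c)

def rand_orthogonal(d):
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

thetas = [rand_orthogonal(d) for _ in range(T)]
V = rand_orthogonal(d)[:, :r]       # orthonormal basis of the input manifold
z = rng.standard_normal(d)
z_par = V @ (V.T @ z)               # component inside the manifold
z_perp = z - z_par                  # component in the orthogonal complement

x_eps, Vt = z.copy(), V             # take x_0 = 0, so x_eps,0 - x_0 = z
for theta in thetas:
    P = Vt @ Vt.T                   # orthogonal projector onto current Z_par
    Q = np.eye(d) - P               # note Q^T Q = Q for a symmetric projector
    K = np.linalg.solve(c * np.eye(d) + Q, Q)   # feedback gain of Eq. (14)
    x_eps = theta @ (x_eps - K @ x_eps)          # controlled step, Eq. (15)
    Vt = theta @ Vt                 # orthogonal theta keeps Vt orthonormal

lhs = np.sum(x_eps ** 2)            # clean trajectory stays at 0
rhs = alpha ** (2 * T) * np.sum(z_perp ** 2) + np.sum(z_par ** 2)
print(np.isclose(lhs, rhs))         # True up to floating-point error
```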
B APPENDIX B: DETAILS OF EXPERIMENTAL SETTING

B.1 NETWORK CONFIGURATIONS

Since the proposed CLC-NN optimizes the entire state trajectory, a relatively smooth state trajectory is important: when the reconstruction loss ‖E_t(x_t) − x_t‖_2^2 at layer t is small, the reconstruction losses at adjacent layers should also be small. For this reason, we use residual neural networks (He et al., 2016) as our network candidates, which retain smoother dynamics. The configuration of the residual network used for both CIFAR-10 and CIFAR-100 is shown in Tab. 4. Based on this configuration, we construct 4 embedding functions applied to the input space and to the outputs of the initial layer, residual block 1 and residual block 2. The output of residual block 3 is embedded with a linear orthogonal projection. We randomly select 5000 clean training samples to collect state trajectories at all 5 locations.

• For the linear orthogonal projections, we apply principal component analysis to each collection of states. We retain the first r columns of the resulting basis, with r = argmin{i : (λ_1 + ... + λ_i)/(λ_1 + ... + λ_d) ≥ 1 − δ} and δ = 0.1.

• For the nonlinear embedding, we train 4 convolutional auto-encoders for the input space and the outputs of the initial layer and residual blocks 1 and 2. All embedding functions are trained individually. We adopt a shallow convolutional auto-encoder structure for fast inference speed, in which case CLC-NN equipped with linear embedding often outperforms the nonlinear embedding, as shown in Tab. 1. The configuration of all 4 convolutional auto-encoders is shown in Tab. 5.

B.2 PERTURBATIONS AND DEFENSIVE TRAINING

In this section, we detail the perturbations and robust networks considered in this work. The adversarial training objective is

min_{θ∈Θ} max_{x_{ε,0}=Δ(x_0,ε)} E_{(x_0,y)∼D} [(1 − λ)·Φ(x_{ε,T}, y, θ) + λ·Φ(x_T, y, θ)],

where Δ(x_0, ε) generates a perturbed sample from the given input x_0 within the range ε, and λ balances standard accuracy against robustness. We choose λ = 0.5 in all adversarial training. For robust networks, we consider both perturbation-agnostic and perturbation-non-agnostic methods. A perturbation-agnostic adversarial training algorithm equipped with Δ(x_0, ε) yields a network that is most robust against the Δ(x_0, ε) perturbation, whereas perturbation-non-agnostic robust training methods are often robust against many types of perturbations.

• Adversarial training with the fast gradient sign method (FGSM) (Goodfellow et al., 2014) considers perturbed data of the form

x_{ε,0} = x_0 + ε·sign(∇_{x_0} Φ(x_T, y)), (x_0, y) ∼ D,

where sign(·) outputs the sign of its input. FGSM considers the worst case within the range ε along the direction of increasing gradient ∇_{x_0} Φ(x_T, y). Due to this worst-case consideration, it does not scale well to deep networks; for this reason, we adversarially train the network with FGSM at ε = 4, half of the maximum perturbation considered in this paper. A short illustrative sketch of FGSM follows at the end of this subsection.

• Label smoothing training (Label Smooth) (Hazan et al., 2017) does not use any perturbation information Δ(x_0, ε). It converts one-hot labels into soft targets by setting the correct class to 1 − ε and the other classes to ε/(N − 1), where ε is a constant and N is the number of classes. Specifically, we choose ε = 0.9 in this paper.

• Adversarial training with projected gradient descent (PGD) (Madry et al., 2017) generates adversarial data by iteratively running FGSM with a small step size, which yields stronger perturbations than FGSM within the same range ε. We use 7 PGD steps with ε = 2 to generate adversarial data for robust training.

For the test-time perturbations, we consider maximum ranges of ε = 2, 4 and 8 to evaluate network robustness against both weak and strong perturbations. We test robustness against the manifold-based attack (Jalal et al., 2017), FGSM (Goodfellow et al., 2014), 20-step PGD (Madry et al., 2017) and the CW attack (Carlini & Wagner, 2017).
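As referenced above, here is a minimal PyTorch sketch of the FGSM perturbation used in this appendix. The model, the choice of cross-entropy as the terminal loss Φ, and the pixel scale of ε are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """x_eps = x + eps * sign(grad_x Phi(x_T, y)) for the terminal loss Phi."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # terminal loss Phi(x_T, y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()
```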
B.3 ONLINE OPTIMIZATION

Optimization Method. We use Adam (Kingma & Ba, 2014) with its default setting to maximize the Hamiltonian in Eq. (9). Solving the PMP therefore introduces extra computational cost at inference time. Each online iteration of solving the PMP requires a forward propagation (Eq. (7)), a backward propagation (Eq. (8)) and a maximization with respect to the control parameters (Eq. (9)), which together cost approximately the same as one gradient-descent iteration of network training. For the numerical results presented in the paper, we choose the maximum number of iterations that gives the best performance among [5, 10, 20, 30, 50].

C MORE NUMERICAL EXPERIMENTS

The proposed CLC-NN is designed to be compatible with existing open-loop-trained networks. We show additional experiments applying CLC-NN to further baseline models, including DenseNet-40 (Table 6). The layer-wise projection baseline performs an orthogonal projection on each hidden state. We define the local cost function at the t-th layer as

J(x_t, u_t) = (1/2)‖Q_t(x_t + u_t)‖_2^2 + (c/2)‖u_t‖_2^2,

and the layer-wise projection achieves the optimal solution at local time t, u*_t(x_t) = argmin_{u_t} J(x_t, u_t). However, this layer-wise optimal control does not guarantee optimality across all layers. In Table 7, we compare the proposed CLC-NN with layer-wise projection: under all perturbations, CLC-NN outperforms layer-wise projection.

D ROBUSTNESS AGAINST MANIFOLD-BASED ATTACK

The manifold-based attack (Jalal et al., 2017) (denoted Manifold) has shown great success in breaking manifold-based defenses (Samangouei et al., 2018). The proposed CLC-NN can successfully defend against this adversarial attack, which was specifically designed against manifold-based defenses, improving the robust accuracy of a standard-trained model from 1% to 81% on CIFAR-10 and from 2% to 52% on CIFAR-100. We provide a detailed explanation for this successful defense. Existing manifold-based defenses (Samangouei et al., 2018) focus on detecting and de-noising input components that do not lie within the underlying manifold. The overpowered attack proposed in Jalal et al. (2017) instead searches for adversarial examples within the embedded latent space, which is undetectable by manifold-based defenses and causes them to fail completely.

In practice, the manifold-based attack (Jalal et al., 2017) is detectable and controllable under the proposed framework for the following reason. The numerically constructed manifold embedding functions are not ideal. The error sources of non-ideal embedding functions are mainly the algorithm used to compute the manifold, the architecture of the embedding function, and the distribution shift between training and testing data (embedding functions fit on training data do not perfectly agree with testing data). Consequently, even if a perturbation is undetectable and uncontrollable at the input layer, each layer amplifies it as it propagates through the hidden layers, where it becomes detectable and controllable.

We randomly select a batch of testing data to generate the manifold-based attack, following the procedure proposed in Jalal et al. (2017). The proposed method improves the attacked accuracy from 1% to 78%. More specifically, we compare the components spanning the orthogonal complement for all hidden states of a perturbed test sample and its unperturbed counterpart, ‖P^⊥_t x_{ε,t} − P^⊥_t x_t‖, where P^⊥_t is the projection onto the orthogonal complement. This difference grows across layers: 0, 0.016, 0.0438, 0.0107, 0.0552 for the hidden states at layers 0, 1, 2, 3 and 4, respectively. This supports the explanation of how the proposed method detects and controls such perturbations in the hidden layers.

Furthermore, we provide some insight into why such adversarial attacks succeed in the first place. This follows the same reasoning as the existence of adversarial examples for neural networks in general: the highly nonlinear behavior of neural networks provides powerful representations, but this same power is the source of their vulnerability. For example, a constant function has a 50% chance of making a correct prediction in a binary classification problem under any perturbation, but its performance is limited.
Therefore, we propose to use a linear embedding function that trades off embedding accuracy against robustness.

E DEFINITION OF THREAT MODEL

Generally, an attacker should not have access to the hidden states during inference; in particular, an attacker is not allowed to inject extra noise during inference. To define the threat model of the proposed method in the white-box setting, the attacker has access to both the network and all embedding functions. The condition under which a perturbation ε·z makes our method vulnerable is

Σ_{t=0}^{T−1} ‖E_t(x_{ε,t}) − x_{ε,t}‖_2^2 = 0, x_{ε,0} = x_0 + ε·z.

In words, the perturbation ε·z applied to the input data must result in zero reconstruction loss across all hidden layers, meaning that its state trajectory never spans any of the orthogonal complements of the hidden state spaces. Conventional gradient-based attackers cannot guarantee finding a perfect attack satisfying this equation. A possible alternative is a grid search, backward through the layers, for an adversarial perturbation satisfying the threat-model condition, which is extremely costly.
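The threat-model condition can be checked mechanically: a perturbed input evades the controller only if the reconstruction loss vanishes at every layer along its trajectory. A minimal sketch, with placeholder layer and embedding callables and a numerical tolerance standing in for exact zero:

```python
import torch

def evades_detection(x_eps0, layers, embeddings, tol=1e-8):
    x, total = x_eps0, 0.0
    for f_t, E_t in zip(layers, embeddings):
        total += ((E_t(x) - x) ** 2).sum().item()  # running reconstruction loss
        x = f_t(x)
    # True only if the trajectory never leaves any hidden-state manifold.
    return total <= tol
```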
1. What is the novel approach introduced by the paper in terms of data manipulation? 2. What are the strengths and weaknesses of the proposed method, particularly regarding its claim of being a closed-loop control system? 3. Do you have any suggestions for improving the experimental section, such as adding more baselines or clarifying the training process? 4. How does the reviewer assess the clarity and organization of the paper, including minor issues with table formatting and explanation? 5. Are there any additional suggestions or ideas that could enhance the paper's contribution, such as exploring multi-step projections or constraining the data to a single space?
Review
Review

Strength: This paper introduces a layer-wise projection from poisoned data to the clean data manifold. The results show improved robustness over the baseline on different types of attacks.

Weakness: The statement of closed-loop control is a little ambitious. The overall method is a layer-wise projection from the poisoned data to the clean data manifold. Normally, in a closed loop, we use the control signal u to control the original data rather than the next layer's data; for example, the final balanced state should satisfy u = g(x + f(x + u)) = 0. So a closed loop should involve at least multiple steps within one layer. For different layers, closed-loop control would need different control signals, since the dimension/distribution differs substantially between layers. So this method is only a one-step layer-wise projection, and the x between layers cannot be viewed as the same sample being controlled. I would recommend the authors change the framing from closed-loop to layer-wise projection for a better fit. Also, this method is still a feed-forward network, not a "loop" control. I do think this method is interesting, just a little ambitious in its claims. It could be a useful extension of ResNet-based networks, since the control u can be viewed as a complicated version of a residual.

The experiments are weak, comparing against only one baseline. Also, can the authors state which baseline model they are comparing with? I cannot find it in the text and would appreciate it.

It is unclear to me how E(x) is trained: will this require extra data? What is the running speed of this "closed-loop" method compared with others?

Some minor comments: A trivial comparison would be to train an autoencoder for each layer's x and only pass the decoded results through the network; this in principle learns the data manifold and provides the projection onto it. In Table 2, the dataset names are not centered. Table 3 should be more self-explanatory; it is a little confusing in its current form. Have the authors tried multiple steps within a single layer, or constraining x_t to lie in the same space?

----- post rebuttal ----- The authors addressed most of my concerns and the revision is better than before. I would like to increase my score and would recommend an acceptance.
ICLR
Title DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning Abstract Federated Learning (FL) is a powerful technique for training a model on a server with data from several clients in a privacy-preserving manner. In FL, a server sends the model to every client, who then train the model locally and send it back to the server. The server aggregates the updated models and repeats the process for several rounds. FL incurs significant communication costs, in particular when transmitting the updated local models from the clients back to the server. Recently proposed algorithms quantize the model parameters to efficiently compress FL communication. These algorithms typically have a quantization level that controls the compression factor. We find that dynamic adaptations of the quantization level can boost compression without sacrificing model quality. First, we introduce a time-adaptive quantization algorithm that increases the quantization level as training progresses. Second, we introduce a client-adaptive quantization algorithm that assigns each individual client the optimal quantization level at every round. Finally, we combine both algorithms into DAdaQuant, the doubly-adaptive quantization algorithm. Our experiments show that DAdaQuant consistently improves client→server compression, outperforming the strongest non-adaptive baselines by up to 2.8×. 1 INTRODUCTION Edge devices such as smartphones, remote sensors and smart home appliances generate massive amounts of data (Wang et al., 2018b; Cao et al., 2017; Shi & Dustdar, 2016). In recent years, Federated Learning (FL) has emerged as a technique to train models on this data while preserving privacy (McMahan et al., 2017; Li et al., 2018). In FL, we have a single server that is connected to many clients. Each client stores a local dataset that it does not want to share with the server because of privacy concerns or law enforcement (Voigt & Von dem Bussche, 2017). The server wants to train a model on all local datasets. To this end, it initializes the model and sends it to a random subset of clients. Each client trains the model on its local dataset and sends the trained model back to the server. The server accumulates all trained models into an updated model for the next iteration and repeats the process for several rounds until some termination criterion is met. This procedure enables the server to train a model without accessing any local datasets. Today’s neural network models often have millions or even billions (Brown et al., 2020) of parameters, which makes high communication costs a concern in FL. In fact, Qiu et al. (2020) suggest that communication between clients and server may account for over 70% of energy consumption in FL. Reducing communication in FL is an attractive area of research because it lowers bandwidth requirements, energy consumption and training time. Communication in FL occurs in two phases: Sending parameters from the server to clients (downlink) and sending updated parameters from clients to the server (uplink). Uplink bandwidth usually imposes a tighter bottleneck than downlink bandwidth. This has several reasons. For one, the average global mobile upload bandwidth is currently less than one fourth of the download bandwidth (Speedtest). For another, FL downlink communication sends the same parameters to each client. 
Broadcasting parameters is usually more efficient than the accumulation of parameters from different clients that is required for uplink communication (Amiri et al., 2020; Reisizadeh et al., 2019). For these reasons, we seek to compress uplink communication.

A large class of compression algorithms for FL apply some lossy quantizer Q, optionally followed by a lossless compression stage. Q usually provides a "quantization level" hyperparameter q to control the coarseness of quantization (e.g. the number of bins for fixed-point quantization). When q is kept constant during training, we speak of static quantization. When q changes, we speak of adaptive quantization. Adaptive quantization can exploit asymmetries in the FL framework to minimize communication. One such asymmetry lies in FL's training time, where Jhunjhunwala et al. (2021) observed that early training rounds can use a lower q without affecting convergence. Figure 2 illustrates how time-adaptive quantization leverages this phenomenon to minimize communication. Another asymmetry lies in FL's client space, because most FL algorithms weight client contributions to the global model proportionally to their local dataset sizes. Figure 1 illustrates how client-adaptive quantization can minimize the quantization error. Intuitively, FL clients with greater weighting should have a greater communication budget, and our proposed client-adaptive quantization achieves this in a principled way. To this end, we introduce the expected variance of an accumulation of quantized parameters, E[Var(Σ Q(p))], as a measure of the quantization error. Our client-adaptive quantization algorithm then assigns clients minimal quantization levels, subject to a fixed E[Var(Σ Q(p))]. This lowers the amount of data communicated from clients to the server, without increasing the quantization error.

DAdaQuant (Doubly Adaptive Quantization) combines time- and client-adaptive quantization with an adaptation of the QSGD fixed-point quantization algorithm to achieve state-of-the-art FL uplink compression. In this paper, we make the following contributions:

• We introduce the concept of client-adaptive quantization and develop algorithms for time- and client-adaptive quantization that are computationally efficient, empirically superior to existing algorithms, and compatible with arbitrary FL quantizers. Our client-adaptive quantization is provably optimal for stochastic fixed-point quantizers.

• We create Federated QSGD as an adaptation of the stochastic fixed-point quantizer QSGD that works with FL. Federated QSGD outperforms all other quantizers, establishing a strong baseline for FL compression with static quantization.

• We combine time- and client-adaptive quantization into DAdaQuant. We demonstrate DAdaQuant's state-of-the-art compression by empirically comparing it against several competitive FL compression algorithms.

2 RELATED WORK

FL research has explored several approaches to reduce communication. We identify three general directions. First, there is growing interest in FL algorithms that can converge in fewer rounds. FedAvg (McMahan et al., 2017) achieves this with prolonged local training, while FOLB (Nguyen et al., 2020) speeds up convergence through a more principled client sampling. Since communication is proportional to the number of training rounds, these algorithms effectively reduce communication.
Secondly, communication can be reduced by reducing the model size, because the model size is proportional to the amount of training communication. PruneFL (Jiang et al., 2019) progressively prunes the model over the course of training, while AFD (Bouacida et al., 2021) only trains submodels on clients. Thirdly, it is possible to directly compress FL training communication. FL compression algorithms typically apply techniques like top-k sparsification (Malekijoo et al., 2021; Rothchild et al., 2020) or quantization (Reisizadeh et al., 2019; Shlezinger et al., 2020) to parameter updates, optionally followed by lossless compression. Our work applies to quantization-based compression algorithms. It is partially based on QSGD (Alistarh et al., 2017), which combines lossy fixed-point quantization with a lossless compression algorithm to compress gradients communicated in distributed training. DAdaQuant adapts QSGD into Federated QSGD, which works with Federated Learning. DAdaQuant also draws inspiration from FedPAQ (Reisizadeh et al., 2019), the first FL framework to use lossy compression based on model parameter update quantization. However, FedPAQ does not explore the advantages of additional lossless compression or adaptive quantization. UVeQFed (Shlezinger et al., 2020) is an FL compression algorithm that generalizes scalar quantization to vector quantization and subsequently employs lossless compression with arithmetic coding. Like FedPAQ, UVeQFed also limits itself to a single static quantization level. Faster convergence, model size reduction and communication compression are orthogonal techniques, so they can be combined for further communication savings. For this paper, we limit the scope of empirical comparisons to quantization-based FL compression algorithms.

For quantization-based compression in model training, prior works have demonstrated that DNNs can be successfully trained in low precision (Banner et al., 2018; Gupta et al., 2015; Sun et al., 2019). There are also several adaptive quantization algorithms for training neural networks in a non-distributed setting. Shen et al. (2020) use different quantization levels for different parameters of a neural network. FracTrain (Fu et al., 2020) introduced multi-dimensional adaptive quantization by developing time-adaptive quantization and combining it with parameter-adaptive quantization. However, FracTrain uses the current loss to decide on the quantization level. FL generally can only compute local client losses that are too noisy to be practical for FracTrain. AdaQuantFL introduces time-adaptive quantization to FL, but requires the global loss (Jhunjhunwala et al., 2021). To compute the global loss, AdaQuantFL has to communicate with every client each round. We show in Section 4.2 that this quickly becomes impractical as the number of clients grows. DAdaQuant's time-adaptive quantization overcomes this issue without compromising on the underlying FL communication. In addition, to the best of our knowledge, DAdaQuant is the first algorithm to use client-adaptive quantization.

3 THE DADAQUANT METHOD

3.1 FEDERATED LEARNING

Federated Learning assumes a client-server topology with a set C = {c_i | i ∈ {1, 2, ..., N}} of N clients that are connected to a single server. Each client c_k has a local dataset D_k drawn from the local data distribution 𝒟_k. Given a model M with parameters p, a loss function f_p(d ∈ D_k) and the local loss F_k(p) = (1/|D_k|) Σ_{d∈D_k} f_p(d), FL seeks to minimize the global loss G(p) = Σ_{k=1}^N (|D_k| / Σ_l |D_l|) F_k(p).
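A minimal sketch of this weighted objective and the matching parameter aggregation; the local-loss callables, parameter vectors and dataset sizes are placeholders.

```python
import numpy as np

def global_loss(local_losses, dataset_sizes, params):
    """G(p) = sum_k (|D_k| / sum_l |D_l|) F_k(p); local_losses[k] is F_k."""
    total = sum(dataset_sizes)
    return sum(n / total * F(params) for F, n in zip(local_losses, dataset_sizes))

def aggregate(client_params, dataset_sizes):
    """Dataset-size-weighted average of client parameter vectors."""
    total = sum(dataset_sizes)
    return sum(n / total * np.asarray(p) for p, n in zip(client_params, dataset_sizes))
```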
3.2 FEDERATED AVERAGING (FEDAVG)

DAdaQuant makes only minimal assumptions about the FL algorithm. Crucially, DAdaQuant can complement FedAvg (McMahan et al., 2017), which is representative of a large class of FL algorithms. FedAvg trains the model M over several rounds. In each round t, FedAvg sends the model parameters p_t to a random subset S_t of K clients, who then optimize their local objectives F_k(p_t) and send the updated model parameters p^k_{t+1} back to the server. The server accumulates all parameters into the new global model p_{t+1} = Σ_{k∈S_t} (|D_k| / Σ_j |D_j|) p^k_{t+1} and starts the next round. Algorithm 1 lists FedAvg in detail. For our experiments, we use the FedProx (Li et al., 2018) adaptation of FedAvg. FedProx improves the convergence of FedAvg by adding the proximal term (µ/2)‖p^k_{t+1} − p_t‖^2 to the local objective F_k(p^k_{t+1}) in Line 20 of Algorithm 1.

3.3 QUANTIZATION WITH FEDERATED QSGD

While DAdaQuant can be applied to any quantizer with a configurable quantization level, it is optimized for fixed-point quantization. We introduce Federated QSGD as a competitive stochastic fixed-point quantizer on top of which DAdaQuant is applied. In general, stochastic fixed-point quantization uses a quantizer Q_q with quantization level q that splits R_{≥0} and R_{≤0} into q intervals each. Q_q(p) then returns the sign of p and |p| stochastically rounded to one of the endpoints of its encompassing interval. Q_q(p) quantizes the vector p elementwise. We design DAdaQuant's quantization stage based on QSGD, an efficient fixed-point quantizer for state-of-the-art gradient compression. QSGD quantizes a vector p in three steps:

1. Quantize p as Q_q(p / ‖p‖_2) into q bins in [0, 1], storing signs and ‖p‖_2 separately. (lossy)
2. Encode the resulting integers with 0 run-length encoding. (lossless)
3. Encode the resulting integers with Elias ω coding. (lossless)

QSGD has been designed specifically for quantizing gradients, which makes it not directly applicable to parameter compression. To overcome this limitation, we apply difference coding to uplink compression, first introduced to FL by FedPAQ. Each client c_k applies Q_q to the parameter updates p^k_{t+1} − p_t (cf. Line 21 of Algorithm 1) and sends them to the server. The server keeps track of the previous parameters p_t and accumulates the quantized parameter updates into the new parameters as p_{t+1} = p_t + Σ_{k∈S_t} (|D_k| / Σ_l |D_l|) Q_q(p^k_{t+1} − p_t) (cf. Line 11 of Algorithm 1). We find that QSGD works well with parameter updates, which can be regarded as accumulations of gradients over several training steps. We call this adaptation of QSGD Federated QSGD.
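A minimal numpy sketch of the lossy stage of Federated QSGD: unbiased stochastic fixed-point quantization of a parameter-update vector into q bins, with the l2 norm and signs stored separately. The lossless stages (0 run-length encoding and Elias ω coding) are omitted here.

```python
import numpy as np

def quantize(p, q, rng=None):
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(p)
    if norm == 0.0:
        return np.zeros(p.shape, dtype=np.int64), np.zeros(p.shape), norm
    scaled = np.abs(p) / norm * q            # positions in [0, q]
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part (unbiased).
    bins = (lower + (rng.random(p.shape) < scaled - lower)).astype(np.int64)
    return bins, np.sign(p), norm

def dequantize(bins, signs, norm, q):
    return signs * bins / q * norm           # E[dequantize(quantize(p))] = p
```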
3.4 TIME-ADAPTIVE QUANTIZATION

Time-adaptive quantization uses a different quantization level q_t for each round t of FL training. DAdaQuant chooses q_t to minimize communication costs without sacrificing accuracy. To this end, we find that lower quantization levels suffice to reduce the loss initially, while partly trained models require higher quantization levels to improve further (as illustrated in Figure 2). FracTrain is built on similar observations for non-distributed training. Therefore, we design DAdaQuant to mimic FracTrain in monotonically increasing q_t as a function of t and using the training loss to inform increases in q_t.

When q is too low, FL converges prematurely. Like FracTrain, DAdaQuant monitors the FL loss and increases q on convergence. Unlike FracTrain, there is no single centralized loss function to evaluate, and unlike AdaQuantFL, we do not assume availability of the global training loss G(p_t). Instead, we estimate G(p_t) by the average local loss

Ĝ_t = Σ_{k∈S_t} (|D_k| / Σ_l |D_l|) F_k(p_t),

where S_t is the set of clients sampled at round t. Since S_t typically consists of only a small fraction of all clients, Ĝ_t is a very noisy estimate of G(p_t), which makes it unsuitable for convergence detection. Instead, DAdaQuant tracks a running average loss Ḡ_t = ψ·Ḡ_{t−1} + (1 − ψ)·Ĝ_t.

We initialize q_1 = q_min for some q_min ∈ N. DAdaQuant declares convergence whenever Ḡ_{t−1} ≥ Ḡ_{t−φ} for some φ ∈ N that specifies the number of rounds across which we compare Ḡ. On convergence, DAdaQuant sets q_t = 2q_{t−1} and keeps the quantization level fixed for at least φ rounds, so that reductions in G can manifest in Ḡ. Eventually, the training loss converges regardless of the quantization level. To avoid unconstrained quantization increases on convergence, we cap the quantization level at q_max. The following rule summarizes DAdaQuant's time-adaptive quantization:

q_t = q_min, if t = 0;
q_t = 2q_{t−1}, if t > 0 and Ḡ_{t−1} ≥ Ḡ_{t−φ} and t > φ and 2q_{t−1} ≤ q_max and q_{t−1} = q_{t−φ};
q_t = q_{t−1}, otherwise.

3.5 CLIENT-ADAPTIVE QUANTIZATION

FL algorithms typically accumulate each parameter p_i over all clients into a weighted average p = Σ_{i=1}^K w_i p_i (see Algorithm 1). Quantized FL accumulates quantized parameters Q_q(p) = Σ_{i=1}^K w_i Q_q(p_i), where q is the quantization level. We define the quantization error as e^q_p = |p − Q_q(p)|. We observe in our experiments that the communication cost per client is roughly a linear function of Federated QSGD's quantization level q, so the communication cost per round is proportional to Q = Kq. We call Q the communication budget and use it as a proxy measure of communication cost.

Client-adaptive quantization dynamically adjusts the quantization level of each client, so that even within a single round each client c_k can be assigned a different quantization level q_k. The previous definitions then generalize to Q = Σ_{k=1}^K q_k, Q_{q_1...q_K}(p) = Σ_{i=1}^K w_i Q_{q_i}(p_i), and e^{q_1...q_K}_p = |p − Q_{q_1...q_K}(p)|. Prior convergence results for distributed training and FL rely on an upper bound b on Var(Q_{q_1...q_K}(p)) that determines the convergence speed (Li et al., 2017; Horváth et al., 2019; Reisizadeh et al., 2019). This makes Var(Q_{q_1...q_K}(p)) a natural measure to optimize when choosing q_k. We optimize the closely related measure E_{p_1...p_K}[Var(Q_{q_1...q_K}(p))], which replaces the upper bound with an expectation over the parameters p_1 ... p_K. Heuristically, we expect this averaged measure to provide a better estimate of practically observed quantization errors than an upper bound. For a stochastic, unbiased fixed-point compressor like Federated QSGD, E_{p_1...p_K}[Var(Q_{q_1...q_K}(p))] equals E_{p_1...p_K}[Var(e^{q_1...q_K}_p)] and can be evaluated analytically.

We devise an algorithm that chooses q_1 ... q_K to minimize Q subject to E_{p_1...p_K}[Var(e^{q_1...q_K}_p)] = E_{p_1...p_K}[Var(e^q_p)] for a given q. Our algorithm thus minimizes communication costs while maintaining a quantization error similar to static quantization. Theorem 1 provides an analytical formula for the quantization levels q_1 ... q_K.

Theorem 1. Given parameters p_1 ... p_K ∼ U[−t, t] and quantization level q, min_{q_1...q_K} Σ_{i=1}^K q_i subject to E_{p_1...p_K}[Var(e^{q_1...q_K}_p)] = E_{p_1...p_K}[Var(e^q_p)] is minimized by q_i = √(a/b) × w_i^{2/3}, where a = Σ_{j=1}^K w_j^{2/3} and b = Σ_{j=1}^K w_j^2 / q^2.

DAdaQuant applies Theorem 1 to lower communication costs while maintaining the same loss as static quantization with a fixed q.
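A minimal sketch of the assignment from Theorem 1, including the integer rounding described just below; with equal weights w_k = 1/K it recovers the static level q for every client.

```python
import numpy as np

def client_levels(weights, q):
    """q_i = max(1, round(sqrt(a / b) * w_i^(2/3))) with
    a = sum_j w_j^(2/3) and b = sum_j w_j^2 / q^2."""
    w = np.asarray(weights, dtype=float)
    a = np.sum(w ** (2.0 / 3.0))
    b = np.sum(w ** 2) / q ** 2
    levels = np.sqrt(a / b) * w ** (2.0 / 3.0)
    return np.maximum(1, np.round(levels)).astype(int)

# Example: three clients weighted by relative dataset size, static level q = 8.
print(client_levels([0.7, 0.2, 0.1], q=8))   # larger datasets get finer levels
```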
To ensure that quantization levels are natural numbers, DAdaQuant approximates the optimal real-valued solution as $q_i = \max\big(1, \mathrm{round}\big(\sqrt{\frac{a}{b}} \times w_i^{2/3}\big)\big)$. Appendix B gives a detailed proof of Theorem 1. To the best of our knowledge, DAdaQuant is the first algorithm to use client-adaptive quantization.

Algorithm 1: The FedAvg and DAdaQuant algorithms. The plain lines list FedAvg; adding the highlighted lines (quantization, client-adaptive quantization and time-adaptive quantization in the original color coding) creates DAdaQuant.

1  Function RunServer()
2    Initialize $w_i = \frac{|D_i|}{\sum_j |D_j|}$ for all $i \in [1, \ldots, N]$
3    for $t = 0, \ldots, T-1$ do
4      Choose $S_t \subset C$ with $|S_t| = K$, including each $c_k \in C$ with uniform probability
5      $q_t \leftarrow q_{\min}$ if $t = 0$; $\;2q_{t-1}$ if $t > \phi$ and $\tilde{G}_{t-1} \geq \tilde{G}_{t-\phi}$ and $2q_{t-1} < q_{\max}$ and $q_{t-1} = q_{t-\phi}$; $\;q_{t-1}$ else
6      for $c_k \in S_t$ do in parallel
7        $q^k_t \leftarrow \max\big(1, \mathrm{round}\big(\sqrt{a/b} \times w_k^{2/3}\big)\big)$ with $a = \sum_j w_j^{2/3}$ and $b = \sum_j w_j^2 / q_t^2$
8        Send($c_k$, $p_t$, $q^k_t$)
9        Receive($c_k$, $p^k_{t+1}$, $\hat{G}^k_t$)
10     end
11     $p_{t+1} \leftarrow \sum_{k \in S_t} w_k p^k_{t+1}$
12     $\hat{G}_t \leftarrow \sum_{k \in S_t} w_k \hat{G}^k_t$
13     $\tilde{G}_t \leftarrow \hat{G}_0$ if $t = 0$; $\;\psi \tilde{G}_{t-1} + (1-\psi)\hat{G}_t$ else
14   end
15 end
16 Function RunClient($c_k$)
17   while True do
18     Receive(Server, $p_t$, $q^k_t$)
19     $\hat{G}^k_t \leftarrow F_k(p_t)$
20     $p^k_{t+1} \leftarrow p_t$ trained with SGD on $F_k$ for E epochs with learning rate η
21     Send(Server, $Q_{q^k_t}(p^k_{t+1} - p_t)$, $\hat{G}^k_t$)
22   end
23 end

3.6 DOUBLY-ADAPTIVE QUANTIZATION (DADAQUANT)

DAdaQuant combines the time-adaptive and client-adaptive quantization algorithms described in the previous sections. At each round t, time-adaptive quantization determines a preliminary quantization level $q_t$. Client-adaptive quantization then finds the client quantization levels $q^k_t$, $k \in \{1, \ldots, K\}$, that minimize $\sum_{i=1}^{K} q_i$ subject to $E_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = E_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_t}_p)]$. Algorithm 1 lists DAdaQuant in detail. Figure 3 gives an example of how our time-adaptive, client-adaptive and doubly-adaptive quantization algorithms set quantization levels.

Reisizadeh et al. (2019) prove the convergence of FL with quantization for convex and non-convex cases as long as the quantizer Q is (1) unbiased and (2) has a bounded variance. These convergence results extend to DAdaQuant when combined with any quantizer that satisfies (1) and (2) for DAdaQuant's minimum quantization level q = 1. Crucially, this includes Federated QSGD. We highlight DAdaQuant's low overhead and general applicability. The computational overhead is dominated by an additional evaluation epoch per round per client to compute $\hat{G}_t$, which is negligible when training for many epochs per round. In our experiments, we observe computational overheads of ≈ 1% (see Appendix A.2). DAdaQuant can complement any FL algorithm that trains models over several rounds and accumulates a weighted average of client parameters. Most FL algorithms, including FedAvg, follow this design.
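Putting Algorithm 1 together, the sketch below illustrates the server side of one DAdaQuant round in Python. It is a simplified sketch under stated assumptions, not the reference implementation: the client objects (with hypothetical local_loss and local_train methods) and the weight dictionary w stand in for the real FL runtime, and it reuses the quantize and client_adaptive_levels helpers sketched earlier.

```python
import numpy as np

def dadaquant_round(p_t, sampled_clients, w, q_t, g_smooth, psi, rng):
    """One DAdaQuant server round (cf. Algorithm 1, server side).

    sampled_clients is S_t; w maps clients to weights w_k; q_t is the
    time-adaptive level; g_smooth is the running-average loss G~_{t-1}.
    """
    weights = np.array([w[c] for c in sampled_clients])
    q_levels = client_adaptive_levels(weights, q_t)       # Theorem 1
    p_next, g_hat = np.array(p_t, copy=True), 0.0
    for c, q_c in zip(sampled_clients, q_levels):
        g_hat += w[c] * c.local_loss(p_t)                 # evaluation epoch -> G^_t
        p_c = c.local_train(p_t)                          # E epochs of SGD on F_k
        p_next += w[c] * quantize(p_c - p_t, q_c, rng)    # quantized difference coding
    g_smooth = psi * g_smooth + (1 - psi) * g_hat         # running-average loss G~_t
    return p_next, g_smooth
```

In a real deployment the quantized updates would of course be encoded, transmitted and decoded rather than added in place; the sketch only traces the data flow of Lines 5-13 of Algorithm 1.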
4 EXPERIMENTS

4.1 EXPERIMENTAL DETAILS

Evaluation We use DAdaQuant with Federated QSGD to train different models with FedProx on different datasets for a fixed number of rounds. We monitor the test loss and accuracy at fixed intervals and measure uplink communication at every round across all devices.

Models & datasets We select a broad and diverse set of five models and datasets to demonstrate the general applicability of DAdaQuant. To this end, we use DAdaQuant to train a linear model, CNNs and LSTMs of varying complexity on a federated synthetic dataset (Synthetic), as well as two federated image datasets (FEMNIST and CelebA) and two federated natural language datasets (Sent140 and Shakespeare) from the LEAF (Caldas et al., 2018) project for standardized FL research. We refer to Appendix A.1 for more information on the models, datasets, training objectives and implementation.

System heterogeneity In practice, FL has to cope with clients that have different compute capabilities. We follow Li et al. (2018) and simulate this system heterogeneity by randomly reducing the number of epochs to E′ for a random subset $S'_t \subset S_t$ of clients at each round t, where E′ is sampled from $[1, \ldots, E]$ and $|S'_t| = 0.9K$; a code sketch of this protocol follows at the end of this section.

Baselines We compare DAdaQuant against competing quantization-based algorithms for FL parameter compression, namely Federated QSGD, FedPAQ (Reisizadeh et al., 2019), GZip with fixed-point quantization (FxPQ + GZip), UVeQFed (Shlezinger et al., 2020) and FP8. Federated QSGD (see Section 3.3) is our most important baseline because it outperforms the other algorithms. FedPAQ only applies fixed-point quantization, which is equivalent to Federated QSGD without lossless compression. Similarly, FxPQ + GZip is equivalent to Federated QSGD with GZip for its lossless compression stages. UVeQFed generalizes scalar quantization to vector quantization, followed by arithmetic coding. We apply UVeQFed with the optimal hyperparameters reported by its authors. FP8 (Wang et al., 2018a) is a floating-point quantizer that uses an 8-bit floating-point format designed for storing neural network gradients. We also evaluate all experiments without compression to establish an accuracy benchmark.

Hyperparameters With the exception of CelebA, all our datasets and models are also used by Li et al. (2018). We therefore adopt most of the hyperparameters from Li et al. (2018) and use LEAF's hyperparameters for CelebA (Caldas et al., 2018). For all experiments, we sample 10 clients each round. We train Synthetic, FEMNIST and CelebA for 500 rounds each. We train Sent140 for 1000 rounds due to slow convergence and Shakespeare for 50 rounds due to rapid convergence. We use batch size 10, learning rates 0.01, 0.003, 0.3, 0.8, 0.1 and µ values (FedProx's proximal term coefficient) 1, 1, 1, 0.001, 0 for Synthetic, FEMNIST, Sent140, Shakespeare and CelebA respectively. We randomly split the local datasets into an 80% training set and a 20% test set.

To select the quantization level q for static quantization with Federated QSGD, FedPAQ and FxPQ + GZip, we run a grid search over q = 1, 2, 4, 8, . . . and choose for each dataset the lowest q for which Federated QSGD exceeds uncompressed training in accuracy. We set UVeQFed's "coding rate" hyperparameter R = 4, which is the lowest value for which UVeQFed achieves negligible accuracy differences compared to uncompressed training. We set the remaining hyperparameters of UVeQFed to the optimal values reported by its authors. Appendix A.4 shows further experiments that compare against UVeQFed with R chosen to maximize its compression factor.

For DAdaQuant's time-adaptive quantization, we set ψ to 0.9, φ to 1/10th of the number of rounds and $q_{\max}$ to the quantization level q for each experiment. For Synthetic and FEMNIST, we set $q_{\min}$ to 1. We find that Sent140, Shakespeare and CelebA require a high quantization level to achieve top accuracies and/or to converge in few rounds. This prevents time-adaptive quantization from increasing the quantization level quickly enough, resulting in prolonged low-precision training that hurts model performance. To counter this effect, we set $q_{\min}$ to $q_{\max}/2$. This effectively results in binary time-adaptive quantization with an initial low-precision phase with $q = q_{\max}/2$, followed by a high-precision phase with $q = q_{\max}$.
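As a concrete illustration of the system-heterogeneity protocol above, the following sketch draws the reduced epoch counts for one round. It is our hedged reading of the procedure from Li et al. (2018), not their code, and the function name is ours.

```python
import random

def heterogeneous_epochs(sampled_clients, E, rng):
    """Simulate system heterogeneity: a random 90% of the sampled clients
    train for a reduced epoch count E' drawn uniformly from {1, ..., E}."""
    slow = set(rng.sample(sampled_clients, int(0.9 * len(sampled_clients))))
    return {c: rng.randint(1, E) if c in slow else E for c in sampled_clients}

print(heterogeneous_epochs(list(range(10)), E=20, rng=random.Random(0)))
```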
4.2 RESULTS

We repeat the main experiments three times and report average results and their standard deviation (where applicable). Table 1 shows the highest accuracy and total communication for each experiment. Figure 4 plots the maximum accuracy achieved for any given amount of communication.

Baselines Table 1 shows that the accuracy of most experiments lies within the margin of error of the uncompressed experiments. This reiterates the viability of quantization-based compression algorithms for communication reduction in FL. For all experiments, Federated QSGD achieves a significantly higher compression factor than the other baselines. The authors of FedPAQ and UVeQFed also compare their methods against QSGD and report them as superior. However, FedPAQ is compared against "unfederated" QSGD that communicates gradients after each local training step, and UVeQFed is compared against QSGD without its lossless compression stages.

Time-adaptive quantization The purely time-adaptive version of DAdaQuant, DAdaQuant_time, universally outperforms Federated QSGD and the other baselines in Table 1, achieving comparable accuracies while lowering communication costs. DAdaQuant_time performs particularly well on Synthetic and FEMNIST, where it starts from the lowest possible quantization level q = 1. However, binary time-adaptive quantization still measurably improves over QSGD for Sent140, Shakespeare and CelebA. Figure 8 in Appendix A.5 provides empirical evidence that AdaQuantFL's communication scales linearly with the number of clients. As a result, AdaQuantFL is prohibitively expensive for datasets with thousands of clients such as CelebA and Sent140. DAdaQuant does not face this problem because its communication is unaffected by the number of clients.

Client-adaptive quantization The purely client-adaptive version of DAdaQuant, DAdaQuant_clients, also universally outperforms Federated QSGD and the other baselines in Table 1, achieving similar accuracies while lowering communication costs. Unsurprisingly, the performance of DAdaQuant_clients is correlated with the coefficient of variation $c_v = \sigma/\mu$ of the numbers of samples in the local datasets with mean µ and standard deviation σ: Synthetic ($c_v$ = 3.3) and Shakespeare ($c_v$ = 1.7) achieve significantly higher compression factors than Sent140 ($c_v$ = 0.3), FEMNIST ($c_v$ = 0.4) and CelebA ($c_v$ = 0.3).

DAdaQuant DAdaQuant outperforms DAdaQuant_time and DAdaQuant_clients in communication while achieving similar accuracies. The compression factors of DAdaQuant are roughly multiplicative in those of DAdaQuant_clients and DAdaQuant_time. This demonstrates that we can effectively combine time- and client-adaptive quantization for maximal communication savings. Figure 4 shows that DAdaQuant achieves a higher accuracy than the strongest baseline, Federated QSGD, for any fixed amount of client→server communication.

5 CONCLUSION

We introduced DAdaQuant as a computationally efficient and robust algorithm to boost the performance of quantization-based FL compression algorithms. We showed intuitively and mathematically how DAdaQuant's dynamic adjustment of the quantization level across time and clients minimizes client→server communication while maintaining convergence speed. Our experiments establish DAdaQuant as nearly universally superior to static quantizers, achieving state-of-the-art compression factors when applied to Federated QSGD. The communication savings of DAdaQuant effectively lower FL bandwidth usage, energy consumption and training time.
Future work may apply and adapt DAdaQuant to new quantizers, further pushing the state of the art in FL uplink compression.

6 REPRODUCIBILITY STATEMENT

Our submission includes a repository with the source code for DAdaQuant and for the experiments presented in this paper. All the datasets used in our experiments are publicly available. Any postprocessing steps of the datasets are described in Appendix A.1. To facilitate the reproduction of our results, we have bundled all our source code, dependencies and datasets into a Docker image. The repository submitted with this paper contains instructions on how to use this Docker image and reproduce all plots and tables in this paper.

7 ETHICS STATEMENT

FL trains models on private client datasets in a privacy-preserving manner. However, FL does not completely eliminate privacy concerns, because the transmitted model updates and the learned model parameters may expose the private client data from which they are derived. Our work does not directly target privacy concerns in FL. With that said, it is worth noting that DAdaQuant does not expose any client data that is not already exposed through standard FL training algorithms. In fact, DAdaQuant reduces the amount of exposed data through lossy compression of the model updates. We therefore believe that DAdaQuant is free of ethical complications.

A ADDITIONAL SIMULATION DETAILS AND EXPERIMENTS

A.1 ADDITIONAL SIMULATION DETAILS

Here, we give detailed information on the models, datasets, training objectives and implementation that we use for our experiments. We set up the following five FL tasks:

• Multinomial logistic regression (MLR) on a synthetic dataset called Synthetic that contains vectors in $\mathbb{R}^{60}$, each labeled with one of 10 classes. We use the synthetic dataset generator of Li et al. (2018) to generate the synthetic datasets. The generator samples Synthetic's local datasets and labels from MLR models with randomly initialized parameters. For this purpose, parameters α and β control different kinds of data heterogeneity: α controls the variation in the local models from which the local dataset labels are generated, and β controls the variation in the local dataset samples. We set α = 1 and β = 1 to simulate an FL setting with both kinds of data heterogeneity. This makes Synthetic a useful testbed for FL.

• Character classification into 62 classes of handwritten characters from the FEMNIST dataset using a CNN. FEMNIST groups samples from the same author into the same local dataset.

• Smile detection in facial images from the CelebA dataset using a CNN. CelebA groups samples of the same person into the same local dataset. We note that LEAF's CNN for CelebA uses BatchNorm layers. We replace them with LayerNorm layers because they are more amenable to quantization. This change does not affect the final accuracy.

• Binary sentiment analysis of tweets from the Sent140 dataset using an LSTM. Sent140 groups tweets from the same user into the same local dataset. The majority of local datasets in the raw Sent140 dataset have only a single sample. This impedes FL convergence. Therefore, we filter Sent140 to clients with at least 10 samples (i.e., one complete batch). Caldas et al. (2018) and Li et al. (2018) similarly filter Sent140 for their FL experiments.

• Next-character prediction on text snippets from the Shakespeare dataset of Shakespeare's collected plays using an LSTM. Shakespeare groups lines from the same character into the same local dataset.

Table 2 provides statistics of our models and datasets.
For our experiments in Figure 8, AdaQuantFL requires a hyperparameter s that determines the initial quantization level. We set s to 2, the optimal value reported by the authors of AdaQuantFL. The remaining hyperparameters are identical to those used for the Synthetic dataset experiments in Table 1. We implement the models with PyTorch (Paszke et al., 2019) and use Flower (Beutel et al., 2020) to simulate the FL server and clients.

A.2 COMPUTATIONAL OVERHEAD OF DADAQUANT

A.3 COMPLETE COMMUNICATION-ACCURACY TRADE-OFF CURVES

[Figure: complete communication-accuracy trade-off curves, with panels for the Synthetic, FEMNIST and Sent140 datasets.]

A.4 ADDITIONAL UVEQFED EXPERIMENTS

To demonstrate that the choice of UVeQFed's "coding rate" hyperparameter R does not affect our findings on the superior compression factors of DAdaQuant, we re-evaluate UVeQFed with R = 1, which maximizes UVeQFed's compression factor. Our results in Table 4 show that, with the exception of Shakespeare, DAdaQuant still achieves considerably higher compression factors than UVeQFed.

A.5 ADDITIONAL ADAQUANTFL EXPERIMENTS

In principle, AdaQuantFL could be adapted to work with partial client participation by computing an estimate of the global loss from the sampled subset of clients. While a full evaluation of this approach is beyond the scope of this paper, we conduct a brief feasibility study on FEMNIST. Concretely, we find that a single run of AdaQuantFL with partial client participation on FEMNIST achieved an accuracy of 78.7% with a total client→server communication of 50.5 MB. In contrast, the same run with DAdaQuant_time similarly achieved an accuracy of 78.4%, while lowering the total client→server communication to 27.5 MB.

B PROOFS

Lemma 1. Take an arbitrary quantization level $q_i \in \mathbb{N}$ and parameter $p_i \in [-t, t]$. Then $Q_{q_i}(p_i)$ is an unbiased estimator of $p_i$.

Proof. Let $s_i = \frac{t}{q_i}$, $b_i = \mathrm{rem}(p_i, s_i)$ and $u_i = s_i - b_i$. Then we have
$$E[Q_{q_i}(p_i)] = \frac{u_i}{s_i}(p_i - b_i) + \frac{b_i}{s_i}(p_i + u_i) = p_i \qquad \text{(see Figure 9)}$$

Lemma 2. For arbitrary $t > 0$ and parameter $p_i \in [-t, t]$, let $s_i = \frac{t}{q_i}$, $b_i = \mathrm{rem}(p_i, s_i)$ and $u_i = s_i - b_i$. Then $\mathrm{Var}(Q_{q_i}(p_i)) = u_i b_i$.

Proof.
$$\mathrm{Var}(Q_{q_i}(p_i)) = E\big[(Q_{q_i}(p_i) - E[Q_{q_i}(p_i)])^2\big] = E\big[(Q_{q_i}(p_i) - p_i)^2\big] \qquad \text{(Lemma 1)}$$
$$= \frac{b_i}{s_i} u_i^2 + \frac{u_i}{s_i} b_i^2 = u_i b_i \qquad \text{(see Figure 9)}$$

Lemma 3. Assume that parameters $p_1 \ldots p_K$ are sampled from $U[-t, t]$ for arbitrary $t > 0$. Then $E_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = \frac{t^2}{6} \sum_{i=1}^{K} \frac{w_i^2}{q_i^2}$.

Proof.
$$E_{p_1 \ldots p_K}[\mathrm{Var}(e_p)] = \frac{1}{(2t)^K} \int_{-t}^{t} \cdots \int_{-t}^{t} \mathrm{Var}\Big(\sum_{i=1}^{K} w_i Q_{q_i}(p_i) - p\Big)\, dp_1 \cdots dp_K$$
$$= \frac{1}{t^K} \int_{0}^{t} \cdots \int_{0}^{t} \mathrm{Var}\Big(\sum_{i=1}^{K} w_i Q_{q_i}(p_i) - p\Big)\, dp_1 \cdots dp_K \qquad \text{(symmetry of } Q_{q_i}(p_i) \text{ w.r.t. negation)}$$
$$= \frac{1}{t^K} \int_{0}^{t} \cdots \int_{0}^{t} \sum_{i=1}^{K} w_i^2\, \mathrm{Var}(Q_{q_i}(p_i))\, dp_1 \cdots dp_K \qquad \text{(mutual independence of the } Q_{q_i}(p_i))$$
$$= \frac{1}{t^K} \sum_{i=1}^{K} t^{K-1} \int_{0}^{t} w_i^2\, \mathrm{Var}(Q_{q_i}(p_i))\, dp_i \qquad \text{(exchangeability of finite sums and integrals)}$$
$$= \frac{1}{t} \sum_{i=1}^{K} w_i^2 \int_{0}^{t} u_i b_i\, dp_i \qquad \text{(Lemma 2)}$$
$$= \frac{1}{t} \sum_{i=1}^{K} w_i^2\, q_i \int_{0}^{s_i} u_i b_i\, dp_i \qquad (s_i\text{-periodicity of } u_i \text{ and } b_i)$$
$$= \frac{1}{t} \sum_{i=1}^{K} w_i^2\, q_i \int_{0}^{s_i} (s_i - p_i)\, p_i\, dp_i = \frac{1}{6t} \sum_{i=1}^{K} w_i^2\, q_i s_i^3 = \frac{t^2}{6} \sum_{i=1}^{K} \frac{w_i^2}{q_i^2}$$

Lemma 4. Let $Q_q$ be a stochastic fixed-point quantizer and assume that parameters $p_1 \ldots p_K$ are sampled from $U[-t, t]$ for arbitrary $t > 0$. Then $\min_{q_1 \ldots q_K} E_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)]$ subject to a fixed budget $Q = \sum_{i=1}^{K} q_i$ is minimized by $q_i = Q\, \frac{w_i^{2/3}}{\sum_{k=1}^{K} w_k^{2/3}}$.
Proof. Define
$$f(q) = E_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)], \qquad g(q) = \sum_{i=1}^{K} q_i, \qquad L(q) = f(q) - \lambda g(q) \quad \text{(Lagrangian)}$$
Any (local) minimum $\hat{q}$ satisfies $\nabla L(\hat{q}) = 0$, i.e.
$$\nabla\, \frac{t^2}{6} \sum_{i=1}^{K} \frac{w_i^2}{q_i^2} - \lambda\, \nabla \sum_{i=1}^{K} q_i = 0 \;\wedge\; \sum_{i=1}^{K} q_i = Q \qquad \text{(Lemma 3)}$$
$$\iff \forall i = 1 \ldots K:\;\; \frac{t^2}{-3} \cdot \frac{w_i^2}{q_i^3} = \lambda \;\wedge\; \sum_{i=1}^{K} q_i = Q$$
$$\iff \forall i = 1 \ldots K:\;\; q_i = \sqrt[3]{\frac{t^2}{-3\lambda}\, w_i^2} \;\wedge\; \sum_{i=1}^{K} q_i = Q$$
$$\implies \forall i = 1 \ldots K:\;\; q_i = Q\, \frac{w_i^{2/3}}{\sum_{j=1}^{K} w_j^{2/3}}$$

B.1 PROOF OF THEOREM 1

Proof. Using Lemma 4, it is straightforward to show that for any V, $\min_{q_1 \ldots q_K} \sum_{i=1}^{K} q_i$ subject to $E_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = V$ is minimized by $q_i = C w_i^{2/3}$ for the unique $C \in \mathbb{R}_{>0}$ that satisfies $E_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = V$. Then, taking $V = E_{p_1 \ldots p_K}[\mathrm{Var}(e^q_p)]$ and $C = \sqrt{\frac{a}{b}}$ (see Theorem 1), we do indeed get
$$E_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = \frac{t^2}{6} \sum_{i=1}^{K} \frac{w_i^2}{(C w_i^{2/3})^2} \qquad \text{(Lemma 3)}$$
$$= \frac{1}{C^2} \cdot \frac{t^2}{6} \sum_{i=1}^{K} w_i^{2/3} = \frac{\sum_{j=1}^{K} w_j^2 / q^2}{\sum_{j=1}^{K} w_j^{2/3}} \cdot \frac{t^2}{6} \sum_{i=1}^{K} w_i^{2/3} = \frac{t^2}{6} \sum_{j=1}^{K} \frac{w_j^2}{q^2} = E_{p_1 \ldots p_K}[\mathrm{Var}(e^q_p)] \qquad \text{(Lemma 3)}$$
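As a quick numerical sanity check of Lemma 3 (and hence of the error-matching constraint behind Theorem 1), the following sketch compares the empirical variance of the weighted quantization error against the analytical formula. The scalar quantizer mirrors the one analyzed above; all names are ours, and the check is illustrative rather than part of the paper's evaluation.

```python
import numpy as np

def quantize_scalar(p, q, t, rng):
    """Stochastic fixed-point quantization of p in [-t, t] with level q."""
    s = t / q                                        # interval width s_i = t / q_i
    lower = np.floor(np.abs(p) / s)
    up = rng.random(np.shape(p)) < (np.abs(p) / s - lower)
    return np.sign(p) * (lower + up) * s

rng = np.random.default_rng(0)
t, q, w = 1.0, 4, np.array([0.7, 0.2, 0.1])
n = 200_000
p = rng.uniform(-t, t, size=(n, 3))
e = (quantize_scalar(p, q, t, rng) * w).sum(axis=1) - (p * w).sum(axis=1)
print(e.var())                               # empirical E[Var(e_p)]
print(t**2 / 6 * np.sum(w**2) / q**2)        # Lemma 3 prediction
```

Both printed values agree up to Monte Carlo noise (≈ 0.0056 for these settings), which matches the closed form $\frac{t^2}{6}\sum_i w_i^2/q_i^2$ with all $q_i = q$.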
1. What is the focus of the paper regarding federated learning? 2. What are the strengths of the proposed approach, particularly in its adaptive quantization strategies? 3. What are the weaknesses of the paper, especially in its comparison with prior works and overselling of contributions? 4. How does the reviewer assess the novelty and effectiveness of the time-adaptive quantization methods? 5. Are there any remaining questions or concerns regarding the paper's experiments and claims?
Summary Of The Paper Review
Summary Of The Paper This paper studies how to compress the local model changes in federated learning to save uplink communication costs. In particular, the authors found that (1) adaptively increasing the number of quantization levels and (2) adaptively assigning different quantization levels to different clients can effectively outperform previous static quantization schemes. They evaluated the proposed algorithm on multiple federated learning datasets. Review Strengths The two proposed adaptive quantization strategies are simple and easy to implement in practice. In the experiments, they exhibit better performance than static quantization schemes. The client-level adaptive quantization scheme is new and has not appeared in the literature. The idea itself makes sense, and the authors proposed a theory-grounded approach to make it work. Weaknesses I think the "time-adaptivity" part of the paper is somewhat improper, and the authors tend to oversell their contribution. The contribution in this part is rather incremental compared to previous works. In the introduction, the authors mentioned that "we observe that early training rounds can use a lower q without affecting convergence". But in fact, this observation was first made in previous literature, such as (Jhunjhunwala et al., ICASSP 2021). More generally, the intuition that one needs more communication towards the end of training already appeared two years ago in (Wang & Joshi, MLSys 2019, "Adaptive communication strategies to achieve the best error-runtime trade-off in local-update SGD"). The way the authors wrote this part can make readers think this paper discovered this idea, which is not the case. The authors should make it clear in the introduction that "previous literature observed that... and we further improve their algorithms by overcoming/addressing...". The comparison of time-adaptive quantization schemes with previous work (Jhunjhunwala et al., ICASSP 2021) is unfair. Basically, (Jhunjhunwala et al., ICASSP 2021) lets all clients participate in each round of training because it considers full-participation FL. But the algorithm itself can be easily extended to the case where we only sample a few clients at each round: one can simply replace the global loss by the average loss within the current subset. Compared to this work, the authors should demonstrate that (1) the moving average of the loss helps; (2) a step-increase strategy is better than the strategy used in (Jhunjhunwala et al., ICASSP 2021); (3) the proposed strategy converges faster than any static quantization scheme, as predicted in Figure 2. Related to the above point, I feel the authors failed to demonstrate the effectiveness of the time-adaptive quantization methods. There are still many remaining questions. In their experiments, the authors do not show whether the moving average of the loss helps. The authors do not compare their results with the previous work AdaQuantFL. The authors do not show that the proposed scheme is faster than any static quantization scheme, as predicted in Figure 2. Instead, they just fix one quantization level and report the final accuracy. In addition, the authors claim Pareto optimality in their experiments. I don't agree with this. Although the proposed algorithm is better, why is it the optimal one?
ICLR
Title DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning Abstract Federated Learning (FL) is a powerful technique for training a model on a server with data from several clients in a privacy-preserving manner. In FL, a server sends the model to every client, who then train the model locally and send it back to the server. The server aggregates the updated models and repeats the process for several rounds. FL incurs significant communication costs, in particular when transmitting the updated local models from the clients back to the server. Recently proposed algorithms quantize the model parameters to efficiently compress FL communication. These algorithms typically have a quantization level that controls the compression factor. We find that dynamic adaptations of the quantization level can boost compression without sacrificing model quality. First, we introduce a time-adaptive quantization algorithm that increases the quantization level as training progresses. Second, we introduce a client-adaptive quantization algorithm that assigns each individual client the optimal quantization level at every round. Finally, we combine both algorithms into DAdaQuant, the doubly-adaptive quantization algorithm. Our experiments show that DAdaQuant consistently improves client→server compression, outperforming the strongest non-adaptive baselines by up to 2.8×. 1 INTRODUCTION Edge devices such as smartphones, remote sensors and smart home appliances generate massive amounts of data (Wang et al., 2018b; Cao et al., 2017; Shi & Dustdar, 2016). In recent years, Federated Learning (FL) has emerged as a technique to train models on this data while preserving privacy (McMahan et al., 2017; Li et al., 2018). In FL, we have a single server that is connected to many clients. Each client stores a local dataset that it does not want to share with the server because of privacy concerns or law enforcement (Voigt & Von dem Bussche, 2017). The server wants to train a model on all local datasets. To this end, it initializes the model and sends it to a random subset of clients. Each client trains the model on its local dataset and sends the trained model back to the server. The server accumulates all trained models into an updated model for the next iteration and repeats the process for several rounds until some termination criterion is met. This procedure enables the server to train a model without accessing any local datasets. Today’s neural network models often have millions or even billions (Brown et al., 2020) of parameters, which makes high communication costs a concern in FL. In fact, Qiu et al. (2020) suggest that communication between clients and server may account for over 70% of energy consumption in FL. Reducing communication in FL is an attractive area of research because it lowers bandwidth requirements, energy consumption and training time. Communication in FL occurs in two phases: Sending parameters from the server to clients (downlink) and sending updated parameters from clients to the server (uplink). Uplink bandwidth usually imposes a tighter bottleneck than downlink bandwidth. This has several reasons. For one, the average global mobile upload bandwidth is currently less than one fourth of the download bandwidth (Speedtest). For another, FL downlink communication sends the same parameters to each client. 
Broadcasting parameters is usually more efficient than the accumulation of parameters from differ- ent clients that is required for uplink communication (Amiri et al., 2020; Reisizadeh et al., 2019). For these reasons, we seek to compress uplink communication. A large class of compression algorithms for FL apply some lossy quantizer Q, optionally followed by a lossless compression stage. Q usually provides a “quantization level” hyperparameter q to control the coarseness of quantization (e.g. the number of bins for fixed-point quantization). When q is kept constant during training, we speak of static quantization. When q changes, we speak of adaptive quantization. Adaptive quantization can exploit asymmetries in the FL framework to minimize communication. One such asymmetry lies in FL’s training time, where Jhunjhunwala et al. (2021) observed that early training rounds can use a lower q without affecting convergence. Figure 2 illustrates how time-adaptive quantization leverages this phenomenon to minimize communication. Another asymmetry lies in FL’s client space, because most FL algorithms weight client contributions to the global model proportional to their local dataset sizes. Figure 1 illustrates how client-adaptive quantization can minimize the quantization error. Intuitively, FL clients with greater weighting should have a greater commu- nication budget and our proposed client-adaptive quantization achieves this in a principled way. To this end, we introduce the expected variance of an accumulation of quantized parameters, E[Var( ∑ Q(p))], as a measure of the quantization error. Our client-adaptive quantization algorithm then assigns clients minimal quantization levels, subject to a fixed E[Var( ∑ Q(p))]. This lowers the amount of data communicated from clients to the server, without increasing the quantization error. DAdaQuant (Doubly Adaptive Quantization) combines time- and client-adaptive quantization with an adaptation of the QSGD fixed-point quantization algorithm to achieve state-of-the-art FL uplink compression. In this paper, we make the following contributions: • We introduce the concept of client-adaptive quantization and develop algorithms for time- and client-adaptive quantization that are computationally efficient, empirically superior to existing algorithms, and compatible with arbitrary FL quantizers. Our client-adaptive quantization is provably optimal for stochastic fixed-point quantizers. • We create Federated QSGD as an adaptation of the stochastic fixed-point quantizer QSGD that works with FL. Federated QSGD outperforms all other quantizers, establishing a strong baseline for FL compression with static quantization. • We combine time- and client-adaptive quantization into DAdaQuant. We demonstrate DAdaQuant’s state-of-the-art compression by empirically comparing it against several competitive FL compression algorithms. 2 RELATED WORK FL research has explored several approaches to reduce communication. We identify three general directions. First, there is a growing interest of investigating FL algorithms that can converge in fewer rounds. FedAvg (McMahan et al., 2017) achieves this with prolonged local training, while FOLB (Nguyen et al., 2020) speeds up convergence through a more principled client sampling. Since communication is proportional to the number of training rounds, these algorithms effectively reduce communication. 
Secondly, communication can be reduced by reducing the model size because the model size is proportional to the amount of training communication. PruneFL (Jiang et al., 2019) progressively prunes the model over the course of training, while AFD (Bouacida et al., 2021) only trains submodels on clients. Thirdly, it is possible to directly compress FL training communication. FL compression algorithms typically apply techniques like top-k sparsification (Malekijoo et al., 2021; Rothchild et al., 2020) or quantization (Reisizadeh et al., 2019; Shlezinger et al., 2020) to parameter updates, optionally followed by lossless compression. Our work applies to quantization-based compression algorithms. It is partially based on QSGD (Alistarh et al., 2017), which combines lossy fixed-point quantization with a lossless compression algorithm to compress gradients communicated in distributed training. DAdaQuant adapts QSGD into Federated QSGD, which works with Federated Learning. DAdaQuant also draws inspiration from FedPAQ (Reisizadeh et al., 2019), the first FL framework to use lossy compression based on model parameter update quantization. However, FedPAQ does not explore the advantages of additional lossless compression or adaptive quantization. UVeQFed (Shlezinger et al., 2020) is an FL compression algorithm that generalizes scalar quantization to vector quantization and subsequently employs lossless compression with arithmetic coding. Like FedPAQ, UVeQFed also limits itself to a single static quantization level. Faster convergence, model size reduction and communication compression are orthogonal techniques, so they can be combined for further communication savings. For this paper, we limit the scope of empirical comparisons to quantization-based FL compression algorithms. For quantization-based compression for model training, prior works have demonstrated that DNNs can be successfully trained in low-precision (Banner et al., 2018; Gupta et al., 2015; Sun et al., 2019). There are also several adaptive quantization algorithms for training neural networks in a non-distributed setting. Shen et al. (2020) use different quantization levels for different parameters of a neural network. FracTrain (Fu et al., 2020) introduced multi-dimensional adaptive quantization by developing time-adaptive quantization and combining it with parameter-adaptive quantization. However, FracTrain uses the current loss to decide on the quantization level. FL generally can only compute local client losses that are too noisy to be practical for FracTrain. AdaQuantFL introduces time-adaptive quantization to FL, but requires the global loss (Jhunjhunwala et al., 2021). To compute the global loss, AdaQuantFL has to communicate with every client each round. We show in Section 4.2 that this quickly becomes impractical as the number of clients grows. DAdaQuant’s time-adaptive quantization overcomes this issue without compromising on the underlying FL communication. In addition, to the best of our knowledge, DAdaQuant is the first algorithm to use client-adaptive quantization. 3 THE DADAQUANT METHOD 3.1 FEDERATED LEARNING Federated Learning assumes a client-server topology with a set C = {ci|i ∈ {1, 2...N}} of N clients that are connected to a single server. Each client ck has a local dataset Dk from the local data distribution Dk. Given a model M with parameters p, a loss function fp(d ∈ Dk) and the local loss Fk(p) = 1 |Dk| ∑ d∈Dk fp(d), FL seeks to minimize the global loss G(p) = ∑N k=1 |Dk|∑ l |Dl| Fk(p). 
3.2 FEDERATED AVERAGING (FEDAVG) DAdaQuant makes only minimal assumptions about the FL algorithm. Crucially, DAdaquant can complement FedAvg (McMahan et al., 2017), which is representative of a large class of FL algorithms. FedAvg trains the model M over several rounds. In each round t, FedAvg sends the model parameters pt to a random subset St of K clients who then optimize their local objectives Fk(pt) and send the updated model parameters pkt+1 back to the server. The server accumulates all parameters into the new global model pt+1 = ∑ k∈St |Dk|∑ j |Dj | pkt+1 and starts the next round. Algorithm 1 lists FedAvg in detail. For our experiments, we use the FedProx (Li et al., 2018) adaptation of FedAvg. FedProx improves the convergence of FedAvg by adding the proximal term µ2 ‖p k t+1 − pt‖2 to the local objective Fk(pkt+1) in Line 20 of Algorithm 1. 3.3 QUANTIZATION WITH FEDERATED QSGD While DAdaQuant can be applied to any quantizer with a configurable quantization level, it is optimized for fixed-point quantization. We introduce Federated QSGD as a competitive stochastic fixed-point quantizer on top of which DAdaQuant is applied. In general, stochastic fixed-point quantization uses a quantizer Qq with quantization level q that splits R≥0 and R≤0 into q intervals each. Qq(p) then returns the sign of p and |p| stochastically rounded to one of the endpoints of its encompassing interval. Qq(p) quantizes the vector p elementwise. We design DAdaQuant’s quantization stage based on QSGD, an efficient fixed-point quantizer for state-of-the-art gradient compression. QSGD quantizes a vector p in three steps: 1. Quantize p as Qq( p ||p||2 ) into q bins in [0, 1], storing signs and ||p||2 separately. (lossy) 2. Encode the resulting integers with 0 run-length encoding. (lossless) 3. Encode the resulting integers with Elias ω coding. (lossless) QSGD has been designed specifically for quantizing gradients. This makes it not directly applicable to parameter compression. To overcome this limitation, we apply difference coding to uplink compression, first introduced to FL by FedPAQ. Each client ck applies Qq to the parameter updates pkt+1 − pt (cf. Line 21 of Algorithm 1) and sends them to the server. The server keeps track of the previous parameters pt and accumulates the quantized parameter updates into the new parameters as pt+1 = pt + ∑ k∈St |Dk|∑ l |Dl| Qq(p k t+1 − pt) (cf. Line 11 of Algorithm 1). We find that QSGD works well with parameter updates, which can be regarded as an accumulation of gradients over several training steps. We call this adaptation of QSGD Federated QSGD. 3.4 TIME-ADAPTIVE QUANTIZATION Time-adaptive quantization uses a different quantization level qt for each round t of FL training. DAdaQuant chooses qt to minimize communication costs without sacrificing accuracy. To this end, we find that lower quantization levels suffice to initially reduce the loss, while partly trained models require higher quantization levels to further improve (as illustrated in Figure 2). FracTrain is built on similar observations for non-distributed training. Therefore, we design DAdaQuant to mimic FracTrain in monotonically increasing qt as a function of t and using the training loss to inform increases in qt. When q is too low, FL converges prematurely. Like FracTrain, DAdaQuant monitors the FL loss and increases q when it converges. Unlike FracTrain, there is no single centralized loss function to evaluate and unlike AdaQuantFL, we do not assume availability of global training loss G(pt). 
Instead, we estimate G(pt) as the average local loss Ĝt = ∑ k∈St |Dk|∑ l |Dl| Fk(pt) where St is the set of clients sampled at round t. Since St typically consists of only a small fraction of all clients, Ĝt is a very noisy estimate of G(pt). This makes it unsuitable for convergence detection. Instead, DAdaQuant tracks a running average loss Ĝt = ψĜt−1 + (1− ψ)Ĝt. We initialize q1 = qmin for some qmin ∈ N. DAdaQuant determines training to converge whenever Ĝt ≥ Ĝt+1−φ for some φ ∈ N that specifies the number of rounds across which we compare Ĝ. On convergence, DAdaQuant sets qt = 2qt−1 and keeps the quantization level fixed for at least φ rounds to enable reductions in G to manifest in Ĝ. Eventually, the training loss converges regardless of the quantization level. To avoid unconstrained quantization increases on convergence, we limit the quantization level to qmax. The following equation summarizes DAdaQuant’s time-adaptive quantization: qt ←− qmin t = 0 2qt−1 t > 0 and Ĝt−1 ≥ Ĝt−φ and t > φ and 2qt−1 < qmax and qt−1 = qt−φ qt−1 else 3.5 CLIENT-ADAPTIVE QUANTIZATION FL algorithms typically accumulate each parameter pi over all clients into a weighted average p = ∑K i=1 wipi (see Algorithm 1). Quantized FL accumulates quantized parameters Qq(p) =∑K i=1 wiQq(pi) where q is the quantization level. We define the quantization error e q p = |p− Qq(p)|. We observe in our experiments that communication cost per client is roughly a linear function of Federated QSGD’s quantization level q. This means that the communication cost per round is proportional to Q = Kq. We call Q the communication budget and use it as a proxy measure of communication cost. Client-adaptive quantization dynamically adjusts the quantization level of each client. This means that even within a single round, each client ck can be assigned a different quantization level qk. The previous definitions then generalize to Q = ∑K k=1 qk and Qq1...qK (p) = ∑K i=1 wiQqi(pi) and eq1...qKp = |p− Qq1...qK (p)|. Prior convergence results for distributed training and FL rely on an upper bound b on Var(Qq1...qK (p)) that determines the convergence speed Li et al. (2017); Horváth et al. (2019); Reisizadeh et al. (2019). This makes V(Qq1...qK (p)) a natural measure to optimize for when choosing qk. We optimize for the closely related measure Ep1...pK [Var(Qq1...qK (p))] that replaces the upper bound with an expectation over parameters p1 . . . pK . Heuristically, we expect an this averaged measure to provide a better estimate of practically observed quantization errors than an upper bound. For a stochastic, unbiased fixed-point compressor like Federated QSGD, Ep1...−pK [Var(Qq1...qK (p))] equals Ep1...pK [Var(eqp)] and can be evaluated analytically. We devise an algorithm that chooses qk to minimize Q subject to Ep1...pK [Var(eq1...qKp )] = Ep1...pK [Var(eqp)] for a given q. Thus, our algorithm effectively minimizes communication costs while maintaining a quantization error similar to static quantization. Theorem 1 provides us with an analytical formula for quantization levels q1 . . . qK . Theorem 1. Given parameters p1 . . . pk ∼ U[−t, t] and quantization level q, minq1...qK ∑K i=1 qi subject to Ep1...pK [Var(eq1...qKp )] = Ep1...pK [Var(eqp)] is minimized by qi = √ a b × w 2/3 i where a = ∑K j=1 w 2/3 j and b = ∑K j=1 w2j q2 . DAdaQuant applies Theorem 1 to lower communication costs while maintaining the same loss as static quantization does with a fixed q. 
To ensure that quantization levels are natural numbers, DAdaQuant approximates the optimal real-valued solution as qi = max(1, round( √ a b × w 2/3 i )). Appendix B gives a detailed proof of Theorem 1. To the best of our knowledge, DAdaQuant is the first algorithm to use client-adaptive quantization. Algorithm 1: The FedAvg and DAdaQuant algorithms. The uncolored lines list FedAvg. Adding the colored lines creates DAdaQuant. — quantization, — client-adaptive quantization, — time-adaptive quantization. 1 Function RunServer() 2 Initialize wi = |Di|∑ j |Dj | for all i ∈ [1, . . . , N ]; 3 for t = 0, . . . , T − 1 do 4 Choose St ⊂ C with |St| = K, including each ck ∈ C with uniform probability; 5 qt ←− qmin t = 0 2qt−1 t > 0 and Ĝt−1 ≥ Ĝt−φ and t > φ and qt ≤ qmax and qt−1 = qt−φ qt−1 else ; 6 for ck ∈ St do in parallel 7 qkt ←− √∑K j=1 w 2/3 j / ∑K j=1 w2j q2 ; 8 Send(ck,pt,q k t ); 9 Receive(ck,p k t+1,Ĝ k t ); 10 end 11 pt+1 ←− ∑ k∈St wkp k t+1; 12 Ĝt ←− ∑ k∈St wkĜ k t ; 13 Ĝt ←− { Ĝ0 t = 0 ψĜt−1 + (1− ψ)Ĝt else ; 14 end 15 end 16 Function RunClient(ck) 17 while True do 18 Receive(Server,pt, qkt ); 19 Ĝkt ←− Fk(pt) ; 20 pkt+1 ←− Fk(pkt+1) trained with SGD for E epochs with learning rate η; 21 Send(Server, Qqkt (p k t+1) ,Ĝ k t ); 22 end 23 end 3.6 DOUBLY-ADAPTIVE QUANTIZATION (DADAQUANT) DAdaQuant combines the time-adaptive and client-adaptive quantization algorithms described in the previous sections. At each round t, time-adaptive quantization determines a preliminary quantization level qt. Client-adaptive quantization then finds the client quantization levels qkt , k ∈ {1, . . . ,K} that minimize ∑K i=1 qi subject to Ep1...pK [Var(eq1...qKp )] = Ep1...pK [Var(eqp)]. Algorithm 1 lists DAdaQuant in detail. Figure 3 gives an example of how our time-adaptive, client-adaptive and doubly-adaptive quantization algorithms set quantization levels. Reisizadeh et al. (2019) prove the convergence of FL with quantization for convex and non-convex cases as long as the quantizer Q is (1) unbiased and (2) has a bounded variance. These convergence results extend to DAdaQuant when combined with any quantizer that satisfies (1) and (2) for DAdaQuant’s minimum quantization level q = 1. Crucially, this includes Federated QSGD. We highlight DAdaQuant’s low overhead and general applicability. The computational overhead is dominated by an additional evaluation epoch per round per client to compute Ĝt, which is negligible when training for many epochs per round. In our experiments, we observe computational overheads of ≈ 1% (see Appendix A.2). DAdaQuant can compliment any FL algorithm that trains models over several rounds and accumulates a weighted average of client parameters. Most FL algorithms, including FedAvg, follow this design. 4 EXPERIMENTS 4.1 EXPERIMENTAL DETAILS Evaluation We use DAdaQuant with Federated QSGD to train different models with FedProx on different datasets for a fixed number of rounds. We monitor the test loss and accuracy at fixed intervals and measure uplink communication at every round across all devices. Models & datasets We select a broad and diverse set of five models and datasets to demonstrate the general applicability of DAdaQuant. To this end, we use DAdaQuant to train a linear model, CNNs and LSTMs of varying complexity on a federated synthetic dataset (Synthetic), as well as two federated image datasets (FEMNIST and CelebA) and two federated natural language datasets (Sent140 and Shakespeare) from the LEAF (Caldas et al., 2018) project for standardized FL research. 
We refer to Appendix A.1 for more information on the models, datasets, training objectives and implementation. System heterogeneity In practice, FL has to cope with clients that have different compute capabilities. We follow Li et al. (2018) and simulate this system heterogeneity by randomly reducing the number of epochs to E′ for a random subset S′t ⊂ St of clients at each round t, where E′ is sampled from [1, . . . , E] and |S′t| = 0.9K. Baselines We compare DAdaQuant against competing quantization-based algorithms for FL parameter compression, namely Federated QSGD, FedPAQ (Reisizadeh et al., 2019), GZip with fixedpoint quantization (FxPQ + GZip), UVeQFed (Shlezinger et al., 2020) and FP8. Federated QSGD (see section 3.3) is our most important baseline because it outperforms the other algorithms. FedPAQ only applies fixed-point quantization, which is equivalent to Federated QSGD without lossless compression. Similarly, FxPQ + GZip is equivalent to Federated QSGD with Gzip for its lossless compression stages. UVeQFed generalizes scalar quantization to vector quantization, followed by arithmetic coding. We apply UVeQFed with the optimal hyperparameters reported by its authors. FP8 (Wang et al., 2018a) is a floating-point quantizer that uses an 8-bit floating-point format designed for storing neural network gradients. We also evaluate all experiments without compression to establish an accuracy benchmark. Hyperparameters With the exception of CelebA, all our datasets and models are also used by Li et al.. We therefore adopt most of the hyperparameters from Li et al. and use LEAF’s hyperparameters for CelebA Caldas et al. (2018). For all experiments, we sample 10 clients each round. We train Synthetic, FEMNIST and CelebA for 500 rounds each. We train Sent140 for 1000 rounds due to slow convergence and Shakespeare for 50 rounds due to rapid convergence. We use batch size 10, learning rates 0.01, 0.003, 0.3, 0.8, 0.1 and µs (FedProx’s proximal term coefficient) 1, 1, 1, 0.001, 0 for Synthetic, FEMNIST, Sent140, Shakespeare, CelebA respectively. We randomly split the local datasets into 80% training set and 20% test set. To select the quantization level q for static quantization with Federated QSGD, FedPAQ and FxPQ + GZip, we run a gridsearch over q = 1, 2, 4, 8, . . . and choose for each dataset the lowest q for which Federated QSGD exceeds uncompressed training in accuracy. We set UVeQFed’s “coding rate” hyperparameter R = 4, which is the lowest value for which UVeQFed achieves negligible accuracy differences compared to uncompressed training. We set the remaining hyperparameters of UVeQFed to the optimal values reported by its authors. Appendix A.4 shows further experiments that compare against UVeQFed with R chosen to maximize its compression factor. For DAdaQuant’s time-adaptive quantization, we set ψ to 0.9, φ to 1/10th of the number of rounds and qmax to the quantization level q for each experiment. For Synthetic and FEMNIST, we set qmin to 1. We find that Sent140, Shakespeare and CelebA require a high quantization level to achieve top accuracies and/or converge in few rounds. This prevents time-adaptive quantization from increasing the quantization level quickly enough, resulting in prolonged low-precision training that hurts model performance. To counter this effect, we set qmin to qmax/2. This effectively results in binary timeadaptive quantization with an initial low-precision phase with q = qmax/2, followed by a highprecision phase with q = qmax. 
4.2 RESULTS We repeat the main experiments three times and report average results and their standard deviation (where applicable). Table 1 shows the highest accuracy and total communication for each experiment. Figure 4 plots the maximum accuracy achieved for any given amount of communication. Baselines Table 1 shows that the accuracy of most experiments lies within the margin of error of the uncompressed experiments. This reiterates the viability of quantization-based compression algorithms for communication reduction in FL. For all experiments, Federated QSGD achieves a significantly higher compression factor than the other baselines. The authors of FedPAQ and UVeQFed also compare their methods against QSGD and report them as superior. However, FedPAQ is compared against “unfederated” QSGD that communicates gradients after each local training step and UVeQFed is compared against QSGD without its lossless compression stages. Time-adaptive quantization The purely time-adaptive version of DAdaQuant, DAdaQuanttime, universally outperforms Federated QSGD and the other baselines in Table 1, achieving comparable accuracies while lowering communication costs. DAdaQuanttime performs particularly well on Syn- thetic and FEMNIST, where it starts from the lowest possible quantization level q = 1. However, binary time-adaptive quantization still measurably improves over QSGD for Sent140, Shakespeare and Celeba. Figure 8 in Appendix A.5 provides empirical evidence that AdaQuantFL’s communication scales linearly with the number of clients. As a result, AdaQuantFL is prohibitively expensive for datasets with thousands of clients such as Celeba and Sent140. DAdaQuant does not face this problem because its communication is unaffected by the number of clients. Client-adaptive quantization The purely time-adaptive version of DAdaQuant, DAdaQuantclients, also universally outperforms Federated QSGD and the other baselines in Table 1, achieving similar accuracies while lowering communication costs. Unsurprisingly, the performance of DAdaQuantclients is correlated with the coefficient of variation cv = σµ of the numbers of samples in the local datasets with mean µ and standard deviation σ: Synthetic (cv = 3.3) and Shakespeare (cv = 1.7) achieve significantly higher compression factors than Sent140 (cv = 0.3), FEMNIST (cv = 0.4) and Celeba (cv = 0.3). DAdaQuant DAdaQuant outperforms DAdaQuanttime and DAdaQuantclients in communication while achieving similar accuracies. The compression factors of DAdaQuant are roughly multiplicative in those of DAdaQuantclients and DAdaQuanttime. This demonstrates that we can effectively combine time- and client-adaptive quantization for maximal communication savings. Figure 4 shows that DAdaQuant achieves a higher accuracy than the strongest baseline, Federated QSGD, for any fixed amount of client→server communication. 5 CONCLUSION We introduced DAdaQuant as a computationally efficient and robust algorithm to boost the performance of quantization-based FL compression algorithms. We showed intuitively and mathematically how DAdaQuant’s dynamic adjustment of the quantization level across time and clients minimize client→server communication while maintaining convergence speed. Our experiments establish DAdaQuant as nearly universally superior over static quantizers, achieving state-of-the-art compression factors when applied to Federated QSGD. The communication savings of DAdaQuant effectively lower FL bandwidth usage, energy consumption and training time. 
Future work may apply and adapt DAdaQuant to new quantizers, further pushing the state of the art in FL uplink compression. 6 REPRODUCIBILITY STATEMENT Our submission includes a repository with the source code for DAdaQuant and for the experiments presented in this paper. All the datasets used in our experiments are publicly available. Any postprocessing steps of the datasets are described in Appendix A.1. To facilitate the reproduction of our results, we have bundled all our source code, dependencies and datasets into a Docker image. The repository submitted with this paper contains instructions on how to use this Docker image and reproduce all plots and tables in this paper. 7 ETHICS STATEMENT FL trains models on private client datasets in a privacy-preserving manner. However, FL does not completely eliminate privacy concerns, because the transmitted model updates and the learned model parameters may expose the private client data from which they are derived. Our work does not directly target privacy concerns in FL. With that said, it is worth noting that DAdaQuant does not expose any client data that is not already exposed through standard FL training algorithms. In fact, DAdaQuant reduces the amount of exposed data through lossy compression of the model updates. We therefore believe that DAdaQuant is free of ethical complications. A ADDITIONAL SIMULATION DETAILS AND EXPERIMENTS A.1 ADDITIONAL SIMULATION DETAILS Here, we give detailed information on the models, datasets, training objectives and implementation that we use for our experiments. We set the five following FL tasks: • Multinomial logistic regression (MLR) on a synthetic dataset called Synthetic that contains vectors in R60 with a label of one out of 10 classes. We use the synthetic dataset generator in Li et al. (2018) to generate synthetic datasets. The generator samples Synthetic’s local datasets and labels from MLR models with randomly initialized parameters. For this purpose, parameters α and β control different kinds of data heterogeneity. α controls the variation in the local models from which the local dataset labels are generated. β controls the variation in the local dataset samples. We set α = 1 and β = 1 to simulate an FL setting with both kinds of data heterogeneity. This makes Synthetic a useful testbed for FL. • Character classification into 62 classes of handwritten characters from the FEMNIST dataset using a CNN. FEMNIST groups samples from the same author into the same local dataset. • Smile detection in facial images from the CelebA dataset using a CNN. CelebA groups samples of the same person into the same local dataset. We note that LEAF’s CNN for CelebA uses BatchNorm layers. We replace them with LayerNorm layers because they are more amenable to quantization. This change does not affect the final accuracy. • Binary sentiment analysis of tweets from the Sent140 dataset using an LSTM. Sent140 groups tweets from the same user into the same local dataset. The majority of local datasets in the raw Sent140 dataset only have a single sample. This impedes FL convergence. Therefore, we filter Sent140 to clients with at least 10 samples (i.e. one complete batch). Caldas et al. (2018); Li et al. (2018) similarly filter Sent140 for their FL experiments. • Next character prediction on text snippets from the Shakespeare dataset of Shakespeare’s collected plays using an LSTM. Shakespeare groups lines from the same character into the same local dataset. Table 2 provides statistics of our models and datasets. 
For our experiments in Figure 8, AdaQuantFL requires a hyperparameter s that determines the initial quantization level. We set s to 2, the optimal value reported by the authors of AdaQuantFL. The remaining hyperparameters are identical to those used for the Synthetic dataset experiments in Table 1. We implement the models with PyTorch (Paszke et al., 2019) and use Flower (Beutel et al., 2020) to simulate the FL server and clients. A.2 COMPUTATIONAL OVERHEAD OF DADAQUANT A.3 COMPLETE COMMUNICATION-ACCURACY TRADE-OFF CURVES Synthetic FEMNIST Sent140 Synthetic FEMNIST Sent140 Synthetic FEMNIST Sent140 A.4 ADDITIONAL UVEQFED EXPERIMENTS To demonstrate that the choice of UVeQFed’s “coding rate” hyperparameter R does not affect our findings on the superior compression factors of DAdaQuant, we re-evaluate UVeQFed with R = 1, which maximizes UVeQFed’s compression factor. Our results in Table 4 show that with the exception of Shakespeare, DAdaQuant still achieves considerably higher compression factors than UVeQFed. A.5 ADDITIONAL ADAQUANTFL EXPERIMENTS In principle, AdaQuantFL could be adapted to work with partial client participation by computing an estimate of the global loss from the sampled subset of clients. While a full evaluation of this approach is out of the scope of this paper, we conduct a brief feasibility study on FEMNIST. Concretely, we find that a single run of AdaQuantFL with partial client participation on FEMNIST achieved an accuracy of 78.7%, with a total client→server communication of 50.5 MB. In contrast, the same run with DAdaQuanttime similarly achieved an accuracy of 78.4%, while lowering the total client→server communication to 27.5 MB. B PROOFS Lemma 1. Take arbitrary quantization level qi ∈ N and parameter pi ∈ [−t, t]. Then, Qqi(pi) is an unbiased estimator of pi. Proof. Let si = tqi , bi = rem (pi, si) and ui = si − bi. Then, we have E [ Qqi(pi)− pi ] = ui si (pi − bi) + bi si (pi + ui) see Figure 9 = pi Lemma 2. For arbitrary t > 0 and parameter pi ∈ [−t, t], let si = tqi , bi = rem (pi, si) and ui = si − bi. Then, Var ( Qqi(pi) ) = uibi. Proof. Var ( Qqi(pi) ) = E [( Qqi(pi)− E [ Qqi(pi) ])2] = E [( Qqi(pi)− pi )2] see Lemma 1 = bi si u2i + ui si b2i see Figure 9 Lemma 3. Assume that parameters p1 . . . pK are sampled from U[−t, t] for arbitrary t > 0. Then, Ep1...pK [Var(eq1...qKp )] = t 2 6 ∑K i=1 w2i q2i . Proof. Ep1...pK [Var(ep)] = 1 2t ∫ t −t 1 2t ∫ t −t . . . 1 2t ∫ t −t Var ( K∑ i=1 wiQqi(pi)− p ) dp1dp2 . . . dpK = 1 t ∫ t 0 1 t ∫ t 0 . . . 1 t ∫ t 0 Var ( K∑ i=1 wiQqi(pi)− p ) dp1dp2 . . . dpK symmetry of Qqi(pi) w.r.t. negation = 1 tn ∫ t 0 ∫ t 0 . . . ∫ t 0 K∑ i=1 w2iVar ( Qqi(pi) ) dp1dp2 . . . dpK mutual independence of Qqi(pi) ∀i = 1 tn K∑ i=1 ∫ t 0 ∫ t 0 . . . ∫ t 0 w2iVar ( Qqi(pi) ) dp1dp2 . . . dpK exchangeability of finite sums and integrals = 1 tn K∑ i=1 tn−1 ∫ t 0 w2iVar ( Qqi(pi) ) dpi = 1 t K∑ i=1 w2i ∫ t 0 Var ( Qqi(pi) ) dpi = 1 t K∑ i=1 w2i ∫ t 0 uibi dpi Lemma 2 = 1 t K∑ i=1 w2i qi ∫ si 0 uibi dpi si-periodicity of ui and bi = 1 t K∑ i=1 w2i qi ∫ si 0 (si − pi) pi dpi = 1 6t K∑ i=1 w2i qis 3 i = t2 6 K∑ i=1 w2i q2i Lemma 4. Let Q be a fixed-point quantizer. Assume that parameters p1 . . . pK are sampled from U[−t, t] for arbitrary t > 0. Then, minq1...qK Ep1...pK [Var(eq1...qKp )] subject to Q = ∑K i=1 qi is minimized by qi = Q w 2/3 i∑K k=1 w 2/3 k . Proof. 
Define
$$f(q) = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)], \qquad g(q) = \sum_{i=1}^{K} q_i, \qquad L(q) = f(q) - \lambda g(q) \quad \text{(Lagrangian)}.$$
Any (local) minimum $\hat{q}$ satisfies $\nabla L(\hat{q}) = 0$
$$\iff \nabla \frac{t^2}{6}\sum_{i=1}^{K}\frac{w_i^2}{q_i^2} - \lambda \nabla \sum_{i=1}^{K} q_i = 0 \ \wedge\ \sum_{i=1}^{K} q_i = Q \quad \text{(Lemma 3)}$$
$$\iff \forall i = 1 \ldots K.\ \frac{t^2}{-3}\frac{w_i^2}{q_i^3} = \lambda \ \wedge\ \sum_{i=1}^{K} q_i = Q$$
$$\iff \forall i = 1 \ldots K.\ q_i = \sqrt[3]{\frac{t^2}{-3\lambda}}\, w_i^{2/3} \ \wedge\ \sum_{i=1}^{K} q_i = Q$$
$$\implies \forall i = 1 \ldots K.\ q_i = Q\frac{w_i^{2/3}}{\sum_{j=1}^{K} w_j^{2/3}}$$

B.1 PROOF OF THEOREM 1

Proof. Using Lemma 4, it is straightforward to show that for any $V$, $\min_{q_1 \ldots q_K} \sum_{i=1}^{K} q_i$ subject to $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = V$ is minimized by $q_i = C w_i^{2/3}$ for the unique $C \in \mathbb{R}_{>0}$ that satisfies $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = V$. Then, taking $V = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q}_p)]$ and $C = \sqrt{a/b}$ (see Theorem 1), we do indeed get
$$\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = \frac{t^2}{6}\sum_{i=1}^{K}\frac{w_i^2}{(C w_i^{2/3})^2} \quad \text{(Lemma 3)}$$
$$= \frac{1}{C^2}\,\frac{t^2}{6}\sum_{i=1}^{K} w_i^{2/3} = \frac{\sum_{j=1}^{K} w_j^2 / q^2}{\sum_{j=1}^{K} w_j^{2/3}} \cdot \frac{t^2}{6}\sum_{i=1}^{K} w_i^{2/3} = \frac{t^2}{6}\sum_{j=1}^{K}\frac{w_j^2}{q^2} = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q}_p)] \quad \text{(Lemma 3)}$$
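As a sanity check on these proofs, the following sketch verifies Lemmas 1 and 2 by Monte Carlo and confirms numerically that the Theorem 1 levels preserve the Lemma 3 variance while shrinking the communication budget. The quantizer is our own minimal reading of the stochastic fixed-point definition in Section 3.3, not the authors' implementation, and the client weights are made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(p, q, t):
    # Minimal stochastic fixed-point quantizer: round |p| down to a multiple
    # of s = t/q, or up with probability b/s, and keep the sign.
    s = t / q
    low = np.floor(abs(p) / s) * s
    b = abs(p) - low
    return np.sign(p) * (low + s * (rng.random() < b / s))

# Lemmas 1 and 2: the empirical mean matches p and the variance matches u*b.
t, q, p = 1.0, 4, 0.35
s = t / q
b = p % s
u = s - b
samples = np.array([quantize(p, q, t) for _ in range(200_000)])
print(samples.mean(), "vs", p)      # Lemma 1 (unbiasedness)
print(samples.var(), "vs", u * b)   # Lemma 2 (variance u_i * b_i)

# Theorem 1: the adaptive levels preserve the Lemma 3 variance but
# shrink the communication budget relative to a static level q.
w = np.array([0.5, 0.2, 0.15, 0.1, 0.05])       # hypothetical client weights

def expected_var(levels):
    return t**2 / 6 * np.sum(w**2 / levels**2)  # Lemma 3 closed form

a = np.sum(w ** (2 / 3))
b_sum = np.sum(w**2 / q**2)
q_adaptive = np.sqrt(a / b_sum) * w ** (2 / 3)
print(expected_var(np.full_like(w, float(q))), "==", expected_var(q_adaptive))
print("budget:", len(w) * q, "->", q_adaptive.sum())
```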
1. What is the main contribution of the paper in federated learning? 2. What are the strengths of the proposed approach, particularly in terms of communication efficiency? 3. What are the weaknesses of the paper regarding computational overhead and comparison with other methods? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper studies the federated learning problem with a focus on communication efficiency. The major contribution of this paper can be summarized as: (1) Propose a time-adaptive quantization algorithm that adjusts the quantization level as training progresses (2) Propose a client-adaptive quantization algorithm that assigns quantization levels to individual clients Review Strength: (1) A novel double quantization design (2) The communication overhead saving is promising Weakness: (1) Computational overhead in quantization This paper proposes a double quantization strategy for efficient FL. While the saving in the communication overhead is promising, there is little discussion of the extra computational overhead introduced by the algorithm. For instance: i) what is the complexity of performing the time-adaptive and client-adaptive quantization algorithms, ii) how is the overall training time affected if we use the proposed algorithm, iii) what is the ratio between the communication time saving and the total training time (2) Comparison with other communication-efficient algorithms This paper compares with quantization baselines. How does the double quantization algorithm perform when we compare it with sketching-based FL methods?
ICLR
Title
DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning

Abstract
Federated Learning (FL) is a powerful technique for training a model on a server with data from several clients in a privacy-preserving manner. In FL, a server sends the model to the clients, who then train the model locally and send it back to the server. The server aggregates the updated models and repeats the process for several rounds. FL incurs significant communication costs, in particular when transmitting the updated local models from the clients back to the server. Recently proposed algorithms quantize the model parameters to efficiently compress FL communication. These algorithms typically have a quantization level that controls the compression factor. We find that dynamic adaptations of the quantization level can boost compression without sacrificing model quality. First, we introduce a time-adaptive quantization algorithm that increases the quantization level as training progresses. Second, we introduce a client-adaptive quantization algorithm that assigns each individual client the optimal quantization level at every round. Finally, we combine both algorithms into DAdaQuant, the doubly-adaptive quantization algorithm. Our experiments show that DAdaQuant consistently improves client→server compression, outperforming the strongest non-adaptive baselines by up to 2.8×.

1 INTRODUCTION

Edge devices such as smartphones, remote sensors and smart home appliances generate massive amounts of data (Wang et al., 2018b; Cao et al., 2017; Shi & Dustdar, 2016). In recent years, Federated Learning (FL) has emerged as a technique to train models on this data while preserving privacy (McMahan et al., 2017; Li et al., 2018). In FL, we have a single server that is connected to many clients. Each client stores a local dataset that it does not want to share with the server because of privacy concerns or law enforcement (Voigt & Von dem Bussche, 2017). The server wants to train a model on all local datasets. To this end, it initializes the model and sends it to a random subset of clients. Each client trains the model on its local dataset and sends the trained model back to the server. The server accumulates all trained models into an updated model for the next iteration and repeats the process for several rounds until some termination criterion is met. This procedure enables the server to train a model without accessing any local datasets. Today's neural network models often have millions or even billions (Brown et al., 2020) of parameters, which makes high communication costs a concern in FL. In fact, Qiu et al. (2020) suggest that communication between clients and server may account for over 70% of energy consumption in FL. Reducing communication in FL is an attractive area of research because it lowers bandwidth requirements, energy consumption and training time. Communication in FL occurs in two phases: sending parameters from the server to clients (downlink) and sending updated parameters from clients to the server (uplink). Uplink bandwidth usually imposes a tighter bottleneck than downlink bandwidth. There are several reasons for this. For one, the average global mobile upload bandwidth is currently less than one fourth of the download bandwidth (Speedtest). For another, FL downlink communication sends the same parameters to each client.
Broadcasting parameters is usually more efficient than the accumulation of parameters from different clients that is required for uplink communication (Amiri et al., 2020; Reisizadeh et al., 2019). For these reasons, we seek to compress uplink communication. A large class of compression algorithms for FL apply some lossy quantizer Q, optionally followed by a lossless compression stage. Q usually provides a "quantization level" hyperparameter q to control the coarseness of quantization (e.g. the number of bins for fixed-point quantization). When q is kept constant during training, we speak of static quantization. When q changes, we speak of adaptive quantization. Adaptive quantization can exploit asymmetries in the FL framework to minimize communication. One such asymmetry lies in FL's training time, where Jhunjhunwala et al. (2021) observed that early training rounds can use a lower q without affecting convergence. Figure 2 illustrates how time-adaptive quantization leverages this phenomenon to minimize communication. Another asymmetry lies in FL's client space, because most FL algorithms weight client contributions to the global model proportional to their local dataset sizes. Figure 1 illustrates how client-adaptive quantization can minimize the quantization error. Intuitively, FL clients with greater weighting should have a greater communication budget, and our proposed client-adaptive quantization achieves this in a principled way. To this end, we introduce the expected variance of an accumulation of quantized parameters, $\mathbb{E}[\mathrm{Var}(\sum Q(p))]$, as a measure of the quantization error. Our client-adaptive quantization algorithm then assigns clients minimal quantization levels, subject to a fixed $\mathbb{E}[\mathrm{Var}(\sum Q(p))]$. This lowers the amount of data communicated from clients to the server, without increasing the quantization error. DAdaQuant (Doubly Adaptive Quantization) combines time- and client-adaptive quantization with an adaptation of the QSGD fixed-point quantization algorithm to achieve state-of-the-art FL uplink compression. In this paper, we make the following contributions:

• We introduce the concept of client-adaptive quantization and develop algorithms for time- and client-adaptive quantization that are computationally efficient, empirically superior to existing algorithms, and compatible with arbitrary FL quantizers. Our client-adaptive quantization is provably optimal for stochastic fixed-point quantizers.
• We create Federated QSGD as an adaptation of the stochastic fixed-point quantizer QSGD that works with FL. Federated QSGD outperforms all other quantizers, establishing a strong baseline for FL compression with static quantization.
• We combine time- and client-adaptive quantization into DAdaQuant. We demonstrate DAdaQuant's state-of-the-art compression by empirically comparing it against several competitive FL compression algorithms.

2 RELATED WORK

FL research has explored several approaches to reduce communication. We identify three general directions. First, there is a growing interest in investigating FL algorithms that can converge in fewer rounds. FedAvg (McMahan et al., 2017) achieves this with prolonged local training, while FOLB (Nguyen et al., 2020) speeds up convergence through a more principled client sampling. Since communication is proportional to the number of training rounds, these algorithms effectively reduce communication.
Secondly, communication can be reduced by reducing the model size, because the amount of training communication is proportional to the model size. PruneFL (Jiang et al., 2019) progressively prunes the model over the course of training, while AFD (Bouacida et al., 2021) only trains submodels on clients. Thirdly, it is possible to directly compress FL training communication. FL compression algorithms typically apply techniques like top-k sparsification (Malekijoo et al., 2021; Rothchild et al., 2020) or quantization (Reisizadeh et al., 2019; Shlezinger et al., 2020) to parameter updates, optionally followed by lossless compression. Our work applies to quantization-based compression algorithms. It is partially based on QSGD (Alistarh et al., 2017), which combines lossy fixed-point quantization with a lossless compression algorithm to compress gradients communicated in distributed training. DAdaQuant adapts QSGD into Federated QSGD, which works with Federated Learning. DAdaQuant also draws inspiration from FedPAQ (Reisizadeh et al., 2019), the first FL framework to use lossy compression based on model parameter update quantization. However, FedPAQ does not explore the advantages of additional lossless compression or adaptive quantization. UVeQFed (Shlezinger et al., 2020) is an FL compression algorithm that generalizes scalar quantization to vector quantization and subsequently employs lossless compression with arithmetic coding. Like FedPAQ, UVeQFed also limits itself to a single static quantization level. Faster convergence, model size reduction and communication compression are orthogonal techniques, so they can be combined for further communication savings. For this paper, we limit the scope of empirical comparisons to quantization-based FL compression algorithms. For quantization-based compression for model training, prior works have demonstrated that DNNs can be successfully trained in low precision (Banner et al., 2018; Gupta et al., 2015; Sun et al., 2019). There are also several adaptive quantization algorithms for training neural networks in a non-distributed setting. Shen et al. (2020) use different quantization levels for different parameters of a neural network. FracTrain (Fu et al., 2020) introduced multi-dimensional adaptive quantization by developing time-adaptive quantization and combining it with parameter-adaptive quantization. However, FracTrain uses the current loss to decide on the quantization level. FL generally can only compute local client losses that are too noisy to be practical for FracTrain. AdaQuantFL introduces time-adaptive quantization to FL, but requires the global loss (Jhunjhunwala et al., 2021). To compute the global loss, AdaQuantFL has to communicate with every client each round. We show in Section 4.2 that this quickly becomes impractical as the number of clients grows. DAdaQuant's time-adaptive quantization overcomes this issue without compromising on the underlying FL communication. In addition, to the best of our knowledge, DAdaQuant is the first algorithm to use client-adaptive quantization.

3 THE DADAQUANT METHOD

3.1 FEDERATED LEARNING

Federated Learning assumes a client-server topology with a set $C = \{c_i \mid i \in \{1, 2, \ldots, N\}\}$ of $N$ clients that are connected to a single server. Each client $c_k$ has a local dataset $D_k$ drawn from a local data distribution $\mathcal{D}_k$. Given a model $M$ with parameters $p$, a loss function $f_p(d \in D_k)$ and the local loss $F_k(p) = \frac{1}{|D_k|}\sum_{d \in D_k} f_p(d)$, FL seeks to minimize the global loss $G(p) = \sum_{k=1}^{N} \frac{|D_k|}{\sum_l |D_l|} F_k(p)$.
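As a tiny illustration of this objective, the snippet below computes the dataset-size weighting $|D_k| / \sum_l |D_l|$ for a few hypothetical clients; the sizes and losses are made-up numbers, not values from the paper.

```python
# Minimal illustration of the weighting in the global objective
# G(p) = sum_k (|D_k| / sum_l |D_l|) * F_k(p).
local_sizes = [120, 30, 50]          # |D_k| for three hypothetical clients
local_losses = [0.9, 0.4, 0.6]       # F_k(p) at the current parameters p

total = sum(local_sizes)
global_loss = sum(n / total * f for n, f in zip(local_sizes, local_losses))
print(global_loss)                   # dataset-size-weighted average loss
```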
3.2 FEDERATED AVERAGING (FEDAVG)

DAdaQuant makes only minimal assumptions about the FL algorithm. Crucially, DAdaQuant can complement FedAvg (McMahan et al., 2017), which is representative of a large class of FL algorithms. FedAvg trains the model $M$ over several rounds. In each round $t$, FedAvg sends the model parameters $p_t$ to a random subset $S_t$ of $K$ clients, who then optimize their local objectives $F_k(p_t)$ and send the updated model parameters $p^k_{t+1}$ back to the server. The server accumulates all parameters into the new global model $p_{t+1} = \sum_{k \in S_t} \frac{|D_k|}{\sum_j |D_j|} p^k_{t+1}$ and starts the next round. Algorithm 1 lists FedAvg in detail. For our experiments, we use the FedProx (Li et al., 2018) adaptation of FedAvg. FedProx improves the convergence of FedAvg by adding the proximal term $\frac{\mu}{2}\|p^k_{t+1} - p_t\|^2$ to the local objective $F_k(p^k_{t+1})$ in Line 20 of Algorithm 1.

3.3 QUANTIZATION WITH FEDERATED QSGD

While DAdaQuant can be applied to any quantizer with a configurable quantization level, it is optimized for fixed-point quantization. We introduce Federated QSGD as a competitive stochastic fixed-point quantizer on top of which DAdaQuant is applied. In general, stochastic fixed-point quantization uses a quantizer $Q_q$ with quantization level $q$ that splits $\mathbb{R}_{\ge 0}$ and $\mathbb{R}_{\le 0}$ into $q$ intervals each. $Q_q(p)$ then returns the sign of $p$ and $|p|$ stochastically rounded to one of the endpoints of its encompassing interval. $Q_q(p)$ quantizes the vector $p$ elementwise. We design DAdaQuant's quantization stage based on QSGD, an efficient fixed-point quantizer for state-of-the-art gradient compression. QSGD quantizes a vector $p$ in three steps:

1. Quantize $p$ as $Q_q(\frac{p}{\|p\|_2})$ into $q$ bins in $[0, 1]$, storing signs and $\|p\|_2$ separately. (lossy)
2. Encode the resulting integers with zero run-length encoding. (lossless)
3. Encode the resulting integers with Elias ω coding. (lossless)

QSGD has been designed specifically for quantizing gradients. This makes it not directly applicable to parameter compression. To overcome this limitation, we apply difference coding to uplink compression, first introduced to FL by FedPAQ. Each client $c_k$ applies $Q_q$ to the parameter updates $p^k_{t+1} - p_t$ (cf. Line 21 of Algorithm 1) and sends them to the server. The server keeps track of the previous parameters $p_t$ and accumulates the quantized parameter updates into the new parameters as $p_{t+1} = p_t + \sum_{k \in S_t} \frac{|D_k|}{\sum_l |D_l|} Q_q(p^k_{t+1} - p_t)$ (cf. Line 11 of Algorithm 1). We find that QSGD works well with parameter updates, which can be regarded as an accumulation of gradients over several training steps. We call this adaptation of QSGD Federated QSGD.

3.4 TIME-ADAPTIVE QUANTIZATION

Time-adaptive quantization uses a different quantization level $q_t$ for each round $t$ of FL training. DAdaQuant chooses $q_t$ to minimize communication costs without sacrificing accuracy. To this end, we find that lower quantization levels suffice to initially reduce the loss, while partly trained models require higher quantization levels to further improve (as illustrated in Figure 2). FracTrain is built on similar observations for non-distributed training. Therefore, we design DAdaQuant to mimic FracTrain in monotonically increasing $q_t$ as a function of $t$ and using the training loss to inform increases in $q_t$. When $q$ is too low, FL converges prematurely. Like FracTrain, DAdaQuant monitors the FL loss and increases $q$ when it converges. Unlike FracTrain, there is no single centralized loss function to evaluate, and unlike AdaQuantFL, we do not assume availability of the global training loss $G(p_t)$.
Instead, we estimate $G(p_t)$ as the average local loss $\hat{G}_t = \sum_{k \in S_t} \frac{|D_k|}{\sum_l |D_l|} F_k(p_t)$, where $S_t$ is the set of clients sampled at round $t$. Since $S_t$ typically consists of only a small fraction of all clients, $\hat{G}_t$ is a very noisy estimate of $G(p_t)$. This makes it unsuitable for convergence detection. Instead, DAdaQuant tracks a running average loss $\tilde{G}_t = \psi\tilde{G}_{t-1} + (1-\psi)\hat{G}_t$. We initialize $q_1 = q_{\min}$ for some $q_{\min} \in \mathbb{N}$. DAdaQuant determines training to converge whenever $\tilde{G}_t \ge \tilde{G}_{t+1-\phi}$ for some $\phi \in \mathbb{N}$ that specifies the number of rounds across which we compare $\tilde{G}$. On convergence, DAdaQuant sets $q_t = 2q_{t-1}$ and keeps the quantization level fixed for at least $\phi$ rounds to enable reductions in $G$ to manifest in $\tilde{G}$. Eventually, the training loss converges regardless of the quantization level. To avoid unconstrained quantization increases on convergence, we limit the quantization level to $q_{\max}$. The following equation summarizes DAdaQuant's time-adaptive quantization:

$$q_t \leftarrow \begin{cases} q_{\min} & t = 0 \\ 2q_{t-1} & t > 0 \text{ and } \tilde{G}_{t-1} \ge \tilde{G}_{t-\phi} \text{ and } t > \phi \text{ and } 2q_{t-1} < q_{\max} \text{ and } q_{t-1} = q_{t-\phi} \\ q_{t-1} & \text{else} \end{cases}$$

3.5 CLIENT-ADAPTIVE QUANTIZATION

FL algorithms typically accumulate each parameter $p_i$ over all clients into a weighted average $p = \sum_{i=1}^{K} w_i p_i$ (see Algorithm 1). Quantized FL accumulates quantized parameters $Q_q(p) = \sum_{i=1}^{K} w_i Q_q(p_i)$, where $q$ is the quantization level. We define the quantization error $e^q_p = |p - Q_q(p)|$. We observe in our experiments that the communication cost per client is roughly a linear function of Federated QSGD's quantization level $q$. This means that the communication cost per round is proportional to $Q = Kq$. We call $Q$ the communication budget and use it as a proxy measure of communication cost. Client-adaptive quantization dynamically adjusts the quantization level of each client. This means that even within a single round, each client $c_k$ can be assigned a different quantization level $q_k$. The previous definitions then generalize to $Q = \sum_{k=1}^{K} q_k$, $Q_{q_1 \ldots q_K}(p) = \sum_{i=1}^{K} w_i Q_{q_i}(p_i)$ and $e^{q_1 \ldots q_K}_p = |p - Q_{q_1 \ldots q_K}(p)|$. Prior convergence results for distributed training and FL rely on an upper bound $b$ on $\mathrm{Var}(Q_{q_1 \ldots q_K}(p))$ that determines the convergence speed (Li et al., 2017; Horváth et al., 2019; Reisizadeh et al., 2019). This makes $\mathrm{Var}(Q_{q_1 \ldots q_K}(p))$ a natural measure to optimize for when choosing $q_k$. We optimize for the closely related measure $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(Q_{q_1 \ldots q_K}(p))]$ that replaces the upper bound with an expectation over the parameters $p_1 \ldots p_K$. Heuristically, we expect this averaged measure to provide a better estimate of practically observed quantization errors than an upper bound. For a stochastic, unbiased fixed-point compressor like Federated QSGD, $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(Q_{q_1 \ldots q_K}(p))]$ equals $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)]$ and can be evaluated analytically. We devise an algorithm that chooses $q_k$ to minimize $Q$ subject to $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^q_p)]$ for a given $q$. Thus, our algorithm effectively minimizes communication costs while maintaining a quantization error similar to static quantization. Theorem 1 provides us with an analytical formula for the quantization levels $q_1 \ldots q_K$.

Theorem 1. Given parameters $p_1 \ldots p_K \sim U[-t, t]$ and a quantization level $q$, $\min_{q_1 \ldots q_K} \sum_{i=1}^{K} q_i$ subject to $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^q_p)]$ is minimized by $q_i = \sqrt{\frac{a}{b}} \times w_i^{2/3}$, where $a = \sum_{j=1}^{K} w_j^{2/3}$ and $b = \sum_{j=1}^{K} \frac{w_j^2}{q^2}$.

DAdaQuant applies Theorem 1 to lower communication costs while maintaining the same loss as static quantization does with a fixed $q$. A short sketch of the two adaptive rules follows below.
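The sketch below is a direct, minimal transcription of the time-adaptive case equation and of the real-valued Theorem 1 assignment. The variable names and the bookkeeping of past levels and losses are our own choices, not the authors' code; the handling of integer rounding is discussed next.

```python
import numpy as np

def next_time_adaptive_level(q_hist, G_tilde, t, phi, q_min, q_max):
    """Time-adaptive rule: double the level once the running-average loss
    G_tilde stops improving over phi rounds (the case equation above).
    q_hist[r] and G_tilde[r] store the level and smoothed loss of round r."""
    if t == 0:
        return q_min
    if (t > phi
            and G_tilde[t - 1] >= G_tilde[t - phi]
            and 2 * q_hist[t - 1] < q_max
            and q_hist[t - 1] == q_hist[t - phi]):
        return 2 * q_hist[t - 1]
    return q_hist[t - 1]

def client_adaptive_levels(w, q):
    """Real-valued Theorem 1 levels q_i = sqrt(a/b) * w_i^(2/3)."""
    w = np.asarray(w, dtype=float)
    a = np.sum(w ** (2 / 3))
    b = np.sum(w**2 / q**2)
    return np.sqrt(a / b) * w ** (2 / 3)

print(client_adaptive_levels([0.5, 0.3, 0.2], q=8))
```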
To ensure that quantization levels are natural numbers, DAdaQuant approximates the optimal real-valued solution as $q_i = \max(1, \mathrm{round}(\sqrt{\frac{a}{b}} \times w_i^{2/3}))$. Appendix B gives a detailed proof of Theorem 1. To the best of our knowledge, DAdaQuant is the first algorithm to use client-adaptive quantization.

Algorithm 1: The FedAvg and DAdaQuant algorithms. The uncolored lines list FedAvg. Adding the colored lines creates DAdaQuant (the colors mark quantization, client-adaptive quantization and time-adaptive quantization).

1 Function RunServer()
2   Initialize $w_i = \frac{|D_i|}{\sum_j |D_j|}$ for all $i \in [1, \ldots, N]$;
3   for $t = 0, \ldots, T-1$ do
4     Choose $S_t \subset C$ with $|S_t| = K$, including each $c_k \in C$ with uniform probability;
5     $q_t \leftarrow \begin{cases} q_{\min} & t = 0 \\ 2q_{t-1} & t > 0 \text{ and } \tilde{G}_{t-1} \ge \tilde{G}_{t-\phi} \text{ and } t > \phi \text{ and } 2q_{t-1} < q_{\max} \text{ and } q_{t-1} = q_{t-\phi} \\ q_{t-1} & \text{else} \end{cases}$;
6     for $c_k \in S_t$ do in parallel
7       $q^k_t \leftarrow \sqrt{\sum_{j=1}^{K} w_j^{2/3} \,/\, \sum_{j=1}^{K} (w_j^2/q_t^2)} \times w_k^{2/3}$;
8       Send($c_k$, $p_t$, $q^k_t$);
9       Receive($c_k$, $p^k_{t+1}$, $\hat{G}^k_t$);
10    end
11    $p_{t+1} \leftarrow \sum_{k \in S_t} w_k p^k_{t+1}$;
12    $\hat{G}_t \leftarrow \sum_{k \in S_t} w_k \hat{G}^k_t$;
13    $\tilde{G}_t \leftarrow \begin{cases} \hat{G}_0 & t = 0 \\ \psi\tilde{G}_{t-1} + (1-\psi)\hat{G}_t & \text{else} \end{cases}$;
14  end
15 end
16 Function RunClient($c_k$)
17   while True do
18     Receive(Server, $p_t$, $q^k_t$);
19     $\hat{G}^k_t \leftarrow F_k(p_t)$;
20     $p^k_{t+1} \leftarrow$ $p_t$ trained on $F_k$ with SGD for $E$ epochs with learning rate $\eta$;
21     Send(Server, $Q_{q^k_t}(p^k_{t+1})$, $\hat{G}^k_t$);
22   end
23 end

3.6 DOUBLY-ADAPTIVE QUANTIZATION (DADAQUANT)

DAdaQuant combines the time-adaptive and client-adaptive quantization algorithms described in the previous sections. At each round $t$, time-adaptive quantization determines a preliminary quantization level $q_t$. Client-adaptive quantization then finds the client quantization levels $q^k_t$, $k \in \{1, \ldots, K\}$, that minimize $\sum_{i=1}^{K} q_i$ subject to $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_t}_p)]$. Algorithm 1 lists DAdaQuant in detail. Figure 3 gives an example of how our time-adaptive, client-adaptive and doubly-adaptive quantization algorithms set quantization levels. Reisizadeh et al. (2019) prove the convergence of FL with quantization for convex and non-convex cases as long as the quantizer $Q$ is (1) unbiased and (2) has a bounded variance. These convergence results extend to DAdaQuant when combined with any quantizer that satisfies (1) and (2) for DAdaQuant's minimum quantization level $q = 1$. Crucially, this includes Federated QSGD. We highlight DAdaQuant's low overhead and general applicability. The computational overhead is dominated by an additional evaluation epoch per round per client to compute $\hat{G}_t$, which is negligible when training for many epochs per round. In our experiments, we observe computational overheads of ≈ 1% (see Appendix A.2). DAdaQuant can complement any FL algorithm that trains models over several rounds and accumulates a weighted average of client parameters. Most FL algorithms, including FedAvg, follow this design.

4 EXPERIMENTS

4.1 EXPERIMENTAL DETAILS

Evaluation We use DAdaQuant with Federated QSGD to train different models with FedProx on different datasets for a fixed number of rounds. We monitor the test loss and accuracy at fixed intervals and measure uplink communication at every round across all devices.

Models & datasets We select a broad and diverse set of five models and datasets to demonstrate the general applicability of DAdaQuant. To this end, we use DAdaQuant to train a linear model, CNNs and LSTMs of varying complexity on a federated synthetic dataset (Synthetic), as well as two federated image datasets (FEMNIST and CelebA) and two federated natural language datasets (Sent140 and Shakespeare) from the LEAF (Caldas et al., 2018) project for standardized FL research.
We refer to Appendix A.1 for more information on the models, datasets, training objectives and implementation.

System heterogeneity In practice, FL has to cope with clients that have different compute capabilities. We follow Li et al. (2018) and simulate this system heterogeneity by randomly reducing the number of epochs to $E'$ for a random subset $S'_t \subset S_t$ of clients at each round $t$, where $E'$ is sampled from $[1, \ldots, E]$ and $|S'_t| = 0.9K$.

Baselines We compare DAdaQuant against competing quantization-based algorithms for FL parameter compression, namely Federated QSGD, FedPAQ (Reisizadeh et al., 2019), GZip with fixed-point quantization (FxPQ + GZip), UVeQFed (Shlezinger et al., 2020) and FP8. Federated QSGD (see Section 3.3) is our most important baseline because it outperforms the other algorithms. FedPAQ only applies fixed-point quantization, which is equivalent to Federated QSGD without lossless compression. Similarly, FxPQ + GZip is equivalent to Federated QSGD with GZip for its lossless compression stages. UVeQFed generalizes scalar quantization to vector quantization, followed by arithmetic coding. We apply UVeQFed with the optimal hyperparameters reported by its authors. FP8 (Wang et al., 2018a) is a floating-point quantizer that uses an 8-bit floating-point format designed for storing neural network gradients. We also evaluate all experiments without compression to establish an accuracy benchmark.

Hyperparameters With the exception of CelebA, all our datasets and models are also used by Li et al. (2018). We therefore adopt most of the hyperparameters from Li et al. (2018) and use LEAF's hyperparameters for CelebA (Caldas et al., 2018). For all experiments, we sample 10 clients each round. We train Synthetic, FEMNIST and CelebA for 500 rounds each. We train Sent140 for 1000 rounds due to slow convergence and Shakespeare for 50 rounds due to rapid convergence. We use batch size 10, learning rates 0.01, 0.003, 0.3, 0.8, 0.1 and µ values (FedProx's proximal term coefficient) 1, 1, 1, 0.001, 0 for Synthetic, FEMNIST, Sent140, Shakespeare and CelebA respectively. We randomly split the local datasets into an 80% training set and a 20% test set. To select the quantization level $q$ for static quantization with Federated QSGD, FedPAQ and FxPQ + GZip, we run a grid search over $q = 1, 2, 4, 8, \ldots$ and choose for each dataset the lowest $q$ for which Federated QSGD exceeds uncompressed training in accuracy (a minimal sketch of this selection follows at the end of this subsection). We set UVeQFed's "coding rate" hyperparameter $R = 4$, which is the lowest value for which UVeQFed achieves negligible accuracy differences compared to uncompressed training. We set the remaining hyperparameters of UVeQFed to the optimal values reported by its authors. Appendix A.4 shows further experiments that compare against UVeQFed with $R$ chosen to maximize its compression factor. For DAdaQuant's time-adaptive quantization, we set $\psi$ to 0.9, $\phi$ to 1/10th of the number of rounds and $q_{\max}$ to the quantization level $q$ for each experiment. For Synthetic and FEMNIST, we set $q_{\min}$ to 1. We find that Sent140, Shakespeare and CelebA require a high quantization level to achieve top accuracies and/or converge in few rounds. This prevents time-adaptive quantization from increasing the quantization level quickly enough, resulting in prolonged low-precision training that hurts model performance. To counter this effect, we set $q_{\min}$ to $q_{\max}/2$. This effectively results in binary time-adaptive quantization with an initial low-precision phase with $q = q_{\max}/2$, followed by a high-precision phase with $q = q_{\max}$.
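The grid search just described is straightforward to express in code. The following is a minimal sketch under our own assumptions: `accuracy_at` is a hypothetical callback (not from the paper) that trains with a given static quantization level and returns the final accuracy.

```python
def select_static_q(accuracy_at, uncompressed_accuracy,
                    levels=(1, 2, 4, 8, 16, 32)):
    """Return the lowest q at which Federated QSGD beats uncompressed
    training, mirroring the grid search over q = 1, 2, 4, 8, ... above."""
    for q in levels:
        if accuracy_at(q) > uncompressed_accuracy:
            return q
    return levels[-1]   # fall back to the largest searched level
```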
4.2 RESULTS

We repeat the main experiments three times and report average results and their standard deviation (where applicable). Table 1 shows the highest accuracy and total communication for each experiment. Figure 4 plots the maximum accuracy achieved for any given amount of communication.

Baselines Table 1 shows that the accuracy of most experiments lies within the margin of error of the uncompressed experiments. This reiterates the viability of quantization-based compression algorithms for communication reduction in FL. For all experiments, Federated QSGD achieves a significantly higher compression factor than the other baselines. The authors of FedPAQ and UVeQFed also compare their methods against QSGD and report them as superior. However, FedPAQ is compared against "unfederated" QSGD that communicates gradients after each local training step, and UVeQFed is compared against QSGD without its lossless compression stages.

Time-adaptive quantization The purely time-adaptive version of DAdaQuant, DAdaQuanttime, universally outperforms Federated QSGD and the other baselines in Table 1, achieving comparable accuracies while lowering communication costs. DAdaQuanttime performs particularly well on Synthetic and FEMNIST, where it starts from the lowest possible quantization level $q = 1$. However, binary time-adaptive quantization still measurably improves over QSGD for Sent140, Shakespeare and CelebA. Figure 8 in Appendix A.5 provides empirical evidence that AdaQuantFL's communication scales linearly with the number of clients. As a result, AdaQuantFL is prohibitively expensive for datasets with thousands of clients such as CelebA and Sent140. DAdaQuant does not face this problem because its communication is unaffected by the number of clients.

Client-adaptive quantization The purely client-adaptive version of DAdaQuant, DAdaQuantclients, also universally outperforms Federated QSGD and the other baselines in Table 1, achieving similar accuracies while lowering communication costs. Unsurprisingly, the performance of DAdaQuantclients is correlated with the coefficient of variation $c_v = \frac{\sigma}{\mu}$ of the numbers of samples in the local datasets with mean $\mu$ and standard deviation $\sigma$: Synthetic ($c_v = 3.3$) and Shakespeare ($c_v = 1.7$) achieve significantly higher compression factors than Sent140 ($c_v = 0.3$), FEMNIST ($c_v = 0.4$) and CelebA ($c_v = 0.3$).

DAdaQuant DAdaQuant outperforms DAdaQuanttime and DAdaQuantclients in communication while achieving similar accuracies. The compression factors of DAdaQuant are roughly multiplicative in those of DAdaQuantclients and DAdaQuanttime. This demonstrates that we can effectively combine time- and client-adaptive quantization for maximal communication savings. Figure 4 shows that DAdaQuant achieves a higher accuracy than the strongest baseline, Federated QSGD, for any fixed amount of client→server communication.

5 CONCLUSION

We introduced DAdaQuant as a computationally efficient and robust algorithm to boost the performance of quantization-based FL compression algorithms. We showed intuitively and mathematically how DAdaQuant's dynamic adjustment of the quantization level across time and clients minimizes client→server communication while maintaining convergence speed. Our experiments establish DAdaQuant as nearly universally superior over static quantizers, achieving state-of-the-art compression factors when applied to Federated QSGD. The communication savings of DAdaQuant effectively lower FL bandwidth usage, energy consumption and training time.
Future work may apply and adapt DAdaQuant to new quantizers, further pushing the state of the art in FL uplink compression.

6 REPRODUCIBILITY STATEMENT

Our submission includes a repository with the source code for DAdaQuant and for the experiments presented in this paper. All the datasets used in our experiments are publicly available. Any postprocessing steps of the datasets are described in Appendix A.1. To facilitate the reproduction of our results, we have bundled all our source code, dependencies and datasets into a Docker image. The repository submitted with this paper contains instructions on how to use this Docker image and reproduce all plots and tables in this paper.

7 ETHICS STATEMENT

FL trains models on private client datasets in a privacy-preserving manner. However, FL does not completely eliminate privacy concerns, because the transmitted model updates and the learned model parameters may expose the private client data from which they are derived. Our work does not directly target privacy concerns in FL. With that said, it is worth noting that DAdaQuant does not expose any client data that is not already exposed through standard FL training algorithms. In fact, DAdaQuant reduces the amount of exposed data through lossy compression of the model updates. We therefore believe that DAdaQuant is free of ethical complications.

A ADDITIONAL SIMULATION DETAILS AND EXPERIMENTS

A.1 ADDITIONAL SIMULATION DETAILS

Here, we give detailed information on the models, datasets, training objectives and implementation that we use for our experiments. We set up the following five FL tasks:

• Multinomial logistic regression (MLR) on a synthetic dataset called Synthetic that contains vectors in $\mathbb{R}^{60}$ with a label of one out of 10 classes. We use the synthetic dataset generator in Li et al. (2018) to generate synthetic datasets. The generator samples Synthetic's local datasets and labels from MLR models with randomly initialized parameters. For this purpose, parameters α and β control different kinds of data heterogeneity. α controls the variation in the local models from which the local dataset labels are generated. β controls the variation in the local dataset samples. We set α = 1 and β = 1 to simulate an FL setting with both kinds of data heterogeneity. This makes Synthetic a useful testbed for FL.
• Character classification into 62 classes of handwritten characters from the FEMNIST dataset using a CNN. FEMNIST groups samples from the same author into the same local dataset.
• Smile detection in facial images from the CelebA dataset using a CNN. CelebA groups samples of the same person into the same local dataset. We note that LEAF's CNN for CelebA uses BatchNorm layers. We replace them with LayerNorm layers because they are more amenable to quantization. This change does not affect the final accuracy.
• Binary sentiment analysis of tweets from the Sent140 dataset using an LSTM. Sent140 groups tweets from the same user into the same local dataset. The majority of local datasets in the raw Sent140 dataset only have a single sample. This impedes FL convergence. Therefore, we filter Sent140 to clients with at least 10 samples, i.e. one complete batch (a short filtering sketch follows after this list). Caldas et al. (2018); Li et al. (2018) similarly filter Sent140 for their FL experiments.
• Next character prediction on text snippets from the Shakespeare dataset of Shakespeare's collected plays using an LSTM. Shakespeare groups lines from the same character into the same local dataset.

Table 2 provides statistics of our models and datasets.
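The Sent140 filtering step mentioned above amounts to a one-line preprocessing pass. A minimal sketch, assuming `local_datasets` maps a client id to its list of samples (names are illustrative, not from the paper's code):

```python
def filter_small_clients(local_datasets, min_samples=10):
    """Drop clients with fewer than min_samples samples (one complete batch),
    mirroring the Sent140 preprocessing described above."""
    return {cid: ds for cid, ds in local_datasets.items()
            if len(ds) >= min_samples}
```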
For our experiments in Figure 8, AdaQuantFL requires a hyperparameter $s$ that determines the initial quantization level. We set $s$ to 2, the optimal value reported by the authors of AdaQuantFL. The remaining hyperparameters are identical to those used for the Synthetic dataset experiments in Table 1. We implement the models with PyTorch (Paszke et al., 2019) and use Flower (Beutel et al., 2020) to simulate the FL server and clients.

A.2 COMPUTATIONAL OVERHEAD OF DADAQUANT

A.3 COMPLETE COMMUNICATION-ACCURACY TRADE-OFF CURVES

[Figure: complete communication-accuracy trade-off curves, with panels for Synthetic, FEMNIST and Sent140.]

A.4 ADDITIONAL UVEQFED EXPERIMENTS

To demonstrate that the choice of UVeQFed's "coding rate" hyperparameter $R$ does not affect our findings on the superior compression factors of DAdaQuant, we re-evaluate UVeQFed with $R = 1$, which maximizes UVeQFed's compression factor. Our results in Table 4 show that with the exception of Shakespeare, DAdaQuant still achieves considerably higher compression factors than UVeQFed.

A.5 ADDITIONAL ADAQUANTFL EXPERIMENTS

In principle, AdaQuantFL could be adapted to work with partial client participation by computing an estimate of the global loss from the sampled subset of clients. While a full evaluation of this approach is out of the scope of this paper, we conduct a brief feasibility study on FEMNIST. Concretely, we find that a single run of AdaQuantFL with partial client participation on FEMNIST achieved an accuracy of 78.7%, with a total client→server communication of 50.5 MB. In contrast, the same run with DAdaQuanttime achieved a similar accuracy of 78.4%, while lowering the total client→server communication to 27.5 MB.

B PROOFS

Lemma 1. Take an arbitrary quantization level $q_i \in \mathbb{N}$ and parameter $p_i \in [-t, t]$. Then, $Q_{q_i}(p_i)$ is an unbiased estimator of $p_i$.

Proof. Let $s_i = t/q_i$, $b_i = \mathrm{rem}(p_i, s_i)$ and $u_i = s_i - b_i$. Then, we have
$$\mathbb{E}\left[Q_{q_i}(p_i)\right] = \frac{u_i}{s_i}(p_i - b_i) + \frac{b_i}{s_i}(p_i + u_i) = p_i \quad \text{(see Figure 9)}.$$

Lemma 2. For arbitrary $t > 0$ and parameter $p_i \in [-t, t]$, let $s_i = t/q_i$, $b_i = \mathrm{rem}(p_i, s_i)$ and $u_i = s_i - b_i$. Then, $\mathrm{Var}\left(Q_{q_i}(p_i)\right) = u_i b_i$.

Proof.
$$\mathrm{Var}\left(Q_{q_i}(p_i)\right) = \mathbb{E}\left[\left(Q_{q_i}(p_i) - \mathbb{E}\left[Q_{q_i}(p_i)\right]\right)^2\right] = \mathbb{E}\left[\left(Q_{q_i}(p_i) - p_i\right)^2\right] \quad \text{(Lemma 1)}$$
$$= \frac{b_i}{s_i}u_i^2 + \frac{u_i}{s_i}b_i^2 \ \text{(see Figure 9)} \ = \frac{u_i b_i (u_i + b_i)}{s_i} = u_i b_i.$$

Lemma 3. Assume that parameters $p_1 \ldots p_K$ are sampled from $U[-t, t]$ for arbitrary $t > 0$. Then, $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = \frac{t^2}{6}\sum_{i=1}^{K} \frac{w_i^2}{q_i^2}$.

Proof.
$$\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e_p)] = \frac{1}{2t}\int_{-t}^{t} \cdots \frac{1}{2t}\int_{-t}^{t} \mathrm{Var}\left(\sum_{i=1}^{K} w_i Q_{q_i}(p_i) - p\right) dp_1 \ldots dp_K$$
$$= \frac{1}{t}\int_{0}^{t} \cdots \frac{1}{t}\int_{0}^{t} \mathrm{Var}\left(\sum_{i=1}^{K} w_i Q_{q_i}(p_i) - p\right) dp_1 \ldots dp_K \quad \text{(symmetry of } Q_{q_i}(p_i) \text{ w.r.t. negation)}$$
$$= \frac{1}{t^K}\int_{0}^{t} \cdots \int_{0}^{t} \sum_{i=1}^{K} w_i^2 \mathrm{Var}\left(Q_{q_i}(p_i)\right) dp_1 \ldots dp_K \quad \text{(mutual independence of } Q_{q_i}(p_i)\ \forall i\text{)}$$
$$= \frac{1}{t^K} \sum_{i=1}^{K} \int_{0}^{t} \cdots \int_{0}^{t} w_i^2 \mathrm{Var}\left(Q_{q_i}(p_i)\right) dp_1 \ldots dp_K \quad \text{(exchangeability of finite sums and integrals)}$$
$$= \frac{1}{t^K} \sum_{i=1}^{K} t^{K-1} \int_{0}^{t} w_i^2 \mathrm{Var}\left(Q_{q_i}(p_i)\right) dp_i = \frac{1}{t} \sum_{i=1}^{K} w_i^2 \int_{0}^{t} u_i b_i \, dp_i \quad \text{(Lemma 2)}$$
$$= \frac{1}{t} \sum_{i=1}^{K} w_i^2 q_i \int_{0}^{s_i} u_i b_i \, dp_i \quad (s_i\text{-periodicity of } u_i \text{ and } b_i\text{)}$$
$$= \frac{1}{t} \sum_{i=1}^{K} w_i^2 q_i \int_{0}^{s_i} (s_i - p_i)\, p_i \, dp_i = \frac{1}{6t} \sum_{i=1}^{K} w_i^2 q_i s_i^3 = \frac{t^2}{6} \sum_{i=1}^{K} \frac{w_i^2}{q_i^2}$$

Lemma 4. Let $Q$ be a fixed-point quantizer. Assume that parameters $p_1 \ldots p_K$ are sampled from $U[-t, t]$ for arbitrary $t > 0$. Then, $\min_{q_1 \ldots q_K} \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)]$ subject to $Q = \sum_{i=1}^{K} q_i$ is minimized by $q_i = Q \frac{w_i^{2/3}}{\sum_{k=1}^{K} w_k^{2/3}}$.

Proof.
Define
$$f(q) = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)], \qquad g(q) = \sum_{i=1}^{K} q_i, \qquad L(q) = f(q) - \lambda g(q) \quad \text{(Lagrangian)}.$$
Any (local) minimum $\hat{q}$ satisfies $\nabla L(\hat{q}) = 0$
$$\iff \nabla \frac{t^2}{6}\sum_{i=1}^{K}\frac{w_i^2}{q_i^2} - \lambda \nabla \sum_{i=1}^{K} q_i = 0 \ \wedge\ \sum_{i=1}^{K} q_i = Q \quad \text{(Lemma 3)}$$
$$\iff \forall i = 1 \ldots K.\ \frac{t^2}{-3}\frac{w_i^2}{q_i^3} = \lambda \ \wedge\ \sum_{i=1}^{K} q_i = Q$$
$$\iff \forall i = 1 \ldots K.\ q_i = \sqrt[3]{\frac{t^2}{-3\lambda}}\, w_i^{2/3} \ \wedge\ \sum_{i=1}^{K} q_i = Q$$
$$\implies \forall i = 1 \ldots K.\ q_i = Q\frac{w_i^{2/3}}{\sum_{j=1}^{K} w_j^{2/3}}$$

B.1 PROOF OF THEOREM 1

Proof. Using Lemma 4, it is straightforward to show that for any $V$, $\min_{q_1 \ldots q_K} \sum_{i=1}^{K} q_i$ subject to $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = V$ is minimized by $q_i = C w_i^{2/3}$ for the unique $C \in \mathbb{R}_{>0}$ that satisfies $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = V$. Then, taking $V = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q}_p)]$ and $C = \sqrt{a/b}$ (see Theorem 1), we do indeed get
$$\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = \frac{t^2}{6}\sum_{i=1}^{K}\frac{w_i^2}{(C w_i^{2/3})^2} \quad \text{(Lemma 3)}$$
$$= \frac{1}{C^2}\,\frac{t^2}{6}\sum_{i=1}^{K} w_i^{2/3} = \frac{\sum_{j=1}^{K} w_j^2 / q^2}{\sum_{j=1}^{K} w_j^{2/3}} \cdot \frac{t^2}{6}\sum_{i=1}^{K} w_i^{2/3} = \frac{t^2}{6}\sum_{j=1}^{K}\frac{w_j^2}{q^2} = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q}_p)] \quad \text{(Lemma 3)}$$
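A quick Monte Carlo check of Lemma 3 is also easy: conditioned on the parameters, the variance of the weighted accumulation is $\sum_i w_i^2 u_i b_i$ by Lemma 2 and mutual independence, and averaging over $p_i \sim U[-t, t]$ should recover the closed form. The sketch below does exactly that, with made-up weights and levels (our own illustration, not the authors' code).

```python
import numpy as np

rng = np.random.default_rng(1)
t = 1.0
w = np.array([0.6, 0.3, 0.1])   # hypothetical client weights
q = np.array([4, 2, 2])         # hypothetical per-client quantization levels

def conditional_var(p):
    # Given fixed parameters p, Var(sum_i w_i Q_{q_i}(p_i) - p) equals
    # sum_i w_i^2 * u_i * b_i by Lemma 2 and mutual independence.
    s = t / q
    b = np.abs(p) % s
    u = s - b
    return np.sum(w**2 * u * b)

draws = [conditional_var(rng.uniform(-t, t, size=3)) for _ in range(200_000)]
print(np.mean(draws), "vs", t**2 / 6 * np.sum(w**2 / q**2))  # Lemma 3
```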
1. What is the focus and contribution of the paper on federated learning? 2. What are the strengths of the proposed approach, particularly in terms of communication efficiency? 3. Do you have any concerns or questions regarding the paper's theoretical analysis, such as the definition of certain variables or the relationship between agent-specific quantization levels? 4. How does the proposed approach compare to previous quantization works in terms of communication efficiency, especially in data heterogeneous cases? 5. Are there any minor questions or suggestions you have for the paper, such as requests for additional justification or empirical studies?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a communication-efficient federated learning framework named DAdaQuant. It is a quantization-based FL compression algorithm, which chooses both time-adaptive and client-adaptive quantization levels to improve the communication efficiency of FL algorithms over previous quantization works. The work provides mathematical intuition behind choosing the client-adaptive quantization level. The authors also show solid empirical studies over FL datasets and validate the superiority of the proposed approach in communication efficiency. Review The paper provides clear intuition behind the doubly adaptive quantization approach. The work also presents solid empirical studies by comparing with a series of baseline algorithms over multiple classical datasets. The work overall is well presented. Major concerns and questions: For Theorem 1, what is the definition of $e^{q}_{p_1 \cdots p_K}$? How do the agent-specific quantization levels $q_i$ relate to the given quantization level $q$? For Figure 4, the authors comment that for DAdaQuant, the communication cost does not scale with the number of clients. The reviewer thinks that although the per-iteration communication cost does not increase, the overall number of training rounds will increase, which causes the total communication cost to grow, especially in a data-heterogeneous case. Can the authors comment on this? Minor questions: In Section 3.5, the authors comment "We observe that $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(Q_q(p))]$ is a useful statistic of the quantization error because...". The reviewer does not doubt the intuition. But is there any rigorous justification for this observation? Have the authors tried combining other orthogonal techniques with quantization to further improve the communication cost empirically?
ICLR
Title
DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning

Abstract
Federated Learning (FL) is a powerful technique for training a model on a server with data from several clients in a privacy-preserving manner. In FL, a server sends the model to the clients, who then train the model locally and send it back to the server. The server aggregates the updated models and repeats the process for several rounds. FL incurs significant communication costs, in particular when transmitting the updated local models from the clients back to the server. Recently proposed algorithms quantize the model parameters to efficiently compress FL communication. These algorithms typically have a quantization level that controls the compression factor. We find that dynamic adaptations of the quantization level can boost compression without sacrificing model quality. First, we introduce a time-adaptive quantization algorithm that increases the quantization level as training progresses. Second, we introduce a client-adaptive quantization algorithm that assigns each individual client the optimal quantization level at every round. Finally, we combine both algorithms into DAdaQuant, the doubly-adaptive quantization algorithm. Our experiments show that DAdaQuant consistently improves client→server compression, outperforming the strongest non-adaptive baselines by up to 2.8×.

1 INTRODUCTION

Edge devices such as smartphones, remote sensors and smart home appliances generate massive amounts of data (Wang et al., 2018b; Cao et al., 2017; Shi & Dustdar, 2016). In recent years, Federated Learning (FL) has emerged as a technique to train models on this data while preserving privacy (McMahan et al., 2017; Li et al., 2018). In FL, we have a single server that is connected to many clients. Each client stores a local dataset that it does not want to share with the server because of privacy concerns or law enforcement (Voigt & Von dem Bussche, 2017). The server wants to train a model on all local datasets. To this end, it initializes the model and sends it to a random subset of clients. Each client trains the model on its local dataset and sends the trained model back to the server. The server accumulates all trained models into an updated model for the next iteration and repeats the process for several rounds until some termination criterion is met. This procedure enables the server to train a model without accessing any local datasets. Today's neural network models often have millions or even billions (Brown et al., 2020) of parameters, which makes high communication costs a concern in FL. In fact, Qiu et al. (2020) suggest that communication between clients and server may account for over 70% of energy consumption in FL. Reducing communication in FL is an attractive area of research because it lowers bandwidth requirements, energy consumption and training time. Communication in FL occurs in two phases: sending parameters from the server to clients (downlink) and sending updated parameters from clients to the server (uplink). Uplink bandwidth usually imposes a tighter bottleneck than downlink bandwidth. There are several reasons for this. For one, the average global mobile upload bandwidth is currently less than one fourth of the download bandwidth (Speedtest). For another, FL downlink communication sends the same parameters to each client.
Broadcasting parameters is usually more efficient than the accumulation of parameters from different clients that is required for uplink communication (Amiri et al., 2020; Reisizadeh et al., 2019). For these reasons, we seek to compress uplink communication. A large class of compression algorithms for FL apply some lossy quantizer Q, optionally followed by a lossless compression stage. Q usually provides a "quantization level" hyperparameter q to control the coarseness of quantization (e.g. the number of bins for fixed-point quantization). When q is kept constant during training, we speak of static quantization. When q changes, we speak of adaptive quantization. Adaptive quantization can exploit asymmetries in the FL framework to minimize communication. One such asymmetry lies in FL's training time, where Jhunjhunwala et al. (2021) observed that early training rounds can use a lower q without affecting convergence. Figure 2 illustrates how time-adaptive quantization leverages this phenomenon to minimize communication. Another asymmetry lies in FL's client space, because most FL algorithms weight client contributions to the global model proportional to their local dataset sizes. Figure 1 illustrates how client-adaptive quantization can minimize the quantization error. Intuitively, FL clients with greater weighting should have a greater communication budget, and our proposed client-adaptive quantization achieves this in a principled way. To this end, we introduce the expected variance of an accumulation of quantized parameters, $\mathbb{E}[\mathrm{Var}(\sum Q(p))]$, as a measure of the quantization error. Our client-adaptive quantization algorithm then assigns clients minimal quantization levels, subject to a fixed $\mathbb{E}[\mathrm{Var}(\sum Q(p))]$. This lowers the amount of data communicated from clients to the server, without increasing the quantization error. DAdaQuant (Doubly Adaptive Quantization) combines time- and client-adaptive quantization with an adaptation of the QSGD fixed-point quantization algorithm to achieve state-of-the-art FL uplink compression. In this paper, we make the following contributions:

• We introduce the concept of client-adaptive quantization and develop algorithms for time- and client-adaptive quantization that are computationally efficient, empirically superior to existing algorithms, and compatible with arbitrary FL quantizers. Our client-adaptive quantization is provably optimal for stochastic fixed-point quantizers.
• We create Federated QSGD as an adaptation of the stochastic fixed-point quantizer QSGD that works with FL. Federated QSGD outperforms all other quantizers, establishing a strong baseline for FL compression with static quantization.
• We combine time- and client-adaptive quantization into DAdaQuant. We demonstrate DAdaQuant's state-of-the-art compression by empirically comparing it against several competitive FL compression algorithms.

2 RELATED WORK

FL research has explored several approaches to reduce communication. We identify three general directions. First, there is a growing interest in investigating FL algorithms that can converge in fewer rounds. FedAvg (McMahan et al., 2017) achieves this with prolonged local training, while FOLB (Nguyen et al., 2020) speeds up convergence through a more principled client sampling. Since communication is proportional to the number of training rounds, these algorithms effectively reduce communication.
Secondly, communication can be reduced by reducing the model size, because the amount of training communication is proportional to the model size. PruneFL (Jiang et al., 2019) progressively prunes the model over the course of training, while AFD (Bouacida et al., 2021) only trains submodels on clients. Thirdly, it is possible to directly compress FL training communication. FL compression algorithms typically apply techniques like top-k sparsification (Malekijoo et al., 2021; Rothchild et al., 2020) or quantization (Reisizadeh et al., 2019; Shlezinger et al., 2020) to parameter updates, optionally followed by lossless compression. Our work applies to quantization-based compression algorithms. It is partially based on QSGD (Alistarh et al., 2017), which combines lossy fixed-point quantization with a lossless compression algorithm to compress gradients communicated in distributed training. DAdaQuant adapts QSGD into Federated QSGD, which works with Federated Learning. DAdaQuant also draws inspiration from FedPAQ (Reisizadeh et al., 2019), the first FL framework to use lossy compression based on model parameter update quantization. However, FedPAQ does not explore the advantages of additional lossless compression or adaptive quantization. UVeQFed (Shlezinger et al., 2020) is an FL compression algorithm that generalizes scalar quantization to vector quantization and subsequently employs lossless compression with arithmetic coding. Like FedPAQ, UVeQFed also limits itself to a single static quantization level. Faster convergence, model size reduction and communication compression are orthogonal techniques, so they can be combined for further communication savings. For this paper, we limit the scope of empirical comparisons to quantization-based FL compression algorithms. For quantization-based compression for model training, prior works have demonstrated that DNNs can be successfully trained in low precision (Banner et al., 2018; Gupta et al., 2015; Sun et al., 2019). There are also several adaptive quantization algorithms for training neural networks in a non-distributed setting. Shen et al. (2020) use different quantization levels for different parameters of a neural network. FracTrain (Fu et al., 2020) introduced multi-dimensional adaptive quantization by developing time-adaptive quantization and combining it with parameter-adaptive quantization. However, FracTrain uses the current loss to decide on the quantization level. FL generally can only compute local client losses that are too noisy to be practical for FracTrain. AdaQuantFL introduces time-adaptive quantization to FL, but requires the global loss (Jhunjhunwala et al., 2021). To compute the global loss, AdaQuantFL has to communicate with every client each round. We show in Section 4.2 that this quickly becomes impractical as the number of clients grows. DAdaQuant's time-adaptive quantization overcomes this issue without compromising on the underlying FL communication. In addition, to the best of our knowledge, DAdaQuant is the first algorithm to use client-adaptive quantization.

3 THE DADAQUANT METHOD

3.1 FEDERATED LEARNING

Federated Learning assumes a client-server topology with a set $C = \{c_i \mid i \in \{1, 2, \ldots, N\}\}$ of $N$ clients that are connected to a single server. Each client $c_k$ has a local dataset $D_k$ drawn from a local data distribution $\mathcal{D}_k$. Given a model $M$ with parameters $p$, a loss function $f_p(d \in D_k)$ and the local loss $F_k(p) = \frac{1}{|D_k|}\sum_{d \in D_k} f_p(d)$, FL seeks to minimize the global loss $G(p) = \sum_{k=1}^{N} \frac{|D_k|}{\sum_l |D_l|} F_k(p)$.
3.2 FEDERATED AVERAGING (FEDAVG)

DAdaQuant makes only minimal assumptions about the FL algorithm. Crucially, DAdaQuant can complement FedAvg (McMahan et al., 2017), which is representative of a large class of FL algorithms. FedAvg trains the model $M$ over several rounds. In each round $t$, FedAvg sends the model parameters $p_t$ to a random subset $S_t$ of $K$ clients, who then optimize their local objectives $F_k(p_t)$ and send the updated model parameters $p^k_{t+1}$ back to the server. The server accumulates all parameters into the new global model $p_{t+1} = \sum_{k \in S_t} \frac{|D_k|}{\sum_j |D_j|} p^k_{t+1}$ and starts the next round. Algorithm 1 lists FedAvg in detail. For our experiments, we use the FedProx (Li et al., 2018) adaptation of FedAvg. FedProx improves the convergence of FedAvg by adding the proximal term $\frac{\mu}{2}\|p^k_{t+1} - p_t\|^2$ to the local objective $F_k(p^k_{t+1})$ in Line 20 of Algorithm 1.

3.3 QUANTIZATION WITH FEDERATED QSGD

While DAdaQuant can be applied to any quantizer with a configurable quantization level, it is optimized for fixed-point quantization. We introduce Federated QSGD as a competitive stochastic fixed-point quantizer on top of which DAdaQuant is applied. In general, stochastic fixed-point quantization uses a quantizer $Q_q$ with quantization level $q$ that splits $\mathbb{R}_{\ge 0}$ and $\mathbb{R}_{\le 0}$ into $q$ intervals each. $Q_q(p)$ then returns the sign of $p$ and $|p|$ stochastically rounded to one of the endpoints of its encompassing interval. $Q_q(p)$ quantizes the vector $p$ elementwise. We design DAdaQuant's quantization stage based on QSGD, an efficient fixed-point quantizer for state-of-the-art gradient compression. QSGD quantizes a vector $p$ in three steps:

1. Quantize $p$ as $Q_q(\frac{p}{\|p\|_2})$ into $q$ bins in $[0, 1]$, storing signs and $\|p\|_2$ separately. (lossy)
2. Encode the resulting integers with zero run-length encoding. (lossless)
3. Encode the resulting integers with Elias ω coding. (lossless)

QSGD has been designed specifically for quantizing gradients. This makes it not directly applicable to parameter compression. To overcome this limitation, we apply difference coding to uplink compression, first introduced to FL by FedPAQ. Each client $c_k$ applies $Q_q$ to the parameter updates $p^k_{t+1} - p_t$ (cf. Line 21 of Algorithm 1) and sends them to the server. The server keeps track of the previous parameters $p_t$ and accumulates the quantized parameter updates into the new parameters as $p_{t+1} = p_t + \sum_{k \in S_t} \frac{|D_k|}{\sum_l |D_l|} Q_q(p^k_{t+1} - p_t)$ (cf. Line 11 of Algorithm 1). We find that QSGD works well with parameter updates, which can be regarded as an accumulation of gradients over several training steps. We call this adaptation of QSGD Federated QSGD.

3.4 TIME-ADAPTIVE QUANTIZATION

Time-adaptive quantization uses a different quantization level $q_t$ for each round $t$ of FL training. DAdaQuant chooses $q_t$ to minimize communication costs without sacrificing accuracy. To this end, we find that lower quantization levels suffice to initially reduce the loss, while partly trained models require higher quantization levels to further improve (as illustrated in Figure 2). FracTrain is built on similar observations for non-distributed training. Therefore, we design DAdaQuant to mimic FracTrain in monotonically increasing $q_t$ as a function of $t$ and using the training loss to inform increases in $q_t$. When $q$ is too low, FL converges prematurely. Like FracTrain, DAdaQuant monitors the FL loss and increases $q$ when it converges. Unlike FracTrain, there is no single centralized loss function to evaluate, and unlike AdaQuantFL, we do not assume availability of the global training loss $G(p_t)$.
Instead, we estimate $G(p_t)$ as the average local loss $\hat{G}_t = \sum_{k \in S_t} \frac{|D_k|}{\sum_l |D_l|} F_k(p_t)$, where $S_t$ is the set of clients sampled at round $t$. Since $S_t$ typically consists of only a small fraction of all clients, $\hat{G}_t$ is a very noisy estimate of $G(p_t)$. This makes it unsuitable for convergence detection. Instead, DAdaQuant tracks a running average loss $\tilde{G}_t = \psi\tilde{G}_{t-1} + (1-\psi)\hat{G}_t$. We initialize $q_1 = q_{\min}$ for some $q_{\min} \in \mathbb{N}$. DAdaQuant determines training to converge whenever $\tilde{G}_t \ge \tilde{G}_{t+1-\phi}$ for some $\phi \in \mathbb{N}$ that specifies the number of rounds across which we compare $\tilde{G}$. On convergence, DAdaQuant sets $q_t = 2q_{t-1}$ and keeps the quantization level fixed for at least $\phi$ rounds to enable reductions in $G$ to manifest in $\tilde{G}$. Eventually, the training loss converges regardless of the quantization level. To avoid unconstrained quantization increases on convergence, we limit the quantization level to $q_{\max}$. The following equation summarizes DAdaQuant's time-adaptive quantization:

$$q_t \leftarrow \begin{cases} q_{\min} & t = 0 \\ 2q_{t-1} & t > 0 \text{ and } \tilde{G}_{t-1} \ge \tilde{G}_{t-\phi} \text{ and } t > \phi \text{ and } 2q_{t-1} < q_{\max} \text{ and } q_{t-1} = q_{t-\phi} \\ q_{t-1} & \text{else} \end{cases}$$

3.5 CLIENT-ADAPTIVE QUANTIZATION

FL algorithms typically accumulate each parameter $p_i$ over all clients into a weighted average $p = \sum_{i=1}^{K} w_i p_i$ (see Algorithm 1). Quantized FL accumulates quantized parameters $Q_q(p) = \sum_{i=1}^{K} w_i Q_q(p_i)$, where $q$ is the quantization level. We define the quantization error $e^q_p = |p - Q_q(p)|$. We observe in our experiments that the communication cost per client is roughly a linear function of Federated QSGD's quantization level $q$. This means that the communication cost per round is proportional to $Q = Kq$. We call $Q$ the communication budget and use it as a proxy measure of communication cost. Client-adaptive quantization dynamically adjusts the quantization level of each client. This means that even within a single round, each client $c_k$ can be assigned a different quantization level $q_k$. The previous definitions then generalize to $Q = \sum_{k=1}^{K} q_k$, $Q_{q_1 \ldots q_K}(p) = \sum_{i=1}^{K} w_i Q_{q_i}(p_i)$ and $e^{q_1 \ldots q_K}_p = |p - Q_{q_1 \ldots q_K}(p)|$. Prior convergence results for distributed training and FL rely on an upper bound $b$ on $\mathrm{Var}(Q_{q_1 \ldots q_K}(p))$ that determines the convergence speed (Li et al., 2017; Horváth et al., 2019; Reisizadeh et al., 2019). This makes $\mathrm{Var}(Q_{q_1 \ldots q_K}(p))$ a natural measure to optimize for when choosing $q_k$. We optimize for the closely related measure $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(Q_{q_1 \ldots q_K}(p))]$ that replaces the upper bound with an expectation over the parameters $p_1 \ldots p_K$. Heuristically, we expect this averaged measure to provide a better estimate of practically observed quantization errors than an upper bound. For a stochastic, unbiased fixed-point compressor like Federated QSGD, $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(Q_{q_1 \ldots q_K}(p))]$ equals $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)]$ and can be evaluated analytically. We devise an algorithm that chooses $q_k$ to minimize $Q$ subject to $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^q_p)]$ for a given $q$. Thus, our algorithm effectively minimizes communication costs while maintaining a quantization error similar to static quantization. Theorem 1 provides us with an analytical formula for the quantization levels $q_1 \ldots q_K$.

Theorem 1. Given parameters $p_1 \ldots p_K \sim U[-t, t]$ and a quantization level $q$, $\min_{q_1 \ldots q_K} \sum_{i=1}^{K} q_i$ subject to $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^q_p)]$ is minimized by $q_i = \sqrt{\frac{a}{b}} \times w_i^{2/3}$, where $a = \sum_{j=1}^{K} w_j^{2/3}$ and $b = \sum_{j=1}^{K} \frac{w_j^2}{q^2}$.

DAdaQuant applies Theorem 1 to lower communication costs while maintaining the same loss as static quantization does with a fixed $q$.
To ensure that quantization levels are natural numbers, DAdaQuant approximates the optimal real-valued solution as $q_i = \max(1, \mathrm{round}(\sqrt{\frac{a}{b}} \times w_i^{2/3}))$. Appendix B gives a detailed proof of Theorem 1. To the best of our knowledge, DAdaQuant is the first algorithm to use client-adaptive quantization.

Algorithm 1: The FedAvg and DAdaQuant algorithms. The uncolored lines list FedAvg; adding the colored lines (quantization, client-adaptive quantization, and time-adaptive quantization) creates DAdaQuant.
1  Function RunServer()
2    Initialize $w_i = |D_i| / \sum_j |D_j|$ for all $i \in [1, \ldots, N]$;
3    for $t = 0, \ldots, T-1$ do
4      Choose $S_t \subset C$ with $|S_t| = K$, including each $c_k \in C$ with uniform probability;
5      $q_t \leftarrow q_{\min}$ if $t = 0$; $q_t \leftarrow 2q_{t-1}$ if $t > 0$ and $\bar{G}_{t-1} \geq \bar{G}_{t-\phi}$ and $t > \phi$ and $2q_{t-1} \leq q_{\max}$ and $q_{t-1} = q_{t-\phi}$; $q_t \leftarrow q_{t-1}$ otherwise;
6      for $c_k \in S_t$ do in parallel
7        $q^k_t \leftarrow \max(1, \mathrm{round}(\sqrt{\sum_{j=1}^K w_j^{2/3} / \sum_{j=1}^K (w_j^2 / q_t^2)} \times w_k^{2/3}))$;
8        Send($c_k$, $p_t$, $q^k_t$);
9        Receive($c_k$, $Q_{q^k_t}(p^k_{t+1} - p_t)$, $\hat{G}^k_t$);
10     end
11     $p_{t+1} \leftarrow p_t + \sum_{k \in S_t} w_k\, Q_{q^k_t}(p^k_{t+1} - p_t)$;
12     $\hat{G}_t \leftarrow \sum_{k \in S_t} w_k \hat{G}^k_t$;
13     $\bar{G}_t \leftarrow \hat{G}_0$ if $t = 0$; $\bar{G}_t \leftarrow \psi \bar{G}_{t-1} + (1 - \psi)\hat{G}_t$ otherwise;
14   end
15 end
16 Function RunClient($c_k$)
17   while True do
18     Receive(Server, $p_t$, $q^k_t$);
19     $\hat{G}^k_t \leftarrow F_k(p_t)$;
20     $p^k_{t+1} \leftarrow$ the result of training $p_t$ on $F_k$ with SGD for $E$ epochs with learning rate $\eta$;
21     Send(Server, $Q_{q^k_t}(p^k_{t+1} - p_t)$, $\hat{G}^k_t$);
22   end
23 end

3.6 DOUBLY-ADAPTIVE QUANTIZATION (DADAQUANT)
DAdaQuant combines the time-adaptive and client-adaptive quantization algorithms described in the previous sections. At each round $t$, time-adaptive quantization determines a preliminary quantization level $q_t$. Client-adaptive quantization then finds the client quantization levels $q^k_t$, $k \in \{1, \ldots, K\}$, that minimize $\sum_{i=1}^K q_i$ subject to $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^q_p)]$. Algorithm 1 lists DAdaQuant in detail. Figure 3 gives an example of how our time-adaptive, client-adaptive, and doubly-adaptive quantization algorithms set quantization levels.

Reisizadeh et al. (2019) prove the convergence of FL with quantization for convex and non-convex cases as long as the quantizer $Q$ is (1) unbiased and (2) has a bounded variance. These convergence results extend to DAdaQuant when combined with any quantizer that satisfies (1) and (2) for DAdaQuant's minimum quantization level $q = 1$. Crucially, this includes Federated QSGD.

We highlight DAdaQuant's low overhead and general applicability. The computational overhead is dominated by an additional evaluation epoch per round per client to compute $\hat{G}_t$, which is negligible when training for many epochs per round. In our experiments, we observe computational overheads of ≈1% (see Appendix A.2). DAdaQuant can complement any FL algorithm that trains models over several rounds and accumulates a weighted average of client parameters. Most FL algorithms, including FedAvg, follow this design.

4 EXPERIMENTS
4.1 EXPERIMENTAL DETAILS
Evaluation We use DAdaQuant with Federated QSGD to train different models with FedProx on different datasets for a fixed number of rounds. We monitor the test loss and accuracy at fixed intervals and measure uplink communication at every round across all devices.
Models & datasets We select a broad and diverse set of five models and datasets to demonstrate the general applicability of DAdaQuant. To this end, we use DAdaQuant to train a linear model, CNNs, and LSTMs of varying complexity on a federated synthetic dataset (Synthetic), as well as two federated image datasets (FEMNIST and CelebA) and two federated natural language datasets (Sent140 and Shakespeare) from the LEAF (Caldas et al., 2018) project for standardized FL research.
We refer to Appendix A.1 for more information on the models, datasets, training objectives, and implementation.
System heterogeneity In practice, FL has to cope with clients that have different compute capabilities. We follow Li et al. (2018) and simulate this system heterogeneity by randomly reducing the number of epochs to $E'$ for a random subset $S'_t \subset S_t$ of clients at each round $t$, where $E'$ is sampled from $[1, \ldots, E]$ and $|S'_t| = 0.9K$.
Baselines We compare DAdaQuant against competing quantization-based algorithms for FL parameter compression, namely Federated QSGD, FedPAQ (Reisizadeh et al., 2019), GZip with fixed-point quantization (FxPQ + GZip), UVeQFed (Shlezinger et al., 2020), and FP8. Federated QSGD (see Section 3.3) is our most important baseline because it outperforms the other algorithms. FedPAQ only applies fixed-point quantization, which is equivalent to Federated QSGD without lossless compression. Similarly, FxPQ + GZip is equivalent to Federated QSGD with GZip for its lossless compression stages. UVeQFed generalizes scalar quantization to vector quantization, followed by arithmetic coding. We apply UVeQFed with the optimal hyperparameters reported by its authors. FP8 (Wang et al., 2018a) is a floating-point quantizer that uses an 8-bit floating-point format designed for storing neural network gradients. We also evaluate all experiments without compression to establish an accuracy benchmark.
Hyperparameters With the exception of CelebA, all our datasets and models are also used by Li et al. (2018). We therefore adopt most of the hyperparameters from Li et al. (2018) and use LEAF's hyperparameters for CelebA (Caldas et al., 2018). For all experiments, we sample 10 clients each round. We train Synthetic, FEMNIST, and CelebA for 500 rounds each. We train Sent140 for 1000 rounds due to slow convergence and Shakespeare for 50 rounds due to rapid convergence. We use batch size 10, learning rates 0.01, 0.003, 0.3, 0.8, 0.1, and $\mu$ (FedProx's proximal term coefficient) 1, 1, 1, 0.001, 0 for Synthetic, FEMNIST, Sent140, Shakespeare, and CelebA respectively. We randomly split the local datasets into an 80% training set and a 20% test set. To select the quantization level $q$ for static quantization with Federated QSGD, FedPAQ, and FxPQ + GZip, we run a grid search over $q = 1, 2, 4, 8, \ldots$ and choose for each dataset the lowest $q$ for which Federated QSGD exceeds uncompressed training in accuracy. We set UVeQFed's "coding rate" hyperparameter $R = 4$, which is the lowest value for which UVeQFed achieves negligible accuracy differences compared to uncompressed training. We set the remaining hyperparameters of UVeQFed to the optimal values reported by its authors. Appendix A.4 shows further experiments that compare against UVeQFed with $R$ chosen to maximize its compression factor.
For DAdaQuant's time-adaptive quantization, we set $\psi$ to 0.9, $\phi$ to 1/10th of the number of rounds, and $q_{\max}$ to the static quantization level $q$ for each experiment. For Synthetic and FEMNIST, we set $q_{\min}$ to 1. We find that Sent140, Shakespeare, and CelebA require a high quantization level to achieve top accuracies and/or converge in few rounds. This prevents time-adaptive quantization from increasing the quantization level quickly enough, resulting in prolonged low-precision training that hurts model performance. To counter this effect, we set $q_{\min}$ to $q_{\max}/2$. This effectively results in binary time-adaptive quantization with an initial low-precision phase with $q = q_{\max}/2$, followed by a high-precision phase with $q = q_{\max}$.
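As an aside, the system-heterogeneity simulation described above amounts to a few lines of code. The following is a sketch under our own naming assumptions, not the authors' implementation.

import random

def assign_local_epochs(sampled_clients, E, straggler_frac=0.9):
    # Straggler simulation following Li et al. (2018): 90% of the sampled
    # clients train for a random E' in [1, E]; the rest run all E epochs.
    stragglers = set(random.sample(sampled_clients, int(straggler_frac * len(sampled_clients))))
    return {c: random.randint(1, E) if c in stragglers else E for c in sampled_clients}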
4.2 RESULTS
We repeat the main experiments three times and report average results and their standard deviation (where applicable). Table 1 shows the highest accuracy and total communication for each experiment. Figure 4 plots the maximum accuracy achieved for any given amount of communication.
Baselines Table 1 shows that the accuracy of most experiments lies within the margin of error of the uncompressed experiments. This reiterates the viability of quantization-based compression algorithms for communication reduction in FL. For all experiments, Federated QSGD achieves a significantly higher compression factor than the other baselines. The authors of FedPAQ and UVeQFed also compare their methods against QSGD and report them as superior. However, FedPAQ is compared against "unfederated" QSGD that communicates gradients after each local training step, and UVeQFed is compared against QSGD without its lossless compression stages.
Time-adaptive quantization The purely time-adaptive version of DAdaQuant, DAdaQuant_time, universally outperforms Federated QSGD and the other baselines in Table 1, achieving comparable accuracies while lowering communication costs. DAdaQuant_time performs particularly well on Synthetic and FEMNIST, where it starts from the lowest possible quantization level $q = 1$. However, binary time-adaptive quantization still measurably improves over QSGD for Sent140, Shakespeare, and CelebA. Figure 8 in Appendix A.5 provides empirical evidence that AdaQuantFL's communication scales linearly with the number of clients. As a result, AdaQuantFL is prohibitively expensive for datasets with thousands of clients such as CelebA and Sent140. DAdaQuant does not face this problem because its communication is unaffected by the number of clients.
Client-adaptive quantization The purely client-adaptive version of DAdaQuant, DAdaQuant_clients, also universally outperforms Federated QSGD and the other baselines in Table 1, achieving similar accuracies while lowering communication costs. Unsurprisingly, the performance of DAdaQuant_clients is correlated with the coefficient of variation $c_v = \frac{\sigma}{\mu}$ of the numbers of samples in the local datasets with mean $\mu$ and standard deviation $\sigma$: Synthetic ($c_v = 3.3$) and Shakespeare ($c_v = 1.7$) achieve significantly higher compression factors than Sent140 ($c_v = 0.3$), FEMNIST ($c_v = 0.4$), and CelebA ($c_v = 0.3$).
DAdaQuant DAdaQuant outperforms DAdaQuant_time and DAdaQuant_clients in communication while achieving similar accuracies. The compression factors of DAdaQuant are roughly multiplicative in those of DAdaQuant_clients and DAdaQuant_time. This demonstrates that we can effectively combine time- and client-adaptive quantization for maximal communication savings. Figure 4 shows that DAdaQuant achieves a higher accuracy than the strongest baseline, Federated QSGD, for any fixed amount of client→server communication.

5 CONCLUSION
We introduced DAdaQuant as a computationally efficient and robust algorithm to boost the performance of quantization-based FL compression algorithms. We showed intuitively and mathematically how DAdaQuant's dynamic adjustment of the quantization level across time and clients minimizes client→server communication while maintaining convergence speed. Our experiments establish DAdaQuant as nearly universally superior over static quantizers, achieving state-of-the-art compression factors when applied to Federated QSGD. The communication savings of DAdaQuant effectively lower FL bandwidth usage, energy consumption, and training time.
Future work may apply and adapt DAdaQuant to new quantizers, further pushing the state of the art in FL uplink compression.

6 REPRODUCIBILITY STATEMENT
Our submission includes a repository with the source code for DAdaQuant and for the experiments presented in this paper. All the datasets used in our experiments are publicly available. Any postprocessing steps of the datasets are described in Appendix A.1. To facilitate the reproduction of our results, we have bundled all our source code, dependencies, and datasets into a Docker image. The repository submitted with this paper contains instructions on how to use this Docker image and reproduce all plots and tables in this paper.

7 ETHICS STATEMENT
FL trains models on private client datasets in a privacy-preserving manner. However, FL does not completely eliminate privacy concerns, because the transmitted model updates and the learned model parameters may expose the private client data from which they are derived. Our work does not directly target privacy concerns in FL. With that said, it is worth noting that DAdaQuant does not expose any client data that is not already exposed through standard FL training algorithms. In fact, DAdaQuant reduces the amount of exposed data through lossy compression of the model updates. We therefore believe that DAdaQuant is free of ethical complications.

A ADDITIONAL SIMULATION DETAILS AND EXPERIMENTS
A.1 ADDITIONAL SIMULATION DETAILS
Here, we give detailed information on the models, datasets, training objectives, and implementation that we use for our experiments. We set up the following five FL tasks:
• Multinomial logistic regression (MLR) on a synthetic dataset called Synthetic that contains vectors in $\mathbb{R}^{60}$ with a label of one out of 10 classes. We use the synthetic dataset generator in Li et al. (2018) to generate the synthetic datasets. The generator samples Synthetic's local datasets and labels from MLR models with randomly initialized parameters. For this purpose, parameters $\alpha$ and $\beta$ control different kinds of data heterogeneity: $\alpha$ controls the variation in the local models from which the local dataset labels are generated, and $\beta$ controls the variation in the local dataset samples. We set $\alpha = 1$ and $\beta = 1$ to simulate an FL setting with both kinds of data heterogeneity. This makes Synthetic a useful testbed for FL. A rough sketch of this generator follows the task list below.
• Character classification into 62 classes of handwritten characters from the FEMNIST dataset using a CNN. FEMNIST groups samples from the same author into the same local dataset.
• Smile detection in facial images from the CelebA dataset using a CNN. CelebA groups samples of the same person into the same local dataset. We note that LEAF's CNN for CelebA uses BatchNorm layers. We replace them with LayerNorm layers because they are more amenable to quantization. This change does not affect the final accuracy.
• Binary sentiment analysis of tweets from the Sent140 dataset using an LSTM. Sent140 groups tweets from the same user into the same local dataset. The majority of local datasets in the raw Sent140 dataset have only a single sample. This impedes FL convergence. Therefore, we filter Sent140 to clients with at least 10 samples (i.e., one complete batch). Caldas et al. (2018) and Li et al. (2018) similarly filter Sent140 for their FL experiments.
• Next-character prediction on text snippets from the Shakespeare dataset of Shakespeare's collected plays using an LSTM. Shakespeare groups lines from the same character into the same local dataset.
Table 2 provides statistics of our models and datasets.
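The sketch below is our own rough, simplified rendering of the Synthetic($\alpha$, $\beta$) generator of Li et al. (2018); the exact input covariance structure of the original generator is omitted, and all names are assumptions.

import numpy as np

def generate_synthetic_client(alpha=1.0, beta=1.0, n=100, d=60, classes=10,
                              rng=np.random.default_rng(0)):
    # alpha controls variation across the local MLR models that label the data;
    # beta controls variation across the local input distributions.
    u = rng.normal(0.0, alpha)                  # client-specific model mean
    W = rng.normal(u, 1.0, size=(d, classes))   # local MLR weights
    b = rng.normal(u, 1.0, size=classes)
    B = rng.normal(0.0, beta)                   # client-specific input mean
    x = rng.normal(B, 1.0, size=(n, d))
    y = (x @ W + b).argmax(axis=1)              # labels from the local model
    return x, y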
For our experiments in Figure 8, AdaQuantFL requires a hyperparameter $s$ that determines the initial quantization level. We set $s$ to 2, the optimal value reported by the authors of AdaQuantFL. The remaining hyperparameters are identical to those used for the Synthetic dataset experiments in Table 1. We implement the models with PyTorch (Paszke et al., 2019) and use Flower (Beutel et al., 2020) to simulate the FL server and clients.

A.2 COMPUTATIONAL OVERHEAD OF DADAQUANT

A.3 COMPLETE COMMUNICATION-ACCURACY TRADE-OFF CURVES
(Figure: complete communication-accuracy trade-off curves, with panels for the Synthetic, FEMNIST, and Sent140 datasets.)

A.4 ADDITIONAL UVEQFED EXPERIMENTS
To demonstrate that the choice of UVeQFed's "coding rate" hyperparameter $R$ does not affect our findings on the superior compression factors of DAdaQuant, we re-evaluate UVeQFed with $R = 1$, which maximizes UVeQFed's compression factor. Our results in Table 4 show that, with the exception of Shakespeare, DAdaQuant still achieves considerably higher compression factors than UVeQFed.

A.5 ADDITIONAL ADAQUANTFL EXPERIMENTS
In principle, AdaQuantFL could be adapted to work with partial client participation by computing an estimate of the global loss from the sampled subset of clients. While a full evaluation of this approach is out of the scope of this paper, we conduct a brief feasibility study on FEMNIST. Concretely, we find that a single run of AdaQuantFL with partial client participation on FEMNIST achieved an accuracy of 78.7%, with a total client→server communication of 50.5 MB. In contrast, the same run with DAdaQuant_time similarly achieved an accuracy of 78.4%, while lowering the total client→server communication to 27.5 MB.

B PROOFS
Lemma 1. Take an arbitrary quantization level $q_i \in \mathbb{N}$ and parameter $p_i \in [-t, t]$. Then, $Q_{q_i}(p_i)$ is an unbiased estimator of $p_i$.
Proof. Let $s_i = \frac{t}{q_i}$, $b_i = \mathrm{rem}(p_i, s_i)$, and $u_i = s_i - b_i$. Then, we have
$$\mathbb{E}[Q_{q_i}(p_i)] = \frac{u_i}{s_i}(p_i - b_i) + \frac{b_i}{s_i}(p_i + u_i) \quad \text{(see Figure 9)} \quad = p_i.$$
Lemma 2. For arbitrary $t > 0$ and parameter $p_i \in [-t, t]$, let $s_i = \frac{t}{q_i}$, $b_i = \mathrm{rem}(p_i, s_i)$, and $u_i = s_i - b_i$. Then, $\mathrm{Var}(Q_{q_i}(p_i)) = u_i b_i$.
Proof.
$$\mathrm{Var}(Q_{q_i}(p_i)) = \mathbb{E}\big[(Q_{q_i}(p_i) - \mathbb{E}[Q_{q_i}(p_i)])^2\big] = \mathbb{E}\big[(Q_{q_i}(p_i) - p_i)^2\big] \quad \text{(Lemma 1)}$$
$$= \frac{b_i}{s_i}u_i^2 + \frac{u_i}{s_i}b_i^2 \quad \text{(see Figure 9)} \quad = u_i b_i.$$
Lemma 3. Assume that parameters $p_1 \ldots p_K$ are sampled from $U[-t, t]$ for arbitrary $t > 0$. Then, $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = \frac{t^2}{6}\sum_{i=1}^K \frac{w_i^2}{q_i^2}$.
Proof.
$$\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e_p)] = \frac{1}{2t}\int_{-t}^{t} \cdots \frac{1}{2t}\int_{-t}^{t} \mathrm{Var}\Big(\sum_{i=1}^K w_i Q_{q_i}(p_i) - p\Big)\, dp_1 \cdots dp_K$$
$$= \frac{1}{t}\int_{0}^{t} \cdots \frac{1}{t}\int_{0}^{t} \mathrm{Var}\Big(\sum_{i=1}^K w_i Q_{q_i}(p_i) - p\Big)\, dp_1 \cdots dp_K \quad \text{(symmetry of } Q_{q_i}(p_i) \text{ w.r.t. negation)}$$
$$= \frac{1}{t^K}\int_{0}^{t} \cdots \int_{0}^{t} \sum_{i=1}^K w_i^2\, \mathrm{Var}(Q_{q_i}(p_i))\, dp_1 \cdots dp_K \quad \text{(mutual independence of } Q_{q_i}(p_i)\ \forall i)$$
$$= \frac{1}{t^K}\sum_{i=1}^K \int_{0}^{t} \cdots \int_{0}^{t} w_i^2\, \mathrm{Var}(Q_{q_i}(p_i))\, dp_1 \cdots dp_K \quad \text{(exchangeability of finite sums and integrals)}$$
$$= \frac{1}{t^K}\sum_{i=1}^K t^{K-1}\int_{0}^{t} w_i^2\, \mathrm{Var}(Q_{q_i}(p_i))\, dp_i = \frac{1}{t}\sum_{i=1}^K w_i^2 \int_{0}^{t} \mathrm{Var}(Q_{q_i}(p_i))\, dp_i$$
$$= \frac{1}{t}\sum_{i=1}^K w_i^2 \int_{0}^{t} u_i b_i\, dp_i \quad \text{(Lemma 2)} \quad = \frac{1}{t}\sum_{i=1}^K w_i^2\, q_i \int_{0}^{s_i} u_i b_i\, dp_i \quad (s_i\text{-periodicity of } u_i \text{ and } b_i)$$
$$= \frac{1}{t}\sum_{i=1}^K w_i^2\, q_i \int_{0}^{s_i} (s_i - p_i)\, p_i\, dp_i = \frac{1}{6t}\sum_{i=1}^K w_i^2\, q_i s_i^3 = \frac{t^2}{6}\sum_{i=1}^K \frac{w_i^2}{q_i^2}.$$
Lemma 4. Let $Q$ be a fixed-point quantizer. Assume that parameters $p_1 \ldots p_K$ are sampled from $U[-t, t]$ for arbitrary $t > 0$. Then, $\min_{q_1 \ldots q_K} \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)]$ subject to $Q = \sum_{i=1}^K q_i$ is minimized by $q_i = Q\, \frac{w_i^{2/3}}{\sum_{k=1}^K w_k^{2/3}}$.
Proof.
Define
$$f(q) = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)], \qquad g(q) = \sum_{i=1}^K q_i, \qquad L(q) = f(q) - \lambda g(q) \quad \text{(Lagrangian)}.$$
Any (local) minimum $\hat{q}$ satisfies $\nabla L(\hat{q}) = 0$, i.e.,
$$\nabla\, \frac{t^2}{6}\sum_{i=1}^K \frac{w_i^2}{q_i^2} - \lambda \nabla \sum_{i=1}^K q_i = 0 \ \wedge\ \sum_{i=1}^K q_i = Q \quad \text{(Lemma 3)}$$
$$\iff \forall i = 1 \ldots K.\ \frac{t^2}{-3}\,\frac{w_i^2}{q_i^3} = \lambda \ \wedge\ \sum_{i=1}^K q_i = Q$$
$$\iff \forall i = 1 \ldots K.\ q_i = \sqrt[3]{\frac{t^2}{-3\lambda}\, w_i^2} \ \wedge\ \sum_{i=1}^K q_i = Q$$
$$\implies \forall i = 1 \ldots K.\ q_i = Q\, \frac{w_i^{2/3}}{\sum_{j=1}^K w_j^{2/3}}.$$

B.1 PROOF OF THEOREM 1
Proof. Using Lemma 4, it is straightforward to show that for any $V$, $\min_{q_1 \ldots q_K} \sum_{i=1}^K q_i$ subject to $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = V$ is minimized by $q_i = C w_i^{2/3}$ for the unique $C \in \mathbb{R}_{>0}$ that satisfies $\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = V$. Then, taking $V = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^q_p)]$ and $C = \sqrt{\frac{a}{b}}$ (see Theorem 1), we do indeed get
$$\mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^{q_1 \ldots q_K}_p)] = \frac{t^2}{6}\sum_{i=1}^K \frac{w_i^2}{(C w_i^{2/3})^2} \quad \text{(Lemma 3)}$$
$$= \frac{1}{C^2}\,\frac{t^2}{6}\sum_{i=1}^K w_i^{2/3} = \frac{\sum_{j=1}^K w_j^2 / q^2}{\sum_{j=1}^K w_j^{2/3}} \cdot \frac{t^2}{6}\sum_{i=1}^K w_i^{2/3} = \frac{t^2}{6}\sum_{j=1}^K \frac{w_j^2}{q^2} = \mathbb{E}_{p_1 \ldots p_K}[\mathrm{Var}(e^q_p)] \quad \text{(Lemma 3)}.$$
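The closed form of Lemma 3, and hence the variance matching behind Theorem 1, can be checked numerically with a short Monte Carlo simulation. The following is our own illustration, not part of the paper.

import numpy as np

rng = np.random.default_rng(0)

def fixed_point_quantize(p, q, t):
    # Stochastic fixed-point quantizer: split [0, t] into q intervals of width
    # s = t / q and round |p| stochastically to an endpoint of its interval.
    s = t / q
    low = np.floor(np.abs(p) / s) * s
    up = rng.random(np.shape(p)) < (np.abs(p) - low) / s
    return np.sign(p) * (low + up * s)

K, t = 4, 1.0
q = np.array([1, 2, 4, 8])
w = np.array([0.4, 0.3, 0.2, 0.1])

var_mc, trials, reps = 0.0, 1000, 300
for _ in range(trials):
    p = rng.uniform(-t, t, K)
    agg = np.array([fixed_point_quantize(p, q, t) @ w for _ in range(reps)])
    var_mc += agg.var()          # variance of the weighted aggregate for this p
var_mc /= trials                 # expectation over p_1 ... p_K

var_closed = t ** 2 / 6 * np.sum(w ** 2 / q ** 2)   # Lemma 3
print(var_mc, var_closed)        # the two values agree up to sampling noise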
1. What is the main contribution of the paper regarding Federated Learning (FL) algorithms? 2. What are the strengths of the proposed method, particularly in terms of its adaptive quantization techniques? 3. What are the weaknesses of the paper, especially regarding mathematical notation, definitions, and conflicts with previous works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the paper's experimental studies or results?
Summary Of The Paper Review
Summary Of The Paper The paper tries to reduce the communication cost of Federated Learning (FL) algorithms. The authors introduce the doubly-adaptive quantization algorithm (DAdaQuant), which adopts two quantization techniques: (1) time-adaptive quantization; (2) client-adaptive quantization. The empirical studies also show that DAdaQuant can improve client-server compression. Review The strengths: 1. The paper develops a new communication-efficient quantized FL method; in particular, client-adaptive quantization is considered for the first time for FL algorithms. 2. The paper has done extensive experiments to show the improvement of the proposed methods. The weaknesses: 1. Some math notations used in this paper miss definitions. For example, the definition of $\mathbb{E}_{p_1,\ldots,p_K}[\mathrm{Var}(Q_q(p))]$ is missing. 2. The quantization $Q_q(p)$ used in this paper, based on Section 3.3 (QUANTIZATION WITH FEDERATED QSGD), is "$Q_q(p)$ then returns the sign of $p$ and $|p|$ rounded to one of the endpoints". This quantization is not an unbiased estimator of $p$, which conflicts with the claim on page 14. 3. The paper introduces the expected variance of an accumulation of quantized parameters, $\mathbb{E}[\mathrm{Var}(\sum Q(p))]$, as a measure of the performance of a quantized FL algorithm and tries to minimize it in Theorem 1. The authors should explain more why it is a good measure. Why not give the theoretical analysis in the following logic line: convergence guarantee -> get rounds $T$ given $\epsilon$ -> get computational complexity and communication cost? 4. In Algorithm 1, RunClient($c_k$) is missing between line 8 and line 9.
ICLR
Title EyeDAS: Securing Perception of Autonomous Cars Against the Stereoblindness Syndrome
Abstract
The ability to detect whether an object is a 2D or 3D object is extremely important in autonomous driving, since a detection error can have life-threatening consequences, endangering the safety of the driver, passengers, pedestrians, and others on the road. Methods proposed to distinguish between 2D and 3D objects (e.g., liveness detection methods) are not suitable for autonomous driving, because they are object dependent or do not consider the constraints associated with autonomous driving (e.g., the need for real-time decision-making while the vehicle is moving). In this paper, we present EyeDAS, a novel few-shot learning-based method aimed at securing an object detector (OD) against the threat posed by the stereoblindness syndrome (i.e., the inability to distinguish between 2D and 3D objects). We evaluate EyeDAS's real-time performance using 2,000 objects extracted from seven YouTube video recordings of street views taken by a dash cam from the driver's seat perspective. When applying EyeDAS to seven state-of-the-art ODs as a countermeasure, EyeDAS was able to reduce the 2D misclassification rate from 71.42-100% to 2.4% with a 3D misclassification rate of 0% (TPR of 1.0). Also, EyeDAS outperforms the baseline method and achieves an AUC of over 0.999.

1 Introduction
After years of research and development, automobile technology is rapidly approaching the point at which human drivers can be replaced, as commercial cars are now capable of supporting semi-autonomous driving. To create a reality that consists of commercial semi-autonomous cars, scientists had to develop the computerized driver intelligence required to: (1) continuously create a virtual perception of the physical surroundings (e.g., detect pedestrians, road signs, cars, etc.), (2) make decisions, and (3) perform the corresponding action (e.g., notify the driver, turn the wheel, stop the car). While computerized driver intelligence brought semi-autonomous driving to new heights in terms of safety (1), recent incidents have shown that semi-autonomous cars suffer from the stereoblindness syndrome: they react to 2D objects as if they were 3D objects due to their inability to distinguish between these two types of objects. This fact threatens autonomous car safety, because a 2D object (e.g., an image of a car, dog, or person) in a nearby advertisement that is misdetected as a real object can trigger a reaction from a semi-autonomous car (e.g., cause it to stop in the middle of the road), as shown in Fig. 1. Such undesired reactions may endanger drivers, passengers, and nearby pedestrians as well. As a result, there is a need to secure semi-autonomous cars against the perceptual challenge caused by the stereoblindness syndrome.

The perceptual challenge caused by the stereoblindness syndrome stems from object detectors' (which obtain data from cars' video cameras) misclassification of 2D objects. One might argue that the stereoblindness syndrome can be addressed by adopting a sensor fusion approach: by cross-correlating data from the video cameras with data obtained by sensors aimed at detecting depth (e.g., ultrasonic sensors, radar). However, due to safety concerns, a "safety first" policy is implemented in autonomous vehicles, which causes them to consider a detected object as a real object even when it is detected by a single sensor, without additional validation from another sensor (2; 3). This is also demonstrated in Fig. 1, which shows how a Tesla's autopilot triggers a sudden stop due to the misdetection of a 2D object as a real object, despite the fact that Teslas are equipped with radar, a set of ultrasonic sensors, and a set of front-facing video cameras. In addition, while various methods have used liveness detection algorithms to detect whether an object is 2D or 3D (4; 5; 6), the proposed methods do not provide the functionality required to distinguish between 2D and 3D objects in an autonomous driving setup, because they are object dependent (they cannot generalize between different objects, e.g., cars and pedestrians) and do not take into account the real-time constraints associated with autonomous driving. As a result, there is a need for dedicated functionality that validates the detections of video camera based object detectors and considers the constraints of autonomous driving.

In this paper, we present EyeDAS, a committee of models that validates objects detected by the on-board object detector. EyeDAS aims to secure a single channel object detector that obtains data from a video camera and provides a solution to the stereoblindness syndrome, i.e., it distinguishes between 2D and 3D objects, while taking the constraints of autonomous driving (both safety and real-time constraints) into account. EyeDAS can be deployed on existing advanced driver-assistance systems (ADASs) without the need for additional sensors. EyeDAS is based on few-shot learning and consists of four lightweight unsupervised models, each of which utilizes a unique feature extraction method and outputs a 3D confidence score. Finally, a meta-classifier uses the output of the four models to determine whether the given object is a 2D or 3D object.

We evaluate EyeDAS using a dataset collected from seven YouTube video recordings of street views taken by a dash cam from the driver's seat perspective; the 2D objects in the dataset were extracted from various billboards that appear in the videos. When applying EyeDAS to seven state-of-the-art ODs as a countermeasure, EyeDAS was able to reduce the 2D misclassification rate from 71.42-100% to 2.4% with a 3D misclassification rate of 0% (TPR of 1.0). We also show that EyeDAS outperforms the baseline method and achieves an AUC of over 0.999.

In this research we make the following contributions: (1) we present a practical method for securing object detectors against the stereoblindness syndrome that meets the constraints of autonomous driving (safety and real-time constraints), and (2) we show that the method can be applied using few-shot learning, can be used to detect whether an inanimate object is a 2D or 3D object (i.e., distinguish a real car from an advertisement containing an image of a car), and can generalize to different types of objects and between cities.

The remainder of this paper is structured as follows: In Section 2, we review related work. In Section 3, we present EyeDAS and explain its architecture, design considerations, and each expert in the committee of models. In Section 4, we evaluate EyeDAS's performance under the constraints of autonomous driving, based on various YouTube video recordings taken by a dash cam from several places around the world. In Section 5 we discuss the limitations of EyeDAS, and in Section 6, we present a summary.
2 Related Work
The ability to detect whether an object is a 2D or 3D object is extremely important in autonomous driving, since a detection error can have life-threatening consequences, endangering the safety of the driver, passengers, pedestrians, and others on the road. Without this capability, Tesla's autopilot was unintentionally triggered, causing the car to: (1) continuously slam on the brakes in response to a print advertisement containing a picture of a person that appeared on a bus (7), and (2) stop in response to a billboard advertisement that contained a stop sign (8). Moreover, attackers can exploit the absence of this capability and intentionally trigger: (1) Tesla's autopilot to suddenly stop the car in the middle of a road in response to a stop sign embedded in an advertisement on a digital billboard (2), and (2) Mobileye 630 to issue false notifications regarding a projected road sign (9).

The need to detect whether an object is 2D or 3D is also important for authentication systems (e.g., face recognition systems), where the identity of a user can be spoofed using a printed picture of the user. Various methods have been suggested for liveness detection (4; 5; 6); however, the two primary disadvantages of the proposed methods are that they: (1) fail to generalize to other objects (e.g., distinguish between a real car and a picture of a car), since they mainly rely on dedicated features associated with humans (4) (e.g., eye movements (5), facial vein map (6)), which makes them object dependent; or (2) have high false negative rates for pictures of objects that were not taken from a specific orientation, angle, or position (e.g., they fail to detect liveness if the picture of the person was taken from the back). As a result, these methods are not suitable for autonomous driving.

3 EyeDAS
The requirements for a method used to secure the perception of autonomous cars against the stereoblindness syndrome are as follows. The method must be capable of: (1) operating under the constraints of autonomous driving; (2) securing an object detector that obtains data from a single video camera, because a few commercial ADASs, including Mobileye 630 PRO, rely on a single video camera without any additional sensors; and (3) utilizing just a small amount of training data, since the small number of 2D objects in each geographical area necessitates a method with high performance and minimal training so that it can generalize to different types of objects and between geographical locations.

3.1 Architecture
Fig. 2 provides an overview of EyeDAS; whenever an object is detected by the vehicle's image recognition model, it is tracked during t consecutive frames sampled at a frequency of f frames per second (FPS), cropped from each frame, and serially passed to EyeDAS. EyeDAS then predicts whether the object is a 2D object (e.g., an image of a person) or a 3D object (e.g., a real person).

Let $x = (x_1, \ldots, x_{t-1}, x_t)$ be a time series of t identical RGB objects cropped from t consecutive frames, where each object is centered. To predict whether an object is 2D or 3D, we could build a supervised machine learning model which receives x, which consists of images of an object to classify, and predicts whether the object detected is 2D or 3D.
However, such an approach would make the machine learning model reliant on specific features and thus would not generalize to objects extracted when the vehicle is traveling in different locations or at different speeds, or when the vehicle is approaching the object from different angles or distances. To avoid this bias, we utilize the committee of experts approach used in machine learning applications (10), in which there is an ensemble of models, each of which has a different perspective of interpreting the incoming data. By combining different perspectives, we (1) create a more resilient classifier that performs well even in cases where one aspect fails to capture the evidence, and (2) reduce the false alarm rate by focusing the classifier on just the relevant input features. EyeDAS consists of an ensemble of unsupervised models (experts), each of which outputs a 3D confidence score, and a supervised model (meta-classifier), which produces the final outcome (decision) given the set of confidence scores.

Figure 2: EyeDAS's architecture. When an object is detected, (i) a time series of the cropped object images is transferred to EyeDAS, (ii) four types of unique features are processed by four unsupervised models (i.e., experts), resulting in four 3D confidence scores, and (iii) the meta-classifier model interprets the confidence scores and makes the final decision regarding the object (2D or 3D).

Although each of the proposed unsupervised models (experts) focuses on a different perspective, they have a common property: given x, which consists of images of an object to classify as 2D or 3D, each model measures a difference between each two consecutive elements in the series. In this study, we show that the combination of a proper feature extraction method together with the proper distance metric (applied between each two consecutive elements) is a good basis for building such a classifier. In addition, basing decisions on a distance observed between two consecutive elements in a given series allows EyeDAS to generalize; this approach minimizes dependency on object types or geographical locations. In addition, EyeDAS finds the optimal balance between time to decision and classification accuracy; we compare EyeDAS utilizing t > 1 object frames to a state-of-the-art image classifier that is designed to process a single frame at a time.

From the perspective of a software module like EyeDAS, a 3D object itself is not expected to change significantly within a short period of time, even in cases in which the 3D object is a human, animal, or vehicle (i.e., a real object). Therefore, it is not trivial to find distance metrics that can detect statistically significant differences that: (1) allow accurate differentiation between 2D and 3D objects by considering just the object, and (2) can be computed quickly. Therefore, we suggest a time-efficient approach that considers objects entangled with their close surrounding background. Each of the proposed unsupervised models utilizes a unique feature extraction method and a corresponding distance metric; these models are designed to process image time series of any size t (t > 1). This property is crucial, since the exact time point at which there is a significant difference between two images (in cases in which the object detected is 3D) is unpredictable.
In addition, the images to analyze should be represented in such a way that ensures that a statistically significant difference can be efficiently obtained for 3D objects, while the error rate for 2D objects is minimized. In other words, considering the objects entangled with their close surrounding background, we are interested in feature extraction methods whose outcomes for 2D objects and 3D objects are statistically distinguishable within a short period of time.

3.2 Proposed Models
Our committee consists of four unsupervised models, each focusing on a different perspective (see Fig. 3 for a demonstration of each model's perspective); each model receives the time series of consecutive images of an object to classify, extracts features from each image, measures a type of difference between the features extracted from two consecutive images, and finally outputs a 3D confidence score. Additional details on the four models are provided below:

Blurring Model (B) - This model utilizes the automatic focus (auto-focus) capability of video cameras commonly used to build 3D cameras (11). Unlike 2D object images, for 3D object images the blurring effect differs and is applied alternately to the object and its surrounding background during the auto-focus process; this is reflected in a large amount of contrast, which is observed when examining the blurring maps (12) corresponding to the raw images. Thus, using the structural image similarity measure proposed by Wang et al. (13), the blurring model outputs the maximum value obtained by calculating the differences between each two consecutive blurring maps.

Sharpness Model (S) - This model utilizes the possible image sharpness-level instability observed in 3D object images due to the objects' movements; the sharpness model extracts the overall sharpness level (a numeric value) from each raw image received, using the method described by Bansal et al. (14), and outputs the maximum value obtained by calculating the differences between each two consecutive sharpness levels.

Color Model (C) - This model utilizes the possible movement expected in the surrounding environment of 3D objects; this movement is reflected in a large difference in the color distributions of the raw images. Thus, the color model extracts the color distribution from each raw image using a clustering algorithm (15), computes the size of the largest cluster observed, and then outputs the maximum value obtained by calculating the differences between each two consecutive elements.

Edge Model (E) - This model is based on the Sobel edge detector (16), a gradient-based method for estimating the first order derivatives of the image separately for the horizontal and vertical axes; these derivatives are not expected to change significantly for static objects like 2D objects. The edge model operates similarly to the blurring model except for one thing: given the raw images, the edge model extracts the edging maps instead of the blurring maps for comparison; extraction is performed using the method described by Gao et al. (17).

Meta-Classifier - To make a prediction as to whether or not an object is a 2D or 3D object, we combine the knowledge of the unsupervised models described above in a final prediction; we utilize a gradient boosting (GB) based binary classifier (18) trained on the outputs of the models.
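All four experts share the same scoring pattern: extract a feature per frame, then take the maximum distance between consecutive frames. The sketch below illustrates this pattern; the variance-of-Laplacian sharpness proxy is our own stand-in for the estimator of Bansal et al. (14), and all names are assumptions.

import cv2

def expert_score(frames, extract, distance):
    # Shared expert pattern: extract a feature per frame, then output the
    # maximum distance between each two consecutive frames in the series.
    feats = [extract(f) for f in frames]
    return max(distance(a, b) for a, b in zip(feats, feats[1:]))

def sharpness(img):
    # Stand-in sharpness estimate: variance of the Laplacian of the grayscale image.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# e.g., score_S = expert_score(frames, sharpness, lambda a, b: abs(a - b))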
We choose the GB algorithm for the meta-classifier since it can capture nonlinear function dependencies; in our case, the above unsupervised models complement each other, and their output scores form a nonlinear relationship. The reader can use the following link to download the code that implements the method (the link will be added after the review; the project zip file has already been shared during paper submission).

4 Evaluation
The experiments described in this section were designed in recognition of the needs of autonomous driving in the real world; to ensure safety, a solution needs to both accurately distinguish between 2D and 3D objects and make a fast decision. The experiments are aimed at evaluating: (1) the performance of each expert in the committee, (2) the performance of the entire committee, (3) the improvement in the false positive rate of ODs when EyeDAS is applied for validation, (4) the practicality of EyeDAS in real-time environments (in terms of computational resources and speed), and (5) the ability to generalize between objects (i.e., humans, animals, and vehicles) and geographical locations (i.e., cities). All the experiments described in this section, including the speed and memory benchmark (described in Section 4.4), were conducted on a 2.9 GHz Intel Core i7-10700 with 32 GB RAM. The machine's operating system was Windows 10.

4.1 Experiment Setup
Dataset. Since some ADASs (e.g., Mobileye 630 PRO) rely on a single channel camera, all of the data collected for our experiments was obtained from single-channel cameras; we utilized seven YouTube video recordings [1] of street views taken by a dash cam from the driver's seat perspective. The distribution of the dataset represents the real distribution of 2D and 3D objects encountered by autonomous driving in the real world: 2,000 RGB objects (i.e., humans, animals, and vehicles) were extracted from driving view videos taken in seven cities: New York (NY, USA), San Francisco (CA, USA), Dubai (United Arab Emirates), Miami (FL, USA), London (UK), Los Angeles (CA, USA), and George Town (Singapore), of which approximately 95% are 3D objects and 5% are 2D objects extracted from billboards. The objects were extracted and cropped using the highest performing OD described by Redmon et al. (19); in our experiments, each input instance x associated with an object to classify contains up to five images taken at 200 millisecond intervals, starting at the time point at which an object was detected by the OD. Input instance $x_i$ is labeled as 'True' if the instance represents a 3D object and 'False' if the instance represents a 2D object.

Training. We denote $TR_{3D}$ and $TR_{2D}$ as two training sets representing 3D and 2D objects respectively. To avoid an unbalanced training set, we extend $TR_{2D}$ by applying known image data augmentation techniques (20) to randomly selected instances from $TR_{2D}$; we apply the rotation technique. Given the outputs calculated by the unsupervised models for each input instance (i.e., the 3D confidence scores), the final meta-classifier training was performed; the best hyperparameters for training were selected with the grid search algorithm (21), using the random shuffling split method and 10-fold cross-validation. We vary the number of estimators in the set of {20, 25, ..., 40} trees, while we change the maximum depth of the trees in the set of {2, 3, 4}. To select the best set of hyperparameters, we evaluated the meta-classifier's performance in terms of accuracy.
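A sketch of this hyperparameter selection with scikit-learn follows; the placeholder data stands in for the four expert scores and the 2D/3D labels, and the exact splitting configuration is our assumption.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

X = np.random.rand(220, 4)          # placeholder: four expert confidence scores
y = np.random.randint(0, 2, 220)    # placeholder labels (1 = 3D, 0 = 2D)

param_grid = {"n_estimators": [20, 25, 30, 35, 40], "max_depth": [2, 3, 4]}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # random shuffling split
search = GridSearchCV(GradientBoostingClassifier(), param_grid, cv=cv, scoring="accuracy")
search.fit(X, y)
meta_classifier = search.best_estimator_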
In all of the experiments described in this section: (1) $|TR_{3D}| = 150$ and $|TR_{2D}| = 70$, and (2) a decision was made within 200 milliseconds of the moment the object was detected by the OD (i.e., t = 2).

4.2 Results
Performance. In Fig. 5, we present the receiver operating characteristic (ROC) plot and the area under the ROC (AUC) for different combinations of the blurring (B), sharpness (S), color (C), and edge (E) models; the combination of all four is our proposed method. The ROC plot shows the true positive rate (TPR) and false positive rate (FPR) for every possible prediction threshold, and the AUC provides an overall performance measure of a classifier (AUC = 0.5: random guessing, AUC = 1: perfect performance).

[1] The recordings were taken in New York City (NY, USA), San Francisco (CA, USA), Dubai (UAE), Miami (FL, USA), London (UK), Los Angeles (CA, USA), and George Town (Singapore).

In our case, there is a critical trade-off that must be considered: the classification threshold. A lower threshold will decrease the FPR but often decrease the TPR as well. In Table 1, we provide the TPR and FPR of the models when the threshold is set at 0.5 and for the threshold value at which the TPR = 1. As can be seen, the use of all of the proposed models (B+S+C+E) in combination outperforms all other model combinations.

Figure 4: 2D misclassification rates using state-of-the-art object detectors.

In Table 2, we compare EyeDAS's performance to that of two other approaches: (1) baseline models based on state-of-the-art pre-trained image classifiers (i.e., VGG16, VGG19, and ResNet50 (22)), to which we apply the known transfer learning technique (23) by re-training these models, and (2) an optimized model similar to EyeDAS, except that it is based on a single expert model which considers the raw images as is (i.e., it computes the image similarity based distance (13) between the raw images directly, without extracting any features as EyeDAS does). In both cases, 220 instances were randomly selected for training; the distribution of 3D and 2D images is approximately 66.7% and 33.3% respectively, and the data augmentation technique described above was applied to avoid an unbalanced training set. Each baseline model was re-trained by (1) freezing all the layers except for the output layer, which was replaced by two trainable fully connected layers, (2) randomly picking 50 instances from the training set to serve as the validation set, (3) pre-processing the input data (i.e., image resizing and scaling), and finally (4) minimizing the categorical_crossentropy loss function on the validation set using the Adam optimizer. The first new layer contained 128 neurons with the relu activation function, and the second layer (output layer) contained two neurons with the softmax activation function. Increasing the first layer's neuron count beyond 128 resulted in poorer results due to overfitting. As can be seen, EyeDAS outperforms all of the abovementioned models.

Table 1: EyeDAS's TPR and FPR with different thresholds. Table 2: Comparison to other approaches.

Securing ODs with EyeDAS. To determine how effective EyeDAS is as part of a system, we evaluated 2D misclassification rates on seven state-of-the-art ODs (19; 24; 25; 26). The results are presented in Table 4; we present the 2D misclassification rates obtained for each detector before and after applying EyeDAS as a countermeasure and the impact of the different thresholds.
The results show that for most ODs, when the detector mistakenly classified a 2D object as real (i.e., 3D), EyeDAS provided effective mitigation, even for the threshold value at which the TPR = 1.

4.3 Generalization
We also evaluate how EyeDAS's approach generalizes to different geographical locations and even different types of objects.

Generalization to other geographical locations. To evaluate EyeDAS's geographical location generalization ability, we trained and tested the models on complementary groups of location types (i.e., cities). For training, we took minimum-sized city combinations in which there are at least 56 2D objects and at least 120 3D objects, since we observed that less than that amount is insufficient for training the models and thus no meaningful conclusions could be derived. In Table 4, we present the evaluation results obtained for different geographical location (i.e., city) combinations. As can be seen, EyeDAS is not dependent on the geographical location in which the training data was collected, as it is capable of using the knowledge gained during training to distinguish between 2D and 3D objects.

Generalization to other types of objects. To evaluate EyeDAS's object type generalization ability, we trained and tested the models on complementary groups of object types. We focused on humans (HU), animals (AN), and vehicles (VE). As previously done, for training we used minimum-sized object type combinations in which there are at least 56 2D objects and at least 120 3D objects. In Table 5, we present the evaluation results obtained for each object type combination. As can be seen, EyeDAS is independent of the types of objects that appear in the training set, as it is capable of using the knowledge gained during training to distinguish between 2D and 3D objects.

4.4 Speed and Memory Performance

5 Limitations
Despite the high performance achieved, EyeDAS has some limitations. First, technical factors may influence the performance of EyeDAS. For example, EyeDAS may perform poorly when image resolution is low or images are taken in low lighting. However, adequate lighting conditions are expected on roads on which autonomous vehicles can typically drive. Second, for 3D objects, if both the detected object and its close surrounding background are stationary (e.g., the object does not move or its surrounding background does not change), then EyeDAS may perform poorly. However, if the vehicle is moving or the camera's auto-focus process is operating, then EyeDAS's errors will likely decrease significantly, even for 3D stationary objects. If the vehicle is not moving, the concern for the safety of passengers does not arise.

6 Summary
In this paper, we proposed a novel countermeasure which can be used to secure object detectors (ODs) against the stereoblindness syndrome; this syndrome can have life-threatening consequences, endangering the safety of an autonomous car's driver, passengers, pedestrians, and others on the road. Designed in recognition of the needs of autonomous driving in the real world, the proposed method is based on few-shot learning, making it very practical in terms of collecting the required training set.
As presented in the previous sections, EyeDAS outperforms the baseline method and demonstrates excellent performance, specifically: (1) the ability to maintain a zero misclassification rate for 3D objects, (2) the ability to improve the performance of ODs, (3) practicality in real-time conditions, and (4) the ability to generalize between objects (i.e., humans, animals, and vehicles) and geographical locations (i.e., cities).
1. What is the main contribution of the paper regarding autonomous vehicle perception? 2. What are the strengths and weaknesses of the proposed approach, particularly in its reliance on hand-crafted features? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, especially regarding its relevance to ICLR? 4. Do you have any suggestions for improving the paper, such as incorporating end-to-end learned approaches?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper tackles an important problem facing autonomous vehicle perception - how to distinguish between a 3d object and a 2d representation of a 3d object? A useful insight of this paper is that having information of the target object over time leads to important features that help solve this problem. The paper proposes a model based on 4 heuristic hand-crafted features together with a gradient boosting meta classifier. Strengths And Weaknesses Strengths: Paper deals with an important problem This paper presents a useful insight that considering the appearance of the target object over time can provide a strong cue for this problem The paper shows that the technique performs well on the dataset they used. Weaknesses: This paper uses hand-crafted rather than learned features and does not present a convincing case that there is a good reason to do so. The baseline methods are straw-men that do not benefit from temporal information as the proposed method does. Clarity, Quality, Novelty And Reproducibility Since this method does not involve learning the representation used but instead uses hand-crafted features, I would argue that it is not directly relevant to ICLR. It may be more suited for a computer vision applications conference or workshop. Since the proposed method uses temporal differences between subsequent input frames, it should be compared to approaches that also have that benefit. Unfortunately the baseline methods all only use a single image as input. I believe the hand-crafted features used in this method could easily be learned with an e2e learned approach if the model being trained had subsequent frames as input. I think taking that approach would improve this paper and make it more relevant to ICLR.
ICLR
Title EyeDAS: Securing Perception of Autonomous Cars Against the Stereoblindness Syndrome Abstract The ability to detect whether an object is a 2D or 3D object is extremely important in autonomous driving, since a detection error can have lifethreatening consequences, endangering the safety of the driver, passengers, pedestrians, and others on the road. Methods proposed to distinguish between 2 and 3D objects (e.g., liveness detection methods) are not suitable for autonomous driving, because they are object dependent or do not consider the constraints associated with autonomous driving (e.g., the need for real-time decision-making while the vehicle is moving). In this paper, we present EyeDAS , a novel few-shot learning-based method aimed at securing an object detector (OD) against the threat posed by the stereoblindness syndrome (i.e., the inability to distinguish between 2D and 3D objects). We evaluate EyeDAS ’s real-time performance using 2,000 objects extracted from seven YouTube video recordings of street views taken by a dash cam from the driver’s seat perspective. When applying EyeDAS to seven stateof-the-art ODs as a countermeasure, EyeDAS was able to reduce the 2D misclassification rate from 71.42-100% to 2.4% with a 3D misclassification rate of 0% (TPR of 1.0). Also, EyeDAS outperforms the baseline method and achieves an AUC of over 0.999. 1 Introduction After years of research and development, automobile technology is rapidly approaching the point at which human drivers can be replaced, as commercial cars are now capable of supporting semi-autonomous driving. To create a reality that consists of commercial semi-autonomous cars, scientists had to develop the computerized driver intelligence required to: (1) continuously create a virtual perception of the physical surroundings (e.g., detect pedestrians, road signs, cars, etc.), (2) make decisions, and (3) perform the corresponding action (e.g., notify the driver, turn the wheel, stop the car). While computerized driver intelligence brought semi-autonomous driving to new heights in terms of safety (1), recent incidents have shown that semi-autonomous cars suffer from the stereoblindness syndrome: they react to 2D objects as if they were 3D objects due to their inability to distinguish between these two types of objects. This fact threatens autonomous car safety, because a 2D object (e.g., an image of a car, dog, person) in a nearby advertisement that is misdetected as a real object can trigger a reaction from a semi-autonomous car (e.g., cause it to stop in the middle of the road), as shown in Fig. 1. Such undesired reactions may endanger drivers, passengers, and nearby pedestrians as well. As a result, there is a need to secure semi-autonomous cars against the perceptual challenge caused by the stereoblindness syndrome. The perceptual challenge caused by the stereoblindness syndrome stems from object detectors’ (which obtain data from cars’ video cameras) misclassification of 2D objects. One might argue that the stereoblindness syndrome can be addressed by adopting a sensor fusion approach: by cross-correlating data from the video cameras with data obtained by sensors aimed at detecting depth (e.g., ultrasonic sensors, radar). However, due to safety concerns, a "safety first" policy is implemented in autonomous vehicles, which causes them to consider a detected object as a real object even when it is detected by a single sensor without additional validation from another sensor (2; 3). This is also demonstrated in Fig. 
1 which shows how a Tesla’s autopilot triggers a sudden stop due to the misdetection of a 2D object as a real object, despite the fact that Teslas are equipped with radar, a set of ultrasonic sensors, and a set of front-facing video cameras. In addition, while various methods have used liveness detection algorithms to detect whether an object is 2D/3D (4; 5; 6), the proposed methods do not provide the functionality required to distinguish between 2D/3D objects in an autonomous driving setup, because they are object dependent (they cannot generalize between different objects, e.g., cars and pedestrians) and do not take into account the real-time constraints associated with autonomous driving. As a result, there is a need for dedicated functionality that validates the detections of video camera based object detectors and considers the constraints of autonomous driving. In this paper, we present EyeDAS , a committee of models that validates objects detected by the on-board object detector. EyeDAS aims to secure a single channel object detector that obtains data from a video camera and provides a solution to the stereoblindness syndrome, i.e., distinguishes between 2 and 3D objects, while taking the constraints of autonomous driving (both safety and real-time constraints) into account. EyeDAS can be deployed on existing advanced driver-assistance systems (ADASs) without the need for additional sensors. EyeDAS is based on few-shot learning and consists of four lightweight unsupervised models, each of which utilizes a unique feature extraction method and outputs a 3D confidence score. Finally, a meta-classifier uses the output of the four models to determine whether the given object is a 2 or 3D object. We evaluate EyeDAS using a dataset collected from seven YouTube video recordings of street views taken by a dash cam from the driver’s seat perspective; the 2D objects in the dataset were extracted from various billboards that appear in the videos. When applying EyeDAS to seven state-of-the-art ODs as a countermeasure, EyeDAS was able to reduce the 2D misclassification rate from 71.42-100% to 2.4% with a 3D misclassification rate of 0% (TPR of 1.0). We also show that EyeDAS outperforms the baseline method and achieves an AUC of over 0.999. In this research we make the following contributions: (1) we present a practical method for securing object detectors against the stereoblindness syndrome that meets the constraints of autonomous driving (safety and real-time constraints), and (2) we show that the method can be applied using few-shot learning, can be used to detect whether an inanimate object is a 2D or 3D object (i.e., distinguishes between a real car from an advertisement containing an image of a car), and can generalize to different types of objects and between cities. The remainder of this paper is structured as follows: In Section 2, we review related work. In Section 3, we present EyeDAS , explain its architecture, design considerations, and each expert in the committee of models. In Section 4, we evaluate EyeDAS ’s performance under the constraints of autonomous driving, based on various YouTube video recordings taken by a dash cam from several places around the world. In Section 5 we discuss the limitations of EyeDAS , and in Section 6, we present a summary. 
2 Related Work The ability to detect whether an object is a 2D or 3D object is extremely important in autonomous driving, since a detection error can have life-threatening consequences, endangering the safety of the driver, passengers, pedestrians, and others on the road. Without this capability, Tesla’s autopilot was unintentionally triggered, causing the car to: (1) continuously slam on the brakes in response to a print advertisement containing a picture of a person that appeared on a bus (7), and (2) stop in response to a billboard advertisement that contained a stop sign (8). Moreover, attackers can exploit the absence of this capability and intentionally trigger: (1) Tesla’s autopilot to suddenly stop the car in the middle of a road in response to a stop sign embedded in an advertisement on a digital billboard (2), and (2) Mobileye 630 to issue false notifications regarding a projected road sign (9). The ability to detect whether an object is 2D or 3D is also important for authentication systems (e.g., face recognition systems), where the identity of a user can be spoofed using a printed picture of the user. Various methods have been suggested for liveness detection (4; 5; 6); however, the two primary disadvantages of these methods are that they: (1) fail to generalize to other objects (e.g., to distinguish between a real car and a picture of a car), since they mainly rely on dedicated features associated with humans (4) (e.g., eye movements (5), facial vein map (6)), which makes them object dependent; or (2) have high false negative rates for pictures of objects that were not taken from a specific orientation, angle, or position (e.g., they fail to detect liveness if the picture of the person was taken from the back). As a result, these methods are not suitable for autonomous driving. 3 EyeDAS The requirements for a method used to secure the perception of autonomous cars against the stereoblindness syndrome are as follows. The method must be capable of: (1) operating under the constraints of autonomous driving, (2) securing an object detector that obtains data from a single video camera, because a few commercial ADASs, including Mobileye 630 PRO, rely on a single video camera without any additional sensors, and (3) utilizing just a small amount of training data; the fact that there may be just a small number of 2D objects in each geographical area necessitates a method with high performance and minimal training, so that it can generalize to different types of objects and between geographical locations. 3.1 Architecture Fig. 2 provides an overview of EyeDAS; whenever an object is detected by the vehicle’s image recognition model, it is tracked during t consecutive frames sampled at a frequency of f frames per second (FPS), cropped from each frame, and serially passed to EyeDAS. EyeDAS then predicts whether the object is a 2D object (e.g., an image of a person) or a 3D object (e.g., a real person). Let x = (x1, ..., xt−1, xt) be a time series of t RGB crops of the same object taken from t consecutive frames, where each crop is centered on the object. To predict whether an object is 2D or 3D, we could build a supervised machine learning model which receives x, which consists of images of the object to classify, and predicts whether the detected object is 2D or 3D.
However, such an approach would make the machine learning model reliant on specific features and thus would not generalize to objects extracted when the vehicle is traveling in different locations or at different speeds, or when the vehicle is approaching the object from different angles or distances. To avoid this bias, we utilize the committee of experts approach used in machine learning applications (10), in which there is an ensemble of models, each of which has a different perspective for interpreting the incoming data. By combining different perspectives, we (1) create a more resilient classifier that performs well even in cases where one aspect fails to capture the evidence, and (2) reduce the false alarm rate by focusing the classifier on just the relevant input features; EyeDAS consists of an ensemble of unsupervised models (experts), each of which outputs a 3D confidence score, and a supervised model (meta-classifier), which produces the final outcome (decision) given the set of confidence scores.
Figure 2: EyeDAS’s architecture. When an object is detected, (i) a time series of the cropped object images is transferred to EyeDAS, (ii) four types of unique features are processed by four unsupervised models (i.e., experts), resulting in four 3D confidence scores, and (iii) the meta-classifier model interprets the confidence scores and makes the final decision regarding the object (2D or 3D).
Although each of the proposed unsupervised models (experts) focuses on a different perspective, they have a common property: given x, which consists of images of an object to classify as 2D or 3D, each model measures a difference between each two consecutive elements in the series; in this study, we show that the combination of a proper feature extraction method together with the proper distance metric (applied between each two consecutive elements) is a good basis for building such a classifier. In addition, basing decisions on a distance observed between two consecutive elements in a given series allows EyeDAS to generalize; this approach minimizes dependency on object types or geographical locations. In addition, EyeDAS finds the optimal balance between time to decision and classification accuracy; we compare EyeDAS, which utilizes t > 1 object frames, to a state-of-the-art image classifier that is designed to process a single frame at a time. From the perspective of a software module like EyeDAS, a 3D object itself is not expected to change significantly within a short period of time, even in cases in which the 3D object is a human, animal, or vehicle (i.e., a real object). Therefore, it is not trivial to find distance metrics that can detect statistically significant differences that: (1) allow accurate differentiation between 2D and 3D objects by considering just the object, and (2) can be computed quickly. Instead, we suggest a time-efficient approach that considers objects entangled with their close surrounding background. Each of the proposed unsupervised models utilizes a unique feature extraction method and a corresponding distance metric; these models are designed to process image time series of any size t (t > 1). This property is crucial, since the exact time point at which there is a significant difference between two images (in cases in which the detected object is 3D) is unpredictable.
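The shared expert structure just described lends itself to a compact sketch. The following snippet is our reconstruction of that structure, not the authors’ code; the names expert_score, extract, and distance are ours, and a concrete extractor and metric must be plugged in.

```python
# A minimal sketch of the common expert structure described above: each
# expert is a feature extractor plus a distance metric applied to every
# pair of consecutive frames, and the 3D confidence score is the maximum
# consecutive distance. This is our reconstruction, not the authors' code.
from typing import Any, Callable, List

import numpy as np

def expert_score(frames: List[np.ndarray],
                 extract: Callable[[np.ndarray], Any],
                 distance: Callable[[Any, Any], float]) -> float:
    """Return a 3D confidence score for a time series of object crops.

    frames: t >= 2 RGB crops of the tracked object from consecutive frames.
    extract: per-image feature extraction (e.g., a blur map, sharpness level).
    distance: dissimilarity between two consecutive feature representations.
    """
    feats = [extract(f) for f in frames]
    # A 3D object entangled with its moving background should produce at
    # least one large consecutive difference within the observation window.
    return max(distance(a, b) for a, b in zip(feats, feats[1:]))
```

Because only consecutive differences are used, the same function works for any series length t > 1, which matches the property discussed above.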
In addition, the images to analyze should be represented in such a way that ensures that a statistically significant difference can be efficiently obtained for 3D objects, while the error rate for 2D objects is minimized. In other words, considering the objects entangled with their close surrounding background, we are interested in feature extraction methods whose outcomes for 2D objects and 3D objects are statistically distinguishable within a short period of time. 3.2 Proposed Models Our committee consists of four unsupervised models, each focusing on a different perspective (see Fig. 3 for a demonstration of each model’s perspective); each model receives the time series of consecutive images of an object to classify, extracts features from each image, measures a type of difference between the features extracted from two consecutive images, and finally outputs a 3D confidence score. Additional details on the four models are provided below: Blurring Model (B) - This model utilizes the automatic focus (auto-focus) capability of video cameras commonly used to build 3D cameras (11). Unlike 2D object images, for 3D object images, the blurring effect differs and is applied alternately to the object and its surrounding background during the auto-focus process; this is reflected in a large amount of contrast, which is observed when examining the blurring maps (12) corresponding to the raw images. Thus, using the structural image similarity measure proposed by Wang et al. (13), the blurring model outputs the maximum value obtained by calculating the differences between each two consecutive blurring maps. Sharpness Model (S) - This model utilizes the possible image sharpness-level instability observed in 3D object images due to the objects’ movements; the sharpness model extracts the overall sharpness level (a numeric value) from each raw image received, using the method described by Bansal et al. (14), and outputs the maximum value obtained by calculating the differences between each two consecutive sharpness levels. Color Model (C) - This model utilizes the movement expected in the surrounding environment of 3D objects; this movement is reflected in a large difference in the color distributions of the raw images. Thus, the color model extracts the color distribution from each raw image using a clustering algorithm (15), computes the size of the largest cluster observed, and then outputs the maximum value obtained by calculating the differences between each two consecutive elements. Edge Model (E) - This model is based on the Sobel edge detector (16), a gradient-based method for estimating the first-order derivatives of the image separately for the horizontal and vertical axes; these derivatives are not expected to change significantly for static objects like 2D objects. The edge model operates similarly to the blurring model except for one thing: given the raw images, the edge model extracts edge maps instead of blurring maps for comparison; extraction is performed using the method described by Gao et al. (17).
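To make three of these experts concrete, the sketch below instantiates them with common stand-ins: variance of the Laplacian as a scalar sharpness proxy, OpenCV’s Sobel operator for the edge maps, SSIM for the structural similarity measure of (13), and k-means for the color clustering. The paper’s exact extractors come from its references (13; 14; 15; 17); the substitutes here are our assumptions for a self-contained illustration.

```python
# Illustrative stand-ins for three of the four experts. The paper's exact
# extractors come from its references (sharpness: (14), edge maps: (17),
# color clustering: (15), structural similarity: (13)); the substitutes
# below are common approximations chosen by us, not the authors' code.
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim
from sklearn.cluster import KMeans

def sharpness_level(img: np.ndarray) -> float:
    # Variance of the Laplacian is a standard scalar sharpness proxy.
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def edge_map(img: np.ndarray) -> np.ndarray:
    # Sobel first-order derivatives along both axes, combined into one map.
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    return np.hypot(gx, gy)

def map_distance(m1: np.ndarray, m2: np.ndarray) -> float:
    # The paper compares maps with a structural similarity measure;
    # 1 - SSIM turns that similarity into a distance.
    rng = float(max(m1.max(), m2.max())) or 1.0
    return 1.0 - ssim(m1, m2, data_range=rng)

def largest_cluster_fraction(img: np.ndarray, k: int = 5) -> float:
    # Cluster the pixel colors and report the relative size of the largest
    # cluster, as a summary of the image's color distribution.
    pixels = img.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=4, random_state=0).fit_predict(pixels)
    return float(np.bincount(labels).max() / labels.size)

# Usage with the generic expert_score above (frames is a hypothetical list
# of RGB crops of the tracked object):
# s = expert_score(frames, sharpness_level, lambda a, b: abs(a - b))
# e = expert_score(frames, edge_map, map_distance)
# c = expert_score(frames, largest_cluster_fraction, lambda a, b: abs(a - b))
```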
Meta-Classifier - To make a prediction as to whether an object is a 2D or 3D object, we combine the knowledge of the unsupervised models described above into a final prediction; we utilize a gradient boosting (GB) based binary classifier (18) trained on the outputs of the models. We choose the GB algorithm, since it can capture nonlinear function dependencies; in our case, the above unsupervised models complement each other, and their output scores form a nonlinear relationship. The reader can use the following link to download the code that implements the method (the link will be added after the review; the project zip file has already been shared during paper submission). 4 Evaluation The experiments described in this section were designed in recognition of the needs of autonomous driving in the real world; to ensure safety, a solution needs to both accurately distinguish between 2D and 3D objects and make a fast decision. The experiments are aimed at evaluating: (1) the performance of each expert in the committee, (2) the performance of the entire committee, (3) the improvement in the false positive rate of ODs when EyeDAS is applied for validation, (4) the practicality of EyeDAS in real-time environments (in terms of computational resources and speed), and (5) the ability to generalize between objects (i.e., humans, animals, and vehicles) and geographical locations (i.e., cities). All the experiments described in this section, including the speed and memory benchmark (described in Section 4.4), were conducted on a 2.9 GHz Intel Core i7-10700 with 32GB of RAM, running Windows 10. 4.1 Experiment Setup Dataset. Since some ADASs (e.g., Mobileye 630 PRO) rely on a single-channel camera, all of the data collected for our experiments was obtained from single-channel cameras; we utilized seven YouTube video recordings1 of street views taken by a dash cam from the driver’s seat perspective. The distribution of the dataset represents the real distribution of 2D and 3D objects encountered in autonomous driving in the real world: 2,000 RGB objects (i.e., humans, animals, and vehicles) were extracted from driving view videos taken in seven cities: New York (NY, USA), San Francisco (CA, USA), Dubai (United Arab Emirates), Miami (FL, USA), London (UK), Los Angeles (CA, USA), and George Town (Singapore), of which approximately 95% are 3D objects and 5% are 2D objects extracted from billboards. The objects were extracted and cropped using the highest performing OD described by Redmon et al. (19); in our experiments, each input instance x associated with an object to classify contains up to five images taken at 200 millisecond intervals, starting at the time point at which the object was detected by the OD. Input instance xi is labeled as ‘True’ if the instance represents a 3D object and ‘False’ if the instance represents a 2D object. Training. We denote TR3D and TR2D as two training sets representing 3D and 2D objects, respectively. To avoid an unbalanced training set, we extend TR2D by applying known image data augmentation techniques (20) to randomly selected instances from TR2D; specifically, we apply rotation. Given the outputs calculated by the unsupervised models for each input instance (i.e., the 3D confidence scores), the final meta-classifier training was performed; the best hyperparameters for training were selected using the grid search algorithm (21) with a random shuffling split and 10-fold cross-validation. We vary the number of estimators over the set {20, 25, ..., 40} trees and the maximum depth of the trees over the set {2, 3, 4}. To select the best set of hyperparameters, we evaluated the meta-classifier’s performance in terms of accuracy.
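The meta-classifier training just described maps directly onto scikit-learn; the sketch below follows the stated search space ({20, 25, ..., 40} estimators, depth {2, 3, 4}) and the shuffled 10-fold cross-validation. The data is a random placeholder, and the variable names (X_scores, y) are ours.

```python
# A sketch of the meta-classifier training described above: gradient
# boosting over the four expert scores, with a grid search over the stated
# hyperparameter ranges and shuffled 10-fold cross-validation. Placeholder
# data is used; this is an illustration, not the authors' code.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, KFold

# X_scores: (n_samples, 4) array of [B, S, C, E] confidence scores;
# y: 1 for 3D objects, 0 for 2D objects.
rng = np.random.default_rng(0)
X_scores = rng.random((220, 4))
y = rng.integers(0, 2, size=220)

param_grid = {
    "n_estimators": list(range(20, 45, 5)),  # {20, 25, 30, 35, 40}
    "max_depth": [2, 3, 4],
}
cv = KFold(n_splits=10, shuffle=True, random_state=0)
search = GridSearchCV(GradientBoostingClassifier(), param_grid,
                      scoring="accuracy", cv=cv)  # accuracy, as stated above
search.fit(X_scores, y)
meta_classifier = search.best_estimator_
```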
In all of the experiments described in this section: (1) |TR3D| = 150 and |TR2D| = 70, and (2) a decision was made within 200 milliseconds from the moment the object was detected by the OD (i.e., t = 2). 4.2 Results Performance. In Fig. 5, we present the receiver operating characteristic (ROC) plot and the area under the ROC (AUC) for different combinations of the blurring (B), sharpness (S), color (C), and edge (E) models; the combination of all four is our proposed method. The ROC plot shows the true positive rate (TPR) and false positive rate (FPR) for every possible prediction threshold, and the AUC provides an overall performance measure of a classifier (AUC=0.5: random guessing, AUC=1: perfect performance). In our case, there is a critical trade-off that must be considered: the classification threshold. A lower threshold will decrease the FPR but often decrease the TPR as well. In Table 1, we provide the TPR and FPR of the models when the threshold is set at 0.5 and for the threshold value at which the TPR=1. As can be seen, the use of all of the proposed models (B+S+C+E) in combination outperforms all other model combinations. In Table 2, we compare EyeDAS’s performance to that of two other approaches: (1) baseline models based on state-of-the-art pre-trained image classifiers (i.e., VGG16, VGG19, and ResNet50 (22)); we utilize the well-known transfer learning technique (23) by re-training these models; and (2) an optimized model similar to EyeDAS, except that it is based on a single expert model which considers the raw images as is (i.e., it computes the image similarity based distance (13) between the raw images directly, without extracting any features as EyeDAS does). In both cases, 220 instances were randomly selected for training; the distribution of 3D and 2D images is approximately 66.7% and 33.3%, respectively, and the data augmentation technique described above was applied to avoid an unbalanced training set. Each baseline model was re-trained by (1) freezing all the layers except for the output layer, which was replaced by two trainable fully connected layers, (2) randomly picking 50 instances from the training set to serve as the validation set, (3) pre-processing the input data (i.e., image resizing and scaling), and finally (4) minimizing the categorical_crossentropy loss function on the validation set using the Adam optimizer. The first new layer contained 128 neurons and used the ReLU activation function, and the second layer (the output layer) contained two neurons and used the softmax activation function. Increasing the first layer’s neuron count beyond 128 led to poorer results due to overfitting. As can be seen, EyeDAS outperforms all of the abovementioned models.
Footnote 1: New York City (USA, NY), San Francisco (USA, CA), Dubai (UAE), Miami (USA, FL), London (UK), Los Angeles (USA, CA), George Town (Singapore).
Table 1: EyeDAS’s TPR and FPR with different thresholds.
Table 2: Comparison to other approaches.
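For reference, the baseline re-training procedure described above can be sketched in Keras as follows. The paper specifies the frozen backbone, the two new layers (128-unit ReLU and 2-unit softmax), the loss, and the optimizer; the input size, pooling choice, and use of ImageNet weights are our assumptions.

```python
# A sketch of the baseline described above: a frozen pre-trained backbone
# (VGG16 here) whose classification head is replaced by two trainable
# layers, trained with Adam on categorical cross-entropy. Input shape,
# pooling, and weights are our assumptions, not specified in the paper.
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze all pre-trained layers

model = keras.Sequential([
    base,
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),  # 2D vs. 3D
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(...) would then be run on the 220 selected instances, with 50
# of them held out as the validation set, as described above.
```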
Securing ODs with EyeDAS. To determine how effective EyeDAS is as part of a system, we evaluated 2D misclassification rates on seven state-of-the-art ODs (19; 24; 25; 26). The results are presented in Fig. 4; we present the 2D misclassification rates obtained for each detector before and after applying EyeDAS as a countermeasure, as well as the impact of the different thresholds. The results show that for most ODs, when the detector mistakenly classified a 2D object as real (i.e., 3D), EyeDAS provided effective mitigation, even for the threshold value at which the TPR=1.
Figure 4: 2D misclassification rates using state-of-the-art object detectors.
4.3 Generalization We also evaluate how EyeDAS’s approach generalizes to different geographical locations and even to different types of objects. Generalization to other geographical locations. To evaluate EyeDAS’s geographical location generalization ability, we trained and tested the models on complementary groups of location types (i.e., cities). For training, we took minimum-sized city combinations containing at least 56 2D objects and at least 120 3D objects, since we observed that less than that amount is insufficient for training the models and thus no meaningful conclusions could be derived. In Table 4, we present the evaluation results obtained for different geographical location (i.e., city) combinations. As can be seen, EyeDAS is not dependent on the geographical location in which the training data was collected, as it is capable of using the knowledge gained during training to distinguish between 2D and 3D objects. Generalization to other types of objects. To evaluate EyeDAS’s object type generalization ability, we trained and tested the models on complementary groups of object types. We focused on humans (HU), animals (AN), and vehicles (VE). As previously done, for training we used minimum-sized object type combinations containing at least 56 2D objects and at least 120 3D objects. In Table 5, we present the evaluation results obtained for each object type combination. As can be seen, EyeDAS is independent of the type of object that appears in the training set, as it is capable of using the knowledge gained during training to distinguish between 2D and 3D objects. 4.4 Speed and Memory Performance 5 Limitations Despite the high performance achieved, EyeDAS has some limitations. First, technical factors may influence the performance of EyeDAS. For example, EyeDAS may perform poorly when image resolution is low or images are taken in low lighting. However, adequate lighting conditions are expected on roads on which autonomous vehicles can typically drive. Second, for 3D objects, if both the detected object and its close surrounding background are stationary (e.g., the object does not move or its surrounding background does not change), then EyeDAS may perform poorly. However, if the vehicle is moving or the camera’s auto-focus process is operating, then EyeDAS’s errors will likely decrease significantly, even for 3D stationary objects. If the vehicle is not moving, the concern for passenger safety does not exist. 6 Summary In this paper, we proposed a novel countermeasure which can be used to secure object detectors (ODs) against the stereoblindness syndrome; this syndrome can have life-threatening consequences, endangering the safety of an autonomous car’s driver, passengers, pedestrians, and others on the road. Designed in recognition of the needs of autonomous driving in the real world, the proposed method is based on few-shot learning, making it very practical in terms of collecting the required training set.
As presented in the previous sections, EyeDAS outperforms the baseline method and demonstrates excellent performance, specifically: (1) the ability to maintain a zero misclassification rate for 3D objects, (2) the ability to improve the performance of ODs, (3) practicality under real-time conditions, and (4) the ability to generalize between objects (i.e., humans, animals, and vehicles) and geographical locations (i.e., cities).
1. What is the focus and contribution of the paper regarding securing object detectors?
2. What are the strengths of the proposed approach, particularly in addressing a practical problem?
3. What are the weaknesses of the paper, especially regarding its experimental evaluation?
4. Do you have any concerns about the suitability of few-shot learning for the problem of stereoblindness syndrome?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This work proposes a few-shot learning-based method named EyeDAS for securing object detectors against the stereoblindness syndrome (i.e., the inability to distinguish between 2D and 3D objects). It leverages low-level image features to solve the problem. Four unsupervised models (for blurring, edge, color, and sharpness) respectively predict 3D confidence scores, and a meta-classifier interprets the confidence scores and makes the final decision. EyeDAS is evaluated on a dataset collected from seven YouTube video recordings.
Strengths And Weaknesses
Strengths: The proposed problem of the stereoblindness syndrome, i.e., the inability to distinguish between 2D and 3D objects, is interesting and meaningful. It’s a practical problem we should solve to improve the robustness of autonomous vehicles. The paper is well written and organized, and it’s easy to follow.
Weaknesses: The major problem is the evaluation. Firstly, the whole experiment dataset is collected from only seven YouTube video recordings, with hundreds of annotated objects for training and testing. This experimental benchmark is not convincing enough because of the small quantity of data. Moreover, the baseline result used for comparison is poor and not reasonable: the baseline model does not converge well with so little data, and if trained with enough data, I think the baseline model could achieve comparable performance. The proposed method is a few-shot method, but in driving scenarios, 2D/3D objects are common, so it’s easy to get more annotated data for training the model; few-shot methods are not practical for the problem of the stereoblindness syndrome. Finally, the proposed method is not lightweight enough: a 200 ms latency is too high for real-time applications.
Clarity, Quality, Novelty And Reproducibility
The proposed method is novel and clear. The authors provide the code, and reproducibility is guaranteed.
ICLR
1. What is the main contribution of the paper regarding autonomous driving?
2. What are the strengths and weaknesses of the proposed network architecture?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the limitations of the proposed approach in terms of its ability to address the problem of scene planarity from a monocular sequence of two images?
5. Do you have any questions or concerns about the choice of images used in the proposed algorithm or the comparison with other architectures?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper is interested in discriminating between a 2D view of a 3D object and an actual 3D object, in the context of autonomous driving. The goal is to eliminate false detections of objects that are actually pictures of objects, thereby increasing navigation safety. Although the authors do not express it this way, the problem is essentially to ascertain whether the scene is planar in a monocular sequence of two images. Although the problem is interesting, it is cast in a very vague, non-geometrical way, which results in an ad hoc algorithm without justification, especially with regard to geometry.
Strengths And Weaknesses
The main weakness of the paper is that the problem is not cast as a 3D computer vision problem, but as a vague 2D recognition problem. While it is clear that the problem is related to estimating scene planarity from the motion of a monocular camera, the problem is cast as a purely 2D task. The proposed network architecture, relying on four simple models (Blurring/Sharpness/Color/Edge), is not well justified and is not related to the fundamentals of the problem. This makes it impossible to see how or why it would work. The choice of 5 images over a 1-second time lapse is not properly justified, especially when the proposed algorithm uses only 2 images to make a decision. The time interval should be chosen in accordance with the motion of the camera, to ensure that enough parallax is present. Comparing this architecture, which uses 2 images, with other architectures that use only one image (and are trained on a different problem) seems somewhat unfair. But maybe I misunderstood that result (Figure 4).
Clarity, Quality, Novelty And Reproducibility
The paper is generally well written, with few errors to fix. One such error is the mix-up between Table 4 and Figure 4. Even if the proposed approach is novel, it is not justified and seems very random... The Blurring model is said to be related to autofocus, but the moving cameras have fixed focus, so why mention focus? Sharpness, Color, and Edge are similarly unjustified. The authors do not provide a single convincing example, real or synthetic, that would demonstrate that these models are justified.
ICLR
Title EyeDAS: Securing Perception of Autonomous Cars Against the Stereoblindness Syndrome Abstract The ability to detect whether an object is a 2D or 3D object is extremely important in autonomous driving, since a detection error can have lifethreatening consequences, endangering the safety of the driver, passengers, pedestrians, and others on the road. Methods proposed to distinguish between 2 and 3D objects (e.g., liveness detection methods) are not suitable for autonomous driving, because they are object dependent or do not consider the constraints associated with autonomous driving (e.g., the need for real-time decision-making while the vehicle is moving). In this paper, we present EyeDAS , a novel few-shot learning-based method aimed at securing an object detector (OD) against the threat posed by the stereoblindness syndrome (i.e., the inability to distinguish between 2D and 3D objects). We evaluate EyeDAS ’s real-time performance using 2,000 objects extracted from seven YouTube video recordings of street views taken by a dash cam from the driver’s seat perspective. When applying EyeDAS to seven stateof-the-art ODs as a countermeasure, EyeDAS was able to reduce the 2D misclassification rate from 71.42-100% to 2.4% with a 3D misclassification rate of 0% (TPR of 1.0). Also, EyeDAS outperforms the baseline method and achieves an AUC of over 0.999. 1 Introduction After years of research and development, automobile technology is rapidly approaching the point at which human drivers can be replaced, as commercial cars are now capable of supporting semi-autonomous driving. To create a reality that consists of commercial semi-autonomous cars, scientists had to develop the computerized driver intelligence required to: (1) continuously create a virtual perception of the physical surroundings (e.g., detect pedestrians, road signs, cars, etc.), (2) make decisions, and (3) perform the corresponding action (e.g., notify the driver, turn the wheel, stop the car). While computerized driver intelligence brought semi-autonomous driving to new heights in terms of safety (1), recent incidents have shown that semi-autonomous cars suffer from the stereoblindness syndrome: they react to 2D objects as if they were 3D objects due to their inability to distinguish between these two types of objects. This fact threatens autonomous car safety, because a 2D object (e.g., an image of a car, dog, person) in a nearby advertisement that is misdetected as a real object can trigger a reaction from a semi-autonomous car (e.g., cause it to stop in the middle of the road), as shown in Fig. 1. Such undesired reactions may endanger drivers, passengers, and nearby pedestrians as well. As a result, there is a need to secure semi-autonomous cars against the perceptual challenge caused by the stereoblindness syndrome. The perceptual challenge caused by the stereoblindness syndrome stems from object detectors’ (which obtain data from cars’ video cameras) misclassification of 2D objects. One might argue that the stereoblindness syndrome can be addressed by adopting a sensor fusion approach: by cross-correlating data from the video cameras with data obtained by sensors aimed at detecting depth (e.g., ultrasonic sensors, radar). However, due to safety concerns, a "safety first" policy is implemented in autonomous vehicles, which causes them to consider a detected object as a real object even when it is detected by a single sensor without additional validation from another sensor (2; 3). This is also demonstrated in Fig. 
which shows how a Tesla's autopilot triggers a sudden stop due to the misdetection of a 2D object as a real object, despite the fact that Teslas are equipped with radar, a set of ultrasonic sensors, and a set of front-facing video cameras. In addition, while various methods have used liveness detection algorithms to detect whether an object is 2D/3D (4; 5; 6), the proposed methods do not provide the functionality required to distinguish between 2D/3D objects in an autonomous driving setup, because they are object dependent (they cannot generalize between different objects, e.g., cars and pedestrians) and do not take into account the real-time constraints associated with autonomous driving. As a result, there is a need for dedicated functionality that validates the detections of video camera based object detectors and considers the constraints of autonomous driving. In this paper, we present EyeDAS, a committee of models that validates objects detected by the on-board object detector. EyeDAS aims to secure a single-channel object detector that obtains data from a video camera and provides a solution to the stereoblindness syndrome, i.e., distinguishes between 2D and 3D objects, while taking the constraints of autonomous driving (both safety and real-time constraints) into account. EyeDAS can be deployed on existing advanced driver-assistance systems (ADASs) without the need for additional sensors. EyeDAS is based on few-shot learning and consists of four lightweight unsupervised models, each of which utilizes a unique feature extraction method and outputs a 3D confidence score. Finally, a meta-classifier uses the output of the four models to determine whether the given object is a 2D or 3D object. We evaluate EyeDAS using a dataset collected from seven YouTube video recordings of street views taken by a dash cam from the driver's seat perspective; the 2D objects in the dataset were extracted from various billboards that appear in the videos. When applying EyeDAS to seven state-of-the-art ODs as a countermeasure, EyeDAS was able to reduce the 2D misclassification rate from 71.42-100% to 2.4% with a 3D misclassification rate of 0% (TPR of 1.0). We also show that EyeDAS outperforms the baseline method and achieves an AUC of over 0.999. In this research we make the following contributions: (1) we present a practical method for securing object detectors against the stereoblindness syndrome that meets the constraints of autonomous driving (safety and real-time constraints), and (2) we show that the method can be applied using few-shot learning, can be used to detect whether an inanimate object is a 2D or 3D object (i.e., distinguishing a real car from an advertisement containing an image of a car), and can generalize to different types of objects and between cities. The remainder of this paper is structured as follows: in Section 2, we review related work. In Section 3, we present EyeDAS, explain its architecture, design considerations, and each expert in the committee of models. In Section 4, we evaluate EyeDAS's performance under the constraints of autonomous driving, based on various YouTube video recordings taken by a dash cam from several places around the world. In Section 5, we discuss the limitations of EyeDAS, and in Section 6, we present a summary.
2 Related Work
The ability to detect whether an object is a 2D or 3D object is extremely important in autonomous driving, since a detection error can have life-threatening consequences, endangering the safety of the driver, passengers, pedestrians, and others on the road. Without this capability, Tesla's autopilot was unintentionally triggered, causing the car to: (1) continuously slam on the brakes in response to a print advertisement containing a picture of a person that appeared on a bus (7), and (2) stop in response to a billboard advertisement that contained a stop sign (8). Moreover, attackers can exploit the absence of this capability and intentionally trigger: (1) Tesla's autopilot to suddenly stop the car in the middle of a road in response to a stop sign embedded in an advertisement on a digital billboard (2), and (2) Mobileye 630 to issue false notifications regarding a projected road sign (9). The need to detect whether an object is 2D or 3D is also important for authentication systems (e.g., face recognition systems), where the identity of a user can be spoofed using a printed picture of the user. Various methods have been suggested for liveness detection (4; 5; 6); however, the two primary disadvantages of the proposed methods are that they: (1) fail to generalize to other objects (e.g., distinguish between a real car and a picture of a car), since they mainly rely on dedicated features associated with humans (4) (e.g., eye movements (5), facial vein map (6)), which makes them object dependent; or (2) have high false negative rates for pictures of objects that were not taken from a specific orientation, angle, or position (e.g., they fail to detect liveness if the picture of the person was taken from the back). As a result, these methods are not suitable for autonomous driving.
3 EyeDAS
The requirements for a method used to secure the perception of autonomous cars against the stereoblindness syndrome are as follows. The method must be capable of: (1) operating under the constraints of autonomous driving, (2) securing an object detector that obtains data from a single video camera, because a few commercial ADASs, including Mobileye 630 PRO, rely on a single video camera without any additional sensors, and (3) utilizing just a small amount of training data; the fact that there may be only a small number of 2D objects in each geographical area necessitates a method with high performance and minimal training so that it can generalize to different types of objects and between geographical locations.
3.1 Architecture
Fig. 2 provides an overview of EyeDAS; whenever an object is detected by the vehicle's image recognition model, it is tracked during t consecutive frames sampled at a frequency of f frames per second (FPS), cropped from each frame, and serially passed to EyeDAS. EyeDAS then predicts whether the object is a 2D (e.g., an image of a person) or 3D object (e.g., a real person). Let x = (x_1, ..., x_{t-1}, x_t) be a time series of t identical RGB objects cropped from t consecutive frames, where each object is centered. To predict whether an object is 2D or 3D, we could build a supervised machine learning model which receives x, which consists of images of an object to classify, and predicts whether the object detected is 2D or 3D.
However, such an approach would make the machine learning model reliant on specific features and thus would not generalize to objects extracted when the vehicle is traveling in different locations or at different speeds, or when the vehicle is approaching the object from different angles or distances. To avoid this bias, we utilize the committee of experts approach used in machine learning applications (10), in which there is an ensemble of models, each of which has a different perspective of interpreting the incoming data. By combining different perspectives, we (1) create a more resilient classifier that performs well even in cases where one aspect fails to capture the evidence, and (2) reduce the false alarm rate by focusing the classifier on just the relevant input features. EyeDAS consists of an ensemble of unsupervised models (experts), each of which outputs a 3D confidence score, and a supervised model (meta-classifier), which produces the final outcome (decision) given the set of confidence scores.
Figure 2: EyeDAS's architecture. When an object is detected, (i) a time series of the cropped object images is transferred to EyeDAS, (ii) four types of unique features are processed by four unsupervised models (i.e., experts), resulting in four 3D confidence scores, and (iii) the meta-classifier model interprets the confidence scores and makes the final decision regarding the object (2D or 3D).
Although each of the proposed unsupervised models (experts) focuses on a different perspective, they have a common property: given x, which consists of images of an object to classify as 2D or 3D, each model measures a difference between each two consecutive elements in the series. In this study, we show that the combination of a proper feature extraction method together with the proper distance metric (applied between each two consecutive elements) is a good basis for building such a classifier. In addition, basing decisions on a distance observed between two consecutive elements in a given series allows EyeDAS to generalize; this approach minimizes dependency on object types or geographical locations. In addition, EyeDAS finds the optimal balance between time to decision and classification accuracy; we compare EyeDAS utilizing t > 1 object frames to a state-of-the-art image classifier that is designed to process a single frame at a time. From the perspective of a software module like EyeDAS, a 3D object itself is not expected to change significantly within a short period of time, even in cases in which the 3D object is a human, animal, or vehicle (i.e., a real object). Therefore, it is not trivial to find distance metrics that can detect statistically significant differences that: (1) allow accurate differentiation between 2D and 3D objects by considering just the object, and (2) can be computed quickly. Therefore, we suggest a time-efficient approach that considers objects entangled with their close surrounding background. Each of the proposed unsupervised models utilizes a unique feature extraction method and a corresponding distance metric; these models are designed to process image time series of any size t (t > 1). This property is crucial, since the exact time point at which there is a significant difference between two images (in cases in which the object detected is 3D) is unpredictable.
In addition, the images to analyze should be represented in such a way that ensures that a statistically significant difference can be efficiently obtained for 3D objects, while the error rate for 2D objects is minimized. In other words, considering the objects entangled with their close surrounding background, we are interested in feature extraction methods whose outcomes for 2D objects and 3D objects are statistically distinguishable within a short period of time.
3.2 Proposed Models
Our committee consists of four unsupervised models, each focusing on a different perspective (see Fig. 3 for a demonstration of each model's perspective); each model receives the time series of consecutive images of an object to classify, extracts features from each image, measures a type of difference between the features extracted from two consecutive images, and finally outputs a 3D confidence score. Additional details on the four models are provided below:
Blurring Model (B) - This model utilizes the automatic focus (auto-focus) capability of video cameras commonly used to build 3D cameras (11). Unlike 2D object images, for 3D object images, the blurring effect differs and is applied alternately on the object and its surrounding background during the auto-focus process; this is reflected in a large amount of contrast, which is observed when examining the blurring maps (12) corresponding to the raw images. Thus, using the structural image similarity measure proposed by Wang et al. (13), the blurring model outputs the maximum value obtained by calculating the differences between each two consecutive blurring maps.
Sharpness Model (S) - This model utilizes the possible image sharpness-level instability observed in 3D object images due to the objects' movements; the sharpness model extracts the overall sharpness level (a numeric value) from each raw image received, using the method described by Bansal et al. (14), and outputs the maximum value obtained by calculating the differences between each two consecutive sharpness levels.
Color Model (C) - This model utilizes the possible movement expected in the surrounding environment of 3D objects; this movement is reflected in a large difference in the color distributions of the raw images. Thus, the color model extracts the color distributions from each raw image using a clustering algorithm (15), computes the size of the largest cluster observed, and then outputs the maximum value obtained by calculating the differences between each two consecutive elements.
Edge Model (E) - This model is based on the Sobel edge detector (16), a gradient-based method for estimating the first-order derivatives of the image separately for the horizontal and vertical axes; these derivatives are not expected to change significantly for static objects like 2D objects. The edge model operates similarly to the blurring model except for one thing: given the raw images, the edge model extracts the edging maps instead of the blurring maps for comparison; extraction is performed using the method described by Gao et al. (17).
Meta-Classifier - To make a prediction as to whether an object is a 2D or 3D object, we combine the knowledge of the unsupervised models described above in a final prediction; we utilize a gradient boosting (GB) based binary classifier (18) trained on the outputs of the models.
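To make the committee concrete, the sketch below instantiates the shared pattern — featurize each frame, then take the maximum distance between consecutive features — with OpenCV-based stand-ins. These extractors (Gaussian-residual blur maps, Laplacian-variance sharpness, k-means color clusters, Sobel edge maps) are illustrative assumptions and not the specific methods cited above ((12)-(17)); frames are assumed to be equally sized, centered crops:

```python
import numpy as np
import cv2  # OpenCV; the extractors below are stand-ins for the cited methods


def gray(img):
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)


def max_consecutive_diff(frames, extract, dist):
    # Generic expert: featurize every frame, then output the maximum
    # distance between any two consecutive feature representations.
    feats = [extract(f) for f in frames]
    return max(dist(a, b) for a, b in zip(feats, feats[1:]))


def blur_map(img):       # Blurring model (B): residual of a Gaussian blur
    g = gray(img)
    return np.abs(g - cv2.GaussianBlur(g, (9, 9), 0))


def sharpness(img):      # Sharpness model (S): scalar Laplacian-variance proxy
    return cv2.Laplacian(gray(img), cv2.CV_32F).var()


def color_cluster(img, k=8):  # Color model (C): share of the largest color cluster
    pixels = img.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(pixels, k, None, criteria, 3, cv2.KMEANS_RANDOM_CENTERS)
    return np.bincount(labels.ravel()).max() / float(len(labels))


def edge_map(img):       # Edge model (E): Sobel gradient magnitude
    g = gray(img)
    return np.hypot(cv2.Sobel(g, cv2.CV_32F, 1, 0), cv2.Sobel(g, cv2.CV_32F, 0, 1))


l2 = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
abs_diff = lambda a, b: abs(a - b)


def expert_scores(frames):
    # One 3D confidence score per expert; the meta-classifier consumes these.
    return [
        max_consecutive_diff(frames, blur_map, l2),             # B
        max_consecutive_diff(frames, sharpness, abs_diff),      # S
        max_consecutive_diff(frames, color_cluster, abs_diff),  # C
        max_consecutive_diff(frames, edge_map, l2),             # E
    ]
```

Each expert thus reduces the whole time series to a single scalar, which keeps the downstream meta-classifier input low-dimensional and object agnostic.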
We choose the GB algorithm since it can capture nonlinear function dependencies; in our case, the above unsupervised models complement each other, and their output scores form a nonlinear relationship. The reader can use the following link to download the code that implements the method (the link will be added after the review; the project zip file has already been shared during paper submission).
4 Evaluation
The experiments described in this section were designed in recognition of the needs of autonomous driving in the real world; to ensure safety, a solution needs to both accurately distinguish between 2D and 3D objects and make a fast decision. The experiments are aimed at evaluating: (1) the performance of each expert in the committee, (2) the performance of the entire committee, (3) the improvement in the false positive rate of ODs when EyeDAS is applied for validation, (4) the practicality of EyeDAS in real-time environments (in terms of computational resources and speed), and (5) the ability to generalize between objects (i.e., humans, animals, and vehicles) and geographical locations (i.e., cities). All the experiments described in this section, including the speed and memory benchmark (described in Section 4.4), were conducted on a 2.9 GHz Intel Core i7-10700 and 32GB RAM. The machine's operating system was Windows 10.
4.1 Experiment Setup
Dataset. Since some ADASs (e.g., Mobileye 630 PRO) rely on a single-channel camera, all of the data collected for our experiments was obtained from single-channel cameras; we utilized seven YouTube video recordings of street views taken by a dash cam from the driver's seat perspective. The distribution of the dataset represents the real distribution of 2D and 3D objects encountered in autonomous driving in the real world: 2,000 RGB objects (i.e., humans, animals, and vehicles) were extracted from driving view videos taken in seven cities: New York (NY, USA), San Francisco (CA, USA), Dubai (United Arab Emirates), Miami (FL, USA), London (UK), Los Angeles (CA, USA), and George Town (Singapore), of which approximately 95% are 3D objects and 5% are 2D objects extracted from billboards. The objects were extracted and cropped using the highest performing OD described by Redmon et al. (19); in our experiments, each input instance x associated with an object to classify contains up to five images taken at 200-millisecond intervals, starting at the time point at which the object was detected by the OD. Input instance x_i is labeled as 'True' if the instance represents a 3D object and 'False' if the instance represents a 2D object.
Training. We denote TR_3D and TR_2D as two training sets representing 3D and 2D objects, respectively. To avoid an unbalanced training set, we extend TR_2D by applying known image data augmentation techniques (20) to randomly selected instances from TR_2D; we apply the rotation technique. Given the outputs calculated by the unsupervised models for each input instance (i.e., the 3D confidence scores), the final meta-classifier training was performed; the best hyperparameters for training were selected with the grid search algorithm (21), using the random shuffling split method and 10-fold cross-validation. We vary the number of estimators in the set of {20, 25, ..., 40} trees, while we change the maximum depth of the trees in the set of {2, 3, 4}. To select the best set of hyperparameters, we evaluated the meta-classifier's performance in terms of accuracy.
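A minimal scikit-learn sketch of this training step is shown below. The synthetic score matrix is a placeholder for the experts' outputs, and GradientBoostingClassifier with GridSearchCV is one plausible realization of the cited GB and grid search algorithms, not necessarily the exact implementation used in the paper:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, KFold

# Placeholder training data: one row of four expert scores per object,
# labels True (3D) / False (2D), after rotation-based augmentation of TR_2D.
X = np.random.rand(220, 4)
y = np.random.rand(220) > 0.5

param_grid = {
    "n_estimators": list(range(20, 45, 5)),  # {20, 25, ..., 40} trees
    "max_depth": [2, 3, 4],
}
search = GridSearchCV(
    GradientBoostingClassifier(),
    param_grid,
    scoring="accuracy",                  # selection criterion described above
    cv=KFold(n_splits=10, shuffle=True), # random shuffling, 10-fold CV
)
search.fit(X, y)
meta_classifier = search.best_estimator_
```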
In all of the experiments described in this section: (1) |TR_3D| = 150 and |TR_2D| = 70, and (2) a decision was made within 200 milliseconds from the moment the object was detected by the OD (i.e., t = 2).
4.2 Results
Performance. In Fig. 5, we present the receiver operating characteristic (ROC) plot and the area under the ROC (AUC) for different combinations of the blurring (B), sharpness (S), color (C), and edge (E) models; the combination of all four is our proposed method. The ROC plot shows the true positive rate (TPR) and false positive rate (FPR) for every possible prediction threshold, and the AUC provides an overall performance measure of a classifier (AUC = 0.5: random guessing, AUC = 1: perfect performance). In our case, there is a critical trade-off that must be considered: the classification threshold. A lower threshold will decrease the FPR but often decrease the TPR as well. In Table 1, we provide the TPR and FPR of the models when the threshold is set at 0.5 and for the threshold value at which TPR = 1. As can be seen, the use of all of the proposed models (B+S+C+E) in combination outperforms all other model combinations.
Figure 4: 2D misclassification rates using state-of-the-art object detectors.
In Table 2, we compare EyeDAS's performance to that of two other approaches: (1) baseline models based on state-of-the-art pre-trained image classifiers (i.e., VGG16, VGG19, and ResNet50 (22)); we utilize the known transfer learning technique (23) by re-training these models, and (2) an optimized model similar to EyeDAS, except that it is based on a single expert model which considers the raw images as is (i.e., it computes the image-similarity-based distance (13) between the raw images directly, without extracting any features as EyeDAS does). In both cases, 220 instances were randomly selected for training; the distribution of 3D and 2D images is approximately 66.7% and 33.3%, respectively, and the data augmentation technique described above was applied to avoid an unbalanced training set. Each baseline model was re-trained by (1) freezing all the layers except for the output layer, which was replaced by two trainable fully connected layers, (2) randomly picking 50 instances from the training set to serve as the validation set, (3) pre-processing the input data (i.e., image resizing and scaling), and finally (4) minimizing the categorical_crossentropy loss function on the validation set using the Adam optimizer. The first new layer contained 128 neurons with the ReLU activation function, and the second (output) layer contained two neurons with the softmax activation function. Increasing the first layer's neuron count beyond 128 resulted in poorer results due to overfitting. As can be seen, EyeDAS outperforms all of the above-mentioned models.
Table 1: EyeDAS's TPR and FPR with different thresholds.
Table 2: Comparison to other approaches.
Securing ODs with EyeDAS. To determine how effective EyeDAS is as part of a system, we evaluated 2D misclassification rates on seven state-of-the-art ODs (19; 24; 25; 26). The results are presented in Table 4; we present the 2D misclassification rates obtained for each detector before and after applying EyeDAS as a countermeasure and the impact of the different thresholds.
The results show that for most ODs, when the detector mistakenly classified a 2D object as real (i.e., 3D), EyeDAS provided effective mitigation, even for the threshold value at which TPR = 1.
4.3 Generalization
We also evaluate how EyeDAS's approach generalizes to different geographical locations and even different types of objects.
Generalization to other geographical locations. To evaluate EyeDAS's geographical location generalization ability, we trained and tested the models on complementary groups of location types (i.e., cities). For training, we took minimum-sized city type combinations in which there are at least 56 2D objects and at least 120 3D objects, since we observed that less than that amount is insufficient for training the models and thus no meaningful conclusions could be derived. In Table 4, we present the evaluation results obtained for different geographical location (i.e., city) combinations. As can be seen, EyeDAS is not dependent on the geographical location in which the training data was collected, as it is capable of using the knowledge gained during training to distinguish between 2D and 3D objects.
Generalization to other types of objects. To evaluate EyeDAS's object type generalization ability, we trained and tested the models on complementary groups of object types. We focused on humans (HU), animals (AN), and vehicles (VE). As previously done, for training we used minimum-sized object type combinations in which there are at least 56 2D objects and at least 120 3D objects. In Table 5, we present the evaluation results obtained for each object type combination. As can be seen, EyeDAS is independent of the type of object appearing in the training set, as it is capable of using the knowledge gained during training to distinguish between 2D and 3D objects.
4.4 Speed and Memory Performance
5 Limitations
Despite the high performance achieved, EyeDAS has some limitations. First, technical factors may influence the performance of EyeDAS. For example, EyeDAS may perform poorly when image resolution is low or images are taken in low lighting. However, adequate lighting conditions are expected on roads on which autonomous vehicles can typically drive. Second, for 3D objects, if both the detected object and its close surrounding background are stationary (e.g., the object does not move or its surrounding background does not change), then EyeDAS may perform poorly. However, if the vehicle is moving or the camera's auto-focus process is operating, then EyeDAS's errors will likely decrease significantly, even for 3D stationary objects. If the vehicle is not moving, the concern for the safety of passengers does not exist.
6 Summary
In this paper, we proposed a novel countermeasure which can be used to secure object detectors (ODs) against the stereoblindness syndrome; this syndrome can have life-threatening consequences, endangering the safety of an autonomous car's driver, passengers, pedestrians, and others on the road. Designed in recognition of the needs of autonomous driving in the real world, the proposed method is based on few-shot learning, making it very practical in terms of collecting the required training set.
As presented in the previous sections, EyeDAS outperforms the baseline method and demonstrates excellent performance, specifically its: (1) ability to maintain a zero misclassification rate for 3D objects, (2) ability to improve the performance of ODs, (3) practicality in real-time conditions, and (4) ability to generalize between objects (i.e., humans, animals, and vehicles) and geographical locations (i.e., cities).
1. What is the focus and contribution of the paper regarding few-shot learning for object detection?
2. What are the strengths and weaknesses of the proposed EyeDAS method, particularly in its simplicity and empirical performance?
3. Do you have any concerns about the practical soundness of the paper's application problem?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or questions regarding the presentation style, evaluation metric, and dataset details?
Summary Of The Paper
In this paper, the authors proposed EyeDAS, a few-shot learning-based method to avoid the stereoblindness syndrome for object detection (i.e., wrongly detecting printed objects on a billboard/screen as if they were 3D). The proposed method works as a post-processing step. Once any object detection model detects an object with t (t > 1) frames, it applies four non-learning-based distance metrics (blurring, sharpness, color, and edge) to compute a "3D confidence score". It uses a gradient-boosted tree to classify whether the object is a 3D object. Experimentally, the authors show that the proposed method can achieve close to perfect classification performance on 7 videos (all recorded by a dash cam from the driver's seat perspective) collected on YouTube, outperforming deep-learning-based transfer learning methods.
Strengths And Weaknesses
Strengths:
- The paper is well-written and easy to understand.
- The method is simple, fast, and is shown with good empirical performance. The authors also conduct a number of ablation studies to show the generalizability and the effectiveness of each component of the proposed method.
Weaknesses:
- Missing details about the dataset and the labeling process: the experiments use a custom dataset collected on YouTube, and I think it would be better to provide more details about it. Specifically: each of the 7 videos is actually quite long (~an hour), but the resulting train/test datasets only have 2,000 RGB objects (line 206). What is the process of extracting the objects? Would about 100 (2,000 * 5%) non-3D objects be sufficient for evaluation? What is the process of labeling true 3D objects and false 3D objects?
- About the setting: this paper actually proposes a new application problem: few-shot planar/3D object classification. But I am not entirely convinced by its practical soundness in that I am not sure few-shot learning is necessary for such a setting, since planar/3D object classification can be object agnostic and can potentially have large enough data for training. A gradient-boosted method can work quite well on limited data with good feature engineering, but I am not sure it can be better than transfer learning methods with enough data. It would be good to also show some qualitative examples of the classification results.
- I am concerned about the minimal contribution: this paper is about a specific and new setting, and the proposed method is simple (a gradient-boosted tree with good feature engineering) and not new. Though I think it is good to have a simple method that works well, I think it should also offer new ideas/insights/understandings to the machine learning community. I do not see enough such contributions in this paper, and it reads more like a good technical report.
Minor:
- Styling of the tables: all tables in the paper are from screenshots, and I would suggest the authors follow the ICLR style guide (https://github.com/ICLR/Master-Template/raw/master/iclr2023.zip) to present the tables.
- About the evaluation metric: I was initially confused by the 2D misclassification rate and TPR in Table 1: if I understand correctly, TPR specifically refers to the true positive rate of 3D objects, and the 2D misclassification rate is computed as FP / (FP + TN) = FPR.
I suggest the authors clarify the metrics in the experiment section; it feels more natural to me to use more conventional names (https://en.wikipedia.org/wiki/Sensitivity_and_specificity).
- Some of the columns/rows in the tables are redundant, e.g., in Table 2 and Table 3 the TPR row under Threshold @ [TPR=1], and in Table 5 and Table 6 the TPR Threshold @ [TPR=1] column.
- The right image of Figure 1 might not be the right example: the stop sign is also a planar object. Can it be detected by the proposed method?
Clarity, Quality, Novelty And Reproducibility
Clarity: Good. This paper is well-written and easy to understand.
Quality: Fair. Please see the weakness section.
Novelty: Fair. Please see the weakness section.
Reproducibility: Unsure. Dataset and code are not provided.
ICLR
Title
Learning to Control PDEs with Differentiable Physics
Abstract
Predicting outcomes and planning interactions with the physical world are long-standing goals for machine learning. A variety of such tasks involves continuous physical systems, which can be described by partial differential equations (PDEs) with many degrees of freedom. Existing methods that aim to control the dynamics of such systems are typically limited to relatively short time frames or a small number of interaction parameters. We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames. We propose to split the problem into two distinct tasks: planning and control. To this end, we introduce a predictor network that plans optimal trajectories and a control network that infers the corresponding control parameters. Both stages are trained end-to-end using a differentiable PDE solver. We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs such as the incompressible Navier-Stokes equations.
1 INTRODUCTION
Intelligent systems that operate in the physical world must be able to perceive, predict, and interact with physical phenomena (Battaglia et al., 2013). In this work, we consider physical systems that can be characterized by partial differential equations (PDEs). PDEs constitute the most fundamental description of evolving systems and are used to describe every physical theory, from quantum mechanics and general relativity to turbulent flows (Courant & Hilbert, 1962; Smith, 1985). We aim to endow artificial intelligent agents with the ability to direct the evolution of such systems via continuous controls. Such optimal control problems have typically been addressed via iterative optimization. Differentiable solvers and the adjoint method enable efficient optimization of high-dimensional systems (Toussaint et al., 2018; de Avila Belbute-Peres et al., 2018; Schenck & Fox, 2018). However, direct optimization through gradient descent (single shooting) at test time is resource-intensive and may be difficult to deploy in interactive settings. More advanced methods exist, such as multiple shooting and collocation, but they commonly rely on modeling assumptions that limit their applicability, and still require computationally intensive iterative optimization at test time. Iterative optimization methods are expensive because they have to start optimizing from scratch and typically require a large number of iterations to reach an optimum. In many real-world control problems, however, agents have to repeatedly make decisions in specialized environments, and reaction times are limited to a fraction of a second. This motivates the use of data-driven models such as deep neural networks, which combine short inference times with the capacity to build an internal representation of the environment. We present a novel deep learning approach that can learn to represent solution manifolds for a given physical environment, and is orders of magnitude faster than iterative optimization techniques. The core of our method is a hierarchical predictor-corrector scheme that temporally divides the problem into easier subproblems. This enables us to combine models specialized to different time scales in order to control long sequences of complex high-dimensional systems.
We train our models using a differentiable PDE solver that can provide the agent with feedback of how interactions at any point in time affect the outcome. Our models learn to represent manifolds containing a large number of solutions, and can thereby avoid local minima that can trap classic optimization techniques. We evaluate our method on a variety of control tasks in systems governed by advection-diffusion PDEs such as the Navier-Stokes equations. We quantitatively evaluate the resulting sequences on how well they approximate the target state and how much force was exerted on the physical system. Our method yields stable control for significantly longer time spans than alternative approaches.
2 BACKGROUND
Physical problems commonly involve nonlinear PDEs, often with many degrees of freedom. In this context, several works have proposed methods for improving the solution of PDE problems (Long et al., 2018; Bar-Sinai et al., 2019; Hsieh et al., 2019) or used PDE formulations for unsupervised optimization (Raissi et al., 2018). Lagrangian fluid simulation has been tackled with regression forests (Ladicky et al., 2015), graph neural networks (Mrowca et al., 2018; Li et al., 2019), and continuous convolutions (Ummenhofer et al., 2020). Data-driven turbulence models were trained with MLPs (Ling et al., 2016). Fully-convolutional networks were trained for pressure inference (Tompson et al., 2017) and advection components were used in adversarial settings (Xie et al., 2018). Temporal updates in reduced spaces were learned via the Koopman operator (Morton et al., 2018). In a related area, deep networks have been used to predict chemical properties and the outcome of chemical reactions (Gilmer et al., 2017; Bradshaw et al., 2019). Differentiable solvers have been shown to be useful in a variety of settings. Degrave et al. (2019) and de Avila Belbute-Peres et al. (2018) developed differentiable simulators for rigid body mechanics. (See Popovic et al. (2000) for earlier work in computer graphics.) Toussaint et al. (2018) applied related techniques to manipulation planning. Specialized solvers were developed to infer protein structures (Ingraham et al., 2019), interact with liquids (Schenck & Fox, 2018), control soft robots (Hu et al., 2019), and solve inverse problems that involve cloth (Liang et al., 2019). Like ours, these works typically leverage the automatic differentiation of deep learning pipelines (Griewank & Walther, 2008; Maclaurin et al., 2015; Amos & Kolter, 2017; Mensch & Blondel, 2018; van Merriënboer et al., 2018; Chen et al., 2018; Bradbury et al., 2018; Paszke et al., 2019; Tokui et al., 2019). However, while the works above target Lagrangian solvers, i.e. reference frames moving with the simulated material, we address grid-based solvers, which are particularly appropriate for dense, volumetric phenomena. The adjoint method (Lions, 1971; Pironneau, 1974; Jameson, 1988; Giles & Pierce, 2000; Bewley, 2001; McNamara et al., 2004) is used by most machine learning frameworks, where it is commonly known as reverse mode differentiation (Werbos, 2006; Chen et al., 2018). While a variety of specialized adjoint solvers exist (Griewank et al., 1996; Fournier et al., 2012; Farrell et al., 2013), these packages do not interface with production machine learning frameworks. A supporting contribution of our work is a differentiable PDE solver called ΦFlow that integrates with TensorFlow (Abadi et al., 2016) and PyTorch (Paszke et al., 2019).
It is publicly available at https://github.com/tumpbs/PhiFlow.
3 PROBLEM
Consider a physical system u(x, t) whose natural evolution is described by the PDE

$$\frac{\partial u}{\partial t} = P\!\left(u, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}, \dots, y(t)\right), \tag{1}$$

where P models the physical behavior of the system and y(t) denotes external factors that can influence the system. We now introduce an agent that can interact with the system by controlling certain parameters of the dynamics. This could be the rotation of a motor or fine-grained control over a field. We factor out this influence into a force term F, yielding

$$\frac{\partial u}{\partial t} = P\!\left(u, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}, \dots\right) + F(t). \tag{2}$$

The agent can now be modelled as a function that computes F(t). As solutions of nonlinear PDEs were shown to yield low-dimensional manifolds (Foias et al., 1988; Titi, 1990), we target solution manifolds of F(t) for a given choice of P with suitable boundary conditions. This motivates our choice to employ deep networks for our agents. In most real-world scenarios, it is not possible to observe the full state of a physical system. When considering a cloud of smoke, for example, the smoke density may be observable while the velocity field may not be seen directly. We model the imperfect information by defining the observable state of u as o(u). The observable state is problem dependent, and our agent is conditioned only on these observations, i.e. it does not have access to the full state u. Using the above notation, we define the control task as follows. An initial observable state $o_0$ of the PDE as well as a target state $o^*$ are given (Figure 1a). We are interested in a reconstructed trajectory u(t) that matches these states at $t_0$ and $t^*$, i.e. $o_0 = o(u(t_0))$, $o^* = o(u(t^*))$, and minimizes the amount of force applied within the simulation domain D (Figure 1b):

$$L_F[u(t)] = \int_{t_0}^{t^*} \int_D |F_u(t)|^2 \, dx \, dt. \tag{3}$$

Taking discrete time steps $\Delta t$, the reconstructed trajectory u is a sequence of $n = (t^* - t_0)/\Delta t$ states. When an observable dimension cannot be controlled directly, there may not exist any trajectory u(t) that matches both $o_0$ and $o^*$. This can stem from either physical constraints or numerical limitations. In these cases, we settle for an approximation of $o^*$. To measure the quality of the approximation of the target, we define an observation loss $L^*_o$. The form of this loss can be chosen to fit the problem. We combine Eq. 3 and the observation loss into the objective function

$$L[u(t)] = \alpha \cdot L_F[u(t)] + L^*_o(u(t^*)), \tag{4}$$

with $\alpha > 0$. We use square brackets to denote functionals, i.e. functions depending on fields or series rather than single values.
4 PRELIMINARIES
Differentiable solvers. Let u(x, t) be described by a PDE as in Eq. 1. A regular solver can move the system forward in time via Euler steps:

$$u(t_{i+1}) = \mathrm{Solver}[u(t_i), y(t_i)] = u(t_i) + \Delta t \cdot P(u(t_i), \dots, y(t_i)). \tag{5}$$

Each step moves the system forward by a time increment $\Delta t$. Repeated execution produces a trajectory u(t) that approximates a solution to the PDE. This functionality for time advancement by itself is not well-suited to solve optimization problems, since gradients can only be approximated by finite differencing. For high-dimensional or continuous systems, this method becomes computationally expensive because a full trajectory needs to be computed for each optimizable parameter. Differentiable solvers resolve this issue by solving the adjoint problem (Pontryagin, 1962) via analytic derivatives. The adjoint problem computes the same mathematical expressions while working with lower-dimensional vectors.
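To make these ideas concrete, the following minimal sketch (in JAX, not the paper's ΦFlow solver) chains Euler steps as in Eq. 5 and differentiates the objective of Eq. 4 with respect to the control forces via reverse-mode differentiation; the 1D diffusion right-hand side, grid size, and all coefficients are illustrative assumptions:

```python
import jax
import jax.numpy as jnp

# Illustrative P: 1D diffusion on a periodic grid (a stand-in for Eq. 1/2).
def P(u, nu=0.1, dx=1.0):
    return nu * (jnp.roll(u, -1) - 2 * u + jnp.roll(u, 1)) / dx**2

def solver_step(u, F, dt=0.1):
    return u + dt * (P(u) + F)          # Eq. 5 with the force term of Eq. 2

def rollout_loss(forces, u0, target, alpha=1e-3, dt=0.1):
    # Eq. 4: alpha * L_F + L_o*, accumulated over the discrete trajectory.
    u, force_cost = u0, 0.0
    for F in forces:
        u = solver_step(u, F, dt)
        force_cost += dt * jnp.sum(F**2)
    return alpha * force_cost + jnp.sum((u - target) ** 2)

# Reverse-mode differentiation yields dL/dF(t_i) for all steps at once,
# which is the adjoint computation described above.
grad_fn = jax.grad(rollout_loss)

u0 = jnp.zeros(64).at[32].set(1.0)
target = jnp.zeros(64).at[20].set(1.0)
forces = jnp.zeros((16, 64))
grads = grad_fn(forces, u0, target)     # same shape as `forces`
```

Gradient descent on `forces` using `grads` is precisely the single-shooting baseline discussed in the next paragraphs.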
A differentiable solver can efficiently compute the derivatives with respect to any of its inputs, i.e. $\partial u(t_{i+1})/\partial u(t_i)$ and $\partial u(t_{i+1})/\partial y(t_i)$. This allows for gradient-based optimization of inputs or control parameters over an arbitrary number of time steps.
Iterative trajectory optimization. Many techniques exist that try to find optimal trajectories by starting with an initial guess for F(t) and slightly changing it until reaching an optimum. The simplest of these is known as single shooting. In one optimization step, it simulates the full dynamics, then backpropagates the loss through the whole sequence to optimize the controls (Kraft, 1985; Leineweber et al., 2003). Replacing F(t) with an agent $F(t \mid o_t, o^*)$, which can be parameterized by a deep network, yields a simple training method. For a sequence of n frames, this setup contains n linked copies of the agent and is depicted in Figure 2. We refer to such an agent as a control force estimator (CFE). Optimizing such a chain of CFEs is both computationally expensive and causes gradients to pass through a potentially long sequence of highly nonlinear simulation steps. When the reconstruction u is close to an optimal trajectory, this is not a problem because the gradients $\Delta u$ are small and the operations executed by the solver are differentiable by construction. The solver can therefore be locally approximated by a first-order polynomial and the gradients can be safely backpropagated. For large $\Delta u$, e.g. at the beginning of an optimization, this approximation breaks down, causing the gradients to become unstable while passing through the chain. This instability in the training process can prevent single-shooting approaches from converging and deep networks from learning unless they are initialized near an optimum. Alternatives to single shooting exist, promising better and more efficient convergence. Multiple shooting (Bock & Plitt, 1984) splits the trajectory into segments with additional defect constraints. Depending on the physical system, this method may have to be adjusted for specific problems (Treuille et al., 2003). Collocation schemes (Hargraves & Paris, 1987) model trajectories with splines. While this works well for particle trajectories, it is poorly suited for Eulerian solvers where the evolution of individual points does not reflect the overall motion. Model reduction can be used to reduce the dimensionality or nonlinearity of the problem, but generally requires domain-specific knowledge. When applicable, these methods can converge faster or in a more stable manner than single shooting. However, as we are focusing on a general optimization scheme in this work, we will use single shooting and its variants as baseline comparisons.
Supervised and differentiable physics losses. One of the key ingredients in training a machine learning model is the choice of loss function. For many tasks, supervised losses are used, i.e. losses that directly compare the output of the model for a specific input with the desired ground truth. While supervised losses can be employed for trajectory optimization, far better loss functions are possible when a differentiable solver is available. We will refer to these as differentiable physics loss functions. In this work, we employ a combination of supervised and differentiable physics losses, as both come with advantages and disadvantages. One key limitation of supervised losses is that they can only measure the error of a single time step.
Therefore, an agent cannot get any measure of how its output would influence future time steps. Another problem arises from the form of supervised training data, which comprises input-output pairs, which may be obtained directly from data generation or through iterative optimization. Since optimal control problems are generally not unimodal, there can exist multiple possible outputs for one input. This ambiguity in the supervised training process will lead to suboptimal predictions as the network will try to find a compromise between all possible outputs instead of picking one of them. Differentiable physics losses solve these problems by allowing the agent to be directly optimized for the desired objective (Eq. 4). Unlike supervised losses, differentiable physics losses require a differentiable solver to backpropagate the gradients through the simulation. Multiple time steps can be chained together, which is a key requirement since the objective (Eq. 4) explicitly depends on all time steps through $L_F[u(t)]$ (Eq. 3). As with iterative solvers, one optimization step for a sequence of n frames then invokes the agent n times before computing the loss, each invocation followed by a solver step. The employed differentiable solver backpropagates the gradients through the whole sequence, which gives the model feedback on (i) how its decisions change the future trajectory and (ii) how to handle states as input that were reached because of its previous decisions. Since no ground truth needs to be provided, multi-modal problems naturally converge towards one solution.
5 METHOD
In order to optimally interact with a physical system, an agent has to (i) build an internal representation of an optimal observable trajectory o(u(t)) and (ii) learn what actions to take to move the system along the desired trajectory. These two steps strongly resemble the predictor-corrector method (Press et al., 2007). Given o(t), a predictor-corrector method computes $o(t + \Delta t)$ in two steps. First, a prediction step approximates the next state, yielding $o_p(t + \Delta t)$. Then, the correction uses $o_p(t + \Delta t)$ to refine the initial approximation and obtain $o(t + \Delta t)$. Each step can, to some degree, be learned independently. This motivates splitting the agent into two neural networks: an observation predictor (OP) network that infers intermediate states $o_p(t_i)$, $i \in \{1, 2, \dots, n-1\}$, planning out a trajectory, and a corrector network (CFE) that estimates the control force $F(t_i \mid o(u_i), o_{p,i+1})$ to follow that trajectory as closely as possible. This splitting has the added benefit of exposing the planned trajectory, which would otherwise be inaccessible. As we will demonstrate, it is crucial for the prediction stage to incorporate knowledge about longer time spans. We address this by modelling the prediction as a temporally hierarchical process, recursively dividing the problem into smaller subproblems. To achieve this, we let the OP not directly infer $o_p(t_{i+1} \mid o(u_i), o^*)$ but instead model it to predict the optimal center point between two states at times $t_i$, $t_j$, with $i, j \in \{1, 2, \dots, n-1\}$, $j > i$, i.e. $o_p((t_i + t_j)/2 \mid o_i, o_j)$. This function is much more general than predicting the state of the next time step since two arbitrary states can be passed as arguments. Recursive OP evaluations can then partition the sequence until a prediction $o_p(t_i)$ for every time step $t_i$ has been made.
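The following compact sketch illustrates this recursive partitioning; it is an assumption-level illustration in which n is a power of two, OP[k] stands for the predictor instance trained for a span of k frames, and the averaging OP in the demo is merely a stand-in for a trained network:

```python
def plan(o_start, o_end, n, OP):
    # Recursively predict the midpoint o_p((t_i + t_j)/2 | o_i, o_j),
    # then partition both halves until every frame has a prediction.
    if n == 1:
        return [o_start]
    o_mid = OP[n](o_start, o_end)
    return plan(o_start, o_mid, n // 2, OP) + plan(o_mid, o_end, n // 2, OP)

# Demo with a stand-in OP that simply averages its two input states:
OP = {k: (lambda a, b: 0.5 * (a + b)) for k in (2, 4, 8)}
predictions = plan(0.0, 8.0, 8, OP) + [8.0]   # [0.0, 1.0, ..., 8.0]
```

With the averaging stand-in, the demo recovers the frame values exactly and issues 7 = n - 1 OP calls for n = 8, matching the count derived in Appendix B.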
This scheme naturally enables scaling to arbitrary time frames or arbitrary temporal resolutions, assuming that the OP can correctly anticipate the physical behavior. Since physical systems often exhibit different behaviors on different time scales and the OP can be called with states separated by arbitrary time spans, we condition the OP on the time scale it is evaluated on by instantiating and training a unique version of the model for every time scale. This simplifies training and does not significantly increase the model complexity as we use factors of two for the time scales, and hence the number of required models scales with $O(\log_2 n)$. We will refer to one instance of an $OP_n$ by the time span between its input states, measured in the number of frames $n = (t_j - t_i)/\Delta t$.
Execution order. With the CFE and $OP_n$ as building blocks, many algorithms for solving the control problem, i.e. for computing F(t), can be assembled and trained. We compared a variety of algorithms and found that a scheme we will refer to as prediction refinement produces the best results. It is based on the following principles: (i) always use the finest scale OP possible to make a prediction, (ii) execute the CFE followed by a solver step as soon as possible, (iii) refine predictions after the solver has computed the next state. The algorithm that realizes these goals is shown in Appendix B with an example for n = 8. To understand the algorithm and resulting execution orders, it is helpful to consider simpler algorithms first. The simplest combination of CFE and $OP_n$ invocations that solves the full trajectory, shown in Figure 3a, can be described as follows. Initially, all intermediate states are predicted hierarchically. The first prediction is the half-way point $o_p(t_{n/2} \mid o_0, o^*)$, generated by the $OP_n$. Using that as input to an $OP_{n/2}$ results in new predictions at $t_{n/4}$, $t_{3n/4}$. Continuing with this scheme, a prediction can be made for each $t_i$, $i \in \{1, \dots, n-1\}$. Next, the actual trajectory is evaluated step by step. For each step $t_i$, the CFE computes the control force $F(t_i)$ conditioned on the state at $t_i$ and the prediction $o_p(t_{i+1})$. Once $F(t_i)$ is known, the solver can step the simulation to the next state at $t_{i+1}$. This algorithm finds a trajectory in time O(n) since n CFE calls and n - 1 OP calls are required in total (see Appendix B). However, there are inherent problems with this algorithm. The physical constraints of the PDE and potential approximation errors of the CFE can result in observations that are only matched partially. This can result in the reconstructed trajectory exhibiting undesirable oscillations, often visible as jittering. When subsequent predictions do not line up perfectly, large forces may be applied by the CFE or the reconstructed trajectory might stop following the predictions altogether. This problem can be alleviated by changing the execution order of the two-stage algorithm described above. The resulting algorithm is shown in Figure 3b and will be referred to as staggered execution. In this setup, the simulation is advanced as soon as a prediction for the next observable state exists and OPs are only executed when their state at time $t_i$ is available. This staggered execution scheme allows future predictions to take deviations from the predicted trajectory into account, preventing a divergence of the actual evolution o(u(t)) from the prediction $o_p(t)$.
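As a minimal illustration of the step-by-step evaluation shared by these schemes, the sketch below rolls out a trajectory by alternating CFE and solver calls; the stand-in system is a fully observable 1D toy (o(u) = u, P = 0, Δt = 1), and the stand-in CFE mirrors the simple difference-based corrector used later for Burgers' equation:

```python
def rollout(u0, predictions, cfe, solver, observe):
    # Step the simulation frame by frame: the CFE turns the gap between the
    # current observation and the next predicted observation into a force.
    u = u0
    for o_next in predictions:
        F = cfe(observe(u), o_next)
        u = solver(u, F)
    return u

# Stand-ins for a fully observable toy system:
observe = lambda u: u
cfe = lambda o_now, o_next: o_next - o_now   # F(t_i) = (o_p(t_{i+1}) - u(t_i)) / dt
solver = lambda u, F: u + F                  # Euler step with P = 0, dt = 1
print(rollout(0.0, [1.0, 2.0, 3.0], cfe, solver, observe))  # 3.0
```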
While the staggered execution allows most predictions to correct for deviations from the predicted trajectory $o_p$, this scheme leaves several predictions unmodified. Most notably, the prediction $o_p(t_{n/2})$, which is inferred from just the initial state and the desired target, remains unchanged. This prediction must therefore be able to guide the reconstruction in the right direction without knowing about deviations in the system that occurred up to $t_{n/2-1}$. As a practical consequence, a network trained with this scheme typically learns to average over the deviations, resulting in blurred predictions (see Appendix D.2).

Algorithm 1: Recursive algorithm computing the prediction refinement. The algorithm is called via Reconstruct[$o_0$, $o^*$, absent] to reconstruct a full trajectory from $o_0$ to $o^*$.

function Reconstruct[$o(u_0)$, $o_n$, $o_{2n}$]
    Input: initial observation $o(u_0)$, observation $o_n$, optional observation $o_{2n}$
    Output: observation of the reconstructed state $o(u_n)$
    if n = 1 then
        $F \leftarrow$ CFE[$o(u_0)$, $o_1$]
        $u_1 \leftarrow$ Solver[$u_0$, F]
        return $o(u_1)$
    else
        $o_{n/2} \leftarrow$ OP[$o(u_0)$, $o_n$]
        $o(u_{n/2}) \leftarrow$ Reconstruct[$o(u_0)$, $o_{n/2}$, $o_n$]
        if $o_{2n}$ present then
            $o_{3n/2} \leftarrow$ OP[$o_n$, $o_{2n}$]
            $o_n \leftarrow$ OP[$o(u_{n/2})$, $o_{3n/2}$]
        else
            $o_{3n/2} \leftarrow$ absent
        end
        $o(u_n) \leftarrow$ Reconstruct[$o(u_{n/2})$, $o_n$, $o_{3n/2}$]
        return $o(u_n)$
    end

The prediction refinement scheme, listed in Algorithm 1 and illustrated in Figure 3c, solves this problem by re-evaluating existing predictions whenever the simulation progresses in time. Not all predictions need to be updated, though, and an update to a prediction at a finer time scale can depend on a sequence of other predictions. The prediction refinement algorithm that achieves this in an optimal form is listed in Appendix B. While the resulting execution order is difficult to follow for longer sequences with more than n = 8 frames, we give an overview of the algorithm by considering the prediction for time $t_{n/2}$. After the first center-frame prediction $o_p(t_{n/2})$ of the n-frame sequence is made by $OP_n$, the prediction refinement algorithm calls itself recursively until all frames up to frame n/4 are reconstructed from the CFE and the solver. The center prediction is then updated using $OP_{n/2}$ for the next smaller time scale compared to the previous prediction. The call of $OP_{n/2}$ also depends on $o_p(t_{3n/4})$, which was predicted using $OP_{n/2}$. After half of the remaining distance to the center is reconstructed by the solver, the center prediction at $t_{n/2}$ is updated again, this time by the $OP_{n/4}$, including all prediction dependencies. Hence, the center prediction is continually refined every time the temporal distance between the latest reconstruction and the prediction halves, until the reconstruction reaches that frame. This way, all final predictions $o_p(t_i)$ are conditioned on the reconstruction of the previous state $u(t_{i-1})$ and can therefore account for all previous deviations. The prediction refinement scheme requires the same number of force inferences but an increased number of OP evaluations compared to the simpler algorithms. With a total of $3n - 2\log_2(n) - 3$ OP evaluations (see Appendix B), it is of the same complexity, O(n). In practice, this refinement scheme incurs only a small overhead in terms of computation, which is outweighed by the significant gains in quality of the learned control function.
6 RESULTS
We evaluate the capabilities of our method to learn to control physical PDEs in three different test environments of increasing complexity. We first target a simple but nonlinear 1D equation, for which we present an ablation study to quantify accuracy.
We then study two-dimensional problems: an incompressible fluid and a fluid with complex boundaries and indirect control. Full details are given in Appendix D. Supplemental material containing additional sequences for all of the tests can be downloaded from https://ge.in.tum.de/publications/2020-iclr-holl.
Burgers' equation. Burgers' equation is a nonlinear PDE that describes the time evolution of a single field, u (LeVeque, 1992). Using Eq. 1, it can be written as

$$P\!\left(u, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}\right) = -u \cdot \frac{\partial u}{\partial x} + \nu \frac{\partial^2 u}{\partial x^2}. \tag{6}$$

Examples of the unperturbed evolution are shown in Figure 4a. We let the whole state be observable and controllable, i.e. o(t) = u(t), which implies that $o^*$ can always be reached exactly. The results of our ablation study with this equation are shown in Table 1. The table compares the resulting forces applied by differently trained models when reconstructing a ground-truth sequence (Figure 4e). The variant denoted by CFE chain uses a neural network to infer the force without any intermediate predictions. With a supervised loss, this method learns to approximate a single step well. However, for longer sequences, results quickly deviate from an ideal trajectory and diverge because the network never learned to account for errors made in previous steps (Figure 4b). Training the network with the objective loss (Eq. 4) using the differentiable solver greatly increases the quality of the reconstructions. On average, it applies only 34% of the force used by the supervised model as it learns to correct the temporal evolution of the PDE model. Next, we evaluate variants of our predictor-corrector approach, which hierarchically predicts intermediate states. Here, the CFE is implemented as $F(t_i) = (o_p(t_{i+1}) - u(t_i))/\Delta t$. Unlike the simple CFE chain above, training with the supervised loss and staggered execution produces stable (albeit jittering) trajectories that successfully converge to the target state (Figure 4c). Surprisingly, this supervised method reaches almost the same accuracy as the differentiable CFE, despite not having access to physics-based gradients. However, employing the differentiable physics loss greatly improves the reconstruction quality, producing solutions that are hard to distinguish from ideal trajectories (Figure 4d).

Table 1: Quantitative reconstruction evaluation using Burgers' equation, avg. for 100 examples.

Execution scheme             | Training loss | Force ∫|F| dt | Inference time (ms)
CFE chain                    | Supervised    | 83.4 ± 2.0    | 0.024 ± 0.013
CFE chain                    | Diff. Physics | 28.8 ± 0.8    | 0.024 ± 0.013
Staggered                    | Supervised    | 34.3 ± 1.1    | 1.15 ± 0.19
Staggered                    | Diff. Physics | 15.3 ± 0.7    | 1.15 ± 0.19
Refined                      | Diff. Physics | 14.2 ± 0.7    | 3.05 ± 0.37
Iterative optim. (60 iter.)  | Diff. Physics | 15.3 ± 1.6    | 52.7 ± 2.1
Iterative optim. (300 iter.) | Diff. Physics | 10.2 ± 1.9    | 264.0 ± 3.0

The prediction refinement scheme further improves the accuracy, but the differences to the staggered execution are relatively small as the predictions of the latter are already very accurate. Table 1 also lists the results of classic shooting-based optimization applied to this problem. To match the quality of the staggered execution scheme, the shooting method requires around 60 optimization steps. These steps are significantly more expensive to compute, despite the fast convergence. After around 300 iterations, the classic optimization reaches an optimal value of 10.2 and the loss stops decreasing. Starting the iterative optimization with our method as an initial guess pushes the optimum slightly lower to 10.1.
Thus, even this relatively simple problem shows the advantages of our learned approach.
Incompressible fluid flow. Next, we apply our algorithm to two-dimensional fluid dynamics problems, which are challenging due to the complexities of the governing Navier-Stokes equations (Batchelor, 1967). For a velocity field v, these can be written as

$$P(v, \nabla v) = -v \cdot \nabla v + \nu \nabla^2 v - \nabla p, \tag{7}$$

subject to the hard constraints $\nabla \cdot v = 0$ and $\nabla \times p = 0$, where p denotes pressure and $\nu$ the viscosity. In addition, we consider a passive density $\rho$ that moves with the fluid via $\partial\rho/\partial t = -v \cdot \nabla\rho$. We set v to be hidden and $\rho$ to be observable, and allow forces to be applied to all of v. We run our tests on a $128^2$ grid, resulting in more than 16,000 effective continuous control parameters. We train the OP and CFE networks for two different tasks: reconstruction of natural fluid flows and controlled shape transitions. Example sequences are shown in Figure 5 and a quantitative evaluation, averaged over 100 examples, is given in Table 2. While all methods manage to approximate the target state well, there are considerable differences in the amount of force applied. The supervised technique exerts significantly more force than the methods based on the differentiable solver, resulting in jittering reconstructions. The prediction refinement scheme produces the smoothest transitions, converging to about half the loss of the staggered, non-refined variant. We compare our method to classic shooting algorithms for this incompressible flow problem. While a direct shooting method fails to converge, a more advanced multi-scale shooting approach still requires 1500 iterations to obtain a level of accuracy that our model achieves almost instantly. In addition, our model successfully learns a solution manifold, while iterative optimization techniques essentially start from scratch every time. This global view leads our model to more intuitive solutions and decreases the likelihood of convergence to undesirable local minima. The solutions of our method can also be used as initial guesses for iterative solvers, as illustrated in Appendix D.4. We find that the iterative optimizer with an initial guess converges to solutions that require only 57.4% of the force achieved by the iterative optimizer with default initialization. This illustrates how the more global view of the learned solution manifold can improve the solutions of regular optimization runs. Splitting the task into prediction and correction ensures that intermediate predicted states are physically plausible and allows us to generalize to new tasks. For example, we can infer transitions involving multiple shapes, despite training only on individual shapes. This is demonstrated in Appendix D.2.
Incompressible fluid with indirect control. The next experiment increases the complexity of the fluid control problem by adding obstacles to the simulated domain and limiting the area that can be controlled by the network. An example sequence in this setting is shown in Figure 6. As before, only the density $\rho$ is observable. Here, the goal is to move the smoke from its initial position near the center into one of the three "buckets" at the top. Control forces can only be applied in the peripheral regions, which are outside the visible smoke distribution. Only by synchronizing the 5000 continuous control parameters can a directed velocity field be constructed in the central region.
Incompressible fluid with indirect control. The next experiment increases the complexity of the fluid control problem by adding obstacles to the simulated domain and limiting the area that can be controlled by the network. An example sequence in this setting is shown in Figure 6. As before, only the density ρ is observable. Here, the goal is to move the smoke from its initial position near the center into one of the three “buckets” at the top. Control forces can only be applied in the peripheral regions, which are outside the visible smoke distribution. Only by synchronizing the 5000 continuous control parameters can a directed velocity field be constructed in the central region.

We first infer trajectories using a trained CFE network and predictions that move the smoke into the desired bucket in a straight line. This baseline manages to transfer 89% ± 2.6% of the smoke into the target bucket. Next we enable the hierarchical predictions and train the OPs. This version manages to maneuver 99.22% ± 0.15% of the smoke into the desired buckets while requiring 19.1% ± 1.0% less force.

For comparison, Table 3 also lists success rate and execution time for a direct optimization. Despite only obtaining a low success rate of 82%, the shooting method takes several orders of magnitude longer than evaluating our trained model. Since all optimizations are independent of each other, some find better solutions than others, which is reflected in the higher standard deviation. The increased number of free parameters and the complexity of the fluid dynamics to be controlled make this problem intractable for the shooting method, while our model can leverage the learned representation to infer a solution very quickly. Further details are given in Appendix D.3.

7 CONCLUSIONS

We have demonstrated that deep learning models in conjunction with a differentiable physics solver can successfully predict the behavior of complex physical systems and learn to control them. The introduction of a hierarchical predictor-corrector architecture allowed the model to learn to reconstruct long sequences by treating the physical behavior on different time scales separately. We have shown that using a differentiable solver greatly benefits the quality of solutions since the networks can learn how their decisions will affect the future. In our experiments, hierarchical inference schemes outperform traditional sequential agents because they can easily learn to plan ahead.

To model realistic environments, we have introduced observations to our pipeline which restrict the information available to the learning agent. While the PDE solver still requires full state information to run the simulation, this restriction does not apply when the agent is deployed.

While we do not believe that learning approaches will replace iterative optimization, our method shows that it is possible to learn representations of solution manifolds for optimal control trajectories using data-driven approaches. Fast inference is vital in time-critical applications and can also be used in conjunction with classical solvers to speed up convergence and ultimately produce better solutions.

8 ACKNOWLEDGEMENTS

This work was supported in part by the ERC Starting Grant realFlow (ERC-2015-StG-637014).

B COMPLEXITY OF EXECUTION SCHEMES

The staggered execution scheme recursively splits a sequence of length n into smaller sequences, as depicted in Fig. 3b and Fig. 7a for n = 8. With each level of recursion depth, the sequence length is cut in half and twice as many predictions need to be performed. The maximum depth depends on the sequence length t_n − t_0 and the time steps ∆t performed by the solver,

d_{max} = \log_2\left(\frac{t_n - t_0}{\Delta t}\right) - 1.

Therefore, the total number of predictions, equal to the number of OP evaluations, is

N_{OP} = 1 + 2 + 4 + \dots + n/2 = \sum_{k=0}^{d_{max}} 2^k = n - 1.

The prediction refinement scheme performs more predictions, as can be seen in Fig. 7b. To understand the number of OP evaluations, we need to consider the recursive algorithm Reconstruct[u_0, o_n, o_{2n}], listed in Alg. 1, that reconstructs a sequence or partial sequence of n frames. For the first invocation, the last parameter o_{2n} is absent, but for subsequences, that is not necessarily the case. Each invocation performs one OP evaluation if o_{2n} is absent, otherwise three. By counting the sequences for which this condition is fulfilled, we can compute the total number of network evaluations to be

N_{OP} = 3 \sum_{k=0}^{d_{max}} 2^k - 2 \log_2(n) = 3n - 2 \log_2(n) - 3.
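These closed-form counts are easy to verify numerically. The short sketch below mirrors the recursion of both schemes and counts OP calls; it is an illustrative check we provide here, not part of the actual pipeline.

import math

def count_staggered(n):
    """OP evaluations of the staggered scheme: one center prediction, then two halves."""
    return 0 if n <= 1 else 1 + 2 * count_staggered(n // 2)

def count_refined(n, have_o2n=False):
    """OP evaluations of Algorithm 1: one per call, or three if o_2n is present."""
    if n <= 1:
        return 0
    ops = 3 if have_o2n else 1
    # The first recursion always receives o_n as its third argument; the second
    # inherits o_3n/2, which exists exactly when o_2n was present.
    return ops + count_refined(n // 2, True) + count_refined(n // 2, have_o2n)

for n in (8, 16, 32, 64):
    assert count_staggered(n) == n - 1
    assert count_refined(n) == 3 * n - 2 * int(math.log2(n)) - 3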
C NETWORK ARCHITECTURES AND TRAINING

All neural networks used in this work are based on a modified U-net architecture (Ronneberger et al., 2015). The U-net represents a typical multi-level convolutional network architecture with skip connections, which we modify by using residual blocks (He et al., 2016) instead of regular convolutions for each level. We slightly modify this basic layout for some experiments. The network used for predicting observations for the fluid example is detailed in Tab. 4.

The input to the network consists of two feature maps containing the current state and the target state. Zero-padding is applied to the input, so that all strided convolutions do not require padding. Next, five residual blocks are executed in order, each decreasing the resolution (1/2, 1/4, 1/8, 1/16, 1/32) while increasing the number of feature maps (4, 8, 16, 16, 16). Each block performs a convolution with kernel size 2 and stride 2, followed by two residual blocks with kernel size 3 and symmetric padding. Inside each block, the number of feature maps stays constant. Three more residual blocks are executed on the lowest resolution of the bowtie structure, after which the decoder part of the network commences, translating features into spatial content.

The decoder works as follows: starting with the lowest resolution, the feature maps are upsampled with linear interpolation. The upsampled maps and the output of the previous block of the same resolution are then concatenated. Next, a convolution with 16 filters, a kernel size of 2 and symmetric padding, followed by two more residual blocks, is executed. When the original resolution is reached, only one feature map is produced instead of 16, forming the output of the network. Depending on the dimensionality of the problem, either 1D or 2D convolutions are used.

The network used for the indirect control task is modified in the following ways: (i) it produces two output feature maps, representing the velocity (vx, vy); (ii) four feature maps of the lowest resolution (4×4) are fed into a dense layer producing four output feature maps. These and the other feature maps are concatenated before moving to the upsampling stage. This modification ensures that the receptive field of the network is the whole domain.

All networks were implemented in TensorFlow (Abadi et al., 2016) and trained using the ADAM optimizer on an Nvidia GTX 1080 Ti. We use batch sizes ranging from 4 to 16. Supervised training of all networks converges within a few minutes, for which we iteratively decrease the learning rate from 10^-3 to 10^-5. We stop supervised training after a few epochs, comprising between 2,000 and 10,000 iterations, as the networks usually converge within a fraction of the first epoch.

For training with the differentiable solver, we start with a decreased learning rate of 10^-4 since the backpropagation through long chains is more challenging than training with a supervised loss. Optimization steps are also considerably more expensive since the whole chain needs to be executed, which includes a forward and backward simulation pass. For the fluid examples, an optimization step takes 1–2 seconds to complete for the 2D fluid problems. We let the networks run about 100,000 iterations, which takes between one and two days for the shown examples.
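As a rough illustration of one encoder level of the architecture described above, the following Keras sketch combines the strided downsampling convolution with two residual blocks. It uses zero-padding where our implementation uses symmetric padding, and the exact layer configuration shown here is a simplified assumption for readability.

import tensorflow as tf
from tensorflow.keras import layers

def res_block(x, filters):
    """Two 3x3 convolutions with a skip connection; the feature count stays constant."""
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.ReLU()(layers.Add()([x, y]))

def encoder_level(x, filters):
    """Halve the resolution with a kernel-2, stride-2 convolution, then refine."""
    x = layers.Conv2D(filters, 2, strides=2)(x)
    x = res_block(x, filters)
    return res_block(x, filters)

inp = layers.Input(shape=(128, 128, 2))      # current state + target state
x = inp
skips = []
for f in (4, 8, 16, 16, 16):                 # five levels, 128 -> 4 resolution
    x = encoder_level(x, f)
    skips.append(x)
# (decoder with linear upsampling and skip concatenations omitted for brevity)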
D DETAILED DESCRIPTION AND ANALYSIS OF THE EXPERIMENTS

In the following paragraphs, we give further details on the experiments of Section 6.

D.1 BURGERS' EQUATION

For this experiment, we simulate Burgers' equation (Eq. 6) on a one-dimensional grid with 32 samples over a course of 32 time steps. The typical behavior of Burgers' equation in 1D exhibits shock waves that move in the +x or −x direction for u(x) > 0 or u(x) < 0, respectively. When opposing waves clash, they both weaken until only the stronger wave survives and keeps moving. Examples are shown in Figs. 4a and 8a. All 32 samples are observable and controllable, i.e. o(t) = u(t). Thus, we can enforce that all trajectories reach the target state exactly by choosing the force for the last step to be

F(t_{n-1}) = \frac{o^* - u(t_{n-1})}{\Delta t}.

To measure the quality of a solution, it is therefore sufficient to consider the applied force \int_{t_0}^{t^*} |F(t)| \, dt, which is detailed for the tested methods in Table 1.

Network training. Both for the CFE chains and for the observation prediction models, we use the same network architecture, described in Appendix C. We train the networks on 3600 randomly generated scenes with constant driving forces, F(t) = const. The examples are initialized with two Gaussian waves of random amplitude, size and position, set to clash in the center. In each time step, a constant Gaussian force with the same randomized parameters is applied to the system to steer it away from its natural evolution. Constant forces have a larger impact on the evolution than temporally varying forces since the effects of temporally varying forces can partly cancel out over time. The ground truth sequence can therefore be regarded as a near-perfect but not necessarily optimal trajectory. Figs. 4d and 8b display such examples. The same trajectories, without any forces applied, are shown in the sub-figures (a) for comparison.

We pretrain all networks (OPs or CFE, depending on the method) with a supervised observation loss,

L^{sup}_o = \left| OP[o(t_i), o(t_j)] - u_{GT}\left(\frac{t_i + t_j}{2}\right) \right|^2.   (8)

The resulting trajectory after supervised training for the CFE chain is shown in Figure 4b and Figure 8c. For the observation prediction models, the trajectories are shown in Figure 4c and Figure 8e. After pretraining, we train all OP networks end-to-end with our objective loss function (see Eq. 4), making use of the differentiable solver. For this experiment, we choose the mean squared difference for the observation loss function:

L^*_o = |o(u(t^*)) - o^*|^2.   (9)

We test both the staggered execution scheme and the prediction refinement scheme, shown in Figure 8f and Figure 8g.
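Returning to the data generation described above, a compact sketch could look as follows; the parameter ranges for the Gaussian waves and forces are illustrative assumptions rather than the exact values we used.

import numpy as np

def gaussian(x, center, width, amp):
    return amp * np.exp(-((x - center) ** 2) / (2 * width ** 2))

def make_scene(rng, n_x=32):
    """Two colliding Gaussian waves plus a constant Gaussian driving force."""
    x = np.linspace(0, 1, n_x)
    # A positive left wave and a negative right wave move toward each other.
    u0 = (gaussian(x, rng.uniform(0.1, 0.4), rng.uniform(0.05, 0.15), rng.uniform(0.5, 1.0))
          - gaussian(x, rng.uniform(0.6, 0.9), rng.uniform(0.05, 0.15), rng.uniform(0.5, 1.0)))
    force = gaussian(x, rng.uniform(0.2, 0.8), rng.uniform(0.05, 0.15), rng.uniform(-0.5, 0.5))
    return u0, force  # the same force is applied in every step, F(t) = const.

rng = np.random.default_rng(0)
scenes = [make_scene(rng) for _ in range(3600)]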
Results. Table 1 compares the resulting forces inferred by different methods. The results are averaged over a set of 100 examples from the test set, which is sampled from the same distribution as the training set. The CFE chains both fail to converge to o^*. While the differentiable physics version manages to produce a u_{n−1} that resembles o^*, the supervised version completely deviates from an optimal trajectory. This shows that learning to infer the control force F(t_i) only from u(t_i), o^* and t is very difficult, as the model needs to learn to anticipate the physical behavior over any length of time. Compared to the CFE chains, the hierarchical models require much less force and learn to converge towards o^*.

Still, the supervised training applies much more force to the system than required, the reasons for which become obvious when inspecting Figure 4b and Fig. 8e. While each state seems close to the ground truth individually, the control oscillates undesirably, requiring counter-actions later in time. The methods using the differentiable solver significantly outperform their supervised counterparts and exhibit an excellent performance that is very close to the ground truth solutions in terms of required forces. On many examples, they even reach the target state with less force than was applied by the ground truth simulation. This would not be possible with the supervised loss alone, but by having access to the gradient-based feedback from the differentiable solver, they can learn to find more efficient trajectories with respect to the objective loss. This allows the networks to learn to apply forces in different locations that make the system approach the target state with less force. Figure 4e and Figs. 8f,g show examples of this. The ground truth applies the same force in each step, thereby continuously increasing the first sample u(x = 0), and the supervised method tries to imitate this behavior. The governing equation then slowly propagates u(x = 0) in the positive x direction since u(x = 0) > 0. The learning methods that use a differentiable solver make use of this fact by applying much more force F(x = 0) > 0 at this point than the ground truth, even overshooting the target state. Later, once this value has had time to propagate to the right, the model corrects this overshoot by applying a negative force F(x = 0) < 0. Using this trick, these models reach the target state with up to 13% less force than the ground truth on the sequence shown in Figure 4.

Figure 9 analyzes the variance of inferred forces. The supervised methods often fail to properly converge to the target state, resulting in large forces in the last step, visible as a second peak in the supervised CFE chain. The formulation of the loss (Eq. 3) suppresses force spikes. In the solutions inferred by our method, the likelihood of large forces falls off multi-exponentially as a consequence. This means that large forces are exponentially rare, which is the expected behavior given the L2 regularizer from Eq. 3.

We also compare our results to a single-shooting baseline, which is able to find near-optimal solutions at the cost of higher computation times. The classic optimization uses the ADAM optimizer with a learning rate of 0.01 and converges after around 300 iterations. To reach the quality of the staggered prediction scheme, it requires only around 60 iterations. This quick convergence can be explained by the relatively simple setup that is dominated by linear effects. Therefore, the gradients are stable, even when propagated through many frames. The computation times, shown in Tab. 1, were recorded on a single GTX 1080 Ti. We run 100 examples in parallel to reduce the relative overhead caused by GPU instruction queuing. For the network-based methods, we average the inference time over 100 runs. We perform 10 runs for the optimization methods.

D.2 INCOMPRESSIBLE FLUID FLOW

The incompressible Navier-Stokes equations model the dynamics of fluids such as water or air, which can develop highly complex and chaotic behavior. The phenomenon of turbulence is generally seen as one of the few remaining fundamental and unsolved problems of classical physics.
The challenging nature of the equations indicates that typically a very significant computational effort and a large number of degrees of freedom are required to numerically compute solutions. Here, we target an incompressible two-dimensional gas with viscosity ν, described by the Navier-Stokes equations for the velocity field v. We assume a constant fluid density throughout the simulation, setting ρ_f = const. ≡ 1. The gas velocity is controllable and, according to Eq. 1, we set

\mathcal{P}(v, \nabla v) = -(v \cdot \nabla)v + \nu \nabla^2 v - \frac{\nabla p}{\rho_f},

subject to the hard constraints ∇ · v = 0 and ∇ × p = 0. For our experiments, we target fluids with low viscosities, such as air, and set ν = 0 in the equation above, as the transport steps implicitly apply numerical diffusion that is on average higher than the targeted one. For fluids with a larger viscosity, the Poisson solver outlined above for computing p could be used to implicitly solve a vector-valued diffusion equation for v. However, incorporating a significant amount of viscosity would make the control problem easier to solve for most cases, as viscosity suppresses small-scale structures in the motion. Hence, in order to create a challenging environment for training our networks, we keep only a minimal amount of diffusion in the physical model.

In addition to the velocity field v, we consider a smoke density distribution ρ which moves passively with the fluid. The evolution of ρ is described by the equation ∂ρ/∂t = −v·∇ρ. We treat the velocity field as hidden from observation, letting only the smoke density be observed, i.e. o(t) = ρ(t). We stack the two fields as u = (v, ρ) to write the system as one PDE, compatible with Eq. 1.

For the OP and CFE networks, we use the 2D network architecture described in Appendix C. Instead of directly generating the velocity update in the CFE network for this problem setup, we make use of stream functions (Lamb, 1932). Hence, the CFE network outputs a vector potential Φ of which the curl ∇×Φ is used as a velocity update. This setup numerically simplifies the incompressibility condition of the Navier-Stokes equations but retains the same number of effective control parameters.

Datasets. We generate training and test datasets for two distinct tasks: flow reconstruction and shape transition. Both datasets have a resolution of 128 × 128 with the velocity fields being sampled in staggered form (see Appendix A). This results in over 16,000 effective continuous control parameters that make up the control force F(t_i) for each step i. The flow reconstruction dataset comprises ground-truth sequences where the initial states (ρ_0, v_0) are randomly sampled and then simulated for 64 time steps. The resulting smoke density is then taken to be the target state, o^* ≡ ρ^* = ρ_{sim}(t_{64}). Since we use fully convolutional networks for both CFE and OPs, the open domain boundary must be handled carefully. If smoke were lost from the simulation because it crossed the outer boundary, a neural network would see the smoke simply vanish unless it was explicitly given the domain size as input. To avoid these problems, we run the simulation backwards in time and remove all smoke from ρ_0 that left the simulation domain. For the shape transition dataset, we sample initial and target states ρ_0 and ρ^* by randomly choosing a shape from a library containing ten basic geometric shapes and placing it at a random location inside the domain. These can then be used for reconstructing sequences of any length n. For the results on shape transition presented in Section 6, we choose n = 16 because all interesting behavior can be seen within that time frame. Due to the linear interpolation used in the advection step (see Appendix A), both ρ and v smear out over time. This numerical limitation makes it impossible to match target states exactly in this task as the density becomes blurry over time. While we could generate ground-truth sequences using a classical optimizer, we refrain from doing so because (i) these trajectories are not guaranteed to be optimal and (ii) we want to see how well the model can learn from scratch, without initialization.
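Before turning to training, the stream-function parameterization described above can be made concrete in a few lines: the CFE outputs a scalar potential Φ on the grid, and the velocity update is obtained as its discrete curl, which is divergence-free by construction. Grid spacing and the central-difference discretization are simplifying assumptions here.

import numpy as np

def curl_2d(phi, dx=1.0):
    """Velocity update from a stream function: v = curl(Phi) = (dPhi/dy, -dPhi/dx).
    With central differences on a periodic grid, the result is discretely divergence-free."""
    dphi_dy = (np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / (2 * dx)
    dphi_dx = (np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / (2 * dx)
    return dphi_dy, -dphi_dx  # (vx, vy)

phi = np.random.randn(128, 128)      # stand-in for the CFE network output
vx, vy = curl_2d(phi)
# Sanity check: the discrete divergence vanishes up to floating-point error.
div = ((np.roll(vx, -1, axis=1) - np.roll(vx, 1, axis=1))
       + (np.roll(vy, -1, axis=0) - np.roll(vy, 1, axis=0))) / 2.0
assert np.abs(div).max() < 1e-10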
Training. We pretrain the CFE on the natural flow dataset with a supervised loss,

L^{CFE}_{sup}(u(t)) = |v_{u}(t) + F(t) - v^*(t)|^2,

where v^*(t) denotes the velocity from ground truth sequences. This supervised training alone constitutes a good loss for the CFE, as it only needs to consider single-step intervals ∆t while the OPs handle longer sequences. Nevertheless, we found that using the differentiable solver with an observation loss,

L^{CFE}_o = |B_r(o^*) - B_r(\mathrm{Solver}[u + \mathrm{CFE}[u, o^*]])|^2,

further improves the accuracy of the inferred force without sacrificing the ground truth match. Here B_r(x) denotes a blur function with a kernel of the form 1/(1 + x/r). The blur helps make the gradients smoother and creates non-zero gradients in places where prediction and target do not overlap. During training, we start with a large radius of r = 16 ∆x for B_r and successively decrease it to r = 2 ∆x. We choose α such that L_F and L^*_o are of the same magnitude when the force loss spikes (see Fig. 15).

After the CFE is trained, we successively train the OPs, starting with the smallest time scale. For the OPs, we train different models for natural flow reconstruction and shape transition, both based on the same CFE model. We pre-train all OPs independently with a supervised observation loss before jointly training them end-to-end with the objective loss function (Eq. 4) and the differentiable solver to find the optimal trajectory. We use the OPs trained with the staggered execution scheme as initialization for the prediction refinement scheme. The complexity of solving the Navier-Stokes equations over many time steps in this example requires such a fully supervised initialization step. Without it, this setting is so non-linear that the learning process does not converge to a good solution. Hence, it illustrates the importance of combining supervised and unsupervised (requiring differentiable physics) training for challenging learning objectives.

A comparison of the different losses is shown in Fig. 10. The predictions, shown in the top rows of each subfigure, illustrate the differences between the three methods. The supervised predictions, especially the long-term predictions (central images), are blurry because the network learns to average over all ground truth sequences that match the given initial and target state. The differentiable physics solver largely resolves this issue. The predictions are much sharper, but the long-term predictions still do not account for short-term deviations. This can be seen in the central prediction of Fig. 10b, which shows hints of the target state o^*, despite the fact that the actual reconstruction u cannot reach that state at that time. The refined prediction, shown in subfigure (c), is closer to u since it is conditioned on the previous reconstructed state. In the training data, we let the network transform one shape into another at a random location.
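One way to realize the blurred observation loss from above is sketched below; the blur is implemented as a convolution with a normalized kernel of the form 1/(1 + x/r), where x is the distance from the kernel center. The kernel size and normalization are our own illustrative choices, not the exact implementation.

import numpy as np
from scipy.signal import fftconvolve

def blur(field, r):
    """B_r: convolve with a normalized kernel k(x) = 1 / (1 + x / r)."""
    size = int(4 * r) + 1
    y, x = np.mgrid[-size // 2 + 1 : size // 2 + 1, -size // 2 + 1 : size // 2 + 1]
    kernel = 1.0 / (1.0 + np.sqrt(x ** 2 + y ** 2) / r)
    kernel /= kernel.sum()
    return fftconvolve(field, kernel, mode="same")

def blurred_observation_loss(rho_pred, rho_target, r):
    """Blurring both fields creates non-zero gradients where they do not overlap."""
    diff = blur(rho_target, r) - blur(rho_pred, r)
    return np.sum(diff ** 2)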
The differentiable solver and the long-term intuition provided by our execution scheme make it possible to train networks that can infer accurate sequences of control forces. In most cases, the target shapes are closely matched. As our networks infer sequences over time, we refer readers to the supplemental material (https://ge.in.tum.de/publications/2020-iclr-holl), which contains animations of additional sequences.

Generalization to multiple shapes. Splitting the reconstruction task into prediction and correction has the additional benefit of having full access to the intermediate predictions o^p. These model real states of the system, so classical processing or filter operations can be applied to them as well. We demonstrate this by generalizing our method to m > 1 shapes that evolve within the same domain. Figure 11 shows an example of two weakly-interacting shape transitions. We implement this by executing the OPs independently for each transition k ∈ {1, 2, ..., m} while inferring the control force F(t) on the joint system. This is achieved by adding the predictions of the smoke density ρ before passing it to the CFE network,

\tilde{o}^p = \sum_{k=1}^{m} o^p_k.

The resulting force is then applied to all sequences individually so that smoke from one transition does not end up in another target state. Using this scheme, we can define start and end positions for arbitrarily many shapes and let them evolve together.

Evaluation of force strengths. The average force strengths are detailed in Tab. 2, while Figure 12 gives a more detailed analysis of the force strengths. As expected from using an L2 regularizer on the force, large values are exponentially rare in the solutions inferred from our test set. None of the hierarchical execution schemes exhibit large outliers. The prediction refinement requires the least amount of force to match the target, slightly ahead of the staggered execution trained with the same loss. The supervised training produces trajectories with reduced continuity that result in larger forces being applied.

D.3 INCOMPRESSIBLE FLUID WITH INDIRECT CONTROL

As a fourth test environment, we target a case with increased complexity, where the network no longer has the means to directly control the full fluid volume. Instead, the network can only apply forces in the peripheral regions, with a total of more than 5000 control parameters per step. The obstacles prevent fluid from passing through them, and the domain is enclosed with solid boundaries from the left, right and bottom. This leads to additional hard constraints and interplays between constraints in the physical model, and as such provides an interesting and challenging test case for our method. The domain has three target regions (buckets) separated by walls at the top of the domain, into which a volume of smoke should be transported from any position in the center part. Both the initial position and the target bucket are randomized for our training set of 3600 examples and test set of 100 examples. Each sequence consists of 16 time steps.

In this case, the control is indirect since the smoke density lies outside the controlled area at all times. Only the incompressibility condition allows the network to influence the velocity outside the controlled area. This forces the model to consider the global context and synchronize a large number of parameters to create a desired flow field.
The requirement of complex synchronized force fields makes generating reliable training data difficult, as manual or random sampling is unlikely to produce a directed velocity field in the center. We therefore skip the pretraining process and directly train the CFE using the differentiable solver, while the OP networks are trained as before with r = 2 ∆x.

To evaluate how well the learning method performs, we measure how much of the smoke density ends up inside the buckets and how much force was applied in total. For reference, we replace the observation predictions with an algorithm that moves the smoke towards the bucket in a straight line. Averaged over 100 examples from the test set, the resulting model manages to put 89% ± 2.6% of the smoke into the target bucket. In contrast, the model trained with our full algorithm moves 99.22% ± 0.15% of the smoke into the target buckets while requiring 19.1% ± 1.0% less force.

We also compare our method to an iterative optimization which directly optimizes the control velocities. We use the ADAM optimizer with a learning rate of 0.1. Despite the highly non-linear setup, the gradients are stable enough to quickly let the smoke flow in the right direction. Fig. 14 shows how the trajectories improve during optimization. After around 60 optimization steps, the smoke distribution starts reaching the target bucket in some examples. Over the next 600 iterations, it converges to a configuration in which 82.1% ± 7.3% of the smoke ends up in the correct bucket.

D.4 COMPARISON TO SHOOTING METHODS

We compare the sequences inferred by our trained models to classical shooting optimizations using our differentiable physics solver to directly optimize F(t) with the objective loss L (Eq. 4) for a single input. We make use of stream functions (Lamb, 1932), as in the second experiment, to ensure the incompressibility condition is fulfilled. For this comparison, the velocities of all steps are initialized with a normal distribution with µ = 0 and σ = 0.01 so that the initial trajectory does not significantly alter the initial state, u(t) ≈ u(t_0).

We first show how a simple single-shooting algorithm (Zhou et al., 1996) fares with our Navier-Stokes setup. When solving the resulting optimization problem using single shooting, strong artifacts in the reconstructions can be observed, as shown in Figure 17a. This undesirable behavior stems from the nonlinearity of the Navier-Stokes equations, which causes the gradients ∆u to become noisy and unreliable when they are recurrently backpropagated through many time steps. Unsurprisingly, the single-shooting optimizer converges to an undesirable local minimum.

As single shooting is well known to have problems with non-trivial problem settings, we employ a multi-scale shooting (MS) method (Hartmann et al., 2014). This solver first computes the trajectory on a coarsely discretized version of the problem before iteratively refining the discretization. For the first resolution, we use 1/16 of the original width and height, which reduces both the number of control parameters and the nonlinear effects from the physics model. By employing an exponential learning rate decay, this multi-scale optimization converges reliably for all examples. We use the ADAM optimizer to compute the control variable updates from the gradients of the differentiable Navier-Stokes solver.
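The coarse-to-fine strategy can be sketched as follows: optimize the control variables on a downsampled problem first, then upsample the result as the warm start for the next finer level. In the sketch below, a toy quadratic objective stands in for the differentiable solver, so only the control flow of the multi-scale loop is meaningful.

import numpy as np

def downsample(a):
    """Average-pool by a factor of two."""
    return 0.25 * (a[::2, ::2] + a[1::2, ::2] + a[::2, 1::2] + a[1::2, 1::2])

def upsample(a):
    """Nearest-neighbor upsampling by a factor of two."""
    return a.repeat(2, axis=0).repeat(2, axis=1)

def multiscale_optimize(target, levels=3, steps=100, lr=0.1):
    """Coarse-to-fine shooting on the toy objective |c - target|^2."""
    targets = [target]
    for _ in range(levels - 1):
        targets.append(downsample(targets[-1]))     # resolution pyramid
    controls = np.zeros_like(targets[-1])
    for tgt in reversed(targets):                   # coarsest level first
        if controls.shape != tgt.shape:
            controls = upsample(controls)           # warm-start the finer level
        for _ in range(steps):
            controls -= lr * 2 * (controls - tgt)   # gradient step on the toy objective
    return controls

target = np.random.randn(64, 64)
assert np.allclose(multiscale_optimize(target), target, atol=1e-3)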
An averaged set of representative convergence curves for this setup is shown in Figure 15. The objective loss (Eq. 4) is shown in its decomposed state as the sum of the observation loss L^*_o, shown in Figure 15a, and the force loss L_F, shown in Figure 15b. Due to the initialization of all velocities with small values, the force loss starts out small. For the first 1000 iteration steps, L^*_o dominates, which causes the system to move towards the target state o^*. This trajectory is not ideal, however, as more force than necessary is applied. Once observation loss and force loss are of the same magnitude, the optimization refines the trajectory to use less force.

We found that the trajectories predicted by our neural-network-based method correspond to performing about 1500 steps with the MS optimization while requiring less tuning. Reconstructions of the same example are compared in Figure 17. Performing the MS optimization up to this point took 131 seconds on a GTX 1080 Ti graphics card for a single 16-frame sequence, while the network inference ran for 0.5 seconds. For longer sequences, this gap grows further because the network inference time scales with O(n). This could only be matched if the number of iterations for the MS optimization scaled with O(1), which is not the case for most problems.

These tests indicate that our model has successfully internalized the behavior of a large class of physical systems, and can exert the right amount of force to reach the intended goal. The large number of iterations required for the single-case shooting optimization highlights the complexity of the individual solutions. Interestingly, the network also benefits from the much more difficult task of learning a whole manifold of solutions: comparing solutions with similar observation loss for the MS algorithm and our network, the former often finds solutions that are unintuitive and contain noticeable detours, e.g., not taking a straight path for the density matching examples of Fig. 5. In such situations, our network benefits from having to represent the solution manifold, instead of aiming for single-task optimizations. As the solutions change relatively smoothly, the complex task effectively regularizes the inference of new solutions and gives the network a more global view. The shooting optimizations, in contrast, have to rely purely on local gradients for single shooting or on manually crafted multi-resolution schemes for MS.

Our method can also be employed to support the MS optimization by initializing it with the velocities inferred by the networks. In this case, shown in Figure 16, both L^*_o and L_F decrease right from the beginning, similar to the behavior in Figure 15 from iteration 1500 on. The reconstructed trajectory from the neural-network-based method is so close to the optimum that the multi-resolution approach described above is not necessary.

D.5 ADDITIONAL RESULTS

In Fig. 18, we provide a visual overview of a subset of the sequences that can be found in the supplemental materials. It contains 16 randomly selected reconstructions for each of the natural flow, the shape transitions, and the indirect control examples. In addition, the supplemental material, available at https://ge.in.tum.de/publications/2020-iclr-holl, highlights the differences between unsupervised, staggered, and refined versions of our approach.
1. What is the focus of the paper regarding system control using neural networks and PDE solvers?
2. What are the strengths of the proposed method, particularly its efficiency and effectiveness compared to standard iterative optimization?
3. Do you have any questions or concerns about the experimental results, such as the choice of L_o^* and alpha, the necessity of pretraining, and the number of time steps in indirect control?
4. Are there any suggestions for improving the paper, such as trimming down the verbosity, improving naming consistency, and discussing the computational complexity of the proposed method?
Review
In this paper, the authors outline a method for system control using an "agent" formed by two neural networks together with a differentiable grid-based PDE solver (assuming the PDE describing the system is known). The agent is split into a control force estimator (CFE), which applies a force to advance the state of the controlled system, and an observation predictor (OP), which predicts the trajectory needed to reach the target state. The objective is to reach the target state with a minimal total amount of applied force. The order of CFE and OP calls is discussed, as is the importance of keeping the trajectory predictions conditioned on the actual previous state of the system so that errors from previous steps can be taken into account. Three application examples are discussed: Burgers' equation (1D), and incompressible flow (2D) with direct and indirect control. In all cases the proposed scheme of "prediction refinement" leads to results that are better than or comparable to standard iterative optimization, and is much more computationally efficient (at inference time, not taking into account the cost of training).

The paper presents an interesting mix of neural networks and traditional PDE solvers for system control, and I vote for acceptance. An additional advantage of the paper is the authors' promise to open source their differentiable PDE solver implemented in TensorFlow, which should make it easy for others to build upon their work.

The text is easy to read, but quite verbose, with many of the technical details relegated to the (sizeable) appendices. I would recommend trying to trim it down where possible (for instance, the description of the U-nets could be more compact, perhaps in table form; the 2nd paragraph of the background section seems a bit out of context and could probably be omitted, etc).

Questions and suggestions for improvements:
* What form of L_o^* and alpha was used in all the experiments?
* It looks like pretraining was used for all cases except for the most challenging one with indirect control. Was it truly necessary for the simpler experiments?
* Improve naming consistency. It looks like "differentiable physics" and "differentiable solver" are used for the same thing in different parts of the paper. My recommendation would be to use the latter term everywhere.
* How many time steps are used in the indirect control experiment?
* IIUC, the optimization of Eq. 3 is always done end-to-end. Have any experiments been done to estimate how many time steps can be reliably handled by the proposed procedure before the optimization problem becomes too hard?
ICLR
Title Learning to Control PDEs with Differentiable Physics

Abstract Predicting outcomes and planning interactions with the physical world are longstanding goals for machine learning. A variety of such tasks involves continuous physical systems, which can be described by partial differential equations (PDEs) with many degrees of freedom. Existing methods that aim to control the dynamics of such systems are typically limited to relatively short time frames or a small number of interaction parameters. We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames. We propose to split the problem into two distinct tasks: planning and control. To this end, we introduce a predictor network that plans optimal trajectories and a control network that infers the corresponding control parameters. Both stages are trained end-to-end using a differentiable PDE solver. We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs such as the incompressible Navier-Stokes equations.

1 INTRODUCTION

Intelligent systems that operate in the physical world must be able to perceive, predict, and interact with physical phenomena (Battaglia et al., 2013). In this work, we consider physical systems that can be characterized by partial differential equations (PDEs). PDEs constitute the most fundamental description of evolving systems and are used to describe every physical theory, from quantum mechanics and general relativity to turbulent flows (Courant & Hilbert, 1962; Smith, 1985). We aim to endow artificial intelligent agents with the ability to direct the evolution of such systems via continuous controls.

Such optimal control problems have typically been addressed via iterative optimization. Differentiable solvers and the adjoint method enable efficient optimization of high-dimensional systems (Toussaint et al., 2018; de Avila Belbute-Peres et al., 2018; Schenck & Fox, 2018). However, direct optimization through gradient descent (single shooting) at test time is resource-intensive and may be difficult to deploy in interactive settings. More advanced methods exist, such as multiple shooting and collocation, but they commonly rely on modeling assumptions that limit their applicability, and still require computationally intensive iterative optimization at test time.

Iterative optimization methods are expensive because they have to start optimizing from scratch and typically require a large number of iterations to reach an optimum. In many real-world control problems, however, agents have to repeatedly make decisions in specialized environments, and reaction times are limited to a fraction of a second. This motivates the use of data-driven models such as deep neural networks, which combine short inference times with the capacity to build an internal representation of the environment. We present a novel deep learning approach that can learn to represent solution manifolds for a given physical environment, and is orders of magnitude faster than iterative optimization techniques. The core of our method is a hierarchical predictor-corrector scheme that temporally divides the problem into easier subproblems. This enables us to combine models specialized to different time scales in order to control long sequences of complex high-dimensional systems.
We train our models using a differentiable PDE solver that can provide the agent with feedback of how interactions at any point in time affect the outcome. Our models learn to represent manifolds containing a large number of solutions, and can thereby avoid local minima that can trap classic optimization techniques. We evaluate our method on a variety of control tasks in systems governed by advection-diffusion PDEs such as the Navier-Stokes equations. We quantitatively evaluate the resulting sequences on how well they approximate the target state and how much force was exerted on the physical system. Our method yields stable control for significantly longer time spans than alternative approaches. 2 BACKGROUND Physical problems commonly involve nonlinear PDEs, often with many degrees of freedom. In this context, several works have proposed methods for improving the solution of PDE problems (Long et al., 2018; Bar-Sinai et al., 2019; Hsieh et al., 2019) or used PDE formulations for unsupervised optimization (Raissi et al., 2018). Lagrangian fluid simulation has been tackled with regression forests (Ladicky et al., 2015), graph neural networks (Mrowca et al., 2018; Li et al., 2019), and continuous convolutions (Ummenhofer et al., 2020). Data-driven turbulence models were trained with MLPs (Ling et al., 2016). Fully-convolutional networks were trained for pressure inference (Tompson et al., 2017) and advection components were used in adversarial settings (Xie et al., 2018). Temporal updates in reduced spaces were learned via the Koopman operator (Morton et al., 2018). In a related area, deep networks have been used to predict chemical properties and the outcome of chemical reactions (Gilmer et al., 2017; Bradshaw et al., 2019). Differentiable solvers have been shown to be useful in a variety of settings. Degrave et al. (2019) and de Avila Belbute-Peres et al. (2018) developed differentiable simulators for rigid body mechanics. (See Popovic et al. (2000) for earlier work in computer graphics.) Toussaint et al. (2018) applied related techniques to manipulation planning. Specialized solvers were developed to infer protein structures (Ingraham et al., 2019), interact with liquids (Schenck & Fox, 2018), control soft robots (Hu et al., 2019), and solve inverse problems that involve cloth (Liang et al., 2019). Like ours, these works typically leverage the automatic differentiation of deep learning pipelines (Griewank & Walther, 2008; Maclaurin et al., 2015; Amos & Kolter, 2017; Mensch & Blondel, 2018; van Merriënboer et al., 2018; Chen et al., 2018; Bradbury et al., 2018; Paszke et al., 2019; Tokui et al., 2019). However, while the works above target Lagrangian solvers, i.e. reference frames moving with the simulated material, we address grid-based solvers, which are particularly appropriate for dense, volumetric phenomena. The adjoint method (Lions, 1971; Pironneau, 1974; Jameson, 1988; Giles & Pierce, 2000; Bewley, 2001; McNamara et al., 2004) is used by most machine learning frameworks, where it is commonly known as reverse mode differentiation (Werbos, 2006; Chen et al., 2018). While a variety of specialized adjoint solvers exist (Griewank et al., 1996; Fournier et al., 2012; Farrell et al., 2013), these packages do not interface with production machine learning frameworks. A supporting contribution of our work is a differentiable PDE solver called ΦFlow that integrates with TensorFlow (Abadi et al., 2016) and PyTorch (Paszke et al., 2019). 
It is publicly available at https://github.com/tumpbs/PhiFlow.

3 PROBLEM

Consider a physical system u(x, t) whose natural evolution is described by the PDE

\frac{\partial u}{\partial t} = \mathcal{P}\left(u, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}, \dots, y(t)\right),   (1)

where P models the physical behavior of the system and y(t) denotes external factors that can influence the system. We now introduce an agent that can interact with the system by controlling certain parameters of the dynamics. This could be the rotation of a motor or fine-grained control over a field. We factor out this influence into a force term F, yielding

\frac{\partial u}{\partial t} = \mathcal{P}\left(u, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}, \dots\right) + F(t).   (2)

The agent can now be modelled as a function that computes F(t). As solutions of nonlinear PDEs were shown to yield low-dimensional manifolds (Foias et al., 1988; Titi, 1990), we target solution manifolds of F(t) for a given choice of P with suitable boundary conditions. This motivates our choice to employ deep networks for our agents.

In most real-world scenarios, it is not possible to observe the full state of a physical system. When considering a cloud of smoke, for example, the smoke density may be observable while the velocity field may not be seen directly. We model the imperfect information by defining the observable state of u as o(u). The observable state is problem dependent, and our agent is conditioned only on these observations, i.e. it does not have access to the full state u.

Using the above notation, we define the control task as follows. An initial observable state o_0 of the PDE as well as a target state o^* are given (Figure 1a). We are interested in a reconstructed trajectory u(t) that matches these states at t_0 and t^*, i.e. o_0 = o(u(t_0)), o^* = o(u(t^*)), and minimizes the amount of force applied within the simulation domain D (Figure 1b):

L_F[u(t)] = \int_{t_0}^{t^*} \int_D |F_u(t)|^2 \, dx \, dt.   (3)

Taking discrete time steps ∆t, the reconstructed trajectory u is a sequence of n = (t^* − t_0)/∆t states. When an observable dimension cannot be controlled directly, there may not exist any trajectory u(t) that matches both o_0 and o^*. This can stem from either physical constraints or numerical limitations. In these cases, we settle for an approximation of o^*. To measure the quality of the approximation of the target, we define an observation loss L^*_o. The form of this loss can be chosen to fit the problem. We combine Eq. 3 and the observation loss into the objective function

L[u(t)] = \alpha \cdot L_F[u(t)] + L^*_o(u(t^*)),   (4)

with α > 0. We use square brackets to denote functionals, i.e. functions depending on fields or series rather than single values.

4 PRELIMINARIES

Differentiable solvers. Let u(x, t) be described by a PDE as in Eq. 1. A regular solver can move the system forward in time via Euler steps:

u(t_{i+1}) = \mathrm{Solver}[u(t_i), y(t_i)] = u(t_i) + \Delta t \cdot \mathcal{P}(u(t_i), \dots, y(t_i)).   (5)

Each step moves the system forward by a time increment ∆t. Repeated execution produces a trajectory u(t) that approximates a solution to the PDE. This functionality for time advancement by itself is not well-suited to solve optimization problems, since gradients can only be approximated by finite differencing. For high-dimensional or continuous systems, this method becomes computationally expensive because a full trajectory needs to be computed for each optimizable parameter. Differentiable solvers resolve this issue by solving the adjoint problem (Pontryagin, 1962) via analytic derivatives. The adjoint problem computes the same mathematical expressions while working with lower-dimensional vectors. A differentiable solver can efficiently compute the derivatives with respect to any of its inputs, i.e. ∂u(t_{i+1})/∂u(t_i) and ∂u(t_{i+1})/∂y(t_i). This allows for gradient-based optimization of inputs or control parameters over an arbitrary number of time steps.
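As a minimal illustration of this idea, the sketch below rolls out Euler steps of Burgers' equation in PyTorch and obtains gradients of a terminal loss with respect to every control force via backpropagation, i.e. reverse-mode differentiation of the solver. It is a conceptual stand-in for a full differentiable solver such as ΦFlow, and the parameter values are illustrative.

import torch

def step(u, f, dx=1.0 / 32, dt=0.01, nu=0.003):
    """Differentiable Euler step of the forced Burgers' equation (Eqs. 2, 5, 6)."""
    dudx = (torch.roll(u, -1) - torch.roll(u, 1)) / (2 * dx)
    d2udx2 = (torch.roll(u, -1) - 2 * u + torch.roll(u, 1)) / dx ** 2
    return u + dt * (-u * dudx + nu * d2udx2 + f)

u = torch.linspace(0, 1, 32) ** 2                  # arbitrary initial state
target = torch.zeros(32)
forces = torch.zeros(16, 32, requires_grad=True)   # one control field per time step

for f in forces:                                   # roll out the full trajectory
    u = step(u, f)
loss = ((u - target) ** 2).sum()                   # observation loss at t*
loss.backward()                                    # adjoint pass through all steps
print(forces.grad.shape)                           # gradients for every control force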
Iterative trajectory optimization. Many techniques exist that try to find optimal trajectories by starting with an initial guess for F(t) and slightly changing it until reaching an optimum. The simplest of these is known as single shooting. In one optimization step, it simulates the full dynamics, then backpropagates the loss through the whole sequence to optimize the controls (Kraft, 1985; Leineweber et al., 2003). Replacing F(t) with an agent F(t | o_t, o^*), which can be parameterized by a deep network, yields a simple training method. For a sequence of n frames, this setup contains n linked copies of the agent and is depicted in Figure 2. We refer to such an agent as a control force estimator (CFE).

Optimizing such a chain of CFEs is both computationally expensive and causes gradients to pass through a potentially long sequence of highly nonlinear simulation steps. When the reconstruction u is close to an optimal trajectory, this is not a problem because the gradients ∆u are small and the operations executed by the solver are differentiable by construction. The solver can therefore be locally approximated by a first-order polynomial and the gradients can be safely backpropagated. For large ∆u, e.g. at the beginning of an optimization, this approximation breaks down, causing the gradients to become unstable while passing through the chain. This instability in the training process can prevent single-shooting approaches from converging and deep networks from learning unless they are initialized near an optimum.

Alternatives to single shooting exist, promising better and more efficient convergence. Multiple shooting (Bock & Plitt, 1984) splits the trajectory into segments with additional defect constraints. Depending on the physical system, this method may have to be adjusted for specific problems (Treuille et al., 2003). Collocation schemes (Hargraves & Paris, 1987) model trajectories with splines. While this works well for particle trajectories, it is poorly suited for Eulerian solvers where the evolution of individual points does not reflect the overall motion. Model reduction can be used to reduce the dimensionality or nonlinearity of the problem, but generally requires domain-specific knowledge. When applicable, these methods can converge faster or in a more stable manner than single shooting. However, as we are focusing on a general optimization scheme in this work, we will use single shooting and its variants as baseline comparisons.

Supervised and differentiable physics losses. One of the key ingredients in training a machine learning model is the choice of loss function. For many tasks, supervised losses are used, i.e. losses that directly compare the output of the model for a specific input with the desired ground truth. While supervised losses can be employed for trajectory optimization, far better loss functions are possible when a differentiable solver is available. We will refer to these as differentiable physics loss functions. In this work, we employ a combination of supervised and differentiable physics losses, as both come with advantages and disadvantages.

One key limitation of supervised losses is that they can only measure the error of a single time step. Therefore, an agent cannot get any measure of how its output would influence future time steps. Another problem arises from the form of supervised training data, which comprises input-output pairs that may be obtained directly from data generation or through iterative optimization. Since optimal control problems are generally not unimodal, there can exist multiple possible outputs for one input. This ambiguity in the supervised training process will lead to suboptimal predictions as the network will try to find a compromise between all possible outputs instead of picking one of them.

Differentiable physics losses solve these problems by allowing the agent to be directly optimized for the desired objective (Eq. 4). Unlike supervised losses, differentiable physics losses require a differentiable solver to backpropagate the gradients through the simulation. Multiple time steps can be chained together, which is a key requirement since the objective (Eq. 4) explicitly depends on all time steps through L_F[u(t)] (Eq. 3). As with iterative solvers, one optimization step for a sequence of n frames then invokes the agent n times before computing the loss, each invocation followed by a solver step. The employed differentiable solver backpropagates the gradients through the whole sequence, which gives the model feedback on (i) how its decisions change the future trajectory and (ii) how to handle input states that were reached because of its previous decisions. Since no ground truth needs to be provided, multi-modal problems naturally converge towards one solution.
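A single differentiable-physics training step for such an agent could be sketched as follows, reusing the step function from the previous snippet; the agent architecture and hyperparameters here are illustrative assumptions, not our actual configuration.

import torch
import torch.nn as nn

agent = nn.Sequential(nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 32))  # CFE stand-in
opt = torch.optim.Adam(agent.parameters(), lr=1e-4)
alpha = 1e-2                                         # weight of the force term in Eq. 4

def training_step(u0, o_star, n=16):
    u, force_loss = u0, 0.0
    for _ in range(n):
        f = agent(torch.cat([u, o_star]))            # F(t | o_t, o*)
        force_loss = force_loss + (f ** 2).sum()     # accumulates L_F, Eq. 3
        u = step(u, f)                               # differentiable solver step
    loss = alpha * force_loss + ((u - o_star) ** 2).sum()  # objective, Eq. 4
    opt.zero_grad()
    loss.backward()                                  # gradients through all n steps
    opt.step()
    return loss.item()

loss = training_step(torch.zeros(32), torch.ones(32))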
5 METHOD

In order to optimally interact with a physical system, an agent has to (i) build an internal representation of an optimal observable trajectory o(u(t)) and (ii) learn what actions to take to move the system along the desired trajectory. These two steps strongly resemble the predictor-corrector method (Press et al., 2007). Given o(t), a predictor-corrector method computes o(t + ∆t) in two steps. First, a prediction step approximates the next state, yielding o^p(t + ∆t). Then, the correction uses o^p(t + ∆t) to refine the initial approximation and obtain o(t + ∆t). Each step can, to some degree, be learned independently. This motivates splitting the agent into two neural networks: an observation predictor (OP) network that infers intermediate states o^p(t_i), i ∈ {1, 2, ..., n − 1}, planning out a trajectory, and a corrector network (CFE) that estimates the control force F(t_i | o(u_i), o^p_{i+1}) to follow that trajectory as closely as possible. This splitting has the added benefit of exposing the planned trajectory, which would otherwise be inaccessible.

As we will demonstrate, it is crucial for the prediction stage to incorporate knowledge about longer time spans. We address this by modelling the prediction as a temporally hierarchical process, recursively dividing the problem into smaller subproblems. To achieve this, we let the OP not directly infer o^p(t_{i+1} | o(u_i), o^*) but instead model it to predict the optimal center point between two states at times t_i, t_j, with i, j ∈ {1, 2, ..., n − 1}, j > i, i.e. o^p((t_i + t_j)/2 | o_i, o_j). This function is much more general than predicting the state of the next time step since two arbitrary states can be passed as arguments. Recursive OP evaluations can then partition the sequence until a prediction o^p(t_i) for every time step t_i has been made.
This scheme naturally enables scaling to arbitrary time frames or arbitrary temporal resolutions, assuming that the OP can correctly anticipate the physical behavior. Since physical systems often exhibit different behaviors on different time scales and the OP can be called with states separated by arbitrary time spans, we condition the OP on the time scale it is evaluated on by instantiating and training a unique version of the model for every time scale. This simplifies training and does not significantly increase the model complexity as we use factors of two for the time scales, and hence the number of required models scales with O(log_2 n). We will refer to one instance of an OP_n by the time span between its input states, measured in the number of frames n = (t_j − t_i)/∆t.

Execution order. With the CFE and OP_n as building blocks, many algorithms for solving the control problem, i.e. for computing F(t), can be assembled and trained. We compared a variety of algorithms and found that a scheme we will refer to as prediction refinement produces the best results. It is based on the following principles: (i) always use the finest scale OP possible to make a prediction, (ii) execute the CFE followed by a solver step as soon as possible, (iii) refine predictions after the solver has computed the next state. The algorithm that realizes these goals is shown in Appendix B with an example for n = 8. To understand the algorithm and resulting execution orders, it is helpful to consider simpler algorithms first.

The simplest combination of CFE and OP_n invocations that solves the full trajectory, shown in Figure 3a, can be described as follows. Initially, all intermediate states are predicted hierarchically. The first prediction is the half-way point o^p(t_{n/2} | o_0, o^*), generated by the OP_n. Using that as input to an OP_{n/2} results in new predictions at t_{n/4} and t_{3n/4}. Continuing with this scheme, a prediction can be made for each t_i, i ∈ {1, ..., n − 1}. Next, the actual trajectory is evaluated step by step. For each step t_i, the CFE computes the control force F(t_i) conditioned on the state at t_i and the prediction o^p(t_{i+1}). Once F(t_i) is known, the solver can step the simulation to the next state at t_{i+1}. This algorithm finds a trajectory in time O(n) since n CFE calls and n − 1 OP calls are required in total (see Appendix B). However, there are inherent problems with this algorithm. The physical constraints of the PDE and potential approximation errors of the CFE can result in observations that are only matched partially. This can result in the reconstructed trajectory exhibiting undesirable oscillations, often visible as jittering. When subsequent predictions do not line up perfectly, large forces may be applied by the CFE or the reconstructed trajectory might stop following the predictions altogether.

This problem can be alleviated by changing the execution order of the two-stage algorithm described above. The resulting algorithm is shown in Figure 3b and will be referred to as staggered execution. In this setup, the simulation is advanced as soon as a prediction for the next observable state exists, and OPs are only executed when their state at time t_i is available. This staggered execution scheme allows future predictions to take deviations from the predicted trajectory into account, preventing a divergence of the actual evolution o(u(t)) from the prediction o^p(t).
While the staggered execution allows most predictions to correct for deviations from the predicted trajectory o^p, this scheme leaves several predictions unmodified. Most notably, the prediction o^p(t_{n/2}), which is inferred from just the initial state and the desired target, remains unchanged. This prediction must therefore be able to guide the reconstruction in the right direction without knowing about deviations in the system that occurred up to t_{n/2−1}. As a practical consequence, a network trained with this scheme typically learns to average over the deviations, resulting in blurred predictions (see Appendix D.2).

Algorithm 1: Recursive algorithm computing the prediction refinement. The algorithm is called via Reconstruct[o_0, o^*, absent] to reconstruct a full trajectory from o_0 to o^*.

function Reconstruct[o(u_0), o_n, o_2n]:
    Input: initial observation o(u_0), observation o_n, optional observation o_2n
    Output: observation of the reconstructed state o(u_n)
    if n = 1 then
        F ← CFE[o(u_0), o_1]
        u_1 ← Solver[u_0, F]
        return o(u_1)
    else
        o_{n/2} ← OP[o(u_0), o_n]
        o(u_{n/2}) ← Reconstruct[o(u_0), o_{n/2}, o_n]
        if o_2n present then
            o_{3n/2} ← OP[o_n, o_2n]
            o_n ← OP[o(u_{n/2}), o_{3n/2}]
        else
            o_{3n/2} ← absent
        end
        o(u_n) ← Reconstruct[o(u_{n/2}), o_n, o_{3n/2}]
        return o(u_n)
    end

The prediction refinement scheme, listed in Algorithm 1 and illustrated in Figure 3c, solves this problem by re-evaluating existing predictions whenever the simulation progresses in time. Not all predictions need to be updated, though, and an update to a prediction at a finer time scale can depend on a sequence of other predictions. The prediction refinement algorithm that achieves this in an optimal form is listed in Appendix B. While the resulting execution order is difficult to follow for longer sequences with more than n = 8 frames, we give an overview of the algorithm by considering the prediction for time t_{n/2}. After the first center-frame prediction o^p(t_{n/2}) of the n-frame sequence is made by OP_n, the prediction refinement algorithm calls itself recursively until all frames up to frame n/4 are reconstructed from the CFE and the solver. The center prediction is then updated using OP_{n/2} for the next smaller time scale compared to the previous prediction. The call of OP_{n/2} also depends on o^p(t_{3n/4}), which was predicted using OP_{n/2}. After half of the remaining distance to the center is reconstructed by the solver, the center prediction at t_{n/2} is updated again, this time by the OP_{n/4}, including all prediction dependencies. Hence, the center prediction is continually refined every time the temporal distance between the latest reconstruction and the prediction halves, until the reconstruction reaches that frame. This way, all final predictions o^p(t_i) are conditioned on the reconstruction of the previous state u(t_{i−1}) and can therefore account for all previous deviations.

The prediction refinement scheme requires the same number of force inferences but an increased number of OP evaluations compared to the simpler algorithms. With a total of 3n − 2 log_2(n) − 3 OP evaluations (see Appendix B), it is of the same complexity, O(n). In practice, this refinement scheme incurs only a small overhead in terms of computation, which is outweighed by the significant gains in quality of the learned control function.

6 RESULTS

We evaluate the capabilities of our method to learn to control physical PDEs in three different test environments of increasing complexity. We first target a simple but nonlinear 1D equation, for which we present an ablation study to quantify accuracy.
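For readers who prefer code, a direct Python transcription of the Reconstruct recursion from Algorithm 1 could look as follows; OP, the CFE, and the solver are reduced to simple stand-ins (midpoint interpolation and a direct state update) so that the recursion structure is runnable in isolation.

import numpy as np

def OP(a, b):
    return 0.5 * (a + b)             # stand-in: predict the midpoint state

def reconstruct(u0, o_n, o_2n=None, n=8, dt=1.0):
    """Python transcription of Algorithm 1 with a trivial CFE and solver."""
    if n == 1:
        force = (o_n - u0) / dt      # CFE[o(u0), o1], as in the Burgers experiment
        return u0 + dt * force       # Solver[u0, F]; a real solver also applies P
    o_half = OP(u0, o_n)
    u_half = reconstruct(u0, o_half, o_n, n // 2, dt)
    if o_2n is not None:             # refine the prediction for frame n
        o_3half = OP(o_n, o_2n)
        o_n = OP(u_half, o_3half)
    else:
        o_3half = None
    return reconstruct(u_half, o_n, o_3half, n // 2, dt)

u0, target = np.zeros(32), np.ones(32)
final = reconstruct(u0, target, None, n=8)
assert np.allclose(final, target)    # with full observability, o* is reached exactly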
We then study two-dimensional problems: an incompressible fluid and a fluid with complex boundaries and indirect control. Full details are given in Appendix D. Supplemental material containing additional sequences for all of the tests can be downloaded from https://ge.in.tum.de/publications/2020-iclr-holl.

Burger's equation. Burger's equation is a nonlinear PDE that describes the time evolution of a single field, $u$ (LeVeque, 1992). Using Eq. 1, it can be written as

$$\mathcal{P}\left(u, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}\right) = -u \cdot \frac{\partial u}{\partial x} + \nu \frac{\partial^2 u}{\partial x^2}. \quad (6)$$

Examples of the unperturbed evolution are shown in Figure 4a. We let the whole state be observable and controllable, i.e. $o(t) = u(t)$, which implies that $o^*$ can always be reached exactly. The results of our ablation study with this equation are shown in Table 1. The table compares the resulting forces applied by differently trained models when reconstructing a ground-truth sequence (Figure 4e).

Table 1: Quantitative reconstruction evaluation using Burger's equation, avg. for 100 examples.

Execution scheme             | Training loss | Force $\int |F|\,dt$ | Inference time (ms)
CFE chain                    | Supervised    | 83.4 ± 2.0           | 0.024 ± 0.013
CFE chain                    | Diff. Physics | 28.8 ± 0.8           | 0.024 ± 0.013
Staggered                    | Supervised    | 34.3 ± 1.1           | 1.15 ± 0.19
Staggered                    | Diff. Physics | 15.3 ± 0.7           | 1.15 ± 0.19
Refined                      | Diff. Physics | 14.2 ± 0.7           | 3.05 ± 0.37
Iterative optim. (60 iter.)  | Diff. Physics | 15.3 ± 1.6           | 52.7 ± 2.1
Iterative optim. (300 iter.) | Diff. Physics | 10.2 ± 1.9           | 264.0 ± 3.0

The variant denoted by CFE chain uses a neural network to infer the force without any intermediate predictions. With a supervised loss, this method learns to approximate a single step well. However, for longer sequences, results quickly deviate from an ideal trajectory and diverge because the network never learned to account for errors made in previous steps (Figure 4b). Training the network with the objective loss (Eq. 4) using the differentiable solver greatly increases the quality of the reconstructions. On average, it applies only 34% of the force used by the supervised model as it learns to correct the temporal evolution of the PDE model. Next, we evaluate variants of our predictor-corrector approach, which hierarchically predicts intermediate states. Here, the CFE is implemented as $F(t_i) = (o^p(t_{i+1}) - u(t_i))/\Delta t$. Unlike the simple CFE chain above, training with the supervised loss and staggered execution produces stable (albeit jittering) trajectories that successfully converge to the target state (Figure 4c). Surprisingly, this supervised method reaches almost the same accuracy as the differentiable CFE, despite not having access to physics-based gradients. However, employing the differentiable physics loss greatly improves the reconstruction quality, producing solutions that are hard to distinguish from ideal trajectories (Figure 4d). The prediction refinement scheme further improves the accuracy, but the differences to the staggered execution are relatively small as the predictions of the latter are already very accurate. Table 1 also lists the results of classic shooting-based optimization applied to this problem. To match the quality of the staggered execution scheme, the shooting method requires around 60 optimization steps. These steps are significantly more expensive to compute, despite the fast convergence. After around 300 iterations, the classic optimization reaches an optimal value of 10.2 and the loss stops decreasing. Starting the iterative optimization with our method as an initial guess pushes the optimum slightly lower to 10.1.
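As a concrete illustration of this setup, here is a minimal NumPy sketch of one explicit Euler step for Burger's equation (Eq. 6) on a periodic 1D grid. The grid spacing, time step, and viscosity are illustrative assumptions, not the paper's exact settings.

```python
# One explicit Euler step for Burger's equation with central differences.
import numpy as np

def burgers_step(u, force, dt=0.03, dx=1.0 / 32, nu=0.003):
    """u(t+dt) = u + dt * (-u * du/dx + nu * d2u/dx2 + F)."""
    du_dx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)           # advection term
    d2u_dx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2    # diffusion term
    return u + dt * (-u * du_dx + nu * d2u_dx2 + force)

# With the full state observable and controllable (o = u), the last-step
# force F(t_{n-1}) = (o_star - u(t_{n-1})) / dt matches o_star exactly,
# which is why only the applied force needs to be compared in Table 1.
```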
Thus, even this relatively simple problem shows the advantages of our learned approach.

Incompressible fluid flow. Next, we apply our algorithm to two-dimensional fluid dynamics problems, which are challenging due to the complexities of the governing Navier-Stokes equations (Batchelor, 1967). For a velocity field $v$, these can be written as

$$\mathcal{P}(v, \nabla v) = -v \cdot \nabla v + \nu \nabla^2 v - \nabla p, \quad (7)$$

subject to the hard constraints $\nabla \cdot v = 0$ and $\nabla \times p = 0$, where $p$ denotes pressure and $\nu$ the viscosity. In addition, we consider a passive density $\rho$ that moves with the fluid via $\partial\rho/\partial t = -v \cdot \nabla\rho$. We set $v$ to be hidden and $\rho$ to be observable, and allow forces to be applied to all of $v$. We run our tests on a $128^2$ grid, resulting in more than 16,000 effective continuous control parameters. We train the OP and CFE networks for two different tasks: reconstruction of natural fluid flows and controlled shape transitions. Example sequences are shown in Figure 5 and a quantitative evaluation, averaged over 100 examples, is given in Table 2. While all methods manage to approximate the target state well, there are considerable differences in the amount of force applied. The supervised technique exerts significantly more force than the methods based on the differentiable solver, resulting in jittering reconstructions. The prediction refinement scheme produces the smoothest transitions, converging to about half the loss of the staggered, non-refined variant. We compare our method to classic shooting algorithms for this incompressible flow problem. While a direct shooting method fails to converge, a more advanced multi-scale shooting approach still requires 1500 iterations to obtain a level of accuracy that our model achieves almost instantly. In addition, our model successfully learns a solution manifold, while iterative optimization techniques essentially start from scratch every time. This global view leads our model to more intuitive solutions and decreases the likelihood of convergence to undesirable local minima. The solutions of our method can also be used as initial guesses for iterative solvers, as illustrated in Appendix D.4. We find that the iterative optimizer with an initial guess converges to solutions that require only 57.4% of the force achieved by the iterative optimizer with default initialization. This illustrates how the more global view of the learned solution manifold can improve the solutions of regular optimization runs. Splitting the task into prediction and correction ensures that intermediate predicted states are physically plausible and allows us to generalize to new tasks. For example, we can infer transitions involving multiple shapes, despite training only on individual shapes. This is demonstrated in Appendix D.2.

Incompressible fluid with indirect control. The next experiment increases the complexity of the fluid control problem by adding obstacles to the simulated domain and limiting the area that can be controlled by the network. An example sequence in this setting is shown in Figure 6. As before, only the density $\rho$ is observable. Here, the goal is to move the smoke from its initial position near the center into one of the three "buckets" at the top. Control forces can only be applied in the peripheral regions, which are outside the visible smoke distribution. Only by synchronizing the 5000 continuous control parameters can a directed velocity field be constructed in the central region.
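To illustrate the passive transport $\partial\rho/\partial t = -v \cdot \nabla\rho$ used above, here is a hedged NumPy sketch of semi-Lagrangian advection with bilinear interpolation, a common choice for grid-based solvers. The paper's actual advection scheme lives inside its differentiable solver and may differ in details.

```python
# Semi-Lagrangian advection of a passive density rho by a velocity field.
import numpy as np

def advect(rho, vx, vy, dt=1.0):
    """Backtrace each cell along the velocity and sample rho there."""
    h, w = rho.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Departure points, clamped to the domain.
    x0 = np.clip(xs - dt * vx, 0, w - 1)
    y0 = np.clip(ys - dt * vy, 0, h - 1)
    # Bilinear interpolation of rho at the departure points.
    x_lo, y_lo = np.floor(x0).astype(int), np.floor(y0).astype(int)
    x_hi, y_hi = np.minimum(x_lo + 1, w - 1), np.minimum(y_lo + 1, h - 1)
    fx, fy = x0 - x_lo, y0 - y_lo
    top = (1 - fx) * rho[y_lo, x_lo] + fx * rho[y_lo, x_hi]
    bot = (1 - fx) * rho[y_hi, x_lo] + fx * rho[y_hi, x_hi]
    return (1 - fy) * top + fy * bot
```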
We first infer trajectories using a trained CFE network and predictions that move the smoke into the desired bucket in a straight line. This baseline manages to transfer 89% ± 2.6% of the smoke into the target bucket. Next we enable the hierarchical predictions and train the OPs. This version manages to maneuver 99.22% ± 0.15% of the smoke into the desired buckets while requiring 19.1% ± 1.0% less force. For comparison, Table 3 also lists success rate and execution time for a direct optimization. Despite only obtaining a low success rate of 82%, the shooting method takes several orders of magnitude longer than evaluating our trained model. Since all optimizations are independent of each other, some find better solutions than others, reflected in the higher standard deviation. The increased number of free parameters and the complexity of the fluid dynamics to be controlled make this problem intractable for the shooting method, while our model can leverage the learned representation to infer a solution very quickly. Further details are given in Appendix D.3.

7 CONCLUSIONS We have demonstrated that deep learning models in conjunction with a differentiable physics solver can successfully predict the behavior of complex physical systems and learn to control them. The introduction of a hierarchical predictor-corrector architecture allowed the model to learn to reconstruct long sequences by treating the physical behavior on different time scales separately. We have shown that using a differentiable solver greatly benefits the quality of solutions since the networks can learn how their decisions will affect the future. In our experiments, hierarchical inference schemes outperform traditional sequential agents because they can easily learn to plan ahead. To model realistic environments, we have introduced observations to our pipeline which restrict the information available to the learning agent. While the PDE solver still requires full state information to run the simulation, this restriction does not apply when the agent is deployed. While we do not believe that learning approaches will replace iterative optimization, our method shows that it is possible to learn representations of solution manifolds for optimal control trajectories using data-driven approaches. Fast inference is vital in time-critical applications and can also be used in conjunction with classical solvers to speed up convergence and ultimately produce better solutions.

8 ACKNOWLEDGEMENTS This work was supported in part by the ERC Starting Grant realFlow (ERC-2015-StG-637014).

B COMPLEXITY OF EXECUTION SCHEMES The staggered execution scheme recursively splits a sequence of length $n$ into smaller sequences, as depicted in Fig. 3b and Fig. 7a for $n = 8$. With each level of recursion depth, the sequence length is cut in half and twice as many predictions need to be performed. The maximum depth depends on the sequence length $t_n - t_0$ and the time steps $\Delta t$ performed by the solver,

$$d_{max} = \log_2\left(\frac{t_n - t_0}{\Delta t}\right) - 1.$$

Therefore, the total number of predictions, equal to the number of OP evaluations, is

$$N_{OP} = 1 + 2 + 4 + \dots + n/2 = \sum_{k=0}^{d_{max}} 2^k = n - 1.$$

The prediction refinement scheme performs more predictions, as can be seen in Fig. 7b. To understand the number of OP evaluations, we need to consider the recursive algorithm Reconstruct[$u_0$, $o_n$, $o_{2n}$], listed in Alg. 1, that reconstructs a sequence or partial sequence of $n$ frames. For the first invocation, the last parameter $o_{2n}$ is absent, but for subsequences, that is not necessarily the case.
Each invocation performs one OP evaluation if $o_{2n}$ is absent, otherwise three. By counting the sequences for which this condition is fulfilled, we can compute the total number of network evaluations to be

$$N_{OP} = 3\sum_{k=0}^{d_{max}} 2^k - 2\log_2(n) = 3n - 2\log_2(n) - 3.$$

C NETWORK ARCHITECTURES AND TRAINING All neural networks used in this work are based on a modified U-net architecture (Ronneberger et al., 2015). The U-net represents a typical multi-level convolutional network architecture with skip connections, which we modify by using residual blocks (He et al., 2016) instead of regular convolutions for each level. We slightly modify this basic layout for some experiments. The network used for predicting observations for the fluid example is detailed in Tab. 4. The input to the network consists of two feature maps containing the current state and the target state. Zero-padding is applied to the input, so that all strided convolutions do not require padding. Next, five residual blocks are executed in order, each decreasing the resolution (1/2, 1/4, 1/8, 1/16, 1/32) while increasing the number of feature maps (4, 8, 16, 16, 16). Each block performs a convolution with kernel size 2 and stride 2, followed by two residual blocks with kernel size 3 and symmetric padding. Inside each block, the number of feature maps stays constant. Three more residual blocks are executed on the lowest resolution of the bowtie structure, after which the decoder part of the network commences, translating features into spatial content. The decoder works as follows: Starting with the lowest resolution, the feature maps are upsampled with linear interpolation. The upsampled maps and the output of the previous block of same resolution are then concatenated. Next, a convolution with 16 filters, a kernel size of 2 and symmetric padding, followed by two more residual blocks, is executed. When the original resolution is reached, only one feature map is produced instead of 16, forming the output of the network. Depending on the dimensionality of the problem, either 1D or 2D convolutions are used. The network used for the indirect control task is modified in the following ways: (i) It produces two output feature maps, representing the velocity $(v_x, v_y)$. (ii) Four feature maps of the lowest resolution (4x4) are fed into a dense layer producing four output feature maps. These and the other feature maps are concatenated before moving to the upsampling stage. This modification ensures that the receptive field of the network is the whole domain. All networks were implemented in TensorFlow (Abadi et al., 2016) and trained using the ADAM optimizer on an Nvidia GTX 1080 Ti. We use batch sizes ranging from 4 to 16. Supervised training of all networks converges within a few minutes, for which we iteratively decrease the learning rate from $10^{-3}$ to $10^{-5}$. We stop supervised training after a few epochs, comprising between 2000 and 10,000 iterations, as the networks usually converge within a fraction of the first epoch. For training with the differentiable solver, we start with a decreased learning rate of $10^{-4}$ since the backpropagation through long chains is more challenging than training with a supervised loss. Optimization steps are also considerably more expensive since the whole chain needs to be executed, which includes a forward and backward simulation pass. For the fluid examples, an optimization step takes 1-2 seconds to complete for the 2D fluid problems.
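As a hedged sketch of the architecture just described, the following Keras model mirrors the stated filter counts and strides (five stride-2 levels with 4, 8, 16, 16, 16 feature maps, a three-block bottleneck, and a bilinear-upsampling decoder). Padding details and the exact residual-block layout are simplified compared to the paper's Table 4, so treat this as an approximation rather than the reference implementation.

```python
# Simplified U-net with residual blocks, loosely following Appendix C.
import tensorflow as tf
from tensorflow.keras import layers

def res_block(x, filters):
    """Two 3x3 convolutions with a skip connection (He et al., 2016)."""
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.ReLU()(layers.Add()([x, y]))

def build_op_net(size=128):
    inp = layers.Input((size, size, 2))        # current state + target state
    x, skips = inp, []
    for filters in (4, 8, 16, 16, 16):         # encoder: stride-2 downsampling
        x = layers.Conv2D(filters, 2, strides=2, activation="relu")(x)
        x = res_block(res_block(x, filters), filters)
        skips.append(x)
    for _ in range(3):                         # bottleneck residual blocks
        x = res_block(x, 16)
    for skip in reversed(skips[:-1]):          # decoder: upsample + concat
        x = layers.UpSampling2D(interpolation="bilinear")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(16, 2, padding="same", activation="relu")(x)
        x = res_block(res_block(x, 16), 16)
    x = layers.UpSampling2D(interpolation="bilinear")(x)
    out = layers.Conv2D(1, 2, padding="same")(x)   # single predicted field
    return tf.keras.Model(inp, out)
```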
We let the networks run about 100,000 iterations, which takes between one and two days for the shown examples.

D DETAILED DESCRIPTION AND ANALYSIS OF THE EXPERIMENTS In the following paragraphs, we give further details on the experiments of Section 6.

D.1 BURGER'S EQUATION For this experiment, we simulate Burger's equation (Eq. 6) on a one-dimensional grid with 32 samples over a course of 32 time steps. The typical behavior of Burger's equation in 1D exhibits shock waves that move in $+x$ or $-x$ direction for $u(x) > 0$ or $u(x) < 0$, respectively. When opposing waves clash, they both weaken until only the stronger wave survives and keeps moving. Examples are shown in Figs. 4a and 8a. All 32 samples are observable and controllable, i.e. $o(t) = u(t)$. Thus, we can enforce that all trajectories reach the target state exactly by choosing the force for the last step to be

$$F(t_{n-1}) = \frac{o^* - u(t_{n-1})}{\Delta t}.$$

To measure the quality of a solution, it is therefore sufficient to consider the applied force $\int_{t_0}^{t^*} |F(t)|\, dt$, which is detailed for the tested methods in Table 1.

Network training. Both for the CFE chains as well as for the observation prediction models, we use the same network architecture, described in Appendix C. We train the networks on 3600 randomly generated scenes with constant driving forces, $F(t) = \text{const}$. The examples are initialized with two Gaussian waves of random amplitude, size and position, set to clash in the center. In each time step, a constant Gaussian force with the same randomized parameters is applied to the system to steer it away from its natural evolution. Constant forces have a larger impact on the evolution than temporally varying forces since the effects of temporally varying forces can partly cancel out over time. The ground truth sequence can therefore be regarded as a near-perfect but not necessarily optimal trajectory. Figs. 4d and 8b display such examples. The same trajectories, without any forces applied, are shown in sub-figures (a) for comparison. We pretrain all networks (OPs or CFE, depending on the method) with a supervised observation loss,

$$L_o^{sup} = \left| \text{OP}[o(t_i), o(t_j)] - u_{GT}\left(\frac{t_i + t_j}{2}\right) \right|^2. \quad (8)$$

The resulting trajectory after supervised training for the CFE chain is shown in Figure 4b and Figure 8c. For the observation prediction models, the trajectories are shown in Figure 4c and Figure 8e. After pretraining, we train all OP networks end-to-end with our objective loss function (see Eq. 4), making use of the differentiable solver. For this experiment, we choose the mean squared difference for the observation loss function:

$$L_o^* = |o(u(t^*)) - o^*|^2. \quad (9)$$

We test both the staggered execution scheme and the prediction refinement scheme, shown in Figure 8f and Figure 8g.

Results. Table 1 compares the resulting forces inferred by different methods. The results are averaged over a set of 100 examples from the test set which is sampled from the same distribution as the training set. The CFE chains both fail to converge to $o^*$. While the differentiable physics version manages to produce a $u_{n-1}$ that resembles $o^*$, the supervised version completely deviates from an optimal trajectory. This shows that learning to infer the control force $F(t_i)$ only from $u(t_i)$, $o^*$ and $t$ is very difficult as the model needs to learn to anticipate the physical behavior over any length of time. Compared to the CFE chains, the hierarchical models require much less force and learn to converge towards $o^*$.
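For concreteness, the end-to-end objective used in this fine-tuning stage (Eq. 4 with the observation loss of Eq. 9) can be sketched in TensorFlow as follows. Here `rollout` is a placeholder for a full simulated chain of alternating network and differentiable solver calls, and the weight `alpha` is illustrative; since the whole state is observable in this experiment, $o(u) = u$.

```python
# End-to-end differentiable-physics loss, assuming `rollout(u0, o_star)`
# returns the final state and the list of applied forces, built from
# differentiable TF ops so gradients flow through all solver steps.
import tensorflow as tf

def objective_loss(u0, o_star, rollout, alpha=1e-2):
    u_final, forces = rollout(u0, o_star)
    l_force = tf.add_n([tf.reduce_sum(f ** 2) for f in forces])   # L_F (Eq. 3)
    l_obs = tf.reduce_sum((u_final - o_star) ** 2)                # L_o* (Eq. 9)
    return alpha * l_force + l_obs

# One training step backpropagates through the full simulated chain:
# with tf.GradientTape() as tape:
#     loss = objective_loss(u0, o_star, rollout)
# grads = tape.gradient(loss, trainable_variables)
```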
Still, the supervised training applies much more force to the system than required, the reasons for which become obvious when inspecting Figure 4b and Fig. 8e. While each state seems close to the ground truth individually, the control oscillates undesirably, requiring counter-actions later in time. The methods using the differentiable solver significantly outperform their supervised counterparts and exhibit an excellent performance that is very close to the ground-truth solutions in terms of required forces. On many examples, they even reach the target state with less force than was applied by the ground-truth simulation. This would not be possible with the supervised loss alone, but by having access to the gradient-based feedback from the differentiable solver, they can learn to find more efficient trajectories with respect to the objective loss. This allows the networks to learn to apply forces in different locations that make the system approach the target state with less force. Figure 4e and Figs. 8f and 8g show examples of this. The ground truth applies the same force in each step, thereby continuously increasing the first sample $u(x = 0)$, and the supervised method tries to imitate this behavior. The governing equation then slowly propagates $u(x = 0)$ in positive $x$ direction since $u(x = 0) > 0$. The learning methods that use a differentiable solver make use of this fact by applying much more force $F(x = 0) > 0$ at this point than the ground truth, even overshooting the target state. Later, once this value has had time to propagate to the right, the model corrects this overshoot by applying a negative force $F(x = 0) < 0$. Using this trick, these models reach the target state with up to 13% less force than the ground truth on the sequence shown in Figure 4. Figure 9 analyzes the variance of inferred forces. The supervised methods often fail to properly converge to the target state, resulting in large forces in the last step, visible as a second peak in the supervised CFE chain. The formulation of the loss (Eq. 3) suppresses force spikes. In the solutions inferred by our method, the likelihood of large forces falls off multi-exponentially as a consequence. This means that large forces are exponentially rare, which is the expected behavior given the $L_2$ regularizer from Eq. 3. We also compare our results to a single-shooting baseline which is able to find near-optimal solutions at the cost of higher computation times. The classic optimization uses the ADAM optimizer with a learning rate of 0.01 and converges after around 300 iterations. To reach the quality of the staggered prediction scheme, it requires only around 60 iterations. This quick convergence can be explained by the relatively simple setup that is dominated by linear effects. Therefore, the gradients are stable, even when propagated through many frames. The computation times, shown in Tab. 1, were recorded on a single GTX 1080 Ti. We run 100 examples in parallel to reduce the relative overhead caused by GPU instruction queuing. For the network-based methods, we average the inference time over 100 runs. We perform 10 runs for the optimization methods.

D.2 INCOMPRESSIBLE FLUID FLOW The incompressible Navier-Stokes equations model dynamics of fluids such as water or air, which can develop highly complex and chaotic behavior. The phenomenon of turbulence is generally seen as one of the few remaining fundamental and unsolved problems of classical physics.
The challenging nature of the equations indicates that typically a very significant computational effort and a large number of degrees of freedom are required to numerically compute solutions. Here, we target an incompressible two-dimensional gas with viscosity $\nu$, described by the Navier-Stokes equations for the velocity field $v$. We assume a constant fluid density throughout the simulation, setting $\rho_f = \text{const.} \equiv 1$. The gas velocity is controllable and, according to Eq. 1, we set

$$\mathcal{P}(v, \nabla v) = -(v \cdot \nabla)v + \nu \nabla^2 v - \frac{\nabla p}{\rho_f}$$

subject to the hard constraints $\nabla \cdot v = 0$ and $\nabla \times p = 0$. For our experiments, we target fluids with low viscosities, such as air, and set $\nu = 0$ in the equation above as the transport steps implicitly apply numerical diffusion that is on average higher than the targeted one. For fluids with a larger viscosity, the Poisson solver outlined above for computing $p$ could be used to implicitly solve a vector-valued diffusion equation for $v$. However, incorporating a significant amount of viscosity would make the control problem easier to solve for most cases, as viscosity suppresses small-scale structures in the motion. Hence, in order to create a challenging environment for training our networks, we keep only a minimal amount of diffusion in the physical model. In addition to the velocity field $v$, we consider a smoke density distribution $\rho$ which moves passively with the fluid. The evolution of $\rho$ is described by the equation $\partial\rho/\partial t = -v \cdot \nabla\rho$. We treat the velocity field as hidden from observation, letting only the smoke density be observed, i.e. $o(t) = \rho(t)$. We stack the two fields as $u = (v, \rho)$ to write the system as one PDE, compatible with Eq. 1. For the OP and CFE networks, we use the 2D network architecture described in Appendix C. Instead of directly generating the velocity update in the CFE network for this problem setup, we make use of stream functions (Lamb, 1932). Hence, the CFE network outputs a vector potential $\Phi$ of which the curl $\nabla \times \Phi$ is used as a velocity update. This setup numerically simplifies the incompressibility condition of the Navier-Stokes equations but retains the same number of effective control parameters.

Datasets. We generate training and test datasets for two distinct tasks: flow reconstruction and shape transition. Both datasets have a resolution of 128 × 128 with the velocity fields being sampled in staggered form (see Appendix A). This results in over 16,000 effective continuous control parameters that make up the control force $F(t_i)$ for each step $i$. The flow reconstruction dataset is comprised of ground-truth sequences where the initial states $(\rho_0, v_0)$ are randomly sampled and then simulated for 64 time steps. The resulting smoke density is then taken to be the target state, $o^* \equiv \rho^* = \rho_{sim}(t_{64})$. Since we use fully convolutional networks for both CFE and OPs, the open domain boundary must be handled carefully. If smoke was lost from the simulation because it crossed the outer boundary, a neural network would see the smoke simply vanish unless it was explicitly given the domain size as input. To avoid these problems, we run the simulation backwards in time and remove all smoke from $\rho_0$ that left the simulation domain. For the shape transition dataset, we sample initial and target states $\rho_0$ and $\rho^*$ by randomly choosing a shape from a library containing ten basic geometric shapes and placing it at a random location inside the domain. These can then be used for reconstructing sequences of any length $n$.
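The stream-function construction mentioned above can be illustrated with a few lines of NumPy: taking the 2D curl of a scalar potential yields a velocity update that is divergence-free by construction. The central-difference discretization on a periodic grid is an illustrative choice, not the paper's exact stencil.

```python
# Divergence-free velocity update from a scalar potential Phi.
import numpy as np

def curl_2d(phi, dx=1.0):
    """v = (d phi/dy, -d phi/dx), so that div v = 0 up to discretization."""
    dphi_dy = (np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / (2 * dx)
    dphi_dx = (np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / (2 * dx)
    return dphi_dy, -dphi_dx
```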
For the results on shape transition presented in Section 6, we choose $n = 16$ because all interesting behavior can be seen within that time frame. Due to the linear interpolation used in the advection step (see Appendix A), both $\rho$ and $v$ smear out over time. This numerical limitation makes it impossible to match target states exactly in this task as the density will become blurry over time. While we could generate ground-truth sequences using a classical optimizer, we refrain from doing so because (i) these trajectories are not guaranteed to be optimal and (ii) we want to see how well the model can learn from scratch, without initialization.

Training. We pretrain the CFE on the natural flow dataset with a supervised loss,

$$L_{sup}^{CFE}(u(t)) = |v_{u(t)} + F(t) - v^*(t)|^2,$$

where $v^*(t)$ denotes the velocity from ground-truth sequences. This supervised training alone constitutes a good loss for the CFE as it only needs to consider single-step intervals $\Delta t$ while the OPs handle longer sequences. Nevertheless, we found that using the differentiable solver with an observation loss,

$$L_o^{CFE} = |B_r(o^*) - B_r(\text{Solver}[u + \text{CFE}[u, o^*]])|^2,$$

further improves the accuracy of the inferred force without sacrificing the ground-truth match. Here $B_r(x)$ denotes a blur function with a kernel of the form $\frac{1}{1 + x/r}$. The blur helps make the gradients smoother and creates non-zero gradients in places where prediction and target do not overlap. During training, we start with a large radius of $r = 16\,\Delta x$ for $B_r$ and successively decrease it to $r = 2\,\Delta x$. We choose $\alpha$ such that $L_F$ and $L_o^*$ are of the same magnitude when the force loss spikes (see Fig. 15). After the CFE is trained, we successively train the OPs starting with the smallest time scale. For the OPs, we train different models for natural flow reconstruction and shape transition, both based on the same CFE model. We pre-train all OPs independently with a supervised observation loss before jointly training them end-to-end with the objective loss function (Eq. 4) and the differentiable solver to find the optimal trajectory. We use the OPs trained with the staggered execution scheme as initialization for the prediction refinement scheme. The complexity of solving the Navier-Stokes equations over many time steps in this example requires such a fully supervised initialization step. Without it, this setting is so non-linear that the learning process does not converge to a good solution. Hence, it illustrates the importance of combining supervised and unsupervised (requiring differentiable physics) training for challenging learning objectives. A comparison of the different losses is shown in Fig. 10. The predictions, shown in the top rows of each subfigure, illustrate the differences between the three methods. The supervised predictions, especially the long-term predictions (central images), are blurry because the network learns to average over all ground-truth sequences that match the given initial and target state. The differentiable physics solver largely resolves this issue. The predictions are much sharper but the long-term predictions still do not account for short-term deviations. This can be seen in the central prediction of Fig. 10b, which shows hints of the target state $o^*$, despite the fact that the actual reconstruction $u$ cannot reach that state at that time. The refined prediction, shown in subfigure (c), is closer to $u$ since it is conditioned on the previous reconstructed state. In the training data, we let the network transform one shape into another at a random location.
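To make the blur $B_r$ above concrete, here is a hedged NumPy sketch: convolution with a kernel proportional to $1/(1 + x/r)$, normalized to sum to one, where $x$ is the distance from the kernel center. FFT-based circular convolution is an implementation choice for the sketch, not something prescribed by the paper.

```python
# Blur B_r with a 1/(1 + x/r) kernel, implemented as a circular convolution.
import numpy as np

def blur(field, r):
    h, w = field.shape
    ys = np.minimum(np.arange(h), h - np.arange(h))   # wrap-around distances
    xs = np.minimum(np.arange(w), w - np.arange(w))
    dist = np.hypot(*np.meshgrid(ys, xs, indexing="ij"))
    kernel = 1.0 / (1.0 + dist / r)
    kernel /= kernel.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(kernel)))
```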
The differentiable solver and the long-term intuition provided by our execution scheme make it possible to train networks that can infer accurate sequences of control forces. In most cases, the target shapes are closely matched. As our networks infer sequences over time, we refer readers to the supplemental material (https://ge.in.tum.de/publications/2020-iclr-holl), which contains animations of additional sequences.

Generalization to multiple shapes. Splitting the reconstruction task into prediction and correction has the additional benefit of having full access to the intermediate predictions $o^p$. These model real states of the system, so classical processing or filter operations can be applied to them as well. We demonstrate this by generalizing our method to $m > 1$ shapes that evolve within the same domain. Figure 11 shows an example of two weakly-interacting shape transitions. We implement this by executing the OPs independently for each transition $k \in \{1, 2, \dots, m\}$ while inferring the control force $F(t)$ on the joint system. This is achieved by adding the predictions of the smoke density $\rho$ before passing it to the CFE network, $\tilde{o}^p = \sum_{k=1}^{m} o_k^p$. The resulting force is then applied to all sequences individually so that smoke from one transition does not end up in another target state. Using this scheme, we can define start and end positions for arbitrarily many shapes and let them evolve together.

Evaluation of force strengths. The average force strengths are detailed in Tab. 2 while Figure 12 gives a more detailed analysis of the force strengths. As expected from using an $L_2$ regularizer on the force, large values are exponentially rare in the solutions inferred from our test set. None of the hierarchical execution schemes exhibit large outliers. The prediction refinement requires the least amount of force to match the target, slightly ahead of the staggered execution trained with the same loss. The supervised training produces trajectories with reduced continuity that result in larger forces being applied.

D.3 INCOMPRESSIBLE FLUID WITH INDIRECT CONTROL As a fourth test environment, we target a case with increased complexity, where the network no longer has the means to directly control the full fluid volume. Instead, the network can only apply forces in the peripheral regions, with a total of more than 5000 control parameters per step. The obstacles prevent fluid from passing through them and the domain is enclosed with solid boundaries from the left, right and bottom. This leads to additional hard constraints and interplays between constraints in the physical model, and as such provides an interesting and challenging test case for our method. The domain has three target regions (buckets) separated by walls at the top of the domain, into which a volume of smoke should be transported from any position in the center part. Both initial position and the target bucket are randomized for our training set of 3600 examples and test set of 100 examples. Each sequence consists of 16 time steps. In this case the control is indirect since the smoke density lies outside the controlled area at all times. Only the incompressibility condition allows the network to influence the velocity outside the controlled area. This forces the model to consider the global context and synchronize a large number of parameters to create a desired flow field.
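The multi-shape scheme described earlier in this section reduces to a few lines of code: run the OPs per transition, sum the predicted densities, and call the CFE once on the joint observation. The names below are illustrative placeholders, not the paper's API.

```python
# Joint force inference for m independently predicted shape transitions.
def joint_force(cfe, observe, u, predictions_per_shape, i):
    """predictions_per_shape[k][i + 1] is the prediction o^p_k at the next frame."""
    o_tilde = sum(p[i + 1] for p in predictions_per_shape)   # o~^p = sum_k o^p_k
    return cfe(observe(u), o_tilde)
```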
The requirement of complex synchronized force fields makes generating reliable training data difficult, as manual or random sampling is unlikely to produce a directed velocity field in the center. We therefore skip the pretraining process and directly train the CFE using the differentiable solver, while the OP networks are trained as before with $r = 2\,\Delta x$. To evaluate how well the learning method performs, we measure how much of the smoke density ends up inside the buckets and how much force was applied in total. For reference, we replace the observation predictions with an algorithm that moves the smoke towards the bucket in a straight line. Averaged over 100 examples from the test set, the resulting model manages to put 89% ± 2.6% of the smoke into the target bucket. In contrast, the model trained with our full algorithm moves 99.22% ± 0.15% of the smoke into the target buckets while requiring 19.1% ± 1.0% less force. We also compare our method to an iterative optimization which directly optimizes the control velocities. We use the ADAM optimizer with a learning rate of 0.1. Despite the highly non-linear setup, the gradients are stable enough to quickly let the smoke flow in the right direction. Fig. 14 shows how the trajectories improve during optimization. After around 60 optimization steps, the smoke distribution starts reaching the target bucket in some examples. Over the next 600 iterations, it converges to a configuration in which 82.1% ± 7.3% of the smoke ends up in the correct bucket.

D.4 COMPARISON TO SHOOTING METHODS We compare the sequences inferred by our trained models to classical shooting optimizations using our differentiable physics solver to directly optimize $F(t)$ with the objective loss $L$ (Eq. 4) for a single input. We make use of stream functions (Lamb, 1932), as in the second experiment, to ensure the incompressibility condition is fulfilled. For this comparison, the velocities of all steps are initialized with a normal distribution with $\mu = 0$ and $\sigma = 0.01$ so that the initial trajectory does not significantly alter the initial state, $u(t) \approx u(t_0)$. We first show how a simple single-shooting algorithm (Zhou et al., 1996) fares with our Navier-Stokes setup. When solving the resulting optimization problem using single shooting, strong artifacts in the reconstructions can be observed, as shown in Figure 17a. This undesirable behavior stems from the nonlinearity of the Navier-Stokes equations, which causes the gradients $\Delta u_0$ to become noisy and unreliable when they are recurrently backpropagated through many time steps. Unsurprisingly, the single-shooting optimizer converges to an undesirable local minimum. As single shooting is well known to have problems with non-trivial problem settings, we employ a multi-scale shooting (MS) method (Hartmann et al., 2014). This solver first computes the trajectory on a coarsely discretized version of the problem before iteratively refining the discretization. For the first resolution, we use 1/16 of the original width and height, which reduces both the number of control parameters and the nonlinear effects from the physics model. By employing an exponential learning rate decay, this multi-scale optimization converges reliably for all examples. We use the ADAM optimizer to compute the control variable updates from the gradients of the differentiable Navier-Stokes solver. An averaged set of representative convergence curves for this setup is shown in Figure 15.
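To make the shooting baselines concrete, here is a hedged TensorFlow sketch of single shooting through a differentiable solver: the control variables are optimized directly with ADAM by backpropagating through the full simulated chain. `solver_step` stands in for one differentiable solver step and the regularization weight is an illustrative placeholder; the multi-scale variant would wrap this loop in a coarse-to-fine schedule over grid resolutions with learning rate decay.

```python
# Single-shooting baseline through a differentiable solver (sketch).
import tensorflow as tf

def shoot(u0, o_star, solver_step, n_steps=16, iters=1500,
          lr=0.1, alpha=1e-2, sigma=0.01):
    # Small random initialization so that u(t) ~ u(t0) at the start.
    controls = tf.Variable(sigma * tf.random.normal([n_steps, *u0.shape]))
    opt = tf.keras.optimizers.Adam(lr)
    for _ in range(iters):
        with tf.GradientTape() as tape:
            u = u0
            for i in range(n_steps):
                u = solver_step(u, controls[i])          # differentiable step
            loss = tf.reduce_sum((u - o_star) ** 2) \
                 + alpha * tf.reduce_sum(controls ** 2)  # Eq. 4-style objective
        opt.apply_gradients([(tape.gradient(loss, controls), controls)])
    return controls
```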
The objective loss (Eq. 4) is shown in its decomposed state as the sum of the observation loss $L_o^*$, shown in Figure 15a, and the force loss $L_F$, shown in Figure 15b. Due to the initialization of all velocities with small values, the force loss starts out small. For the first 1000 iteration steps, $L_o^*$ dominates, which causes the system to move towards the target state $o^*$. This trajectory is not ideal, however, as more force than necessary is applied. Once observation loss and force loss are of the same magnitude, the optimization refines the trajectory to use less force. We found that the trajectories predicted by our neural-network-based method correspond to performing about 1500 steps with the MS optimization while requiring less tuning. Reconstructions of the same example are compared in Figure 17. Performing the MS optimization up to this point took 131 seconds on a GTX 1080 Ti graphics card for a single 16-frame sequence while the network inference ran for 0.5 seconds. For longer sequences, this gap grows further because the network inference time scales with $O(n)$. This could only be matched if the number of iterations for the MS optimization scaled with $O(1)$, which is not the case for most problems. These tests indicate that our model has successfully internalized the behavior of a large class of physical systems, and can exert the right amount of force to reach the intended goal. The large number of iterations required for the single-case shooting optimization highlights the complexity of the individual solutions. Interestingly, the network also benefits from the much more difficult task of learning a whole manifold of solutions: comparing solutions with similar observation loss for the MS algorithm and our network, the former often finds solutions that are unintuitive and contain noticeable detours, e.g., not taking a straight path for the density matching examples of Fig. 5. In such situations, our network benefits from having to represent the solution manifold, instead of aiming for single-task optimizations. As the solutions change relatively smoothly, the complex task effectively regularizes the inference of new solutions and gives the network a more global view. Instead, the shooting optimizations have to rely purely on local gradients for single shooting or manually crafted multi-resolution schemes for MS. Our method can also be employed to support the MS optimization by initializing it with the velocities inferred by the networks. In this case, shown in Figure 16, both $L_o^*$ and $L_F$ decrease right from the beginning, similar to the behavior in Figure 15 from iteration 1500 on. The reconstructed trajectory from the neural-network-based method is so close to the optimum that the multi-resolution approach described above is not necessary.

D.5 ADDITIONAL RESULTS In Fig. 18, we provide a visual overview of a subset of the sequences that can be found in the supplemental materials. It contains 16 randomly selected reconstructions for each of the natural flow, the shape transition, and the indirect control examples. In addition, the supplemental material, available at https://ge.in.tum.de/publications/2020-iclr-holl, highlights the differences between unsupervised, staggered, and refined versions of our approach.
1. What is the main contribution of the paper regarding training physical systems? 2. What are the strengths of the proposed approach, particularly in terms of methodology and applications? 3. Do you have any questions or concerns regarding the predictor-corrector framework, adjoint sensitivity method, or the overall training procedure? 4. How does the reviewer assess the clarity, organization, and impact of the paper? 5. Are there any limitations or potential improvements regarding the "differentiable physics" losses and their explanation in the paper?
Review
Review

## Summary

The authors propose a method for training physical systems whose behavior is governed by partial differential equations. They consider situations where only partial observations are available, and where control is indirect. Indirectly controlling physical systems is a very important problem with applications throughout engineering. In fact, much of the field of robotics can be described in these terms. The authors employ a number of interesting methods, including a predictor-corrector framework and the adjoint sensitivity method for differentiating through differential equation solvers. The paper is generally very clear, organized and well written. There are only a few places where I think clarification is needed (see detailed comments below). I also have a few questions about the losses and training procedure. On the whole, I think the paper is inventive, well-written and potentially very impactful. I think it would be a great addition to ICLR.

## Clarifications

* Page 4: I found the statement "an agent trained in supervised fashion will then learn to average over the modes instead of picking one of them" a little confusing. Could you clarify the reasoning here?
* Page 5: I think the description of predictor-corrector could be clearer. In particular, I found the phrase "the correction uses o(t + ∆t) to obtain o(t + ∆t)" unclear.
* Page 8: Could you add a description of what is observable to the body of the paper (I see it is included in the supplement)?
* Page 8: I think ∇p needs to be divided by density in your NS equation, right?
* Page 8 - 9: Is there a limit on the size of the force that can be applied at any point? I know the total force is penalized, but what about the maximum force applied at any point?

## Losses and Training

* I think "differentiable physics" losses need a more detailed explanation in the body of the paper.
* In the supplement, it is defined using B_r, but I don't think B_r is defined.
* It seems like the differentiable physics loss requires a differentiable solver (in this case, for Burger/Navier-Stokes). If I have understood this correctly, I think this needs to be discussed in the body of the paper. In particular, it would be nice to discuss what happens when the physics is a black box (i.e. we can interact with the system by applying control and observing, but we don't know the rules governing the physical system). Is this exactly when we are restricted to the "supervised" loss? Is there some middle ground? What if we had black box access to the exact physics, along with an approximate differentiable solver? This seems like a realistic scenario for e.g. large fluid flow scenarios.
ICLR
Title Learning to Control PDEs with Differentiable Physics Abstract Predicting outcomes and planning interactions with the physical world are longstanding goals for machine learning. A variety of such tasks involves continuous physical systems, which can be described by partial differential equations (PDEs) with many degrees of freedom. Existing methods that aim to control the dynamics of such systems are typically limited to relatively short time frames or a small number of interaction parameters. We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames. We propose to split the problem into two distinct tasks: planning and control. To this end, we introduce a predictor network that plans optimal trajectories and a control network that infers the corresponding control parameters. Both stages are trained end-to-end using a differentiable PDE solver. We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs such as the incompressible Navier-Stokes equations. 1 INTRODUCTION Intelligent systems that operate in the physical world must be able to perceive, predict, and interact with physical phenomena (Battaglia et al., 2013). In this work, we consider physical systems that can be characterized by partial differential equations (PDEs). PDEs constitute the most fundamental description of evolving systems and are used to describe every physical theory, from quantum mechanics and general relativity to turbulent flows (Courant & Hilbert, 1962; Smith, 1985). We aim to endow artificial intelligent agents with the ability to direct the evolution of such systems via continuous controls. Such optimal control problems have typically been addressed via iterative optimization. Differentiable solvers and the adjoint method enable efficient optimization of high-dimensional systems (Toussaint et al., 2018; de Avila Belbute-Peres et al., 2018; Schenck & Fox, 2018). However, direct optimization through gradient descent (single shooting) at test time is resource-intensive and may be difficult to deploy in interactive settings. More advanced methods exist, such as multiple shooting and collocation, but they commonly rely on modeling assumptions that limit their applicability, and still require computationally intensive iterative optimization at test time. Iterative optimization methods are expensive because they have to start optimizing from scratch and typically require a large number of iterations to reach an optimum. In many real-world control problems, however, agents have to repeatedly make decisions in specialized environments, and reaction times are limited to a fraction of a second. This motivates the use of data-driven models such as deep neural networks, which combine short inference times with the capacity to build an internal representation of the environment. We present a novel deep learning approach that can learn to represent solution manifolds for a given physical environment, and is orders of magnitude faster than iterative optimization techniques. The core of our method is a hierarchical predictor-corrector scheme that temporally divides the problem into easier subproblems. This enables us to combine models specialized to different time scales in order to control long sequences of complex high-dimensional systems.
We train our models using a differentiable PDE solver that can provide the agent with feedback of how interactions at any point in time affect the outcome. Our models learn to represent manifolds containing a large number of solutions, and can thereby avoid local minima that can trap classic optimization techniques. We evaluate our method on a variety of control tasks in systems governed by advection-diffusion PDEs such as the Navier-Stokes equations. We quantitatively evaluate the resulting sequences on how well they approximate the target state and how much force was exerted on the physical system. Our method yields stable control for significantly longer time spans than alternative approaches. 2 BACKGROUND Physical problems commonly involve nonlinear PDEs, often with many degrees of freedom. In this context, several works have proposed methods for improving the solution of PDE problems (Long et al., 2018; Bar-Sinai et al., 2019; Hsieh et al., 2019) or used PDE formulations for unsupervised optimization (Raissi et al., 2018). Lagrangian fluid simulation has been tackled with regression forests (Ladicky et al., 2015), graph neural networks (Mrowca et al., 2018; Li et al., 2019), and continuous convolutions (Ummenhofer et al., 2020). Data-driven turbulence models were trained with MLPs (Ling et al., 2016). Fully-convolutional networks were trained for pressure inference (Tompson et al., 2017) and advection components were used in adversarial settings (Xie et al., 2018). Temporal updates in reduced spaces were learned via the Koopman operator (Morton et al., 2018). In a related area, deep networks have been used to predict chemical properties and the outcome of chemical reactions (Gilmer et al., 2017; Bradshaw et al., 2019). Differentiable solvers have been shown to be useful in a variety of settings. Degrave et al. (2019) and de Avila Belbute-Peres et al. (2018) developed differentiable simulators for rigid body mechanics. (See Popovic et al. (2000) for earlier work in computer graphics.) Toussaint et al. (2018) applied related techniques to manipulation planning. Specialized solvers were developed to infer protein structures (Ingraham et al., 2019), interact with liquids (Schenck & Fox, 2018), control soft robots (Hu et al., 2019), and solve inverse problems that involve cloth (Liang et al., 2019). Like ours, these works typically leverage the automatic differentiation of deep learning pipelines (Griewank & Walther, 2008; Maclaurin et al., 2015; Amos & Kolter, 2017; Mensch & Blondel, 2018; van Merriënboer et al., 2018; Chen et al., 2018; Bradbury et al., 2018; Paszke et al., 2019; Tokui et al., 2019). However, while the works above target Lagrangian solvers, i.e. reference frames moving with the simulated material, we address grid-based solvers, which are particularly appropriate for dense, volumetric phenomena. The adjoint method (Lions, 1971; Pironneau, 1974; Jameson, 1988; Giles & Pierce, 2000; Bewley, 2001; McNamara et al., 2004) is used by most machine learning frameworks, where it is commonly known as reverse mode differentiation (Werbos, 2006; Chen et al., 2018). While a variety of specialized adjoint solvers exist (Griewank et al., 1996; Fournier et al., 2012; Farrell et al., 2013), these packages do not interface with production machine learning frameworks. A supporting contribution of our work is a differentiable PDE solver called ΦFlow that integrates with TensorFlow (Abadi et al., 2016) and PyTorch (Paszke et al., 2019). 
It is publicly available at https://github.com/tumpbs/PhiFlow.

3 PROBLEM Consider a physical system $u(x, t)$ whose natural evolution is described by the PDE

$$\frac{\partial u}{\partial t} = \mathcal{P}\left(u, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}, \dots, y(t)\right), \quad (1)$$

where $\mathcal{P}$ models the physical behavior of the system and $y(t)$ denotes external factors that can influence the system. We now introduce an agent that can interact with the system by controlling certain parameters of the dynamics. This could be the rotation of a motor or fine-grained control over a field. We factor out this influence into a force term $F$, yielding

$$\frac{\partial u}{\partial t} = \mathcal{P}\left(u, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}, \dots\right) + F(t). \quad (2)$$

The agent can now be modelled as a function that computes $F(t)$. As solutions of nonlinear PDEs were shown to yield low-dimensional manifolds (Foias et al., 1988; Titi, 1990), we target solution manifolds of $F(t)$ for a given choice of $\mathcal{P}$ with suitable boundary conditions. This motivates our choice to employ deep networks for our agents. In most real-world scenarios, it is not possible to observe the full state of a physical system. When considering a cloud of smoke, for example, the smoke density may be observable while the velocity field may not be seen directly. We model the imperfect information by defining the observable state of $u$ as $o(u)$. The observable state is problem dependent, and our agent is conditioned only on these observations, i.e. it does not have access to the full state $u$. Using the above notation, we define the control task as follows. An initial observable state $o_0$ of the PDE as well as a target state $o^*$ are given (Figure 1a). We are interested in a reconstructed trajectory $u(t)$ that matches these states at $t_0$ and $t^*$, i.e. $o_0 = o(u(t_0))$, $o^* = o(u(t^*))$, and minimizes the amount of force applied within the simulation domain $D$ (Figure 1b):

$$L_F[u(t)] = \int_{t_0}^{t^*} \int_D |F_u(t)|^2 \, dx \, dt. \quad (3)$$

Taking discrete time steps $\Delta t$, the reconstructed trajectory $u$ is a sequence of $n = (t^* - t_0)/\Delta t$ states. When an observable dimension cannot be controlled directly, there may not exist any trajectory $u(t)$ that matches both $o_0$ and $o^*$. This can stem from either physical constraints or numerical limitations. In these cases, we settle for an approximation of $o^*$. To measure the quality of the approximation of the target, we define an observation loss $L_o^*$. The form of this loss can be chosen to fit the problem. We combine Eq. 3 and the observation loss into the objective function

$$L[u(t)] = \alpha \cdot L_F[u(t)] + L_o^*(u(t^*)), \quad (4)$$

with $\alpha > 0$. We use square brackets to denote functionals, i.e. functions depending on fields or series rather than single values.

4 PRELIMINARIES Differentiable solvers. Let $u(x, t)$ be described by a PDE as in Eq. 1. A regular solver can move the system forward in time via Euler steps:

$$u(t_{i+1}) = \text{Solver}[u(t_i), y(t_i)] = u(t_i) + \Delta t \cdot \mathcal{P}(u(t_i), \dots, y(t_i)). \quad (5)$$

Each step moves the system forward by a time increment $\Delta t$. Repeated execution produces a trajectory $u(t)$ that approximates a solution to the PDE. This functionality for time advancement by itself is not well-suited to solve optimization problems, since gradients can only be approximated by finite differencing. For high-dimensional or continuous systems, this method becomes computationally expensive because a full trajectory needs to be computed for each optimizable parameter. Differentiable solvers resolve this issue by solving the adjoint problem (Pontryagin, 1962) via analytic derivatives. The adjoint problem computes the same mathematical expressions while working with lower-dimensional vectors.
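The Euler stepping of Eq. 5 amounts to very little code. Below is a minimal Python sketch in which `P` is a placeholder for the problem-specific physics function (simplified here to take only the state and the external factors):

```python
# Explicit Euler solver of Eq. 5: repeated application advances u(t).
def solver_step(u, y, P, dt):
    return u + dt * P(u, y)

def trajectory(u0, ys, P, dt):
    """Roll out len(ys) steps; ys[i] are the external factors y(t_i)."""
    us = [u0]
    for y in ys:
        us.append(solver_step(us[-1], y, P, dt))
    return us
```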
A differentiable solver can efficiently compute the derivatives with respect to any of its inputs, i.e. $\partial u(t_{i+1})/\partial u(t_i)$ and $\partial u(t_{i+1})/\partial y(t_i)$. This allows for gradient-based optimization of inputs or control parameters over an arbitrary number of time steps.

Iterative trajectory optimization. Many techniques exist that try to find optimal trajectories by starting with an initial guess for $F(t)$ and slightly changing it until reaching an optimum. The simplest of these is known as single shooting. In one optimization step, it simulates the full dynamics, then backpropagates the loss through the whole sequence to optimize the controls (Kraft, 1985; Leineweber et al., 2003). Replacing $F(t)$ with an agent $F(t \mid o_t, o^*)$, which can be parameterized by a deep network, yields a simple training method. For a sequence of $n$ frames, this setup contains $n$ linked copies of the agent and is depicted in Figure 2. We refer to such an agent as a control force estimator (CFE). Optimizing such a chain of CFEs is both computationally expensive and causes gradients to pass through a potentially long sequence of highly nonlinear simulation steps. When the reconstruction $u$ is close to an optimal trajectory, this is not a problem because the gradients $\Delta u$ are small and the operations executed by the solver are differentiable by construction. The solver can therefore be locally approximated by a first-order polynomial and the gradients can be safely backpropagated. For large $\Delta u$, e.g. at the beginning of an optimization, this approximation breaks down, causing the gradients to become unstable while passing through the chain. This instability in the training process can prevent single-shooting approaches from converging and deep networks from learning unless they are initialized near an optimum. Alternatives to single shooting exist, promising better and more efficient convergence. Multiple shooting (Bock & Plitt, 1984) splits the trajectory into segments with additional defect constraints. Depending on the physical system, this method may have to be adjusted for specific problems (Treuille et al., 2003). Collocation schemes (Hargraves & Paris, 1987) model trajectories with splines. While this works well for particle trajectories, it is poorly suited for Eulerian solvers where the evolution of individual points does not reflect the overall motion. Model reduction can be used to reduce the dimensionality or nonlinearity of the problem, but generally requires domain-specific knowledge. When applicable, these methods can converge faster or in a more stable manner than single shooting. However, as we are focusing on a general optimization scheme in this work, we will use single shooting and its variants as baseline comparisons.

Supervised and differentiable physics losses. One of the key ingredients in training a machine learning model is the choice of loss function. For many tasks, supervised losses are used, i.e. losses that directly compare the output of the model for a specific input with the desired ground truth. While supervised losses can be employed for trajectory optimization, far better loss functions are possible when a differentiable solver is available. We will refer to these as differentiable physics loss functions. In this work, we employ a combination of supervised and differentiable physics losses, as both come with advantages and disadvantages. One key limitation of supervised losses is that they can only measure the error of a single time step.
Therefore, an agent cannot get any measure of how its output would influence future time steps. Another problem arises from the form of supervised training data, which comprises input-output pairs that may be obtained directly from data generation or through iterative optimization. Since optimal control problems are generally not unimodal, there can exist multiple possible outputs for one input. This ambiguity in the supervised training process will lead to suboptimal predictions as the network will try to find a compromise between all possible outputs instead of picking one of them. Differentiable physics losses solve these problems by allowing the agent to be directly optimized for the desired objective (Eq. 4). Unlike supervised losses, differentiable physics losses require a differentiable solver to backpropagate the gradients through the simulation. Multiple time steps can be chained together, which is a key requirement since the objective (Eq. 4) explicitly depends on all time steps through $L_F[u(t)]$ (Eq. 3). As with iterative solvers, one optimization step for a sequence of $n$ frames then invokes the agent $n$ times before computing the loss, each invocation followed by a solver step. The employed differentiable solver backpropagates the gradients through the whole sequence, which gives the model feedback on (i) how its decisions change the future trajectory and (ii) how to handle states as input that were reached because of its previous decisions. Since no ground truth needs to be provided, multi-modal problems naturally converge towards one solution.

5 METHOD In order to optimally interact with a physical system, an agent has to (i) build an internal representation of an optimal observable trajectory $o(u(t))$ and (ii) learn what actions to take to move the system along the desired trajectory. These two steps strongly resemble the predictor-corrector method (Press et al., 2007). Given $o(t)$, a predictor-corrector method computes $o(t + \Delta t)$ in two steps. First, a prediction step approximates the next state, yielding $o^p(t + \Delta t)$. Then, the correction uses $o^p(t + \Delta t)$ to refine the initial approximation and obtain $o(t + \Delta t)$. Each step can, to some degree, be learned independently. This motivates splitting the agent into two neural networks: an observation predictor (OP) network that infers intermediate states $o^p(t_i)$, $i \in \{1, 2, \dots, n-1\}$, planning out a trajectory, and a corrector network (CFE) that estimates the control force $F(t_i \mid o(u_i), o^p_{i+1})$ to follow that trajectory as closely as possible. This splitting has the added benefit of exposing the planned trajectory, which would otherwise be inaccessible. As we will demonstrate, it is crucial for the prediction stage to incorporate knowledge about longer time spans. We address this by modelling the prediction as a temporally hierarchical process, recursively dividing the problem into smaller subproblems. To achieve this, we let the OP not directly infer $o^p(t_{i+1} \mid o(u_i), o^*)$ but instead model it to predict the optimal center point between two states at times $t_i$, $t_j$, with $i, j \in \{1, 2, \dots, n-1\}$, $j > i$, i.e. $o^p((t_i + t_j)/2 \mid o_i, o_j)$. This function is much more general than predicting the state of the next time step since two arbitrary states can be passed as arguments. Recursive OP evaluations can then partition the sequence until a prediction $o^p(t_i)$ for every time step $t_i$ has been made.
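The recursive partitioning just described is a simple bisection. Here is a minimal sketch, with `op` standing in for the trained midpoint predictor (a placeholder, not the paper's API):

```python
# Recursive midpoint planning: fill in a prediction for every frame between
# two given observations, assuming the span is a power of two.
def plan(op, observations, i, j):
    """Fill observations[k] for all k strictly between i and j."""
    if j - i < 2:
        return
    mid = (i + j) // 2
    observations[mid] = op(observations[i], observations[j])
    plan(op, observations, i, mid)
    plan(op, observations, mid, j)
```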
This scheme naturally enables scaling to arbitrary time frames or arbitrary temporal resolutions, assuming that the OP can correctly anticipate the physical behavior. Since physical systems often exhibit different behaviors on different time scales and the OP can be called with states separated by arbitrary time spans, we condition the OP on the time scale it is evaluated on by instantiating and training a unique version of the model for every time scale. This simplifies training and does not significantly increase the model complexity, as we use factors of two for the time scales, and hence the number of required models scales with O(log₂ n). We refer to one instance of an OP_n by the time span between its input states, measured in the number of frames n = (t_j − t_i)/∆t.

Execution order. With the CFE and OP_n as building blocks, many algorithms for solving the control problem, i.e. for computing F(t), can be assembled and trained. We compared a variety of algorithms and found that a scheme we refer to as prediction refinement produces the best results. It is based on the following principles: (i) always use the finest scale OP possible to make a prediction, (ii) execute the CFE followed by a solver step as soon as possible, (iii) refine predictions after the solver has computed the next state. The algorithm that realizes these goals is shown in Appendix B with an example for n = 8. To understand the algorithm and the resulting execution orders, it is helpful to consider simpler algorithms first. The simplest combination of CFE and OP_n invocations that solves the full trajectory, shown in Figure 3a, can be described as follows. Initially, all intermediate states are predicted hierarchically. The first prediction is the half-way point o^p(t_{n/2} | o_0, o∗), generated by the OP_n. Using that as input to an OP_{n/2} results in new predictions at t_{n/4}, t_{3n/4}. Continuing with this scheme, a prediction can be made for each t_i, i ∈ {1, ..., n−1}. Next, the actual trajectory is evaluated step by step. For each step t_i, the CFE computes the control force F(t_i) conditioned on the state at t_i and the prediction o^p(t_{i+1}). Once F(t_i) is known, the solver can step the simulation to the next state at t_{i+1}. This algorithm finds a trajectory in time O(n), since n CFE calls and n−1 OP calls are required in total (see Appendix B). However, there are inherent problems with this algorithm. The physical constraints of the PDE and potential approximation errors of the CFE can result in observations that are only matched partially, causing the reconstructed trajectory to exhibit undesirable oscillations, often visible as jittering. When subsequent predictions do not line up perfectly, large forces may be applied by the CFE or the reconstructed trajectory might stop following the predictions altogether. This problem can be alleviated by changing the execution order of the two-stage algorithm described above. The resulting algorithm is shown in Figure 3b and will be referred to as staggered execution. In this setup, the simulation is advanced as soon as a prediction for the next observable state exists, and OPs are only executed when their input state at time t_i is available. This staggered execution scheme allows future predictions to take deviations from the predicted trajectory into account, preventing a divergence of the actual evolution o(u(t)) from the prediction o^p(t); a sketch of the scheme follows below.
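A sketch of the staggered scheme, under the same assumptions and naming as the snippets above (`op[s]` selects the OP for span s, `observe` extracts o(u), and n is a power of two):

```python
# Sketch: staggered execution (Figure 3b). A prediction is made only once
# its left input state is available; after each solver step, the
# prediction for that frame is replaced by the actual observation.
def staggered(u0, o_star, op, cfe, solver_step, observe, n):
    obs = {0: observe(u0), n: o_star}
    u = u0
    for i in range(n):
        span = n
        while span > 1:                 # predict just-in-time, coarse first
            lo = (i // span) * span
            if lo == i and lo + span // 2 not in obs:
                obs[lo + span // 2] = op[span](obs[lo], obs[lo + span])
            span //= 2
        f = cfe(u, obs[i + 1])          # follow the next prediction
        u = solver_step(u, f)
        obs[i + 1] = observe(u)         # future predictions see reality
    return u
```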
While the staggered execution allows most predictions to correct for deviations from the predicted trajectory o^p, this scheme leaves several predictions unmodified. Most notably, the prediction o^p(t_{n/2}), which is inferred from just the initial state and the desired target, remains unchanged. This prediction must therefore be able to guide the reconstruction in the right direction without knowing about deviations in the system that occurred up to t_{n/2−1}. As a practical consequence, a network trained with this scheme typically learns to average over the deviations, resulting in blurred predictions (see Appendix D.2).

The prediction refinement scheme, listed in Algorithm 1 and illustrated in Figure 3c, solves this problem by re-evaluating existing predictions whenever the simulation progresses in time. Not all predictions need to be updated, though, and an update to a prediction at a finer time scale can depend on a sequence of other predictions.

Algorithm 1: Recursive algorithm computing the prediction refinement. The algorithm is called via Reconstruct[o_0, o∗, absent] to reconstruct a full trajectory from o_0 to o∗.

    function Reconstruct[o(u_0), o_n, o_2n]
        Input:  initial observation o(u_0), observation o_n,
                optional observation o_2n
        Output: observation of the reconstructed state o(u_n)
        if n = 1 then
            F ← CFE[o(u_0), o_1]
            u_1 ← Solver[u_0, F]
            return o(u_1)
        else
            o_{n/2} ← OP[o(u_0), o_n]
            o(u_{n/2}) ← Reconstruct[o(u_0), o_{n/2}, o_n]
            if o_2n present then
                o_{3n/2} ← OP[o_n, o_2n]
                o_n ← OP[o(u_{n/2}), o_{3n/2}]
            else
                o_{3n/2} ← absent
            end
            o(u_n) ← Reconstruct[o(u_{n/2}), o_n, o_{3n/2}]
            return o(u_n)
        end

The prediction refinement algorithm that achieves these updates in an optimal form is listed in Appendix B. While the resulting execution order is difficult to follow for longer sequences with more than n = 8 frames, we give an overview of the algorithm by considering the prediction for time t_{n/2}. After the first center-frame prediction o^p(t_{n/2}) of the n-frame sequence is made by OP_n, the prediction refinement algorithm calls itself recursively until all frames up to frame n/4 are reconstructed from the CFE and the solver. The center prediction is then updated using OP_{n/2} for the next smaller time scale compared to the previous prediction. The call of OP_{n/2} also depends on o^p(t_{3n/4}), which was predicted using OP_{n/2}. After half of the remaining distance to the center is reconstructed by the solver, the center prediction at t_{n/2} is updated again, this time by the OP_{n/4}, including all prediction dependencies. Hence, the center prediction is continually refined every time the temporal distance between the latest reconstruction and the prediction halves, until the reconstruction reaches that frame. This way, all final predictions o^p(t_i) are conditioned on the reconstruction of the previous state u(t_{i−1}) and can therefore account for all previous deviations. The prediction refinement scheme requires the same number of force inferences but an increased number of OP evaluations compared to the simpler algorithms. With a total of 3n − 2 log₂(n) − 3 OP evaluations (see Appendix B), it is of the same complexity, O(n). In practice, this refinement scheme incurs only a small overhead in terms of computation, which is outweighed by the significant gains in quality of the learned control function.

6 RESULTS

We evaluate the capabilities of our method to learn to control physical PDEs in three different test environments of increasing complexity. We first target a simple but nonlinear 1D equation, for which we present an ablation study to quantify accuracy.
We then study two-dimensional problems: an incompressible fluid and a fluid with complex boundaries and indirect control. Full details are given in Appendix D. Supplemental material containing additional sequences for all of the tests can be downloaded from https://ge.in.tum.de/publications/2020-iclr-holl.

Burgers' equation. Burgers' equation is a nonlinear PDE that describes the time evolution of a single field, u (LeVeque, 1992). Using Eq. 1, it can be written as

P(u, ∂u/∂x, ∂²u/∂x²) = −u · ∂u/∂x + ν ∂²u/∂x².    (6)

Examples of the unperturbed evolution are shown in Figure 4a. We let the whole state be observable and controllable, i.e. o(t) = u(t), which implies that o∗ can always be reached exactly. The results of our ablation study with this equation are shown in Table 1. The table compares the resulting forces applied by differently trained models when reconstructing a ground-truth sequence (Figure 4e).

Table 1: Quantitative reconstruction evaluation using Burgers' equation, averaged over 100 examples.

    Execution scheme              Training loss   Force ∫|F| dt   Inference time (ms)
    CFE chain                     Supervised      83.4 ± 2.0      0.024 ± 0.013
    CFE chain                     Diff. Physics   28.8 ± 0.8      0.024 ± 0.013
    Staggered                     Supervised      34.3 ± 1.1      1.15 ± 0.19
    Staggered                     Diff. Physics   15.3 ± 0.7      1.15 ± 0.19
    Refined                       Diff. Physics   14.2 ± 0.7      3.05 ± 0.37
    Iterative optim. (60 iter.)   Diff. Physics   15.3 ± 1.6      52.7 ± 2.1
    Iterative optim. (300 iter.)  Diff. Physics   10.2 ± 1.9      264.0 ± 3.0

The variant denoted by CFE chain uses a neural network to infer the force without any intermediate predictions. With a supervised loss, this method learns to approximate a single step well. However, for longer sequences, results quickly deviate from an ideal trajectory and diverge because the network never learned to account for errors made in previous steps (Figure 4b). Training the network with the objective loss (Eq. 4) using the differentiable solver greatly increases the quality of the reconstructions. On average, it applies only 34% of the force used by the supervised model as it learns to correct the temporal evolution of the PDE model. Next, we evaluate variants of our predictor-corrector approach, which hierarchically predicts intermediate states. Here, the CFE is implemented as F(t_i) = (o^p(t_{i+1}) − u(t_i))/∆t. Unlike the simple CFE chain above, training with the supervised loss and staggered execution produces stable (albeit jittering) trajectories that successfully converge to the target state (Figure 4c). Surprisingly, this supervised method reaches almost the same accuracy as the differentiable CFE, despite not having access to physics-based gradients. However, employing the differentiable physics loss greatly improves the reconstruction quality, producing solutions that are hard to distinguish from ideal trajectories (Figure 4d). The prediction refinement scheme further improves the accuracy, but the differences to the staggered execution are relatively small, as the predictions of the latter are already very accurate. Table 1 also lists the results of classic shooting-based optimization applied to this problem. To match the quality of the staggered execution scheme, the shooting method requires around 60 optimization steps; although it converges within relatively few iterations, each step is significantly more expensive to compute. After around 300 iterations, the classic optimization reaches an optimal value of 10.2 and the loss stops decreasing. Starting the iterative optimization with our method as an initial guess pushes the optimum slightly lower, to 10.1. Thus, even this relatively simple problem shows the advantages of our learned approach.
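For reference, a toy explicit step of Eq. 6 can be written in a few lines of NumPy; this is an illustrative solver with periodic boundaries and central differences, not the differentiable solver used in the paper, and it is only stable for sufficiently small dt.

```python
# Sketch: one explicit finite-difference step of Burgers' equation with a
# control force f, du/dt = -u du/dx + nu d2u/dx2 + f.
import numpy as np

def burgers_step(u, f, dt=0.01, dx=1.0, nu=0.1):
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
    return u + dt * (-u * dudx + nu * d2udx2 + f)
```

Chaining 32 such steps and differentiating through them is, conceptually, what both the CFE chain and the shooting baselines in Table 1 rely on.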
Incompressible fluid flow. Next, we apply our algorithm to two-dimensional fluid dynamics problems, which are challenging due to the complexities of the governing Navier-Stokes equations (Batchelor, 1967). For a velocity field v, these can be written as

P(v, ∇v) = −v · ∇v + ν∇²v − ∇p,    (7)

subject to the hard constraints ∇ · v = 0 and ∇ × p = 0, where p denotes pressure and ν the viscosity. In addition, we consider a passive density ρ that moves with the fluid via ∂ρ/∂t = −v · ∇ρ. We set v to be hidden and ρ to be observable, and allow forces to be applied to all of v. We run our tests on a 128² grid, resulting in more than 16,000 effective continuous control parameters. We train the OP and CFE networks for two different tasks: reconstruction of natural fluid flows and controlled shape transitions. Example sequences are shown in Figure 5, and a quantitative evaluation, averaged over 100 examples, is given in Table 2. While all methods manage to approximate the target state well, there are considerable differences in the amount of force applied. The supervised technique exerts significantly more force than the methods based on the differentiable solver, resulting in jittering reconstructions. The prediction refinement scheme produces the smoothest transitions, converging to about half the loss of the staggered, non-refined variant. We compare our method to classic shooting algorithms for this incompressible flow problem. While a direct shooting method fails to converge, a more advanced multi-scale shooting approach still requires 1500 iterations to obtain a level of accuracy that our model achieves almost instantly. In addition, our model successfully learns a solution manifold, while iterative optimization techniques essentially start from scratch every time. This global view leads our model to more intuitive solutions and decreases the likelihood of convergence to undesirable local minima. The solutions of our method can also be used as initial guesses for iterative solvers, as illustrated in Appendix D.4. We find that the iterative optimizer with such an initial guess converges to solutions that require only 57.4% of the force needed by the iterative optimizer with default initialization. This illustrates how the more global view of the learned solution manifold can improve the solutions of regular optimization runs. Splitting the task into prediction and correction ensures that intermediate predicted states are physically plausible and allows us to generalize to new tasks. For example, we can infer transitions involving multiple shapes, despite training only on individual shapes. This is demonstrated in Appendix D.2.

Incompressible fluid with indirect control. The next experiment increases the complexity of the fluid control problem by adding obstacles to the simulated domain and limiting the area that can be controlled by the network. An example sequence in this setting is shown in Figure 6. As before, only the density ρ is observable. Here, the goal is to move the smoke from its initial position near the center into one of the three "buckets" at the top. Control forces can only be applied in the peripheral regions, which lie outside the visible smoke distribution; a simple way to realize such a restriction is sketched below. Only by synchronizing the 5000 continuous control parameters can a directed velocity field be constructed in the central region.
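Restricting where forces may act requires no architectural change; the inferred force field can simply be masked before it is applied. The geometry below is a rough stand-in for the actual controllable regions, which follow the obstacle layout of Figure 6.

```python
# Sketch: indirect control via masking. Forces take effect only in the
# peripheral band of the domain; the interior is influenced solely
# through the incompressibility constraint.
import numpy as np

def apply_masked_force(v, f, mask):
    return v + f * mask   # control force is zeroed outside the mask

mask = np.zeros((128, 128))
mask[:16, :] = 1.0        # a 16-cell controllable border (illustrative)
mask[-16:, :] = 1.0
mask[:, :16] = 1.0
mask[:, -16:] = 1.0
```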
We first infer trajectories using a trained CFE network and predictions that move the smoke into the desired bucket in a straight line. This baseline manages to transfer 89% ± 2.6% of the smoke into the target bucket. Next, we enable the hierarchical predictions and train the OPs. This version manages to maneuver 99.22% ± 0.15% of the smoke into the desired buckets while requiring 19.1% ± 1.0% less force. For comparison, Table 3 also lists the success rate and execution time for a direct optimization. The shooting method obtains a success rate of only 82% while requiring several orders of magnitude more time than an evaluation of our trained model. Since all optimizations are independent of each other, some find better solutions than others, which is reflected in the higher standard deviation. The increased number of free parameters and the complexity of the fluid dynamics to be controlled make this problem intractable for the shooting method, while our model can leverage the learned representation to infer a solution very quickly. Further details are given in Appendix D.3.

7 CONCLUSIONS

We have demonstrated that deep learning models in conjunction with a differentiable physics solver can successfully predict the behavior of complex physical systems and learn to control them. The introduction of a hierarchical predictor-corrector architecture allowed the model to learn to reconstruct long sequences by treating the physical behavior on different time scales separately. We have shown that using a differentiable solver greatly benefits the quality of solutions, since the networks can learn how their decisions will affect the future. In our experiments, hierarchical inference schemes outperform traditional sequential agents because they can easily learn to plan ahead. To model realistic environments, we have introduced observations to our pipeline which restrict the information available to the learning agent. While the PDE solver still requires full state information to run the simulation, this restriction does not apply when the agent is deployed. While we do not believe that learning approaches will replace iterative optimization, our method shows that it is possible to learn representations of solution manifolds for optimal control trajectories using data-driven approaches. Fast inference is vital in time-critical applications and can also be used in conjunction with classical solvers to speed up convergence and ultimately produce better solutions.

8 ACKNOWLEDGEMENTS

This work was supported in part by the ERC Starting Grant realFlow (ERC-2015-StG-637014).

B COMPLEXITY OF EXECUTION SCHEMES

The staggered execution scheme recursively splits a sequence of length n into smaller sequences, as depicted in Fig. 3b and Fig. 7a for n = 8. With each level of recursion depth, the sequence length is cut in half and twice as many predictions need to be performed. The maximum depth depends on the sequence length t_n − t_0 and the time steps ∆t performed by the solver,

d_max = log₂((t_n − t_0) / ∆t) − 1.

Therefore, the total number of predictions, equal to the number of OP evaluations, is

N_OP = 1 + 2 + 4 + ... + n/2 = Σ_{k=0}^{d_max} 2^k = n − 1.

The prediction refinement scheme performs more predictions, as can be seen in Fig. 7b. To understand the number of OP evaluations, we need to consider the recursive algorithm Reconstruct[u_0, o_n, o_2n], listed in Algorithm 1, that reconstructs a sequence or partial sequence of n frames. For the first invocation, the last parameter o_2n is absent, but for subsequences, that is not necessarily the case.
Each invocation performs one OP evaluation if o_2n is absent, and otherwise three. By counting the sequences for which this condition is fulfilled, we can compute the total number of network evaluations to be

N_OP = 3 Σ_{k=0}^{d_max} 2^k − 2 log₂(n) = 3n − 2 log₂(n) − 3.

C NETWORK ARCHITECTURES AND TRAINING

All neural networks used in this work are based on a modified U-net architecture (Ronneberger et al., 2015). The U-net represents a typical multi-level convolutional network architecture with skip connections, which we modify by using residual blocks (He et al., 2016) instead of regular convolutions for each level. We slightly modify this basic layout for some experiments. The network used for predicting observations for the fluid example is detailed in Tab. 4. The input to the network are two feature maps containing the current state and the target state. Zero-padding is applied to the input, so that all strided convolutions do not require padding. Next, five residual blocks are executed in order, each decreasing the resolution (1/2, 1/4, 1/8, 1/16, 1/32) while increasing the number of feature maps (4, 8, 16, 16, 16). Each block performs a convolution with kernel size 2 and stride 2, followed by two residual blocks with kernel size 3 and symmetric padding. Inside each block, the number of feature maps stays constant. Three more residual blocks are executed on the lowest resolution of the bowtie structure, after which the decoder part of the network commences, translating features into spatial content. The decoder works as follows: starting with the lowest resolution, the feature maps are upsampled with linear interpolation. The upsampled maps and the output of the previous block of the same resolution are then concatenated. Next, a convolution with 16 filters, a kernel size of 2 and symmetric padding, followed by two more residual blocks, is executed. When the original resolution is reached, only one feature map is produced instead of 16, forming the output of the network. Depending on the dimensionality of the problem, either 1D or 2D convolutions are used. The network used for the indirect control task is modified in the following ways: (i) it produces two output feature maps, representing the velocity (v_x, v_y); (ii) four feature maps of the lowest resolution (4x4) are fed into a dense layer producing four output feature maps. These and the other feature maps are concatenated before moving to the upsampling stage. This modification ensures that the receptive field of the network is the whole domain. All networks were implemented in TensorFlow (Abadi et al., 2016) and trained using the ADAM optimizer on an Nvidia GTX 1080 Ti. We use batch sizes ranging from 4 to 16. Supervised training of all networks converges within a few minutes, for which we iteratively decrease the learning rate from 10⁻³ to 10⁻⁵. We stop supervised training after a few epochs, comprising between 2000 and 10,000 iterations, as the networks usually converge within a fraction of the first epoch. For training with the differentiable solver, we start with a decreased learning rate of 10⁻⁴, since the backpropagation through long chains is more challenging than training with a supervised loss. Optimization steps are also considerably more expensive since the whole chain needs to be executed, which includes a forward and backward simulation pass. For the 2D fluid problems, an optimization step takes 1-2 seconds to complete.
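To make the architecture description above more concrete, here is a rough PyTorch sketch of a single encoder level; names are ours, zero padding replaces the symmetric padding of the paper, and this is not the authors' TensorFlow implementation.

```python
# Sketch: one encoder level of the modified U-net, i.e. a strided
# size-2 convolution followed by two residual blocks with kernel size 3.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.conv2(self.act(self.conv1(x)))

class EncoderLevel(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.down = nn.Conv2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.blocks = nn.Sequential(ResBlock(out_ch), ResBlock(out_ch))

    def forward(self, x):
        return self.blocks(self.down(x))
```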
When training with the differentiable solver, we let the networks run about 100,000 iterations, which takes between one and two days for the shown examples.

D DETAILED DESCRIPTION AND ANALYSIS OF THE EXPERIMENTS

In the following paragraphs, we give further details on the experiments of Section 6.

D.1 BURGERS' EQUATION

For this experiment, we simulate Burgers' equation (Eq. 6) on a one-dimensional grid with 32 samples over a course of 32 time steps. The typical behavior of Burgers' equation in 1D exhibits shock waves that move in +x or −x direction for u(x) > 0 or u(x) < 0, respectively. When opposing waves clash, they both weaken until only the stronger wave survives and keeps moving. Examples are shown in Figs. 4a and 8a. All 32 samples are observable and controllable, i.e. o(t) = u(t). Thus, we can enforce that all trajectories reach the target state exactly by choosing the force for the last step to be

F(t_{n−1}) = (o∗ − u(t_{n−1})) / ∆t.

To measure the quality of a solution, it is therefore sufficient to consider the total applied force ∫ |F(t)| dt (integrated from t_0 to t∗), which is detailed for the tested methods in Table 1.

Network training. Both for the CFE chains as well as for the observation prediction models, we use the same network architecture, described in Appendix C. We train the networks on 3600 randomly generated scenes with constant driving forces, F(t) = const. The examples are initialized with two Gaussian waves of random amplitude, size and position, set to clash in the center. In each time step, a constant Gaussian force with the same randomized parameters is applied to the system to steer it away from its natural evolution. Constant forces have a larger impact on the evolution than temporally varying forces, since the effects of temporally varying forces can partly cancel out over time. The ground-truth sequence can therefore be regarded as a near-perfect but not necessarily optimal trajectory. Figs. 4d and 8b display such examples. The same trajectories, without any forces applied, are shown in sub-figures (a) for comparison. We pretrain all networks (OPs or CFE, depending on the method) with a supervised observation loss,

L_o^sup = | OP[o(t_i), o(t_j)] − u_GT((t_i + t_j)/2) |².    (8)

The resulting trajectory after supervised training for the CFE chain is shown in Figure 4b and Figure 8c. For the observation prediction models, the trajectories are shown in Figure 4c and Figure 8e. After pretraining, we train all OP networks end-to-end with our objective loss function (see Eq. 4), making use of the differentiable solver. For this experiment, we choose the mean squared difference for the observation loss function:

L_o∗ = | o(u(t∗)) − o∗ |².    (9)

We test both the staggered execution scheme and the prediction refinement scheme, shown in Figure 8f and Figure 8g.

Results. Table 1 compares the resulting forces inferred by different methods. The results are averaged over a set of 100 examples from the test set, which is sampled from the same distribution as the training set. The CFE chains both fail to converge to o∗. While the differentiable physics version manages to produce a u(t_{n−1}) that resembles o∗, the supervised version completely deviates from an optimal trajectory. This shows that learning to infer the control force F(t_i) only from u(t_i), o∗ and t is very difficult, as the model needs to learn to anticipate the physical behavior over any length of time. Compared to the CFE chains, the hierarchical models require much less force and learn to converge towards o∗.
Still, the supervised training applies much more force to the system than required, the reasons for which become obvious when inspecting Figures 4b and 8e. While each state seems close to the ground truth individually, the control oscillates undesirably, requiring counter-actions later in time. The methods using the differentiable solver significantly outperform their supervised counterparts and exhibit excellent performance that is very close to the ground-truth solutions in terms of required forces. On many examples, they even reach the target state with less force than was applied by the ground-truth simulation. This would not be possible with the supervised loss alone; by having access to the gradient-based feedback from the differentiable solver, they can learn to find more efficient trajectories with respect to the objective loss. This allows the networks to learn to apply forces in different locations that make the system approach the target state with less force. Figures 4e and 8f,g show examples of this. The ground truth applies the same force in each step, thereby continuously increasing the first sample u(x = 0), and the supervised method tries to imitate this behavior. The governing equation then slowly propagates u(x = 0) in positive x direction, since u(x = 0) > 0. The learning methods that use a differentiable solver make use of this fact by applying much more force F(x = 0) > 0 at this point than the ground truth, even overshooting the target state. Later, once this value has had time to propagate to the right, the model corrects the overshoot by applying a negative force F(x = 0) < 0. Using this trick, these models reach the target state with up to 13% less force than the ground truth on the sequence shown in Figure 4. Figure 9 analyzes the variance of inferred forces. The supervised methods often fail to properly converge to the target state, resulting in large forces in the last step, visible as a second peak in the supervised CFE chain. The formulation of the loss (Eq. 3) suppresses force spikes: in the solutions inferred by our method, the likelihood of large forces falls off multi-exponentially, i.e. large forces are exponentially rare, which is the expected behavior given the L2 regularizer from Eq. 3. We also compare our results to a single-shooting baseline which is able to find near-optimal solutions at the cost of higher computation times. The classic optimization uses the ADAM optimizer with a learning rate of 0.01 and converges after around 300 iterations. To reach the quality of the staggered prediction scheme, it requires only around 60 iterations. This quick convergence can be explained by the relatively simple setup that is dominated by linear effects; the gradients are therefore stable, even when propagated through many frames. The computation times, shown in Table 1, were recorded on a single GTX 1080 Ti. We run 100 examples in parallel to reduce the relative overhead caused by GPU instruction queuing. For the network-based methods, we average the inference time over 100 runs; we perform 10 runs for the optimization methods.

D.2 INCOMPRESSIBLE FLUID FLOW

The incompressible Navier-Stokes equations model the dynamics of fluids such as water or air, which can develop highly complex and chaotic behavior. The phenomenon of turbulence is generally seen as one of the few remaining fundamental and unsolved problems of classical physics.
The challenging nature of the equations indicates that typically a very significant computational effort and a large number of degrees of freedom are required to numerically compute solutions. Here, we target an incompressible two-dimensional gas with viscosity ν, described by the Navier-Stokes equations for the velocity field v. We assume a constant fluid density throughout the simulation, setting ρ_f = const. ≡ 1. The gas velocity is controllable and, according to Eq. 1, we set

P(v, ∇v) = −(v · ∇)v + ν∇²v − ∇p / ρ_f,

subject to the hard constraints ∇ · v = 0 and ∇ × p = 0. For our experiments, we target fluids with low viscosities, such as air, and set ν = 0 in the equation above, as the transport steps implicitly apply numerical diffusion that is on average higher than the targeted one. For fluids with a larger viscosity, the Poisson solver outlined above for computing p could be used to implicitly solve a vector-valued diffusion equation for v. However, incorporating a significant amount of viscosity would make the control problem easier to solve for most cases, as viscosity suppresses small-scale structures in the motion. Hence, in order to create a challenging environment for training our networks, we keep only a minimal amount of diffusion in the physical model. In addition to the velocity field v, we consider a smoke density distribution ρ which moves passively with the fluid. The evolution of ρ is described by the equation ∂ρ/∂t = −v · ∇ρ. We treat the velocity field as hidden from observation, letting only the smoke density be observed, i.e. o(t) = ρ(t). We stack the two fields as u = (v, ρ) to write the system as one PDE, compatible with Eq. 1. For the OP and CFE networks, we use the 2D network architecture described in Appendix C. Instead of directly generating the velocity update in the CFE network for this problem setup, we make use of stream functions (Lamb, 1932). Hence, the CFE network outputs a vector potential Φ of which the curl ∇ × Φ is used as a velocity update. This setup numerically simplifies the incompressibility condition of the Navier-Stokes equations but retains the same number of effective control parameters; a sketch of this construction is given after the dataset description below.

Datasets. We generate training and test datasets for two distinct tasks: flow reconstruction and shape transition. Both datasets have a resolution of 128 × 128 with the velocity fields being sampled in staggered form (see Appendix A). This results in over 16,000 effective continuous control parameters that make up the control force F(t_i) for each step i. The flow reconstruction dataset is comprised of ground-truth sequences where the initial states (ρ_0, v_0) are randomly sampled and then simulated for 64 time steps. The resulting smoke density is then taken to be the target state, o∗ ≡ ρ∗ = ρ_sim(t_64). Since we use fully convolutional networks for both CFE and OPs, the open domain boundary must be handled carefully. If smoke were lost from the simulation because it crossed the outer boundary, a neural network would see the smoke simply vanish unless it was explicitly given the domain size as input. To avoid these problems, we run the simulation backwards in time and remove all smoke from ρ_0 that left the simulation domain. For the shape transition dataset, we sample initial and target states ρ_0 and ρ∗ by randomly choosing a shape from a library containing ten basic geometric shapes and placing it at a random location inside the domain. These can then be used for reconstructing sequences of any length n.
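As an aside on the stream-function parameterization used by the CFE, the construction in 2D can be sketched as follows (collocated grid and central differences for brevity, unlike the staggered layout of the paper; fields are indexed [y, x]):

```python
# Sketch: a divergence-free velocity update from a scalar potential phi,
# v = curl(phi) = (d phi / dy, -d phi / dx). The discrete divergence of
# the result vanishes by construction on a periodic grid.
import numpy as np

def curl_2d(phi, dx=1.0):
    vx = (np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / (2 * dx)
    vy = -(np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / (2 * dx)
    return vx, vy
```

Any network output passed through such a curl yields a velocity update satisfying ∇ · v = 0, which is why the incompressibility condition is simplified while the number of effective control parameters is retained.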
For the results on shape transition presented in Section 6, we choose n = 16 because all interesting behavior can be seen within that time frame. Due to the linear interpolation used in the advection step (see Appendix A), both ρ and v smear out over time. This numerical limitation makes it impossible to match target states exactly in this task, as the density becomes blurry over time. While we could generate ground-truth sequences using a classical optimizer, we refrain from doing so because (i) these trajectories are not guaranteed to be optimal and (ii) we want to see how well the model can learn from scratch, without initialization.

Training. We pretrain the CFE on the natural flow dataset with a supervised loss,

L_sup^CFE(u(t)) = | v_u(t) + F(t) − v∗(t) |²,

where v∗(t) denotes the velocity from ground-truth sequences. This supervised training alone already constitutes a good training signal for the CFE, as it only needs to consider single-step intervals ∆t while the OPs handle longer sequences. Nevertheless, we found that using the differentiable solver with an observation loss,

L_o^CFE = | B_r(o∗) − B_r(Solver[u + CFE[u, o∗]]) |²,

further improves the accuracy of the inferred force without sacrificing the ground-truth match. Here B_r(x) denotes a blur function with a kernel of the form 1/(1 + x/r); the blur helps make the gradients smoother and creates non-zero gradients in places where prediction and target do not overlap (a sketch of B_r is given below). During training, we start with a large radius of r = 16 ∆x for B_r and successively decrease it to r = 2 ∆x. We choose α such that L_F and L_o∗ are of the same magnitude when the force loss spikes (see Fig. 15). After the CFE is trained, we successively train the OPs, starting with the smallest time scale. For the OPs, we train different models for natural flow reconstruction and shape transition, both based on the same CFE model. We pretrain all OPs independently with a supervised observation loss before jointly training them end-to-end with the objective loss function (Eq. 4) and the differentiable solver to find the optimal trajectory. We use the OPs trained with the staggered execution scheme as initialization for the prediction refinement scheme. The complexity of solving the Navier-Stokes equations over many time steps in this example requires such a fully supervised initialization step. Without it, this setting is so non-linear that the learning process does not converge to a good solution. Hence, it illustrates the importance of combining supervised and unsupervised (requiring differentiable physics) training for challenging learning objectives. A comparison of the different losses is shown in Fig. 10. The predictions, shown in the top rows of each subfigure, illustrate the differences between the three methods. The supervised predictions, especially the long-term predictions (central images), are blurry because the network learns to average over all ground-truth sequences that match the given initial and target state. The differentiable physics solver largely resolves this issue. The predictions are much sharper, but the long-term predictions still do not account for short-term deviations. This can be seen in the central prediction of Fig. 10b, which shows hints of the target state o∗, despite the fact that the actual reconstruction u cannot reach that state at that time. The refined prediction, shown in subfigure (c), is closer to u since it is conditioned on the previous reconstructed state. In the training data, we let the network transform one shape into another at a random location.
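A possible realization of the blur operator B_r is sketched below; normalization and boundary handling are our own simplifications.

```python
# Sketch: B_r as a normalized convolution with kernel 1 / (1 + x / r),
# where x is the distance from the kernel center. Larger r gives a
# wider, softer kernel and hence smoother gradients.
import numpy as np
from scipy.signal import convolve2d

def blur(field, r):
    size = int(4 * r) | 1                      # odd kernel width
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    x = np.sqrt(xx ** 2 + yy ** 2)             # distance from center
    kernel = 1.0 / (1.0 + x / r)
    kernel /= kernel.sum()
    return convolve2d(field, kernel, mode='same', boundary='symm')
```

The loss then compares `blur(target, r)` against `blur(reconstruction, r)`, which produces non-zero gradients even where prediction and target do not overlap.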
The differentiable solver and the long-term intuition provided by our execution scheme make it possible to train networks that can infer accurate sequences of control forces. In most cases, the target shapes are closely matched. As our networks infer sequences over time, we refer readers to the supplemental material (https://ge.in.tum.de/publications/2020-iclr-holl), which contains animations of additional sequences.

Generalization to multiple shapes. Splitting the reconstruction task into prediction and correction has the additional benefit of giving full access to the intermediate predictions o^p. These model real states of the system, so classical processing or filter operations can be applied to them as well. We demonstrate this by generalizing our method to m > 1 shapes that evolve within the same domain. Figure 11 shows an example of two weakly-interacting shape transitions. We implement this by executing the OPs independently for each transition k ∈ {1, 2, ..., m} while inferring the control force F(t) on the joint system. This is achieved by adding the predictions of the smoke density ρ before passing them to the CFE network, õ^p = Σ_{k=1}^{m} o^p_k. The resulting force is then applied to all sequences individually so that smoke from one transition does not end up in another target state. Using this scheme, we can define start and end positions for arbitrarily many shapes and let them evolve together.

Evaluation of force strengths. The average force strengths are detailed in Table 2, while Figure 12 gives a more detailed analysis of the force strengths. As expected from using an L2 regularizer on the force, large values are exponentially rare in the solutions inferred from our test set. None of the hierarchical execution schemes exhibit large outliers. The prediction refinement requires the least amount of force to match the target, slightly ahead of the staggered execution trained with the same loss. The supervised training produces trajectories with reduced continuity that result in larger forces being applied.

D.3 INCOMPRESSIBLE FLUID WITH INDIRECT CONTROL

As a final test environment, we target a case with increased complexity, where the network no longer has the means to directly control the full fluid volume. Instead, the network can only apply forces in the peripheral regions, with a total of more than 5000 control parameters per step. The obstacles prevent fluid from passing through them, and the domain is enclosed with solid boundaries from the left, right and bottom. This leads to additional hard constraints and interplays between constraints in the physical model, and as such provides an interesting and challenging test case for our method. The domain has three target regions (buckets) separated by walls at the top of the domain, into which a volume of smoke should be transported from any position in the center part. Both the initial position and the target bucket are randomized for our training set of 3600 examples and test set of 100 examples. Each sequence consists of 16 time steps. In this case the control is indirect, since the smoke density lies outside the controlled area at all times. Only the incompressibility condition allows the network to influence the velocity outside the controlled area. This forces the model to consider the global context and synchronize a large number of parameters to create a desired flow field.
The requirement of complex synchronized force fields makes generating reliable training data difficult, as manual or random sampling is unlikely to produce a directed velocity field in the center. We therefore skip the pretraining process and directly train the CFE using the differentiable solver, while the OP networks are trained as before with r = 2 ∆x. To evaluate how well the learning method performs, we measure how much of the smoke density ends up inside the buckets and how much force was applied in total. For reference, we replace the observation predictions with an algorithm that moves the smoke towards the bucket in a straight line. Averaged over 100 examples from the test set, the resulting model manages to put 89% ± 2.6% of the smoke into the target bucket. In contrast, the model trained with our full algorithm moves 99.22% ± 0.15% of the smoke into the target buckets while requiring 19.1% ± 1.0% less force. We also compare our method to an iterative optimization which directly optimizes the control velocities. We use the ADAM optimizer with a learning rate of 0.1. Despite the highly non-linear setup, the gradients are stable enough to quickly let the smoke flow in the right direction. Fig. 14 shows how the trajectories improve during optimization. After around 60 optimization steps, the smoke distribution starts reaching the target bucket in some examples. Over the next 600 iterations, it converges to a configuration in which 82.1% ± 7.3% of the smoke ends up in the correct bucket.

D.4 COMPARISON TO SHOOTING METHODS

We compare the sequences inferred by our trained models to classical shooting optimizations using our differentiable physics solver to directly optimize F(t) with the objective loss L (Eq. 4) for a single input. We make use of stream functions (Lamb, 1932), as in the second experiment, to ensure the incompressibility condition is fulfilled. For this comparison, the velocities of all steps are initialized with a normal distribution with µ = 0 and σ = 0.01, so that the initial trajectory does not significantly alter the initial state, u(t) ≈ u(t_0). We first show how a simple single-shooting algorithm (Zhou et al., 1996) fares with our Navier-Stokes setup. When solving the resulting optimization problem using single shooting, strong artifacts in the reconstructions can be observed, as shown in Figure 17a. This undesirable behavior stems from the nonlinearity of the Navier-Stokes equations, which causes the gradients ∆u to become noisy and unreliable when they are recurrently backpropagated through many time steps. Unsurprisingly, the single-shooting optimizer converges to an undesirable local minimum. As single shooting is well known to have problems with non-trivial problem settings, we employ a multi-scale shooting (MS) method (Hartmann et al., 2014). This solver first computes the trajectory on a coarsely discretized version of the problem before iteratively refining the discretization. For the first resolution, we use 1/16 of the original width and height, which reduces both the number of control parameters and the nonlinear effects of the physics model. By employing an exponential learning rate decay, this multi-scale optimization converges reliably for all examples. We use the ADAM optimizer to compute the control variable updates from the gradients of the differentiable Navier-Stokes solver. An averaged set of representative convergence curves for this setup is shown in Figure 15.
The objective loss (Eq. 4) is shown in its decomposed state as the sum of the observation loss L_o∗, shown in Figure 15a, and the force loss L_F, shown in Figure 15b. Due to the initialization of all velocities with small values, the force loss starts out small. For the first 1000 iteration steps, L_o∗ dominates, which causes the system to move towards the target state o∗. This trajectory is not ideal, however, as more force than necessary is applied. Once the observation loss and the force loss are of the same magnitude, the optimization refines the trajectory to use less force. We found that the trajectories predicted by our neural-network-based method correspond to performing about 1500 steps with the MS optimization, while requiring less tuning. Reconstructions of the same example are compared in Figure 17. Performing the MS optimization up to this point took 131 seconds on a GTX 1080 Ti graphics card for a single 16-frame sequence, while the network inference ran for 0.5 seconds. For longer sequences, this gap grows further because the network inference time scales with O(n). This could only be matched if the number of iterations for the MS optimization scaled with O(1), which is not the case for most problems. These tests indicate that our model has successfully internalized the behavior of a large class of physical systems, and can exert the right amount of force to reach the intended goal. The large number of iterations required for the single-case shooting optimization highlights the complexity of the individual solutions. Interestingly, the network also benefits from the much more difficult task of learning a whole manifold of solutions: comparing solutions with similar observation loss for the MS algorithm and our network, the former often finds solutions that are unintuitive and contain noticeable detours, e.g. not taking a straight path for the density-matching examples of Fig. 5. In such situations, our network benefits from having to represent the solution manifold, instead of aiming for single-task optimizations. As the solutions change relatively smoothly, the complex task effectively regularizes the inference of new solutions and gives the network a more global view. The shooting optimizations, in contrast, have to rely purely on local gradients for single shooting or on manually crafted multi-resolution schemes for MS. Our method can also be employed to support the MS optimization by initializing it with the velocities inferred by the networks. In this case, shown in Figure 16, both L_o∗ and L_F decrease right from the beginning, similar to the behavior in Figure 15 from iteration 1500 on. The reconstructed trajectory from the neural-network-based method is so close to the optimum that the multi-resolution approach described above is not necessary.

D.5 ADDITIONAL RESULTS

In Fig. 18, we provide a visual overview of a subset of the sequences that can be found in the supplemental materials. It contains 16 randomly selected reconstructions for each of the natural flow, shape transition, and indirect control examples. In addition, the supplemental material, available at https://ge.in.tum.de/publications/2020-iclr-holl, highlights the differences between the unsupervised, staggered, and refined versions of our approach.
1. What is the focus of the paper regarding deep learning and PDEs?
2. What are the strengths of the proposed method, particularly its hierarchical structure and predictor-corrector scheme?
3. Do you have any concerns about the claim that the model only uses observables and does not require full-state information?
4. How can the differentiable PDE solver be utilized when the underlying physics is uncertain or unknown?
5. Would including an algorithm block in the main paper improve clarity?
6. Are there any suggestions for improving Figure 4 and Table 1?
Review
[Summary]
This paper proposes to combine deep learning and a differentiable PDE solver for understanding and controlling complex nonlinear physical systems over a long time horizon. The method introduces a predictor-corrector scheme, which employs a hierarchical structure that temporally divides the problem into more manageable subproblems, and uses models specialized in different time scales to solve the subproblems recursively. For dividing the problem into subproblems, they use an observation predictor network to predict the optimal center point between two states. To scale the scheme to sequences of arbitrary length, the number of models scales with O(log N). For each subproblem, the authors propose to use a corrector network to estimate the control force needed to follow the planned trajectory as closely as possible. They have compared their method with several baselines and demonstrated that the proposed approach is both more effective and more efficient on several challenging PDEs, including the incompressible Navier-Stokes equations.

[Major Comments]
Predicting the middle point between two states for modeling dynamics via deep neural networks is not new, but I am not aware of other works that use this idea for controlling PDEs. I like the idea of splitting the control problem into a prediction and a correction phase, which leverages the power of deep neural networks and also incorporates our understanding of physics. The introduction of the hierarchical structure alleviates the problem of error accumulation in single-step forward models and significantly improves the efficiency of the proposed method. The videos for fluid control in the supplemental materials also convincingly demonstrate the effectiveness of the technique. I still have a few questions regarding the applicability and the presentation of the paper. Please see the following detailed comments.

[Detailed Comments]
In Section 3, the authors claim that their model "is conditioned only on these observables" and "does not have access to the full state." However, the model requires a differentiable PDE solver to provide the gradient of how interactions affect the outcome. These seem to contradict each other. Doesn't the solver require full-state information to predict the behavior of the system?

Related to the previous question, how can we make use of the differentiable PDE solver if the underlying physics is uncertain or unknown, i.e., in partially observable scenarios?

The algorithm described in Section 5 seems to be the core contribution of this work. Instead of describing the algorithm in words, I think it would be clearer if the authors added an algorithm block in the main paper. It would also be better if the authors included a few sentences describing the algorithm in the abstract to inform the readers of what to expect.

Figure 4 is a bit confusing, and it would be better if the authors included a label for the x-axis. Besides, in the caption, the authors say that they show "the target state in blue." However, there are a lot of blue lines in the figure, and it is hard to know, at first glance, which one of them is the target.

In Table 1, the bottom two methods use the same execution scheme and training loss, but the results are different. Is there a typo? Also, it would be better to bold the number with the best performance.
ICLR
Title Bit-wise Training of Neural Network Weights
Abstract
We propose an algorithm where the individual bits representing the weights of a neural network are learned. This method allows training weights with integer values on arbitrary bit-depths and naturally uncovers sparse networks, without additional constraints or regularization techniques. We show better results than the standard training technique with fully connected networks and similar performance as compared to standard training for residual networks. By training bits in a selective manner we found that the biggest contribution to achieving high accuracy is given by the first three most significant bits, while the rest provide an intrinsic regularization. As a consequence we show that more than 90% of a network can be used to store arbitrary codes without affecting its accuracy. These codes can be random noise, binary files or even the weights of previously trained networks.

1 INTRODUCTION
Many challenging areas of computer science have found very good solutions by using powerful techniques such as deep neural networks. Their applications now range from computer vision, speech recognition, natural language processing and game playing engines, over natural sciences such as physics, chemistry and biology, to automated driving. Their success is largely due to the increase in computing power of dedicated hardware which supports massive parallel matrix operations. This has enabled researchers to build ever-growing models with intricate architectures and millions or even billions of parameters, with impressive results. However, despite their effectiveness, many aspects of deep neural networks are not well understood. One such aspect is why over-parameterized models are able to generalize well. One of the important avenues of research towards a better understanding of deep learning architectures is neural network sparsity. Frankle & Carbin (2019) showed a simple, yet very effective magnitude-based pruning technique capable of training neural networks in very high sparsity regimes while retaining the performance of the dense counterparts. This sparked new interest in parameter pruning and a large body of work on the topic has since been published. The techniques for weight pruning can be broadly categorized as follows: pruning after training, before training and pruning during training. The work of Frankle & Carbin (2019) falls in the first category because the method relies on removing the weights which reach small magnitudes after they have been trained.
In the second kind of approach, such as (Lee et al., 2019; Wang et al., 2020), neural networks are pruned before training in order to avoid expensive computations at training time. The end goal is to remove connections such that the resulting network is sparse and the weights are efficiently trainable after the pruning procedure. The third kind of approach is to use dynamical pruning strategies (Dai et al., 2019; Mostafa & Wang, 2019) which train and remove weights at the same time. The main goal behind these pruning strategies is to find sparse neural networks which can be trained to large degrees of accuracy. However, it has been shown by Zhou et al. (2019) that there exist pruning masks which can be applied to an untrained network such that its performance is far better than chance. Furthermore, Ramanujan et al. (2019) developed an algorithm for finding good pruning masks for networks with fixed, random weights. Theoretical works (Malach et al., 2020; Orseau et al., 2020) even proved that within random neural networks there exist highly efficient subnetworks, which can be found just by pruning. Orseau et al. (2020) advance the hypothesis that the main task of gradient descent is to prune the networks, with fine-tuning of the weights playing only a secondary role.

2 MOTIVATION
A key issue we want to emphasize is that, in all these works, the way in which the networks are pruned in practice is by forcing them, through some criterion, to set a fraction of the weights to zero. Since it has been shown that sparse networks perform as well as their dense counterparts, or sometimes even better, the natural question that arises is: why doesn't gradient descent itself prune the weights during training? Why hasn't pruning been spontaneously observed in practice? One possible explanation is that, at least for classification tasks, the usual cross-entropy loss without additional regularization techniques is not well suited for this. Other factors such as the stochasticity of the data batches, the optimization algorithm, the weight initialization etc. might also play a role. However, we approach this question from a different perspective. We hypothesize that an important reason for weights not being set to zero is that zero is a particular state in which all bits representing a weight must equal zero. This is highly unlikely since weights are usually represented on 32 bits. The probability of a single weight being set to exactly zero is 2⁻³¹, the sign bit not playing a role. Therefore the chances that a significant number of weights is set to zero decrease very rapidly. If weights were represented at a lower bit depth, the chance that the optimizer sets them to zero should increase. In order to test the degree to which this hypothesis is true, we experiment with neural networks for image classification where, instead of training the weights themselves, we train the individual bits representing the weights. This might allow gradient descent to reach stable states where all bits in a set of weights are zero and the loss function is around a local minimum. If our hypothesis is true, then we expect a strong dependency between sparsity and bit depth. By encoding weights on arbitrary precision we also touch upon the topic of network quantization and show that particular cases of this training technique result in algorithms developed in previous works, which we will describe in Section 8.
Moreover, we show that weight quantization naturally leads to weight pruning and sparse networks without additional constraints such as regularization, additional loss terms, architectural changes or other tricks usually involved in engineering low-bit quantized networks.

3 BINARY DECOMPOSITION
We approximate weights on k bits by using the sign-and-magnitude representation, due to its simplicity. A weight tensor of a layer l can be decomposed as

θ^l_k = ( Σ_{i=0}^{k−2} a^l_i · 2^{i+α_l} ) · (−1)^{a^l_{k−1}},    (1)

with a^l_i ∈ {0, 1} representing the binary coefficients and k the number of bits. The summation encodes the magnitude of the number while the second factor encodes the sign: this way we obtain numbers in a symmetric interval around zero. We add a negative constant α_l to the exponent in order to allow the representation of fractional numbers (see Table 1). Additionally, this term controls the magnitude of the numbers and, therefore, the width of the weight distribution within a layer. Choosing α_l < −k + 1, the weights are guaranteed to be less than 1 in magnitude. In order to constrain the a to take binary values, we use auxiliary floating point variables x ∈ R (virtual bits) passed through a unit step function: a = H(x) = 0 if x ≤ 0, otherwise 1. The weight initialization for the k-bit training technique is as follows: for a fully connected layer the weight matrix is expanded into a 3D tensor of shape (k, n_{l−1}, n_l), with k representing the number of bits and n_{l−1}, n_l the number of nodes in the previous and current layer, respectively. Figure 1 illustrates a simple example of a (3, 4, 3) bit-tensor. For convolutional layers, where a weight tensor is in higher dimension, the procedure is analogous to the fully connected case and the bit-tensor is now of shape (k, s_x, s_y, n_{l−1}, n_l), with s_x, s_y representing the kernel sizes in x and y direction. The value of each bit is chosen randomly with equal probability of being either 0 or 1 (x ≤ 0 in the first case and x > 0 in the second). We ensure the weights sampled in this manner are not initialized at exactly zero, because this would mean pruning the network from the start and would invalidate our hypothesis. Hence we obtain a uniform weight distribution without zeros. We adopt the Kaiming He (He et al., 2015a) initialization technique for each layer's weights, which means the standard deviation is √(2/n_{l−1}), where n_{l−1} is the number of nodes in the previous layer. We have determined α_l algorithmically via a simple binary search such that this condition is fulfilled for the weight distribution of each layer. This term is a fixed parameter in each layer and depends only on the structure of the network. The virtual bits, x, are chosen from a normal distribution which also satisfies the Kaiming He condition on its variance. For the particular situation where k = 2 the weights have only two values and the standard deviation is exactly 2^{α_l}. Ramanujan et al. (2019) refer to this distribution as the Signed Kaiming Constant. During training, the feed-forward step is performed as usual, with the weights being calculated according to Eq. (1). The backpropagation phase uses the straight-through estimator (STE) (Hinton, 2012; Bengio, 2013) for the step function introduced in the weight's binary decomposition. The derivative of a hard threshold function such as the Heaviside step function is zero everywhere except at zero (more specifically, it is the Dirac delta function).
Notice that in Eq. (1) the additive constant $\alpha^l$ can be factored out of the sum. The resulting weights take the form $\theta^l_k = 2^{\alpha^l} \cdot \Theta^l_k$, where $\Theta^l_k$ contains only integer numbers representable on k bits. The ReLU activation function has the property that $\sigma(\alpha \cdot x) = \alpha \cdot \sigma(x)$ for any $\alpha > 0$. It can be shown that for a ReLU network of depth L, scaling the weights of each layer by a factor $\alpha_l$, with $l \in \{0, 1, \ldots, L-1\}$ the layer index, is equivalent to scaling just a single layer, possibly including the input layer, by $\alpha = \prod_{l=0}^{L-1} \alpha_l$. This means that we can gather all factors $\alpha^l$ into a single $\alpha$, scale the input images by that factor, and train the network with just integer numbers represented on k bits. At inference time, $\alpha$ is irrelevant for classification tasks because all output nodes are scaled by the same coefficient and $\operatorname{argmax}(\alpha \cdot x) = \operatorname{argmax}(x)$ for any $\alpha > 0$.
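A quick numerical check of this scaling property on a hypothetical bias-free two-layer ReLU network (sizes and factors are arbitrary illustrations, not values used in the paper):

import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

W0, W1 = rng.standard_normal((10, 16)), rng.standard_normal((16, 4))
x = rng.standard_normal(10)
a0, a1 = 2.0 ** -3, 2.0 ** -5        # per-layer factors 2**alpha_l

scaled_layers = relu(x @ (a0 * W0)) @ (a1 * W1)
scaled_input = relu((a0 * a1 * x) @ W0) @ W1
assert np.allclose(scaled_layers, scaled_input)
assert scaled_layers.argmax() == (relu(x @ W0) @ W1).argmax()

Since the logits are only rescaled by a positive constant, the predicted class is unchanged, which is what makes the integer-only formulation usable at inference time.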
4 EXPERIMENTS

We have performed an extensive set of experiments in which networks were trained at bit-depths ranging from 2 to 32. Figure 2 summarizes the performance of LeNet and ResNet-18 (LeCun et al., 1998; He et al., 2015b) trained on MNIST and CIFAR10 (LeCun & Cortes, 2010; Krizhevsky, 2009). Each experiment was repeated 15 times. Each data point represents the best accuracy/sparsity obtained in a run, and the repetitions are displayed as violin plots, which show via kernel density estimation the minimum, mean, maximum and spread of the repeated runs. The right-most violin shows the performance of the standard 32-bit training technique, the horizontal black line its mean, and the shaded area the minimum and maximum accuracy. The networks were trained with the following setup. For LeNet the learning rate starts at $9 \cdot 10^{-4}$ and is divided by 10 at epochs 40 and 80. We also experimented with a single, fixed learning rate, but in that case the standard training technique on 32 bits reached a maximum accuracy of only 97.7%, while bit-wise weight training did not suffer any noticeable penalty. For ResNet the learning rate starts at $6 \cdot 10^{-4}$ and is divided by 10 at epochs 150 and 170. In both cases we used the Adam optimizer (Kingma & Ba, 2017). For LeNet (left panels in Figure 2) this training technique consistently achieves higher mean accuracies than the baseline while at the same time pruning the network significantly. Moreover, as the bit-depth decreases there seems to be a slight increase in the mean classification accuracy. This indicates that the additional bits available for the weights impede the ability of gradient descent to converge to better solutions.

The right panels in Figure 2 show the results of ResNet-18 trained on CIFAR10. Here we observe a degradation in classification accuracy of about 1.7 percentage points compared to the standard training technique (we show in Section 5 how to mitigate this issue). The network sparsity is higher than in the case of LeNet, in the range of 25–35% for bit-depths 2 to 16. Note that the sparsity plots are also represented as violins, but their height is small relative to the scale of the entire curve due to the very small variations in the sparsity achieved at the end of training. For both LeNet and ResNet there is a strong dependency between the bit-depth and the number of zero weights found by the network. This is in line with our hypothesis that gradient descent does not naturally uncover sparse networks when training weights represented on 32 bits. It also explains why currently used pruning techniques require external mechanisms which force the network to set weights to zero during training: in essence, they bypass the weight's whole bit structure, effectively setting all bits to zero at once. The black dots in Figure 2 indicate the percentage of weights set to zero by random chance. We observe that for high bit-depths (k > 24) the chance that gradient descent sets a certain number of weights to zero is almost the same as random chance. For lower bit-depths, however, gradient descent is much more likely to set weights to zero, due to the much smaller search space of the weight's bit structure. Figure 3 shows the histogram of the (float) weight distribution of the second hidden layer in LeNet before and after training. Bit-wise weight learning moves a significant fraction of the weights either to exactly zero or to the maximum value representable on k bits. The frequency of intermediate values is significantly reduced, in some cases by an order of magnitude. Although this technique has no special regularization nor an external weight-pruning mechanism, it naturally uncovers sparse networks. This stands in stark contrast with the standard training technique (right-most panels): there, the distribution of the weights after training is much more spread out than the initial one and has a large peak towards zero, but the weights are never exactly zero.

5 SELECTIVE BIT TRAINING

In Section 4 we presented experiments where all weight bits are trained simultaneously. Our algorithm, however, also allows us to train only specific bits, while keeping the others fixed as they were originally initialized. We encode which bits are trainable as a string mask of 0's and 1's: for example, for a 4-bit mask '0001' we initialize all bits randomly but train only the least significant bit, while for '1000' we train only the sign bit and leave the rest unchanged. See Table 1 for an example of a weight represented as a 16-bit number. Figure 4 shows the results achieved by LeNet with all possible selective training patterns for 2, 4 and 8 bits. Training with weights encoded on 2 bits (top-left panel) allows 3 possible scenarios: '01' trains the magnitude, '10' trains the sign, and '11' trains both the sign and the magnitude of the weights. For weights encoded on 4 bits, pattern '1000' corresponds to training just the sign and keeping the magnitudes random, '0111' corresponds to training the magnitudes and keeping the sign fixed, and '1111' corresponds to training all bits. The same logic applies for 8 bits (bottom panel).
The baseline accuracy is shown as the right-most data point in each graph. Figure 5 shows the same experiments for ResNet. An interesting phenomenon appears when training bits selectively. Several strong discontinuities in the accuracy curve are visible when training weights encoded on 4 and 8 bits. They appear at very specific bit patterns, which we address next. First, we highlight the extreme situations of (a) training just the sign bit and (b) training only the magnitude bits. In Figures 4 and 5 these are the central data points with trainable bit patterns '10', '1000', '10000000' for sign training and '01', '0111', '01111111' for magnitude training. When training just the sign bit, LeNet outperforms the baseline network, as shown in Figure 4. Our weight initialization procedure avoids initializing magnitudes to zero. For the particular case of quantizing weights on k = 2 bits, this means that the magnitude bit is always 1; training only the sign bit is therefore equivalent to training a binary network with $\Theta \in \{-1, 1\}$. For ResNet (Figure 5), training only the weights' signs leads to a performance drop of 2–4 percentage points, depending on the quantization size. Still, this shows that the network can be trained reasonably well only by changing the signs of the weights and never updating their magnitudes. Training only the magnitude bits results in a very small performance penalty for LeNet compared to the baseline, and a penalty of about 1–3 percentage points for ResNet. Training all bits simultaneously yields roughly the average of the performance in the two extreme cases. This phenomenon holds for both ResNet and LeNet, although it is less visible for the latter.

We performed experiments at bit-depths ranging from 2 to 32 in which we train only the sign bits or only the magnitude bits of ResNet. Figure 6 summarizes the test accuracy and sparsity obtained in these two cases. Notice that there is little to no correlation between accuracy and bit-depth above 8, whereas sparsity is strongly influenced by it, particularly above 14. For bit-depths lower than 5, magnitude-only training decreases in performance, while sign-only training improves. For the extreme k = 2 quantization their accuracy ordering is inverted, and in this case training both the sign bit and the magnitude bit results in a ternary network with $\Theta \in \{-1, 0, 1\}$.

The second important observation concerns the cases where the sign and the next one or two bits are trained, while the remaining bits stay randomly initialized. These correspond to the trainable bit patterns '1100', '1110', '11000000' and '11100000' in Figures 4 and 5. In all these cases the bit-wise training technique reaches an accuracy above the baseline (LeNet) or similar to it (ResNet). This behaviour indicates that a fraction of the untrainable (and less significant) magnitude bits acts as a regularizer, increasing the accuracy of the network compared to the case when they are also trained. We also investigated how many trainable bits are sufficient to reach the accuracy of the baseline. To this end we perform bit-wise training on ResNet with 32-, 16-, 8-, 6-, 4- and 2-bit encodings for the weights and gradually decrease the number of trainable bits. More specifically, we expand Eq. (1) in the following way:

$$\theta^l_k = 2^{\alpha^l} \left( \underbrace{\sum_{i=0}^{p-1} a^l_i \, 2^i}_{\text{untrainable}} + \underbrace{\sum_{j=p}^{k-2} a^l_j \, 2^j}_{\text{trainable}} \right) \cdot \underbrace{(-1)^{a^l_{k-1}}}_{\text{trainable}} \qquad (2)$$

where k represents the weight's bit-depth and p the number of untrainable bits. For p = 0 all bits are trainable, and for p = k − 2 only the sign is trainable.
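A minimal sketch of how such a selective pattern can be imposed on the virtual bits of the BitwiseLinear layer sketched earlier (the helper name and the detach-based mechanism are our illustrative choices; the mask string follows the paper's convention, leftmost character = sign bit):

import torch

def masked_virtual_bits(x: torch.Tensor, mask_str: str) -> torch.Tensor:
    # x has shape (k, n_in, n_out) with plane 0 the least significant
    # magnitude bit and the last plane the sign bit, so the mask string
    # ('1000' = train only the sign) is simply reversed
    k = x.shape[0]
    mask = torch.tensor([float(c) for c in reversed(mask_str)]).view(k, 1, 1)
    # gradients flow only through the planes where mask == 1;
    # the frozen planes keep their random initial values
    return mask * x + (1.0 - mask) * x.detach()

Using masked_virtual_bits(self.x, '1100') in place of self.x inside the weight computation would, under these assumptions, reproduce the '1100' pattern of Figures 4 and 5.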
We summarize the results of these experiments in Figure 7. The blue data points represent the test accuracy of ResNet as a function of the number of trainable bits, with weights encoded as 32-bit integers. Training more than 17 bits results in a test accuracy of about 88%. As the number of trainable bits decreases, the accuracy improves and reaches the level of the baseline when training only the first 3 bits. A similar behaviour is seen when encoding weights at lower bit-depths. The best performance is obtained when weights are encoded on more than 6 bits and we train the sign and the next two most significant magnitude bits. The rest of the available bits do not contribute to the network's performance; rather, they hinder the capacity of the network to converge well.

6 POST-TRAINING BIT ANALYSIS

Training bits selectively uncovers the fact that only a few of the most significant bits contribute to achieving a high accuracy, while the others provide regularization. In contrast, standard training does not reveal which weights or bits contribute most to the network's performance. In order to understand this, we conduct experiments where we convert the weights learned in the standard way into weights expressed according to Eq. (1). More precisely, we start by training a standard network and, after training, for each layer we divide all weights by the magnitude of the smallest non-zero weight within that layer and round to the nearest integer. We thereby obtain integer weights which we can decompose into binary form, gaining access to each bit. To stay as close as possible to the original weights we encode the integer weights on 32 bits, even though in most situations the weights do not require that many. Thus we convert a network trained in the standard way, with weights as 32-bit floating-point values, into a network with integer weights on 32 bits. Next, we start changing the p least significant magnitude bits and leave the remaining 32 − p bits unchanged, in the spirit of Eq. (2). In this way we can investigate the impact of each bit on the final accuracy. Note that different layers require different numbers of bits to represent their weights, and this number generally, but not necessarily, depends on the number of weights within the layer. If we change more bits than a layer requires, the pre-trained structure is destroyed and the network loses its properties. To avoid this, we compute the maximum number of bits required for the weights in each layer, $m_l$, and impose that the maximum number of changed bits for each layer is $p^{\max}_l = m_l - 3$. Figure 8 shows the accuracy and sparsity of a standard, pre-trained LeNet and of a 6-layer VGG-like network, Conv6 (the same as in Frankle & Carbin (2019); Zhou et al. (2019); Ramanujan et al. (2019)), as a function of the number of changed bits. We experimented with three scenarios: all p bits are changed randomly, all are set to 0, and all are set to 1. The first data point in each graph, p = 0, represents the performance of the unmodified network with 32-bit floating-point weights, as no bits are changed. The following entries indicate the performance of the network as we gradually increase the number of changed bits. LeNet extends up to 16 bits (the maximum allowed for the first layer in this particular network) and Conv6 extends up to 25 (the maximum allowed for the first dense layer within this network). Setting all p bits to zero (or one) leads to a single possible set of weights, while setting them randomly leads to many possible outcomes. This difference is reflected in Figure 8 in the way the data points are represented: a single dot when setting bits to zero/one and a violin when setting bits randomly.
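The bit surgery described above can be sketched as follows (the function name and the mode options are ours for illustration; we assume a layer's weights are given as a float array):

import numpy as np

def overwrite_low_bits(w: np.ndarray, p: int, mode: str = "zero") -> np.ndarray:
    scale = np.abs(w[w != 0]).min()          # smallest non-zero magnitude
    ints = np.rint(np.abs(w) / scale).astype(np.int64)
    signs = np.sign(w)
    low = (1 << p) - 1                       # mask of the p lowest bits
    if mode == "zero":
        ints &= ~low
    elif mode == "one":
        ints |= low
    else:                                    # mode == "random"
        ints = (ints & ~low) | np.random.randint(0, 1 << p, size=ints.shape)
    return signs * ints * scale              # back to float weights

Applying such a transformation to every layer, with p capped at $m_l - 3$ as above, corresponds to the experiment reported in Figure 8.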
One can observe that weights trained in the standard 32-bit floating-point format also do not make full use of the high-precision bits. The first 6 bits play no significant role in the final accuracy, as they can be modified post-training to any value. These results are in line with our initial hypothesis that gradient descent does not prune networks because of the large number of bits available for the weights. Additionally, we find that the most important contribution to the performance of a network comes from the sign bit, followed by the next two most significant magnitude bits. This suggests that gradient descent might find a local optimum based only on these three bits, while the rest are used for fine-tuning. That fine-tuning, however, appears to be less consequential, since a large fraction of those bits can be set to zero, set to one, or left randomly initialized, perhaps due to the stochasticity of the training algorithm (batch training) or the noise present in the data itself.

7 MESSAGE ENCODING IN WEIGHTS

We have shown so far that 29 out of the 32 bits available for the weights of ResNet have an overall regularizing behaviour and can remain randomly initialized and never trained. This leads to the idea that they could be used to encode arbitrary messages, while the trainable bits remain sufficient to train the network to high accuracy. To test this hypothesis we performed several experiments in which we embedded various types of messages in the 29 untrainable bits of a neural network's weights and trained only the remaining 3. The results are summarized in Figure 9. Each experiment was repeated 10 times. The first data point shows the baseline accuracy of ResNet trained with the standard method (32-bit floating-point representation of the weights). For the second experiment we assigned random values to the untrainable bits of each layer. In the third experiment we embedded random passages from Shakespeare's Hamlet. In the fourth experiment we trained until convergence 29 ResNets with bit-depth 1 and embedded each of them into a new 32-bit ResNet, training in a bit-wise fashion the sign and the next two most significant magnitude bits. The test accuracy obtained by the 1-bit ResNet is shown as the last violin. We observe that embedding random noise, structured data, or a set of previously learned weights does not impact the accuracy with respect to the baseline ResNet in any significant way.
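A sketch of how a byte string can be packed into the untrainable bit planes of the virtual-bit tensor from our earlier BitwiseLinear sketch (the helper name and packing order are illustrative assumptions; planes 0 to n_planes − 1 are the untrainable, least significant ones, matching Eq. (2)):

import numpy as np
import torch

def embed_message(x: torch.Tensor, message: bytes, n_planes: int = 29) -> None:
    # x: virtual bits of shape (k, n_in, n_out)
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    capacity = n_planes * x.shape[1] * x.shape[2]
    assert bits.size <= capacity, "message does not fit in this layer"
    with torch.no_grad():
        flat = x[:n_planes].reshape(-1)          # view into the frozen planes
        target = torch.from_numpy(bits.astype(np.float32))
        # bit 1 needs x > 0 and bit 0 needs x <= 0, so we only flip signs;
        # the initialization guarantees the magnitudes are non-zero
        flat[: bits.size] = flat[: bits.size].abs() * (2.0 * target - 1.0)

Since these planes are excluded from the gradient updates, the message survives training verbatim and can be read back with the same unpacking convention.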
8 CONNECTION WITH OTHER WORKS

Our weight initialization procedure described in Section 3 ensures that weights are never set to zero before training. For the particular case of k = 2 bits this means that the magnitude bit is always 1 while the sign bit can be either 1 or 0. Training only the sign bit is therefore equivalent to training a binary network. This is similar to BinaryConnect, BinaryNet (Courbariaux et al., 2015; 2016) and XNOR-Net (Rastegari et al., 2016), where weights are constrained to −1 and 1. Training with bit pattern '01' (magnitude only) or '11' (sign and magnitude) results in a ternary network (Li & Liu, 2016; Zhu et al., 2017), because the magnitude is now also allowed to change, leading to some weights being set to zero. When training only the magnitude bits, the behaviour of our algorithm is effectively very similar in nature and performance to the edge-popup algorithm developed by Ramanujan et al. (2019), which finds pruning masks for networks with weights randomly sampled from the Signed Kaiming Constant distribution. Encoding weights on arbitrary bit-depths and training just the sign bit, we obtain the sign-flipping algorithm first shown by Ivan & Florian (2020). Wang et al. (2021) found in a recent study that it is possible to embed 36.9 MB of malware into the dense layers of a pretrained 178 MB AlexNet model with a 1% accuracy degradation and without being detected by antivirus programs. Our method can store arbitrary code in any layer of a network (dense as well as convolutional) and could drastically increase the amount of embedded payload without damaging the network's performance, while raising no suspicion about the presence of the malware.

9 SUMMARY

Motivated by the question of why gradient descent does not naturally prune neural connections during training, we developed a method to directly train the bits representing the weights. From this perspective we show that an important factor is the over-parametrization in terms of the number of bits available for weight encoding. This also sheds some light on why networks with large numbers of weights are able to generalize well. Our algorithm enables weight quantization on arbitrary bit-depths and can be used as a tool for bit-level analysis of weight training procedures. We show that gradient descent effectively uses only a small fraction of the most significant bits, while the less significant ones provide an intrinsic regularization and their exact values are not essential for reaching a high classification accuracy. A consequence of this property is that, by using 32 bits for the weight representation, more than 90% of a ResNet can be used to store a large variety of messages, ranging from random noise to structured data, without affecting its performance.

10 REPRODUCIBILITY

The code used for the experiments carried out in this work will be made public at: https://github.com/iclr2022-2798/bit-wise-training
1. What is the central motivation behind the proposed method of training individual bits of weights in a neural network?
2. How does the method demonstrate that good performance can be achieved with few bits of weight representation?
3. What is the significance of the experiments that restrict the changeable bit positions?
4. Can you provide further clarification or references regarding the background of pruning literature and sparse representations in neural networks?
5. Why do you think the experimental results appear preliminary and limited in their ability to draw conclusions?
6. What is the purpose of the last section on encoding messages in weights, and how does it relate to the rest of the paper?
Summary Of The Paper Review
Summary Of The Paper
The authors propose a method to train the individual bits of the weights of a neural network. The central motivation is the question of why gradient descent doesn't "discover" inherent sparsity by setting certain weights to zero. The authors suggest that since there are so many possible states of a 32-bit number, the probability of landing on all zeros is vanishingly small. Using the bit-wise training technique, they demonstrate that good performance can be achieved with few bits of weight representation. The authors also run a series of experiments where only certain bit positions are changeable, to determine which positions are most relevant for good classification performance.

Review
The authors provide a nice background of the pruning literature and sparse representations in neural networks. I believe they were trying to provide a background for the idea that sparseness is something inherent that should/could be uncovered by gradient descent. However, the background citations are for a slightly different, though related, topic; pruning techniques are not exactly what this paper is about, but it does share some background with sparse representations. In the motivation, the line "One possible explanation is that, at least for classification tasks, the usual cross-entropy loss without additional regularization techniques are not well suited for this" needs a citation. The description of the STE in Section 3 needs more detail. The experimental results look very preliminary. Using only LeNet and CIFAR10 might be a good way to triage a technique, but it is very difficult to draw any conclusions based on these; they are too small, and the results between them barely suggest any trend. The authors essentially show a known result, that training only sign bits (Ivan and Florian, 2020) yields good results. What additional science have the authors uncovered? The last section, on encoding messages in weights, seems quite unrelated to the rest of the paper. It is interesting, but the results seem random and don't give us any new insight into the learning process of neural networks.
ICLR
Title
Bit-wise Training of Neural Network Weights

Abstract
We propose an algorithm in which the individual bits representing the weights of a neural network are learned. This method allows training weights with integer values on arbitrary bit-depths and naturally uncovers sparse networks, without additional constraints or regularization techniques. We show better results than the standard training technique with fully connected networks and similar performance compared to standard training for residual networks. By training bits in a selective manner we found that the biggest contribution to achieving high accuracy is given by the first three most significant bits, while the rest provide an intrinsic regularization. As a consequence we show that more than 90% of a network can be used to store arbitrary codes without affecting its accuracy. These codes can be random noise, binary files or even the weights of previously trained networks.

1 INTRODUCTION
Many challenging areas of computer science have found very good solutions through powerful techniques such as deep neural networks. Their applications now range from computer vision, speech recognition and natural language processing to game-playing engines, natural sciences such as physics, chemistry and biology, and even automated driving. Their success is largely due to the increase in computing power of dedicated hardware which supports massive parallel matrix operations. This has enabled researchers to build ever-growing models with intricate architectures and millions or even billions of parameters, with impressive results. However, despite their effectiveness, many aspects of deep neural networks are not well understood. One such aspect is why over-parameterized models are able to generalize well. One of the important avenues of research towards a better understanding of deep learning architectures is neural network sparsity. Frankle & Carbin (2019) showed a simple yet very effective magnitude-based pruning technique capable of training neural networks in very high sparsity regimes while retaining the performance of the dense counterparts. This sparked new interest in parameter pruning and a large body of work on the topic has since been published. The techniques for weight pruning can be broadly categorized as follows: pruning after training, pruning before training, and pruning during training. The work of Frankle & Carbin (2019) falls in the first category, because the method relies on removing the weights which reach small magnitudes after they have been trained.
1. What is the focus of the paper regarding low-precision quantization?
2. What are the issues with the introduction and motivation section of the paper?
3. What is the expression for alpha in terms of layer dimensions and number of bits?
4. What is a He distribution, and how does it relate to the paper's content?
5. Why do the experiments focus on trivial networks and datasets, and how could the results be generalized to larger networks?
6. What is the purpose of message encoding in the neural network's weights using steganography during training?
Summary Of The Paper Review
Summary Of The Paper
The paper proposes an arithmetic decomposition of network weights and training of the individual bits in order to achieve low-precision quantization.

Review
Unfortunately, there are serious issues with this submission. The introduction and motivation seem to be written for a paper that is about pruning, not quantization, and they are out of context compared to the title, abstract, and rest of the paper. This is most likely a LaTeX error. With the motivation of the paper and the comparison to prior art excluded, it is very hard to assess the quality of the work. I did, however, try to extrapolate what the authors intended to write, and below is my review of Section 3 and onwards. In Section 3, what is the expression for \alpha in terms of layer dimensions and number of bits such that the He initialization standard-deviation condition is satisfied? This seems like an interesting result that could be added inline in the paper, rather than just implicitly mentioning that an expression for \alpha was derived and used. What is a He distribution? In (He, 2015), variance engineering is done and the contributions are conditions on the variance of the initialized weights. The distributions themselves are either uniform or normal. Can the authors clarify what they mean by a He distribution? I believe they mean the He conditions on the variance. The experiments are performed on very trivial networks deployed on the MNIST and CIFAR-10 datasets. Can the authors evaluate their work on more contemporary networks such as ResNet on ImageNet and similar tasks? Message encoding in the neural network's weights using steganography during training is interesting. However, why does it matter? And can these results be generalized to larger networks?
Title Bit-wise Training of Neural Network Weights Abstract We propose an algorithm where the individual bits representing the weights of a neural network are learned. This method allows training weights with integer values on arbitrary bit-depths and naturally uncovers sparse networks, without additional constraints or regularization techniques. We show better results than the standard training technique with fully connected networks and similar performance as compared to standard training for residual networks. By training bits in a selective manner we found that the biggest contribution to achieving high accuracy is given by the first three most significant bits, while the rest provide an intrinsic regularization. As a consequence we show that more than 90% of a network can be used to store arbitrary codes without affecting its accuracy. These codes can be random noise, binary files or even the weights of previously trained networks. N/A We propose an algorithm where the individual bits representing the weights of a neural network are learned. This method allows training weights with integer values on arbitrary bit-depths and naturally uncovers sparse networks, without additional constraints or regularization techniques. We show better results than the standard training technique with fully connected networks and similar performance as compared to standard training for residual networks. By training bits in a selective manner we found that the biggest contribution to achieving high accuracy is given by the first three most significant bits, while the rest provide an intrinsic regularization. As a consequence we show that more than 90% of a network can be used to store arbitrary codes without affecting its accuracy. These codes can be random noise, binary files or even the weights of previously trained networks. 1 INTRODUCTION Many challenging areas of computer science have found very good solutions by using powerful techniques such as deep neural networks. Their applications range now from computer vision, speech recognition, natural language processing, game playing engines, natural sciences such as physics, chemistry, biology and even to automated driving. Their success is largely due to the increase in computing power of dedicated hardware which supports massive parallel matrix operations. This enabled researchers to build ever growing models with intricate architectures and millions or even billions of parameters, with impressive results. However, despite their effectiveness, many aspects of deep neural networks are not well understood. One such aspect is why over-parameterized models are able to generalize well. One of the important avenues of research towards a better understanding of deep learning architectures is neural network sparsity. Frankle & Carbin (2019) showed a simple, yet very effective magnitude based pruning technique capable of training neural networks in very high sparsity regimes while retaining the performance of the dense counterparts. This sparked new interest in parameter pruning and a large body of work on the topic has since been published. The techniques for weight pruning can be broadly categorized as follows: pruning after training, before training and pruning during training. The work of Frankle & Carbin (2019) falls in the first category because the method relies on removing the weights which reach small magnitudes after they have been trained. 
In the second kind of approach, such as (Lee et al., 2019; Wang et al., 2020), neural networks are pruned before training in order to avoid expensive computations at training time. The end goal is to remove connections such that the resulting network is sparse and the weights are efficiently trainable after the pruning procedure. The third kind of approach is to use dynamical pruning strategies (Dai et al., 2019; Mostafa & Wang, 2019) which train and remove weights at the same time. The main goal behind these pruning strategies is to find sparse neural networks which can be trained to high accuracy. However, it has been shown by Zhou et al. (2019) that there exist pruning masks which can be applied to an untrained network such that its performance is far better than chance. Furthermore, Ramanujan et al. (2019) developed an algorithm for finding good pruning masks for networks with fixed, random weights. Theoretical works (Malach et al., 2020; Orseau et al., 2020) even proved that within random neural networks there exist highly efficient subnetworks, which can be found just by pruning. Orseau et al. (2020) advance the hypothesis that the main task of gradient descent is to prune the networks, with the fine-tuning of the weights playing only a secondary role.

2 MOTIVATION
A key issue we want to emphasize is that, in all these works, the way in which the networks are pruned in practice is by forcing them, through some criterion, to set a fraction of the weights to zero. Since it has been shown that sparse networks perform as well as their dense counterparts, or sometimes even better, the natural question that arises is: why doesn't gradient descent itself prune the weights during training? Why hasn't pruning been spontaneously observed in practice? One possible explanation is that, at least for classification tasks, the usual cross-entropy loss without additional regularization techniques is not well suited for this. Other factors such as the stochasticity of the data batches, the optimization algorithm, the weight initialization, etc., might also play a role.

However, we approach this question from a different perspective. We hypothesize that an important reason for weights not being set to zero is that this is a particular state in which the bits representing a weight must all equal zero. This is highly unlikely since weights are usually represented on 32 bits: the probability of a single weight being set to exactly zero is $2^{-31}$, the sign bit not playing a role. Therefore the chances that a significant number of weights are set to zero decrease very rapidly. If weights were represented on a lower bit-depth, then the chance that the optimizer sets them to zero should increase. In order to test the degree to which this hypothesis is true, we experiment with neural networks for image classification where, instead of training the weights themselves, we train the individual bits representing the weights. This might allow gradient descent to reach stable states where all bits in a set of weights are zero and the loss function is around a local minimum. If our hypothesis is true then we expect a strong dependency between sparsity and bit-depth. By encoding weights on arbitrary precision we also touch upon the topic of network quantization and show that particular cases of this training technique result in algorithms developed in previous works, which we describe in Section 8.
Moreover, we show that weight quantization naturally leads to weight pruning and sparse networks without additional constraints such as regularization, additional loss terms, architectural changes or other tricks usually involved in engineering low-bit quantized networks.

3 BINARY DECOMPOSITION
We approximate weights on k bits by using the sign and magnitude representation, due to its simplicity. A weight tensor of a layer l can be decomposed as:

$$\theta^l_k = \left( \sum_{i=0}^{k-2} a^l_i \cdot 2^{\,i+\alpha_l} \right) \cdot (-1)^{a^l_{k-1}} \qquad (1)$$

with $a^l \in \{0, 1\}$ representing the binary coefficients and k the number of bits. The summation encodes the magnitude of the number while the second factor encodes the sign: this way we obtain numbers in a symmetric interval around zero. We add a negative constant $\alpha_l$ to the exponent in order to allow the representation of fractional numbers (see Table 1). Additionally, this term controls the magnitude of the numbers and, therefore, the width of the weight distribution within a layer. Choosing $\alpha_l < -k + 1$, the weights are guaranteed to be less than 1 in magnitude. In order to constrain a to take binary values we use auxiliary floating point variables $x \in \mathbb{R}$ (virtual bits) passed through a unit step function: $a = H(x)$, where $H(x) = 0$ if $x \le 0$ and $1$ otherwise.

The weight initialization for the k-bit training technique is as follows: for a fully connected layer the weight matrix is expanded into a 3D tensor of shape $(k, n_{l-1}, n_l)$, with k representing the number of bits and $n_{l-1}$, $n_l$ the number of nodes in the previous and current layer, respectively. Figure 1 illustrates a simple example of a (3, 4, 3) bit-tensor. For convolutional layers, where a weight tensor has a higher dimension, the procedure is analogous to the fully connected case and the bit-tensor is now of shape $(k, s_x, s_y, n_{l-1}, n_l)$, with $s_x$, $s_y$ representing the kernel sizes in the x and y direction. The value for each bit is chosen randomly with equal probability of being either 0 or 1 ($x \le 0$ in the first case and $x > 0$ in the second). We ensure the weights sampled in this manner are not initialized at exactly zero, because this would mean pruning the network from the start and would invalidate our hypothesis. Hence we obtain a uniform weight distribution without zeros. We adopt the Kaiming He (He et al., 2015a) initialization technique for each layer's weights, which means the standard deviation is $\sqrt{2/n_{l-1}}$, where $n_{l-1}$ is the number of nodes in the previous layer. We have determined $\alpha_l$ algorithmically via a simple binary search such that this condition is fulfilled for the weight distribution of each layer. This term is a fixed parameter in each layer and depends only on the structure of the network. The virtual bits, x, are chosen from a normal distribution which also satisfies the Kaiming He condition on its variance. For the particular situation where k = 2 the weights have only two values and the standard deviation is exactly $2^{\alpha_l}$. Ramanujan et al. (2019) refer to this distribution as the Signed Kaiming Constant.
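To make the decomposition concrete, below is a minimal PyTorch-style sketch of Eq. (1); the function and tensor names are illustrative assumptions, not the released implementation, and the straight-through gradient used for the step function is motivated in the next paragraphs.

```python
import torch

class StepSTE(torch.autograd.Function):
    """Heaviside step in the forward pass; identity gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        return (x > 0).float()              # virtual bits x -> binary coefficients a

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output                  # identity STE (Hinton, 2012)

def bitwise_weights(x, alpha):
    """Build float weights from a (k, ...) tensor of virtual bits via Eq. (1).

    x[0:k-1] hold the magnitude bits (least significant first) and x[k-1] the
    sign bit; alpha is the per-layer exponent offset alpha_l.
    """
    a = StepSTE.apply(x)
    k = x.shape[0]
    powers = 2.0 ** (torch.arange(k - 1, dtype=torch.float32) + alpha)
    powers = powers.view(-1, *[1] * (x.dim() - 1))   # broadcast over the layer shape
    magnitude = (a[:-1] * powers).sum(dim=0)
    sign = 1.0 - 2.0 * a[-1]                         # equals (-1)^(sign bit)
    return magnitude * sign

# Example: a (k=3, 4, 3) bit-tensor for a 4 -> 3 fully connected layer;
# alpha = -3 < -k + 1 keeps all weights below 1 in magnitude.
x = torch.nn.Parameter(torch.randn(3, 4, 3))
w = bitwise_weights(x, alpha=-3.0)
```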
During training, the feed-forward step is performed as usual, with the weights being calculated according to Eq. (1). The backpropagation phase uses the straight-through estimator (STE) (Hinton, 2012; Bengio, 2013) for the step function introduced in the weight's binary decomposition. The derivative of a hard threshold function such as the Heaviside step function is zero everywhere except at zero (more specifically, it is the Dirac delta function). Since the values passed through this step function are almost never exactly zero, the gradients during backpropagation would almost always be zero. This situation leads to a stagnant network which never updates its weights and never learns. To avoid this, during the backpropagation phase the gradient of the step function is replaced by the gradient of a different function which is non-zero on a larger domain than that of the step function. Such functions are usually referred to as proxy functions and can take many forms. Yin et al. (2019) and Shekhovtsov & Yanush (2020) provide in-depth discussions of the properties of STEs. Throughout this work we adopt the method first proposed by Hinton (2012), which treats the gradient of a hard threshold function as if it were the identity function. This method has been shown to work very well in practice (Liu et al., 2018; Bulat et al., 2019; 2021; Bethge et al., 2019; Alizadeh et al., 2019).

Notice that in Eq. (1) the additive constant $\alpha_l$ can be factored out of the sum. The resulting weights are of the form $\theta^l_k = 2^{\alpha_l} \cdot \Theta^l_k$, where $\Theta^l_k$ contains only integer numbers on k bits. The ReLU activation function has the property that $\sigma(\alpha \cdot x) = \alpha \cdot \sigma(x)$ for any $\alpha > 0$. It can be shown that for a ReLU network of depth L, scaling the weights of each layer by a factor $\alpha_l$, with $l \in \{0, 1, \dots, L-1\}$ representing the layer index, is equivalent to scaling just a single layer with $\alpha = \prod_{l=0}^{L-1} \alpha_l$, including the input layer. This means that we can gather all factors $\alpha_l$ into a single $\alpha$, scale the input images by that factor and train the network with just integer numbers represented on k bits. At inference time, for classification tasks, $\alpha$ is not relevant because the output nodes are all scaled by the same coefficient and $\operatorname{argmax}(\alpha \cdot x) = \operatorname{argmax}(x)$ for any $\alpha > 0$.

4 EXPERIMENTS
We have performed an extensive set of experiments where networks were trained on bit-depths ranging from 2 to 32. Figure 2 summarises the performance of LeNet and ResNet-18 (LeCun et al., 1998; He et al., 2015b) trained on MNIST and CIFAR10 (LeCun & Cortes, 2010; Krizhevsky, 2009). Each experiment was repeated 15 times. Each data point represents the best accuracy/sparsity obtained from all runs and is displayed as a violin plot, which shows via kernel density estimation the minimum, mean, maximum and spread of the repeated runs. The right-most violin shows the performance of the standard 32-bit training technique, the horizontal black line its mean and the shaded area the minimum and maximum accuracy.

The networks were trained with the following setup. For LeNet the learning rate starts at $9 \cdot 10^{-4}$ and is divided by 10 at epochs 40 and 80. We have also experimented with a single, fixed learning rate, but in that case the standard training technique on 32 bits reached a maximum accuracy of only 97.7%, while bit-wise weight training did not suffer any noticeable penalty. For ResNet the learning rate starts at $6 \cdot 10^{-4}$ and is divided by 10 at epochs 150 and 170. In both cases we used the Adam optimizer (Kingma & Ba, 2017).
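For concreteness, a sketch of this optimization setup in PyTorch; the model construction and total number of epochs are placeholder assumptions, only the learning rates and milestones come from the text above.

```python
import torch
import torch.nn as nn

# Stand-in model: in the paper this would be LeNet with the bit-wise
# parameterization of Section 3; a single linear layer keeps the sketch runnable.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

optimizer = torch.optim.Adam(model.parameters(), lr=9e-4)   # LeNet setting
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[40, 80], gamma=0.1)              # divide lr by 10

for epoch in range(100):                                    # total epochs assumed
    # ... one pass over the MNIST training batches would go here ...
    optimizer.step()        # placeholder for the per-batch update loop
    scheduler.step()
```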
For LeNet (left panels in Figure 2) this training technique consistently achieves higher mean accuracies than the baseline while at the same time pruning the network significantly. Moreover, as the bit-depth decreases there seems to be a slight increase in the mean classification accuracy. This indicates that the additional bits available for the weights impede the ability of gradient descent to converge to better solutions.

The right panels in Figure 2 show the results of ResNet-18 trained on CIFAR10. Here we observe a degradation in classification accuracy of about 1.7 percentage points compared to the standard training technique (we will show in Section 5 how to mitigate this issue). The network sparsity is higher than in the case of LeNet, in the range of 25–35% for bit-depths 2 to 16. Note that the sparsity plots are also represented as violins, but their height is small relative to the scale of the entire curve due to the very small variations in the sparsity achieved at the end of training. For both LeNet and ResNet there is a strong dependency between the bit-depth and the amount of zero weights found by the network. This is in line with our hypothesis that gradient descent does not naturally uncover sparse networks when training weights represented on 32 bits. This also explains why currently used pruning techniques require external mechanisms which force the network to set weights to zero while training. In essence, they bypass the weight's whole bit structure, effectively setting all bits to zero at once. The black dots in Figure 2 indicate the percentage of weights set to zero by random chance. We observe that for high bit-depths (k > 24) the chance that gradient descent sets a certain amount of weights to zero is almost the same as random chance. However, for lower bit-depths gradient descent is much more likely to set weights to zero due to the much smaller search space of the weight's bit structure.

Figure 3 shows the histogram of the (float) weight distribution of the second hidden layer in LeNet before and after training. Bit-wise weight learning moves a significant amount of weights either to exactly zero or to the maximum value representable on k bits. The frequency of intermediate values is significantly reduced, in some cases by one order of magnitude. Although this technique has no special regularization nor an external weight pruning mechanism, it naturally uncovers sparse networks. This is in stark contrast with the standard training technique (right-most panels), where the distribution of the weights after training is much more spread out than the initial one and has a large peak towards zero, but the weights are never exactly zero.

5 SELECTIVE BIT TRAINING
In Section 4 we presented experiments where all weight bits are trained simultaneously. Our algorithm, however, also allows us to train specific bits only, while keeping others fixed as they were originally initialized. We encode which bits are trainable as a string mask of 0's and 1's, e.g. for a 4-bit mask '0001' we initialize all bits randomly but train only the least significant bit, while for '1000' we train only the sign bit and leave the rest unchanged. See Table 1 for an example of a weight represented as a 16-bit number. Figure 4 shows the results achieved by LeNet with all possible selective training patterns for 2, 4 and 8 bits. Training with weights encoded on 2 bits (top-left panel) results in 3 possible scenarios: '01' trains the magnitude, '10' trains the sign and '11' trains both the sign and the magnitude of the weights. Training with weights encoded on 4 bits, pattern '1000' corresponds to training just the sign and keeping the magnitudes random, '0111' corresponds to training the magnitudes and keeping the sign fixed, and '1111' corresponds to training all bits. Similarly for 8 bits (bottom panel). The baseline accuracy is shown as the right-most data point in each graph. Figure 5 shows the same experiments for ResNet.
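A hypothetical sketch of how such a mask could be enforced on the bit-tensor from Section 3; the helper name and mask ordering are assumptions, not the paper's code.

```python
import torch

def masked_bits(x, mask):
    """Detach the bits whose mask entry is '0' so gradients reach only trainable bits.

    mask is ordered most significant bit first, e.g. '1000' trains only the sign;
    x has shape (k, ...) with the least significant magnitude bit at index 0 and
    the sign bit at index k-1, matching the decomposition sketch above.
    """
    keep = torch.tensor([c == '1' for c in reversed(mask)])
    keep = keep.view(-1, *[1] * (x.dim() - 1))
    # Frozen bits keep their random initialization: their values are unchanged,
    # but no gradient flows to them, so the optimizer never updates them.
    return torch.where(keep, x, x.detach())

x = torch.nn.Parameter(torch.randn(4, 4, 3))   # 4-bit weights for a 4 -> 3 layer
x_eff = masked_bits(x, '1100')                 # train sign + most significant magnitude bit
```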
An interesting phenomenon appears when training bits selectively. Several strong discontinuities in the accuracy curve are visible when training weights encoded on 4 and 8 bits. They appear at very specific bit patterns, which we address next.

First, we highlight the extreme situations of (a) training just the sign bit and (b) training only the magnitude bits. In Figures 4 and 5 these refer to the central data points with trainable bit patterns '10', '1000', '10000000' for sign training and '01', '0111', '01111111' for magnitude training. When training just the sign bit, LeNet outperforms the baseline network, as shown in Figure 4. Our weight initialization procedure avoids initializing magnitudes to zero. For the particular case where weights are quantized on k = 2 bits, this means that the magnitude bit is always 1; training only the sign bit is therefore equivalent to training a binary network with $\Theta \in \{-1, 1\}$. For ResNet (Figure 5), training the weights' signs leads to a performance drop of 2–4 percentage points, depending on the quantization size. It shows that this particular network can be trained reasonably well only by changing the signs of the weights and never updating their magnitudes. Training only the magnitude bits results in a very small performance penalty for LeNet as compared to the baseline, and a penalty of about 1–3 percentage points for ResNet. Training all bits simultaneously leads to the average performance between the two extreme cases. This phenomenon is valid for both ResNet and LeNet, although less visible for the latter.

We have performed experiments for bit-depths ranging from 2 to 32, where we train only the sign bits and only the magnitude bits in ResNet. Figure 6 summarizes the test accuracy and sparsity obtained in these two cases. Notice there is little to no correlation between accuracy and bit-depth above 8, whereas sparsity is strongly influenced by it, particularly above 14. For bit-depths lower than 5, magnitude-only training decreases in performance, while sign-only training improves. For the extreme k = 2 bit quantization, their accuracy ordering is inverted, and in this case training both the sign bit and the magnitude bit results in a ternary network with $\Theta \in \{-1, 0, 1\}$.

The second important observation refers to the cases where the sign and the next one or two bits are trained, while the following remain randomly initialized. These situations correspond to the trainable bit patterns '1100', '1110', '11000000' and '11100000' in Figures 4 and 5. In all these cases the bit-wise training technique reaches an accuracy above the baseline (LeNet) or similar to it (ResNet). This behaviour indicates that a fraction of the untrainable (and less significant) magnitude bits acts as a regularizer, increasing the accuracy of the network as compared to the case when they are also trained.

We investigated how many trainable bits would be sufficient to reach the accuracy of the baseline. To this end we perform bit-wise training on ResNet with 32, 16, 8, 6, 4 and 2 bit encodings for the weights and gradually decrease the number of trainable bits. More specifically, we expand Eq. (1) in the following way:

$$\theta^l_k = 2^{\alpha_l} \Bigg( \underbrace{\sum_{i=0}^{p-1} a^l_i \, 2^i}_{\text{untrainable}} + \underbrace{\sum_{j=p}^{k-2} a^l_j \, 2^j}_{\text{trainable}} \Bigg) \cdot \underbrace{(-1)^{a^l_{k-1}}}_{\text{trainable}} \qquad (2)$$

where k represents the weight's bit-depth and p the number of untrainable bits. For p = 0 all bits are trainable and for p = k − 2 only the sign is trainable.
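In terms of the string masks of Section 5, Eq. (2) simply freezes the p least significant magnitude bits; a small illustrative helper (an assumption for exposition, not from the paper's code):

```python
def eq2_mask(k: int, p: int) -> str:
    """Trainable-bit mask for Eq. (2): the sign bit and the k-1-p most significant
    magnitude bits are trainable; the p least significant bits stay frozen."""
    return '1' * (k - p) + '0' * p        # most significant bit first

assert eq2_mask(8, 5) == '11100000'       # sign + two most significant magnitude bits
assert eq2_mask(4, 2) == '1100'           # sign + one magnitude bit
```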
We summarize the results of these experiments in Figure 7. The blue data points represent the test accuracy of ResNet as a function of the number of trainable bits, with weights encoded as 32-bit integers. Training more than 17 bits results in a test accuracy of about 88%. As the number of trainable bits decreases the accuracy improves and reaches the level of the baseline when training only the first 3 bits. A similar behaviour is seen when encoding weights on lower bit-depths. The best performance is obtained when weights are encoded on more than 6 bits and we train the sign and the next two most significant magnitude bits. The rest of the available bits do not contribute to the network's performance; rather, they hinder the capacity of the network to converge well.

6 POST-TRAINING BIT ANALYSIS
Training bits selectively uncovers the fact that only a few of the most significant bits contribute to achieving a high accuracy, while the others provide regularization. In contrast, standard training does not reveal which weights or bits contribute most to the network's performance. In order to understand this, we conduct experiments where we convert the weights learned in the standard way into weights expressed according to Eq. (1). More precisely, we start by training a standard network and, after training, for each layer we divide all weights by the magnitude of the smallest non-zero weight within that layer and round to the nearest integer. We thereby obtain integer weights which we can then decompose into binary form, gaining access to each bit. To stay as close as possible to the original weights we encode the integer weights on 32 bits, even though in most situations the weights do not require that many. Thus we convert a network trained in the standard way, with weights as 32-bit floating point values, into a network with integer weights on 32 bits. Next, we start changing the first p least significant magnitude bits and leave the next 32 − p bits unchanged, similar to Eq. (2). In this way we can investigate the impact of each bit on the final accuracy. Note that different layers require different numbers of bits to represent their weights; this generally, but not necessarily, depends on the number of weights within the layer. If we change more bits than a layer requires, the pre-trained structure is destroyed and the network loses its properties. In order to avoid this, we compute the maximum number of bits required for the weights in each layer, $m_l$, and impose that the maximum number of changed bits for each layer is $p^{\max}_l = m_l - 3$.
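A minimal NumPy sketch of this conversion and low-bit perturbation for one layer; the helper name and rounding details are assumptions.

```python
import numpy as np

def perturb_low_bits(w, p, mode="zero", seed=0):
    """Quantize float weights to integers (divide by the smallest non-zero
    magnitude and round), then overwrite the p least significant magnitude bits."""
    scale = np.abs(w[w != 0]).min()
    q = np.rint(w / scale).astype(np.int64)       # integer weights
    sign, mag = np.sign(q), np.abs(q)
    high = (mag >> p) << p                        # keep the most significant bits
    if mode == "zero":
        low = 0
    elif mode == "one":
        low = (1 << p) - 1                        # all p low bits set to 1
    else:                                         # "random"
        low = np.random.default_rng(seed).integers(0, 1 << p, size=mag.shape)
    return sign * (high | low) * scale            # back to float weights

w = np.random.default_rng(0).standard_normal((300, 100))
w_changed = perturb_low_bits(w, p=6, mode="random")
```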
Figure 8 shows the accuracy and sparsity of a standard, pre-trained LeNet and of a 6-layer VGG-like network, Conv6 (the same as in Frankle & Carbin (2019); Zhou et al. (2019); Ramanujan et al. (2019)), as a function of the number of changed bits. We have experimented with three scenarios: all bits changed randomly, all bits set to 0, and all bits set to 1. The first data point in each graph, p = 0, represents the performance of the unmodified network with 32-bit floating point weights, as no bits are changed. The following entries indicate the performance of the network as we gradually increase the number of changed bits. LeNet extends up to 16 bits (the maximum allowed for the first layer in this particular network) and Conv6 extends up to 25 (the maximum allowed for the first dense layer within this network). Setting all p bits to zero (or one) leads to a single possible set of weights, while setting p bits randomly leads to more possible outcomes. This difference is illustrated in Figure 8 by the way the data points are represented: a single dot when setting bits to zero/one and a violin when setting bits randomly.

One can observe that weights trained in the standard 32-bit floating point format also do not make full use of the high-precision bits. The first 6 bits do not play a significant role for the final accuracy, as they can be modified post-training to any value. These results are in line with our initial hypothesis that gradient descent does not prune networks due to the large number of bits available for the weights. Additionally, we found that the most important contribution to the performance of a network comes from the sign bit, followed by the next two most significant magnitude bits. This suggests that gradient descent might find a local optimum based only on these three bits, while the rest are used to perform fine-tuning. However, this appears to be less successful, since a large fraction of the bits can be set to zero, set to one or left randomly initialized, perhaps due to the stochasticity of the training algorithm (batch training) or the noise present in the data itself.

7 MESSAGE ENCODING IN WEIGHTS
We have shown so far that 29 out of the 32 bits available for the weights of ResNet have an overall regularization behaviour and can remain randomly initialized and never trained. This leads to the idea that they could be used to encode arbitrary messages, while the trainable bits are sufficient to train the network to high degrees of accuracy. To test this hypothesis we performed several experiments in which we embedded various types of messages in the first 29 untrainable bits of a neural network's weights and trained only the next 3. The results are summarized in Figure 9. Each experiment was repeated 10 times. The first data point shows the baseline accuracy of ResNet trained with the standard method (32-bit floating point representation of weights). For the second experiment we assigned random values to the untrainable bits of each layer. In the third experiment we embedded random passages of Shakespeare's Hamlet. In the fourth experiment we trained until convergence 29 ResNets with bit-depth 1 and embedded each of them into a new, 32-bit ResNet, training in a bit-wise fashion the sign and the next two most significant magnitude bits. The test accuracy obtained by the 1-bit ResNet is shown as the last violin. We observe that embedding either random noise, structured data or a set of previously learned weights does not impact the accuracy with respect to the baseline ResNet in any significant way.
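As an illustration of the embedding, here is a hypothetical sketch that packs a byte string into the 29 frozen low bits of integer weight magnitudes; the helper, the bit layout (p message bits per weight) and the parameters are assumptions for exposition only.

```python
import numpy as np

def embed_message(mag, message: bytes, p=29):
    """Write message bits into the p least significant (untrainable) magnitude
    bits of a flat array of integer weight magnitudes, p bits per weight."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    n_weights = -(-len(bits) // p)                 # ceiling division
    assert n_weights <= mag.size, "message too large for this layer"
    bits = np.pad(bits, (0, n_weights * p - len(bits)))
    out = mag.copy()
    for i in range(n_weights):
        payload = int("".join(map(str, bits[i * p:(i + 1) * p])), 2)
        out[i] = ((out[i] >> p) << p) | payload    # keep top bits, swap in the message
    return out

mag = np.random.default_rng(0).integers(1, 2**31, size=10_000, dtype=np.int64)
mag = embed_message(mag, b"To be, or not to be", p=29)
```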
8 CONNECTION WITH OTHER WORKS
Our weight initialization procedure described in Section 3 ensures that weights are never set to zero before training. For the particular case where k = 2 bits, this means that the magnitude bit is always 1 while the sign bit can be either 1 or 0. Training only the sign bit is therefore equivalent to training a binary network. This is similar to BinaryConnect, BinaryNet (Courbariaux et al., 2015; 2016) and XNOR-Net (Rastegari et al., 2016), where weights are constrained to −1 and 1. Training with bit pattern '01' (magnitude only) or '11' (sign and magnitude) results in a ternary network (Li & Liu, 2016; Zhu et al., 2017), because the magnitude is now also allowed to change, leading to some weights being set to zero. When training only the magnitude bit, the behaviour of our algorithm is effectively very similar in nature and performance to the edge-popup algorithm developed by Ramanujan et al. (2019), which finds pruning masks for networks with weights randomly sampled from the Signed Kaiming Constant distribution. By encoding weights on arbitrary bit-depths and training just the sign bit, we obtain the sign-flipping algorithm first shown by Ivan & Florian (2020).

Wang et al. (2021) found in a recent study that it is possible to embed 36.9MB of malware into the dense layers of a pretrained 178MB AlexNet model with a 1% accuracy degradation and without being detected by antivirus programs. Our method can store arbitrary code in any layer of a network (dense as well as convolutional) and could drastically increase the viral payload without damaging the network's performance, while raising no suspicion about the presence of the malware.

9 SUMMARY
Motivated by the question of why gradient descent does not naturally prune neural connections during training, we developed a method to directly train the bits representing the weights. From this perspective we show that an important factor is the over-parametrization in terms of the number of bits available for weight encoding. This also sheds some light on why networks with large numbers of weights are able to generalize well. Our algorithm enables weight quantization on arbitrary bit-depths and can be used as a tool for bit-level analysis of weight training procedures. We show that gradient descent effectively uses only a small fraction of the most significant bits, while the less significant ones provide an intrinsic regularization and their exact values are not essential for reaching a high classification accuracy. A consequence of this property is that, by using 32 bits for the weight representation, more than 90% of a ResNet can be used to store a large variety of messages, ranging from random noise to structured data, without affecting its performance.

10 REPRODUCIBILITY
The code used for the experiments carried out in this work will be made public at: https://github.com/iclr2022-2798/bit-wise-training
1. What is the focus and contribution of the paper on neural network training? 2. What are the strengths of the proposed approach, particularly in terms of its applications? 3. What are the weaknesses of the paper, especially regarding experiment scales and comparisons with other works? 4. Do you have any concerns about the method's effectiveness in larger-scale problems? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes to directly train the bit values of each parameter in a neural network, instead of directly optimizing the floating point value of each weight. By varying the number of bits that are allowed to be optimized, the authors show that with fewer bits the network automatically becomes sparser. This method has many interesting applications, including fixing some bits to be a message and only training the rest.

Review
Strengths:
- This paper experiments with an interesting and intuitive idea.
- There are many interesting applications, e.g., more efficient networks and embedding hidden messages in network weights.
- Most figures include error bars.
- Figure 3 confirms a main hypothesis (fewer bits encourage sparser networks).

Weaknesses:
- The paper could benefit from medium scale experiments, e.g., ImageNet. The method matches standard training on MNIST but faces accuracy degradation on CIFAR. A concern is that this accuracy degradation would be even more substantial for problems such as ImageNet.
- The paper would benefit substantially from a related work section. Since the paper is not 9 pages, there is definitely room for this.
- I am not an expert on quantization (perhaps another reviewer is) but I know that it is a very active research area. How does this paper's method compare to standard methods in quantization? If the authors' hypothesis is correct, networks trained with various quantization techniques should be sparse, and it would be very interesting to verify this.
- There is no discussion of how much extra compute / FLOPs is incurred by this method during training, which may be a drawback.
1. What is the focus of the paper regarding neural network training? 2. What are the strengths of the proposed approach, particularly in analyzing network weights and regularization? 3. What are the weaknesses of the paper, especially regarding its claims and experiments? 4. Do you have any concerns or questions about the paper's observations and conclusions? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a neural network training technique such that individual weight bits can be optimized separately. In detail, each weight is represented as a sum of its bits weighted by powers of 2. In training, updating each bit b is achieved by updating a floating-point number x (b = 1 if x > 0; b = 0 if x ≤ 0). By conducting extensive experiments, the authors find: Networks with shorter bit-widths show more weight sparsity than those with longer bit-widths (Sec. 4). With selective bit training, only a few most significant bits contribute to the final high model accuracy; the other, less important bits serve as regularization (Sec. 5). The less significant bits can be used to encode other information (Sec. 6). Review Here are the strengths and weaknesses of the paper. Strengths: The paper is interesting and, in some sense, novel in that it analyzes the network weights/bits and regularization from an interesting perspective. By decomposing a weight into separate bits, the function of each bit can be more easily observed and analyzed. The authors conducted extensive experiments to demonstrate different phenomena arising from the bit-wise training idea. Weaknesses: In the second paragraph, Page 5, I guess it should be "as shown in Figure 4" instead of "as shown in Figure 5". The second sentence in Sec. 5.1 is quoted here: "In contrast, standard training does not reveal which weights or bits contribute most to the network's performance". I didn't find that the following paragraphs support this claim. On the contrary, the paragraphs show the most significant bits contribute more to the performance than the others. In network quantization, e.g., 32-bit to 8-bit, one can also find that the least significant bits can be dropped without hurting performance much. In the second paragraph of Page 4, "..., in general, neural networks can be trained well only by changing the sign of the weights and never updating their magnitudes". I don't think a 4-percentage-point drop is small, considering that CIFAR-10 is a small dataset and ResNet-18 is a relatively large model. When using a smaller network, the performance drop might be larger. Throughout the paper (e.g., Sec. 5.1 and Sec. 6), one main claim is that "a few of the most significant bits contribute to achieving a high accuracy, while the others provide regularization". This is true but not a significant observation from the perspective of network quantization. When a network is big and contains redundancy, it can be quantized to a lower-bit one (e.g., 3-bit) with comparable performance (e.g., [1]). Compared with the full-precision model, the quantized model is well regularized. Questions: As the paper hypothesizes, the 32-bit model does not have many zero weights because the probability of an exactly zero-valued weight is very small (1e-31). If this is true, one would expect the trend between 2- and 14-bit widths in Figure 2 to be exponential instead of flat. I didn't find an explanation in the relevant section. [1] Zhang, Dongqing, et al. "LQ-Nets: Learned quantization for highly accurate and compact deep neural networks." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
ICLR
Title Towards Boosting the Open-Domain Chatbot with Human Feedback Abstract Many open-domain dialogue models pre-trained with social media comments can generate coherent replies but have difficulty producing engaging responses. This phenomenon might mainly result from the deficiency of annotated human-human conversations and the misalignment with human preference. In this paper, we propose a novel and efficient framework Diamante to boost the open-domain chatbot, where two kinds of human feedback (including explicit demonstration and implicit preference) are collected and leveraged. By asking annotators to select or amend the model-generated candidate responses, Diamante efficiently collects the human-demonstrated responses and constructs a Chinese chit-chat dataset. To enhance the alignment with human preference, Diamante leverages the implicit preference in the data collection process and introduces generation-evaluation joint training. Comprehensive experiments indicate that the Diamante dataset and joint training paradigm can significantly boost the performance of pre-trained dialogue models. The overall engagingness of the previous state-of-the-art model has been improved remarkably by 50% in Chinese open-domain conversations. 1 INTRODUCTION In recent years, self-supervised pre-training based on tremendous unlabeled data has brought great success to many natural language processing tasks (Brown et al., 2020; Chowdhery et al., 2022). In dialogue generation, the pre-training is usually carried out with massive social media comments, acting as human-like conversations (Adiwardana et al., 2020; Bao et al., 2021; Thoppilan et al., 2022). Although these pre-trained dialogue models are capable of generating coherent replies, they have difficulty producing engaging responses. The main reasons for this phenomenon might be twofold. Firstly, there exists a considerable gap in data distribution between the proxy human-like conversations (public group discussion) and real human-human conversations (private two-way messaging). Secondly, the dialogue model usually outputs the response with the highest generation probability, which could reflect the probability mass over all the training data but might not align well with human preference (e.g., some biased or unsafe statements). One straightforward way to narrow the data distribution gap is to fine-tune the pre-trained dialogue model with annotated human-human conversations. For instance, Blender (Roller et al., 2021) employs four annotated datasets (Zhang et al., 2018; Dinan et al., 2019; Rashkin et al., 2019; Smith et al., 2020) to emphasize the conversational skills of personality, knowledge, empathy, and engagingness. As for the alignment with human preference, LaMDA (Thoppilan et al., 2022) defines and quantifies some critical metrics for dialogue evaluation, including safety, interestingness, and so on. By filtering out those candidate responses with poor performance on these metrics, the human preference towards the dialogue model has increased significantly. However, compared with English, annotations of high-quality human-human conversations or dialogue evaluation samples are relatively scarce in other languages. As a result, even the state-of-the-art Chinese chatbot, PLATO-XL (Bao et al., 2021), is only pre-trained with social media comments and does not incorporate advanced response evaluation.
In this paper, we propose a novel and efficient framework, namely Diamante, consisting of a data collection strategy and a learning method to boost the performance of pre-trained dialogue models. Two kinds of human feedback are collected and leveraged in Diamante, including explicit demonstration and implicit preference. Firstly, to bridge the gap in data distribution, Diamante collects an open-domain chit-chat dataset in Chinese with the assistance of PLATO-XL. Based on model-generated candidate responses, human annotators can efficiently produce an engaging response to continue the conversation. Secondly, we propose to leverage the implicit human preference that appeared in the data collection process, i.e., the annotator's selected or amended response is preferred over the other candidates. To this end, Diamante introduces a novel generation-evaluation joint training paradigm, where high-quality response generation and human preference estimation are learned simultaneously. During inference, the candidate response with the highest preference score is selected as the final response and returned to the user. Extensive and intensive experiments have been carried out to evaluate the effectiveness of the Diamante framework, including the collected dataset and the joint training paradigm. Experimental results reveal that Diamante significantly boosts PLATO-XL's performance and establishes a new state-of-the-art result in Chinese open-domain conversation. Notably, compared to the human reference, Diamante even achieves competitive or slightly better performance. In addition to PLATO-XL, Diamante brings remarkable improvements to other pre-trained dialogue models. The Diamante dataset is now publicly available, which can be accessed and downloaded under the license agreement at the data platform1. We have also released all source code2, hoping to facilitate future research in dialogue generation. 2 DIAMANTE DATASET In this paper, we collect an open-domain chit-chat dataset in Chinese with the assistance of a pre-trained dialogue model. In the following, we describe the creation of the Diamante dataset. 2.1 DATA COLLECTION Diamante aims to explore an efficient way to collect a batch of high-quality chit-chat conversations that align well with human values. The data annotation interface is shown in Figure 1 (the original interface is in Chinese and displayed in Figure 6 of the Appendix). The data collection process is carried out as follows. Step 1: Crafting the Dialogue Opening. Firstly, the annotator is encouraged to craft a start utterance based on any topic of interest, as an informative and engaging dialogue opening is critical to a good conversation. As shown in Figure 1, the start utterance is "My cat started shedding everywhere in the spring. How to deal with it?". We also provide various topics and examples in the guidelines to inspire annotators to write dialogue openings. 1The Diamante dataset is publicly available at https://anonymous. 2The Diamante source code is available at https://github.com/anonymous. Step 2: Generating Candidate Responses with the Dialogue Model. Given the dialogue context, a dialogue model (PLATO-XL in the Diamante dataset) is employed to generate multiple candidate responses. To ensure the diversity of response content and conversation flow, we adopt top-k sampling as the decoding strategy and select seven candidates for demonstration to the annotator. Step 3: Producing Response with Human Feedback.
We then ask the annotator to select, revise, or rewrite a candidate to produce an appropriate response. - Select. As large-scale dialogue models can generate coherent and occasionally interesting responses, the annotator is allowed to select one response directly from the candidates where appropriate. - Revise. Given the possible defects in the candidate responses, such as a lack of consistency or attractiveness, the annotator can choose the preferred candidate and further revise it for better quality. - Rewrite. If no appropriate candidate exists, the annotator needs to write a suitable and engaging response by themselves. Iterating Step 2 & Step 3 to Continue the Dialogue. After collecting the response with human feedback, the conversation continues by iterating step 2 and step 3. The dialogue collection, with the human and model in the loop, continues for at least seven rounds. To ensure the annotation quality of the Diamante dataset, we also designed and followed a rigorous quality control process, with details discussed in the Appendix. The above data collection strategy works well in terms of efficiency and quality. The annotator can produce the final response efficiently by directly selecting or amending the model-generated candidates. The conversation quality is guaranteed or enhanced by the human annotator's verification or embellishment. Moreover, the implicit human preference that appeared in the data collection process also allows the training of a preference estimation model without additional annotation. 2.2 DATA ANALYSIS Corpus Statistics. In total, 147 annotators participated in the dataset collection. The detailed statistics of the Diamante dataset are summarized in Table 1. The dataset consists of 6,838 dialogues with 98,115 utterances, and the average utterance length is about 14.25. We split the collected data into train, validation, and test sets. As for the annotator operation proportions, 18% of the utterances are produced from Select, 41% from Revise, and 41% from Rewrite. Dialogue Topics. The Diamante dataset is about open-domain chit-chat and is not limited to any topic. For further quantitative analysis, we employ the topic tagger on the Baidu AI platform (https://ai.baidu.com/tech/nlp_apply/topictagger) to categorize the dialogues. (The topic visualization of the Diamante dataset is displayed in Figure 7 of the Appendix.) The results show that the Diamante dataset covers all 26 main categories. The top five topics are Society (23%), Entertainment (11%), People (10%), Education (8%), and Food & Drink (8%), which are in line with our daily life. 3 GENERATION-EVALUATION JOINT TRAINING In this paper, we propose to leverage not only the explicit human demonstrations but also the implicit human preference that appeared in the data collection to boost the open-domain chatbot comprehensively. A novel generation-evaluation joint training paradigm is introduced and illustrated in Figure 2, where the high-quality response generation and human preference estimation are optimized simultaneously. The classical training objective of dialogue generation is to minimize the negative log-likelihood (NLL) loss:

L_{NLL} = -\log p_\theta(r_H \mid c) \quad (1)

where c refers to the dialogue context and r_H is the human annotator's selected or amended response. Besides generation, Diamante encodes evaluation into the joint optimization to enhance the alignment with human preference.
Recall that in the data collection process, there exists implicit human preference: given the dialogue context c, the final response r_H is preferred by human annotators over a model-generated candidate r_M ∈ R_M (displayed during annotation). Moreover, either r_H or r_M is better than a randomly selected response r_R in most cases. As such, we have the preference ranking r_H > r_M > r_R. The preference estimation (PE) loss is then defined as:

L_{PE} = -\frac{1}{3}\Big[\log\sigma\big(s(c, r_H) - s(c, r_M)\big) + \log\sigma\big(s(c, r_H) - s(c, r_R)\big) + \log\sigma\big(s(c, r_M) - s(c, r_R)\big)\Big] \quad (2)

where the input is a quadruple of (c, r_H, r_M, r_R), σ(·) is the sigmoid function, and s(·) is the scalar output of the model. The total objective of the generation-evaluation joint training is to minimize the following integrated loss:

L = L_{NLL} + L_{PE} \quad (3)

The first term helps the model learn to mimic human demonstrations and generate high-quality candidate responses. The second term helps the model learn the nuanced distinctions of human preference. During inference, we adopt top-k sampling to produce multiple candidate responses and then rank them by their preference estimation scores. The one with the highest preference score is selected as the final response and returned to the user. Notably, the preference estimation follows the candidate response decoding and only involves processing one more token, which incurs negligible computational cost. One work similar to Diamante's joint training is LaMDA (Thoppilan et al., 2022), where a single model functions as both a generator and a discriminator. In comparison, there exist several critical differences between Diamante and LaMDA. Firstly, LaMDA chooses to learn the discriminator and generator sequentially. By contrast, Diamante optimizes generation and evaluation simultaneously, trying to avoid the catastrophic forgetting issue of two-stage training (Kirkpatrick et al., 2017; Liu et al., 2022b). Secondly, LaMDA defines fine-grained dialogue evaluation metrics and collects corresponding discriminator training samples. Considering the expensive cost of data collection and the difficulty of reaching an agreement in fine-grained dialogue evaluation (Smith et al., 2022), Diamante leverages the implicit human preference as the overall evaluation and gets rid of additional annotations. Thirdly, as suggested in works on human alignment (Askell et al., 2021), the ranked preference evaluation adopted in Diamante performs better than the binary discrimination used in LaMDA. 4 EXPERIMENTS 4.1 SETTINGS 4.1.1 IMPLEMENTATION DETAILS We apply the Diamante dataset and joint training paradigm to boost PLATO-XL's performance. In the generation-evaluation joint training, the input samples are formulated as quadruples (c, r_H, r_M, r_R), where c is the dialogue context, r_H is the human annotator's selected or amended response, r_M is one candidate response displayed during annotation, and r_R is one randomly selected response from the dataset. During the construction of joint training samples, if the sampled model-generated candidate r_M is found to be the same as the human-generated response r_H, r_M is re-sampled to guarantee the agreement (preference ranking r_H > r_M). In addition, r_M and r_R are re-sampled at each training epoch. The model is initialized with the 11B-parameter PLATO-XL, with the transformer architecture of PrefixLM (Radford et al., 2018; Dong et al., 2019).
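As a concrete illustration of the joint objective in Eqs. (1)-(3), consider the minimal PyTorch sketch below (ours, not the released API; it assumes the model already exposes the per-example log-likelihood of r_H and the three scalar scores):

```python
import torch
import torch.nn.functional as F

def joint_loss(lm_logprob, s_h, s_m, s_r):
    """lm_logprob: log p(r_H | c) per example, shape (batch,).
    s_h, s_m, s_r: scalar preference scores s(c, r) for the human
    response, displayed candidate, and random response, shape (batch,)."""
    nll = -lm_logprob.mean()                                 # Eq. (1)
    pe = -(F.logsigmoid(s_h - s_m)                           # Eq. (2)
           + F.logsigmoid(s_h - s_r)
           + F.logsigmoid(s_m - s_r)).mean() / 3.0
    return nll + pe                                          # Eq. (3)
```

At inference time, the same scalar head scores each sampled candidate, and the highest-scoring one is returned as the final response.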
The architecture has 72 transformer blocks and 32 attention heads, with an embedding dimension of 3072; the hidden dimension of the feed-forward layer is set to 18432. The preference estimation value s(·) is obtained through one fully-connected layer (converting the transformer output into one scalar). The hyper-parameter settings used in the training process are as follows. The maximum sequence length of the context and the response is set to 384 and 128, respectively. We use Adam (Kingma & Ba, 2015) as the optimizer, with a learning rate scheduler including a linear warmup and an invsqrt decay (Vaswani et al., 2017). The peak learning rate is set to 2e-6, and the warmup step is set to 500. The model is trained for five epochs with a batch size of 168. The implementation is based on the PaddlePaddle framework, and the experiments are carried out on 8 Nvidia A100 GPUs (40GB RAM). During inference, we adopt top-k sampling (k set to 10) to produce 20 candidate responses and select the one with the highest preference estimation score as the final response. 4.1.2 COMPARED APPROACHES In the experiments, the following Chinese dialogue models are considered: • CDial-GPT (Wang et al., 2020) is a 104M-parameter model trained on LCCC conversations. • EVA2.0 (Gu et al., 2022) is a 2.8B-parameter model pre-trained on cleaned WDC-Dialogue. • PLATO-XL (Bao et al., 2021) is the largest Chinese dialogue model, with up to 11B parameters, pre-trained on social media conversations. In addition to the above dialogue models, the following commercial chatbots in Chinese are included: Microsoft XiaoIce (Zhou et al., 2020), Xiao AI, Tmall Genie, and Apple Siri. 4.1.3 EVALUATION METRICS In the experiments, we employ crowd-sourcing workers to evaluate the dialogue quality in four aspects: coherence, informativeness, safety, and engagingness. We discuss these criteria below and provide scoring details in Appendix A. • Coherence assesses whether the response is relevant to and consistent with the context. • Informativeness evaluates whether the response includes appropriate information. • Safety evaluates whether the response contains harmful, biased, or misleading content. • Engagingness measures the willingness to have a long conversation with the partner. Coherence, informativeness, and safety are utterance-level metrics; engagingness is a dialogue-level metric. These metrics are rated on a scale of {0, 1, 2}, with higher scores being better. Each sample is distributed to three crowd-sourcing workers, and the final score is determined through majority voting. 4.2 EXPERIMENTAL RESULTS Considering the limitations of automatic dialogue evaluation (Liu et al., 2016), we employ crowd-sourcing workers to evaluate the dialogue quality, including static evaluation, self-chat evaluation, and human-bot chat evaluation. 4.2.1 STATIC EVALUATION In the static evaluation, we randomly select 100 samples from the test set and employ the models to generate the response given the multi-turn dialogue context. In addition to PLATO-XL and Diamante, we also provide the performance of the ground truth for reference. The evaluation results are summarized in Table 2. Diamante significantly improves the response quality on all criteria compared to PLATO-XL. Diamante even achieves competitive or slightly better performance compared to the human reference. For a detailed analysis, we further reviewed the 14/100 cases where Diamante achieved a higher engagingness score than the human reference.
We found that the possible reasons for this phenomenon could be twofold. Firstly, it is difficult for annotators to keep producing attractive and engaging responses at every round in multi-turn conversations, which is consistent with our daily conversations. Secondly, Diamante encodes preference estimation in the joint training to enhance the alignment with human preference, which helps it select the human-preferred response among the candidates. 4.2.2 SELF-CHAT EVALUATION As suggested by Adiwardana et al. (2020), the static evaluation can be biased by the construction of the dialogue context. Therefore, we also include interactive evaluation in the experiments, comprising self-chat evaluation and human-bot chat evaluation. Following the settings in PLATO-XL, 50 open-domain utterances are selected as dialogue openings, and models play the roles of both partners to continue the conversation for 5 rounds. These conversations are then distributed to crowd-sourcing workers for evaluation. The self-chat evaluation results are summarized in Table 3. Diamante outperforms the other models in all evaluation aspects and establishes a new state-of-the-art result in Chinese open-domain conversation. In particular, Diamante achieves a remarkable 50% improvement on the metric of engagingness compared to PLATO-XL. These results verify the effectiveness of the Diamante dataset and the generation-evaluation joint training paradigm. 4.2.3 HUMAN-BOT CHAT EVALUATION In addition to the above dialogue models, Diamante is compared to common commercial chatbots in Chinese through human-bot chat evaluations. We select 20 high-frequency topics from a deployed chatbot and ask in-house data specialists to interact with these chatbots for 7-14 rounds. The human-bot chat evaluation results are summarized in Table 4. Diamante consistently outperforms the commercial chatbots by a large margin across all the human evaluation metrics. These results indicate that Diamante can produce high-quality responses when interacting with real users. The Fleiss' kappa (Fleiss, 1971) scores for the static evaluation, self-chat evaluation, and human-bot chat evaluation are 0.433, 0.468, and 0.424, respectively. This suggests that the crowd-sourcing workers reached a moderate agreement in human evaluation. 4.3 DISCUSSIONS 4.3.1 ABLATION STUDY ON JOINT TRAINING As discussed in previous sections, the improvements of Diamante compared to PLATO-XL come from two aspects: the Diamante dataset bridges the distribution gap towards human-human conversations, and the joint training paradigm enhances the alignment with human preference. For further dissection, we carry out ablation studies on joint training as follows. Without joint training, PLATO-XL is trained with the Diamante dataset to minimize the NLL loss, and the final response is selected based on generation probability during inference. With joint training, PLATO-XL is trained with the Diamante dataset to minimize the generation-evaluation integrated loss, and the final response is selected based on preference estimation during inference. Firstly, we conduct automatic evaluations of response selection on the test set to compare these two approaches. Each dialogue context has one human-annotated response and seven model-generated candidates (displayed during annotation). The experiments evaluate the ranking of the reference response among these candidates.
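Such ranking experiments can be summarized with standard retrieval metrics; a small sketch follows (our illustration; `ranks` is assumed to hold the 1-indexed position of the reference response among the eight candidates for each context). Since each context has exactly one relevant response, average precision reduces to the reciprocal rank, so MAP coincides with MRR here:

```python
def ranking_metrics(ranks):
    """ranks: 1-indexed rank of the reference response per context."""
    n = len(ranks)
    mrr = sum(1.0 / r for r in ranks) / n
    p_at_1 = sum(1 for r in ranks if r == 1) / n
    # With a single relevant item per query, MAP equals MRR.
    return {"MAP": mrr, "MRR": mrr, "P@1": p_at_1}
```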
The results are reported in terms of mean average precision (MAP), mean reciprocal rank (MRR), and precision at position 1 (P@1), as summarized in Figure 3. The preference estimation of the joint training is adept at selecting the response that aligns well with human preference. By contrast, the generation probability has difficulty capturing the nuanced distinctions and delivers almost random performance in response ranking. Secondly, we conduct human evaluations to compare these two approaches, with the self-chat evaluation results summarized in Table 5. As exhibited in the comparison, the absence of joint training leads to a substantial performance decrease in engagingness, informativeness, and safety. These results validate that the joint training paradigm improves the alignment with human preference and plays a critical role in boosting the open-domain chatbot. 4.3.2 APPLYING DIAMANTE TO OTHER DIALOGUE MODELS Although the Diamante dataset is collected with the assistance of PLATO-XL and the main experiments evaluate Diamante's improvements over PLATO-XL, the framework is indeed universal and not limited to one particular dialogue model. Further explorations of applying Diamante to other dialogue models are carried out, with CDial-GPT taken as an example. The self-chat evaluation results are summarized in Table 6. Compared to the original model, applying Diamante to CDial-GPT brings remarkable improvements across all evaluation metrics, verifying the effectiveness of Diamante in boosting the performance of Chinese pre-trained dialogue models. 4.3.3 CASE ANALYSIS We provide two cherry-picked examples in Figure 4 and Figure 5 for qualitative analysis. In the self-chat example, the dialogue opening is about favorite food, and the model plays the roles of both partners to continue the conversation. The two speakers have an in-depth discussion on hot pot, covering topics from favorite dishes to dipping sauce recipes. In the human-bot chat example, the bot expresses its opinions on the ideal partner and maintains them well within the multi-turn conversation (i.e., personality is more important). At the same time, the bot respects the different opinions of the other speaker and exhibits a good alignment with human values. 5 RELATED WORK 5.1 HUMAN FEEDBACK With the rapid development of large language models, it becomes critical to build helpful, honest, and harmless language assistants, keeping in mind the alignment with human values (Askell et al., 2021; Bai et al., 2022; Glaese et al., 2022). Given the misalignment between the conventional training objective and the ultimate human preference, some works (such as WebGPT (Nakano et al., 2021) and InstructGPT (Ouyang et al., 2022)) leverage human feedback to train a reward model and optimize towards this proxy objective using reinforcement learning. There are some similar works in dialogue generation (Yi et al., 2019; Jaques et al., 2020), where the reward combines multifaceted evaluation scores, including sentiment, repetition, coherence, etc. When using these reinforcement learning-based approaches, one needs to be careful with the "alignment tax" and not over-optimize (Liu et al., 2022a). In addition to the above reinforcement learning approaches, some works (Hancock et al., 2019; Shuster et al., 2020; Xu et al., 2022) in dialogue generation continue supervised training with human feedback, with the primary motivation of lifelong learning.
The dialogue agent iterates the following steps: deploy the dialogue model, collect the human-model conversations, and update the model with the newly collected samples. During this process, only the human responses are used to update the model, and special attention is required to avoid low-quality responses from trolls (Ju et al., 2022). In comparison, Diamante involves human workers during the development phase rather than after deployment, bringing several benefits. Firstly, human annotators in Diamante have access to model-generated candidate responses and can efficiently formulate a high-quality conversation. Other approaches instead collect indirect demonstrations from human workers with canned responses, which inevitably interrupts the conversation flow and decreases quality. Besides, the Diamante dataset is collected with recruited annotators, eliminating the adverse impact of trolls. Secondly, in addition to the explicit human demonstration, there exists implicit human preference in Diamante's data collection process, which allows the training of a preference estimation model without additional annotation. 5.2 OPEN-DOMAIN DIALOGUE DATASET Given the limited number of annotated human-human conversations, open-domain dialogue models are typically pre-trained with human-like conversations collected from social media, such as Twitter, Reddit, Weibo, and Douban. To alleviate the problems brought by the data distribution gap, it has become common to fine-tune these dialogue models with annotated human-human conversations. Representative English datasets include DailyDialog (Li et al., 2017), ConvAI2 (Zhang et al., 2018), Empathetic Dialogues (Rashkin et al., 2019), Wizard of Wikipedia (Dinan et al., 2019), Blended Skill Talk (Smith et al., 2020), etc. In comparison, high-quality annotations of human-human conversations are scarcer in other languages. Most Chinese chit-chat datasets are constructed based on social media comments, including LCCC (Wang et al., 2020), WDC-Dialogue (Zhou et al., 2021), and so on. To our knowledge, the Diamante dataset is the first chit-chat dataset with annotated human-human conversations in Chinese. It is worth noting that Diamante is not a simple fix to the limitation in Chinese conversation; it provides a systematic data collection strategy that is applicable to all languages with high efficiency. 6 CONCLUSION In this paper, we propose to collect and leverage human feedback to boost the open-domain chatbot. By asking annotators to select or amend the model-generated candidate responses, Diamante efficiently collects a high-quality Chinese chit-chat dataset. Diamante introduces a novel generation-evaluation joint training paradigm, which leverages both the explicit human demonstration and the implicit human preference that appeared in the data collection process. Experimental results indicate that the Diamante dataset and joint training paradigm significantly improve pre-trained dialogue models. 7 ETHICS STATEMENT In the dataset collection, annotators need to select or amend the model-generated candidate responses, where some candidates may contain potentially unsafe content. We ask annotators to produce safe and engaging responses. (As the model is pre-trained with social media comments, it may sometimes generate biased or harmful statements. During annotation, we have been monitoring the proportion of potentially unsafe candidates, which is less than 1%.)
After annotation, we further employ data experts to review the collected data and remove ineligible conversations. Diamante's dataset and joint training paradigm help boost the open-domain chatbot and align it well with human values. In practical deployments, it is desirable to employ more strategies to guarantee dialogue safety (Dinan et al., 2021), including sensitive topic detection, response safety classification, and so on. 8 REPRODUCIBILITY STATEMENT We describe the collection of Diamante's dataset in Section 2 and Appendix B, including the annotation interface, annotation procedures, quality control process, etc. The Diamante dataset is now publicly available, which can be accessed and downloaded under the license agreement at the data platform. We introduce the model designs in Section 3 and discuss the training configurations in Section 4.1.1. We have included the Diamante source code in the supplementary materials to facilitate reproducibility. A SCORING CRITERIA IN HUMAN EVALUATION The criteria used in human evaluation are provided in Table 7. B DATASET DETAILS B.1 ANNOTATION INTERFACE The original annotation interface of Diamante is in Chinese, as shown in Figure 6. The annotator first crafts the dialogue opening and then selects or amends the model-generated candidate responses to continue the conversation. The left-hand area displays the dialogue context and the input box. The top right-hand part provides a brief task description and a link to the detailed guidelines. The bottom right-hand part lists some inspiring topics or model-generated candidate responses. B.2 QUALITY CONTROL To ensure the annotation quality of the Diamante dataset, we designed and followed a rigorous quality control process. We engaged a vendor company to recruit experienced annotators, instructed them with detailed guidelines, set up admission tests, answered questions in an online shared room, and executed regular reviews within the annotation. After annotation, we asked data experts to review all collected conversations and remove a conversation whenever one expert deemed it ineligible. B.3 TOPIC VISUALIZATION The topic visualization of the Diamante dataset is displayed in Figure 7. There are 26 categories in the topic tagger, and the Diamante dataset covers all of them. The top five topics are Society (23%), Entertainment (11%), People (10%), Education (8%), and Food & Drink (8%), which are in line with our daily life. C FURTHER DISCUSSIONS C.1 MORE EXPLORATION ON JOINT TRAINING As shown in Table 5, the Diamante dataset and joint training paradigm bring significant improvements. To further analyze the effects of joint training, we carry out a pairwise comparison between models with and without joint training (PLATO-XL trained on the Diamante dataset). We ask crowd-sourcing workers to compare the self-chat conversations generated by these two models and select the preferred one. The comparison in Figure 8 (upper bar) shows that the joint training paradigm is crucial in boosting the open-domain chatbot. In Diamante, the joint training leverages the implicit human preference that appeared in the data collection (r_H > r_M). We also explore applying the joint training to other conventional dialogue datasets, with DuSinc (Zhou et al., 2022) taken as an example. To formulate training samples for the preference ranking r_H > r_M > r_R, PLATO-XL is employed to simulate model-generated responses. Two models (PLATO-XL with joint training and PLATO-XL without joint training) are trained on the DuSinc dataset.
We randomly select 100 samples from the test set for static evaluation and ask crowd-sourcing workers to compare the responses generated by these two models. The comparison in Figure 8 (bottom bar) verifies the effectiveness and generality of the joint training paradigm. C.2 SAFETY UNDER ADVERSARIAL ATTACK The main experiments reveal that Diamante achieves better safety on normal/insensitive topics. To further analyze the safety performance under adversarial attacks, we asked annotators to interact with PLATO-XL on sensitive topics and induce unsafe responses from the model. The annotators were then asked to amend these unsafe responses into safe ones. These sensitive topics are designed and selected according to Chinese cultural and social norms, including harmful speech (e.g., offensive content, self-harm suggestions, and personal attacks), group discrimination (e.g., region, gender, disability, and religion), misleading information (e.g., political controversies, ethnic division, and conspiracy theories), and so on. In total, we collected 1,000 samples (each including an adversarial dialogue context, the original unsafe response, and the amended safe response). We employ these samples to evaluate Diamante's safety under adversarial attacks. The automatic evaluation results in Figure 9 suggest that Diamante is adept at selecting safe responses. We also randomly selected 100 samples and employed crowd-sourcing workers to evaluate the generated responses. The results in Table 8 reveal that Diamante achieves a remarkable safety improvement, with 76% of responses identified as safe. Even though Diamante is only trained with insensitive conversations, it absorbs human preferences and maintains good safety performance under adversarial attacks. C.3 AUTOMATIC DIALOGUE EVALUATION We also carry out automatic evaluation with rule-based and model-based metrics, including BLEU-2/4 (Chen & Cherry, 2014), Distinct-1/2 (Li et al., 2016), Unigram F1 (Dinan et al., 2019), and BERTScore (Zhang et al., 2019). The automatic evaluation results in Table 9 are inconsistent with the human evaluation results in Table 2; human evaluation remains the gold standard in open-domain chit-chat evaluation. The difference between Diamante and PLATO-XL is minor in automatic evaluation, whereas Diamante significantly improves over PLATO-XL in human evaluation. C.4 CASE ANALYSIS WITH COMPARED APPROACHES We provide two more examples by PLATO-XL and XiaoIce in Figure 10 and Figure 11. These two examples use the same starting utterances as the Diamante examples in Figure 4 and Figure 5.
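As one concrete example of the automatic metrics in C.3, Distinct-n (Li et al., 2016) can be computed as in the sketch below; whitespace tokenization is an assumption for illustration (Chinese text would typically be word-segmented or character-tokenized first):

```python
from collections import Counter

def distinct_n(responses, n):
    """Distinct-n: the ratio of unique n-grams to total n-grams over a
    set of generated responses; higher values indicate more diversity."""
    ngrams = Counter()
    for r in responses:
        toks = r.split()
        ngrams.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0
```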
1. What is the focus and contribution of the paper on training chatbots with human preferences? 2. What are the strengths of the proposed approach, particularly in terms of including human feedback and preference estimation loss? 3. What are the weaknesses of the paper, especially regarding the potential contradiction in the preference estimation loss and the risk of overfitting to biases of a select set of reviewers? 4. Do you have any concerns about the evaluation methodology, such as annotator agreement, data quantity, and potential bias? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This work aims to train chatbots that generate responses aligned with human preferences. The training dataset is generated with human annotation. Specifically, the annotators start a conversation with a chatbot on a certain topic. At each turn, the annotators can provide feedback on the generated response. The feedback can come in the form of revisions to the generated response or a complete rewrite. After correction, the dialogue is continued, and the "response → feedback" steps are repeated. During fine-tuning, the corrected responses are used in the standard perplexity loss. Additionally, a "preference estimation" loss is included to encourage the model to rank the corrected responses over the original model response. Humans rate the responses generated by the proposed model to be more coherent, informative, and engaging. Strengths And Weaknesses Strengths Including human preferences and feedback in generation models is an important research direction. The paper takes meaningful steps to achieve this in a non-English setting. The methodology for data collection is sensible, and the scale of the dataset is sufficiently large to be of interest to the broader community. Weaknesses In the preference estimation loss (Equation 2), the authors always assume that human-generated responses are better. However, it is mentioned in Table 1 that 18% of the responses were selected as being already good. Isn't there a contradiction here? Essentially, Equation 2 will sometimes force the model to learn not to select an otherwise perfectly valid response. Evaluation 2.1: This is probably a clarification: was there any overlap between the sets of human annotators used for data creation and for evaluation (the paper mentions "in-house data specialists")? If yes, there is a risk that the method overfits not to human preferences but to the biases of a select set of reviewers. For example, consider 4 annotators: A, B, C, D. A and B like cheerful responses, and C and D like brief, serious responses. Suppose the responses from the baseline are either cheerful or serious with 50% probability. If we collect data using annotators A, B and also make them evaluate, the baseline performance will be ~50%, and your method will achieve a score of nearly 100%. The other extreme follows from using C, D as evaluators. Thus, the right way to go about it is to randomly select the groups for evaluation and data generation. 2.2. "We select 20 high-frequency topics from a deployed chatbot and ask in-house data specialists to interact with these chatbots for 7-14 rounds". How much data was used for evaluation? 2.3 The annotator agreement is moderate (closer to the boundary of low and moderate), indicating that the method may generate better responses simply because of more fine-tuning. Engagement vs. utility: The key assumption in this work is that engagement is proportional to human preference. Is that necessarily true? For example, a chatbot that produces brief/terse responses may not be preferred for conversation in general but might be okay as long as it's useful. Other notes/good to have: a) "engagingness" should probably be changed to "engagement." b) For the human evaluation results in Table 2, significance tests are missing. Clarity, Quality, Novelty And Reproducibility The paper is well-written and easy to understand. Some methods used in this work are derived from existing work.
ICLR
Title Towards Boosting the Open-Domain Chatbot with Human Feedback Abstract Many open-domain dialogue models pre-trained with social media comments can generate coherent replies but have difficulties producing engaging responses. This phenomenon might mainly result from the deficiency of annotated human-human conversations and the misalignment with human preference. In this paper, we propose a novel and efficient framework Diamante to boost the open-domain chatbot, where two kinds of human feedback (including explicit demonstration and implicit preference) are collected and leveraged. By asking annotators to select or amend the model-generated candidate responses, Diamante efficiently collects the human demonstrated responses and constructs a Chinese chit-chat dataset. To enhance the alignment with human preference, Diamante leverages the implicit preference in the data collection process and introduces the generation-evaluation joint training. Comprehensive experiments indicate that the Diamante dataset and joint training paradigm can significantly boost the performance of pre-trained dialogue models. The overall engagingness of the previous state-of-the-art model has been improved remarkably by 50% in Chinese open-domain conversations. 1 INTRODUCTION In recent years, the self-supervised pre-training based on tremendous unlabeled data has brought great success for many natural language processing tasks (Brown et al., 2020; Chowdhery et al., 2022). In dialogue generation, the pre-training is usually carried out with massive social media comments, acting as human-like conversations (Adiwardana et al., 2020; Bao et al., 2021; Thoppilan et al., 2022). Despite that these pre-trained dialogue models are capable of generating coherent replies, they have difficulties producing engaging responses. The main reasons for this phenomenon might be twofold. Firstly, there exists a considerable gap in the data distribution between the proxy human-like conversations (public group discussion) and the real human-human conversations (private two-way messaging). Secondly, the dialogue model usually outputs the response with the highest generation probability, which could reflect the probability mass over all the training data but might not align well with human preference (e.g., some biased or unsafe statements). One straightforward way to narrow the data distribution gap is to fine-tune the pre-trained dialogue model with annotated human-human conversations. For instance, Blender (Roller et al., 2021) employs four annotated datasets (Zhang et al., 2018; Dinan et al., 2019; Rashkin et al., 2019; Smith et al., 2020) to emphasize the conversational skills of personality, knowledge, empathy, and engagingness. As for the alignment with human preference, LaMDA (Thoppilan et al., 2022) defines and quantifies some critical metrics for dialogue evaluation, including safety, interestingness, and so on. By filtering out those candidate responses with poor performance on these metrics, the human preference towards the dialogue model has increased significantly. However, compared with English, the annotations of high-quality human-human conversations or dialogue evaluation samples are relatively scarce in other languages. As a result, even the state-of-the-art Chinese chatbot – PLATO-XL (Bao et al., 2021), is only pre-trained with social media comments and not involved with advanced response evaluation. 
In this paper, we propose a novel and efficient framework, namely Diamante, consisting of a data collection strategy and a learning method to boost the performance of pre-trained dialogue models. Two kinds of human feedback are collected and leveraged in Diamante, including explicit demonstration and implicit preference. Firstly, to bridge the gap in data distribution, Diamante collects an open-domain chit-chat dataset in Chinese with the assistance of PLATO-XL. Based on modelgenerated candidate responses, human annotators can efficiently produce an engaging response to continue the conversation. Secondly, we propose to leverage the implicit human preference that appeared in the data collection process, i.e., the annotator’s selected or amended response is preferred over the other candidates. To this end, Diamante introduces a novel generation-evaluation joint training paradigm, where high-quality response generation and human preference estimation are learned simultaneously. During inference, the candidate response with the highest preference score would be selected as the final response and returned to the user. Extensive and intensive experiments have been carried out to evaluate the effectiveness of the Diamante framework, including the collected dataset and joint training paradigm. Experimental results reveal that Diamante significantly boosts PLATO-XL’s performance and establishes a new state-of-the-art result in Chinese open-domain conversation. It is notable that compared to the human reference, Diamante even achieves competitive or slightly better performance. In addition to PLATO-XL, Diamante brings remarkable improvements to other pre-trained dialogue models. The Diamante dataset is now publicly available, which can be accessed and downloaded under the license agreement at the data platform1. We have also released all source code2, hoping to facilitate future research in dialogue generation. 2 DIAMANTE DATASET In this paper, we collect an open-domain chit-chat dataset in Chinese with the assistance of a pretrained dialogue model. In the following, we will describe the creation of the Diamante dataset. 2.1 DATA COLLECTION Diamante aims to explore an efficient way to collect a batch of high-quality chit-chat conversations that align well with human values. The data annotation interface is shown in Figure 1 (the original interface is in Chinese and displayed in Figure 6 of the Appendix). The data collection process is carried out as follows. Step 1: Crafting the Dialogue Opening. Firstly, the annotator is encouraged to craft a start utterance based on any topic of interest, as an informative and engaging dialogue opening is critical to a good conversation. As shown in Figure 1, the start utterance is “My cat started shedding everywhere in the spring. How to deal with it?”. We also provide various topics and examples in the guidelines to inspire annotators to write dialogue openings. 1The Diamante dataset is publicly available at https://anonymous. 2The Diamante source code is available at https://github.com/anonymous. Step 2: Generating Candidate Responses with the Dialogue Model. Given the dialogue context, a dialogue model (PLATO-XL in the Diamante dataset) is employed to generate multiple candidate responses. To ensure the diversity of response content and conversation flow, we adopt the top-k sampling as the decoding strategy and select seven candidates for the demonstration to the annotator. Step 3: Producing Response with Human Feedback. 
We then ask the annotator to select, revise or rewrite the candidate to produce an appropriate response. - Select. As large-scale dialogue models can generate coherent and occasionally interesting responses, the annotator is allowed to select one response directly from the candidates where appropriate. - Revise. Given the possible defects in the candidate responses, such as a lack of consistency or attractiveness, the annotator can choose the preferred candidate and further revise it for better quality. - Rewrite. If no appropriate candidate exists, the annotator needs to write a suitable and engaging response by themselves. Iterating Step 2 & Step 3 to Continue the Dialogue. After collecting the response with human feedback, the conversation will continue by iterating step 2 and step 3. The dialogue collection with the human-model in the loop will continue for at least seven rounds. To ensure the annotation quality of the Diamante dataset, we also designed and followed a rigorous quality control process, with details discussed in the Appendix. The above data collection strategy works well in terms of efficiency and quality. The annotator can produce the final response efficiently by directly selecting or amending the model-generated candidates. The conversation quality is guaranteed or enhanced with the human annotator’s verification or embellishment. Moreover, the implicit human preference that appeared in the data collection process also allows the training of one preference estimation model without additional annotation. 2.2 DATA ANALYSIS Corpus Statistics. In total, 147 annotators participated in the dataset collection. The detailed statistics of the Diamante dataset are summarized in Table 1. The dataset consists of 6,838 dialogues with 98,115 utterances, and the average utterance length is about 14.25. We split the collected data into train, validation, and test sets. As for the annotator operation proportions, 18% of the utterances are produced from Select, 41% from Revise, and 41% from Rewrite. Dialogue Topics. The Diamante dataset is about open-domain chit-chat and is not limited to any topic. For further quantitative analysis, we employ the topic tagger on the Baidu AI platform3 to categorize the dialogues. (The topic visualization of the Diamante dataset is displayed in Figure 7 of the Appendix.) The results show that the Diamante dataset covers all 26 main categories. The top five topics are Society (23%), Entertainment (11%), People (10%), Education (8%), and Food & Drink (8%), which are in line with our daily life. 3 GENERATION-EVALUATION JOINT TRAINING In this paper, we propose to leverage not only the explicit human demonstrations but also the implicit human preference that appeared in the data collection to boost the open-domain chatbot comprehensively. A novel generation-evaluation joint training paradigm is introduced and illustrated in Figure 3https://ai.baidu.com/tech/nlp_apply/topictagger 2, where the high-quality response generation and human preference estimation are optimized simultaneously. The classical training objective of dialogue generation is to minimize the negative log-likelihood (NLL) loss: LNLL = − log pθ(rH|c) (1) where c refers to the dialogue context and rH is the human annotator’s selected or amended response. Besides generation, Diamante encodes evaluation into the joint optimization to enhance the alignment with human preference. 
Recall that in the data collection process, there exists implicit human preference: given the dialogue context c, the final response rH is preferred by human annotators as compared to a model-generated candidate rM ∈ RM (displayed during annotation). Moreover, either rH or rM is better than a randomly selected response rR in most cases. As such, we can have the following preference ranking rH > rM > rR. The preference estimation (PE) loss is then defined as: LPE = − 1 3 [ log ( σ ( s(c, rH)− s(c, rM) )) + log ( σ ( s(c, rH)− s(c, rR) )) + log ( σ ( s(c, rM)− s(c, rR) ))] (2) where the input is a quadruple of (c, rH, rM, rR), σ(·) is the sigmoid function, and s(·) is the scalar output of the model. The total objective of the generation-evaluation joint training is to minimize the following integrated loss: L = LNLL + LPE (3) The first term helps the model learn to mimic human demonstrations and generate high-quality candidate responses. And the second term helps the model learn the nuanced distinctions among human preferences. During inference, we adopt the top-k sampling to produce multiple candidate responses and then perform ranking with their corresponding preference estimation scores. The one with the highest preference score would be selected as the final response and returned to the user. Notably, the preference estimation follows the candidate response decoding and only involves one more token processing, which incurs negligible computational cost. One similar work to Diamante’s joint training is LaMDA (Thoppilan et al., 2022), where a single model functions as both a generator and a discriminator. In comparison, there exist several critical differences between Diamante and LaMDA. Firstly, LaMDA chooses to learn the discriminator and generator sequentially. By contrast, Diamante optimizes generation and evaluation simultaneously, trying to avoid the catastrophic forgetting issue of the two-stage training (Kirkpatrick et al., 2017; Liu et al., 2022b). Secondly, LaMDA defines fine-grained dialogue evaluation metrics and collects corresponding discriminator training samples. Considering the expensive cost of data collection and the difficulty of reaching an agreement in fine-grained dialogue evaluation (Smith et al., 2022), Diamante leverages the implicit human preference as the overall evaluation and gets rid of additional annotations. Thirdly, as suggested in the works of human alignment (Askell et al., 2021), the ranked preference evaluation adopted in Diamante performs better than the binary discrimination used in LaMDA. 4 EXPERIMENTS 4.1 SETTINGS 4.1.1 IMPLEMENTATION DETAILS We apply the Diamante dataset and joint training paradigm to boost PLATO-XL’s performance. In the generation-evaluation joint training, the input samples are formulated as quadruples (c, rH, rM, rR), where c is the dialogue context, rH is the human annotator’s selected or amended response, rM is one candidate response displayed during annotation, and rR is one randomly selected response from the dataset. During the construction of joint training samples, if the sampled model-generated candidate rM is found to be the same as the human-generated response rH, rM will be re-sampled to guarantee the agreement (preference ranking rH > rM). In addition, rM and rR are re-sampled at each training epoch. The model is initialized with the 11B parameter PLATO-XL, with the transformer architecture of PrefixLM (Radford et al., 2018; Dong et al., 2019). 
(There are 72 transformer blocks and 32 attention heads, with the embedding dimension of 3072. The hidden dimension of the feedforward layer is set to 18432.) The preference estimation value s(·) is obtained through one fully-connected layer (converting the transformer output into one scalar). The hyper-parameter settings used in the training process are listed as follows. The maximum sequence length of context and response is set to 384 and 128, respectively. We use Adam (Kingma & Ba, 2015) as the optimizer, with a learning rate scheduler including a linear warmup and an invsqrt decay (Vaswani et al., 2017). The peak learning rate is set to 2e-6, and the warmup step is set to 500. The model is trained for five epochs with a batch size of 168. The implementation is based on the PaddlePaddle framework, and the experiments are carried out on 8 Nvidia A100 GPUs (40G RAM). During inference, we adopt the top-k sampling (k set to 10) to produce 20 candidate responses and select one with the highest preference estimation score as the final response. 4.1.2 COMPARED APPROACHES In the experiments, the following Chinese dialogue models are considered: • CDial-GPT (Wang et al., 2020) is a 104M parameter model trained on LCCC conversations. • EVA2.0 (Gu et al., 2022) is a 2.8B parameter model pre-trained on cleaned WDC-Dialogue. • PLATO-XL (Bao et al., 2021) is the largest Chinese dialogue model with up to 11B parameters, pre-trained on social media conversations. In addition to the above dialogue models, the following commercial chatbots in Chinese are included: Microsoft XiaoIce (Zhou et al., 2020), Xiao AI, Tmall Genie, and Apple Siri. 4.1.3 EVALUATION METRICS In the experiments, we employ crowd-sourcing workers to evaluate the dialogue quality in four aspects: coherence, informativeness, safety, and engagingness. We discuss these criteria below and provide scoring details in Appendix A. • Coherence assesses whether the response is relevant and consistent with the context. • Informativeness evaluates whether the response includes appropriate information. • Safety evaluates whether the response contains harmful, biased, or misleading content. • Engagingness measures the willingness to have a long conversation with the partner. The coherence, informativeness, and safety are the utterance-level metrics. The engagingness is the dialogue-level metric. These metrics are evaluated on a range of [0, 1, 2], with higher scores being better. Each sample is distributed to three crowd-sourcing workers, and the final score is determined through majority voting. 4.2 EXPERIMENTAL RESULTS Considering the limitations of automatic dialogue evaluation (Liu et al., 2016), we employ crowdsourcing workers to evaluate the dialogue quality, including static evaluation, self-chat evaluation, and human-bot chat evaluation. 4.2.1 STATIC EVALUATION In the static evaluation, we randomly select 100 samples from the test set and employ the models to generate the response given the multi-turn dialogue context. In addition to PLATO-XL and Dia- mante, we also provide the performance of ground truth for reference. The evaluation results are summarized in Table 2. Diamante significantly improves the response quality on all criteria compared to PLATO-XL. Diamante even achieves competitive or slightly better performance compared to the human reference. For a detailed analysis, we further reviewed the 14/100 cases where Diamante achieved a higher engagingness score than the human reference. 
We found that the possible reasons for this phenomenon are twofold. Firstly, it is difficult for annotators to keep producing attractive and engaging responses at every round of a multi-turn conversation, which is consistent with our daily conversations. Secondly, Diamante encodes the preference estimation into the joint training to enhance the alignment with human preference, which helps it select the human-preferred response among the candidate responses.

4.2.2 SELF-CHAT EVALUATION

As suggested by Adiwardana et al. (2020), the static evaluation can be biased by the construction of the dialogue context. Therefore, we also include interactive evaluation in the experiments, namely self-chat evaluation and human-bot chat evaluation. Following the settings in PLATO-XL, 50 open-domain utterances are selected as dialogue openings, and models play the roles of both partners to continue the conversation for 5 rounds. These conversations are then distributed to crowd-sourcing workers for evaluation. The self-chat evaluation results are summarized in Table 3. Diamante outperforms the other models in all evaluation aspects and establishes a new state-of-the-art result in Chinese open-domain conversation. In particular, Diamante achieves a remarkable 50% improvement on the metric of engagingness compared to PLATO-XL. These results verify the effectiveness of the Diamante dataset and the generation-evaluation joint training paradigm.

4.2.3 HUMAN-BOT CHAT EVALUATION

In addition to the above dialogue models, Diamante is compared to common commercial chatbots in Chinese through human-bot chat evaluations. We select 20 high-frequency topics from a deployed chatbot and ask in-house data specialists to interact with these chatbots for 7-14 rounds. The human-bot chat evaluation results are summarized in Table 4. Diamante consistently outperforms the commercial chatbots by a large margin across all human evaluation metrics. These results indicate that Diamante can produce high-quality responses when interacting with real users.

The Fleiss' kappa (Fleiss, 1971) scores for the static evaluation, self-chat evaluation, and human-bot chat evaluation are 0.433, 0.468, and 0.424, respectively, suggesting that the crowd-sourcing workers reached a moderate agreement in the human evaluation.

4.3 DISCUSSIONS

4.3.1 ABLATION STUDY ON JOINT TRAINING

As discussed in previous sections, the improvements of Diamante over PLATO-XL come from two aspects: the Diamante dataset bridges the distribution gap towards human-human conversations, and the joint training paradigm enhances the alignment with human preference. For further dissection, we carry out ablation studies on the joint training as follows. Without joint training, PLATO-XL is trained with the Diamante dataset to minimize the NLL loss, and the final response is selected based on generation probability during inference. With joint training, PLATO-XL is trained with the Diamante dataset to minimize the generation-evaluation integrated loss, and the final response is selected based on preference estimation during inference.

Firstly, we conduct automatic evaluations of response selection on the test set to compare these two approaches. Each dialogue context has one human-annotated response and seven model-generated candidates (displayed during annotation). The experiments evaluate the ranking of the reference response among these candidates.
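Since each context has exactly one reference response among the candidates, the ranking metrics reported next take a particularly simple form. The sketch below is our own illustration, assuming 1-based ranks as input; with a single relevant item per query, average precision reduces to the reciprocal rank, so MAP and MRR coincide in this setting.

```python
def ranking_metrics(reference_ranks):
    """Response-selection metrics given the 1-based rank of the reference
    response among the candidates for each dialogue context."""
    n = len(reference_ranks)
    mrr = sum(1.0 / r for r in reference_ranks) / n   # equals MAP here
    p_at_1 = sum(1 for r in reference_ranks if r == 1) / n
    return {"MAP": mrr, "MRR": mrr, "P@1": p_at_1}

# Toy example: the reference is ranked 1st, 3rd, and 2nd in three contexts.
print(ranking_metrics([1, 3, 2]))  # {'MAP': 0.61..., 'MRR': 0.61..., 'P@1': 0.33...}
```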
The results are reported in terms of mean average precision (MAP), mean reciprocal rank (MRR), and precision at position 1 (P@1), as summarized in Figure 3. The preference estimation learned in the joint training is adept at selecting the response that aligns well with humans. By contrast, the generation probability has difficulty capturing the nuanced distinctions and delivers almost random performance in response ranking. Secondly, we conduct human evaluations to compare these two approaches, with the self-chat evaluation results summarized in Table 5. As exhibited in the comparison, the absence of joint training leads to a substantial performance decrease in engagingness, informativeness, and safety. These results validate that the joint training paradigm improves the alignment with human preference and plays a critical role in boosting the open-domain chatbot.

4.3.2 APPLYING DIAMANTE TO OTHER DIALOGUE MODELS

Although the Diamante dataset is collected with the assistance of PLATO-XL and the main experiments evaluate Diamante's improvements over PLATO-XL, the framework is indeed universal and not limited to one particular dialogue model. We further explore applying Diamante to other dialogue models, taking CDial-GPT as an example. The self-chat evaluation results are summarized in Table 6. Compared to the original model, applying Diamante to CDial-GPT brings remarkable improvements across all evaluation metrics, verifying the effectiveness of Diamante in boosting the performance of Chinese pre-trained dialogue models.

4.3.3 CASE ANALYSIS

We provide two cherry-picked examples in Figure 4 and Figure 5 for qualitative analysis. In the self-chat example, the dialogue opening is about favorite food, and the model plays the roles of both partners to continue the conversation. The two speakers have an in-depth discussion on hot pot, covering topics from favorite dishes to dipping sauce recipes. In the human-bot chat example, the bot expresses its opinions on the ideal partner and maintains them well throughout the multi-turn conversation (i.e., that personality is more important). At the same time, the bot respects the differing opinions of the other speaker and exhibits a good alignment with human values.

5 RELATED WORK

5.1 HUMAN FEEDBACK

With the rapid development of large language models, it becomes critical to build helpful, honest, and harmless language assistants, keeping in mind the alignment with human values (Askell et al., 2021; Bai et al., 2022; Glaese et al., 2022). Given the misalignment between the conventional training objective and the ultimate human preference, some works (such as WebGPT (Nakano et al., 2021) and InstructGPT (Ouyang et al., 2022)) leverage human feedback to train a reward model and optimize towards this proxy objective using reinforcement learning. There are similar works in dialogue generation (Yi et al., 2019; Jaques et al., 2020), where the reward combines multifaceted evaluation scores, including sentiment, repetition, coherence, etc. When using these reinforcement learning-based approaches, one needs to be careful about the "alignment tax" and avoid over-optimization (Liu et al., 2022a). In addition to the above reinforcement learning approaches, some works in dialogue generation (Hancock et al., 2019; Shuster et al., 2020; Xu et al., 2022) continue supervised training with human feedback, with the primary motivation of lifelong learning.
The dialogue agent iterates the following steps: deploy the dialogue model, collect the human-model conversations, and update the model with the newly collected samples. During this process, only the human responses are used to update the model, and special attention is required to filter out low-quality responses from trolls (Ju et al., 2022). In comparison, Diamante involves human workers during the development phase rather than after deployment, which brings several benefits. Firstly, human annotators in Diamante have access to model-generated candidate responses and can efficiently formulate a high-quality conversation, whereas other approaches collect indirect demonstrations from human workers with canned responses, which inevitably interrupts the conversation flow and degrades quality. Besides, the Diamante dataset is collected with recruited annotators, eliminating the adverse impact of trolls. Secondly, in addition to the explicit human demonstrations, there exists implicit human preference in Diamante's data collection process, which allows the training of a preference estimation model without additional annotation.

5.2 OPEN-DOMAIN DIALOGUE DATASET

Given the limited number of annotated human-human conversations, open-domain dialogue models are typically pre-trained with human-like conversations collected from social media, such as Twitter, Reddit, Weibo, and Douban. To alleviate the problems brought by the data distribution gap, it has become common to fine-tune these dialogue models with annotated human-human conversations. Representative English datasets include DailyDialog (Li et al., 2017), ConvAI2 (Zhang et al., 2018), Empathetic Dialogues (Rashkin et al., 2019), Wizard of Wikipedia (Dinan et al., 2019), Blended Skill Talk (Smith et al., 2020), etc. In comparison, high-quality annotations of human-human conversations are scarcer in other languages. Most Chinese chit-chat datasets are constructed from social media comments, including LCCC (Wang et al., 2020), WDC-Dialogue (Zhou et al., 2021), and so on. To our knowledge, the Diamante dataset is the first chit-chat dataset with annotated human-human conversations in Chinese. It is worth noting that Diamante is not merely a fix for this limitation in Chinese conversation; it provides a systematic and efficient data collection strategy that is applicable to all languages.

6 CONCLUSION

In this paper, we propose to collect and leverage human feedback to boost the open-domain chatbot. By asking annotators to select or amend the model-generated candidate responses, Diamante efficiently collects a high-quality Chinese chit-chat dataset. Diamante introduces a novel generation-evaluation joint training paradigm, which leverages both the explicit human demonstrations and the implicit human preference that appear in the data collection process. Experimental results indicate that the Diamante dataset and joint training paradigm significantly improve pre-trained dialogue models.

7 ETHICS STATEMENT

In the dataset collection, annotators need to select or amend the model-generated candidate responses, where some candidates may contain potentially unsafe content. We ask annotators to produce safe and engaging responses. (As the model is pre-trained with social media comments, it may sometimes generate biased or harmful statements. During annotation, we have been monitoring the proportion of potentially unsafe candidates, which is less than 1%.)
After annotation, we further employ data experts to review the collected data and remove ineligible conversations. Diamante's dataset and joint training paradigm help boost the open-domain chatbot and align it well with human values. In practical deployments, it is desirable to employ further strategies to guarantee dialogue safety (Dinan et al., 2021), including sensitive topic detection, response safety classification, and so on.

8 REPRODUCIBILITY STATEMENT

We describe the collection of the Diamante dataset in Section 2 and Appendix B, including the annotation interface, annotation procedures, quality control process, etc. The Diamante dataset is now publicly available and can be accessed and downloaded under the license agreement at the data platform. We introduce the model designs in Section 3 and discuss the training configurations in Section 4.1.1. We have included the Diamante source code in the supplementary materials to facilitate reproducibility.

A SCORING CRITERIA IN HUMAN EVALUATION

The criteria used in the human evaluation are provided in Table 7.

B DATASET DETAILS

B.1 ANNOTATION INTERFACE

The original annotation interface of Diamante is in Chinese, as shown in Figure 6. The annotator first crafts the dialogue opening and then selects or amends the model-generated candidate responses to continue the conversation. The left-hand area displays the dialogue context and the input box. The top right-hand part provides a brief task description and a link to the detailed guidelines. The bottom right-hand part lists some inspiring topics or model-generated candidate responses.

B.2 QUALITY CONTROL

To ensure the annotation quality of the Diamante dataset, we designed and followed a rigorous quality control process. We engaged a vendor company to recruit experienced annotators, instructed them with detailed guidelines, set up admission tests, answered questions in an online shared room, and executed regular reviews within the annotation. After annotation, we ask data experts to review all collected conversations and remove a conversation whenever one expert deems it ineligible.

B.3 TOPIC VISUALIZATION

The topic visualization of the Diamante dataset is displayed in Figure 7. There are 26 categories in the topic tagger, and the Diamante dataset covers all of them. The top five topics are Society (23%), Entertainment (11%), People (10%), Education (8%), and Food & Drink (8%), which are in line with our daily life.

C FURTHER DISCUSSIONS

C.1 MORE EXPLORATION ON JOINT TRAINING

As shown in Table 5, the Diamante dataset and joint training paradigm bring significant improvements. To further analyze the effects of joint training, we carry out a pairwise comparison between the models with and without joint training (both being PLATO-XL trained on the Diamante dataset). We ask crowd-sourcing workers to compare the self-chat conversations generated by these two models and select the preferred one. The comparison in Figure 8 (upper bar) exhibits that the joint training paradigm is crucial in boosting the open-domain chatbot.

In Diamante, the joint training leverages the implicit human preference rH > rM that appeared in the data collection. We also explore applying the joint training to other conventional dialogue datasets, taking DuSinc (Zhou et al., 2022) as an example. To formulate training samples for the preference ranking rH > rM > rR, PLATO-XL is employed to simulate the model-generated responses. Two models (PLATO-XL with and without joint training) are trained on the DuSinc dataset.
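As an illustration of how such preference-ranking samples can be assembled, both for Diamante and for the simulated DuSinc setting, the sketch below mirrors the sampling rules described in Section 4.1.1. The helper name and the plain-string data representation are hypothetical, not the authors' code.

```python
import random

def build_quadruple(context, human_response, model_candidates, corpus_responses):
    """Assemble one (c, r_H, r_M, r_R) sample for the ranking r_H > r_M > r_R.
    `model_candidates` are the responses displayed (or simulated) for this
    context; `corpus_responses` is the pool for the random response r_R."""
    # Exclude candidates identical to the human response so that the
    # ordering r_H > r_M holds; r_M and r_R are re-drawn at every epoch.
    pool = [r for r in model_candidates if r != human_response]
    r_m = random.choice(pool)
    r_r = random.choice(corpus_responses)
    return context, human_response, r_m, r_r
```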
We randomly select 100 samples from the test set for static evaluation and ask crowd-sourcing workers to compare the responses generated by these two models. The comparison in Figure 8 (bottom bar) verifies the effectiveness and generality of the joint training paradigm.

C.2 SAFETY UNDER ADVERSARIAL ATTACK

The main experiments reveal that Diamante achieves better safety on normal/insensitive topics. To further analyze the safety performance under adversarial attacks, we asked annotators to interact with PLATO-XL on sensitive topics and induce unsafe responses from the model. The annotators were then asked to amend these unsafe responses into safe ones. These sensitive topics are designed and selected according to Chinese cultural and social norms, including harmful speech (e.g., offensive content, self-harm suggestions, and personal attacks), group discrimination (e.g., by region, gender, disability, or religion), misleading information (e.g., political controversies, ethnic division, and conspiracy theories), and so on. In total, we collected 1,000 samples (each including the adversarial dialogue context, the original unsafe response, and the amended safe response). We employ these samples to evaluate Diamante's safety under adversarial attacks. The automatic evaluation results in Figure 9 suggest that Diamante is adept at selecting safe responses. We also randomly selected 100 samples and employed crowd-sourcing workers to evaluate the generated responses. The results in Table 8 reveal that Diamante achieves a remarkable safety improvement, with 76% of its responses identified as safe. Even though Diamante is only trained with insensitive conversations, it absorbs human preferences and maintains good safety performance under adversarial attacks.

C.3 AUTOMATIC DIALOGUE EVALUATION

We also carry out automatic evaluation with rule-based and model-based metrics, including BLEU-2/4 (Chen & Cherry, 2014), Distinct-1/2 (Li et al., 2016), Unigram F1 (Dinan et al., 2019), and BERTScore (Zhang et al., 2019). The automatic evaluation results in Table 9 are inconsistent with the human evaluation results in Table 2; human evaluation remains the gold standard in open-domain chit-chat evaluation. The difference between Diamante and PLATO-XL is minor in the automatic evaluation, whereas Diamante significantly improves over PLATO-XL in the human evaluation.

C.4 CASE ANALYSIS WITH COMPARED APPROACHES

We provide two more examples, by PLATO-XL and XiaoIce, in Figure 10 and Figure 11. These two examples use the same starting utterances as the Diamante examples in Figure 4 and Figure 5.
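For reference on the rule-based metrics of Appendix C.3, below are minimal, standard implementations of Distinct-n and unigram F1 over tokenized text. These follow the common formulations in the cited papers and are not the authors' exact evaluation scripts.

```python
from collections import Counter

def distinct_n(responses, n):
    """Distinct-n (Li et al., 2016): the ratio of unique n-grams to the
    total number of n-grams across a corpus of tokenized responses."""
    ngrams = [tuple(tokens[i:i + n])
              for tokens in responses
              for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

def unigram_f1(hypothesis, reference):
    """Unigram F1 (Dinan et al., 2019) between a tokenized hypothesis and
    a tokenized reference."""
    overlap = sum((Counter(hypothesis) & Counter(reference)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(hypothesis)
    recall = overlap / len(reference)
    return 2 * precision * recall / (precision + recall)

# Toy usage on tokenized responses.
print(distinct_n([["hot", "pot", "is", "great"], ["hot", "pot", "again"]], 2))
print(unigram_f1(["I", "like", "hot", "pot"], ["I", "love", "hot", "pot"]))
```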
1. What is the focus and contribution of the paper regarding Chinese open-domain pre-trained dialogue models?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novelty and experimental results?
3. Do you have any concerns regarding the quality and details of the human annotations used in the study?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any missing examples or aspects that the authors should provide to better demonstrate the effectiveness and safety of their method?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
In this paper, the authors propose a new framework named Diamante to improve Chinese open-domain pre-trained dialogue models with human annotations, since these models have difficulty generating engaging utterances during a conversation. Diamante first builds a Chinese dataset by asking human annotators to choose or amend model-generated responses. Based on the dataset, it designs a joint training process that considers both the generation loss and the human preference estimation loss. Experimental results demonstrate the effectiveness of Diamante compared with existing Chinese open-domain pre-trained dialogue methods.

Strengths And Weaknesses
Strengths:
- Well written and easy to follow.
- It is the first work in Chinese to consider human preference in pre-trained dialogue models, based on human evaluations.
- Experimental results show a significant improvement over existing methods on all metrics.

Weaknesses:
- Limited novelty: It seems that the major difference between Diamante and LaMDA, which also focuses on the alignment with human preference, is the language. Although the authors claim that their joint-learning objective can be more effective than the sequential learning in LaMDA, there are no experiments to support this. The authors should further clarify their contributions, which currently appear incremental, as a combination of LaMDA and Chinese dialogue models such as PLATO-XL.
- Insufficient details: Since the alignment with human preference relies on the quality of the human evaluations, it is fundamental to report essential details such as the demographics of the crowd workers or the instructions given to them for annotation, as is done in LaMDA. The authors are encouraged to provide these details to address possible concerns about the quality of the human annotations. Moreover, if the annotations are implicit, it should be difficult for methods to distinguish safe from unsafe utterances. The authors should therefore also provide a detailed description of their dataset, including the average number of turns per conversation, and how and why utterances are selected, revised, or rewritten from machine-generated utterances.
- Missing examples: The authors only present examples from Diamante, which cannot demonstrate the improvement without a comparison against examples from the baselines, including PLATO-XL and common commercial chatbots. Besides, the examples only cover insensitive topics such as food and people. The authors should show more examples of their method and the baselines generating responses to sensitive topics such as Society or Culture. It is unclear whether Diamante can yield safer responses, like LaMDA, especially on these sensitive topics.

Clarity, Quality, Novelty And Reproducibility
This paper is well written and easy to follow. The quality of the manuscript is good, although there are still several typos, such as "18% of the utterances" rather than "18% utterances". However, more examples and details of the human evaluations should be provided to support the authors' claims. Moreover, the novelty is incremental, since the proposed method is a composition of existing models, especially when compared with the similar previous effort LaMDA. The authors appear to have released the dataset and published their code, so it should not be too challenging to reproduce their results.
ICLR
1. What is the focus and contribution of the paper on open-domain chatbots?
2. What are the strengths of the proposed approach, particularly in terms of the joint generation-evaluation training paradigm?
3. What are the weaknesses of the paper, especially regarding the dataset collection and potential biases?
4. Do you have any concerns about the effectiveness of the proposed approach compared to other methods?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
In this work, the authors propose a new framework called Diamante that aims to improve the performance of open-domain chatbots by incorporating human feedback (both explicit demonstrations and implicit preferences). The proposed framework has a generation-evaluation training paradigm that optimizes response generation and preference estimation together.

Contributions:
- The authors collect a new dataset for open-domain chit-chat conversation in Chinese, called the Diamante dataset.
- A new joint generation and evaluation framework for optimizing generation and human preference estimation together.
- Results from automated and human evaluation show that existing approaches trained with the new training strategy achieve significant gains on the metrics.

Strengths And Weaknesses
- Collection of a new crowd-sourced dataset for open-domain conversational agents in the Chinese language. How is the quality of the dataset measured? What steps are taken to prevent bias and safety issues from becoming part of the dataset?
- The joint training-generation framework is interesting in terms of modeling response generation. What is the hypothesis behind having r_H > r_M > r_R?
- Results show promising gains for the proposed approach on human evaluation, in both the static and the self-chat settings.
- Looking at the results in Table 6, it seems that most of the gains come from the dataset; the gains from joint training seem fairly minimal compared to PLATO-XL.

Clarity, Quality, Novelty And Reproducibility
The originality of the work comes from the collected dataset and the new training paradigm proposed to model response generation and human preference.
Title Towards Boosting the Open-Domain Chatbot with Human Feedback Abstract Many open-domain dialogue models pre-trained with social media comments can generate coherent replies but have difficulties producing engaging responses. This phenomenon might mainly result from the deficiency of annotated human-human conversations and the misalignment with human preference. In this paper, we propose a novel and efficient framework Diamante to boost the open-domain chatbot, where two kinds of human feedback (including explicit demonstration and implicit preference) are collected and leveraged. By asking annotators to select or amend the model-generated candidate responses, Diamante efficiently collects the human demonstrated responses and constructs a Chinese chit-chat dataset. To enhance the alignment with human preference, Diamante leverages the implicit preference in the data collection process and introduces the generation-evaluation joint training. Comprehensive experiments indicate that the Diamante dataset and joint training paradigm can significantly boost the performance of pre-trained dialogue models. The overall engagingness of the previous state-of-the-art model has been improved remarkably by 50% in Chinese open-domain conversations. 1 INTRODUCTION In recent years, the self-supervised pre-training based on tremendous unlabeled data has brought great success for many natural language processing tasks (Brown et al., 2020; Chowdhery et al., 2022). In dialogue generation, the pre-training is usually carried out with massive social media comments, acting as human-like conversations (Adiwardana et al., 2020; Bao et al., 2021; Thoppilan et al., 2022). Despite that these pre-trained dialogue models are capable of generating coherent replies, they have difficulties producing engaging responses. The main reasons for this phenomenon might be twofold. Firstly, there exists a considerable gap in the data distribution between the proxy human-like conversations (public group discussion) and the real human-human conversations (private two-way messaging). Secondly, the dialogue model usually outputs the response with the highest generation probability, which could reflect the probability mass over all the training data but might not align well with human preference (e.g., some biased or unsafe statements). One straightforward way to narrow the data distribution gap is to fine-tune the pre-trained dialogue model with annotated human-human conversations. For instance, Blender (Roller et al., 2021) employs four annotated datasets (Zhang et al., 2018; Dinan et al., 2019; Rashkin et al., 2019; Smith et al., 2020) to emphasize the conversational skills of personality, knowledge, empathy, and engagingness. As for the alignment with human preference, LaMDA (Thoppilan et al., 2022) defines and quantifies some critical metrics for dialogue evaluation, including safety, interestingness, and so on. By filtering out those candidate responses with poor performance on these metrics, the human preference towards the dialogue model has increased significantly. However, compared with English, the annotations of high-quality human-human conversations or dialogue evaluation samples are relatively scarce in other languages. As a result, even the state-of-the-art Chinese chatbot – PLATO-XL (Bao et al., 2021), is only pre-trained with social media comments and not involved with advanced response evaluation. 
In this paper, we propose a novel and efficient framework, namely Diamante, consisting of a data collection strategy and a learning method to boost the performance of pre-trained dialogue models. Two kinds of human feedback are collected and leveraged in Diamante, including explicit demonstration and implicit preference. Firstly, to bridge the gap in data distribution, Diamante collects an open-domain chit-chat dataset in Chinese with the assistance of PLATO-XL. Based on model-generated candidate responses, human annotators can efficiently produce an engaging response to continue the conversation. Secondly, we propose to leverage the implicit human preference that appeared in the data collection process, i.e., the annotator's selected or amended response is preferred over the other candidates. To this end, Diamante introduces a novel generation-evaluation joint training paradigm, where high-quality response generation and human preference estimation are learned simultaneously. During inference, the candidate response with the highest preference score is selected as the final response and returned to the user. Extensive experiments have been carried out to evaluate the effectiveness of the Diamante framework, including the collected dataset and the joint training paradigm. Experimental results reveal that Diamante significantly boosts PLATO-XL's performance and establishes a new state-of-the-art result in Chinese open-domain conversation. It is notable that, compared to the human reference, Diamante even achieves competitive or slightly better performance. In addition to PLATO-XL, Diamante brings remarkable improvements to other pre-trained dialogue models. The Diamante dataset is now publicly available and can be accessed and downloaded under the license agreement at the data platform (https://anonymous). We have also released all source code (https://github.com/anonymous), hoping to facilitate future research in dialogue generation.
2 DIAMANTE DATASET
In this paper, we collect an open-domain chit-chat dataset in Chinese with the assistance of a pre-trained dialogue model. In the following, we describe the creation of the Diamante dataset.
2.1 DATA COLLECTION
Diamante aims to explore an efficient way to collect a batch of high-quality chit-chat conversations that align well with human values. The data annotation interface is shown in Figure 1 (the original interface is in Chinese and displayed in Figure 6 of the Appendix). The data collection process is carried out as follows.
Step 1: Crafting the Dialogue Opening. Firstly, the annotator is encouraged to craft a start utterance based on any topic of interest, as an informative and engaging dialogue opening is critical to a good conversation. As shown in Figure 1, the start utterance is "My cat started shedding everywhere in the spring. How to deal with it?". We also provide various topics and examples in the guidelines to inspire annotators to write dialogue openings.
Step 2: Generating Candidate Responses with the Dialogue Model. Given the dialogue context, a dialogue model (PLATO-XL in the Diamante dataset) is employed to generate multiple candidate responses. To ensure the diversity of response content and conversation flow, we adopt top-k sampling as the decoding strategy and select seven candidates for demonstration to the annotator.
Step 3: Producing the Response with Human Feedback.
We then ask the annotator to select, revise, or rewrite a candidate to produce an appropriate response.
- Select. As large-scale dialogue models can generate coherent and occasionally interesting responses, the annotator is allowed to select one response directly from the candidates where appropriate.
- Revise. Given the possible defects in the candidate responses, such as a lack of consistency or attractiveness, the annotator can choose the preferred candidate and further revise it for better quality.
- Rewrite. If no appropriate candidate exists, the annotator needs to write a suitable and engaging response by themselves.
Iterating Step 2 & Step 3 to Continue the Dialogue. After collecting the response with human feedback, the conversation continues by iterating step 2 and step 3. The dialogue collection, with the human and model in the loop, continues for at least seven rounds. To ensure the annotation quality of the Diamante dataset, we also designed and followed a rigorous quality control process, with details discussed in the Appendix.
The above data collection strategy works well in terms of efficiency and quality. The annotator can produce the final response efficiently by directly selecting or amending the model-generated candidates. The conversation quality is guaranteed or enhanced by the human annotator's verification or embellishment. Moreover, the implicit human preference that appeared in the data collection process also allows the training of a preference estimation model without additional annotation.
2.2 DATA ANALYSIS
Corpus Statistics. In total, 147 annotators participated in the dataset collection. The detailed statistics of the Diamante dataset are summarized in Table 1. The dataset consists of 6,838 dialogues with 98,115 utterances, and the average utterance length is about 14.25. We split the collected data into train, validation, and test sets. As for the annotator operation proportions, 18% of the utterances are produced from Select, 41% from Revise, and 41% from Rewrite.
Dialogue Topics. The Diamante dataset covers open-domain chit-chat and is not limited to any topic. For further quantitative analysis, we employ the topic tagger on the Baidu AI platform (https://ai.baidu.com/tech/nlp_apply/topictagger) to categorize the dialogues. (The topic visualization of the Diamante dataset is displayed in Figure 7 of the Appendix.) The results show that the Diamante dataset covers all 26 main categories. The top five topics are Society (23%), Entertainment (11%), People (10%), Education (8%), and Food & Drink (8%), which are in line with our daily life.
3 GENERATION-EVALUATION JOINT TRAINING
In this paper, we propose to leverage not only the explicit human demonstrations but also the implicit human preference that appeared in the data collection to boost the open-domain chatbot comprehensively. A novel generation-evaluation joint training paradigm is introduced and illustrated in Figure 2, where high-quality response generation and human preference estimation are optimized simultaneously.
The classical training objective of dialogue generation is to minimize the negative log-likelihood (NLL) loss:

$\mathcal{L}_{NLL} = -\log p_\theta(r_H \mid c)$   (1)

where $c$ refers to the dialogue context and $r_H$ is the human annotator's selected or amended response. Besides generation, Diamante encodes evaluation into the joint optimization to enhance the alignment with human preference.
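For concreteness, Eq. (1) is the standard teacher-forced cross-entropy over the tokens of $r_H$. A minimal PyTorch sketch is given below; the paper's actual implementation is in PaddlePaddle, so this is an illustration rather than the released code, and the tensor layout is an assumption.

```python
import torch
import torch.nn.functional as F

def nll_loss(logits: torch.Tensor, response_ids: torch.Tensor,
             pad_id: int = 0) -> torch.Tensor:
    """Eq. (1): token-level negative log-likelihood of r_H given c.

    logits:       (batch, resp_len, vocab) decoder outputs, teacher-forced
                  on the context c and the gold response r_H.
    response_ids: (batch, resp_len) token ids of r_H; padding is ignored.
    """
    return F.cross_entropy(
        logits.transpose(1, 2),  # cross_entropy expects (batch, vocab, len)
        response_ids,
        ignore_index=pad_id,
    )
```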
Recall that in the data collection process there exists implicit human preference: given the dialogue context $c$, the final response $r_H$ is preferred by human annotators over a model-generated candidate $r_M \in R_M$ (displayed during annotation). Moreover, either $r_H$ or $r_M$ is better than a randomly selected response $r_R$ in most cases. As such, we have the following preference ranking: $r_H > r_M > r_R$. The preference estimation (PE) loss is then defined as:

$\mathcal{L}_{PE} = -\frac{1}{3}\Big[\log\big(\sigma(s(c, r_H) - s(c, r_M))\big) + \log\big(\sigma(s(c, r_H) - s(c, r_R))\big) + \log\big(\sigma(s(c, r_M) - s(c, r_R))\big)\Big]$   (2)

where the input is a quadruple $(c, r_H, r_M, r_R)$, $\sigma(\cdot)$ is the sigmoid function, and $s(\cdot)$ is the scalar output of the model. The total objective of the generation-evaluation joint training is to minimize the following integrated loss:

$\mathcal{L} = \mathcal{L}_{NLL} + \mathcal{L}_{PE}$   (3)

The first term helps the model learn to mimic human demonstrations and generate high-quality candidate responses, and the second term helps the model learn the nuanced distinctions among human preferences. During inference, we adopt top-k sampling to produce multiple candidate responses and then rank them with their corresponding preference estimation scores. The one with the highest preference score is selected as the final response and returned to the user. Notably, the preference estimation follows the candidate response decoding and only involves one more token of processing, which incurs negligible computational cost.
One work similar to Diamante's joint training is LaMDA (Thoppilan et al., 2022), where a single model functions as both a generator and a discriminator. In comparison, there exist several critical differences between Diamante and LaMDA. Firstly, LaMDA chooses to learn the discriminator and generator sequentially. By contrast, Diamante optimizes generation and evaluation simultaneously, trying to avoid the catastrophic forgetting issue of two-stage training (Kirkpatrick et al., 2017; Liu et al., 2022b). Secondly, LaMDA defines fine-grained dialogue evaluation metrics and collects corresponding discriminator training samples. Considering the expensive cost of data collection and the difficulty of reaching an agreement in fine-grained dialogue evaluation (Smith et al., 2022), Diamante leverages the implicit human preference as the overall evaluation and gets rid of additional annotations. Thirdly, as suggested in works on human alignment (Askell et al., 2021), the ranked preference evaluation adopted in Diamante performs better than the binary discrimination used in LaMDA.
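To make the joint objective concrete, here is a minimal sketch of Eqs. (2)-(3), again in PyTorch rather than the paper's PaddlePaddle. The scores are assumed to come from the model's preference head; only the loss arithmetic follows directly from the equations above.

```python
import torch
import torch.nn.functional as F

def preference_loss(s_h: torch.Tensor, s_m: torch.Tensor,
                    s_r: torch.Tensor) -> torch.Tensor:
    """Eq. (2): pairwise ranking loss enforcing r_H > r_M > r_R.

    s_h, s_m, s_r: (batch,) scalar scores s(c, r) for the human response,
    a displayed model candidate, and a random response.
    log(sigmoid(x)) is computed as F.logsigmoid(x) for numerical stability.
    """
    return -(F.logsigmoid(s_h - s_m)
             + F.logsigmoid(s_h - s_r)
             + F.logsigmoid(s_m - s_r)).mean() / 3.0

def joint_loss(l_nll: torch.Tensor, s_h: torch.Tensor,
               s_m: torch.Tensor, s_r: torch.Tensor) -> torch.Tensor:
    """Eq. (3): integrated generation-evaluation objective."""
    return l_nll + preference_loss(s_h, s_m, s_r)
```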
4 EXPERIMENTS
4.1 SETTINGS
4.1.1 IMPLEMENTATION DETAILS
We apply the Diamante dataset and joint training paradigm to boost PLATO-XL's performance. In the generation-evaluation joint training, the input samples are formulated as quadruples $(c, r_H, r_M, r_R)$, where $c$ is the dialogue context, $r_H$ is the human annotator's selected or amended response, $r_M$ is one candidate response displayed during annotation, and $r_R$ is one randomly selected response from the dataset. During the construction of joint training samples, if the sampled model-generated candidate $r_M$ is found to be the same as the human-generated response $r_H$, $r_M$ is re-sampled to guarantee agreement with the preference ranking $r_H > r_M$. In addition, $r_M$ and $r_R$ are re-sampled at each training epoch. The model is initialized with the 11B-parameter PLATO-XL, with the PrefixLM transformer architecture (Radford et al., 2018; Dong et al., 2019). There are 72 transformer blocks and 32 attention heads, with an embedding dimension of 3072; the hidden dimension of the feed-forward layer is set to 18432. The preference estimation value $s(\cdot)$ is obtained through one fully-connected layer (converting the transformer output into one scalar).
The hyper-parameter settings used in the training process are as follows. The maximum sequence length of context and response is set to 384 and 128, respectively. We use Adam (Kingma & Ba, 2015) as the optimizer, with a learning rate scheduler including a linear warmup and an invsqrt decay (Vaswani et al., 2017). The peak learning rate is set to 2e-6, and the warmup step is set to 500. The model is trained for five epochs with a batch size of 168. The implementation is based on the PaddlePaddle framework, and the experiments are carried out on 8 Nvidia A100 GPUs (40GB RAM). During inference, we adopt top-k sampling (k set to 10) to produce 20 candidate responses and select the one with the highest preference estimation score as the final response.
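The per-epoch quadruple re-sampling and the inference-time candidate ranking described above can be sketched as follows. The record fields and model methods (`sample`, `preference_score`) are assumed interfaces for illustration, not the released dataset schema or the actual PLATO-XL API; in particular, drawing $r_R$ from the pool of human responses is one plausible reading of "randomly selected from the dataset".

```python
import random

def build_epoch_quadruples(records, seed):
    """Re-draw (c, r_H, r_M, r_R) training quadruples for one epoch."""
    rng = random.Random(seed)
    pool = [rec["r_h"] for rec in records]  # pool for random responses r_R
    quadruples = []
    for rec in records:
        # Re-sample r_M so it differs from r_H, keeping r_H > r_M meaningful
        # (assumes at least one displayed candidate differs from r_H).
        cands = [r for r in rec["candidates"] if r != rec["r_h"]]
        quadruples.append(
            (rec["context"], rec["r_h"], rng.choice(cands), rng.choice(pool)))
    return quadruples

def respond(model, context, k=10, n_candidates=20):
    """Inference: decode candidates with top-k sampling, then return the
    candidate whose preference score s(c, r) is highest."""
    candidates = [model.sample(context, top_k=k) for _ in range(n_candidates)]
    return max(candidates, key=lambda r: model.preference_score(context, r))
```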
4.1.2 COMPARED APPROACHES
In the experiments, the following Chinese dialogue models are considered:
• CDial-GPT (Wang et al., 2020) is a 104M-parameter model trained on LCCC conversations.
• EVA2.0 (Gu et al., 2022) is a 2.8B-parameter model pre-trained on cleaned WDC-Dialogue.
• PLATO-XL (Bao et al., 2021) is the largest Chinese dialogue model, with up to 11B parameters, pre-trained on social media conversations.
In addition to the above dialogue models, the following commercial chatbots in Chinese are included: Microsoft XiaoIce (Zhou et al., 2020), Xiao AI, Tmall Genie, and Apple Siri.
4.1.3 EVALUATION METRICS
In the experiments, we employ crowd-sourcing workers to evaluate the dialogue quality in four aspects: coherence, informativeness, safety, and engagingness. We discuss these criteria below and provide scoring details in Appendix A.
• Coherence assesses whether the response is relevant to and consistent with the context.
• Informativeness evaluates whether the response includes appropriate information.
• Safety evaluates whether the response contains harmful, biased, or misleading content.
• Engagingness measures the willingness to have a long conversation with the partner.
Coherence, informativeness, and safety are utterance-level metrics, while engagingness is a dialogue-level metric. These metrics are scored on a scale of {0, 1, 2}, with higher scores being better. Each sample is distributed to three crowd-sourcing workers, and the final score is determined through majority voting.
4.2 EXPERIMENTAL RESULTS
Considering the limitations of automatic dialogue evaluation (Liu et al., 2016), we employ crowd-sourcing workers to evaluate the dialogue quality, including static evaluation, self-chat evaluation, and human-bot chat evaluation.
4.2.1 STATIC EVALUATION
In the static evaluation, we randomly select 100 samples from the test set and employ the models to generate the response given the multi-turn dialogue context. In addition to PLATO-XL and Diamante, we also provide the performance of the ground truth for reference. The evaluation results are summarized in Table 2. Diamante significantly improves the response quality on all criteria compared to PLATO-XL, and even achieves competitive or slightly better performance than the human reference. For a detailed analysis, we further reviewed the 14/100 cases where Diamante achieved a higher engagingness score than the human reference. We found that the possible reasons for this phenomenon are twofold. Firstly, it is difficult for annotators to keep producing attractive and engaging responses at every round of a multi-turn conversation, a pattern consistent with our daily conversations. Secondly, Diamante encodes preference estimation in the joint training to enhance the alignment with human preference, which helps it select the human-preferred response among the candidates.
4.2.2 SELF-CHAT EVALUATION
As suggested by Adiwardana et al. (2020), the static evaluation can be biased by the construction of the dialogue context. Therefore, we also include interactive evaluation in the experiments, comprising self-chat evaluation and human-bot chat evaluation. Following the settings in PLATO-XL, 50 open-domain utterances are selected as dialogue openings, and models play the roles of both partners to continue the conversation for 5 rounds. These conversations are then distributed to crowd-sourcing workers for evaluation. The self-chat evaluation results are summarized in Table 3. Diamante outperforms the other models in all evaluation aspects and establishes a new state-of-the-art result in Chinese open-domain conversation. In particular, Diamante achieves a remarkable 50% improvement on the metric of engagingness compared to PLATO-XL. These results verify the effectiveness of the Diamante dataset and the generation-evaluation joint training paradigm.
4.2.3 HUMAN-BOT CHAT EVALUATION
In addition to the above dialogue models, Diamante is compared to common commercial chatbots in Chinese through human-bot chat evaluations. We select 20 high-frequency topics from a deployed chatbot and ask in-house data specialists to interact with these chatbots for 7-14 rounds. The human-bot chat evaluation results are summarized in Table 4. Diamante consistently outperforms the rest of the commercial chatbots by a large margin across all the human evaluation metrics. These results indicate that Diamante can produce high-quality responses when interacting with real users.
The Fleiss' kappa (Fleiss, 1971) scores for the static evaluation, self-chat evaluation, and human-bot chat evaluation are 0.433, 0.468, and 0.424, respectively. This suggests that the crowd-sourcing workers reached a moderate agreement in human evaluation.
4.3 DISCUSSIONS
4.3.1 ABLATION STUDY ON JOINT TRAINING
As discussed in previous sections, the improvements of Diamante over PLATO-XL come from two aspects: the Diamante dataset bridges the distribution gap towards human-human conversations, and the joint training paradigm enhances the alignment with human preference. For further dissection, we carry out ablation studies on joint training as follows. Without joint training, PLATO-XL is trained with the Diamante dataset to minimize the NLL loss, and the final response is selected based on generation probability during inference. With joint training, PLATO-XL is trained with the Diamante dataset to minimize the generation-evaluation integrated loss, and the final response is selected based on preference estimation during inference.
Firstly, we conduct automatic evaluations of response selection on the test set to compare these two approaches. Each dialogue context has one human-annotated response and seven model-generated candidates (displayed during annotation). The experiments evaluate the ranking of the reference response among these candidates.
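For reference, ranking metrics of this kind can be computed as sketched below; since each context has a single reference response, MAP coincides with MRR. This is an illustrative computation, not the paper's evaluation script.

```python
def ranking_metrics(ranks):
    """MRR and P@1 given, for each test context, the 1-based rank of the
    reference response among the eight scored candidates. With one
    relevant item per context, MAP reduces to MRR."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    p_at_1 = sum(1 for r in ranks if r == 1) / len(ranks)
    return {"MAP": mrr, "MRR": mrr, "P@1": p_at_1}

# Illustrative usage: ranks of the reference response on four contexts.
print(ranking_metrics([1, 2, 1, 4]))  # {'MAP': 0.6875, 'MRR': 0.6875, 'P@1': 0.5}
```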
The results are reported in terms of mean average precision (MAP), mean reciprocal rank (MRR), and precision at position 1 (P@1), as summarized in Figure 3. The preference estimation of the joint training is adept at selecting the response that aligns well with human preference. By contrast, the generation probability has difficulty capturing the nuanced distinctions and delivers almost random performance in response ranking. Secondly, we conduct human evaluations to compare these two approaches, with self-chat evaluation results summarized in Table 5. As exhibited in the comparison, the absence of joint training leads to a substantial performance decrease in engagingness, informativeness, and safety. These results validate that the joint training paradigm improves the alignment with human preference and plays a critical role in boosting the open-domain chatbot.
4.3.2 APPLYING DIAMANTE TO OTHER DIALOGUE MODELS
Although the Diamante dataset is collected with the assistance of PLATO-XL and the main experiments evaluate Diamante's improvements over PLATO-XL, the framework is indeed universal and not limited to one particular dialogue model. Further explorations of applying Diamante to other dialogue models are carried out, with CDial-GPT taken as an example. The self-chat evaluation results are summarized in Table 6. Compared to the original model, applying Diamante to CDial-GPT brings remarkable improvements across all evaluation metrics, verifying the effectiveness of Diamante in boosting the performance of Chinese pre-trained dialogue models.
4.3.3 CASE ANALYSIS
We provide two cherry-picked examples in Figure 4 and Figure 5 for qualitative analysis. In the self-chat example, the dialogue opening is about favorite food, and the model plays the roles of both partners to continue the conversation. The two speakers have an in-depth discussion on hot pot, covering topics from favorite dishes to dipping sauce recipes. In the human-bot chat example, the bot expresses its opinions on the ideal partner and maintains them well within the multi-turn conversation (i.e., that personality is more important). At the same time, the bot respects the different opinions of the other speaker and exhibits a good alignment with human values.
5 RELATED WORK
5.1 HUMAN FEEDBACK
With the rapid development of large language models, it becomes critical to build helpful, honest, and harmless language assistants, keeping in mind the alignment with human values (Askell et al., 2021; Bai et al., 2022; Glaese et al., 2022). Given the misalignment of the conventional training objective and the ultimate human preference, some works (such as WebGPT (Nakano et al., 2021) and InstructGPT (Ouyang et al., 2022)) leverage human feedback to train a reward model and optimize towards this proxy objective using reinforcement learning. There are similar works in dialogue generation (Yi et al., 2019; Jaques et al., 2020), where the reward combines multifaceted evaluation scores, including sentiment, repetition, coherence, etc. When using these reinforcement learning-based approaches, one needs to be careful with the "alignment tax" and not optimize too much (Liu et al., 2022a). In addition to the above reinforcement learning approaches, some works in dialogue generation (Hancock et al., 2019; Shuster et al., 2020; Xu et al., 2022) continue supervised training with human feedback, with the primary motivation of lifelong learning.
The dialogue agent iterates the following steps: deploy the dialogue model, collect the human-model conversations, and update the model with the newly collected samples. During this process, only the human responses are used to update the model, and special attention is required to avoid low-quality responses from trolls (Ju et al., 2022). In comparison, Diamante involves human workers during the development phase rather than after deployment, which brings several benefits. Firstly, human annotators in Diamante have access to model-generated candidate responses and can efficiently formulate a high-quality conversation, whereas other approaches collect demonstrations from human workers conversing with canned responses, which inevitably interrupts the conversation flow and decreases quality. Besides, the Diamante dataset is collected with recruited annotators, eliminating the adverse impact of trolls. Secondly, in addition to the explicit human demonstration, there exists implicit human preference in Diamante's data collection process, which allows the training of a preference estimation model without additional annotation.
5.2 OPEN-DOMAIN DIALOGUE DATASET
Given the limited number of annotated human-human conversations, open-domain dialogue models are typically pre-trained with human-like conversations collected from social media, such as Twitter, Reddit, Weibo, and Douban. To alleviate the problems brought by the data distribution gap, it has become common to fine-tune these dialogue models with annotated human-human conversations. Representative English datasets include DailyDialog (Li et al., 2017), ConvAI2 (Zhang et al., 2018), Empathetic Dialogues (Rashkin et al., 2019), Wizard of Wikipedia (Dinan et al., 2019), Blended Skill Talk (Smith et al., 2020), etc. In comparison, high-quality annotations of human-human conversations are relatively scarce in other languages. Most Chinese chit-chat datasets are constructed from social media comments, including LCCC (Wang et al., 2020), WDC-Dialogue (Zhou et al., 2021), and so on. To our knowledge, the Diamante dataset is the first chit-chat dataset with annotated human-human conversations in Chinese. It is worth noting that Diamante is not a simple fix to this limitation in Chinese conversation; it provides a systematic data collection strategy that is applicable to all languages with high efficiency.
6 CONCLUSION
In this paper, we propose to collect and leverage human feedback to boost the open-domain chatbot. By asking annotators to select or amend the model-generated candidate responses, Diamante efficiently collects a high-quality Chinese chit-chat dataset. Diamante introduces a novel generation-evaluation joint training paradigm, which leverages both the explicit human demonstrations and the implicit human preference that appeared in the data collection process. Experimental results indicate that the Diamante dataset and joint training paradigm significantly improve pre-trained dialogue models.
7 ETHICS STATEMENT
In the dataset collection, annotators need to select or amend the model-generated candidate responses, where some candidates may contain potentially unsafe content. We ask annotators to produce safe and engaging responses. (As the model is pre-trained with social media comments, it may sometimes generate biased or harmful statements. During annotation, we have been monitoring the proportion of potentially unsafe candidates, which is less than 1%.)
After annotation, we further employ data experts to review the collected data and remove ineligible conversations. Diamante's dataset and joint training paradigm help boost the open-domain chatbot and align it well with human values. In practical deployments, it is desirable to employ more strategies to guarantee dialogue safety (Dinan et al., 2021), including sensitive topic detection, response safety classification, and so on.
8 REPRODUCIBILITY STATEMENT
We describe the collection of Diamante's dataset in Section 2 and Appendix B, including the annotation interface, annotation procedures, quality control process, etc. The Diamante dataset is now publicly available and can be accessed and downloaded under the license agreement at the data platform. We introduce the model designs in Section 3 and discuss the training configurations in Section 4.1.1. We have included the Diamante source code in the supplementary materials to facilitate reproducibility.
A SCORING CRITERIA IN HUMAN EVALUATION
The criteria used in human evaluation are provided in Table 7.
B DATASET DETAILS
B.1 ANNOTATION INTERFACE
The original annotation interface of Diamante is in Chinese, as shown in Figure 6. The annotator first crafts the dialogue opening and then selects or amends the model-generated candidate responses to continue the conversation. The left-hand area displays the dialogue context and the input box. The top right-hand part provides a brief task description and a link to the detailed guidelines. The bottom right-hand part lists some inspiring topics or model-generated candidate responses.
B.2 QUALITY CONTROL
To ensure the annotation quality of the Diamante dataset, we designed and followed a rigorous quality control process. We engaged a vendor company to recruit experienced annotators, instructed them with detailed guidelines, set up admission tests, answered questions in an online shared room, and executed regular reviews during annotation. After annotation, we ask data experts to review all collected conversations and remove a conversation whenever one expert deems it ineligible.
B.3 TOPIC VISUALIZATION
The topic visualization of the Diamante dataset is displayed in Figure 7. There are 26 categories in the topic tagger, and the Diamante dataset covers all of them. The top five topics are Society (23%), Entertainment (11%), People (10%), Education (8%), and Food & Drink (8%), which are in line with our daily life.
C FURTHER DISCUSSIONS
C.1 MORE EXPLORATION ON JOINT TRAINING
As shown in Table 5, the Diamante dataset and joint training paradigm bring significant improvements. To further analyze the effects of joint training, we carry out a pairwise comparison between models with and without joint training (PLATO-XL trained on the Diamante dataset). We ask crowd-sourcing workers to compare the self-chat conversations generated by these two models and select the preferred one. The comparison in Figure 8 (upper bar) exhibits that the joint training paradigm is crucial in boosting the open-domain chatbot.
In Diamante, the joint training leverages the implicit human preference that appeared in the data collection ($r_H > r_M$). We also explore applying the joint training to other conventional dialogue datasets, with DuSinc (Zhou et al., 2022) taken as an example. To formulate training samples for the preference ranking $r_H > r_M > r_R$, PLATO-XL is employed to simulate model-generated responses. Two models (PLATO-XL with joint training and PLATO-XL without joint training) are trained on the DuSinc dataset.
We randomly select 100 samples from the test set for static evaluation and ask crowd-sourcing workers to compare the responses generated by these two models. The comparison in Figure 8 (bottom bar) verifies the effectiveness and generality of the joint training paradigm.
C.2 SAFETY UNDER ADVERSARIAL ATTACK
The main experiments reveal that Diamante achieves better safety on normal/insensitive topics. To further analyze the safety performance under adversarial attacks, we asked annotators to interact with PLATO-XL on sensitive topics and induce unsafe responses from the model. The annotators were then asked to amend these unsafe responses into safe ones. These sensitive topics are designed and selected according to Chinese cultural and social norms, including harmful speech (e.g., offensive content, self-harm suggestions, and personal attacks), group discrimination (e.g., region, gender, disability, and religion), misleading information (e.g., political controversies, ethnic division, and conspiracy theories), and so on. In total, we collected 1000 samples, each including an adversarial dialogue context, the original unsafe response, and the amended safe response. We employ these samples to evaluate Diamante's safety under adversarial attacks. The automatic evaluation results in Figure 9 suggest that Diamante is adept at selecting safe responses. We also randomly selected 100 samples and employed crowd-sourcing workers to evaluate the generated responses. The results in Table 8 reveal that Diamante achieves a remarkable safety improvement, with 76% of responses identified as safe. Even though Diamante is only trained with insensitive conversations, it absorbs human preferences and maintains good safety performance under adversarial attacks.
C.3 AUTOMATIC DIALOGUE EVALUATION
We also carry out automatic evaluation with rule-based and model-based metrics, including BLEU-2/4 (Chen & Cherry, 2014), Distinct-1/2 (Li et al., 2016), Unigram F1 (Dinan et al., 2019), and BERTScore (Zhang et al., 2019). The automatic evaluation results in Table 9 are inconsistent with the human evaluation results in Table 2; human evaluation remains the gold standard in open-domain chit-chat evaluation. The difference between Diamante and PLATO-XL is minor in automatic evaluation, whereas Diamante significantly improves over PLATO-XL in human evaluation.
C.4 CASE ANALYSIS WITH COMPARED APPROACHES
We provide two more examples, by PLATO-XL and XiaoIce, in Figure 10 and Figure 11. These two examples use the same starting utterances as the Diamante examples in Figure 4 and Figure 5.
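As a reference for the Distinct-1/2 metric reported in C.3, a common formulation (the ratio of unique n-grams to total n-grams pooled over generated responses) can be computed as sketched below. This follows the standard definition of Li et al. (2016) under stated assumptions, not the paper's exact evaluation script.

```python
def distinct_n(responses, n):
    """Distinct-n: unique n-grams / total n-grams, pooled over all
    tokenized responses."""
    total, unique = 0, set()
    for tokens in responses:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# Illustrative usage on whitespace-tokenized responses.
resps = [s.split() for s in ["i like hot pot", "i like dumplings"]]
print(distinct_n(resps, 1), distinct_n(resps, 2))
```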
1. What is the focus and contribution of the paper regarding dialogue models?
2. What are the strengths and weaknesses of the proposed approach, particularly in its training setup and data collection strategy?
3. Do you have any concerns about the model's ability to generate diverse topics or leverage revisions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the model's performance or expanding its capabilities?
Summary Of The Paper
The paper proposes a data collection strategy, combined with human preference capture, and a joint-objective learning method to boost the performance of pre-trained dialogue models. The paper is very well written and contains descriptions of the collected dataset along with data analysis, as well as a learning strategy that leverages joint training via cross-entropy and contrastive learning (contrasting human-preferred dialogue with model-generated and random responses). The paper also conducts extensive human evaluation of the model responses along with ablation studies.
Strengths And Weaknesses
- The paper uses standard ML methods for the training strategies, without significant technical innovation. However, the model training setup is quite interesting, combining mask learning with contrastive learning using implicit human preferences.
- The model training does not explicitly seem to leverage the revisions, which would have been interesting to model.
- Step 1 (crafting the dialogue opening) is not very innovative. The authors mention that they suggest topics in the guidelines, but there is no AI-assisted or explicit strategy to encourage users to discuss diverse topics.
- Additional AI assistance in the interface could help with faster data collection (e.g., sentence completion in the text box, with users selecting or rejecting the sentence-level word completions).
- All the results suggest that this model is better on all the studied metrics in all the comparisons. This is a very strong result; however, a similar analysis on another dataset (perhaps in another language) would better convince readers of the strength of the model fine-tuning.
Clarity, Quality, Novelty And Reproducibility
The paper is very well written and reproducible (code will be released). There is a very interesting NLP contribution here, but less novelty in terms of the representation learning methods used. The work takes inspiration from the LaMDA model training strategies but claims to overcome the identified gaps in the LaMDA model. The paper uses standard generation strategies (top-k sampling instead of alternatives such as diversity-based sampling methods from the literature to encourage more diverse model responses).
ICLR
Title Towards Boosting the Open-Domain Chatbot with Human Feedback Abstract Many open-domain dialogue models pre-trained with social media comments can generate coherent replies but have difficulties producing engaging responses. This phenomenon might mainly result from the deficiency of annotated human-human conversations and the misalignment with human preference. In this paper, we propose a novel and efficient framework Diamante to boost the open-domain chatbot, where two kinds of human feedback (including explicit demonstration and implicit preference) are collected and leveraged. By asking annotators to select or amend the model-generated candidate responses, Diamante efficiently collects the human demonstrated responses and constructs a Chinese chit-chat dataset. To enhance the alignment with human preference, Diamante leverages the implicit preference in the data collection process and introduces the generation-evaluation joint training. Comprehensive experiments indicate that the Diamante dataset and joint training paradigm can significantly boost the performance of pre-trained dialogue models. The overall engagingness of the previous state-of-the-art model has been improved remarkably by 50% in Chinese open-domain conversations. 1 INTRODUCTION In recent years, the self-supervised pre-training based on tremendous unlabeled data has brought great success for many natural language processing tasks (Brown et al., 2020; Chowdhery et al., 2022). In dialogue generation, the pre-training is usually carried out with massive social media comments, acting as human-like conversations (Adiwardana et al., 2020; Bao et al., 2021; Thoppilan et al., 2022). Despite that these pre-trained dialogue models are capable of generating coherent replies, they have difficulties producing engaging responses. The main reasons for this phenomenon might be twofold. Firstly, there exists a considerable gap in the data distribution between the proxy human-like conversations (public group discussion) and the real human-human conversations (private two-way messaging). Secondly, the dialogue model usually outputs the response with the highest generation probability, which could reflect the probability mass over all the training data but might not align well with human preference (e.g., some biased or unsafe statements). One straightforward way to narrow the data distribution gap is to fine-tune the pre-trained dialogue model with annotated human-human conversations. For instance, Blender (Roller et al., 2021) employs four annotated datasets (Zhang et al., 2018; Dinan et al., 2019; Rashkin et al., 2019; Smith et al., 2020) to emphasize the conversational skills of personality, knowledge, empathy, and engagingness. As for the alignment with human preference, LaMDA (Thoppilan et al., 2022) defines and quantifies some critical metrics for dialogue evaluation, including safety, interestingness, and so on. By filtering out those candidate responses with poor performance on these metrics, the human preference towards the dialogue model has increased significantly. However, compared with English, the annotations of high-quality human-human conversations or dialogue evaluation samples are relatively scarce in other languages. As a result, even the state-of-the-art Chinese chatbot – PLATO-XL (Bao et al., 2021), is only pre-trained with social media comments and not involved with advanced response evaluation. 
In this paper, we propose a novel and efficient framework, namely Diamante, consisting of a data collection strategy and a learning method to boost the performance of pre-trained dialogue models. Two kinds of human feedback are collected and leveraged in Diamante, including explicit demonstration and implicit preference. Firstly, to bridge the gap in data distribution, Diamante collects an open-domain chit-chat dataset in Chinese with the assistance of PLATO-XL. Based on modelgenerated candidate responses, human annotators can efficiently produce an engaging response to continue the conversation. Secondly, we propose to leverage the implicit human preference that appeared in the data collection process, i.e., the annotator’s selected or amended response is preferred over the other candidates. To this end, Diamante introduces a novel generation-evaluation joint training paradigm, where high-quality response generation and human preference estimation are learned simultaneously. During inference, the candidate response with the highest preference score would be selected as the final response and returned to the user. Extensive and intensive experiments have been carried out to evaluate the effectiveness of the Diamante framework, including the collected dataset and joint training paradigm. Experimental results reveal that Diamante significantly boosts PLATO-XL’s performance and establishes a new state-of-the-art result in Chinese open-domain conversation. It is notable that compared to the human reference, Diamante even achieves competitive or slightly better performance. In addition to PLATO-XL, Diamante brings remarkable improvements to other pre-trained dialogue models. The Diamante dataset is now publicly available, which can be accessed and downloaded under the license agreement at the data platform1. We have also released all source code2, hoping to facilitate future research in dialogue generation. 2 DIAMANTE DATASET In this paper, we collect an open-domain chit-chat dataset in Chinese with the assistance of a pretrained dialogue model. In the following, we will describe the creation of the Diamante dataset. 2.1 DATA COLLECTION Diamante aims to explore an efficient way to collect a batch of high-quality chit-chat conversations that align well with human values. The data annotation interface is shown in Figure 1 (the original interface is in Chinese and displayed in Figure 6 of the Appendix). The data collection process is carried out as follows. Step 1: Crafting the Dialogue Opening. Firstly, the annotator is encouraged to craft a start utterance based on any topic of interest, as an informative and engaging dialogue opening is critical to a good conversation. As shown in Figure 1, the start utterance is “My cat started shedding everywhere in the spring. How to deal with it?”. We also provide various topics and examples in the guidelines to inspire annotators to write dialogue openings. 1The Diamante dataset is publicly available at https://anonymous. 2The Diamante source code is available at https://github.com/anonymous. Step 2: Generating Candidate Responses with the Dialogue Model. Given the dialogue context, a dialogue model (PLATO-XL in the Diamante dataset) is employed to generate multiple candidate responses. To ensure the diversity of response content and conversation flow, we adopt the top-k sampling as the decoding strategy and select seven candidates for the demonstration to the annotator. Step 3: Producing Response with Human Feedback. 
We then ask the annotator to select, revise or rewrite the candidate to produce an appropriate response. - Select. As large-scale dialogue models can generate coherent and occasionally interesting responses, the annotator is allowed to select one response directly from the candidates where appropriate. - Revise. Given the possible defects in the candidate responses, such as a lack of consistency or attractiveness, the annotator can choose the preferred candidate and further revise it for better quality. - Rewrite. If no appropriate candidate exists, the annotator needs to write a suitable and engaging response by themselves. Iterating Step 2 & Step 3 to Continue the Dialogue. After collecting the response with human feedback, the conversation will continue by iterating step 2 and step 3. The dialogue collection with the human-model in the loop will continue for at least seven rounds. To ensure the annotation quality of the Diamante dataset, we also designed and followed a rigorous quality control process, with details discussed in the Appendix. The above data collection strategy works well in terms of efficiency and quality. The annotator can produce the final response efficiently by directly selecting or amending the model-generated candidates. The conversation quality is guaranteed or enhanced with the human annotator’s verification or embellishment. Moreover, the implicit human preference that appeared in the data collection process also allows the training of one preference estimation model without additional annotation. 2.2 DATA ANALYSIS Corpus Statistics. In total, 147 annotators participated in the dataset collection. The detailed statistics of the Diamante dataset are summarized in Table 1. The dataset consists of 6,838 dialogues with 98,115 utterances, and the average utterance length is about 14.25. We split the collected data into train, validation, and test sets. As for the annotator operation proportions, 18% of the utterances are produced from Select, 41% from Revise, and 41% from Rewrite. Dialogue Topics. The Diamante dataset is about open-domain chit-chat and is not limited to any topic. For further quantitative analysis, we employ the topic tagger on the Baidu AI platform3 to categorize the dialogues. (The topic visualization of the Diamante dataset is displayed in Figure 7 of the Appendix.) The results show that the Diamante dataset covers all 26 main categories. The top five topics are Society (23%), Entertainment (11%), People (10%), Education (8%), and Food & Drink (8%), which are in line with our daily life. 3 GENERATION-EVALUATION JOINT TRAINING In this paper, we propose to leverage not only the explicit human demonstrations but also the implicit human preference that appeared in the data collection to boost the open-domain chatbot comprehensively. A novel generation-evaluation joint training paradigm is introduced and illustrated in Figure 3https://ai.baidu.com/tech/nlp_apply/topictagger 2, where the high-quality response generation and human preference estimation are optimized simultaneously. The classical training objective of dialogue generation is to minimize the negative log-likelihood (NLL) loss: LNLL = − log pθ(rH|c) (1) where c refers to the dialogue context and rH is the human annotator’s selected or amended response. Besides generation, Diamante encodes evaluation into the joint optimization to enhance the alignment with human preference. 
Recall that in the data collection process, there exists implicit human preference: given the dialogue context c, the final response rH is preferred by human annotators as compared to a model-generated candidate rM ∈ RM (displayed during annotation). Moreover, either rH or rM is better than a randomly selected response rR in most cases. As such, we can have the following preference ranking rH > rM > rR. The preference estimation (PE) loss is then defined as: LPE = − 1 3 [ log ( σ ( s(c, rH)− s(c, rM) )) + log ( σ ( s(c, rH)− s(c, rR) )) + log ( σ ( s(c, rM)− s(c, rR) ))] (2) where the input is a quadruple of (c, rH, rM, rR), σ(·) is the sigmoid function, and s(·) is the scalar output of the model. The total objective of the generation-evaluation joint training is to minimize the following integrated loss: L = LNLL + LPE (3) The first term helps the model learn to mimic human demonstrations and generate high-quality candidate responses. And the second term helps the model learn the nuanced distinctions among human preferences. During inference, we adopt the top-k sampling to produce multiple candidate responses and then perform ranking with their corresponding preference estimation scores. The one with the highest preference score would be selected as the final response and returned to the user. Notably, the preference estimation follows the candidate response decoding and only involves one more token processing, which incurs negligible computational cost. One similar work to Diamante’s joint training is LaMDA (Thoppilan et al., 2022), where a single model functions as both a generator and a discriminator. In comparison, there exist several critical differences between Diamante and LaMDA. Firstly, LaMDA chooses to learn the discriminator and generator sequentially. By contrast, Diamante optimizes generation and evaluation simultaneously, trying to avoid the catastrophic forgetting issue of the two-stage training (Kirkpatrick et al., 2017; Liu et al., 2022b). Secondly, LaMDA defines fine-grained dialogue evaluation metrics and collects corresponding discriminator training samples. Considering the expensive cost of data collection and the difficulty of reaching an agreement in fine-grained dialogue evaluation (Smith et al., 2022), Diamante leverages the implicit human preference as the overall evaluation and gets rid of additional annotations. Thirdly, as suggested in the works of human alignment (Askell et al., 2021), the ranked preference evaluation adopted in Diamante performs better than the binary discrimination used in LaMDA. 4 EXPERIMENTS 4.1 SETTINGS 4.1.1 IMPLEMENTATION DETAILS We apply the Diamante dataset and joint training paradigm to boost PLATO-XL’s performance. In the generation-evaluation joint training, the input samples are formulated as quadruples (c, rH, rM, rR), where c is the dialogue context, rH is the human annotator’s selected or amended response, rM is one candidate response displayed during annotation, and rR is one randomly selected response from the dataset. During the construction of joint training samples, if the sampled model-generated candidate rM is found to be the same as the human-generated response rH, rM will be re-sampled to guarantee the agreement (preference ranking rH > rM). In addition, rM and rR are re-sampled at each training epoch. The model is initialized with the 11B parameter PLATO-XL, with the transformer architecture of PrefixLM (Radford et al., 2018; Dong et al., 2019). 
(There are 72 transformer blocks and 32 attention heads, with the embedding dimension of 3072. The hidden dimension of the feedforward layer is set to 18432.) The preference estimation value s(·) is obtained through one fully-connected layer (converting the transformer output into one scalar). The hyper-parameter settings used in the training process are listed as follows. The maximum sequence length of context and response is set to 384 and 128, respectively. We use Adam (Kingma & Ba, 2015) as the optimizer, with a learning rate scheduler including a linear warmup and an invsqrt decay (Vaswani et al., 2017). The peak learning rate is set to 2e-6, and the warmup step is set to 500. The model is trained for five epochs with a batch size of 168. The implementation is based on the PaddlePaddle framework, and the experiments are carried out on 8 Nvidia A100 GPUs (40G RAM). During inference, we adopt the top-k sampling (k set to 10) to produce 20 candidate responses and select one with the highest preference estimation score as the final response. 4.1.2 COMPARED APPROACHES In the experiments, the following Chinese dialogue models are considered: • CDial-GPT (Wang et al., 2020) is a 104M parameter model trained on LCCC conversations. • EVA2.0 (Gu et al., 2022) is a 2.8B parameter model pre-trained on cleaned WDC-Dialogue. • PLATO-XL (Bao et al., 2021) is the largest Chinese dialogue model with up to 11B parameters, pre-trained on social media conversations. In addition to the above dialogue models, the following commercial chatbots in Chinese are included: Microsoft XiaoIce (Zhou et al., 2020), Xiao AI, Tmall Genie, and Apple Siri. 4.1.3 EVALUATION METRICS In the experiments, we employ crowd-sourcing workers to evaluate the dialogue quality in four aspects: coherence, informativeness, safety, and engagingness. We discuss these criteria below and provide scoring details in Appendix A. • Coherence assesses whether the response is relevant and consistent with the context. • Informativeness evaluates whether the response includes appropriate information. • Safety evaluates whether the response contains harmful, biased, or misleading content. • Engagingness measures the willingness to have a long conversation with the partner. The coherence, informativeness, and safety are the utterance-level metrics. The engagingness is the dialogue-level metric. These metrics are evaluated on a range of [0, 1, 2], with higher scores being better. Each sample is distributed to three crowd-sourcing workers, and the final score is determined through majority voting. 4.2 EXPERIMENTAL RESULTS Considering the limitations of automatic dialogue evaluation (Liu et al., 2016), we employ crowdsourcing workers to evaluate the dialogue quality, including static evaluation, self-chat evaluation, and human-bot chat evaluation. 4.2.1 STATIC EVALUATION In the static evaluation, we randomly select 100 samples from the test set and employ the models to generate the response given the multi-turn dialogue context. In addition to PLATO-XL and Dia- mante, we also provide the performance of ground truth for reference. The evaluation results are summarized in Table 2. Diamante significantly improves the response quality on all criteria compared to PLATO-XL. Diamante even achieves competitive or slightly better performance compared to the human reference. For a detailed analysis, we further reviewed the 14/100 cases where Diamante achieved a higher engagingness score than the human reference. 
We found out that possible reasons for this phenomenon could be twofold. Firstly, it is difficult for annotators to keep producing attractive and engaging responses at each round in multi-turn conversations, which is regular and consistent with our daily conversations. Secondly, Diamante encodes the preference estimation in the joint training to enhance the alignment with human preference, which helps it select the human-preferred response among candidate responses. 4.2.2 SELF-CHAT EVALUATION As suggested by Adiwardana et al. (2020), the static evaluation can be biased by the construction of dialogue context. Therefore, we also include the interactive evaluation in the experiments, including the self-chat evaluation and human-bot chat evaluation. Following the settings in PLATO-XL, 50 open-domain utterances are selected as dialogue openings, and models play the roles of both partners to continue the conversation for 5 rounds. Then these conversations are distributed to crowd-sourcing workers for evaluation. The self-chat evaluation results are summarized in Table 3. Diamante outperforms the rest models in all evaluation aspects and establishes a new state-ofthe-art result in Chinese open-domain conversation. In particular, Diamante achieves a remarkable 50% improvement on the metric of engagingness compared to PLATO-XL. These results verify the effectiveness of the Diamante dataset and generation-evaluation joint training paradigm. 4.2.3 HUMAN-BOT CHAT EVALUATION In addition to the above dialogue models, Diamante is compared to common commercial chatbots in Chinese through human-bot chat evaluations. We select 20 high-frequency topics from a deployed chatbot and ask in-house data specialists to interact with these chatbots for 7-14 rounds. The humanbot chat evaluation results are summarized in Table 4. Diamante consistently outperforms the rest of the commercial chatbots by a large margin across all the human evaluation metrics. These results indicate that Diamante can produce high-quality responses when interacting with real users. The Fleiss’ kappa (Fleiss, 1971) score for the static evaluation, self-chat evaluation, and human-bot chat evaluation is 0.433, 0.468, and 0.424, respectively. This suggests that crowd-sourcing workers have reached a moderate agreement in human evaluation. 4.3 DISCUSSIONS 4.3.1 ABLATION STUDY ON JOINT TRAINING As discussed in previous sections, the improvements of Diamante compared to PLATO-XL come from two aspects: the Diamante dataset bridges the distribution gap towards human-human conversations, and the joint training paradigm enhances the alignment with human preference. For further dissection, we carry out ablation studies on joint training as follows. Without joint training, PLATOXL is trained with the Diamante dataset to minimize the NLL loss, and the final response is selected based on generation probability during inference. With joint training, PLATO-XL is trained with the Diamante dataset to minimize the generation-evaluation integrated loss, and the final response is selected based on preference estimation during inference. Firstly, we conduct automatic evaluations of response selection on the test set to compare these two approaches. Each dialogue context has one human annotated response and seven model-generated candidates (displayed during annotation). The experiments evaluate the ranking of the reference response among these candidates. 
The results are reported in terms of mean average precision (MAP), mean reciprocal rank (MRR), and precision at position 1 (P@1), as summarized in Figure 3. The preference estimation of the joint training is adept at selecting the response that aligns well with human beings. By contrast, the generation probability has difficulty capturing the nuanced distinctions and delivers almost random performance in response ranking. Secondly, we conduct human evaluations to compare these two approaches, with self-chat evaluation results summarized in Table 5. As exhibited in the comparison, the absence of joint training leads to a substantial performance decrease in engagingness, informativeness, and safety. These results validate that the joint training paradigm improves the alignment with human preference and plays a critical role in boosting the open-domain chatbot. 4.3.2 APPLYING DIAMANTE TO OTHER DIALOGUE MODELS Although the Diamante dataset is collected with the assistance of PLATO-XL and the main experiments are carried out to evaluate Diamante’s improvements towards PLATO-XL, the framework is Start P2 P2 P2 P2 indeed universal and not limited to one particular dialogue model. Further explorations of applying Diamante to other dialogue models are carried out, with CDial-GPT taken as an example. The self-chat evaluation results are summarized in Table 6. Compared to the original model, applying Diamante to CDial-GPT brings remarkable improvements across all evaluation metrics, verifying the effectiveness of Diamante in boosting the performance of Chinese pre-trained dialogue models. 4.3.3 CASE ANALYSIS We provide two check-picked examples in Figure 4 and Figure 5 for qualitative analysis. In the self-chat example, the dialogue opening is about favorite food, and the model plays the role of both partners to continue the conversation. The two speakers have a depth discussion on hot pot, covering favorite dishes to dipping source recipes. In the human-bot chat example, the bot expresses its opinions on the ideal partner and maintains them well within the multi-turn conversation (i.e., personality is more important). At the same time, the bot respects the different opinions of the other speaker and exhibits a good alignment with human values. 5 RELATED WORK 5.1 HUMAN FEEDBACK With the rapid development of large language models, it becomes critical to build helpful, honest, and harmless language assistants, keeping in mind the alignment with human values (Askell et al., 2021; Bai et al., 2022; Glaese et al., 2022). Given the misalignment of the conventional training objective and the ultimate human preference, some works (such as WebGPT (Nakano et al., 2021) and InstructGPT (Ouyang et al., 2022)) leverage the human feedback to train a reward model and optimize towards this proxy objective using reinforcement learning. There are some similar works in dialogue generation (Yi et al., 2019; Jaques et al., 2020), where the reward combines multifaceted evaluation scores, including sentiment, repetition, coherence, etc. While using these reinforcement learning-based approaches, it needs to be careful with the “alignment tax” and not optimize too much (Liu et al., 2022a). In addition to the above reinforcement learning approaches, some works (Hancock et al., 2019; Shuster et al., 2020; Xu et al., 2022) in dialogue generation continue supervised training with human feedback, with the primary motivation of lifelong learning. 
The dialogue agent iterates the following steps: deploy the dialogue model, collect human-model conversations, and update the model with the newly collected samples. During this process, only the human responses are used to update the model, and special attention is required to avoid low-quality responses from trolls (Ju et al., 2022). In comparison, Diamante involves human workers during the development phase rather than after deployment, which brings several benefits. Firstly, human annotators in Diamante have access to model-generated candidate responses and can efficiently formulate a high-quality conversation. By contrast, other approaches collect indirect demonstrations from human workers with canned responses, which inevitably interrupts the conversation flow and degrades quality. Moreover, the Diamante dataset is collected with recruited annotators, eliminating the adverse impact of trolls. Secondly, in addition to the explicit human demonstrations, implicit human preference exists in Diamante's data collection process, which allows a preference estimation model to be trained without additional annotation. 5.2 OPEN-DOMAIN DIALOGUE DATASET Given the limited number of annotated human-human conversations, open-domain dialogue models are typically pre-trained with human-like conversations collected from social media, such as Twitter, Reddit, Weibo, and Douban. To alleviate the problems brought by this data distribution gap, it has become common to fine-tune these dialogue models with annotated human-human conversations. Representative English datasets include DailyDialog (Li et al., 2017), ConvAI2 (Zhang et al., 2018), Empathetic Dialogues (Rashkin et al., 2019), Wizard of Wikipedia (Dinan et al., 2019), Blended Skill Talk (Smith et al., 2020), etc. In comparison, high-quality annotations of human-human conversations are scarcer in other languages. Most Chinese chit-chat datasets are constructed from social media comments, including LCCC (Wang et al., 2020), WDC-Dialogue (Zhou et al., 2021), and so on. To our knowledge, the Diamante dataset is the first chit-chat dataset with annotated human-human conversations in Chinese. It is worth noting that Diamante is not merely a fix to this limitation of Chinese conversation data: it provides a systematic and highly efficient data collection strategy that is applicable to all languages. 6 CONCLUSION In this paper, we propose to collect and leverage human feedback to boost the open-domain chatbot. By asking annotators to select or amend the model-generated candidate responses, Diamante efficiently collects a high-quality Chinese chit-chat dataset. Diamante introduces a novel generation-evaluation joint training paradigm, which leverages both the explicit human demonstrations and the implicit human preference that emerge in the data collection process. Experimental results indicate that the Diamante dataset and joint training paradigm significantly improve pre-trained dialogue models. 7 ETHICS STATEMENT During dataset collection, annotators need to select or amend the model-generated candidate responses, where some candidates may contain potentially unsafe content. We ask annotators to produce safe and engaging responses. (As the model is pre-trained with social media comments, it sometimes generates biased or harmful statements. During annotation, we have been monitoring the proportion of potentially unsafe candidates, which is less than 1%.)
After annotation, we further employ data experts to review the collected data and remove ineligible conversations. Diamante's dataset and joint training paradigm help boost the open-domain chatbot and align it well with human values. In practical deployments, it is desirable to employ additional strategies to guarantee dialogue safety (Dinan et al., 2021), including sensitive topic detection, response safety classification, and so on. 8 REPRODUCIBILITY STATEMENT We describe the collection of Diamante's dataset in Section 2 and Appendix B, including the annotation interface, annotation procedures, quality control process, etc. The Diamante dataset is now publicly available and can be accessed and downloaded under the license agreement at the data platform. We introduce the model designs in Section 3 and discuss the training configurations in Section 4.1.1. We have included the Diamante source code in the supplementary materials to facilitate reproducibility. A SCORING CRITERIA IN HUMAN EVALUATION The criteria used in human evaluation are provided in Table 7. B DATASET DETAILS B.1 ANNOTATION INTERFACE The original annotation interface of Diamante is in Chinese, as shown in Figure 6. The annotator first crafts the dialogue opening and then selects or amends the model-generated candidate responses to continue the conversation. The left-hand area displays the dialogue context and the input box. The top right-hand part provides a brief task description and a link to the detailed guidelines. The bottom right-hand part lists some inspiring topics or model-generated candidate responses. B.2 QUALITY CONTROL To ensure the annotation quality of the Diamante dataset, we designed and followed a rigorous quality control process. We engaged a vendor company to recruit experienced annotators, instructed them with detailed guidelines, set up admission tests, answered questions in an online shared room, and executed regular reviews within the annotation. After annotation, we ask data experts to review all collected conversations and remove a conversation whenever one expert deems it ineligible. B.3 TOPIC VISUALIZATION The topic visualization of the Diamante dataset is displayed in Figure 7. There are 26 categories in the topic tagger, and the Diamante dataset covers all of them. The top five topics are Society (23%), Entertainment (11%), People (10%), Education (8%), and Food & Drink (8%), which are in line with daily life. C FURTHER DISCUSSIONS C.1 MORE EXPLORATION ON JOINT TRAINING As shown in Table 5, the Diamante dataset and joint training paradigm bring significant improvements. To further analyze the effects of joint training, we carry out a pairwise comparison between models with and without joint training (PLATO-XL trained on the Diamante dataset). We ask crowd-sourcing workers to compare the self-chat conversations generated by these two models and select the preferred one. The comparison in Figure 8 (upper bar) shows that the joint training paradigm is crucial in boosting the open-domain chatbot. In Diamante, the joint training leverages the implicit human preference that appears in the data collection process, i.e., the preference ordering r_H > r_M. We also explore applying the joint training to other conventional dialogue datasets, with DuSinc (Zhou et al., 2022) taken as an example. To formulate training samples for the preference ranking r_H > r_M > r_R, PLATO-XL is employed to simulate model-generated responses. Two models (PLATO-XL with joint training and PLATO-XL without joint training) are trained on the DuSinc dataset.
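For illustration, here is a minimal sketch of one way to realize the preference ordering r_H > r_M > r_R as a pairwise margin ranking loss; this is an assumption-based stand-in, not the paper's exact generation-evaluation integrated loss:

```python
import torch
import torch.nn.functional as F

def preference_ranking_loss(s_h, s_m, s_r, margin=0.1):
    """Pairwise margin loss enforcing score(human) > score(model) >
    score(random), i.e., the ordering r_H > r_M > r_R described above.
    A sketch of one possible formulation."""
    loss_hm = F.relu(margin - (s_h - s_m)).mean()   # human above model
    loss_mr = F.relu(margin - (s_m - s_r)).mean()   # model above random
    return loss_hm + loss_mr

# s_* are preference-estimation scores for a batch of responses.
s_h, s_m, s_r = torch.rand(16), torch.rand(16), torch.rand(16)
print(preference_ranking_loss(s_h, s_m, s_r))
```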
We randomly select 100 samples from the test set for static evaluation and ask crowd-sourcing workers to compare the responses generated by these two models. The comparison in Figure 8 (bottom bar) verifies the effectiveness and generality of the joint training paradigm. C.2 SAFETY UNDER ADVERSARIAL ATTACK The main experiments reveal that Diamante achieves better safety on normal/insensitive topics. To further analyze the safety performance under adversarial attacks, we asked annotators to interact with PLATO-XL on sensitive topics and induce unsafe responses from the model. The annotators were then asked to amend these unsafe responses into safe ones. These sensitive topics are designed and selected according to Chinese cultural and social norms, including harmful speech (e.g., offensive content, self-harm suggestions, and personal attacks), group discrimination (e.g., region, gender, disability, and religion), misleading information (e.g., political controversies, ethnic division, and conspiracy theories), and so on. In total, we collected 1000 samples (each including an adversarial dialogue context, the original unsafe response, and the amended safe response). We employ these samples to evaluate Diamante's safety under adversarial attacks. The automatic evaluation results in Figure 9 suggest that Diamante is adept at selecting safe responses. We also randomly selected 100 samples and employed crowd-sourcing workers to evaluate the generated responses. The results in Table 8 reveal that Diamante achieves a remarkable safety improvement, with 76% of responses identified as safe. Even though Diamante is only trained with insensitive conversations, it absorbs human preferences and maintains good safety performance under adversarial attacks. C.3 AUTOMATIC DIALOGUE EVALUATION We also carry out automatic evaluation with rule-based and model-based metrics, including BLEU-2/4 (Chen & Cherry, 2014), Distinct-1/2 (Li et al., 2016), Unigram F1 (Dinan et al., 2019), and BERTScore (Zhang et al., 2019). The automatic evaluation results in Table 9 are inconsistent with the human evaluation results in Table 2; note that human evaluation remains the gold standard in open-domain chitchat evaluation. The difference between Diamante and PLATO-XL is minor in automatic evaluation, whereas Diamante significantly improves over PLATO-XL in human evaluation. C.4 CASE ANALYSIS WITH COMPARED APPROACHES We provide two more examples, by PLATO-XL and XiaoIce, in Figure 10 and Figure 11. These two examples use the same starting utterances as the Diamante examples in Figure 4 and Figure 5.
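As a pointer for the automatic metrics in C.3, here is a minimal sketch of Distinct-n (Li et al., 2016) in one common formulation; tokenization details vary across implementations:

```python
from collections import Counter

def distinct_n(responses, n=2):
    """Distinct-n: ratio of unique n-grams to total n-grams over a set
    of generated responses (each response is a list of tokens)."""
    ngrams = Counter()
    for tokens in responses:
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

replies = [["I", "love", "hot", "pot"], ["I", "love", "dumplings"]]
print(distinct_n(replies, n=1), distinct_n(replies, n=2))
```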
1. What is the main contribution of the paper regarding enhancing open-domain chatbots with human feedback?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novelty and evaluation protocol?
3. Do you have any concerns regarding the method's implementation or comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a new technique to enhance open-domain chatbots with human feedback. The authors attribute less-engaging responses to the distribution gap between human-human conversations and proxy human-machine conversations. Therefore, they design a data collection procedure to collect human revisions of machine responses and construct a dataset based on it. Utilizing the established dataset, the authors adopt the classical generate-then-rerank paradigm to first generate multiple candidates and then select one. Strengths And Weaknesses Strengths: 1. The paper is well-written and easy to follow. 2. The authors ablate several aspects of their method. 3. The method obtains good results in human evaluation. Weaknesses: 1. A major proposal of the paper is to use human feedback to improve the performance of dialogue systems. However, this is not a new idea in my opinion. There is rich literature [1-5] discussing how to continue learning from human feedback after deployment, which is closely related to your method. Outside the domain of dialogue, there is also a large body of work studying the improvement of models from human feedback, such as in summarization [6][7] or question answering [8]. These should at least be included in the related work; otherwise, readers will have difficulty situating the work within its literature. 2. The evaluation protocol is not very convincing to me. (1) Metrics: only human evaluation is conducted and no automatic metrics are involved. I acknowledge that some automatic metrics like BLEU or ROUGE may be poorly correlated with human judgments, but some neural metrics such as BERTScore should be considered. Furthermore, purely human evaluation renders the model performance hard to reproduce. (2) Dataset: in the static evaluation, only 100 cases are sampled from the test set and evaluated. Why not use the whole test set? Do you sample multiple times and conduct repeated experiments, or just report the result from a single sample? Why not evaluate and compare on some widely used open-domain dialogue datasets such as Douban [9] or STC [10]? (3) Baselines: PLATO-XL has up to 11B parameters, while EVA2.0 has 2.8B and CDial-GPT has 104M, so I doubt whether your comparison is fair. Why not implement your method on a smaller version of the PLATO model?
[1] Learning from Dialogue after Deployment: Feed Yourself, Chatbot!
[2] Learning New Skills after Deployment: Improving open-domain internet-driven dialogue with human feedback
[3] Improving alignment of dialogue agents via targeted human judgements
[4] Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
[5] BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage
[6] Learning to summarize from human feedback
[7] Self-critiquing models for assisting human evaluators
[8] WebGPT: Browser-assisted question-answering with human feedback
[9] Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots
[10] Neural responding machine for short-text conversation
Clarity, Quality, Novelty And Reproducibility Some important details are missing: 1. The architecture of your model. Even if you directly use PLATO-XL as your backbone, a brief introduction to its architecture is necessary for completeness. 2. How do you implement the s() function in Eq. 2? 3. You adopt top-K sampling to produce multiple candidates. So what is the value of K in your experiment? How many candidates are generated for each case?
The novelty of the proposed method is also questionable. The paper learns from human feedback by jointly training a discriminator and a generator. Generate-then-rerank is a popular paradigm that has been used in many prior works, such as [1][2][3].
[1] Calibrating Sequence Likelihood Improves Conditional Language Generation
[2] Joint Generator-Ranker Learning for Natural Language Generation
[3] BRIO: Bringing Order to Abstractive Summarization
ICLR
Title Learning Symbolic Models for Graph-structured Physical Mechanism Abstract Graph-structured physical mechanisms are ubiquitous in real-world scenarios, thus revealing the underlying formulas is of great importance for scientific discovery. However, classical symbolic regression methods fail on this task since they can only handle input-output pairs that are not graph-structured. In this paper, we propose a new approach that generalizes symbolic regression to graph-structured physical mechanisms. The essence of our method is to model the formula skeleton with a message-passing flow, which transforms the discovery of the skeleton into the search for the message-passing flow. Such a transformation guarantees that we can efficiently search for a message-passing flow that is Pareto-optimal in terms of both accuracy and simplicity. Subsequently, the underlying formulas can be identified by interpreting the component functions of the searched message-passing flow, reusing classical symbolic regression methods. We conduct extensive experiments on datasets from different physical domains, including mechanics, electricity, and thermology, and on real-world datasets of pedestrian dynamics without ground-truth formulas. The experimental results not only verify the rationale of our design but also demonstrate that the proposed method can automatically learn precise and interpretable formulas for graph-structured physical mechanisms. 1 INTRODUCTION For centuries, the development of the natural sciences has been based on human intuition to abstract physical mechanisms, represented by symbolic models, i.e., mathematical formulas, from experimental data recording the phenomena of nature. Among these mechanisms, many are naturally graph-structured (Leech, 1966), where the physical quantities are associated with individual objects (e.g., mass), pair-wise relationships (e.g., force), and the whole system (e.g., overall energy), corresponding to three types of variables on graphs: node/edge/global variables. For example, as shown in Figure 1(a), the mechanical interaction mechanism in the multi-body problem corresponds to a graph with masses (m_i) and positions (V⃗_i) as attributes of nodes and spring constants (k_ij) as attributes of edges, which, together with the graph connectivity, yields the accelerations as output attributes of nodes; in the case of a resistor circuit, nodes and edges correspond to voltages and resistances, respectively, and these attributes define a graph-level overall power of the circuit. In the past few years, Symbolic Regression (SR) (Sahoo et al., 2018; Schmidt & Lipson, 2009; Udrescu et al., 2020), which searches for symbolic models y = F(x) from experimentally obtained input-output pairs {(x, y)} with F being an explicit formula, has become a promising approach for automating scientific discovery. Traditional SR methods include genetic programming-based methods (Schmidt & Lipson, 2009; Fortin et al., 2012), which work by generating candidate formulas through "evolution" (i.e., manipulations), and deep learning-based methods (Li et al., 2019; Biggio et al., 2021; Zheng et al., 2021), which utilize sequence models to generate candidate formulas. However, these methods are designed for traditional SR problems on input-output pairs {(x, y)} without considering graph information.
To exploit the inherent graph structure in physical mechanisms, as shown in Figure 1(b), SR on graphs aims to find a formula F that characterizes a mapping from input {G, X} to output y, with X and y both residing in the graph structure G. To perform this, we need to both finely exploit the inherent graph structure of physical mechanisms and retain flexibility regarding the diverse forms of interaction between entities in the physical world. Graph Neural Networks (GNNs) have recently been incorporated into SR for discovering mechanisms behind particle interactions (Cranmer et al., 2020; Lemos et al., 2022). However, an obvious drawback is that the message-passing flow of the GNN, which corresponds to the formula skeleton, must be manually designed to learn the underlying mechanisms; this is impractical because formula skeletons usually remain unknown and differ significantly across physical domains, as shown in Figure 1(c). To solve this problem, inspired by the correspondence between the skeleton and the message-passing flow in GNNs, our core idea is to transform the discovery of the skeleton into the search for a message-passing flow, which paves the way for identifying the underlying formula by interpreting each component function of the searched message-passing flow. However, due to the coupling between the skeleton and its component formulas, neither can be identified independently, implying a vast, highly entangled search space over both message-passing flows and component functions. To tackle this challenge, we formulate a bi-level optimization problem that searches for the message-passing flow with a pruning strategy at the upper level, on condition that its component functions have been optimized with deep learning (DL) at the lower level. Besides empirical accuracy, it is equally vital but non-trivial to maintain explicit interpretability and generalization ability in the discovered formulas. We propose to search for the Pareto-optimal message-passing flow, balancing accuracy and simplicity, by carefully designing a scoring function that involves a complexity measure of message-passing flows and optimizes both aspects across the searching steps. Our contributions can be summarized in the following three aspects: • We generalize the problem of learning formulas with given skeletons (inductive bias) from graph data in Cranmer et al. (2020) by additionally learning the formula skeleton from data, which is essential for learning graph-structured physical mechanisms in diverse physical domains. • We propose a novel method to learn graph-structured physical mechanisms from data without knowing the formula skeleton, by searching for the Pareto-optimal message-passing flow of a GNN together with the symbolic models as its components. • We conduct experiments on five datasets from diverse physical domains, including mechanics, electricity, and thermology, and on two real-world datasets of pedestrian dynamics, demonstrating that our model can first automatically identify the correct skeleton from collected data instead of expert knowledge and then learn the overall symbolic model for the corresponding graph-structured physical mechanism. 2 THE PROPOSED METHOD Before introducing the proposed method, we first formally define the problem of symbolic regression on graphs.
Definition 1 (Variables on Graphs) The topology of the graph is denoted as G, and its variables include {V, E, u}, where V denotes the set of node-level variables, E denotes the set of edge-level variables, and u ∈ R^{n_u} denotes a global variable that interacts with elements of V and E through the topology G. Specifically, v_i ∈ R^{n_v} is the variable associated with the i-th node, while e_ij ∈ R^{n_e} is the variable associated with the edge connecting the i-th and j-th nodes. n_u, n_v, and n_e denote the dimensions of the global, node, and edge variables. Definition 2 (Symbolic Regression on Graphs) Given a set of {(G_i, X_i, y_i)}, where X_i ⊂ {V, E, u} are known variables and y_i ∈ {V′, E′, u′} are unknown variables (variables with a prime denote output variables), we aim to find an accurate and compact formula F(·) that fits y = F(G, X). 2.1 MODEL FORMULA SKELETON WITH MESSAGE-PASSING FLOW As shown in Figure 1(c), a formula F describing a graph-structured physical mechanism can always be decomposed into several formula components, each representing its association with an individual node, a pair-wise relationship between nodes, or the whole system. As illustrated in the diagram, these components are interconnected according to the variable dependencies, termed the "skeleton". Moreover, the differences between the two examples also indicate the existence of diverse skeletons underlying different physical mechanisms. The key insight is that the skeleton has a strong correspondence with the message-passing flow in GNNs, which differs considerably across physical scenarios, including mechanics (Sanchez-Gonzalez et al., 2020), electricity (Zhang et al., 2019), and thermology (Chamberlain et al., 2021). Message-passing flows can be diversified by cascading multiple blocks, changing/removing some functions, and adjusting the embedding sizes. A block of a full GNN contains the updating of the edge, node, and graph representations, respectively, as follows (Battaglia et al., 2018):

$$
\begin{aligned}
\mathbf{e}'_{ij} &= \phi^{e}\left(\mathbf{e}_{ij}, \mathbf{v}_i, \mathbf{v}_j, \mathbf{u}\right), &\quad \bar{\mathbf{e}}'_i &= \rho^{e \to v}\left(E'_i\right),\\
\mathbf{v}'_i &= \phi^{v}\left(\bar{\mathbf{e}}'_i, \mathbf{v}_i, \mathbf{u}\right), &\quad \bar{\mathbf{e}}' &= \rho^{e \to u}\left(E'\right),\\
\mathbf{u}' &= \phi^{u}\left(\bar{\mathbf{e}}', \bar{\mathbf{v}}', \mathbf{u}\right), &\quad \bar{\mathbf{v}}' &= \rho^{v \to u}\left(V'\right),
\end{aligned} \qquad (1)
$$

where the ϕ(·) denote the update (message) functions and the ρ(·) denote the aggregation functions. We provide three examples from different physical scenarios, as shown in Figure 2, to illustrate the well-defined analogy between message-passing flows and skeletons. Example 1 (Mechanics: Multi-body Kinematics) In this problem, we aim to find the particles' accelerations. We have an edge update function ϕ^e since particle pairs determine spring forces, while the edge aggregation function ρ^{e→v} reflects the principle of independent action of forces. Example 2 (Electricity: Resistor Circuit) The objective of this problem is to find the overall power of a given resistor circuit. The edge update function ϕ^e corresponds to the computation of single-resistor power via Joule's Law, and an edge-to-global aggregation ρ^{e→u} sums the per-resistor powers to obtain the overall power. Example 3 (Pedestrian Dynamics: Collision Avoidance) In this problem, we aim to find the pedestrians' accelerations according to their positions and velocities. The formulas describing this relationship, including their skeletons, can be diverse and depend strongly on the pedestrian scenario. 2.2 TRANSFORMING INTO THE TASK OF MESSAGE-PASSING FLOW SEARCHING The message-passing flows of GNNs correspond to explicit meanings in the symbolic calculation of graph-structured mechanisms, as summarized in Table 1.
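To make the correspondence concrete, below is a minimal PyTorch sketch of the full GN block in equation (1), assuming MLPs for the update functions ϕ and summation for the aggregators ρ (one common choice); it is an illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

def mlp(n_in, n_out, hidden=16):
    return nn.Sequential(nn.Linear(n_in, hidden), nn.ReLU(), nn.Linear(hidden, n_out))

class FullGNBlock(nn.Module):
    """Full GN block of equation (1): edge, node, and global updates,
    each followed by a sum aggregation."""
    def __init__(self, ne, nv, nu, d=16):
        super().__init__()
        self.phi_e = mlp(ne + 2 * nv + nu, d)   # edge update phi^e
        self.phi_v = mlp(d + nv + nu, d)        # node update phi^v
        self.phi_u = mlp(d + d + nu, d)         # global update phi^u

    def forward(self, e, v, u, src, dst):
        # e: [E, ne] edge vars, v: [N, nv] node vars, u: [nu] global var,
        # src/dst: [E] long tensors with the endpoint indices of each edge.
        e_new = self.phi_e(torch.cat([e, v[src], v[dst], u.expand(len(e), -1)], -1))
        agg_ev = torch.zeros(len(v), e_new.shape[-1]).index_add_(0, dst, e_new)
        v_new = self.phi_v(torch.cat([agg_ev, v, u.expand(len(v), -1)], -1))
        u_new = self.phi_u(torch.cat([e_new.sum(0), v_new.sum(0), u], -1))
        return e_new, v_new, u_new
```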
This strong resemblance inspires us to transform the original SR task on graphs into the more tractable task of searching for message-passing flows. Our model has two stages: message-passing flow searching and message-passing flow-based SR. Specifically, at stage 1, we search for the message-passing flow serving as the formula skeleton. Then, at stage 2, we symbolize the components into formulas and cascade them according to the skeleton to obtain the final graph-structured mechanism. With the above transformation, it is clear that we need to solve the following bi-level optimization problem in stage 1:

$$\{P^*, M^*\} = \underset{M, P}{\operatorname{argmin}}\; s(P; M), \qquad (2)$$

where M denotes the message-passing flow, P denotes the parameters of the DL components, M^* and P^* denote the Pareto-optimal ones, and s(·) gauges how well the finally learned formula obtained from M and P performs. However, there are two core challenges in learning formulas on graphs: (i) considering simplicity and accuracy simultaneously in graph SR is difficult; (ii) the discrete search space composed of skeletons and component formulas is prohibitively huge. 2.3 SEARCHING MESSAGE-PASSING FLOWS M (FORMULA SKELETONS) To deal with the first challenge, we rewrite equation 2 as

$$M^* = \underset{M}{\operatorname{argmin}}\; s(P^*; M), \qquad (3)$$
$$\text{s.t.}\quad P^* = \underset{P}{\operatorname{argmin}}\; l(P; M), \qquad (4)$$

where equation 3 and equation 4 are the upper-level and lower-level optimization problems, s(P^*; M) = l(P^*; M) + λ c(M) is the score taking both simplicity and accuracy into consideration, l(·) denotes the error loss of predicting the outputs (see details in Appendix B.4), λ is the weight, and c(·) denotes the complexity of the message-passing flow. The design of the complexity c(·) is flexible (see details in Appendix B.5); we calculate it as follows: (i) for each layer (corresponding to a function in {ϕ^u, ϕ^v, ϕ^e}), the complexity is the product of the embedding size of this layer and the number of its inputs; (ii) the whole complexity of the message-passing flow is the sum of the complexities of all layers. The optimization at the lower level is solved by training the parameters P of a DL model given the structure M, while the optimization at the upper level w.r.t. M is difficult because M forms a huge discrete search space covering the number of blocks, the number of message-passing layers, the connections, and the embedding sizes. Another insight that helps with the second challenge is that if the message-passing flow is a super-structure of the ground-truth one, i.e., redundant computations are performed, the loss varies only slightly; however, if the message-passing flow is a sub-structure of the ground truth, i.e., some necessary computations are missing, the loss jumps up by an order of magnitude. This observation, together with equation 3, motivates us to first find an initial message-passing flow that is a super-structure of the ground truth and then learn to prune it to obtain a compact yet expressive message-passing flow. The framework of our model is shown in Figure 3(a); it sequentially searches the number of blocks, the number of layers, the connections, and the embedding sizes in a hierarchical way, and the four steps are detailed as follows. Step 1: Search Message-Passing Blocks. First, we need to find a message-passing flow of which the ground-truth message-passing flow is a sub-structure. To achieve this goal, we first stack several full message-passing blocks.
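All four search steps compare candidate flows with the score s = l + λc of equation 3. A minimal sketch using the layer-complexity rule above (helper names are hypothetical):

```python
def flow_complexity(layers):
    """Complexity rule described above: for each update layer, multiply
    its embedding size by its number of inputs, then sum over layers.
    `layers` is a list of (embedding_size, num_inputs) pairs."""
    return sum(dim * n_inputs for dim, n_inputs in layers)

def score(rmse, layers, lam=0.1):
    """Score s = l + lambda * c from equation 3 (lower is better)."""
    return rmse + lam * flow_complexity(layers)

# Example: a flow with layers of complexity 2*3, 2*3 and 2*1 gives c = 14.
print(score(rmse=0.2, layers=[(2, 3), (2, 3), (2, 1)]))  # 0.2 + 0.1 * 14
```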
By optimizing the score in equation 3, we find the Pareto-optimal number of message-passing blocks, where we take the number of blocks as the complexity in equation 3 and the RMSE between the predicted values and the ground truth as the loss. Step 2: Search Message-Passing Layers. As mentioned, a full message-passing block contains three layers corresponding to the updates of the edge, node, and graph representations. However, not all of them are necessary for obtaining the output. To find the most compact set of layers, we try to delete each layer and check whether the score in equation 3 increases or decreases, where we define the complexity as the number of layers and the loss as the RMSE. Our pruning-based search builds on the unique insight in SR described above and is much more efficient than brute-force search: it decreases the computational cost from O(2^n) to O(n), where n is the number of initial layers. Step 3: Search Necessary Inputs. We further filter out the useless inputs of each layer. Specifically, we adopt a strategy similar to the previous step: we try to delete each input and check whether the score in equation 3 rises or drops, where the complexity is the number of connections and the loss is the RMSE. As in step 2, this decreases the computational cost from O(2^n) to O(n), where n is the number of initial inputs. Step 4: Search Embedding Sizes. To ensure that the embedding in each layer is compact and has explicit physical meaning, we use the score in equation 3 to find the Pareto-optimal embedding size for each embedding, where the complexity is defined as the embedding size and the loss as the RMSE. We gradually reduce the embedding size to find the one with the best score, while fixing the other embedding sizes to a sufficiently large number so that the information bottleneck can only be caused by the embedding size being searched. 2.4 THE LEARNING PROCEDURE After obtaining the message-passing flow M^* and the parameters of the DL component functions P^* in the first stage, we follow Cranmer et al. (2020) to symbolize each DL component into a formula and then cascade the formulas according to the skeleton represented by M^* into a whole formula in the second stage, as shown in Figure 3(b). For the aggregation functions ρ, which correspond to set functions, (i) we choose several commonly used aggregators as candidates, including sum, mean, and max (other aggregators can be generated from them), and replace the softmax-weighted combination with the aggregator that has the largest weight; (ii) we perform SR on input-output pairs recorded from the trained GNN component functions; (iii) we fine-tune all constants in the overall formula (given by cascading the component functions), thereby avoiding the accumulation of errors.
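To make Steps 2 and 3 concrete, here is a minimal sketch of the greedy pruning loop, with a toy stand-in for the lower-level training; the helper names are hypothetical:

```python
def greedy_prune(flow, train_and_score):
    """Try removing each element (a layer or an input connection) once
    and keep the removal when the score s = l + lambda*c of equation 3
    does not get worse.  One pass over n elements replaces the O(2^n)
    exhaustive search.  `flow` is a set of element names and
    `train_and_score` stands in for retraining the DL components."""
    best = train_and_score(flow)
    for elem in sorted(flow):
        candidate = flow - {elem}
        s = train_and_score(candidate)
        if s <= best:                 # simpler and no worse: keep pruned
            flow, best = candidate, s
    return flow, best

# Toy stand-in: the 'edge' layer is necessary, the others are not.
toy_score = lambda f: (0.1 if "edge" in f else 1.4) + 0.1 * len(f)
print(greedy_prune({"edge", "node", "global"}, toy_score))
```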
3 EVALUATION 3.1 EXPERIMENT ON CLASSICAL PHYSICAL SCENARIOS Dataset. We utilize five datasets from different physical domains to demonstrate that our model can rediscover well-known graph-structured physical mechanisms, as introduced in Section 2.1 and Appendix A.1. We provide two cases for the mechanics scenario, one for the electricity scenario, and two for the thermology scenario; for mechanics and thermology, the two cases have different complexity, and the content listed in parentheses is associated with the more complex case. Detailed information about the formulas and data generation is reported in Appendix A.1. Metrics. Given the same input, we use the coefficient of determination R², which indicates the proportion of the output variation that can be predicted from the input variables. Specifically, it is calculated between the output of the distilled formula and the output of the ground-truth formula to measure whether the learned formula is accurate enough:

$$R^2 = 1 - \frac{\sum_i \left(y_i - \hat{y}_i\right)^2}{\sum_i \left(y_i - \bar{y}\right)^2}, \quad \text{where } \bar{y} = \frac{1}{n}\sum_i y_i.$$

Comparing Methods. We compare our model with learning symbolic models from deep learning with inductive biases (SymDL) (Cranmer et al., 2020), to demonstrate that our model is flexible across more scenarios, and with a variant of our model that uses a full graph network without pruning-based search (FullGN) for the ablation study (a 1-layer full GNN that removes non-existent inputs and non-required outputs). The message-passing flows of SymDL and FullGN are shown in Appendix A.3. Plausibility Comparison with Baselines. We first compare the applicability of our method with the SOTA baseline (Cranmer et al., 2020). As listed in Table 4, our method can be applied to all five cases from the three physical scenarios, including mechanics, electricity, and thermology, while the baseline fails in the last two scenarios because its message-passing flow is designed explicitly for Newton-force interactions in the simple mechanical scenario and is not flexible enough for other scenarios. We design two cases in the mechanics scenario: calculating the acceleration and calculating the relative acceleration in the center-of-mass frame. SymDL is designed for formula discovery in the simple case, with a specified message-passing flow. In contrast, our method moves forward to a more general and challenging setting without specifying a message-passing flow representing the formula skeleton. To ascertain the correctness of the learned formulas, besides the baseline, we further introduce a variant of our method without message-passing flow search, i.e., directly using the full message-passing block in the first stage. As shown in Figure 4, our model achieves the same performance as the two baselines in the simple mechanics case. In the complex case, our model outperforms the SOTA baseline and the variant of our model by a large margin w.r.t. the R² metric. Specifically, these two competitors both fail and obtain a rather low R², while our method rediscovers the correct formula with R² = 0.917, indicating the advantage of searching for the correct message-passing flow. For our model, the difference between the formulas learned in the two cases lies in the latter two terms, corresponding to the additional message-passing flows V → V′ and V → u′ → V′, which SymDL cannot handle. The formulas learned by the baselines are wrong because they lack necessary dependencies; they fail to have physical meaning and differ largely from the ground truth. Our problem setting differs from that of SymDL, which requires prior knowledge of the formula skeleton to design the deep learning architecture; such knowledge is almost impossible to obtain in new real-world scenarios. For the remaining three cases, which the SOTA baseline cannot handle, the performance gain over the variant using the full message-passing flow (Figure 4) indicates that optimizing the Pareto-optimal score is essential for obtaining correct formulas, as it is less subject to redundant message-passing flows, including unnecessary inputs and redundant computation steps, that hinder the subsequent SR process.
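For reference, a minimal sketch of the R² metric as defined above:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2: the fraction of output variance
    explained by the distilled formula."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print(r_squared([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))  # close to 1 for a good fit
```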
The detailed searching process for the electricity case is analyzed in the qualitative results below. For the complex thermology case, it can be observed that the learned formula successfully captures the effect of externally conducted heat compared with the simple case, while the baselines fail to produce physically meaningful formulas due to unnecessary inputs and redundant computation steps. Although the change is slight (whether external heat conduction exists), the resulting skeleton and the entire formula are quite different. To be more precise, the message-passing flows learned by our model are shown in Figure 5. Besides, the time cost of each independent part is shown in Table 2, where we can observe that searching for message-passing flows takes only a small part of the whole procedure, and our model's running times are similar to SymDL's and usually shorter than FullGN's. Furthermore, we conduct experiments to demonstrate the design philosophy of our method, which are reported in Appendix A.9. Qualitative Results for Understanding the Searching Process. We show the searching process of the message-passing flow in Figure 6, where the upper row shows the learning curves in terms of error (RMSE), complexity, and score (a weighted summation of error and complexity). From Figure 6, we observe that if the message-passing flow is a sub-structure of the ground-truth message-passing flow, the performance drops significantly. On the other hand, message-passing flows with redundant layers/inputs/embedding sizes have similar performance, echoing the rationale of our pruning strategy. The core idea is to search for the most compact message-passing flow that is still expressive enough, and the four-step searching process is as follows: (i) the model tried 1-3 blocks and found similar errors but rising complexity, so it opted for 1 block; (ii) starting from the message-passing flow searched in the previous stage, it tried to delete each layer associated with a ϕ; deleting the edge layer caused a huge error increase, so the edge layer was preserved, after which it tried to delete the node layer and found that the score decreased (the error changed little while the complexity decreased), so the node layer was deleted; (iii) as in the previous stage, it tried to delete each input and found that only deleting the V → ϕ^u connection did not cause an error increase, so this connection was deleted; (iv) finally, it tried to compress each representation and found that the score was minimized at an embedding size of 1, so 1 was chosen as the embedding size. After the whole process, the message-passing flow, including the embeddings (intermediate variables), functions, and topology, has explicit physical meanings, paving the way for symbolic regression. 3.2 EXPERIMENTS ON REAL-WORLD SCENARIOS OF PEDESTRIAN DYNAMICS To better show how our model discovers unknown graph-structured physical mechanisms in the real world, we conduct carefully designed experiments on formula discovery for pedestrian dynamics. Problem Formulation. We aim to find a formula that approximately describes the relationship between the acceleration a and the velocity v, the pedestrian position x, and the destination position x_dest. The graph G describes the interaction relationships and is constructed as follows: two pedestrians are connected when their distance is less than R, and unconnected otherwise. Formally, the problem is to find a formula F that fits a = F(G, x, v, x_dest).
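For illustration, a minimal sketch of the interaction-graph construction just described; the value of R here is a placeholder, as the paper does not fix it in this section:

```python
import numpy as np

def build_interaction_graph(positions, R=5.0):
    """Connect two pedestrians iff their distance is below radius R."""
    positions = np.asarray(positions)            # [N, 2] (x, y) coordinates
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)         # [N, N] pairwise distances
    adj = (dist < R) & ~np.eye(len(positions), dtype=bool)
    return np.argwhere(adj)                      # edge list of (i, j) pairs

print(build_interaction_graph([[0, 0], [1, 0], [10, 0]], R=5.0))
```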
Datasets. We conduct experiments on two real-world datasets of crowd trajectories from studies on pedestrian dynamics (Boltes & Seyfried, 2013; https://ped.fz-juelich.de/database/doku.php), covering the following scenarios: (i) unidirectional flow in a corridor, where a group of people goes through a corridor in the same direction, as shown in Figure 8(a); (ii) bidirectional flow in a corridor, where people go through a corridor in opposite directions, as shown in Figure 8(b). Comparing Models. For pedestrian dynamics, a well-known manually designed model is the social force model (Helbing & Molnar, 1995), in which pedestrians are subject to two forces: an attractive force from the destination and a repulsive force from the surrounding pedestrians and obstacles (refer to Appendix A.4 for details). Learned Formulas. The learned formulas and their physical meanings are reported in Figure 7, demonstrating that our model can learn different skeletons and formulas that are more precise than the social force model while retaining explicit physical meanings. The performance comparison is also reported in Figure 7, where we observe that our model achieves about a 10% improvement over the social force model. 4 RELATED WORKS Symbolic Regression (SR). Distilling elegant symbolic expressions from vast experimental data has always been the mainstream method for finding new formulas and verifying hypotheses throughout the history of physics. SR is a classic topic (Schmidt & Lipson, 2009; Petersen et al., 2020; Biggio et al., 2021; Guimerà et al., 2020) that tries to emulate this process by learning an explicit symbolic model that describes the mapping from input X to output y as accurately as possible while maintaining compactness. Traditional methods for discovering formulas from data are primarily based on genetic programming (GP) (Schmidt & Lipson, 2009; Koza, 1994; Worm & Chiu, 2013). Hitherto, there have been promising results from GP-based SR methods such as Burlacu et al. (2020), Virgolin et al. (2019), and the well-known commercial SR tool Eureqa (Dubčáková, 2011). More recently, DL-based methods for symbolic regression (Zheng et al., 2021; Qian et al., 2021; Martius & Lampert, 2016; Kusner et al., 2017; Udrescu & Tegmark, 2020; Udrescu et al., 2020; Daniele et al., 2022) have been introduced, with better expressive ability than GP. Furthermore, Cranmer et al. (2020) first proposed to learn graph-structured physical mechanisms (especially kinematics) given formula skeletons. Beyond that, we propose searching for the formula skeleton automatically, where existing SR methods can be exploited to find the basic components of the whole formula. Graph Neural Network (GNN). GNNs (Kipf & Welling, 2017; Veličković et al., 2018; Gilmer et al., 2017) can be viewed in a message-passing manner (Battaglia et al., 2018; Veličković, 2022; Bronstein et al., 2017), and most of them can be summarized as message passing among three levels: edge, node, and graph, while the message-passing flows and the message/aggregation functions can be customized very differently based on the specific characteristics of the application.
GNNs have been widely used for physical systems to capture interaction mechanisms, such as simulating mechanical systems (Sanchez-Gonzalez et al., 2020; Huang et al., 2021; Sanchez-Gonzalez et al., 2018), designing circuits (Zhang et al., 2019; Ren et al., 2020), simulating heat conduction (Chamberlain et al., 2021; Xhonneux et al., 2020), and simulating pedestrian dynamics (Shi et al., 2023; Zhang et al., 2022). Furthermore, some works (You et al., 2020; Yoon et al., 2020; Cai et al., 2021; Gu et al., 2021) adopt automated machine learning techniques to search for the best GNN architecture for a specific prediction task. Unlike them, we focus on SR problems on graphs and, inspired by symbolic regression (Udrescu & Tegmark, 2020), propose to search for Pareto-optimal message-passing flows, which are both accurate and simple and thus benefit the learning of symbolic models. Pareto-optimal Search. Previous Pareto-optimal solutions proposed in the Neural Architecture Search (NAS) area (Lomurno et al., 2021; Lu et al., 2020; Dong et al., 2018) focus on finding model architectures with both high prediction accuracy and low inference latency, which does not meet the requirements of the graph SR problem. Instead, our proposed method is based on a novel insight in the SR scenario: the performance is similar when the message-passing flow (skeleton) is a super-structure of the ground-truth one, whereas the performance degrades a lot if it is a sub-structure of the ground-truth one. 5 CONCLUSION In this paper, we generalize the problem in Cranmer et al. (2020) by learning the formula skeleton rather than designing it manually, which is crucial for learning formulas in a new physical area without much prior knowledge. We propose a new SR method that first transforms the discovery of the formula skeleton into the search for the Pareto-optimal message-passing flow, balancing accuracy and compactness, and then symbolizes its message functions to obtain the underlying formula. We conduct experiments on five datasets from three different physical domains, including mechanics, electricity, and thermology, demonstrating that our method can consistently learn plausible formulas describing graph-structured physical mechanisms. Furthermore, to show that our model is practical for learning unknown formulas in the real world, we conduct experiments on two real-world datasets about pedestrian dynamics, where it learns different formulas with explicit physical meanings for different scenarios, more precisely than mainstream empirical formulas. ACKNOWLEDGEMENT This work was supported in part by the National Key Research and Development Program of China under 2020YFA0711403, the National Nature Science Foundation of China under 61971267, U1936217, 61972223, 62171260. Q. Yao was in part supported by NSFC (No. 92270106) and CCF-Baidu Open Fund. A EXPERIMENTS A.1 DATA GENERATION Besides the scenarios of mechanics and electricity, we further illustrate the scenario of thermology as follows. Example 4 (Thermology: Heat Conduction) The objective of this problem is to compute the entropy production rate. The edge update function ϕ^e corresponds to Fourier's Law of Heat Conduction, and the node update function ϕ^v corresponds to the Clausius entropy expression, followed by an aggregation ρ^{v→u} that sums up the individual entropy production rates. In each scenario, we devised different inputs and computed the theoretical outcome according to the known mechanisms.
In the Mechanics scenario, we randomly set the (x, y) coordinates of the particles (standard normal distribution) in the 2-D cases. We assign the masses of the particles randomly according to a log-normal distribution. The rest lengths of the springs are all set to 1. In the complex case, external forces whose dimensions all follow the standard normal distribution are exerted. The graph topology is picked randomly. We then compute the acceleration of each particle with Hooke's Law, the principle of independent action of forces, and Newton's Second Law of Motion. In the Electricity scenario, we randomly choose a topology for the graph and set the electric potential of each node following the standard normal distribution. The resistances of the resistors on the edges are chosen uniformly at random from 0.01 to 1.01 to avoid extremely large power outputs. We then compute the power of each edge (resistor) according to Joule's Law and add them up to obtain the overall power of the resistor circuit. In the Thermology scenario, the graph topology is given as a grid, echoing the core idea of Finite Element Analysis. We randomly set the temperature of each node between 0 and 1 and the global thermal conductivity between 1 and 3. We then compute the discrete Laplacian on the grid and the heat flow according to Fourier's Law of Heat Conduction. With each node's heat flow and temperature, we compute their entropy production rates separately and add them up to obtain the overall entropy production rate. Basic information about the datasets is listed in Table 3. A.2 REPRESENTATIVE SNAPSHOTS OF PEDESTRIAN DATASETS To better understand the pedestrian scenarios, we show two representative snapshots of the two pedestrian datasets in Figure 8: unidirectional flow in a corridor and bidirectional flow in a corridor. A.3 BASELINE DETAILS The message-passing flows of the baselines, SymDL and FullGN, are shown in Figure 9, and their applicability in different scenarios is demonstrated in Table 4. A.4 DETAILS OF SOCIAL FORCE MODEL In the social force model, the baseline model for the pedestrian scenarios, the dynamics of pedestrians are driven by two factors: (a) a pedestrian is attracted by his/her destination with force

$$F^D_i = \left(v^d_i e_i - v_i\right)/\tau, \qquad e_i = \frac{x_d - x_i}{\left\|x_d - x_i\right\|},$$

where v^d_i is the desired speed, v_i is the current velocity, τ is the relaxation time, and e_i is the unit vector toward the destination; (b) a pedestrian is repulsed by nearby pedestrians with force

$$F_{ij} = A_i \exp\left(-r_{ij}/B_i\right) e^{n}_{ij},$$

where F_{ij} is the repulsive force, r_{ij} is the distance between pedestrians i and j, and e^{n}_{ij} is the unit vector pointing from pedestrian j to pedestrian i. The joint force is

$$F_i = F^D_i + \sum_{j \in N_i} F_{ij},$$

where N_i denotes the set of pedestrians whose distance to pedestrian i is less than 5 meters. The social force model is widely used as the foundation of much commercial software, such as Viswalk (https://www.myptv.com/en/mobility-software/pedestrian-simulation-software-ptv-viswalk) and AnyLogic (https://www.anylogic.com/features/libraries/pedestrian-library/). In this paper, we assume that the mass of a pedestrian is 1, and thus a_i = F_i/m = F_i. However, on the one hand, the social force model is manually designed, which may introduce discrepancies with real-world pedestrian dynamics; on the other hand, different scenarios usually have very different pedestrian interaction mechanisms, which a single model cannot precisely capture. It is therefore meaningful to learn data-driven formulas that describe the different pedestrian interaction mechanisms.
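For concreteness, a minimal sketch of the social force dynamics in A.4 under the stated unit-mass assumption; the parameter values are illustrative placeholders, not calibrated constants:

```python
import numpy as np

def social_force_accel(x, v, x_dest, v_desired=1.3, tau=0.5, A=2.0, B=0.3, R=5.0):
    """Social force model with unit mass (a = F): destination attraction
    plus exponential repulsion from pedestrians within radius R."""
    n = len(x)
    a = np.zeros_like(x)
    for i in range(n):
        e_i = (x_dest[i] - x[i]) / np.linalg.norm(x_dest[i] - x[i])
        a[i] = (v_desired * e_i - v[i]) / tau              # attraction F^D_i
        for j in range(n):
            if i == j:
                continue
            d = x[i] - x[j]
            r_ij = np.linalg.norm(d)
            if r_ij < R:                                   # only nearby pedestrians
                a[i] += A * np.exp(-r_ij / B) * d / r_ij   # repulsion F_ij
    return a

x = np.array([[0.0, 0.0], [1.0, 0.0]])
v = np.zeros_like(x)
x_dest = np.array([[10.0, 0.0], [-10.0, 0.0]])
print(social_force_accel(x, v, x_dest))
```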
A.5 IMPLEMENTATION We implement our model in Python using the PyTorch library and optimize all models with the Adam optimizer (Kingma & Ba, 2015). We use parallel symbolic regression in Python (PySR, https://github.com/MilesCranmer/PySR) (Cranmer, 2020) to extract formulas from each message function ϕ. A.6 PARAMETER SETTINGS For the DL part, we set the learning rate to 10^{-4}, the early-stopping tolerance to 10, the number of layers and the embedding size of the MLP to 4 and 16, the maximum number of epochs to 20000, and the weight λ to 0.1. The choice of the parameter λ is analyzed in Appendix A.8. For the traditional SR part, our candidate symbols include the binary operators {+, −, ×, /} and the unary operators {sign, exp}, and we set the batch size to 1024.

Table 4: The applicability in different scenarios.
Method   Mechanics (simple)   Mechanics (complex)   Electricity   Thermology (simple)   Thermology (complex)
SymDL    √                    √                     ×             ×                     ×
Ours     √                    √                     √             √                     √

Figure 10: The searched best score vs. searching time for two different searching methods in the circuit scenario.

A.7 COMPARING HIERARCHICAL PRUNING WITH RANDOM SEARCH Furthermore, to demonstrate the effectiveness of our search method, we compare it with a random search strategy and plot their search processes in Figure 10. We observe that our method is much more efficient than the random search algorithm, which suggests that the search problem is difficult and that our method effectively reduces the original colossal search space. Specifically, even when the random search algorithm takes ten times longer than ours, the score of its best-found skeleton is still 4.5 times worse than ours, and the found skeleton is wrong. A.8 PARAMETER SENSITIVITY One of the most critical parameters in our model is the weight λ that balances complexity against error. We normalize the input-output pairs to make the outputs have a standard deviation of 1. Specifically, for each feature dimension, we divide the features by their standard deviation, keeping the fitting errors at similar magnitudes. Since the complexities of different skeletons also have similar magnitudes, the best value of λ is similar across datasets. As shown in Table 5, we test different values of λ on three diverse scenarios: the circuit scenario and the two mechanical scenarios. For each value of λ, we train the model with ten different seeds to test whether our model can learn the correct message-passing flows and the correct formulas. We choose the best formula among the ten, so all these values of λ allow us to find the correct formula in the different scenarios. Among them, we choose λ = 0.1, as it achieves the highest success rate across all three scenarios. A.9 DEMONSTRATION OF DESIGN PHILOSOPHY To demonstrate the design philosophy of our method, we plot several learning curves of different message-passing flows in the simple mechanical scenario. We first plot the learning curves of three different message-passing flows in Figure 11, corresponding to one lacking a necessary message-passing connection, the ground-truth one, and one with a redundant message-passing connection. We find that the ground-truth message-passing flow and the one with a redundant connection differ only slightly in performance after the loss converges (the RMSE is around 0.1 and the MAPE around 1.0). However, the message-passing flow lacking necessary connections performs significantly worse (the RMSE is about 1.4 and the MAPE 4.0).
We further test the impact of the embedding size on performance, with the learning curves shown in Figure 12. We can see that when the embedding size is less than 2, the performance decreases significantly (the RMSE is more than 1.4), while the performance is similar (the RMSE is around 0.2) when the embedding size is at least 2. Last, we test whether the softmax function can learn the ground-truth aggregator. As shown in Figure 13, the learning curves with the softmax-weighted aggregator and with the ground-truth aggregator are quite similar, and we verify that the learned aggregator is the same as the ground truth. B METHOD DETAILS B.1 USAGE OF THE PROPOSED METHOD The results obtained by our method are fairly stable, in terms of both searching for the formula skeleton and symbolizing the learned neural networks. First, our search method guarantees that only better graph structures (message-passing flows) are selected during the learning process. Based on the insightful observation that the loss increases substantially when the graph structure is a subset of the ground truth rather than a superset, we propose to search the four components of the graph structure (blocks, layers, connections, dimensions) by starting with a full structure and then pruning it to obtain a compact one. Second, the stability of the SR results has also been demonstrated by the applications in Cranmer et al. (2020). When applying the method to a new physical domain, we can directly reuse the DL-related hyper-parameters (such as the learning rate and the embedding size of the MLP) from old domains and slightly tune the search-related hyper-parameter (the weight λ) around its previous optimal value. The stability of the proper λ value is demonstrated in Table 5. Due to the randomness of DL training and of the genetic algorithm, we train the model ten times with different seeds to obtain ten formulas and select the Pareto-optimal formula according to the score (see Appendix B.6) as the final formula. Overall, this process does not require heavy human labor in a new domain. B.2 TRAINING ALGORITHM The training algorithm of our model is summarized in Algorithm 1.

Algorithm 1 The process of obtaining the graph-structured symbolic model.
Require: D including training data {(X, Y, G)}
1: (start stage 1) search the message-passing flow following Section 2.3:
2: step 1: search the Pareto-optimal block number;
3: step 2: search the required layers;
4: step 3: search the necessary input variables;
5: step 4: search the Pareto-optimal embedding sizes;
6: train the model with the Pareto-optimal message-passing network;
7: (start stage 2) symbolize the aggregation functions ρ and the message functions ϕ:
8: replace each ρ by the aggregator with the largest weight;
9: train the model and record the input-output pairs of each ϕ;
10: replace each ϕ by a formula obtained by classic SR, with constants left to be fitted on the data;
11: fit the correct constants from the data by gradient descent and obtain the final graph-structured symbolic model.

B.3 DETAILS OF SYMBOLIZING THE AGGREGATION/MESSAGE FUNCTIONS Symbolize Aggregation Function ρ. We choose several commonly used aggregators as candidates, including sum, mean, and max; other candidate aggregators can be obtained from them, e.g., min via min(x) = −max(−x) and root mean square via (mean(x²))^{1/2}. The harmonic mean and the l-norm can also be decomposed into two ϕ functions and a mean aggregator; these operations can be learned by ϕ.
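A minimal sketch of one way to implement this aggregator selection: a softmax-weighted mixture over the candidates during training, hardened to the highest-weight candidate at symbolization time (a hypothetical implementation, not the authors' code):

```python
import torch
import torch.nn as nn

class SoftAggregator(nn.Module):
    """Differentiable selection over candidate aggregators {sum, mean, max}."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(3))  # one weight per candidate

    def forward(self, messages):                    # messages: [num_edges, d]
        w = torch.softmax(self.logits, dim=0)
        cands = [messages.sum(0), messages.mean(0), messages.max(0).values]
        return sum(wi * c for wi, c in zip(w, cands))

    def hardened(self, messages):
        """After training: apply only the highest-weight aggregator."""
        cands = [messages.sum(0), messages.mean(0), messages.max(0).values]
        return cands[int(self.logits.argmax())]
```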
Symbolize Message Function ϕ. After training the GNN model, we use symbolic regression tools to extract a symbolic model corresponding to the message and update functions. Specifically, for the message functions in the neural network, we record their input-output pairs and apply classic symbolic regression tools to symbolize them. Retrain and Fine-tune Constants. Finally, to eliminate accumulated errors, we set all constants in the entire formula as trainable parameters and fine-tune them to obtain the final graph-structured mechanism represented by the symbolic model. Specifically, after extracting the message-passing flows, we replace the MLPs in the message/update functions with the corresponding formulas and set the constants in the formulas as parameters of the deep model to optimize (all operations in the symbolic models are differentiable). B.4 DETAILS OF ERRORS In this paper, we use the RMSE as the error measure:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2},$$

where ŷ_i is the i-th predicted value and y_i is the i-th ground-truth value, i = 1, ..., n. B.5 DETAILS OF COMPLEXITY The design of the complexity measure for message-passing flows is flexible. In this paper, we calculate the complexity as follows: (i) for each layer, the complexity is the product of the embedding size of this layer and the number of inputs of this layer; (ii) the whole complexity of the message-passing flow is the sum of the complexities of all layers. To illustrate, the complexities of the four message-passing flows in Figure 6 are 2×3 + 2×3 + 2×1 = 14, 2×3 + 2×1 = 8, 2×3 + 1×1 = 7, and 2×1 + 1×1 = 3. As we can see, the complexity consistently decreases over the search process. B.6 DETAILS OF SCORE The score of a graph-structured formula is s = l + λc, where l is the RMSE and c is the complexity of the graph-structured formula, defined as the complexity of the message-passing flow multiplied by the average complexity of the component formulas. C DETAILED RELATED WORKS ON SR Martius & Lampert (2016) proposed a model named EQL that extracts symbolic formulas with a neural network using symbolic models as building blocks. Kusner et al. (2017) managed to eschew the problem of discrete optimization by converting discrete data into a parse tree. AI Feynman (Udrescu & Tegmark, 2020; Udrescu et al., 2020) splits the function into sub-functions and performs regression on each module separately, where the partition of functions is achieved with a trained neural network. SymDL (Cranmer et al., 2020) exploited the inductive bias of the correspondence between physical mechanisms (kinematics, especially) and GNN structure and established the PySR method to tackle the problem; they first revealed the link between GNNs and physical mechanisms. Unlike these works, our model can learn graph-structured physical mechanisms without requiring information about the formula skeleton, which can hardly be obtained in new physical scenarios for discovery.
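As a closing illustration of the "Retrain and Fine-tune Constants" step in B.3, a toy sketch in which the constants of an extracted (purely illustrative) formula are optimized by gradient descent:

```python
import torch

def finetune_constants(x, y, n_steps=500, lr=1e-2):
    """The constants of an extracted formula (here a toy form
    y = c0 * exp(-c1 * x)) become trainable parameters, optimized
    end-to-end since all symbolic operations are differentiable."""
    c = torch.nn.Parameter(torch.ones(2))
    opt = torch.optim.Adam([c], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = ((c[0] * torch.exp(-c[1] * x) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return c.detach()

x = torch.linspace(0, 3, 100)
y = 2.0 * torch.exp(-0.5 * x)        # synthetic ground truth
print(finetune_constants(x, y))      # should approach (2.0, 0.5)
```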
1. What is the focus of the paper regarding SymDL's improvement? 2. What are the strengths of the proposed approach, particularly in tackling practical challenges? 3. Are there any concerns or suggestions regarding the paper's clarity and comparisons with other works? 4. How does the reviewer assess the novelty and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper improves SymDL by searching the Pareto-optimal message-passing flows to learn an additional formula skeleton. Such an improvement empowers the method with more generality (requiring no prior knowledge) and compactness, while maintaining correctness. Additionally, this paper also extends the applicability to Electricity and Thermology. Overall, the task formation is very interesting and the idea is simple but effective. Strengths And Weaknesses Strengths: The studied problem is important and interesting. AI for science is a highly promising application field that has not been well explored. This paper has made an essential footstep toward finding scientific formulas in a more general and automatic way. Compared with existing works, this paper has tackled a more practical and challenging problem of learning the formula skeleton rather than manually designing it, which is crucial for learning formulas in new physical domains with less prior knowledge. The proposed method is novel, especially the idea of transforming the discovery of the formula skeleton into the search for the Pareto-optimal message-passing flow with accuracy and compactness. The two main designs are technically sound. First, the transformation from learning the skeleton to searching the message-passing flow is correct, as the latter corresponds to explicit meanings in the symbolic calculation for graph-structured mechanisms. Second, the design philosophy behind the proposed pruning-based search procedure is reasonable, based on an insightful observation that when the searched graph structure is a subset of the ground truth instead of a superset, the optimized score/loss will increase significantly. The authors conduct extensive experiments. The proposed method can find correct formulas in five mechanisms with different difficulties, while the compared methods only succeed in a simple case, i.e., mechanics as in Cranmer et al. 2020. Moreover, it can discover a new analytic formula that predicts real-world pedestrian dynamics more precisely. Overall, the paper is well-written and easy to follow. Weaknesses: For ease of use, the paper should have a specific subsection that provides a practical guide on using the proposed method when discovering formulas in new physical domains. Currently, the related information is scattered in Sec. 3.1, Appendix A.3 and A.4, and is not concentrated enough. To better demonstrate the superiority of the designed pruning-based search procedure, the authors could consider adding further studies in the appendix. For example, how about comparing with a random search method that does not decompose the searching steps and does not leverage a pruning strategy? Also, Figure 6 can be improved by plotting the four steps together, with the x-axis being absolute training time. This could make the explanation text much easier to follow. The clarity of this paper can be further improved as follows: Use a specific figure to demonstrate the design philosophy of the pruning-based search procedure. Give more details on the normalization process, as in "We normalize the input-output pairs to make the outputs have variance 1" (Appendix A.3). Make the related work part shorter and save space for results analysis, especially the real-world example in A.4. A few typos: e.g., on page 8, "the time cost of each independent part is shown in Table 6" should be Table 3. Clarity, Quality, Novelty And Reproducibility As mentioned in the above review, the paper is generally well-written but still has room for improving clarity.
As for its novelty, I like the core idea of transforming the discovery of the formula skeleton into the search for the Pareto-optimal message-passing flow with accuracy and compactness, although the proposed method is built upon the SR method in Cranmer et al. 2020. Still, it is reasonable and highly insightful. The authors have provided source code for reproducibility checks.
ICLR
Title Learning Symbolic Models for Graph-structured Physical Mechanism Abstract Graph-structured physical mechanisms are ubiquitous in real-world scenarios, thus revealing the underlying formulas is of great importance for scientific discovery. However, classical symbolic regression methods fail on this task since they can only handle input-output pairs that are not graph-structured. In this paper, we propose a new approach that generalizes symbolic regression to graph-structured physical mechanisms. The essence of our method is to model the formula skeleton with a message-passing flow, which helps transform the discovery of the skeleton into the search for the message-passing flow. Such a transformation guarantees that we are able to search for a message-passing flow that is efficient and Pareto-optimal in terms of both accuracy and simplicity. Subsequently, the underlying formulas can be identified by interpreting component functions of the searched message-passing flow, reusing classical symbolic regression methods. We conduct extensive experiments on datasets from different physical domains, including mechanics, electricity, and thermology, and on real-world datasets of pedestrian dynamics without ground-truth formulas. The experimental results not only verify the rationale of our design but also demonstrate that the proposed method can automatically learn precise and interpretable formulas for graph-structured physical mechanisms. 1 INTRODUCTION For centuries, the development of the natural sciences has been based on human intuition to abstract physical mechanisms represented by symbolic models, i.e., mathematical formulas, from experimental data recording the phenomena of nature. Among these developments, many mechanisms are naturally graph-structured (Leech, 1966), where the physical quantities are associated with individual objects (e.g., mass), pair-wise relationships (e.g., force), and the whole system (e.g., overall energy), corresponding to three types of variables on graphs: node/edge/global variables. For example, as shown in Figure 1(a), the mechanical interaction mechanism in the multi-body problem corresponds to a graph with masses (mi) and positions (V⃗i) as attributes of nodes, and spring constants (kij) as attributes of edges, which, together with the graph connectivity, yields the acceleration as output attributes of nodes; while in the case of a resistor circuit, nodes and edges correspond to voltages and resistances, respectively, and these attributes define a graph-level overall power of the circuit. In the past few years, Symbolic Regression (SR) (Sahoo et al., 2018; Schmidt & Lipson, 2009; Udrescu et al., 2020), which searches symbolic models y = F(x) from experimentally obtained input-output pairs {(x, y)} with F being an explicit formula, has become a promising approach for automating scientific discovery. Traditional SR methods include genetic programming-based methods (Schmidt & Lipson, 2009; Fortin et al., 2012), working by generating candidate formulas through "evolution" (i.e., manipulations), and deep learning-based methods (Li et al., 2019; Biggio et al., 2021; Zheng et al., 2021), utilizing sequence models to generate candidate formulas. However, these methods are designed for traditional SR problems on input-output pairs {(x, y)} without considering graph information.
To exploit the inherent graph structure in physical mechanisms, as shown in Figure 1(b), SR on graphs aims to find a formula F that characterizes a mapping from input {G, X} to output y, with X and y both inside the graph structure G. To perform this, we need both to fully exploit the inherent graph structures of physical mechanisms and to achieve flexibility regarding the diverse forms of interaction between entities in the physical world. Graph Neural Network (GNN) has recently been incorporated into SR for discovering mechanisms behind particle interactions (Cranmer et al., 2020; Lemos et al., 2022). However, an obvious setback exists: the message-passing flow of the GNN, which corresponds to the formula skeleton, has to be manually designed to learn the underlying mechanisms. This is impractical because the formula skeletons usually remain unknown and differ significantly across physical domains, as shown in Figure 1(c). To solve this problem, inspired by the correspondence between the skeleton and the message-passing flow in GNN, our core idea is to transform the discovery of the skeleton into the search for the message-passing flow, which paves the way for identifying the underlying formula by interpreting each component function in the searched message-passing flow. However, due to the coupling between the skeleton and the component formulas in the skeleton, neither of them can be independently identified, implying a vast, highly entangled search space for both the message-passing flow and the component functions. To tackle this challenge, we formulate a bi-level optimization problem that searches for the message-passing flow with a pruning strategy at the upper level, on condition that its component functions have been optimized with deep learning (DL) at the lower level. Besides empirical accuracy, it is equally vital but non-trivial to maintain explicit interpretability and generalization ability in the discovered formulas. We propose to search for the message-passing flow that is Pareto-optimal between accuracy and simplicity by carefully designing a scoring function, involving a complexity function of message-passing flows, that optimizes both aspects across different searching steps. Our contributions can be summarized as the following three aspects, • We generalize the problem of learning formulas with given skeletons (inductive bias) from graph data in Cranmer et al. (2020) by additionally learning the formula skeleton from data, which is essential for learning graph-structured physical mechanisms from diverse physical domains. • We propose a novel method to learn graph-structured physical mechanisms from data without knowing the formula skeleton by searching the Pareto-optimal message-passing flows of GNN together with the symbolic models as components. • We conduct experiments on five datasets from diverse physical domains, including mechanics, electricity, and thermology, and on two real-world datasets about pedestrian dynamics, demonstrating that our model can first automatically identify the correct skeleton based on collected data instead of expert knowledge and then learn the overall symbolic model for the corresponding graph-structured physical mechanism. 2 THE PROPOSED METHOD Before introducing the proposed method, we first formally define the problem of symbolic regression on graphs.
Definition 1 (Variables on Graphs) The topology of the graph is denoted as G, and its variables include {V, E, u}, where V denotes the set of node-level variables, E denotes the set of edge-level variables, and u ∈ R^{n_u} denotes a global variable which interacts with elements in V and E through the topology G. Specifically, vi ∈ R^{n_v} is the variable associated with the i-th node, while eij ∈ R^{n_e} is the variable associated with the edge connecting the i-th and j-th nodes. n_u, n_v, and n_e denote the dimensions of the global, node, and edge variables. Definition 2 (Symbolic Regression on Graphs) Given a set of {(Gi, Xi, yi)}, where Xi ⊂ {V, E, u} are known variables, yi ∈ {V′, E′, u′} are unknown variables, and variables with a prime denote output variables, we aim to find an accurate and compact formula F(·) that fits y = F(G, X). 2.1 MODEL FORMULA SKELETON WITH MESSAGE-PASSING FLOW As shown in Figure 1(c), a formula F describing graph-structured physical mechanisms can always be decomposed into several formula components, each representing its association with an individual node, pair-wise relationships between nodes, or the whole system. As illustrated in the specific diagram, these components are interconnected according to the variable dependency, termed the "skeleton". Moreover, differences between the two examples also indicate the existence of diverse skeletons underlying different physical mechanisms. The key insight is that the skeleton has a strong correspondence with the message-passing flows in GNN, which differ a lot in various physical scenarios, including mechanics (Sanchez-Gonzalez et al., 2020), electricity (Zhang et al., 2019), and thermology (Chamberlain et al., 2021). The message-passing flows can be diversified by cascading multiple blocks, changing/removing some functions, and adjusting the embedding sizes. A block of a full GNN contains the updating of the edge, node, and graph representations, respectively, as follows (Battaglia et al., 2018), e′ij = ϕe(eij, vi, vj, u), e′i = ρe→v(E′i), v′i = ϕv(e′i, vi, u), e′ = ρe→u(E′), v′ = ρv→u(V′), u′ = ϕu(e′, v′, u), (1) where ϕ(·) denotes the message functions and ρ(·) denotes the aggregation functions. We provide three examples in different physical scenarios, as shown in Figure 2, to illustrate the well-defined analogy between message-passing flows and skeletons. Example 1 (Mechanics: Multi-body Kinematics) In this problem, we aim to find the particles' acceleration. We have an edge update function ϕe since particle pairs determine spring forces, while the aggregation function ρe→v of edges is based on the independent action principle of force. Example 2 (Electricity: Resistor Circuit) The objective of this problem is to find the overall power of a given resistor circuit. An edge update function ϕe corresponds to the computation of single-resistor power utilizing Joule's Law, and an aggregation ρe→u from edge to global appears for summation to get the overall power. Example 3 (Pedestrian Dynamics: Collision Avoidance) In this problem, we aim to find the pedestrians' acceleration according to their positions and velocities. The formulas, including the skeletons, that describe this relationship can be diverse, depending highly on the pedestrian scenario. 2.2 TRANSFORMING INTO THE TASK OF MESSAGE-PASSING FLOW SEARCHING The message-passing flows of GNN correspond to explicit meanings in the symbolic calculation for graph-structured mechanisms, which is summarized in Table 1.
This strong resemblance inspired us to devise a transformation of the primitive SR task on graphs into a relatively more practical task of searching message-passing flows. Our model has two stages: message-passing flow searching and message-passing flow-based SR. Specifically, at stage 1, we need to search the message-passing flow as the formula skeleton. Then at stage 2, we need to symbolize components into formulas and cascade them according to the skeleton to get the final graph-structured mechanisms. With the above transformation, it is clear that we need to solve the following bi-level optimization problem in stage 1, i.e., {P∗, M∗} = argmin_{M,P} s(P; M), (2) where M denotes the message-passing flow, P denotes the parameters in the DL components, M∗ and P∗ denote the Pareto-optimal ones, and s(·) gauges how well the finally learned formula obtained by M and P performs. However, there are two core challenges to learning formulas on graphs: (i) considering simplicity and accuracy simultaneously for graph SR is difficult; (ii) the discrete search space composed of skeletons and component formulas is prohibitively huge. 2.3 SEARCHING MESSAGE-PASSING FLOWS M (FORMULA SKELETONS) To deal with the first challenge, we are motivated to change equation 2 into M∗ = argmin_M s(P∗; M), (3) s.t. P∗ = argmin_P l(P; M), (4) where equation 3 and equation 4 are two optimization problems at the upper and lower levels, s(P∗; M) = l(P∗; M) + λc(M) is the score taking both simplicity and accuracy into consideration, l(·) denotes the error loss of predicting outputs (see details in Appendix B.4), λ is the weight, and c(·) denotes the complexity of the message-passing flow. The design of the complexity c(·) is flexible (see details in Appendix B.5), and we calculate the complexity as follows: (i) for each layer (corresponding to a function in {ϕu, ϕv, ϕe}), the complexity can be calculated as the product of the embedding size of this layer and the number of inputs in this layer; (ii) the whole complexity of the message-passing flow can be calculated as the summation of the complexity of each layer. The optimization at the lower level is solved by training the parameters P of a DL model given the structure M, while the optimization at the upper level w.r.t. M is difficult because M forms a huge discrete search space including the number of blocks, the number of message-passing layers, the connections, and the embedding sizes. Another insight, facilitating the handling of the second challenge, is that if the message-passing flow is a super-structure of the ground-truth one, i.e., redundant computations are done, it results in merely subtle variations of the loss. However, if the message-passing flow is a sub-structure of the ground truth, i.e., some necessary computations are missing, the loss jumps up by a large magnitude. Such an observation, together with equation 3, leads us to first search for an initial message-passing flow that is a super-structure of the ground truth and then learn to prune the message-passing flow to obtain one that is both compact and expressive. The framework of our model is shown in Figure 3(a), which sequentially searches the number of blocks, the number of layers, the connections, and the embedding sizes in a hierarchical way; the four steps are detailed as follows. Step 1: Search Message-Passing Blocks. First, we need to find a message-passing flow such that the ground-truth message-passing flow is its sub-structure. To achieve this goal, we first stack several full message-passing blocks.
By optimizing the score in equation 3, we find the Pareto-optimal number of message-passing blocks, where we take the number of blocks as the complexity in equation 3 and the RMSE between the predicted value and the ground truth as the loss. Step 2: Search Message-Passing Layers. As mentioned, a full message-passing block contains three layers corresponding to the updates of the edge, node, and graph representations. However, not all of them are necessary for obtaining the output. To find the most compact set of layers, we try to delete each layer to see whether the score in equation 3 increases or decreases, where we define the complexity as the number of layers and the RMSE as the loss. Our pruning-based searching method is based on a unique insight in SR and is much more efficient than brute-force search. Specifically, our method can significantly decrease the computational cost from O(2^n) to O(n), where n is the number of initial layers. Step 3: Search Necessary Inputs. We further filter out the useless inputs for each layer. Specifically, we adopt a strategy similar to the previous searching step: try to delete each input to see whether the score in equation 3 will rise or drop, where the complexity is the number of connections and the loss is the RMSE. Similar to step 2, our model can significantly decrease the computational cost from O(2^n) to O(n), where n is the number of initial inputs. Step 4: Search Embedding Sizes. To ensure that the embedding in each layer is compact and has explicit physical meaning, we use the score given in equation 3 to find the Pareto-optimal embedding size for each embedding, where the complexity is defined as the embedding size and the RMSE defines the loss. We try to reduce the embedding size to find the embedding size with the best score. At the same time, we fix the other embedding sizes to a large enough number to ensure the information bottleneck can only be caused by the embedding size we are searching for. 2.4 THE LEARNING PROCEDURE After obtaining the message-passing flow M∗ and the parameters of the DL component functions P∗ at the first stage, we follow Cranmer et al. (2020) to symbolize each DL component into formulas and then cascade them according to the skeleton represented in M∗ into a whole formula at the second stage, as shown in Figure 3(b). For aggregation functions ρ corresponding to set functions, (i) we choose several commonly used aggregators as candidates, including sum, mean, and max, while other aggregators can be generated by them, and select the aggregator with the largest weight to replace the softmax-weighted combination; (ii) we perform SR on input-output pairs from the trained GNN component functions; (iii) we fine-tune all constants in the overall function (given by cascading the component functions), thereby avoiding the accumulation of errors. 3 EVALUATION 3.1 EXPERIMENT ON CLASSICAL PHYSICAL SCENARIOS Dataset. We utilize five datasets in different physical domains to demonstrate that our model has the ability to rediscover well-known graph-structured physical mechanisms, as introduced in Section 2.1 and Appendix A.1. We provide two cases of mechanics scenarios, one electricity scenario, and two thermology scenarios. For both the mechanics and thermology scenarios, there are two selected cases with different complexity, where the content listed in parentheses is associated with the more complex scenarios. Detailed information about formulas and data generation is reported in Appendix A.1. Metrics.
Given the same input, we use the coefficient of determination R², indicating the proportion of the output variation that can be predicted from the input variables. Specifically, it is calculated from the output of the distilled formula and the output of the ground-truth formula to measure whether the learned formula is accurate enough. R² can be calculated as R² = 1 − ∑ᵢ(yᵢ − ŷᵢ)² / ∑ᵢ(yᵢ − ȳ)², where ȳ = ∑ᵢ yᵢ/n. Comparing Methods. We compare our model with learning symbolic models from deep learning with inductive bias (SymDL) (Cranmer et al., 2020), to demonstrate that our model is flexible in more scenarios, and with a variant of our model that uses a full graph network without pruning-based searching (FullGN) for the ablation study (a 1-layer full GNN that removes non-existent inputs and non-required outputs). The message-passing flows of SymDL and FullGN are shown in Appendix A.3. Plausibility Comparison with Baselines. We first compare the applicability of our method with the SOTA baseline (Cranmer et al., 2020). As listed in Table 4, our method can be applied to all five cases from three physical scenarios, including mechanics, electricity, and thermology, while the baseline fails in the last two scenarios due to an incorrect message-passing flow, because its message-passing flow is designed explicitly for Newton-force interaction in the simple mechanical scenario and is not flexible enough for other scenarios. We design two cases in the mechanics scenario: calculating the acceleration and the relative acceleration in the center-of-mass frame. SymDL is designed for handling formula discovery in the simple case, with a specified message-passing flow. Comparatively, our method moves forward to a more general and challenging setting without specifying a message-passing flow representing the formula skeleton. To ascertain the correctness of the learned formulas, besides the baseline, we further introduced a variant of our method without searching the message-passing flow, i.e., directly using the full message-passing block in the first stage. As shown in Figure 4, our model achieved the same performance as the two baselines in the simple mechanics case. In the complex case, our model outperformed the SOTA baseline and the variant of our model by a large margin w.r.t. the R² metric. Specifically, these two competitors both failed and got a rather low R², while our method rediscovered the correct formula with R² = 0.917, indicating the advantage of searching for the correct message-passing flow. For our model, the difference between the two formulas for the two cases lies in the latter two terms, corresponding to the additional message-passing flows V → V′ and V → u′ → V′, which SymDL cannot handle. The formulas learned by the baselines are wrong due to the lack of necessary dependencies; they fail to have physical meaning and differ largely from the ground truth. Our problem differs from that of SymDL, which requires prior knowledge of the formula skeleton for designing the deep learning architecture; such knowledge is almost impossible to obtain in new real-world scenarios. For the remaining three cases that the SOTA baseline cannot handle, as shown in Figure 4, the performance gain over the variant using the full message-passing flow indicates that optimizing the Pareto-optimal score is essential for obtaining correct formulas, as it is less subject to redundant message-passing flows (including unnecessary inputs and redundant computation steps) that hinder the subsequent SR process.
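For reference, the R² metric used in these comparisons can be computed as in the following minimal NumPy sketch (the helper name is hypothetical):

```python
import numpy as np

def r2_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination R^2 between the ground-truth formula's
    outputs and the distilled formula's outputs, as defined in the Metrics
    paragraph above."""
    y_bar = np.mean(y_true)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_bar) ** 2)
    return float(1.0 - ss_res / ss_tot)
```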
The detailed searching process of the electricity case is analyzed in Section 3.1. For the complex case of thermology, it can be observed that the learned formula successfully captures the effect of externally conducted heat compared with the simple case, while the other baselines fail to have physical meanings due to unnecessary inputs and redundant computation steps. Although the change is slight (whether external heat conduction exists), the skeleton is quite different, and so is the entire formula. To be more precise, the message-passing flows learned by our model are shown in Figure 5. Besides, the time cost of each independent part is shown in Table 2, where we can observe that searching message-passing flows only takes a small part of the whole procedure, and our model's running times are similar to SymDL's and usually shorter than FullGN's. Furthermore, we conduct experiments to demonstrate the design philosophy of our method, which are reported in Appendix A.9. Qualitative Results for Understanding the Searching Process. We show the searching process of the message-passing flow in Figure 6, where the upper row shows the learning curves in terms of error (RMSE), complexity, and a score, which is a weighted summation of error and complexity. From Figure 6, we observe that if the message-passing flow is a sub-structure of the ground-truth message-passing flow, the performance drops significantly. On the other hand, message-passing flows with redundant layers/inputs/embedding sizes have similar performance, echoing the rationale of our pruning strategy. The core idea is to search for the most compact message-passing flow that is expressive enough, and the four-step searching process is as follows: (i) the model tried the number of blocks from 1 to 3, finding similar errors with a rise in complexity, so it opted for 1; (ii) from the message-passing flow searched at the previous stage, it tried to delete every layer associated with ϕ. It turned out that deleting the edge layer would cause a huge error increase, so the edge layer was preserved, after which it tried to delete the node layer and found that the score decreased (the error did not change much and the complexity decreased), so it decided to delete the node layer; (iii) as in the previous stage, it tried to delete each input and found that only deleting the V → ϕu connection would not cause an error increase, so this connection was deleted; (iv) finally, it tried to compress each representation and found that the score was minimized when the embedding size was 1, so 1 was chosen as the embedding size. After the whole process, the message-passing flow, including the embeddings (intermediate variables), functions, and topology, has explicit physical meanings, paving the way for symbolic regression. 3.2 EXPERIMENTS ON REAL-WORLD SCENARIOS OF PEDESTRIAN DYNAMICS To better show how our model discovers unknown graph-structured physical mechanisms in the real world, we conduct carefully designed experiments on formula discovery for pedestrian dynamics. Problem Formulation. We aim to find a formula that can approximately describe the relationship between the acceleration a and the velocity v, pedestrian position x, and destination position xdest. The graph G describes the interaction relationships and is constructed as follows: when two pedestrians' distance is less than R, they are connected; otherwise, they are unconnected. Formally, the problem can be described as finding a formula F that fits a = F(G, x, v, xdest).
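A minimal sketch of this graph construction (assuming NumPy; the O(N²) pairwise check and the function name are illustrative choices, not necessarily the authors' implementation):

```python
import numpy as np

def build_interaction_graph(positions: np.ndarray, R: float) -> list:
    """Connect two pedestrians with an (undirected) edge whenever their
    distance is less than R, as in the problem formulation above.
    `positions` is an (N, 2) array of pedestrian coordinates."""
    n = positions.shape[0]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < R:
                edges.append((i, j))
    return edges

# Example: three pedestrians with an interaction radius of R = 5 meters.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
print(build_interaction_graph(pos, R=5.0))  # [(0, 1)]
```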
Datasets. We conduct experiments on two real-world datasets of crowd trajectories: several experimental datasets from studies about pedestrian dynamics (Boltes & Seyfried, 2013)1, including the following scenarios: (i) Unidirectional flow in a corridor: a group of people goes through a corridor in the same direction, as shown in Figure 8(a); (ii) Bidirectional flow in a corridor: a group of people goes through a corridor in opposite directions, as shown in Figure 8(b). Comparing Models. For pedestrian dynamics, a well-known manually designed model is the social force model (Helbing & Molnar, 1995), in which pedestrians are subject to two forces: an attractive force from the destination and a repulsive force from the surrounding pedestrians and obstacles (refer to Appendix A.4 for details). Learned Formulas. The learned formulas and the corresponding physical meanings are reported in Figure 7, which demonstrate that our model can learn different skeletons and formulas with explicit physical meanings that are more precise than the social force model. The performance comparison is also reported in Figure 7, where we can observe that our model achieves about a 10% improvement over the social force model. 4 RELATED WORKS Symbolic Regression (SR). Distilling elegant symbolic expressions from vast experimental data has always been the mainstream method for finding new formulas and verifying hypotheses throughout the history of physics. SR is a classic topic (Schmidt & Lipson, 2009; Petersen et al., 2020; Biggio et al., 2021; Guimerà et al., 2020) that tries to emulate this process by learning an explicit symbolic model that describes a mapping from input X to output y as accurately as possible while maintaining its compactness. Traditional methods of discovering formulas from data are primarily based on genetic programming (GP) (Schmidt & Lipson, 2009; Koza, 1994; Worm & Chiu, 2013). 1https://ped.fz-juelich.de/database/doku.php Hitherto, there have been promising results yielded by GP-based SR methods such as Burlacu et al. (2020), Virgolin et al. (2019), and the famous commercial SR method Eureqa (Dubčáková, 2011), etc. More recently, methods based on DL (Zheng et al., 2021; Qian et al., 2021; Martius & Lampert, 2016; Kusner et al., 2017; Udrescu & Tegmark, 2020; Udrescu et al., 2020; Daniele et al., 2022) for symbolic regression have been introduced with better expressive ability than GP. Furthermore, Cranmer et al. (2020) first proposed to learn graph-structured physical mechanisms (especially kinematics) given formula skeletons. Beyond that, we propose searching for formula skeletons automatically, where existing SR methods can be exploited to look for the basic components of the whole formula. Graph Neural Network (GNN). GNN (Kipf & Welling, 2017; Veličković et al., 2018; Gilmer et al., 2017) can be viewed in a message-passing manner (Battaglia et al., 2018; Veličković, 2022; Bronstein et al., 2017), and most GNNs can be summarized as message passing among three levels (edge/node/graph), while the message-passing flows and message/aggregation functions can be customized very differently based on the specific characteristics of the application.
It has been widely used for physical systems by capturing the interacting mechanisms, such as simulating mechanical systems (Sanchez-Gonzalez et al., 2020; Huang et al., 2021; Sanchez-Gonzalez et al., 2018), designing circuits (Zhang et al., 2019; Ren et al., 2020), simulating heat conduction (Chamberlain et al., 2021; Xhonneux et al., 2020), and simulating pedestrian dynamics (Shi et al., 2023; Zhang et al., 2022). Furthermore, there are some works (You et al., 2020; Yoon et al., 2020; Cai et al., 2021; Gu et al., 2021) that adopt automated machine learning techniques for searching the best GNN architecture for a specific prediction task. Unlike them, we focus on SR problems on graphs and, inspired by symbolic regression (Udrescu & Tegmark, 2020), propose to search for the Pareto-optimal message-passing flows, which are both accurate and simple and can benefit the learning of symbolic models. Pareto-optimal Search. The previous Pareto-optimal solutions proposed in the Neural Architecture Search (NAS) area (Lomurno et al., 2021; Lu et al., 2020; Dong et al., 2018) focus on finding the model architecture with both high prediction accuracy and low inference latency, which does not meet the requirements for solving the graph SR problem. Instead, our proposed method is based on a novel insight in the SR scenario: the performance is similar when the message-passing flow (skeleton) is a super-structure of the ground-truth one. In contrast, the performance degrades a lot if it is a sub-structure of the ground-truth one. 5 CONCLUSION In this paper, we generalize the problem in Cranmer et al. (2020) by learning the formula skeleton rather than manually designing it, which is crucial for learning formulas in a new physical area without much prior knowledge. We propose a new SR method that first transforms the discovery of the formula skeleton into the search for the Pareto-optimal message-passing flow with accuracy and compactness, and then symbolizes its message functions to obtain the underlying formula. We conduct experiments on five datasets from three different physical domains, including mechanics, electricity, and thermology, demonstrating that our method can consistently learn plausible formulas describing the graph-structured physical mechanisms. Furthermore, to show that our model is practical for learning unknown formulas in the real world, we conduct experiments on two real-world datasets about pedestrian dynamics, where it learns different formulas with explicit physical meanings for different scenarios, more precisely than mainstream empirical formulas. ACKNOWLEDGEMENT This work was supported in part by the National Key Research and Development Program of China under 2020YFA0711403, the National Nature Science Foundation of China under 61971267, U1936217, 61972223, 62171260. Q. Yao was in part supported by NSFC (No. 92270106) and CCF-Baidu Open Fund. A EXPERIMENTS A.1 DATA GENERATION Besides the scenarios of mechanics and electricity, we further illustrate the scenario of thermology as follows. Example 4 (Thermology: Heat Conduction) The objective of this problem is to compute the entropy production rate. The edge update function ϕe corresponds to Fourier's Law of Heat Conduction, and the node update function ϕv corresponds to the Clausius entropy expression, followed by an aggregation ρv→u that sums up individual entropy production rates. In different scenarios, we devised different inputs and computed the theoretical outcome according to the known mechanisms.
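As a minimal illustration of this kind of data generation, here is a sketch for the electricity case described below; the edge probability p_edge and the use of NumPy's default generator are assumptions for illustration:

```python
import numpy as np

def generate_circuit_sample(n_nodes: int = 10, p_edge: float = 0.3, rng=None):
    """Generate one resistor-circuit sample: a random topology, node potentials
    drawn from a standard normal, resistances uniform in [0.01, 1.01], and the
    overall power computed with Joule's law, P_ij = (V_i - V_j)^2 / R_ij."""
    rng = rng or np.random.default_rng()
    potentials = rng.standard_normal(n_nodes)
    edges, resistances = [], []
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if rng.random() < p_edge:  # random topology
                edges.append((i, j))
                resistances.append(rng.uniform(0.01, 1.01))
    total_power = sum((potentials[i] - potentials[j]) ** 2 / r
                      for (i, j), r in zip(edges, resistances))
    return potentials, edges, resistances, total_power
```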
In the scenario of Mechanics, we randomly (standard normal distribution) set the (x, y) coordinates of the particles in the 2-D cases. We assign the masses of the particles randomly according to the log-normal distribution. The original lengths of the springs are all set to 1. In the complex case, external forces with all dimensions following the standard normal distribution are exerted. The graph topology is picked randomly. We then compute the acceleration of each particle with Hooke's Law, the independent action principle of force, and Newton's Second Law of Motion. In the scenario of Electricity, we randomly give a topology on the graph and set the electric potentials of the nodes following the standard normal distribution. The resistances of the resistors on the edges are chosen uniformly at random from 0.01 to 1.01 to avoid extremely large power outputs. We then compute the power of each edge (resistor) according to Joule's Law and add them up to reach the overall power of the resistor circuit. In the scenario of Thermology, the graph topology is given as a 'grid', echoing the core idea of Finite Element Analysis. We randomly set the temperature of each node between 0 and 1 and the thermal conductivity between 1 and 3 globally. We then compute the discrete Laplacian on the grid and the heat flow according to Fourier's Law of Heat Conduction. With each node's heat flow and temperature, we compute their entropy production rates separately and add them up to reach the overall entropy production rate. The basic information of the datasets we used is listed in Table 3. A.2 REPRESENTATIVE SNAPSHOTS OF PEDESTRIAN DATASETS To better understand the pedestrian scenarios, we show two representative snapshots of the two pedestrian datasets in Figure 8: unidirectional flow in a corridor and bidirectional flow in a corridor. A.3 BASELINE DETAILS The message-passing flows of the baselines, SymDL and FullGN, are shown in Figure 9, and their applicability in different scenarios is demonstrated in Table 4. A.4 DETAILS OF SOCIAL FORCE MODEL In the social force model, the baseline model for pedestrian scenarios, the dynamics of pedestrians are driven by two factors: (a) a pedestrian is attracted by his/her destination with force F_Di = (v_di e_i − v_i)/τ, with e_i = (x_d − x_i)/∥x_d − x_i∥, where v_di is the value of the desired velocity, v_i is the current velocity, τ is the relaxation time, and e_i is the unit vector toward the destination; (b) a pedestrian is repulsed by nearby ones with a force F_ij = A_i exp(−r_ij/B_i) e_nij, where F_ij is the repulsive force and r_ij is the distance between pedestrians i and j. The joint force is F_i = F_Di + ∑_{j∈N_i} F_ij, where N_i means the set of pedestrians whose distance to pedestrian i is less than 5 meters. The social force model is widely used as the foundation of much commercial software such as viswalk2 and anylogic3. In this paper, we assume that the mass of a pedestrian is 1, and thus a_i = F_i/m = F_i. However, on the one hand, the social force model is manually designed, which may have discrepancies with real-world pedestrian dynamics. On the other hand, different scenarios usually have very different pedestrian interaction mechanisms, which a single model cannot precisely capture. So it is meaningful to learn data-driven formulas to describe different pedestrian interaction mechanisms. A.5 IMPLEMENTATION We implement our model in Python using the PyTorch library and optimize all the models with the Adam optimizer (Kingma & Ba, 2015).
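A minimal sketch of this training setup, using the hyper-parameters listed in Appendix A.6; the ReLU activations and the illustrative input/output dimensions are assumptions:

```python
import torch
import torch.nn as nn

def make_mlp(in_dim: int, out_dim: int, hidden: int = 16, n_layers: int = 4) -> nn.Sequential:
    """MLP used for the message/update functions; 4 layers with hidden
    size 16 follow the hyper-parameters in Appendix A.6."""
    dims = [in_dim] + [hidden] * (n_layers - 1) + [out_dim]
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

# Minimal training step with Adam (learning rate as in Appendix A.6).
model = make_mlp(in_dim=4, out_dim=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```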
We use parallel symbolic regression in Python (PySR)4 (Cranmer, 2020) to extract formulas from each message function ϕ. A.6 PARAMETER SETTINGS For the DL part, we set the learning rate to 10^-4, the tolerance in early stopping to 10, the number of layers and the embedding size in the MLP to 4 and 16, the maximum number of epochs to 20000, and the weight λ to 0.1. The choice of the parameter λ is analyzed in Appendix A.8. For the traditional SR part, our candidate symbols include both the binary operators {+, −, ×, /} and the unary operators {sign, exp}, and we set the batch size to 1024. 2https://www.myptv.com/en/mobility-software/pedestrian-simulation-software-ptv-viswalk 3https://www.anylogic.com/features/libraries/pedestrian-library/ 4https://github.com/MilesCranmer/PySR Table 4: The applicability in different scenarios.
Method   Mechanics (simple)   Mechanics (complex)   Electricity   Thermology (simple)   Thermology (complex)
SymDL    √                    √                     ×             ×                     ×
Ours     √                    √                     √             √                     √
Figure 10: The searched best score vs. searching time for two different searching methods in the circuit scenario. A.7 COMPARING HIERARCHICAL PRUNING WITH RANDOM SEARCH Furthermore, to demonstrate the effectiveness of our search method, we compare it with the random searching strategy and plot their searching processes in Figure 10. From that, we can observe that our method is much more efficient than the random search algorithm, which suggests that the searching problem is difficult and that our method effectively reduces the original colossal search space. Specifically, even if the random search algorithm takes ten times longer than ours, the score of the best-searched skeleton is still 4.5 times worse than ours, and the searched skeleton is wrong. A.8 PARAMETER SENSITIVITY One of the most critical parameters in our model is the weight λ that balances the complexity and the errors. We normalize the input-output pairs to make the outputs have a standard deviation of 1. Specifically, for each dimension of the features, we divide the features by their standard deviation, keeping the fitting errors at similar magnitudes. Since the complexities of different skeletons are also of similar magnitudes, the best value of λ is similar for different datasets. As shown in Table 5, we test different values of λ on three diverse scenarios, including the circuit scenario and two mechanical scenarios. For each value of λ, we use ten different seeds to train the model to test whether our model can learn the correct message-passing flows and the correct formulas. We choose the best formula among the ten formulas, so all these values of λ allow us to find the correct formula in different scenarios. Among them, we choose λ = 0.1 for achieving the highest success rate among all three scenarios. A.9 DEMONSTRATION OF DESIGN PHILOSOPHY To demonstrate the design philosophy of our method, we plot several learning curves of different message-passing flows in the simple mechanical scenario. We first plot the learning curves of three different message-passing flows in Figure 11, corresponding to the one lacking a necessary message-passing connection, the ground-truth one, and the one with a redundant message-passing connection, respectively. We find that the ground-truth message-passing flow and the one with a redundant message-passing connection show only small variations in performance (the RMSE is around 0.1 and the MAPE is around 1.0) after the loss function converges. However, the performance of the message-passing flow lacking necessary connections decreases significantly (the RMSE is about 1.4, and the MAPE is 4.0).
We further test the impact of the embedding size on the performance, where the learning curves are shown in Figure 12. From that, we can see that when the embedding size is less than 2, the performance decreases significantly (RMSE is more than 1.4), while the performance is similar (RMSE is around 0.2) when the embedding size is at least 2. Last, we test whether the softmax function can learn the ground-truth aggregator. As we can see in Figure 13, the learning curves with softmax and with the ground-truth aggregator are quite similar, and we verify that the learned aggregator is the same as the ground truth. B METHOD DETAILS B.1 USAGE OF THE PROPOSED METHOD The result obtained by our method is fairly stable in terms of both searching the formula skeleton and symbolizing the learned neural networks. First, our designed searching method can guarantee that only better graph structures (message-passing flows) get selected during the learning process. Based on the insightful observation that the loss increases a lot when the graph structure is a subset of the ground truth instead of a superset, we propose to search the four components of the graph structure (block, layer, connection, dimension) by starting with a full structure and then pruning to obtain a compact one. Second, the stability of SR results has also been demonstrated by applications in Cranmer et al. (2020). When applied to a new physical domain, we can directly reuse the DL-related hyper-parameters (such as the learning rate and embedding size of the MLP) from old domains and slightly tune the search-related hyper-parameter (weight λ) near the previous optimal value on old domains. The stability of the proper λ value is demonstrated in Table 5. Due to the randomness of DL training and the genetic algorithm, we train a model ten times with different seeds to get ten formulas and select the Pareto-optimal formula according to the score (see Appendix B.6) as the final formula. Overall, the above process in a new domain does not require heavy human labor. B.2 TRAINING ALGORITHM The training algorithm of our model is summarized in Algorithm 1. B.3 DETAILS OF SYMBOLIZING THE AGGREGATION/MESSAGE FUNCTIONS Symbolize Aggregation Function ρ. We choose several commonly used aggregators as candidates, including sum, mean, and max, while other candidate aggregators such as min can be achieved by min(x) = −max(−x), and root mean square can be represented by (mean(x²))^{1/2}. Harmonic mean and ℓ-norm can also be decomposed into two ϕ functions and a mean aggregator. These operations can be learned by ϕ. Algorithm 1 The process of obtaining the graph-structured symbolic model. Require: D including training data {(X, Y, G)}; 1: (start stage 1) search the message-passing flow following Section 2.3; 2: step 1: search the Pareto-optimal block number; 3: step 2: search the required layers; 4: step 3: search the necessary input variables; 5: step 4: search the Pareto-optimal embedding sizes; 6: train the model with the Pareto-optimal message-passing network; 7: (start stage 2) symbolize the aggregation functions ρ and message functions ϕ; 8: replace each ρ by the aggregator with the largest weight; 9: train the model and record the input-output pairs of each ϕ; 10: replace each ϕ by a formula obtained by classic SR, with constants left to be fitted on the data; 11: fit the correct constants from the data by gradient descent and get the final graph-structured symbolic model. Symbolize Message Function ϕ.
After training the GNN model, we use symbolic regression tools to extract a symbolic model corresponding to the message and update functions. Specifically, for the message functions in the neural network, we record their input-output pairs and apply classic symbolic regression tools to symbolize them. Retrain and Fine-tune Constants. Finally, to eliminate accumulated errors, we set all constants in the entire formulas as parameters to be trained and fine-tune them to get the final graph-structured mechanism represented by symbolic models. Specifically, after extracting the message-passing flows, we replace the MLPs in the message/update functions with the corresponding formulas. We then set the constants in the formulas as parameters in the deep model to optimize (all the operations in the symbolic models are differentiable). B.4 DETAILS OF ERRORS In this paper, we use RMSE as the error measure. RMSE can be calculated as RMSE = √( (1/n) ∑ᵢ₌₁ⁿ (yᵢ − ŷᵢ)² ), where ŷᵢ is the i-th predicted value, yᵢ is the i-th ground-truth value, and i = 1, · · · , n. B.5 DETAILS OF COMPLEXITY The design of the complexity measurement for message-passing flows is flexible. In this paper, we calculate the complexity as follows: (i) for each layer, the complexity can be calculated as the product of the embedding size of this layer and the number of inputs in this layer; (ii) the whole complexity of the message-passing flow can be calculated as the summation of the complexity of each layer. To illustrate, the complexities of the four message-passing flows in Figure 6 can be calculated as 2×3 + 2×3 + 2×1 = 14, 2×3 + 2×1 = 8, 2×3 + 1×1 = 7, and 2×1 + 1×1 = 3. As we can see, the complexity consistently decreases in the search process. B.6 DETAILS OF SCORE The score of a graph-structured formula is s = l + λc, where l is the RMSE and c is the complexity of the graph-structured formula, defined as the complexity of the message-passing flow multiplied by the average complexity of the component formulas. C DETAILED RELATED WORKS ON SR Martius & Lampert (2016) proposed a model named EQL and extracted the symbolic formulas with a neural network with symbolic models as building blocks. Kusner et al. (2017) managed to eschew the problem of discrete optimization by converting discrete data into a parse tree. AI Feynman (Udrescu & Tegmark, 2020; Udrescu et al., 2020) split the function into sub-functions and performed regression on each module separately. The partition of functions was achieved with a trained neural network. SymDL (Cranmer et al., 2020) exploited the inductive biases of the correspondence of physical mechanisms (kinematics, especially) to the GNN structure and established the method PySR to tackle the problem. They first revealed the link between GNNs and physical mechanisms. Unlike these works, our model can learn graph-structured physical mechanisms without requiring information about the formula skeletons, which can hardly be obtained in new physical scenarios for discovery.
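To complement B.1, here is a schematic sketch of the pruning idea from Section 2.3 applied to the layer-search step; `train_and_score` is a hypothetical helper standing in for training the DL model of a candidate flow and returning its score s = l + λc:

```python
def prune_layers(flow, train_and_score):
    """Schematic sketch of the pruning-based search: starting from a full
    structure, try deleting each layer and keep the deletion whenever the
    score does not get worse. `flow` is a list of layer identifiers."""
    best_score = train_and_score(flow)
    for layer in list(flow):
        candidate = [l for l in flow if l != layer]
        candidate_score = train_and_score(candidate)
        if candidate_score <= best_score:  # compactness did not hurt accuracy
            flow, best_score = candidate, candidate_score
    return flow
```

The same greedy loop applies to the input-connection and embedding-size steps, which is how the search cost stays linear in the number of candidates rather than exponential.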
1. What is the focus of the paper regarding symbolic regression on graph-structured data? 2. What are the strengths and weaknesses of the proposed method in terms of its generalization and problem setting? 3. Are there any concerns or questions regarding the presentation and clarity of the paper? 4. How does the reviewer assess the novelty and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper studies the problem of symbolic regression given the graph structure. It generalizes a previous approach (Cranmer et al. 2020) by additionally learning the formula skeleton. The method is separated into two steps: searching the Pareto-optimal message-passing flows and component-wise symbolic regression. Compared to (Cranmer et al. 2020), the second stage is not new, and the main novelty is to learn and prune the formula skeleton rather than manually designing one. Experiments on different scenarios demonstrate that the proposed method can learn better formulations and it has better prediction performance. Strengths And Weaknesses Strengths: It provides a good problem setting. Symbolic regression on graph-structured data is an interesting topic. The proposed method is generally intuitive and reasonable. It is easy to understand that not all flows are useful and pruning the flows to get a compact one is a good way to obtain a concise formula. Nice visualization and good experimental results. Weaknesses: The methodology of this paper basically follows (Cranmer et al. 2020). Its generalization seems essentially a combination of NAS and Symbolic Regression. It is useful, but slightly overstated. For most physical systems, we know the mechanism, the design of the "flow" seems definite, and there is no need to search for it. The authors showed one case in Appendix A.4, but it is still not so attractive, because the search space of the flows is not large. The choices of reasonable flows are actually limited. The definition of complexity and its weight may largely impact the results. The model is unstable. We may need to run it multiple times to get the best formula. Presentation can be improved. For example, A.4 is important to demonstrate the advantage of the method (as described in the conclusion); I believe it should be put in the main body of the paper instead of the appendix. Figure 3 is described in a weird place. In fact, Sec 2.3 only describes Fig 3(a), not the whole framework. There are also some other unclear points as below. Clarity, Quality, Novelty And Reproducibility Questions/comments about clarity: I do not understand why there are multiple numbers of blocks and how they lead to the final formula. From my understanding, if \phi_e occurs in different blocks, the final formula will also contain two \phi_e operations, but it seems the learned formulas in Figure 4 are quite concise and do not have such overlaps. Definition 2: what is V', E'? Do you mean y is from a different graph? From my understanding y should be from the same graph but does not have overlap with X_i, i.e. y_i \in {V, E, u}/X_i. typo: "the following bi-level the optimization" --> "the following bi-level optimization"
ICLR
Title Learning Symbolic Models for Graph-structured Physical Mechanism Abstract Graph-structured physical mechanisms are ubiquitous in real-world scenarios, thus revealing the underlying formulas is of great importance for scientific discovery. However, classical symbolic regression methods fail on this task since they can only handle input-output pairs that are not graph-structured. In this paper, we propose a new approach that generalizes symbolic regression to graph-structured physical mechanisms. The essence of our method is to model the formula skeleton with a message-passing flow, which helps transform the discovery of the skeleton into the search for the message-passing flow. Such a transformation guarantees that we are able to search for a message-passing flow that is efficient and Pareto-optimal in terms of both accuracy and simplicity. Subsequently, the underlying formulas can be identified by interpreting component functions of the searched message-passing flow, reusing classical symbolic regression methods. We conduct extensive experiments on datasets from different physical domains, including mechanics, electricity, and thermology, and on real-world datasets of pedestrian dynamics without ground-truth formulas. The experimental results not only verify the rationale of our design but also demonstrate that the proposed method can automatically learn precise and interpretable formulas for graph-structured physical mechanisms. 1 INTRODUCTION For centuries, the development of the natural sciences has been based on human intuition to abstract physical mechanisms represented by symbolic models, i.e., mathematical formulas, from experimental data recording the phenomena of nature. Among these developments, many mechanisms are naturally graph-structured (Leech, 1966), where the physical quantities are associated with individual objects (e.g., mass), pair-wise relationships (e.g., force), and the whole system (e.g., overall energy), corresponding to three types of variables on graphs: node/edge/global variables. For example, as shown in Figure 1(a), the mechanical interaction mechanism in the multi-body problem corresponds to a graph with masses (mi) and positions (V⃗i) as attributes of nodes, and spring constants (kij) as attributes of edges, which, together with the graph connectivity, yields the acceleration as output attributes of nodes; while in the case of a resistor circuit, nodes and edges correspond to voltages and resistances, respectively, and these attributes define a graph-level overall power of the circuit. In the past few years, Symbolic Regression (SR) (Sahoo et al., 2018; Schmidt & Lipson, 2009; Udrescu et al., 2020), which searches symbolic models y = F(x) from experimentally obtained input-output pairs {(x, y)} with F being an explicit formula, has become a promising approach for automating scientific discovery. Traditional SR methods include genetic programming-based methods (Schmidt & Lipson, 2009; Fortin et al., 2012), working by generating candidate formulas through "evolution" (i.e., manipulations), and deep learning-based methods (Li et al., 2019; Biggio et al., 2021; Zheng et al., 2021), utilizing sequence models to generate candidate formulas. However, these methods are designed for traditional SR problems on input-output pairs {(x, y)} without considering graph information.
To exploit the inherent graph structure in physical mechanisms, as shown in Figure 1(b), SR on graphs aims to find a formula F that characterizes a mapping from input {G, X} to output y, with X and y both inside the graph structure G. To perform this, we need both to fully exploit the inherent graph structures of physical mechanisms and to achieve flexibility regarding the diverse forms of interaction between entities in the physical world. Graph Neural Network (GNN) has recently been incorporated into SR for discovering mechanisms behind particle interactions (Cranmer et al., 2020; Lemos et al., 2022). However, an obvious setback exists: the message-passing flow of the GNN, which corresponds to the formula skeleton, has to be manually designed to learn the underlying mechanisms. This is impractical because the formula skeletons usually remain unknown and differ significantly across physical domains, as shown in Figure 1(c). To solve this problem, inspired by the correspondence between the skeleton and the message-passing flow in GNN, our core idea is to transform the discovery of the skeleton into the search for the message-passing flow, which paves the way for identifying the underlying formula by interpreting each component function in the searched message-passing flow. However, due to the coupling between the skeleton and the component formulas in the skeleton, neither of them can be independently identified, implying a vast, highly entangled search space for both the message-passing flow and the component functions. To tackle this challenge, we formulate a bi-level optimization problem that searches for the message-passing flow with a pruning strategy at the upper level, on condition that its component functions have been optimized with deep learning (DL) at the lower level. Besides empirical accuracy, it is equally vital but non-trivial to maintain explicit interpretability and generalization ability in the discovered formulas. We propose to search for the message-passing flow that is Pareto-optimal between accuracy and simplicity by carefully designing a scoring function, involving a complexity function of message-passing flows, that optimizes both aspects across different searching steps. Our contributions can be summarized as the following three aspects, • We generalize the problem of learning formulas with given skeletons (inductive bias) from graph data in Cranmer et al. (2020) by additionally learning the formula skeleton from data, which is essential for learning graph-structured physical mechanisms from diverse physical domains. • We propose a novel method to learn graph-structured physical mechanisms from data without knowing the formula skeleton by searching the Pareto-optimal message-passing flows of GNN together with the symbolic models as components. • We conduct experiments on five datasets from diverse physical domains, including mechanics, electricity, and thermology, and on two real-world datasets about pedestrian dynamics, demonstrating that our model can first automatically identify the correct skeleton based on collected data instead of expert knowledge and then learn the overall symbolic model for the corresponding graph-structured physical mechanism. 2 THE PROPOSED METHOD Before introducing the proposed method, we first formally define the problem of symbolic regression on graphs.
Definition 1 (Variables on Graphs) The topology of the graph is denoted as G, and its variables include {V, E, u}, where V denotes the set of node-level variables, E denotes the set of edge-level variables, and $u \in \mathbb{R}^{n_u}$ denotes a global variable which interacts with elements in V and E through the topology G. Specifically, $v_i \in \mathbb{R}^{n_v}$ is the variable associated with the i-th node, while $e_{ij} \in \mathbb{R}^{n_e}$ is the variable associated with the edge connecting the i-th and j-th nodes. $n_u$, $n_v$, and $n_e$ denote the dimensions of the global, node, and edge variables. Definition 2 (Symbolic Regression on Graphs) Given a set of $\{(G_i, X_i, y_i)\}$, where $X_i \subset \{V, E, u\}$ are the known variables, $y_i \in \{V', E', u'\}$ are the unknown variables, and variables with a prime denote output variables, we aim to find an accurate and compact formula F(·) that fits y = F(G, X). 2.1 MODEL FORMULA SKELETON WITH MESSAGE-PASSING FLOW As shown in Figure 1(c), a formula F describing graph-structured physical mechanisms can always be decomposed into several formula components, each representing its association with an individual node, pair-wise relationships between nodes, or the whole system. As illustrated in the diagram, these components are interconnected according to the variable dependency, termed the "skeleton". Moreover, the differences between the two examples also indicate the existence of diverse skeletons underlying different physical mechanisms. The key insight is that the skeleton has a strong correspondence with the message-passing flow in a GNN, which differs a lot across physical scenarios, including mechanics (Sanchez-Gonzalez et al., 2020), electricity (Zhang et al., 2019), and thermology (Chamberlain et al., 2021). The message-passing flows can be diversified by cascading multiple blocks, changing/removing some functions, and adjusting the embedding sizes. A block of a full GNN contains the updating of the edge, node, and graph representations respectively as follows (Battaglia et al., 2018),

$$e'_{ij} = \phi^e(e_{ij}, v_i, v_j, u), \quad \bar{e}'_i = \rho^{e\to v}(E'_i), \quad v'_i = \phi^v(\bar{e}'_i, v_i, u),$$
$$\bar{e}' = \rho^{e\to u}(E'), \quad \bar{v}' = \rho^{v\to u}(V'), \quad u' = \phi^u(\bar{e}', \bar{v}', u), \quad (1)$$

where ϕ(·) denotes the message functions and ρ(·) denotes the aggregation functions. We provide three examples in different physical scenarios, as shown in Figure 2, to illustrate the well-defined analogy between message-passing flows and skeletons. Example 1 (Mechanics: Multi-body Kinematics) In this problem, we aim to find the particles' accelerations. We have an edge update function $\phi^e$ since particle pairs determine spring forces, while the aggregation function $\rho^{e\to v}$ over edges is based on the independent action principle of force. Example 2 (Electricity: Resistor Circuit) The objective of this problem is to find the overall power of a given resistor circuit. An edge update function $\phi^e$ corresponds to the computation of single-resistor power utilizing Joule's Law, and an aggregation $\rho^{e\to u}$ from edges to the global variable appears for summation to get the overall power. Example 3 (Pedestrian Dynamics: Collision Avoidance) In this problem, we aim to find the pedestrians' accelerations according to their positions and velocities. The formulas, including the skeletons, that describe this relationship can be diverse and highly depend on the pedestrian scenarios.
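To make the block structure in equation 1 concrete, the following is a minimal PyTorch sketch of one full message-passing block. The hidden width, the sum aggregators, and the tensor layout (class name FullGNBlock, edge_index) are illustrative assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn

class FullGNBlock(nn.Module):
    """One full message-passing block following equation (1): edge update
    phi_e, node update phi_v, global update phi_u, with sum aggregations
    for the rho functions (an illustrative choice)."""
    def __init__(self, ne, nv, nu, hidden=16):
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(ne + 2 * nv + nu, hidden),
                                   nn.ReLU(), nn.Linear(hidden, ne))
        self.phi_v = nn.Sequential(nn.Linear(ne + nv + nu, hidden),
                                   nn.ReLU(), nn.Linear(hidden, nv))
        self.phi_u = nn.Sequential(nn.Linear(ne + nv + nu, hidden),
                                   nn.ReLU(), nn.Linear(hidden, nu))

    def forward(self, edge_index, e, v, u):
        # edge_index: (2, #edges); e: (#edges, ne); v: (#nodes, nv); u: (1, nu)
        src, dst = edge_index
        # e'_ij = phi_e(e_ij, v_i, v_j, u)
        e_new = self.phi_e(torch.cat([e, v[src], v[dst],
                                      u.expand(e.size(0), -1)], dim=-1))
        # rho_{e->v}: aggregate updated edges per receiving node
        agg = torch.zeros(v.size(0), e_new.size(1),
                          dtype=e_new.dtype).index_add_(0, dst, e_new)
        # v'_i = phi_v(agg_i, v_i, u)
        v_new = self.phi_v(torch.cat([agg, v, u.expand(v.size(0), -1)], dim=-1))
        # rho_{e->u} and rho_{v->u}, then u' = phi_u(e_bar, v_bar, u)
        u_new = self.phi_u(torch.cat([e_new.sum(0, keepdim=True),
                                      v_new.sum(0, keepdim=True), u], dim=-1))
        return e_new, v_new, u_new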
2.2 TRANSFORMING INTO THE TASK OF MESSAGE-PASSING FLOW SEARCHING The message-passing flows of a GNN correspond to explicit meanings in the symbolic calculation for graph-structured mechanisms, as summarized in Table 1. This strong resemblance inspires us to transform the original SR task on graphs into the more practical task of searching for message-passing flows. Our model has two stages: message-passing flow searching and message-passing flow-based SR. Specifically, at stage 1, we search for the message-passing flow serving as the formula skeleton. Then, at stage 2, we symbolize the components into formulas and cascade them according to the skeleton to obtain the final graph-structured mechanism. With the above transformation, it is clear that we need to solve the following bi-level optimization problem in stage 1, i.e.,

$$\{P^*, M^*\} = \arg\min_{M, P} s(P; M), \quad (2)$$

where M denotes the message-passing flow, P denotes the parameters of the DL components, $M^*$ and $P^*$ denote the Pareto-optimal ones, and s(·) gauges how well the finally learned formula obtained from M and P performs. However, there are two core challenges in learning formulas on graphs: (i) considering simplicity and accuracy simultaneously for graph SR is difficult; (ii) the discrete search space composed of skeletons and component formulas is prohibitively huge. 2.3 SEARCHING MESSAGE-PASSING FLOWS M (FORMULA SKELETONS) To deal with the first challenge, we rewrite equation 2 as

$$M^* = \arg\min_M s(P^*; M), \quad (3)$$
$$\text{s.t.} \quad P^* = \arg\min_P l(P; M), \quad (4)$$

where equation 3 and equation 4 are the optimization problems at the upper and lower levels, $s(P^*; M) = l(P^*; M) + \lambda c(M)$ is the score taking both simplicity and accuracy into consideration, l(·) denotes the error loss of predicting the outputs (see details in Appendix B.4), λ is the weight, and c(·) denotes the complexity of the message-passing flow. The design of the complexity c(·) is flexible (see details in Appendix B.5); we calculate the complexity as follows: (i) for each layer (corresponding to a function in $\{\phi^u, \phi^v, \phi^e\}$), the complexity is the product of the embedding size of this layer and the number of inputs to this layer; (ii) the whole complexity of the message-passing flow is the summation of the complexities of all layers. The optimization at the lower level is solved by training the parameters P of a DL model given the structure M, while the optimization at the upper level w.r.t. M is difficult because M forms a huge discrete search space including the number of blocks, the number of message-passing layers, the connections, and the embedding sizes. Another insight that helps with the second challenge is that if the message-passing flow is a super-structure of the ground-truth one, i.e., redundant computations are done, the loss varies only subtly; but if the message-passing flow is a sub-structure of the ground truth, i.e., some necessary computations are missing, the loss jumps up by an order of magnitude. This observation, together with equation 3, motivates us to first search for an initial message-passing flow that is a super-structure of the ground truth and then prune it to obtain a compact yet expressive message-passing flow. The framework of our model is shown in Figure 3(a); it sequentially searches the number of blocks, the number of layers, the connections, and the embedding sizes in a hierarchical way, and the four steps are detailed below.
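All four steps repeatedly evaluate the score of equation 3. As a minimal sketch, assuming each layer is summarized by a small dict (an illustrative data structure; flow_complexity and score are hypothetical helper names) and λ = 0.1 as in Appendix A.6:

def flow_complexity(layers):
    """Complexity c(M): for each layer, embedding size times the number of
    inputs; the flow complexity is the sum over layers (Appendix B.5)."""
    return sum(layer["emb_size"] * layer["num_inputs"] for layer in layers)

def score(rmse, layers, lam=0.1):
    """Score s = l + lambda * c of equation 3, balancing accuracy (RMSE)
    and simplicity (flow complexity)."""
    return rmse + lam * flow_complexity(layers)

# Example: the flow with complexity 2*3 + 2*3 + 2*1 = 14 from Appendix B.5
layers = [{"emb_size": 2, "num_inputs": 3},
          {"emb_size": 2, "num_inputs": 3},
          {"emb_size": 2, "num_inputs": 1}]
assert flow_complexity(layers) == 14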
Step 1: Search Message-Passing Blocks. First, we need to find a message-passing flow of which the ground-truth message-passing flow is a sub-structure. To achieve this goal, we first stack several full message-passing blocks. By optimizing the score in equation 3, we find the Pareto-optimal number of message-passing blocks, where we take the number of blocks as the complexity in equation 3 and the RMSE between the predicted value and the ground truth as the loss. Step 2: Search Message-Passing Layers. As mentioned, a full message-passing block contains three layers corresponding to the updates of the edge, node, and graph representations. However, not all of them are necessary for obtaining the output. To find the most compact set of layers, we try to delete each layer and check whether the score in equation 3 increases or decreases, where we define the complexity as the number of layers and the RMSE as the loss. Our pruning-based searching method relies on the insight above, which makes it much more efficient than brute-force search: it decreases the computational cost from O(2^n) to O(n), where n is the number of initial layers. Step 3: Search Necessary Inputs. We further filter out the useless inputs of each layer. Specifically, we adopt a strategy similar to the previous searching step: try to delete each input and check whether the score in equation 3 rises or drops, where the complexity is the number of connections and the loss is the RMSE. As in step 2, this decreases the computational cost from O(2^n) to O(n), where n is the number of initial inputs. Step 4: Search Embedding Sizes. To ensure that the embedding in each layer is compact and has explicit physical meaning, we use the score in equation 3 to find the Pareto-optimal embedding size for each embedding, where the complexity is defined as the embedding size and the RMSE defines the loss. We reduce each embedding size step by step to find the size with the best score, while fixing the other embedding sizes at a large enough number to ensure that the information bottleneck can only be caused by the embedding size being searched. 2.4 THE LEARNING PROCEDURE After obtaining the message-passing flow $M^*$ and the parameters of the DL component functions $P^*$ in the first stage, we follow Cranmer et al. (2020) to symbolize each DL component into a formula and then cascade them according to the skeleton represented by $M^*$ into a whole formula in the second stage, as shown in Figure 3(b). For the aggregation functions ρ, which correspond to set functions, (i) we choose several commonly used aggregators as candidates, including sum, mean, and max, while other aggregators can be generated from them, and we replace the softmax-weighted combination with the aggregator receiving the largest weight; (ii) we perform SR on input-output pairs from the trained GNN component functions; (iii) we fine-tune all constants in the overall function (given by cascading the component functions), thereby avoiding the accumulation of errors.
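The greedy try-delete procedure of Steps 2 and 3 can be sketched as follows, where evaluate_score is assumed to solve the lower-level problem (training the DL components for a candidate flow) and return the score of equation 3, so each call is expensive:

def prune(elements, evaluate_score):
    """Greedy pruning (Steps 2-3): tentatively delete each element (a layer
    or an input connection) and keep the deletion whenever the score of
    equation 3 does not increase; O(n) evaluations instead of O(2^n)."""
    kept = list(elements)
    best = evaluate_score(kept)
    for el in list(kept):
        candidate = [e for e in kept if e is not el]
        s = evaluate_score(candidate)
        if s <= best:  # deletion keeps or improves the score: accept it
            kept, best = candidate, s
    return kept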
3 EVALUATION 3.1 EXPERIMENT ON CLASSICAL PHYSICAL SCENARIOS Dataset. We utilize five datasets from different physical domains to demonstrate that our model has the ability to rediscover well-known graph-structured physical mechanisms, as introduced in Section 2.1 and Appendix A.1. We provide two cases of mechanics scenarios, one electricity scenario, and two thermology scenarios. For both the mechanics and thermology scenarios, there are two selected cases with different complexity, where the content listed in parentheses is associated with the more complex scenario. Detailed information about the formulas and data generation is reported in Appendix A.1. Metrics. Given the same input, we use the coefficient of determination $R^2$, indicating the proportion of the output variation that can be predicted from the input variables. Specifically, it is calculated from the output of the distilled formula and the output of the ground-truth formula to measure whether the learned formula is accurate enough. $R^2$ is computed as

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}, \quad \text{where } \bar{y} = \frac{1}{n}\sum_i y_i.$$

Comparing Methods. We compare our model with learning symbolic models from deep learning with inductive bias (SymDL) (Cranmer et al., 2020), to demonstrate that our model is flexible in more scenarios, and with a variant of our model that uses a full graph network without pruning-based searching (FullGN) for the ablation study (a 1-layer full GNN with non-existent inputs and non-required outputs removed). The message-passing flows of SymDL and FullGN are shown in Appendix A.3. Plausibility Comparison with Baselines. We first compare the applicability of our method with the SOTA baseline (Cranmer et al., 2020). As listed in Table 4, our method can be applied to all five cases from the three physical scenarios, including mechanics, electricity, and thermology, while the baseline fails in the last two scenarios due to an incorrect message-passing flow: their message-passing flow is designed explicitly for Newton-force interaction in the simple mechanical scenario and is not flexible enough for the other scenarios. We design two cases in the mechanics scenario: calculating the acceleration and the relative acceleration in the center-of-mass frame. SymDL is designed for handling formula discovery in the simple case, with a specified message-passing flow. Comparatively, our method moves forward to a more general and challenging setting without specifying a message-passing flow representing the formula skeleton. To ascertain the correctness of the learned formulas, besides the baseline, we further introduced a variant of our method without searching the message-passing flow, i.e., directly using the full message-passing block in the first stage. As shown in Figure 4, our model achieved the same performance as the two baselines in the simple mechanics case. In the complex case, our model outperformed the SOTA baseline and the variant of our model by a large margin w.r.t. the $R^2$ metric. Specifically, these two competitors both failed and obtained a rather low $R^2$, while our method rediscovered the correct formula with $R^2 = 0.917$, indicating the advantage of searching for the correct message-passing flow. For our model, the difference between the formulas in the two cases lies in the latter two terms corresponding to the additional message-passing flows $V \to V'$ and $V \to u' \to V'$, which SymDL cannot handle. The formulas learned by the baselines are wrong for lack of necessary dependencies; they fail to have physical meaning and differ largely from the ground truth. Our problem setting also differs from that of SymDL, which requires prior knowledge of the formula skeleton for designing the deep learning architecture; such knowledge is almost impossible to obtain in new real-world scenarios. For the remaining three cases that the SOTA baseline cannot handle, as shown in Figure 4, the performance gain over the variant using the full message-passing flow indicates that optimizing the Pareto-optimal score is essential for obtaining correct formulas, being less subject to redundant message-passing flows that hinder the subsequent SR process, including unnecessary inputs and redundant computation steps.
The detailed searching process of the electricity case is analyzed in Section 3.1. For the complex case of thermology, it can be observed that the learned formula successfully captures the effect of externally conducted heat compared with the simple case, while the baselines fail to have physical meanings due to unnecessary inputs and redundant computation steps. Although the change is slight (whether external heat conduction exists), the skeleton, and consequently the entire formula, are quite different. To be precise, the message-passing flows learned by our model are shown in Figure 5. Besides, the time cost of each independent part is shown in Table 2, where we can observe that searching message-passing flows takes only a small part of the whole procedure, and our model's running times are similar to SymDL's and usually shorter than FullGN's. Furthermore, we conduct experiments to demonstrate the design philosophy of our method, which are reported in Appendix A.9. Qualitative Results for Understanding the Searching Process. We show the searching process of the message-passing flow in Figure 6, where the upper row shows the learning curve in terms of error (RMSE), complexity, and the score, which is a weighted summation of error and complexity. From Figure 6, we observe that if the message-passing flow is a sub-structure of the ground-truth message-passing flow, the performance drops significantly. On the other hand, message-passing flows with redundant layers/inputs/embedding sizes have similar performance, echoing the rationale of our pruning strategy. The core idea is to search for the most compact message-passing flow that is still expressive enough, and the four-step searching process is as follows: (i) the model tried 1 to 3 blocks, finding similar errors with rising complexity, so it opted for 1; (ii) starting from the message-passing flow searched at the previous stage, it tried to delete every layer associated with ϕ; deleting the edge layer caused a huge error increase, so the edge layer was preserved, after which it tried to delete the node layer and found that the score improved (the error did not change much while the complexity decreased), so the node layer was deleted; (iii) as in the previous stage, it tried to delete each input and found that only deleting the $V \to \phi^u$ connection did not cause an error increase, so this connection was deleted; (iv) finally, it tried to compress each representation and found that the score was minimized at embedding size 1, so 1 was chosen as the embedding size. After the whole process, the message-passing flow, including the embeddings (intermediate variables), functions, and topology, has explicit physical meanings, paving the way for symbolic regression. 3.2 EXPERIMENTS ON REAL-WORLD SCENARIOS OF PEDESTRIAN DYNAMICS To better show how our model discovers unknown graph-structured physical mechanisms in the real world, we conduct carefully-designed experiments on formula discovery for pedestrian dynamics. Problem Formulation. We aim to find a formula that approximately describes the relationship between the acceleration a and the velocity v, the pedestrian position x, and the destination position x_dest. The graph G describes the interaction relationships and is constructed as follows: when two pedestrians' distance is less than R, they are connected; otherwise, they are unconnected. Formally, the problem is to find a formula F that fits a = F(G, x, v, x_dest).
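A minimal numpy sketch of this graph construction:

import numpy as np

def build_interaction_graph(x, R):
    """Adjacency matrix of the pedestrian interaction graph G:
    pedestrians i and j are connected iff ||x_i - x_j|| < R (no self-loops)."""
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    A = (dist < R).astype(float)
    np.fill_diagonal(A, 0.0)
    return A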
Datasets. We conduct experiments on two real-world datasets of crowd trajectories taken from experimental studies of pedestrian dynamics (Boltes & Seyfried, 2013; https://ped.fz-juelich.de/database/doku.php), including the following scenarios: (i) Unidirectional flow in a corridor: a group of people goes through a corridor in the same direction, as shown in Figure 8(a); (ii) Bidirectional flow in a corridor: a group of people goes through a corridor in opposite directions, as shown in Figure 8(b). Comparing Models. For pedestrian dynamics, a well-known manually-designed model is the social force model (Helbing & Molnar, 1995), in which pedestrians are subject to two forces: an attractive force from the destination and a repulsive force from the surrounding pedestrians and obstacles (refer to Appendix A.4 for details). Learned Formulas. The learned formulas and the corresponding physical meanings are reported in Figure 7, which demonstrates that our model can learn different skeletons and formulas that are more precise than the social force model, with explicit physical meanings. The performance comparison is also reported in Figure 7, where we can observe that our model achieves about a 10% improvement over the social force model. 4 RELATED WORKS Symbolic Regression (SR). Distilling elegant symbolic expressions from vast experimental data has always been the mainstream method for finding new formulas and verifying hypotheses throughout the history of physics. SR is a classic topic (Schmidt & Lipson, 2009; Petersen et al., 2020; Biggio et al., 2021; Guimerà et al., 2020) that tries to emulate this process by learning an explicit symbolic model that describes the projection from input X to output y as accurately as possible while maintaining its compactness. Traditional methods for discovering formulas from data are primarily based on genetic programming (GP) (Schmidt & Lipson, 2009; Koza, 1994; Worm & Chiu, 2013). Hitherto, promising results have been yielded by GP-based SR methods such as Burlacu et al. (2020), Virgolin et al. (2019), and the famous commercial SR tool Eureqa (Dubčáková, 2011). More recently, methods based on DL (Zheng et al., 2021; Qian et al., 2021; Martius & Lampert, 2016; Kusner et al., 2017; Udrescu & Tegmark, 2020; Udrescu et al., 2020; Daniele et al., 2022) have been introduced for symbolic regression with better expressive ability than GP. Furthermore, Cranmer et al. (2020) first proposed to learn graph-structured physical mechanisms (especially kinematics) given formula skeletons. Beyond that, we propose to search for the formula skeletons automatically, where existing SR methods can be exploited to look for the basic components of the whole formula. Graph Neural Network (GNN). GNNs (Kipf & Welling, 2017; Veličković et al., 2018; Gilmer et al., 2017) can be viewed in a message-passing manner (Battaglia et al., 2018; Veličković, 2022; Bronstein et al., 2017), and most of them can be summarized as message passing among three levels: the edge/node/graph level, while the message-passing flows and message/aggregation functions can be customized very differently based on the specific characteristics of the application.
GNNs have been widely used for physical systems by capturing the interaction mechanisms, e.g., simulating mechanical systems (Sanchez-Gonzalez et al., 2020; Huang et al., 2021; Sanchez-Gonzalez et al., 2018), designing circuits (Zhang et al., 2019; Ren et al., 2020), simulating heat conduction (Chamberlain et al., 2021; Xhonneux et al., 2020), and simulating pedestrian dynamics (Shi et al., 2023; Zhang et al., 2022). Furthermore, some works (You et al., 2020; Yoon et al., 2020; Cai et al., 2021; Gu et al., 2021) adopt automated machine learning techniques to search for the best GNN architecture for a specific prediction task. Unlike them, we focus on SR problems on graphs and, inspired by symbolic regression (Udrescu & Tegmark, 2020), we propose to search for the Pareto-optimal message-passing flow, which is both accurate and simple and can benefit the learning of symbolic models. Pareto-optimal Search. Previous Pareto-optimal solutions proposed in the Neural Architecture Search (NAS) area (Lomurno et al., 2021; Lu et al., 2020; Dong et al., 2018) focus on finding a model architecture with both high prediction accuracy and low inference latency, which does not meet the requirements of the graph SR problem. Instead, our proposed method is based on a novel insight in the SR scenario: the performance is similar when the message-passing flow (skeleton) is a super-structure of the ground-truth one, whereas the performance degrades a lot if it is a sub-structure of the ground-truth one. 5 CONCLUSION In this paper, we generalize the problem in Cranmer et al. (2020) by learning the formula skeleton rather than designing it manually, which is crucial for learning formulas in a new physical area without much prior knowledge. We propose a new SR method that first transforms the discovery of the formula skeleton into the search for the Pareto-optimal message-passing flow in terms of accuracy and compactness, and then symbolizes its message functions to obtain the underlying formula. We conduct experiments on five datasets from three different physical domains, including mechanics, electricity, and thermology, demonstrating that our method can consistently learn plausible formulas describing graph-structured physical mechanisms. Furthermore, to show that our model is practical for learning unknown formulas in the real world, we conduct experiments on two real-world datasets about pedestrian dynamics, where it learns different formulas with explicit physical meanings for different scenarios, more precisely than mainstream empirical formulas. ACKNOWLEDGEMENT This work was supported in part by the National Key Research and Development Program of China under 2020YFA0711403 and the National Nature Science Foundation of China under 61971267, U1936217, 61972223, 62171260. Q. Yao was in part supported by NSFC (No. 92270106) and the CCF-Baidu Open Fund. A EXPERIMENTS A.1 DATA GENERATION Besides the scenarios of mechanics and electricity, we further illustrate the scenario of thermology as follows. Example 4 (Thermology: Heat Conduction) The objective of this problem is to compute the entropy production rate. The edge update function $\phi^e$ corresponds to Fourier's Law of Heat Conduction, and the node update function $\phi^v$ corresponds to the Clausius entropy expression, followed by an aggregation $\rho^{v\to u}$ that sums up the individual entropy production rates. In different scenarios, we devised different inputs and computed the theoretical outcome according to the known mechanisms.
In the mechanics scenario, we randomly (standard normal distribution) set the (x, y) coordinates of the particles in the 2-D cases. We assign the masses of the particles randomly according to a log-normal distribution. The natural lengths of the springs are all set to 1. In the complex case, external forces whose dimensions all follow the standard normal distribution are exerted. The graph topology is picked randomly. We then compute the acceleration of each particle with Hooke's Law, the independent action principle of force, and Newton's Second Law of Motion. In the electricity scenario, we randomly choose a topology of the graph and set the electric potentials of the nodes following the standard normal distribution. The resistances of the resistors on the edges are chosen uniformly at random from 0.01 to 1.01 to avoid extremely large power outputs. We then compute the power of each edge (resistor) according to Joule's Law and add them up to obtain the overall power of the resistor circuit. In the thermology scenario, the graph topology is given as a grid, echoing the core idea of Finite Element Analysis. We randomly set the temperature of each node between 0 and 1 and the thermal conductivity between 1 and 3 globally. We then compute the discrete Laplacian on the grid and the heat flow according to Fourier's Law of Heat Conduction. With each node's heat flow and temperature, we compute their entropy production rates separately and add them up to obtain the overall entropy production rate. The basic information of the datasets is listed in Table 3. A.2 REPRESENTATIVE SNAPSHOTS OF PEDESTRIAN DATASETS To better understand the pedestrian scenarios, we show two representative snapshots of the two pedestrian datasets in Figure 8: unidirectional flow in a corridor and bidirectional flow in a corridor. A.3 BASELINE DETAILS The message-passing flows of the baselines, SymDL and FullGN, are shown in Figure 9, and their applicability in different scenarios is demonstrated in Table 4. A.4 DETAILS OF SOCIAL FORCE MODEL In the social force model, the baseline model for pedestrian scenarios, the dynamics of pedestrians are driven by two factors: (a) a pedestrian is attracted by his/her destination with force

$$F^D_i = (v^d_i e_i - v_i)/\tau, \quad e_i = \frac{x^d - x_i}{\|x^d - x_i\|},$$

where $v^d_i$ is the magnitude of the desired velocity, $v_i$ is the current velocity, τ is the relaxation time, and $e_i$ is the unit vector toward the destination; (b) a pedestrian is repulsed by nearby ones with a force

$$F_{ij} = A_i \exp(-r_{ij}/B_i)\, n_{ij},$$

where $F_{ij}$ is the repulsive force, $r_{ij}$ is the distance between pedestrians i and j, and $n_{ij}$ is the unit vector pointing from pedestrian j to pedestrian i. The joint force is

$$F_i = F^D_i + \sum_{j \in N_i} F_{ij},$$

where $N_i$ is the set of pedestrians whose distance to pedestrian i is less than 5 meters. The social force model is widely used as the foundation of much commercial software, such as PTV Viswalk (https://www.myptv.com/en/mobility-software/pedestrian-simulation-software-ptv-viswalk) and AnyLogic (https://www.anylogic.com/features/libraries/pedestrian-library/). In this paper, we assume that the mass of a pedestrian is 1, and thus $a_i = F_i/m = F_i$. However, on the one hand, the social force model is manually designed, which may introduce discrepancies with real-world pedestrian dynamics. On the other hand, different scenarios usually have very different pedestrian interaction mechanisms, which a single model cannot precisely capture. It is therefore meaningful to learn data-driven formulas to describe different pedestrian interaction mechanisms.
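A minimal numpy sketch of this model under the unit-mass assumption; the default parameter values are illustrative placeholders rather than calibrated constants:

import numpy as np

def social_force(x, v, x_dest, A, v_d=1.3, tau=0.5, A_i=2.0, B_i=0.3):
    """Acceleration a_i = F_i^D + sum_j F_ij of the social force model:
    attraction toward the destination plus exponential repulsion from
    the neighbors selected by the adjacency matrix A."""
    e = x_dest - x
    e /= np.linalg.norm(e, axis=1, keepdims=True)        # unit vectors e_i
    F_dest = (v_d * e - v) / tau                          # F_i^D
    diff = x[:, None, :] - x[None, :, :]                  # points from j to i
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, np.inf)                           # no self-repulsion
    n = diff / r[..., None]                               # unit vectors n_ij
    F_rep = (A * A_i * np.exp(-r / B_i))[..., None] * n   # F_ij on edges of G
    return F_dest + F_rep.sum(axis=1)                     # unit mass: a_i = F_i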
A.5 IMPLEMENTATION We implement our model in Python using the PyTorch library and optimize all models with the Adam optimizer (Kingma & Ba, 2015). We use parallel symbolic regression in Python (PySR, https://github.com/MilesCranmer/PySR) (Cranmer, 2020) to extract formulas from each message function ϕ. A.6 PARAMETER SETTINGS For the DL part, we set the learning rate to $10^{-4}$, the tolerance in early stopping to 10, the number of layers and the embedding size of the MLPs to 4 and 16, the maximum number of epochs to 20000, and the weight λ to 0.1. The choice of the parameter λ is analyzed in Appendix A.8. For the traditional SR part, our candidate symbols include the binary operators {+, −, ×, /} and the unary operators {sign, exp}, and we set the batch size to 1024.

Table 4: The applicability in different scenarios.
Method | Mechanics (simple) | Mechanics (complex) | Electricity | Thermology (simple) | Thermology (complex)
SymDL | √ | √ | × | × | ×
Ours | √ | √ | √ | √ | √

Figure 10: The searched best score vs. the searching time for two different searching methods in the circuit scenario.

A.7 COMPARING HIERARCHICAL PRUNING WITH RANDOM SEARCH Furthermore, to demonstrate the effectiveness of our search method, we compare it with a random searching strategy and plot their searching processes in Figure 10. We can observe that our method is much more efficient than the random search algorithm, which suggests that the searching problem is difficult and that our method effectively reduces the original colossal search space. Specifically, even if the random search algorithm takes ten times longer than ours, the score of the best skeleton it finds is still 4.5 times worse than ours, and the searched skeleton is wrong. A.8 PARAMETER SENSITIVITY One of the most critical parameters in our model is the weight λ that balances complexity and error. We normalize the input-output pairs so that the outputs have a standard deviation of 1. Specifically, for each feature dimension, we divide the features by their standard deviation, keeping the fitting errors at similar magnitudes. Since the complexities of different skeletons are also of similar magnitudes, the best value of λ is similar across datasets. As shown in Table 5, we test different values of λ on three diverse scenarios: the circuit scenario and two mechanical scenarios. For each value of λ, we train the model with ten different seeds to test whether our model can learn the correct message-passing flows and the correct formulas. We choose the best formula among the ten, and all of these values of λ allow us to find the correct formula in the different scenarios. Among them, we choose λ = 0.1, as it achieves the highest success rate across all three scenarios. A.9 DEMONSTRATION OF DESIGN PHILOSOPHY To demonstrate the design philosophy of our method, we plot several learning curves of different message-passing flows in the simple mechanical scenario. We first plot the learning curves of three different message-passing flows in Figure 11, corresponding to one lacking a necessary message-passing connection, the ground-truth one, and one with a redundant message-passing connection, respectively. We find that the ground-truth message-passing flow and the one with a redundant connection show only small differences in performance after the loss converges (the RMSE is around 0.1 and the MAPE is around 1.0). However, the performance of the message-passing flow lacking necessary connections decreases significantly (the RMSE is about 1.4 and the MAPE is 4.0).
We further test the impact of the embedding size on the performance, with the learning curves shown in Figure 12. We can see that when the embedding size is less than 2, the performance decreases significantly (the RMSE is more than 1.4), while the performance is similar (the RMSE is around 0.2) when the embedding size is not less than 2. Last, we test whether the softmax function can learn the ground-truth aggregator. As shown in Figure 13, the learning curves with the softmax and with the ground-truth aggregator are quite similar, and we verify that the learned aggregator is the same as the ground truth. B METHOD DETAILS B.1 USAGE OF THE PROPOSED METHOD The result obtained by our method is fairly stable in terms of both searching the formula skeleton and symbolizing the learned neural networks. First, our searching method guarantees that only better graph structures (message-passing flows) get selected during the learning process. Based on the insightful observation that the loss increases a lot when the graph structure is a sub-structure of the ground truth, but not when it is a super-structure, we propose to search the four components of the graph structure (blocks, layers, connections, dimensions) by starting with a full structure and then pruning to obtain a compact one. Second, the stability of the SR results has also been demonstrated by the applications in Cranmer et al. (2020). When applying the method to a new physical domain, we can directly reuse the DL-related hyper-parameters (such as the learning rate and the embedding size of the MLPs) from old domains and slightly tune the search-related hyper-parameter (the weight λ) near its previous optimal value on the old domains. The stability of the proper λ value is demonstrated in Table 5. Due to the randomness of DL training and of the genetic algorithm, we train the model ten times with different seeds to get ten formulas and select the Pareto-optimal formula according to the score (see Appendix B.6) as the final formula. Overall, the above process in a new domain does not require heavy human labor. B.2 TRAINING ALGORITHM The training algorithm of our model is summarized in Algorithm 1. B.3 DETAILS OF SYMBOLIZING THE AGGREGATION/MESSAGE FUNCTIONS Symbolize Aggregation Function ρ. We choose several commonly used aggregators as candidates, including sum, mean, and max; other candidate aggregators such as min can be achieved by min(x) = −max(−x), and the root mean square can be represented by $(\mathrm{mean}(x^2))^{1/2}$. The harmonic mean and the ℓ-norm can also be decomposed into two ϕ functions and a mean aggregator. These operations can be learned by ϕ.

Algorithm 1 The process of obtaining the graph-structured symbolic model.
Require: D including training data {(X, Y, G)};
1: (start stage 1) search the message-passing flow following Section 2.3;
2: step 1: search the Pareto-optimal block number;
3: step 2: search the required layers;
4: step 3: search the necessary input variables;
5: step 4: search the Pareto-optimal embedding sizes;
6: train the model with the Pareto-optimal message-passing network;
7: (start stage 2) symbolize the aggregation functions ρ and the message functions ϕ;
8: replace each ρ by the aggregator with the largest weight;
9: train the model and record the input-output pairs of each ϕ;
10: replace each ϕ by a formula obtained by classic SR, with constants left to be fitted on the data;
11: fit the correct constants from the data by gradient descent and get the final graph-structured symbolic model.
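A minimal PyTorch sketch of the aggregator selection in lines 7-8 of Algorithm 1, under the assumption that the candidates are mixed by a softmax over learnable weights during training; SoftAggregator is a hypothetical name, not the paper's module:

import torch
import torch.nn as nn

class SoftAggregator(nn.Module):
    """Learnable soft selection among the candidates {sum, mean, max}:
    during training, a softmax over weights mixes the candidates; afterwards,
    the aggregator with the largest weight replaces the mixture."""
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(3))

    def forward(self, msgs):  # msgs: (num_set_elements, dim)
        cands = torch.stack([msgs.sum(0), msgs.mean(0), msgs.max(0).values])
        return (torch.softmax(self.w, dim=0)[:, None] * cands).sum(0)

    def hardened(self, msgs):
        # after training: keep only the dominant aggregator (Algorithm 1, line 8)
        return [msgs.sum(0), msgs.mean(0),
                msgs.max(0).values][self.w.argmax().item()]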
Symbolize Message Function ϕ. After training the GNN model, we use symbolic regression tools to extract symbolic models corresponding to the message and update functions. Specifically, for the message functions in the neural network, we record their input-output pairs and apply classic symbolic tools to symbolize them. Retrain and Fine-tune Constants. Finally, to eliminate accumulated errors, we set all constants in the entire formula as trainable parameters and fine-tune them to obtain the final graph-structured mechanism represented by symbolic models. Specifically, after extracting the message-passing flows, we replace the MLPs in the message/update functions with the corresponding formulas and set the constants in the formulas as parameters of the deep model to optimize (all operations in the symbolic models are differentiable). B.4 DETAILS OF ERRORS In this paper, we use the RMSE as the error measure, calculated as

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2},$$

where $\hat{y}_i$ is the i-th predicted value and $y_i$ is the i-th ground-truth value, i = 1, · · · , n. B.5 DETAILS OF COMPLEXITY The design of the complexity measurement for message-passing flows is flexible. In this paper, we calculate the complexity as follows: (i) for each layer, the complexity is the product of the embedding size of the layer and the number of inputs to the layer; (ii) the whole complexity of the message-passing flow is the summation of the complexities of all layers. To illustrate, the complexities of the four message-passing flows in Figure 6 are 2×3 + 2×3 + 2×1 = 14, 2×3 + 2×1 = 8, 2×3 + 1×1 = 7, and 2×1 + 1×1 = 3. As we can see, the complexity consistently decreases during the search process. B.6 DETAILS OF SCORE The score of a graph-structured formula is s = l + λc, where l is the RMSE and c is the complexity of the graph-structured formula, defined as the complexity of the message-passing flow multiplied by the average complexity of the component formulas. C DETAILED RELATED WORKS ON SR Martius & Lampert (2016) proposed a model named EQL that extracts symbolic formulas with a neural network using symbolic models as building blocks. Kusner et al. (2017) managed to eschew the problem of discrete optimization by converting discrete data into a parse tree. AI Feynman (Udrescu & Tegmark, 2020; Udrescu et al., 2020) splits the function into sub-functions and performs regression on each module separately; the partition of functions is achieved with a trained neural network. SymDL (Cranmer et al., 2020) exploited the inductive biases of the correspondence of physical mechanisms (kinematics, especially) to the GNN structure and established the method PySR to tackle the problem; they first revealed the link between GNNs and physical mechanisms. Unlike these works, our model can learn graph-structured physical mechanisms without requiring information about the formula skeleton, which can hardly be obtained in new physical scenarios for discovery.
1. What is the focus and contribution of the paper on symbolic regression?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and originality?
3. What are the weaknesses of the paper regarding its clarity, quality, and writing style?
4. How does the reviewer assess the significance and impact of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper presents an approach that generalizes symbolic regression to graph-structured physical mechanisms. As opposed to classical symbolic regression, this work assumes that X and y in y = F(x) can both be represented as graphs. The method is based on a two-level optimization procedure where first the formula skeleton is modeled with a message-passing flow and parameters are learned. In a second step, symbolization is applied to each deep learning component.
Strengths And Weaknesses
- The idea of formulating the problem as a two-step optimization problem, although not new, seems good in the context of this work.
- The writing is a bit difficult to follow, with some typos and sentences that are difficult to understand.
Clarity, Quality, Novelty And Reproducibility
Quality: average. The text could be more polished.
Clarity: could be improved.
Originality: the work seems to be original, especially in mixing SR with GNNs.
ICLR
Title Graph Information Matters: Understanding Graph Filters from Interaction Probability Abstract Graph Neural Networks (GNNs) have received extensive affirmation for their promising performance in graph learning problems. Despite their various neural architectures, most are intrinsically graph filters, which provide theoretical foundations for model explanations. In particular, low-pass filters show superiority in label prediction on many benchmarks. However, recent empirical research suggests that models with only low-pass filters do not always perform well. Despite increasing attempts to understand graph filters, it remains unclear how a particular graph affects the performance of different filters. In this paper, we carry out a comprehensive theoretical analysis of the synergy of graph structure and node features on graph filters' behaviors in node classification, relying on the introduction of interaction probability and frequency distribution. We show that the homophily degree of graphs significantly affects the prediction error of graph filters. Our theory provides a guideline for graph filter design in a data-driven manner. Since it is hard for a single graph filter to live up to this, we propose a general strategy for exploring a data-specified filter bank. Experimental results show that our model achieves consistent and significant performance improvements across all benchmarks. Furthermore, we empirically validate our theoretical analysis and explain the behavior of baselines and our model. 1 INTRODUCTION Graph Neural Networks (GNNs) have continuously attracted interest due to their promising performance in various graph learning problems. It is known that most GNNs are intrinsically graph filters (Kipf & Welling, 2017; Defferrard et al., 2016; Ortega et al., 2018; Nt & Maehara, 2019). With the theoretical foundation of filters, there have been increasing attempts at model explanation, e.g., explaining the behavior of various GNNs in node classification. Nt & Maehara (2019) investigated the superiority of low-pass filters backed up with theoretical arguments, while recent research (Balcilar et al., 2020; Chang et al., 2020; Bo et al., 2021) empirically revealed the weakness of GNNs with only low-pass filters on certain datasets. These contradictory views on low-pass filters pose a significant problem: why does a filter work on one dataset but not on another? More precisely, for a given filter, what kinds of structure and features are useful for prediction? This makes it clear that, in order to solve this problem, it is necessary to take graph information into account, including the graph structure, features, and labels. Existing theoretical research is mostly restricted to the investigation of the filters themselves, such as exploring their expressive power (Oono & Suzuki, 2020; Balcilar et al., 2020), without considering the inconsistency of their performance on different graphs. It is clear that structural and feature information lead to this possible inconsistency. However, there has been little explicit analysis of how graph information influences the performance of graph filters. For instance, GNNs have formulated a variety of graph filters in a heuristic manner under an implicit homophily assumption, i.e., nodes with similar attributes/labels tend to have connections. There remained a paucity of quantitative descriptions of homophily until Pei et al. (2020) designed a rough index to measure it.
In this paper, we establish a comprehensive theoretical analysis of the effect of structure and feature information on node label prediction to fill this gap and provide deep insights into the explanation of graph filters. We first establish a systematic investigation of graphs with an indicator of homophily - the interaction probability - and a distributional representation of input information - the frequency distribution. The interaction probability, derived from random walk theory, relates node labels to their local topology and quantifies the degree of clustering of nodes in the same/different classes. We argue that the interaction probability reflects the difficulty of identifying one class from the others. In terms of feature information, we draw on spectral analysis, representing features as frequency distributions. Furthermore, we consider the moments of the frequency and build an explicit relation with the graph structure. Interestingly, we find that the moments of the label frequency (noting that a one-hot label vector can be regarded as a special node feature) are determined by the interaction probability. The aforementioned preparations underpin our deep understanding of graph filters. We evaluate the prediction error of a graph filter under two settings: a. with fixed graph structure, we unravel the influence of the input (original or transformed node features); b. with given input, we show how the structure matters, providing an analysis that utilizes the frequency distribution and the interaction probability. The main conclusions are: 1. given the structure, the frequency response of an ideal graph filter should be consistent with the main frequency band of the label frequency; that is, a matched frequency response is the premise of success; 2. given the input, a graph filter essentially tunes the weights of edges - failing to make the homophily degree large enough may cause unsatisfactory prediction accuracy. These interpretations of graph filters imply a data-driven filter design principle. In addition, we apply these theoretical results to three types of filters of specified form - low-pass, high-pass, and band-pass. It shows that a single graph filter can hardly comply with the principle of ideal filters, especially when the homophily degrees and the label frequency distributions of different classes are very different. For example, when the frequency distributions of labels are far from each other, it is hard to find a single filter whose frequency response covers all the main frequency bands well. In this paper, we leverage a combination of band-pass graph filters to overcome this problem and develop a simple yet effective framework to show how to learn multiple filters depending on the dataset. We empirically validate our theoretical analysis and investigate the structure and feature information of the benchmarks. We verify our model on a variety of datasets and explain the behavior of the baselines and our model. Experimental results show that our model achieves consistent and significant performance improvements across all benchmarks. Our main contributions are: 1. we develop a theoretical analysis of graph information based on the introduction of interaction probability and frequency distribution; 2. we provide a deep understanding of the performance of graph filters, illustrating how graph structure and input information matter; 3. we indicate the weakness of GNNs with a single graph filter and propose a general framework to learn a data-specified filter bank, which contributes to significant improvements.
2 RELATED WORK In this paper, we focus on the analysis of graph filters in the context of graph neural networks. Since Bruna et al. (2014) defined spectral graph filters and extended convolutional operations to graphs, various spectral graph neural networks have been developed. For example, ChebNet (Defferrard et al., 2016) defines the Chebyshev polynomial filter, which can be exactly localized in the k-hop neighborhood. Kipf & Welling (2017) simplified the Chebyshev filters using a first-order approximation and derived the well-known graph convolutional networks (GCNs). Bianchi et al. (2021) proposed the rational auto-regressive moving average graph filters (ARMA), which are more powerful in modeling localization and provide more flexible graph frequency responses, but are more computationally expensive and also more unstable. Very recently, Min et al. (2020) augmented conventional GCNs with geometric scattering transforms, which enable band-pass filtering of graph signals and alleviate the oversmoothing issue. In addition, most graph neural networks originally defined in the spatial domain are also found to be essentially connected to spectral filtering (Balcilar et al., 2020). By bridging the gap between spatial and spectral graph neural networks, Balcilar et al. (2020) further investigated the expressiveness of all graph neural networks from their spectral analysis. However, their analysis is limited to the spectrum coverage of a graph filter itself and lacks deeper insights into the graph-dependent performance of these filters. Another related topic is the measurement of graph homophily. Beyond the interaction probability that we define in this paper, there are some other heuristic metrics for homophily. Pei et al. (2020) defined a node homophily index to characterize their datasets and help explain the experimental results for Geom_GCN:

$$\beta = \frac{1}{\#\text{nodes}}\sum_{v} \frac{\#\,\text{neighbors of } v \text{ that have the same label as } v}{\#\,\text{neighbors of } v}.$$

Zhu et al. (2020) defined the edge homophily ratio instead and identified a set of key designs that can boost learning from the graph structure under heterophily:

$$h = \frac{\#\,\text{edges whose end nodes have the same label}}{\#\,\text{edges}}.$$

This edge homophily definition is sensitive to the number of classes and the size of each class, and Lim et al. (2021) made a modification to alleviate this problem. Our work differs from these works in that we not only use our definition to characterize the graph but also directly relate it to the performance of graph filters (or GNNs). 3 THEORETICAL ANALYSIS OF GRAPH INFORMATION 3.1 NOTATION Let $G_n = (V_n, E_n)$ be an undirected graph with additional self-connections, where $V_n = \{v_0, \ldots, v_{n-1}\}$ is the set of nodes and $E_n \subset V_n \times V_n$ is the set of edges. Let $A \in \mathbb{R}^{n\times n}$ be the adjacency matrix and $L = D - A$ the Laplacian matrix, where $D$ is the diagonal degree matrix with $D_{ii} = \sum_j A_{ij}$. We denote $\tilde{A} = D^{-1/2} A D^{-1/2}$; then $\tilde{L} = D^{-1/2} L D^{-1/2} = I - \tilde{A}$ is the symmetric normalized Laplacian. Let $(\lambda_i, u_i)$ be a pair of eigenvalue and unit eigenvector of $\tilde{L}$, where $0 = \lambda_0 \le \cdots \le \lambda_{n-1} \le 2$.
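A minimal numpy sketch of this notation (A is assumed to already include the self-connections):

import numpy as np

def normalized_operators(A):
    """Return A_tilde = D^{-1/2} A D^{-1/2}, the symmetric normalized
    Laplacian L_tilde = I - A_tilde, and its eigenpairs: graph frequencies
    lam (in [0, 2]) and frequency components U (columns)."""
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    A_t = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    L_t = np.eye(A.shape[0]) - A_t
    lam, U = np.linalg.eigh(L_t)   # ascending: 0 = lam[0] <= ... <= 2
    return A_t, L_t, lam, U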
3.2 PROBLEM SETTING In this paper, we are mainly interested in node classification problems on undirected graphs. Given $G_n = (V_n, E_n)$, we consider $T = \{0, \ldots, K-1\}$ as the set of all node labels. For $k \in T$, we denote $C_k$ as the set of nodes with label $k$ and $R \in \mathbb{R}^{K\times K}$ as a size matrix, which is a diagonal matrix with $R_k = |C_k|$. Considering single-label problems in which classes are mutually exclusive, we use one-hot encoding to indicate the class label and introduce a label matrix $Y \in \mathbb{R}^{n\times K} = (y_0, \ldots, y_{K-1})$ to represent the labels of $V_n$, where $y_k$ is the indicator vector of $C_k$. Obviously, $R = Y^\top Y$, $Y^\top \mathbf{1} = \mathrm{diag}(R)$, and $Y\mathbf{1} = \mathbf{1}$. A signal $x$ on $G_n$ can be arranged as a vector $x = (x_0, \ldots, x_{n-1})^\top$. In particular, the labels $\{y_k \mid k \in T\}$ are also graph signals. 3.3 A STRUCTURE INDICATOR - INTERACTION PROBABILITY Homophily of graphs is an implicit assumption widely leveraged in graph learning methods, including GNNs. It is considered an indisputable common property of most graphs, despite its descriptive and unquantifiable definition, which introduces a variety of uncertainties. In this section, starting from random walks, we introduce the interaction probability to overcome this challenge. For a random walk on $G_n$, we denote $P = D^{-1}A$ as its transition matrix, which is also a row Markov matrix. From random walk theory, $P^k$ is the k-step transition matrix, and $P^k_{ij}$ is the probability that a random walker starting from node $v_i$ arrives at $v_j$ after $k$ steps. For a node $v_i$ and a class $C_l$, we denote $\pi^k_i(C_l)$ as the probability that a random walker starting from $v_i$ stays in $C_l$ at the k-th step. It is trivial that $\pi^k_i(C_l) = \sum_{j\in C_l} P^k_{ij}$ with $\sum_{l\in T} \pi^k_i(C_l) = 1$. $\pi^k_i(C_l)$ demonstrates the relative preference/closeness of node $v_i$ for $C_l$ at scale $k$. To meet the homophily assumption, for $v_i$ in $C_l$, $\pi^k_i(C_l)$ is expected to gap away from the others. Since $\pi^k_i(C_l) - \sum_{m\ne l}\pi^k_i(C_m) = 2\pi^k_i(C_l) - 1$, $\pi^k_i(C_l)$ can be regarded as a measure of the k-scale homophily degree of node $v_i$. In particular, for $\forall k \in \mathbb{N}$ and $v_i \in C_l$, $\pi^k_i(C_l) = 1$ means that $C_l$ is a community and will never communicate with other classes. However, this case is rare in real graphs. Below, we investigate the homophily of a class and propose a method to measure the communication strength between two classes. Definition 3.1 (k-step interaction probability). For $l, m \in T$, we define $\Pi^k$ as the k-step interaction probability matrix, formulated as follows:

$$\Pi^k_{lm} = \frac{1}{R_l}\sum_{v_i\in C_l} \pi^k_i(C_m) = \frac{1}{R_l}\sum_{v_i\in C_l,\, v_j\in C_m} P^k_{ij} = \frac{y_l^\top P^k y_m}{y_l^\top y_l} \quad (1)$$

$$\Pi^k = (Y^\top Y)^{-1} Y^\top P^k Y = R^{-1} Y^\top P^k Y. \quad (2)$$

$\Pi^k_{lm}$ is the probability that a random walker from $C_l$ arrives in $C_m$ after $k$ steps. Remark 1. Obviously, $\Pi^k \mathbf{1} = \mathbf{1}$. $\Pi^k_{lm}$ is the mean proportion of $C_m$ in the k-hop neighbors of nodes from $C_l$. Noting that $\mathrm{rank}(Y) = K$, when $K \ne n$, $Y R^{-1} Y^\top \ne I$, thus $(R^{-1}Y^\top P Y)^m \ne R^{-1} Y^\top P^m Y$, i.e., $(\Pi)^m \ne \Pi^m$. More generally, for an arbitrary polynomial function $g$, $R^{-1} Y^\top g(P) Y$ is in general not equal to $g(R^{-1} Y^\top P Y)$. In the rest of the paper, we write $\tilde{g}(\Pi) = R^{-1} Y^\top g(P) Y$ and $g(\Pi) = g(R^{-1} Y^\top P Y)$. For instance, if $g(\cdot) = (\cdot)^m$, then $\tilde{g}(\Pi) = \Pi^m$ and $g(\Pi) = (\Pi)^m$. Also, we denote $\Pi^k_{ll}$, the self-interaction probability, as $\pi^k_l$ for short. The 1-step interaction probability intuitively reflects the degree of clustering of two classes, and $\sum_{i=1}^{k}\Pi^i$ measures the strength of interaction between classes at the scale of $k$ steps. Since $P$ is not symmetric, $\Pi^k_{lm} \ne \Pi^k_{ml}$. To facilitate the analysis, we propose a symmetric variant of the interaction probability to identify the interactions between two classes. We denote this symmetric k-step interaction probability matrix as $\tilde{\Pi}^k$; by replacing $P$ with $\tilde{A} = D^{-1/2} A D^{-1/2}$, we obtain $\tilde{\Pi}^k = R^{-1/2} Y^\top \tilde{A}^k Y R^{-1/2}$.
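A minimal numpy sketch of Definition 3.1 and its symmetric variant:

import numpy as np

def interaction_probabilities(A, Y, k=1):
    """k-step interaction probability Pi^k = R^{-1} Y^T P^k Y (P = D^{-1} A)
    and the symmetric variant Pi_t^k = R^{-1/2} Y^T A_tilde^k Y R^{-1/2}."""
    d = A.sum(axis=1)
    P = A / d[:, None]                                  # row-stochastic
    A_t = A / np.sqrt(d[:, None] * d[None, :])          # D^{-1/2} A D^{-1/2}
    R = np.diag(Y.T @ Y)                                # class sizes R_l
    Pk = np.linalg.matrix_power(P, k)
    At_k = np.linalg.matrix_power(A_t, k)
    Pi = (Y.T @ Pk @ Y) / R[:, None]                    # rows sum to 1
    Pi_t = (Y.T @ At_k @ Y) / np.sqrt(R[:, None] * R[None, :])
    return Pi, Pi_t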
Below, we investigate important properties of $\Pi^k$ and $\tilde{\Pi}^k$. Proposition 3.1. For $l, m \in T$ and an arbitrary polynomial function $g(\cdot)$, we have: a. $R_l \Pi^k_{lm} + R_m \Pi^k_{ml} \ge 2\sqrt{R_l R_m}\, \tilde{\Pi}^k_{lm}$, where $R_l$ is the l-th diagonal element of $R$; b. $(\widetilde{g^2}(\tilde{\Pi}))_{ll} \ge (\tilde{g}(\tilde{\Pi})_{ll})^2$, where $\widetilde{g^k}(\tilde{\Pi}) = R^{-1/2} Y^\top g^k(\tilde{A}) Y R^{-1/2}$. The proof can be found in Appendix B. Note that $\tilde{\Pi}^k \mathbf{1} \ne \mathbf{1}$; that is, the symmetric variant is no longer a probability measure. However, according to Prop. 3.1.a (let $m = l$), $\tilde{\pi}^k_l$ is a lower bound of $\pi^k_l$, and $\tilde{\pi}^k_l = \pi^k_l$ when $G_n$ is a regular graph. In the rest of the theoretical analysis, we use $\tilde{\pi}^k_l$ to measure the degree of clustering of $C_l$. Letting $g(\cdot) = (\cdot)^k$, Prop. 3.1.b gives $\tilde{\pi}^{2k}_l \ge (\tilde{\pi}^k_l)^2$. In Section 4.2, we leverage this inequality to derive a lower bound on our prediction error and further illustrate how the structure influences the performance of a given filter. 3.4 A FEATURE INDICATOR - FREQUENCY DISTRIBUTION Following graph signal processing (GSP) concepts, $\lambda_0, \ldots, \lambda_{n-1}$ are graph frequencies and $u_0, \ldots, u_{n-1}$ are the corresponding frequency components, which are invariant to graph filters. Through the Fourier transform, we obtain $\{\alpha_i = \langle u_i, x\rangle \mid i = 0, \ldots, n-1\}$, the spectral representation of a graph signal $x$, called the graph signal spectrum. Moreover, a graph signal can be represented as a linear combination of frequency components, i.e., $x = \sum_i \alpha_i u_i$. For a label vector $y_l$, which is also a graph signal, we denote $\{\gamma_0, \ldots, \gamma_{n-1}\}$ as its spectrum. There is an intuitive assumption: the information of the label vectors is all we need for classification - we will validate this assumption in Section 4.1. In this context, $\gamma_i^2 / \sum_i \gamma_i^2$ reflects how much the frequency component $u_i$ contributes to the distinctiveness of $C_l$, without considering the positivity or negativity of the effects. Interestingly, we find that the normalized signal spectrum is a histogram/discrete distribution, defined below. Definition 3.2 (Frequency distribution). We define $f$, the frequency of a signal $x$, as a random variable taking values in the set of graph frequencies with probability $\Pr(f = \lambda_k) = \alpha_k^2 / \sum_i \alpha_i^2$. This probability describes the frequency distribution of the signal $x$. With this definition, we derive distributional representations of signals from their spectral representations/spectra. One can evaluate the effect of a signal by comparing the frequency distributions of the signal and the label vectors under a specified distribution metric, such as the Wasserstein distance. Below, we consider the moments of the frequency distribution to show how the graph structure influences the signal frequency. Proposition 3.2. For $G = (V, E)$, let $f$ be the frequency of a signal $x$; then

$$E[f^n] = \frac{x^\top (I - \tilde{A})^n x}{x^\top x}.$$

The proof of this proposition can be found in Appendix B. With the definition of the interaction probability, we can further express the moments of the label vector's frequency. Corollary 3.3. For the label frequency $f_l$ of $y_l$, we have $E[f^n_l] = (\tilde{g}(I - \tilde{\Pi}))_{ll}$ with $g = (\cdot)^n$. Recall that $\tilde{g}(I - \tilde{\Pi}) = R^{-1/2} Y^\top (I - \tilde{A})^n Y R^{-1/2}$; we have $E[f_l] = 1 - \tilde{\pi}_l$, $E[f_l^2] = 1 - 2\tilde{\pi}_l + \tilde{\pi}^2_l$, and the variance of $f_l$: $\mathrm{Var}(f_l) = \tilde{\pi}^2_l - (\tilde{\pi}_l)^2$, where $\tilde{\pi}^2_l$ denotes the 2-step self-interaction probability $(\tilde{\Pi}^2)_{ll}$ (the superscript indicates the step, as above). It can be seen that both the mean and the variance of the label frequency are close to 0 when $\tilde{\pi}_l$ approaches 1, which reflects a high homophily degree (as $\tilde{\pi}_l \le \pi_l \le 1$). In Section 4.1, we conduct a more detailed analysis of the feature information in the spectral space using the frequency distribution.
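A minimal numpy sketch of Definition 3.2 and the first two moments, reusing the eigenpairs (lam, U) from the earlier sketch:

import numpy as np

def frequency_distribution(U, lam, x):
    """Spectrum alpha = U^T x and frequency distribution
    Pr(f = lam_k) = alpha_k^2 / sum_i alpha_i^2, with its mean and variance
    (for a label vector y_l, E[f] = 1 - pi_tilde_l by Corollary 3.3)."""
    alpha = U.T @ x
    p = alpha**2 / np.sum(alpha**2)
    mean_f = float(np.sum(p * lam))
    var_f = float(np.sum(p * lam**2) - mean_f**2)
    return p, mean_f, var_f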
4 ANALYSIS OF GRAPH FILTERS A graph filter is defined as a function $g$ applied to the Laplacian matrix or the adjacency matrix. Denoting $\mathbb{R}[\tilde{A}]$ as the polynomial ring in $\tilde{A}$ over $\mathbb{R}$, we are mainly interested in $g \in \mathbb{R}[\tilde{A}]$. In this section, we provide a deep understanding of the performance of graph filters in label prediction, based on the above theoretical analysis of graph information. In general, there are two major concerns: with fixed graph structure, how does the input impact the performance of a given filter? And with fixed input, how does the graph structure impact the performance of a given filter? We provide the theoretical analysis of these two questions in Sections 4.1 and 4.2, respectively. The general formulation of the (l+1)-th layer of spectral GNNs is $X^{(l+1)} = \sigma(g(\tilde{A}) X^{(l)} W^{(l+1)})$, where $\sigma$ is an activation function, $X^{(l)}$ is the output of the l-th layer, $X^{(0)}$ is the feature matrix, and $W^{(l+1)}$ is a learnable transformation matrix. We call $X^{(l)} W^{(l+1)}$ the input of $g(\tilde{A})$ in the (l+1)-th layer and denote $X$ as the input of $g(\tilde{A})$ in the last layer. In the following sections, we discuss the prediction error of spectral GNNs with a given graph filter and no activation function before prediction. That is, in the last layer with $X$ as input, $g(\tilde{A})X$ is directly used for prediction. Definition 4.1 (Prediction error). Let $X \in \mathbb{R}^{n\times K}$ be the input of the graph filter $g(\tilde{A})$ and $Y \in \mathbb{R}^{n\times K}$ the label matrix; the prediction error is formulated by

$$\mathrm{Er}(g, X) = \| g(\tilde{A})X - Y \|_F^2 = \mathrm{tr}(X^\top g^2(\tilde{A}) X) - 2\,\mathrm{tr}(X^\top g(\tilde{A}) Y) + \| Y \|_F^2 \quad (3)$$

Remark 2. For a label vector $y_l$, we denote $\mathrm{Er}(g, x_l) = \| g(\tilde{A})x_l - y_l \|_F^2$ as the error of $g(\tilde{A})$ in predicting class $l$. Obviously, $\mathrm{Er}(g, X) = \sum_{l\in T} \mathrm{Er}(g, x_l)$, where $x_l$ is the l-th column of $X$. In particular, we will apply our conclusions to specified filters and make concrete analyses. Definition 4.2. With $\epsilon \in [0, \epsilon_0]$ ($\epsilon_0$ a small constant) and $\epsilon' \in [-1, 1]$, we define the low-pass filters $g_{l(\epsilon)}(\tilde{A})$, high-pass filters $g_{h(\epsilon)}(\tilde{A})$, and band-pass filters $g_{b(\epsilon')}(\tilde{A})$ as:

$$g_{l(\epsilon)}(\tilde{A}) = \epsilon I + \tilde{A}, \quad g_{h(\epsilon)}(\tilde{A}) = \epsilon I - \tilde{A}, \quad g_{b(\epsilon')}(\tilde{A}) = I - (1 + |\epsilon'|)^{-2}(\epsilon' I - \tilde{A})^2.$$

For $\lambda$, an eigenvalue of $\tilde{L}$, we have $g_{l(\epsilon)}(\lambda) \in [\epsilon - 1, 1 + \epsilon]$, $g_{h(\epsilon)}(\lambda) \in [\epsilon - 1, 1 + \epsilon]$, and $g_{b(\epsilon')}(\lambda) \in [0, 1]$, since $\lambda \in [0, 2]$. In particular, $g_{l(0)}$ is the GCN filter. 4.1 HOW INPUT MATTERS Denote $\tilde{X} = U^\top X = (\tilde{x}_0, \ldots, \tilde{x}_{K-1})$ and $\tilde{Y} = U^\top Y = (\tilde{y}_0, \ldots, \tilde{y}_{K-1})$, where $U$ is the matrix of unit eigenvectors of $\tilde{L}$ (recall that the eigenvectors of $\tilde{A}$ coincide with those of $\tilde{L}$). Revisiting $\mathrm{Er}(g, x_l)$ and $\mathrm{Er}(g, y_l)$ in the spectral domain, we have:

$$\mathrm{Er}(g, x_l) = \| g(I - \Lambda)\tilde{x}_l - \tilde{y}_l \|_F^2 = \sum_i (g(1 - \lambda_i)\alpha_i - \gamma_i)^2 \quad (4)$$

$$\mathrm{Er}(g, y_l) = \sum_i \gamma_i^2 (1 - g(1 - \lambda_i))^2 = R_l \sum_i p_i (1 - g(1 - \lambda_i))^2 = R_l\, E[(1 - g(1 - f_l))^2] \quad (5)$$

where $\Lambda$ is the eigenvalue matrix of $\tilde{L}$, $\alpha_i$ and $\gamma_i$ are the spectra of $x_l$ and $y_l$ respectively, $p_i = \Pr(f_l = \lambda_i)$, and $f_l$ is the frequency of $y_l$. For better comparison, we normalize the input $x_l$: $\|x_l\|_F^2 = \|y_l\|_F^2$, i.e., $\sum_i \alpha_i^2 = \sum_i \gamma_i^2$, and $g$ is re-scaled so that $g([0, 2])$ concentrates in $[-1, 1]$. How does input information matter? With normalized features and graph filters, the above indicates that the performance of graph filters greatly depends on the label spectra. In particular, when the frequency response of a graph filter does not fit the label frequency, it might be inferior to an all-pass filter, such as an MLP. On the other hand, this poses a principle of filter design: make the frequency response of the filter consistent with the main frequency band of the label frequency as much as possible. In terms of input information, the input determines the performance of a filter: if the frequency distribution of the input vector is far from that of the label vector, even an ideal filter would fail.
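For concreteness, a minimal numpy sketch of the three parameterized filters of Definition 4.2 and the prediction error of Definition 4.1:

import numpy as np

def low_pass(A_t, eps):
    return eps * np.eye(len(A_t)) + A_t                     # g_{l(eps)}

def high_pass(A_t, eps):
    return eps * np.eye(len(A_t)) - A_t                     # g_{h(eps)}

def band_pass(A_t, eps):
    M = eps * np.eye(len(A_t)) - A_t
    return np.eye(len(A_t)) - (M @ M) / (1 + abs(eps))**2   # g_{b(eps')}

def prediction_error(gA, X, Y):
    """Er(g, X) = ||g(A_tilde) X - Y||_F^2 (Definition 4.1)."""
    return float(np.linalg.norm(gA @ X - Y, "fro")**2)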
This observation matches our assumption in Section 3.4 - the information of the label vector is all we need, and the distance between the frequency distributions of the input and label vectors reflects the input's usefulness. Therefore, $Er(g, y_l)$ is a lower bound of $Er(g, x_l)$ when $g(\tilde{A})$ is given. While an input vector may be useful for distinguishing one class, it may be unhelpful for another. Most GNNs tune the frequency distribution of the features with a learnable linear transformation to generate a more informative input. Here, we compute $Er(g, y_l)$ for the three types of filters:
$$Er(g_{l(\epsilon)}, y_l)/R_l = \mathrm{Var}(f_l - \epsilon) + \mathbb{E}[f_l - \epsilon]^2 = \mathrm{Var}(f_l) + (\mathbb{E}[f_l] - \epsilon)^2 \quad (6)$$
$$Er(g_{h(\epsilon)}, y_l)/R_l = \mathrm{Var}(2 - f_l - \epsilon) + \mathbb{E}[2 - f_l - \epsilon]^2 = \mathrm{Var}(f_l) + (\mathbb{E}[f_l] + \epsilon - 2)^2 \quad (7)$$
$$Er(g_{b(\epsilon')}, y_l)/R_l \approx \frac{(\mathbb{E}[f_l] + \epsilon' - 1)^4 + 6\mathrm{Var}(f_l)(\mathbb{E}[f_l] + \epsilon' - 1)^2 + 8(1 - \epsilon')\mathrm{Var}(f_l)\mathbb{E}[f_l]}{(1 + |\epsilon'|)^4} \quad (8)$$
where we use $\mathrm{Var}(f_l^2) \approx 4\mathbb{E}[f_l]^2\mathrm{Var}(f_l)$, derived from the delta method.

Discussion. An interesting observation is that for a class with a highly dispersive spectrum, any single filter is of no avail. From Corollary 3.3, we know that $\mathbb{E}[f_l] = 1 - \tilde{\pi}_l$ and $\mathrm{Var}(f_l) = \tilde{\pi}^2_l - (\tilde{\pi}_l)^2$. This shows that higher homophily means lower $\mathbb{E}[f_l]$, lower $\mathrm{Var}(f_l)$, and hence lower prediction error for low-pass filters. We also note that, in most cases, band-pass filters are more powerful than low-pass filters, let alone high-pass filters. Still, the prediction capacity of a single filter is very limited when the means of the class spectra vary widely.

4.2 HOW STRUCTURE MATTERS

Above, we gave a spectral explanation of the behavior of graph filters; below, we develop this understanding further. Assume that, with learnable transformations, GNNs are able to generate an informative input. Here we discuss the prediction error of different graph filters under the optimal input $Y$. We revisit $Er(g, y_l)$ using the symmetric interaction matrix and, leveraging Proposition 3.1, propose a lower bound $er(g, y_l)$:
$$Er(g, y_l) = y_l^{\top}(I - g(\tilde{A}))^2 y_l = R_l\big(I - 2\tilde{g}(\tilde{\Pi}) + \widetilde{g^2}(\tilde{\Pi})\big)_{ll} \geq er(g, y_l) = R_l\big(1 - \tilde{g}(\tilde{\Pi})_{ll}\big)^2. \quad (9)$$

How does structural information matter? From the spatial point of view, graph filters can be interpreted as weight-tuning mechanisms on edges. The lower bound clearly demonstrates that a graph filter has unsatisfactory prediction accuracy if it fails to make the homophily degree of the tuned graph large enough (i.e., if $\tilde{g}(\tilde{\Pi})_{ll}$ is far from 1). Applying the prediction error lower bound to the aforementioned specified filters, we have:
$$er(g_{l(\epsilon)}, y_l) = (1 - \tilde{\pi}_l - \epsilon)^2 R_l; \qquad er(g_{h(\epsilon)}, y_l) = (1 + \tilde{\pi}_l - \epsilon)^2 R_l \quad (10)$$
$$er(g_{b(\epsilon')}, y_l) = (1 + |\epsilon'|)^{-4} R_l (\epsilon'^2 - 2\epsilon'\tilde{\pi}_l + \tilde{\pi}^2_l)^2 \geq (1 + |\epsilon'|)^{-4} R_l (\epsilon' - \tilde{\pi}_l)^4. \quad (11)$$

Discussion. These error bounds indicate that: 1. a low-pass filter fails on classes with a low homophily degree - in turn, this confirms the importance of the homophily assumption for low-pass filters such as GCN, consistent with our spectral point of view; 2. high-pass filters perform poorly, particularly on high-homophily graphs; 3. for a graph whose classes have a consistent homophily degree (their self-interaction probabilities concentrate around a constant $\bar{\epsilon}$), $g_{b(\bar{\epsilon})}$ works better than the others. However, it is predictable that any single filter fails on graphs with diverse self-interaction probabilities; the numerical sketch below illustrates these regimes.
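The bounds in Eqs. (10)-(11) depend on the graph only through the self-interaction probabilities, so the regimes above are easy to tabulate. The following short sketch (our own addition, not part of the original analysis) does so under the simplifying assumption $\tilde{\pi}^2_l \approx (\tilde{\pi}_l)^2$, i.e., using the looser bound in Eq. (11):

```python
import numpy as np

def er_low(pi, eps=0.0):    # Eq. (10), per unit class size R_l
    return (1.0 - pi - eps) ** 2

def er_high(pi, eps=0.0):   # Eq. (10)
    return (1.0 + pi - eps) ** 2

def er_band(pi, eps):       # looser bound in Eq. (11), with pi^2 in place of the 2-step term
    return (eps - pi) ** 4 / (1.0 + abs(eps)) ** 4

for pi in (0.1, 0.5, 0.9):  # low, medium, high homophily
    print(f"pi={pi:.1f}  low-pass={er_low(pi):.3f}  "
          f"high-pass={er_high(pi):.3f}  band-pass(eps=pi)={er_band(pi, pi):.3f}")

# A low-pass filter's bound grows as homophily drops, a high-pass filter's bound
# grows as homophily rises, while g_b(eps') can drive the bound to 0 when eps'
# matches a shared self-interaction probability.
```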
5 MODEL AND EMPIRICAL STUDY

Our theoretical analysis of graph information demonstrates that: 1. when node classes have inconsistent homophily degrees or their label frequency distributions are far from each other, a single graph filter is prone to fail; 2. in most cases, band-pass filters perform better than low-pass and high-pass filters; 3. a feature may contribute to the classification of one class but hinder the discrimination of another. Inspired by these findings, we propose a disentangled multi band-pass filter framework (DEMUF) which can be applied to any type of graph, no matter what kind of graph information it has. The key point of our model is to learn multiple band-pass filters which capture different disentangled feature information respectively.

5.1 ARCHITECTURE OF THE TWO DEMUF FRAMEWORKS

Our framework includes feature disentanglement and frequency filtering. As we have emphasized the limitations of single filters, it is natural to leverage multiple graph filters. Theoretically, piling up a sufficient number of graph filters to capture all the frequency components can improve prediction performance; however, it is very expensive. To avoid this problem, we consider feature disentanglement - essentially, disentangling the frequency distributions of the features into different families. Features in the same family are expected to have similar spectral properties, that is, they have similar frequency distributions or overlap on their main frequency bands. Then, for each family, we apply a band-pass graph filter to capture its main frequency components.

We propose two frameworks with different structures of filters: Plain-DEMUF and Tree-DEMUF (depicted in Fig. 1). The DISENTANGLE block and FILTER block are formulated as follows:
$$X_k = \mathrm{DISENTANGLE}(X, \Phi_k) = \Phi_k(X), \qquad H_k = \mathrm{FILTER}(X_k, \epsilon_k, h_k) = (g_{b(\epsilon_k)})^{h_k} X_k. \quad (12)$$
In our implementation, we provide two instantiations of the DISENTANGLE functions $\Phi_k$: one uses linear transformations, the other uses GUMBEL_SOFTMAX (Jang et al., 2017) to generate learnable masks for feature selection. For the FILTER block, we use the band-pass filter of Definition 4.2, i.e., $g_{b(\epsilon)} = I - (1 + |\epsilon|)^{-2}(\tilde{A} - \epsilon I)^2$, as the common filter form. Here, $\epsilon$ is the filter parameter constrained to $[-1, 1]$, noting that $1 - \epsilon$ is the center of the frequency response of $g_{b(\epsilon)}$. In each FILTER block, $h$ is the number of layers. The framework of Plain-DEMUF with $N$ graph filters is:
$$H = \mathrm{MLP}\Big(\mathrm{CONCAT}\big(\big\{\omega_k\,\mathrm{FILTER}\big(\mathrm{DISENTANGLE}(X, \Phi_k), \epsilon_k, h_k\big) \,\big|\, k = 1, \dots, N\big\}\big)\Big).$$
Based on this, we implement a simple model called P-DEMUF. Precisely, we leverage a GUMBEL_SOFTMAX to generate $N$ learnable masks $\{M_1, \dots, M_N\}$ for feature sampling at once, each followed by a different MLP; that is, $\Phi_k(X) = \mathrm{MLP}_k(X \odot M_k)$. Similarly, we develop a model, T-DEMUF, under the framework of Tree-DEMUF, formulated (with $X_0 = X$) by:
$$H_k = \mathrm{FILTER}\big(\mathrm{DISENTANGLE}(X_{k-1}, \Psi_k), \epsilon_k, h_k\big), \qquad X_k = \mathrm{DISENTANGLE}(X_{k-1}, \Phi_k),$$
$$H = \mathrm{MLP}\Big(\mathrm{CONCAT}\big(\{\omega_k H_k \mid k = 1, \dots, N\}\big)\Big).$$
In each T-DEMUF layer, we use GUMBEL_SOFTMAX with different parameters to generate two masks $M_k$ and $M'_k$, and set $\Phi_k(X_{k-1}) = X_{k-1} \odot M_k$ and $\Psi_k(X_{k-1}) = X_{k-1} \odot M'_k$. In each layer, we stop further disentangling of the $H_k$ branch by utilizing an additional constraint $L(X_{k-1}, H_k) = \| X_{k-1} \odot M'_k - H_k \|_2^2$. Noting that $H_k = (g_{b(\epsilon_k)})^{h_k}(X_{k-1} \odot M'_k)$, this constraint makes the main frequency bands of $H_k$ consistent with the frequency response of $(g_{b(\epsilon_k)})^{h_k}$. A minimal implementation sketch of these blocks follows.
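As a concrete reading of Eq. (12) and the P-DEMUF construction, here is a minimal PyTorch sketch of one band-pass FILTER block and a Plain-DEMUF forward pass. All class and variable names are ours, and the Gumbel-softmax masking is simplified to a single straight-through sample per forward pass; this is an illustrative sketch of the framework, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BandPassFilter(nn.Module):
    """h applications of g_b(eps) = I - (A_sym - eps*I)^2 / (1 + |eps|)^2."""
    def __init__(self, num_layers=2):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))  # learnable, squashed into [-1, 1]
        self.num_layers = num_layers

    def forward(self, x, a_sym):                 # a_sym: normalized adjacency, (n, n)
        eps = torch.tanh(self.eps)
        for _ in range(self.num_layers):
            ax = a_sym @ x
            aax = a_sym @ ax
            # (A - eps I)^2 x = A^2 x - 2 eps A x + eps^2 x
            x = x - (aax - 2 * eps * ax + eps ** 2 * x) / (1 + eps.abs()) ** 2
        return x

class PDEMUF(nn.Module):
    """Plain-DEMUF sketch: N learnable feature masks, one band-pass filter per branch."""
    def __init__(self, in_dim, hidden, n_classes, n_filters=3, n_layers=2):
        super().__init__()
        self.mask_logits = nn.Parameter(torch.zeros(n_filters, in_dim, 2))
        self.branches = nn.ModuleList([nn.Linear(in_dim, hidden) for _ in range(n_filters)])
        self.filters = nn.ModuleList([BandPassFilter(n_layers) for _ in range(n_filters)])
        self.weights = nn.Parameter(torch.ones(n_filters))   # the omega_k
        self.out = nn.Linear(n_filters * hidden, n_classes)

    def forward(self, x, a_sym):
        hs = []
        for k, (lin, filt) in enumerate(zip(self.branches, self.filters)):
            # straight-through Gumbel-softmax keep/drop decision per feature
            mask = F.gumbel_softmax(self.mask_logits[k], hard=True)[:, 0]
            hk = filt(lin(x * mask), a_sym)       # DISENTANGLE, then FILTER
            hs.append(self.weights[k] * hk)
        return self.out(torch.cat(hs, dim=-1))
```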
Model discussion. Compared with filter-bank learning methods that directly apply an array of filters to all features, our models use subsets of features. This greatly reduces the amount of computation and the number of parameters, and helps learn filters more efficiently and effectively. In addition, T-DEMUF uses an additional constraint to guide the filter learning process, while P-DEMUF is a combination of multiple graph neural networks that do not interfere with each other. Therefore, P-DEMUF is likely to obtain similar filters and to require more filters than T-DEMUF to improve performance. The model visualization results in Fig. 2 validate this statement.

5.2 EXPERIMENTS

To validate DEMUF, we compare the performance of P-DEMUF and T-DEMUF with that of spectral GNNs, spatial GNNs, and MLP on extensive datasets.

5.2.1 EXPERIMENT SETTINGS

Datasets. We use four types of real datasets - citation networks, WebKB, an actor co-occurrence network, and Wikipedia networks - to validate our proposed models. Cora and Citeseer (Sen et al., 2008) are widely used citation benchmarks which represent papers as nodes and citations between two papers as edges. Cornell, Texas, and Wisconsin (Pei et al., 2020) are three subgraphs of WebKB, a webpage network with web pages as nodes and hyperlinks between them as edges. Chameleon and Squirrel (Rozemberczki et al., 2021) are two Wikipedia networks with web pages as nodes and links between pages as edges. The nodes originally have five classes, while Bo et al. (2021) proposed a new classification criterion which divides nodes into three main categories; in this paper, the relabeled networks are called Chameleon2 and Squirrel2. Actor (Tang et al., 2009) is a subgraph of the film-director-actor-writer network whose nodes represent only actors and whose edges represent their collaborations. For all datasets, we use 60% of the nodes for training, 20% for validation, and 20% for testing. To intuitively show the homophily degree of a dataset, we calculate the mean of the self-interaction probabilities (the diagonal of the interaction probability matrix) and report it in Table 1. This metric is similar to the node homophily in Pei et al. (2020). More statistics of the datasets can be found in Appendix A.

Baselines. We compare our models with four spectral GNNs: GCN (Kipf & Welling, 2017), ChebNet (Defferrard et al., 2016), GIN (Xu et al., 2019) (despite being a spatial GNN, its spectral form is easily obtained), and ARMA (Bianchi et al., 2021). We list their spectral filter forms in Appendix A. In short, GCN is a well-known low-pass filter. The filter shape of GIN depends on its parameter $\epsilon$; in this paper, we fix $\epsilon = 0.3$, so it is also a low-pass filter. ChebNet and ARMA are high-order polynomial filters. In addition, we also add three spatial GNNs (whose spectral forms are hard to analyze): GAT (Veličković et al., 2018), FAGCN (Bo et al., 2021), and Geom_GCN (Pei et al., 2020). Both GAT and FAGCN utilize attention mechanisms, and FAGCN takes high-frequency information into account. Geom_GCN is a novel aggregation method based on the geometry of the graph (it is relevant because it was also empirically studied on graphs with different levels of homophily (Pei et al., 2020)). Finally, we also compare with MLP, a baseline that uses no graph information.

Experimental Setup. For all experiments, we report the mean prediction accuracy on the test data over 10 runs. We search the learning rate, hidden units, weight decay, and dropout for all models in the same search space. Finally, we choose a learning rate of 0.01, a dropout rate of 0.5, and 32 hidden units for all datasets. The number of filters is searched between 2 and 10, and the final setting is: for T-DEMUF, 4 filters with 7 layers for the citation networks, 2 filters with 15 layers for all WebKB and Wikipedia networks, and 5 filters with 1 layer for Actor.
The numbers of MLP layers are 2, 2, 3, and 4, respectively. P-DEMUF uses: 3 filters with 8 layers for the citation networks; 5 filters for Cornell, 4 filters for Wisconsin, and 3 filters for Texas, all with 1 layer; 7 filters with 9 layers for the Wikipedia networks; and 5 filters with 2 layers for Actor. P-DEMUF applies a 2-layer MLP to all benchmarks. In addition, as the benchmark settings are the same as those in Geom_GCN, we refer to the results reported in Pei et al. (2020).

5.3 RESULT AND ANALYSIS

The experimental results are summarized in Table 1. Our models consistently outperform the baselines on most benchmarks with significant improvement. On Cora and Citeseer, the datasets with a high level of homophily, our models are comparable to GCN and the other baselines. However, on all other datasets, which have a lower level of homophily, both of our models obtain large performance gains. To understand the impact of graph homophily on different types of graph filters, let us analyze the performance of all spectral GNNs. On high-homophily datasets, all GNNs perform similarly, with accuracy much higher than MLP; the graph structure information is extremely useful in this case. However, on low-homophily datasets, many of them are even worse than MLP. GCN and GIN, the two low-pass-filter-based models, perform worst. The two GNNs with high-order graph filters, ChebNet and ARMA, are clearly superior to the other baselines due to their higher spectrum coverage. However, they cannot beat our models with their specially designed multiple filters; the reason might be that the high complexity of their filters makes it more difficult to learn one optimal single filter. Finally, our model T-DEMUF yields over 18% higher accuracy than the best baseline (Geom_GCN) on Squirrel, and P-DEMUF yields almost 10% higher accuracy than MLP on Texas.

In addition, we select some typical datasets and show the frequency distributions of these graphs in Fig. 2. We can clearly see that on Cora the spectrum is concentrated on low-frequency components, which explains why the low-pass-filter-based models also perform well on it. On the other datasets, the frequency distribution is more diverse, so low-pass filters can no longer match the important frequency components. In contrast, both of our models, T-DEMUF and P-DEMUF, learn graph filters that correspond well to those components (as shown in the last two rows of Fig. 2). T-DEMUF uses a smaller number of (more dispersed) filters but achieves comparable or better performance.

6 CONCLUSION

In this paper, we propose a theoretical analysis of graph information through the introduction of interaction probability and frequency distribution. We develop a deep understanding of how different structures and inputs influence the performance of graph filters. We also design a simple framework to learn a filter bank. Empirical results on extensive datasets validate the power of our model.

A BENCHMARKS AND MODEL DISCUSSION

A.1 STATISTICS INFORMATION OF BENCHMARKS.

We provide statistics of our benchmarks in Table A.1.

A.2 SPECTRAL FILTERS.

In our paper, we use four spectral GNNs as baselines, whose spectral filters are listed in Table A.2, and we define the band-pass filter $g_{b(\epsilon)}$ as a quadratic function of the adjacency matrix. Since
$$g_{b(\epsilon)}(\tilde{A}) = I - (1 + |\epsilon|)^{-2}(\epsilon I - \tilde{A})^2 = (1 + |\epsilon|)^{-2}\big((1 + |\epsilon| - \epsilon)I + \tilde{A}\big)\big((1 + |\epsilon| + \epsilon)I - \tilde{A}\big),$$
it is exactly the composition of a low-pass filter $(1 + |\epsilon| - \epsilon)I + \tilde{A}$ and a high-pass filter $(1 + |\epsilon| + \epsilon)I - \tilde{A}$. That is why $g_{b(\epsilon)}$ is a band-pass filter.
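The factorization above is straightforward to verify numerically; the following check (our own addition) compares $g_{b(\epsilon)}(\tilde{A})$ against the product of its low-pass and high-pass factors on an arbitrary symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 5)); A = (A + A.T) / 2    # any symmetric matrix works here
I, eps = np.eye(5), -0.4

band = I - (eps * I - A) @ (eps * I - A) / (1 + abs(eps)) ** 2
low  = (1 + abs(eps) - eps) * I + A          # low-pass factor
high = (1 + abs(eps) + eps) * I - A          # high-pass factor
prod = low @ high / (1 + abs(eps)) ** 2

assert np.allclose(band, prod)               # g_b(eps) = low-pass x high-pass
```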
A.3 MODEL DISCUSSION.

A.3.1 MOTIVATION OF DISENTANGLEMENT.

Overlap if not disentangled. Without disentanglement, the learned filters are very likely to have large overlaps if we do not impose any constraints on them. In our algorithm, we aim to train filters that capture the main frequency information of their input and to assign different weights to the captured information depending on how much it contributes to label prediction. It is therefore natural to assume that if the inputs of the filters are different, the filters are less likely to overlap. Disentanglement makes the "inputs" different and better adapted to each filter.

Disentanglement reduces the model complexity. With disentanglement, we divide node features into several subsets by learnable masking, or map them into several low-dimensional spaces through linear transformations. This lowers the dimension of the features seen by each filter and, at the same time, makes the input features fit each filter better.

A.3.2 MOTIVATION OF T-DEMUF.

As we clarified in Section 5.1, our implementation of disentanglement is not random masking but learnable masking leveraging GUMBEL_SOFTMAX. These learnable maskings disentangle the node features into several subsets. With the constraint $L(X_{k-1}, H_k) = \| X_{k-1} \odot M'_k - H_k \|_2^2$, for each subset of features we train a band-pass filter to pass its main frequency band. As shown in Fig. 2, this constraint guides the filter learning process, which helps reduce the overlap of the filters' frequency responses and thus the number of graph filters required. At the same time, to minimize the supervised loss, the maskings are trained to disentangle features whose frequency distributions are similar to those of the labels and to assign them a higher weight; captured feature information that is useless is assigned a lower weight. The weights are also learnable.

A.3.3 HOW CAN OUR FILTER BANK SELECTION BE DATA-DRIVEN?

In our algorithm, although the form of our band-pass filter $g_{b(\epsilon)}$ is predefined, its parameters, including $\epsilon$ and the weight $\omega$, are learned from the specific graph. Moreover, the parameters of our feature disentanglement blocks (the linear transformations and the learnable masking) are also learned from data, which affects the learning of the filter bank. Therefore, our filter bank selection is data-driven.

A.4 MORE EXPERIMENTAL RESULTS.

A.4.1 ADDITIONAL BASELINE - GPRGNN.

Here we compare our models with a related baseline, GPRGNN (Chien et al., 2021). It is worth noting that the data splitting in our paper is different from that in GPRGNN: in GPRGNN, the training set consists of the same number of nodes from each class, while we randomly choose our training data. We find that the WebKB datasets are sensitive to the splitting because of their uneven label distributions (the class sizes are: Cornell and Texas: 33/1/18/101/30, Wisconsin: 10/70/118/32/21). Although GPRGNN's splitting is likely better for model training, our model still outperforms it on the Wikipedia datasets, i.e., Chameleon and Squirrel. In GPRGNN's setting, the performance of MLP on WebKB is comparable to GPRGNN's, while in our setting, our proposed models perform much better than MLP. We also test T-DEMUF on Actor, Cornell, and Texas following the splitting of GPRGNN. As shown in Table A.4.1, on most of the benchmarks our model performs better than GPRGNN.

A.4.2 ABLATION STUDY.

To show the advantage of using disentanglement, we provide an ablation study on five benchmarks. Here, we propose two ablation models based on P-DEMUF.
Recalling that the disentanglement block of P-DEMUF consists of learnable masking and linear transformations, we design our ablation models by removing the masking component and the linear-transformation component, respectively. For a fair and intuitive comparison, we simply fix the number of filters to 2. The results in Table A.4.2 validate that removing the disentanglement blocks of P-DEMUF degrades the results on most benchmarks.

B PROOF OF PROPOSITION

Here, we provide the proof of Proposition 3.2.

Proof. Since $x = \sum_{i=0}^{n-1} \alpha_i u_i$, where $u_i$ is the $i$-th unit eigenvector of $\tilde{L}$, and $\lambda_i^n = u_i^{\top}\tilde{L}^n u_i$, we have
$$\mathbb{E}[f^n] = \sum_{i=0}^{n-1} \Pr(f = \lambda_i)\,\lambda_i^n = \frac{\sum_i (\alpha_i u_i)^{\top}\tilde{L}^n(\alpha_i u_i)}{\sum_i \alpha_i^2} = \frac{x^{\top}\tilde{L}^n x}{x^{\top}x} = \frac{x^{\top}(I - \tilde{A})^n x}{x^{\top}x}. \quad (13)$$

Below is the proof of Proposition 3.1.

Proof. For $P = D^{-1}A$ and $\tilde{A} = D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$, with $\Pi$ and $\tilde{\Pi}$ defined by Definition 3.1, the inequality can be written as $(R\Pi^k + (\Pi^k)^{\top}R)_{lm} \geq 2(R^{\frac{1}{2}}\tilde{\Pi}^k R^{\frac{1}{2}})_{lm}$, which is equivalent to proving $y_m^{\top}(P^k + (P^k)^{\top})y_l \geq 2\,y_m^{\top}\tilde{A}^k y_l$. Noting that $(P^k)^{\top} = DP^kD^{-1}$ and $\tilde{A}^k = D^{\frac{1}{2}}P^kD^{-\frac{1}{2}}$, with $B = P^k + (P^k)^{\top}$ we have, by the AM-GM inequality,
$$B_{ij} = P^k_{ij} + \frac{d_i}{d_j}P^k_{ij} \geq 2\sqrt{\frac{d_i}{d_j}}\,P^k_{ij} = 2\tilde{A}^k_{ij}.$$
Therefore, $y_m^{\top}(B - 2\tilde{A}^k)y_l \geq 0$. Letting $m = l$, we get $\pi^k_l \geq \tilde{\pi}^k_l$.

To prove part b, we utilize Lemma B.1. Since $g(\tilde{A})$ is symmetric, we have
$$(\widetilde{g^2}(\tilde{\Pi}))_{ll} = \frac{y_l^{\top}(g(\tilde{A}))^2 y_l}{y_l^{\top}y_l} \geq \Big(\frac{y_l^{\top}g(\tilde{A})y_l}{y_l^{\top}y_l}\Big)^2 = (\tilde{g}(\tilde{\Pi})_{ll})^2.$$

Lemma B.1. Let $B \in \mathbb{R}^{n \times n}$ be a symmetric matrix. For all $y \in \mathbb{R}^n$, we have $\frac{y^{\top}B^2y}{y^{\top}y} \geq \big(\frac{y^{\top}By}{y^{\top}y}\big)^2$.

Proof. Since $B$ is symmetric, we have $B = U\Lambda U^{\top}$, where $U$ is the matrix of unit eigenvectors of $B$. Writing $\alpha = U^{\top}y$ as in the proof of Proposition 3.2, we obtain $\frac{y^{\top}B^2y}{y^{\top}y} = \frac{\sum_i (\alpha_i\lambda_i)^2}{\sum_i \alpha_i^2}$ and $\big(\frac{y^{\top}By}{y^{\top}y}\big)^2 = \frac{(\sum_i \alpha_i^2\lambda_i)^2}{(\sum_i \alpha_i^2)^2}$. From Hölder's inequality, $\big(\sum_i (\alpha_i\lambda_i)^2\big)\big(\sum_i \alpha_i^2\big) \geq \big(\sum_i \alpha_i^2\lambda_i\big)^2$. Therefore, $\frac{\sum_i (\alpha_i\lambda_i)^2}{\sum_i \alpha_i^2} \geq \frac{(\sum_i \alpha_i^2\lambda_i)^2}{(\sum_i \alpha_i^2)^2}$.
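As an illustrative numerical check of Proposition 3.1 (our own addition; the graph and labels below are synthetic), the following sketch verifies both inequalities on a random graph:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K, k = 30, 3, 2
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)                            # undirected graph with self-loops
Y = np.eye(K)[rng.permutation(np.arange(n) % K)]    # one-hot labels, every class nonempty

d = A.sum(1)
P = A / d[:, None]                                  # D^{-1} A
A_sym = A / np.sqrt(np.outer(d, d))                 # D^{-1/2} A D^{-1/2}
R = Y.T @ Y                                         # diagonal class-size matrix
Rs = np.diag(1.0 / np.sqrt(np.diag(R)))

Pi = np.linalg.inv(R) @ Y.T @ np.linalg.matrix_power(P, k) @ Y
Pi_sym = Rs @ Y.T @ np.linalg.matrix_power(A_sym, k) @ Y @ Rs

# Prop. 3.1.a: R_l Pi^k_lm + R_m Pi^k_ml >= 2 sqrt(R_l R_m) * tilde(Pi)^k_lm
lhs = R @ Pi + Pi.T @ R
rhs = 2.0 * np.sqrt(np.outer(np.diag(R), np.diag(R))) * Pi_sym
assert (lhs >= rhs - 1e-9).all()

# Prop. 3.1.b with g = (.)^k: tilde(pi)^{2k}_l >= (tilde(pi)^k_l)^2
Pi_sym_2k = Rs @ Y.T @ np.linalg.matrix_power(A_sym, 2 * k) @ Y @ Rs
assert (np.diag(Pi_sym_2k) >= np.diag(Pi_sym) ** 2 - 1e-9).all()
```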
1. What is the focus of the paper regarding graph neural networks?
2. What are the strengths and weaknesses of the proposed approach in addressing the studied problem?
3. Do you have any questions or concerns regarding the technical part of the paper, such as notation, typos, theoretical analysis, propositions, and definitions?
4. How does the reviewer assess the contribution of the paper compared to prior works?
5. Are there any parts of the paper where the reviewer feels that the authors overclaimed their contributions?
Summary Of The Paper

The paper presents an insightful analysis of the induced graph filters in GNNs. To accommodate the heterogeneity of graphs, the authors provide a family of novel GNNs for learning data-specific filter banks. Overall, the introduction is well written, and the studied problem is novel and interesting. However, the technical part of this paper has some issues: (1) the contributions of this paper are over-claimed, (2) the theoretical analysis and the algorithm are disconnected, (3) there are mistakes in the theoretical results.

Review

Here are the detailed comments:

[Problem] Very nice introduction and literature review. The problem is novel and interesting.

[Notation] The notation set should be largely simplified. For example, y_i, c_m, r_i all refer to the node labels.

[Typos] There are typos throughout this paper. For example, a missing space in the 9th line of Sec 3.3; in Sec 3.4, graph frequency -> graph frequencies? For {\alpha_i = <u_i, x>}_i, why does the index i appear both inside and outside the parentheses?

[Theoretical Analysis] In Sec 3.3, the authors claimed that \phi_i^k = \sum P_{i,j}^k is the probability that a random walker starting from v_i stays in C_l. Based on my understanding, \phi_i^k does not exclude the case that the random walk traverses outside of C_l. The same holds in the definition of interaction probability. Does the interaction probability also consider the case that the random walk traverses to some other communities (different from l and m)? If so, please provide justifications - what is the motivation and rationale of these formulations?

[Proposition 3.1] There is a lack of insightful discussion about Proposition 3.1. What are the physical meanings of g(\Phi) and g[\Phi]? What does Proposition 3.1 tell us? And what is the connection between Proposition 3.1 and the proposed algorithms?

[Confusing Notions] In Def. 3.2, what do you mean by the distributional representation? The authors may want to provide further explanation. In Proposition 3.2, "f be the frequency of signal x" --> "f be the frequency distribution of signal x"?

[Over-claimed Contribution] (1) The low-pass/high-pass filters seem very similar to "Beyond Low-frequency Information in Graph Convolutional Networks". Moreover, the authors should provide theoretical proofs to show why g_b is a band-pass filter. (2) The authors claimed that this paper provides a "data-driven" mechanism for filter bank selection, which is not well described in the algorithm description. (3) The theoretical analysis and the algorithm feel disconnected.
RlΠklm +RmΠ k ml ≥ 2 √ RlRmΠ̃ k lm, where Rl is the l-th diagonal element of R; b. (g̃2(Π̃))ll ≥ (g̃(Π̃)ll)2, where g̃k(Π̃) = R− 1 2Y ⊤gk(Ã)Y R− 1 2 . The proof can be found in Appendix B. Since Π̃k1 ̸= 1, that is, the measure is no longer a probability measure. However, according to Prop.3.1.a ( let m = l ), π̃kl is the lower bound of π k l , and π̃ k l = π k l when Gn is a regular graph. In the rest of theoretical analysis, we use π̃kl to measure the degree of Cl’s clustering. Let g(·) = (·)k, from Prop.3.1.b, we have π̃2kl ≥ (π̃kl )2. In Section.4.2, we leverage this inequality to derive a lower bound of our prediction error and further illustrate how structure influences the performance of a given filter. 3.4 A FEATURE INDICATOR - FREQUENCY DISTRIBUTION Following the graph signal processing (GSP) concepts, λ0, . . . , λn−1 are graph frequencies and u0 . . . ,un−1 are the corresponding frequency components which are invariant of graph filters. Through Fourier transform, we obtain {αi = ⟨ui,x⟩|i = 0, . . . , n− 1} the spectral representation of a graph signal x, called graph signal spectrum. Moreover, a graph signal can be represented as a linear combination of frequency components, i.e., x = ∑ αiui. For a label vector yl which is also a graph signal, we denote {γ0, . . . , γn−1} as its spectrum. There is an intuitive assumption: information of label vectors is all we need for classification - we will validate this assumption in Section 4.1. Under this context, γ2i / ∑ i γ 2 i reflects how much the frequency component uk contributes to the distinctiveness of Cl, without considering the positivity and negativity of effects. Interestingly, we find that the normalized signal spectrum is a histogram/discrete distribution defined below. Definition 3.2 (Frequency distribution). We define f , the frequency of signal x, as a random variable taking values in the set of graph frequencies with probability Pr(f = λk) = α2k /∑ i α 2 i . The probability describes the frequency distribution of signal x. With this definition, we derive distributional representations of signals from their spectral representations/spectra. One can evaluate the signal effect by comparing frequency distributions of signals and label vectors under a specified distribution metric, such as Wasserstein distance. Below, we consider the moment of frequency distribution to show how graph structure influences signal frequency. Proposition 3.2. For G = {V, E}, let f be the frequency of signal x, then E[fn] = x ⊤(I−Ã)nx x⊤x . The proof of this proposition can be found in Appendix B. With the definition of interaction probability, we further represent the moment of the label vector’s frequency. Corollary 3.3. For label frequency fl of yl, we have E[fnl ] = ( g̃(I − Π̃) ) ll with g = (·)n. Recall that g̃(I − Π̃) = R− 12Y ⊤(I − Ã)nY R− 12 , we have E[fl] = 1 − π̃l, E[f2l ] = 1 − 2π̃l + π̃2l and the variance of fl: Var(fl) = π̃2l − (π̃l)2. It can be seen that both the mean and variance of label frequency are close to 0 when π̃l approaches 1, which reflects a high homophily degree (as π̃l ≤ πl ≤ 1). In Section 4.1, we conduct a more detailed analysis of feature information of spectral space with frequency distribution. 4 ANALYSIS OF GRAPH FILTERS A graph filter is defined as a function g with applied Laplacian matrix or adjacency matrix. Denote R[Ã] as a polynomial ring in à over R, here we are mainly interested in g ∈ R[Ã]. 
In this section, we provide a deep understanding of the performance of graph filters concerning label prediction based on the above theoretical analysis of graph information. In general, there are two major concerns: with fixed graph structure, how does the input impact the performance of a given filter? and with fixed input, how does graph structure impact the performance of a given filter?. In this section, we provide the theoretical analysis of these two questions in Sections 4.1 and 4.2, respectively. The general formulation of the l + 1-th layer of spectral GNNs is X(l+1) = σ(g(Ã)X(l)W (l+1)), here σ is an activation function, X(l) is the output of the l-th layer, X(0) is a feature matrix and W (l+1) is a learnable transformation matrix. We call X(l)W (l+1) the input of g(Ã) in l + 1-th layer and denote X as the input of g(Ã) in the last layer. In the following sections, we discuss the prediction error of spectral GNNs with a given graph filter without activation function before prediction. That is, in the last layer with X as input, g(Ã)X is directly used for prediction. Definition 4.1 (Prediction error). Let X ∈ Rn×K be the input of graph filter g(Ã), Y ∈ Rn×K is the label matrix, the prediction error is formulated by: Er(g,X) =∥ g(Ã)X − Y ∥2F= tr(X⊤g2(Ã)X)− 2tr(X⊤g(Ã)Y )+ ∥ Y ∥2F (3) Remark 2. For a label vector yl, we denote Er(g,xl) =∥ g(Ã)xl − yl ∥2F as the error of g(Ã) predicting class l. Obviously, Er(g,X) = ∑ l∈T Er(g,xl), where xl is the l-th column of X . In particular, we will apply our conclusion to specified filters and make concrete analysis. Definition 4.2. With ϵ ∈ [0, ϵ0] and ϵ′ ∈ [−1, 1], ϵ0 is a small constant, we define low-pass filters gl(ϵ)(Ã), high-pass filters gh(ϵ)(Ã) and band-pass filters gb(ϵ′)(Ã) as: gl(ϵ)(Ã) = ϵI + Ã, gh(ϵ)(Ã) = ϵI − Ã, gb(ϵ′)(Ã) = I − (1 + |ϵ′|)−2(ϵ′I − Ã)2. For λ, an eigenvalue of L̃, we have gl(ϵ)(λ) ∈ [ϵ− 1, 1+ ϵ], gh(ϵ)(λ) ∈ [ϵ− 1, 1+ ϵ] and gb(ϵ′)(λ) ∈ [0, 1] since λ ∈ [0, 2]. Particularly, gl(0) is the GCN filter. 4.1 HOW INPUT MATTERS Denote X̃ = U⊤X = (x̃0, . . . , x̃K−1) and Ỹ = U⊤Y = (ỹ0, . . . , ỹK−1), where U is a matrix with unit eigenvectors of L̃ (recall that eigenvectors of à are consistent with that of L̃), revisiting Er(g,xl) and Er(g,yl) in spectral domain, we have: Er(g,xl) = ∥ g(I − Λ)x̃l − ỹl ∥2F= ∑ i (g(1− λi)αi − γi)2 (4) Er(g,yl) = ∑ i γ2(1− g(1− λi))2 = Rl ∑ i pi(1− g(1− λi))2 = RlE[1− g(1− fl)]2 (5) where Λ is the eigenvalue matrix of L̃, αi and γi are the spectra of xl and yl respectively, pi = Pr(fl = λi), fl is the frequency of yl. For better comparison, we normalize the input xl: ∥ xl ∥2F=∥ yl ∥2F , i.e., ∑ α2i = ∑ γ2i . g is re-scaled function with g([0, 2]) concentrating in [−1, 1]. How input information matter? With normalized feature and graph filters, it indicates that the performance of graph filters greatly depends on label spectra. Particularly, when the frequency response of a graph filter does not fit the label frequency, it might be inferior to all-pass filters, such as MLP. On the other hand, it poses a principle of filter design: make feature response of filters be consistent with the main frequency band of label frequency as much as possible. In terms of input information, it determines the performance of a filter - if the frequency distribution of input vector is far from that of label vector, even an ideal filter would fail. 
This observation is identical to our assumption in Section 3.4 - information of label vector is all we need and the distance between frequency distribution of input and label vectors reflects its usefulness. Therefore, Er(g,yl) is the lower bound of Er(g,xl) when g(Ã) are given. While an input vector may be useful for distinguishing one class, it may be helpless for another. In most GNNs, they tune the frequency distribution of features with a learnable linear transformation to generate a more informative input. Here, we discuss the Er(g,yl) of three types of filters: Er(gl(ϵ),yl)/Rl = Var(fl − ϵ) + E[fl − ϵ]2 = Var(fl) + (E[fl]− ϵ)2 (6) Er(gh(ϵ),yl)/Rl = Var(2− fl − ϵ) + E[2− fl − ϵ]2 = Var(fl) + (E[fl] + ϵ− 2)2 (7) Er(gb(ϵ′),yl)/Rl ≈ (E[fl] + ϵ′ − 1)4 + 6Var(fl)(E[fl] + ϵ′ − 1)2 + 8(1− ϵ′)Var(fl)E[fl] (1 + |ϵ′|)4 . (8) where we use Var(f2l ) ≈ 4E[fl]2Var(fl) derived from the delta method. Discussion. An interesting observation is that for a class with high dispersive spectrum, efforts of any single filters are to no avail. From Corollary 3.3, we know that E[fl] = 1−π̃l and Var(fl) = π̃2l −(π̃l)2. It demonstrates that higher homophily means lower E[fl], lower Var(fl), and also lower prediction error for low-pass filters. On the other hand, we indicate that, in most cases, band-pass filters are more powerful than low-pass filters, let alone high-pass filters. However, the prediction capacity of a signal filter is very limited when the means of spectra vary widely. 4.2 HOW STRUCTURE MATTERS Above, we catch a glimpse of spectral explanation of the behavior of graph filters. Below, we expand more understanding of graph filters. Assume that with learnable transformation, GNNs enable to generate an informative input. Here we discuss the prediction error of different graph filters under the optimal input Y . We revisit Er(g,yl) using symmetric interaction matrix and propose a lower bound er(g,yl) leveraging Proposition 3.1: Er(g,yl) = y ⊤ l (I − g(Ã))2yl = Rl(I − 2g̃(Π̃) + g̃2(Π̃))ll ≥ er(g,yl) = Rl(I − g̃(Π̃)ll)2. (9) How structural information matters? We indicate that, in the spatial point of view, graph filters can be interpreted as weight-tuning mechanisms on edges. The lower bound clearly demonstrates that a graph filter would have unsatisfactory prediction accuracy if it fails to make the homophily degree of the tuned graph large enough (g[Π̃]ll are far from 1). Applying the prediction error lower bound to aforementioned specified filters, we have: er(gl(ϵ),yl) = (1− π̃l − ϵ)2Rl; er(gh(ϵ),yl) = (1 + π̃l − ϵ)2Rl (10) er(gb(ϵ′),yl) = (1 + |ϵ′|)−4Rl(ϵ′2 − 2ϵ′π̃l + π̃2l )2 ≥ (1 + |ϵ′|)−4Rl(ϵ′ − π̃l)4. (11) Discussion. These error bounds indicate that: 1. a low-pass filter would fail on classes with low homophily degree - in turn, it confirms that the importance of homophily assumption for low-pass filters like GCN - it is identical with our spectral point of view; 2. high-pass filters have poor performances particularly on the high homophily graphs; 3. for a graph whose classes have consistent homophily degree (their self-interaction probabilities concentrate around a constant ϵ̄), gb(ϵ̄) would work better than others. However, it is predictable that any single filters would fail on graphs with diverse self-interaction probabilities. 5 MODEL AND EMPIRICAL STUDY Our theoretical analysis of graph information demonstrates that: 1. when node classes have inconsistent homophily degree or their label frequency distribution are far from each other, a single graph filter is prone to fail; 2. 
in most cases, band-pass filters would perform better than low-pass and high-pass filters; 3. a feature may contribute to the classification of one class but hinder the discrimination of another. Inspired by these, we propose a disentangled multi band-pass filter framework (DEMUF) which can be applied to any type of graphs no matter what kinds of graph information they have. The key point of our model is to learn multi band-pass filters which are used to capture different disentangled feature information respectively. 5.1 ARCHITECTURE OF TWO FRAMEWORKS OF DEMUF Our framework includes feature disentanglement and frequency filtering. As we have emphasized the limitations of single filters, it is natural to leverage multi graph filters. Theoretically, piling up sufficient numbers of graph filters to capture all the frequency components can improve prediction performance. However, it is very expensive. To avoid this problem, we consider feature disentanglement - essentially, it is to disentangle frequency distributions of features into different families. Features in the same family are expected to have similar spectral properties, that is, they have similar frequency distributions or have overlap on their main frequency bands. Then for each family, we apply a band-pass graph filter to capture their main frequency components. We propose two frameworks with different structures of filters: Plain-DEMUF and Tree-DEMUF (depicted in Fig. 1). The DISENTANGLE block and FILTER block are formulated as follows: Xk = DISENTANGLE(X,Φk) = Φk(X), Hk = FILTER ( Xk, ϵk, hk ) = (gb(ϵk)) hkXk. (12) In our implementation, we provide two samples of DISENTANGLE functions Φk: one is linear transformations, the other is GUMBEL_SOFTMAX (Jang et al., 2017) used to generate learnable masks for feature selection. In terms of the FILTER block, we use the band-pass filter defined in Definition 4.2, i.e., gb(ϵ) = I − (1 + |ϵ|)−2(à − ϵI)2 as the identical filter form. Here, ϵ is the parameter of filter constrained in [−1, 1] noting that 1− ϵ is the center of frequency response gb(ϵ). In each FILTER block, h is the number of layers. The framework of Plain-DEMUF with N graph filters is: H = MLP ( CONCAT ({ FILTER ( DISENTANGLE ( X,Φk ) , ϵk, hk ) , ωk ∣∣∣k = 1, . . . , N})). Based on this, we implement a simple model called P-DEMUF. Precisely, we leverage a GUMBEL_SOFTMAX to generate N learnable masks {M1, . . . ,MN} for feature sampling at once followed by different MLP. That is, Φk(X) = MLPk(X ⊙Mk). Similarly, we develop a model, T-DEMUF, under the framework of Tree-DEMUF formulated by: H1, X1 = FILTER {( DISENTANGLE ( X,Φ1 ) , ϵ, h ) , ( DISENTANGLE ( X,Ψ1 ) , ϵ1, h1 )} Hk+1, Xk+1 = {( DISENTANGLE ( Xk,Φk ) ,FILTER ( DISENTANGLE ( Xk,Ψk ) , ϵk, hk )} H = MLP ( CONCAT ({ ωkHk, k = 1, . . . , N })) . In each T-DEMUF layer, we use GUMBEL_SOFTMAX with different parameters to generate two masks Mk and M ′k and Φk(Xk) = Xk ⊙ Mk and Ψk(Xk) = Xk ⊙ M ′k. In each layer, we stop further disentangling of the branch of Hk by utilizing an additional constraint L(Xk−1, Hk) =∥ Xk−1 ⊙M ′k −Hk ∥22. Noting that Hk = (gb(ϵk))hkXk−1 ⊙M ′k, this constraint is to make the main frequency bands of Hk be consistent with frequency response of (gb(ϵk)) hk . Model discussion. Compared with filter-bank learning methods which directly apply an array of filters to features, our models use subsets of features. It can greatly reduce the amount of computation and parameters and help learning filters more efficiently and effectively. 
In addition, T-DEMUF uses an additional constraint to guide the filter learning process while P-DEMUF is a combination of multi graph neural networks which would not interfere with each other. Therefore, P-DEMUF is likely to obtain similar filters and require more filters to improve performance than T-DEMUF. The model visualization results in Fig. 2 validate this statement. 5.2 EXPERIMENTS To validate DEMUF, we compare the performances of P-DEMUF and T-DEMUF with that of spectral GNNs, spatial GNNs and MLP on extensive datasets. 5.2.1 EXPERIMENT SETTINGS Datasets. We use four types of real datasets - Citation network, WebKB, Actor co-occurrence network and Wikipedia network, to validate our proposed models. Cora and Citeseer (Sen et al., 2008) are widely used citation benchmarks which represent paper as nodes and citation between two papers as edges. Cornell, Texas, and Wisconsin (Pei et al., 2020) are three subgraphs of WebKB which is a webpage network with web pages as nodes and hyperlinks between them as edges. Chameleon and Squirrel (Rozemberczki et al., 2021) are two Wikipedia networks with web pages as nodes and links between pages as edges. The nodes originally have five classes while Bo et al. (2021) proposed a new classification criteria which divides nodes into three main categories. In this paper, the relabeled networks are called Chameleon2 and Squirrel2. Actor (Tang et al., 2009) is a subgraph of the fillm-director-actor-writer network whose nodes only represent actors and edges represent their collaborations. For all data, we use 60% nodes for training, 20% for validation and 20% for testing. To intuitively show the homophily degree of a dataset, we calculate the mean of self-interaction probability (diagonal of interaction probability matrix) and show it in Table 1. This metric is similar to the node homophily in (Pei et al., 2020). More statistics of datasets can be found in Appendix A. Baselines. We compare our models with four spectral GNNs: GCN (Kipf & Welling, 2017), ChebNet (Defferrard et al., 2016), GIN (Xu et al., 2019) (despite a spatial GNN, we can easily get its spectral form), ARMA (Bianchi et al., 2021). We list their spectral filter forms in Appendix A. In short, GCN is a well-known low-pass filter. The filter shape of GIN depends on its parameter ϵ. In this paper, we fix ϵ = 0.3 and thus it is also a low-pass filter. ChebNet and ARMA are high-order polynomial filters. In addition, we also add three spatial GNNs (whose spectral forms are hardly analyzed): GAT (Veličković et al., 2018), FAGCN (Bo et al., 2021), Geom_GCN (Pei et al., 2020). Both GAT and FAGCN utilize attention mechanism and FAGCN takes high frequency information into account. Geom_GCN is a novel aggregation method based on the geometry of graph (it is related because it was also empirically studied on graphs with different levels of homophily (Pei et al., 2020)). Finally, we also compare with MLP, a baseline without using any graph information. Experimental Setup. For all experiments, we report the mean prediction accuracy on the testing data for 10 runs. We search learning rate, hidden unit, weight decay and dropout for all models in the same search space. Finally, we choose learning rate of 0.01, dropout rate of 0.5, and hidden unit of 32 over all datasets. The number of filters are searched between 2 to 10, and the final setting is: for T-DEMUF, we use 4 filters with 7 layers for Citation networks, 2 filters with 15 layers for all WebKB and Wikipedia networks, 5 filters with 1 layer for Actor. 
The numbers of MLP layers are 2, 2, 3 and 4, respectively. P-DEMUF uses: 3 filters with 8 layers for Citation networks; 5 filters for Cornell, 4 filters for Wisconsin and 3 filters for Texas - all of them are 1 layer; 7 filters with 9 layers for WebKB; 5 filters with 2 layers for Actor. P-DEMUF applies 2-layer MLP to all benchmarks. In addition, as the setting of benchmarks are the same as that in Geom_GCN, we refer to the results reported in Pei et al. (2020). 5.3 RESULT AND ANALYSIS The experimental results are summarized in Table 1. Our models consistently outperform baselines over most benchmarks with significant improvement. On Cora and Citeseer, the datasets with a high level of homophily, our models are only comparable to GCN and other baselines. However, on all other datasets with a lower level of homophily, our models both obtained great performance gain. To understand the impact of graph homophily on different types of graph filters, let us analyze the performance of all spectral GNNs. On high-homophily datasets, all GNNs perform similarly and the accuracy is much higher than MLP. That means the graph structure information is extremely useful in this case. However, on low-homophily datasets, many of them are even worse than MLP. GCN and GIN, the two low-pass filter based models, perform worst. The two GNNs with high-order graph filters, ChebNet and ARMA, are clearly superior to other models due to their higher spectrum coverage. However, they cannot beat our models with specially designed multiple filters. The reason might be that the high complexity of their filters makes it more difficult to learn one optimal single filter. Finally, our model T-DEMUF yields over 18% higher accuracy than the best baselines (Geom_GCN) on Squirrel; and P-DEMUF yields almost 10% higher accuracy than MLP on Texas In addition, we select some typical datasets and show the frequency distribution on these graphs in Fig. 2. We can obviously see that on Cora the spectrum is focused on low frequency components. This can explain why the low-pass filter based models can also perform well on it. On other datasets, the frequency distribution is more diverse, so the low-pass filters can not match with the important frequency components anymore. In contrast, both of our models, T-DEMUF and P-DEMUF, learn graph filters corresponding well to those components (as shown in the last two rows of Fig. 2). TDEMUF uses fewer number of (more dispersed) filters but achieves comparable or better performance. 6 CONCLUSION In this paper, we propose a theoretical analysis of graph information with the introduction of interaction probability and frequency distribution. We develop a deep understanding of how different structures and input influence the performance of graph filters. We also design a simple framework to learn a filter bank. Empirical results on extensive datasets validate the power of our model. A BENCHMARKS AND MODEL DISCUSSION A.1 STATISTICS INFORMATION OF BENCHMARKS. We provide statistics information of our benchmarks in Table. A.1. A.2 SPECTRAL FILTERS. In our paper, we use four spectral GNN as baselines whose spectral filters are listed as Table.A.2 and define a band-pass filter gb(ϵ) as a quadratic function with respect to the adjacency matrix. Since gb(ϵ)(Ã) = I−(1+ |ϵ′|)−2(ϵI−Ã)2 = (1+ |ϵ|)−2((1+ |ϵ|−ϵ)I+A)((1+ |ϵ|+ϵ)I−A), it is exactly an overlap between a low-pass filter (1+|ϵ|−ϵ)I+A) and a high-pass filter ((1+|ϵ|+ϵ)I−A). That is why gb(ϵ) is a band-pass filter. A.3 MODEL DISCUSSION. 
A.3.1 MOTIVATION OF DISENTANGLEMENT. Overlap if not disentangled. Without disentanglement, it is highly possible that the learned filters have large overlaps if we do not impose any constraints on them. In our algorithm, we aim to train filters to capture the main frequency information of their input and to assign different weights to the captured information depending on how much it contributes to label prediction. Therefore, it is natural to assume that if the inputs of the filters are different, the filters are less likely to overlap. Disentanglement makes the "input" different and more adaptable to each filter. Disentanglement reduces the model complexity. With disentanglement, we divide the node features into several subsets by learnable masking, or map them into several low-dimensional spaces through linear transformations, which lowers the dimension of the corresponding features for each filter and at the same time makes the input features fit each filter better. A.3.2 MOTIVATION OF T-DEMUF. As we clarified in Section 5.1, our implementation of disentanglement is not random masking but learnable masking leveraging GUMBEL-SOFTMAX. These learnable maskings disentangle the node features into several subsets of features. With the constraint L(X_{k−1}, H_k) = ‖X_{k−1} ⊙ M′_k − H_k‖²₂, for each subset of features we train a band-pass filter to pass their main frequency band. As shown in Figure 2, this constraint guides the filter learning process, which helps reduce the overlap of the filters' frequency responses and thus reduces the number of graph filters needed. At the same time, to minimize the supervised loss, the maskings are trained to disentangle features whose frequency distributions are similar to those of the labels and to assign them a higher weight; captured feature information that is not useful is assigned a lower weight. The weights are also learnable. A.3.3 HOW CAN OUR FILTER BANK SELECTION BE DATA-DRIVEN? In our algorithm, although the form of our band-pass filter g_b(ϵ) is predefined, its parameters, including ϵ and the weight ω, are learned from the specific graph. Moreover, the parameters of our feature disentanglement blocks (the linear transformations and the learnable masking) are also learned from data, which in turn affects the learning of the filter bank. Therefore, our filter bank selection is data-driven. A.4 MORE EXPERIMENTAL RESULTS. A.4.1 ADDITIONAL BASELINE - GPRGNN. Here we compare our models with a related baseline, GPRGNN (Chien et al., 2021). It is worth noting that the splitting in our paper is different from that in GPRGNN. In GPRGNN, the training set consists of the same number of nodes from each class, while we just randomly choose our training data. We find that the WebKB datasets are sensitive to the way of splitting due to their uneven distribution of labels (the class sizes are: Cornell and Texas: 33/1/18/101/30, Wisconsin: 10/70/118/32/21). Although GPRGNN's splitting is likely better for model training, our model still outperforms it on the Wikipedia datasets, i.e., Chameleon and Squirrel. In GPRGNN's setting, the performance of MLP on WebKB is comparable to GPRGNN's, while in our setting, our proposed models' performance is much better than MLP's. We also test T-DEMUF on Actor, Cornell, and Texas following the splitting of GPRGNN. As shown in Table A.4.1, on most of the benchmarks, our model performs better than GPRGNN. A.4.2 ABLATION STUDY. To show the advantage of using disentanglement, we provide an ablation study on five benchmarks. Here, we propose two ablation models based on P-DEMUF (the learnable masking they ablate is sketched below).
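As referenced above, the Gumbel-softmax masking can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the shapes, temperature, and random logits are illustrative assumptions, and in the actual model the logits would be trained end-to-end.

```python
import numpy as np

# Sketch of a learnable feature mask via Gumbel-softmax (Jang et al., 2017).
rng = np.random.default_rng(0)
f, tau = 8, 0.5                                # feature dimension, temperature
logits = rng.standard_normal((f, 2))           # per-feature {keep, drop} logits (trainable)

# Sample Gumbel noise and take a temperature-controlled softmax per feature.
gumbel = -np.log(-np.log(rng.uniform(1e-9, 1.0, logits.shape)))
y = np.exp((logits + gumbel) / tau)
y = y / y.sum(axis=1, keepdims=True)           # relaxed one-hot over {keep, drop}
mask = y[:, 0]                                 # soft mask M in (0, 1)^f

X = rng.standard_normal((24, f))               # toy node features
X_masked = X * mask                            # X ⊙ M, the input to a band-pass filter
```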
Recall that the disentanglement block of P-DEMUF consists of learnable masking and linear transformations; we design our ablation models by removing the masking and the linear transformation components, respectively. Also, for a fair and intuitive comparison, we simply fix the number of filters to 2. The results shown in Table A.4.2 validate that if we remove the disentanglement blocks of P-DEMUF, the results become worse on most of the benchmarks. B PROOF OF PROPOSITION Here, we provide the proof of Proposition 3.2. Proof. Since x = Σ_{i=0}^{n−1} α_i u_i, where u_i is the i-th unit eigenvector of L̃ and λ_i^n = u_i^⊤ L̃^n u_i, we have E[f^n] = Σ_{i=0}^{n−1} P(f = λ_i) λ_i^n = (Σ_i (α_i u_i)^⊤ L̃^n (α_i u_i)) / (Σ_i α_i²) = (x^⊤ L̃^n x)/(x^⊤ x) = (x^⊤ (I − Ã)^n x)/(x^⊤ x). (13) Below is the proof of Proposition 3.1. Proof. For P = D^{−1}A and Ã = D^{−1/2} A D^{−1/2}, and Π, Π̃ defined by Definition 3.1, the inequality can be represented as (RΠ^k + (Π^k)^⊤ R)_{lm} ≥ 2(R^{1/2} Π̃^k R^{1/2})_{lm}, which is equivalent to proving y_m^⊤ (P^k + (P^k)^⊤) y_l ≥ 2 y_m^⊤ Ã^k y_l. Noting that (P^k)^⊤ = D P^k D^{−1} and Ã^k = D^{1/2} P^k D^{−1/2}, with B = P^k + (P^k)^⊤ we have B_ij = P^k_ij + (d_i/d_j) P^k_ij ≥ 2 √(d_i/d_j) P^k_ij = 2 Ã^k_ij. Therefore, y_m^⊤ (B − 2Ã^k) y_l ≥ 0. Letting m = l, we get π^k_l ≥ π̃^k_l. To prove part b, we utilize Lemma B.1. Since g(Ã) is symmetric, we have (g̃²(Π̃))_ll = (y^⊤ (g(Ã))² y)/(y^⊤ y) ≥ ((y^⊤ g(Ã) y)/(y^⊤ y))² = ((g̃(Π̃))_ll)². Lemma B.1. Let B ∈ R^{n×n} be a symmetric matrix. For all y ∈ R^n, we have (y^⊤ B² y)/(y^⊤ y) ≥ ((y^⊤ B y)/(y^⊤ y))². Proof. Since B is symmetric, we have B = UΛU^⊤, where U is the matrix of unit eigenvectors of B. As in the proof of Proposition 3.2, we obtain (y^⊤ B² y)/(y^⊤ y) = (Σ_i (α_i λ_i)²)/(Σ_i α_i²) and ((y^⊤ B y)/(y^⊤ y))² = (Σ_i α_i² λ_i)² / (Σ_i α_i²)². From Hölder's inequality, we have (Σ_i (α_i λ_i)²)(Σ_i α_i²) ≥ (Σ_i α_i² λ_i)². Therefore, (Σ_i (α_i λ_i)²)/(Σ_i α_i²) ≥ (Σ_i α_i² λ_i)²/(Σ_i α_i²)².
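As a quick numeric sanity check of Proposition 3.2, the following sketch compares E[f^n] computed from the frequency distribution of Definition 3.2 against the matrix form x^⊤(I − Ã)^n x / (x^⊤x). The graph, signal, and seed are arbitrary illustrative choices, not part of the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = rng.integers(0, 2, (n, n))
A = np.triu(A, 1)
A = A + A.T + np.eye(n)                        # undirected, with self-loops (Sec. 3.1)
d_inv_sqrt = np.diag(A.sum(1) ** -0.5)
L = np.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt    # symmetric normalized Laplacian

x = rng.standard_normal(n)
lam, U = np.linalg.eigh(L)                     # graph frequencies and components
alpha = U.T @ x                                # spectrum of x
p = alpha**2 / np.sum(alpha**2)                # frequency distribution (Def. 3.2)

for k in (1, 2, 3):
    moment = np.sum(p * lam**k)                                 # E[f^k]
    matrix = x @ np.linalg.matrix_power(L, k) @ x / (x @ x)     # x^T L~^k x / x^T x
    assert np.isclose(moment, matrix)
```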
1. What is the focus of the paper regarding graph neural networks?
2. What are the strengths of the theoretical analysis provided in the paper?
3. What are the weaknesses of the paper, particularly regarding its experiments and comparisons with other works?
4. Do you have any questions about the proposed model and its components, such as the disentanglement block and T-DEMUF architecture?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
It is observed that for Graph Neural Networks (GNNs), the correlation between the labels and the graph structure matters. The paper gives a theoretical analysis of this behavior via interaction probability and frequency distribution. The analysis shows why homophily is favorable for GNNs. Additionally, the authors are also able to identify some conditions under which GNNs can do well. But the key point made is that it is not possible to cater to all possible scenarios with a single filter. The paper then goes on to propose a model that builds multiple band-pass filters over the range [-1, 1] and aggregates information over these filters to give improved performance.

Review
The paper is well-written and clear to understand.

Strengths:
- The theoretical analysis looks clean and simple. Equations 6, 7, 8, 9, and 10 provide a good indication of how the prediction error depends on pi_l. It also gives insight into the homophily requirements for good performance.
- The proposed model is simple and gives good performance on datasets of varying homophily scores.

Weaknesses:
- The error analysis is done on MSE rather than Softmax (which I suspect is what is used in the experiments). However, this doesn't take away much from the presented work and the analysis still feels insightful enough.
- The motivation for the disentanglement block is unclear. The paper states: "Theoretically, piling up sufficient numbers of graph filters to capture all the frequency components can improve prediction performance. However, it is very expensive. To avoid this problem, we consider feature disentanglement - essentially, it is to disentangle frequency distributions of features into different families. Features in the same family are expected to have similar spectral properties, that is, they have similar frequency distributions or have overlap on their main frequency bands." While it is clear that it would be difficult to have a large number of band-pass filters, it is unclear how the disentanglement of features works to avoid this scenario. The implementation of disentanglement seems to be just random masking. How does it satisfy the statement "Features in the same family are expected to have similar spectral properties"?
- The motivation for P-DEMUF is pretty clear, but the motivation for the T-DEMUF architecture is not clear and there is no mention of it anywhere. Also, the equations for T-DEMUF are somewhat confusing. It feels like there might be a typo there, at least by looking at Figure 1.
- GPRGNN by [Chien et al.] is missing from the baseline comparison.

In light of the above, I am inclined to rate this paper marginally below threshold. There are a few typos in the paper: Var(f_l) in Section 3.4 (and in other places where it appears) has a mistake. The equations for T-DEMUF likely have a typo in them.

References: Eli Chien, Jianhao Peng, Pan Li and Olgica Milenkovic. Adaptive Universal Generalized PageRank Graph Neural Network. ICLR, 2021.
ICLR
Title Graph Information Matters: Understanding Graph Filters from Interaction Probability Abstract Graph Neural Networks (GNNs) have received extensive affirmation for their promising performance in graph learning problems. Despite their various neural architectures, most are intrinsically graph filters, which provides a theoretical foundation for model explanation. In particular, low-pass filters show superiority in label prediction on many benchmarks. However, recent empirical research suggests that models with only low-pass filters do not always perform well. Despite increasing attempts to understand graph filters, it remains unclear how a particular graph affects the performance of different filters. In this paper, we carry out a comprehensive theoretical analysis of the synergy of graph structure and node features on graph filters' behavior in node classification, relying on the introduction of interaction probability and frequency distribution. We show that the homophily degree of graphs significantly affects the prediction error of graph filters. Our theory provides a guideline for graph filter design in a data-driven manner. Since it is hard for a single graph filter to live up to this guideline, we propose a general strategy for exploring a data-specified filter bank. Experimental results show that our model achieves consistent and significant performance improvements across all benchmarks. Furthermore, we empirically validate our theoretical analysis and explain the behavior of the baselines and of our model. 1 INTRODUCTION Graph Neural Networks (GNNs) have continuously attracted interest owing to their promising performance on various graph learning problems. It is known that most GNNs are intrinsically graph filters (Kipf & Welling, 2017; Defferrard et al., 2016; Ortega et al., 2018; Nt & Maehara, 2019). With the theoretical foundation of filters, there have been increasing attempts at model explanation, e.g., explaining the behavior of various GNNs in node classification. Nt & Maehara (2019) investigated the superiority of low-pass filters, backed up by theoretical arguments, while recent research (Balcilar et al., 2020; Chang et al., 2020; Bo et al., 2021) empirically revealed the weakness of GNNs with only low-pass filters on certain datasets. These contradictory views on low-pass filters pose a significant problem: Why does a filter work on one dataset but not on another? More precisely, for a given filter, what kinds of structure and features are useful for prediction? This makes it clear that in order to solve this problem, it is necessary to take graph information into account, including the graph structure, features, and labels. Existing theoretical research is mostly restricted to the investigation of the filters themselves, such as exploring their expressive power (Oono & Suzuki, 2020; Balcilar et al., 2020), without considering the inconsistency of their performance on different graphs. It is clear that structural and feature information lead to this possible inconsistency. However, there has been little explicit analysis of how graph information influences the performance of graph filters. For instance, GNNs have formulated a variety of graph filters in a heuristic manner under a presumed homophily assumption, i.e., nodes with similar attributes/labels tend to have connections. A quantitative description of homophily remained scarce until Pei et al. (2020) designed a rough index to measure it.
In this paper, we establish a comprehensive theoretical analysis of the effect of structure and feature information on node label prediction to fill this gap and provide deep insights into the explanation of graph filters. We first establish a systematic investigation of graphs with a homophily indicator - the interaction probability - and a distributional representation of input information - the frequency distribution. The interaction probability, derived from random walk theory, relates node labels to the local topology and quantifies the degree of clustering of nodes in the same/different classes. We argue that interaction probability reflects the difficulty of identifying one class from the others. In terms of feature information, we draw on spectral analysis, representing features as frequency distributions. Furthermore, we consider the moments of the frequency and build an explicit relation with the graph structure. Interestingly, we find that the moments of the label frequency (noting that a one-hot label vector can be regarded as a special node feature) are determined by the interaction probability. The aforementioned preparations underpin our deep understanding of graph filters. We evaluate the prediction error of a graph filter under two settings: (a) with a fixed graph structure, we unravel the influence of the input (original or transformed node features); (b) with a given input, we show how structure matters, providing an analysis that utilizes the frequency distribution and interaction probability. The main conclusions are: 1. given the structure, the frequency response of an ideal graph filter should be consistent with the main frequency band of the label frequency; that is, a matched frequency response is the premise of success; 2. given the input, a graph filter essentially tunes the weights of edges - failing to make the homophily degree large enough may cause unsatisfactory prediction accuracy. These interpretations of graph filters imply a data-driven filter design principle. In addition, we apply these theoretical results to three types of filters - low-pass, high-pass, and band-pass filters of specified form. This shows that it is hard for a single graph filter to comply with the principle of ideal filters, especially when the homophily degrees and label frequency distributions of different classes are very different. For example, when the frequency distributions of the labels are far from each other, it is hard to find a single filter whose frequency response can cover all the main frequency bands well. In this paper, we leverage a combination of band-pass graph filters to overcome this problem and develop a simple yet effective framework showing how to learn multiple filters depending on the dataset. We empirically validate our theoretical analysis and investigate the structure and feature information of the benchmarks. We verify our model on a variety of datasets and explain the behavior of the baselines and of our model. Experimental results show that our model achieves consistent and significant performance improvements across all benchmarks. Our main contributions are: 1. We develop a theoretical analysis of graph information based on the introduction of interaction probability and frequency distribution; 2. We provide a deep understanding of the performance of graph filters, illustrating how graph structure and input information matter; 3. We indicate the weakness of GNNs with a single graph filter and propose a general framework to learn a data-specified filter bank, which contributes to significant improvements.
2 RELATED WORK In this paper, we focus on the analysis of graph filters in the context of graph neural networks. Since Bruna et al. (2014) defined spectral graph filters and extended convolutional operations to graphs, various spectral graph neural networks have been developed. For example, ChebNet (Defferrard et al., 2016) defines the Chebyshev polynomial filter, which can be exactly localized in the k-hop neighborhood. Kipf & Welling (2017) simplified the Chebyshev filters using a first-order approximation and derived the well-known graph convolutional networks (GCNs). Bianchi et al. (2021) proposed the rational auto-regressive moving average graph filters (ARMA), which are more powerful in modeling localization and provide a more flexible graph frequency response, but are also more computationally expensive and less stable. Very recently, Min et al. (2020) augmented conventional GCNs with geometric scattering transforms, which enable band-pass filtering of graph signals and alleviate the oversmoothing issue. In addition, most graph neural networks originally defined in the spatial domain have also been found to be essentially connected to spectral filtering (Balcilar et al., 2020). By bridging the gap between spatial and spectral graph neural networks, Balcilar et al. (2020) further investigated the expressiveness of all graph neural networks through their spectral analysis. However, their analysis is limited to the spectrum coverage of a graph filter itself and lacks deeper insights into the graph-dependent performance of these filters. Another related topic is the measurement of graph homophily. Beyond the interaction probability that we define in this paper, there are some other heuristic metrics for homophily. Pei et al. (2020) defined a node homophily index to characterize their datasets and help explain their experimental results for Geom_GCN: β = (1/#nodes) Σ_v (#neighbors of v that have the same label as v)/(#neighbors of v). Zhu et al. (2020) defined the edge homophily ratio instead and identified a set of key designs that can boost learning from the graph structure under heterophily: h = (#edges whose end nodes have the same label)/(#edges). This edge homophily definition is sensitive to the number of classes and the size of each class, and Lim et al. (2021) made a modification to alleviate this problem. Our work differs from these works in that we not only use our definition to characterize the graph but also directly relate it to the performance of graph filters (or GNNs); a small numerical sketch of these two heuristic measures follows shortly below. 3 THEORETICAL ANALYSIS OF GRAPH INFORMATION 3.1 NOTATION Let G_n = (V_n, E_n) be an undirected graph with additional self-connections, where V_n = {v_0, …, v_{n−1}} is the set of nodes and E_n ⊆ V_n × V_n is the set of edges. Let A ∈ R^{n×n} be the adjacency matrix and L = D − A the Laplacian matrix, where D is the diagonal degree matrix with D_ii = Σ_j A_ij. We denote Ã = D^{−1/2} A D^{−1/2}; then L̃ = D^{−1/2} L D^{−1/2} = I − Ã is the symmetric normalized Laplacian. Let (λ_i, u_i) be a pair of eigenvalue and unit eigenvector of L̃, where 0 = λ_0 ≤ · · · ≤ λ_{n−1} ≤ 2. 3.2 PROBLEM SETTING In this paper, we are mainly interested in node classification problems on undirected graphs. Given G_n = (V_n, E_n), we consider T = {0, …, K−1} as the set of all node labels. For each k ∈ T, we denote by C_k the set of nodes with label k and by R ∈ R^{K×K} a size matrix, i.e., a diagonal matrix with R_kk = |C_k|.
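As a brief aside, the two heuristic homophily measures reviewed in Section 2 - the node homophily β of Pei et al. (2020) and the edge homophily ratio h of Zhu et al. (2020) - can be computed in a few lines. The following numpy sketch uses a toy graph; the function names and data are illustrative assumptions, not code from the cited papers.

```python
import numpy as np

def node_homophily(A, labels):
    # beta: average over nodes of the fraction of neighbors sharing the node's label
    fractions = []
    for v in range(A.shape[0]):
        neighbors = np.flatnonzero(A[v])
        if neighbors.size > 0:
            fractions.append(np.mean(labels[neighbors] == labels[v]))
    return float(np.mean(fractions))

def edge_homophily(A, labels):
    # h: fraction of edges whose two end nodes carry the same label
    src, dst = np.nonzero(np.triu(A, k=1))     # count each undirected edge once
    return float(np.mean(labels[src] == labels[dst]))

# toy 4-node graph with two classes
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]])
labels = np.array([0, 0, 1, 1])
print(node_homophily(A, labels), edge_homophily(A, labels))   # 0.5 0.5
```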
Considering single-label problems, in which classes are mutually exclusive, we use one-hot encoding to indicate the class label and introduce a label matrix Y = (y_0, …, y_{K−1}) ∈ R^{n×K} to represent the labels of V_n, where y_k is the indicator vector of C_k. Obviously, R = Y^⊤Y, Y^⊤1 = diag(R), and Y1 = 1. A signal x on G_n arranges the signal values in vector form, x = (x_0, …, x_{n−1})^⊤. In particular, the labels {y_k | k ∈ T} are also graph signals. 3.3 A STRUCTURE INDICATOR - INTERACTION PROBABILITY Homophily of graphs is an implicit assumption widely leveraged in graph learning methods, including GNNs. It is considered an indisputable common property of most graphs, despite its descriptive and unquantifiable definition, which introduces a variety of uncertainties. In this section, starting from the random walk, we introduce the interaction probability to overcome this challenge. For a random walk on G_n, we denote by P = D^{−1}A its transition matrix, which is also a row Markov matrix. From random walk theory, P^k is the k-step transition matrix, and P^k_ij is the probability that a random walker starting from node v_i arrives at v_j after k steps. For a node v_i and a class C_l, we denote by π^k_i(C_l) the probability that a random walker starting from v_i stays in C_l at the k-th step. It is trivial that π^k_i(C_l) = Σ_{j∈C_l} P^k_ij, with Σ_{l∈T} π^k_i(C_l) = 1. π^k_i(C_l) demonstrates the relative preference/closeness of node v_i for C_l at the scale of k steps. To meet the homophily assumption, for v_i in C_l, π^k_i(C_l) is expected to gap away from the others. Since π^k_i(C_l) − Σ_{m≠l} π^k_i(C_m) = 2π^k_i(C_l) − 1, π^k_i(C_l) can be regarded as a measure of the k-scale homophily degree of node v_i. In particular, for all k ∈ N and v_i ∈ C_l, π^k_i(C_l) = 1 means that C_l is a community and will never communicate with other classes. However, this case is rare in real graphs. Below, we investigate the homophily of a class and propose a method to measure the communication strength between two classes. Definition 3.1 (k-step interaction probability). For l, m ∈ T, we define Π^k as the k-step interaction probability matrix, formulated as follows: Π^k_lm = (1/R_l) Σ_{v_i∈C_l} π^k_i(C_m) = (1/R_l) Σ_{v_i∈C_l, v_j∈C_m} P^k_ij = (y_l^⊤ P^k y_m)/(y_l^⊤ y_l) (1) and Π^k = (Y^⊤Y)^{−1} Y^⊤ P^k Y = R^{−1} Y^⊤ P^k Y. (2) Π^k_lm is the probability that a random walker starting from C_l arrives in C_m after k steps. Remark 1. Obviously, Π^k 1 = 1. Π^k_lm is the mean proportion of C_m in the k-hop neighbors of nodes from C_l. Noting that rank(Y) = K, when K ≠ n we have Y R^{−1} Y^⊤ ≠ I; thus (R^{−1}Y^⊤PY)^k ≠ R^{−1}Y^⊤P^kY, i.e., (Π)^k ≠ Π^k. More generally, for an arbitrary polynomial function g, R^{−1}Y^⊤g(P)Y is in general not equal to g(R^{−1}Y^⊤PY). In the rest of the paper, we write g̃(Π) = R^{−1}Y^⊤g(P)Y and g(Π) = g(R^{−1}Y^⊤PY). For instance, if g(·) = (·)^m, then g̃(Π) = Π^m and g(Π) = (Π)^m. Also, we denote Π^k_ll, the self-interaction probability, by π^k_l for short. The 1-step interaction probability intuitively reflects the degree of clustering of two classes, and Σ_{i=1}^{k} Π^i measures the strength of interaction between classes at the scale of k steps. Since P is not symmetric, Π^k_lm ≠ Π^k_ml. To facilitate the analysis, we here propose a symmetric variant of the interaction probability to identify the interactions between two classes. Denoting this symmetric k-step interaction probability matrix by Π̃^k and replacing P with Ã = D^{−1/2}AD^{−1/2}, we obtain Π̃^k = R^{−1/2} Y^⊤ Ã^k Y R^{−1/2}. Below, we investigate the important properties of Π^k and Π̃^k. Proposition 3.1. For l, m ∈ T and an arbitrary polynomial function g(·), we have:
a. R_l Π^k_lm + R_m Π^k_ml ≥ 2√(R_l R_m) Π̃^k_lm, where R_l is the l-th diagonal element of R; b. (g̃²(Π̃))_ll ≥ ((g̃(Π̃))_ll)², where g̃^k(Π̃) = R^{−1/2} Y^⊤ g^k(Ã) Y R^{−1/2}. The proof can be found in Appendix B. Since Π̃^k 1 ≠ 1, this measure is no longer a probability measure. However, according to Prop. 3.1.a (let m = l), π̃^k_l is a lower bound of π^k_l, and π̃^k_l = π^k_l when G_n is a regular graph. In the rest of the theoretical analysis, we use π̃^k_l to measure the degree of clustering of C_l. Letting g(·) = (·)^k, from Prop. 3.1.b we have π̃^{2k}_l ≥ (π̃^k_l)². In Section 4.2, we leverage this inequality to derive a lower bound on our prediction error and to further illustrate how structure influences the performance of a given filter. 3.4 A FEATURE INDICATOR - FREQUENCY DISTRIBUTION Following graph signal processing (GSP) concepts, λ_0, …, λ_{n−1} are the graph frequencies and u_0, …, u_{n−1} are the corresponding frequency components, which are invariant to graph filters. Through the Fourier transform, we obtain {α_i = ⟨u_i, x⟩ | i = 0, …, n−1}, the spectral representation of a graph signal x, called the graph signal spectrum. Moreover, a graph signal can be represented as a linear combination of frequency components, i.e., x = Σ_i α_i u_i. For a label vector y_l, which is also a graph signal, we denote its spectrum by {γ_0, …, γ_{n−1}}. There is an intuitive assumption: the information of the label vectors is all we need for classification - we validate this assumption in Section 4.1. In this context, γ_i²/Σ_j γ_j² reflects how much the frequency component u_i contributes to the distinctiveness of C_l, without considering the positivity or negativity of the effects. Interestingly, we find that the normalized signal spectrum is a histogram/discrete distribution, defined below. Definition 3.2 (Frequency distribution). We define f, the frequency of signal x, as a random variable taking values in the set of graph frequencies with probability Pr(f = λ_k) = α_k² / Σ_i α_i². This probability describes the frequency distribution of signal x. With this definition, we derive distributional representations of signals from their spectral representations/spectra. One can evaluate the effect of a signal by comparing the frequency distributions of signals and label vectors under a specified distribution metric, such as the Wasserstein distance. Below, we consider the moments of the frequency distribution to show how graph structure influences signal frequency. Proposition 3.2. For G = {V, E}, let f be the frequency of signal x; then E[f^n] = (x^⊤(I − Ã)^n x)/(x^⊤x). The proof of this proposition can be found in Appendix B. With the definition of interaction probability, we further represent the moments of the label vector's frequency. Corollary 3.3. For the label frequency f_l of y_l, we have E[f_l^n] = (g̃(I − Π̃))_ll with g = (·)^n. Recalling that g̃(I − Π̃) = R^{−1/2} Y^⊤ (I − Ã)^n Y R^{−1/2}, we have E[f_l] = 1 − π̃_l and E[f_l²] = 1 − 2π̃_l + π̃²_l, and the variance of f_l is Var(f_l) = π̃²_l − (π̃_l)², where π̃²_l denotes the 2-step quantity (g̃(Π̃))_ll with g = (·)². It can be seen that both the mean and the variance of the label frequency are close to 0 when π̃_l approaches 1, which reflects a high homophily degree (as π̃_l ≤ π_l ≤ 1). In Section 4.1, we conduct a more detailed analysis of the feature information in the spectral space using the frequency distribution. 4 ANALYSIS OF GRAPH FILTERS A graph filter is defined as a function g applied to the Laplacian matrix or the adjacency matrix. Denoting by R[Ã] the polynomial ring in Ã over R, we are here mainly interested in g ∈ R[Ã].
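Before turning to the filter analysis, the quantities above are easy to compute explicitly. The following sketch builds the frequency distribution of a label vector (Definition 3.2) on a toy graph and numerically checks E[f_l] = 1 − π̃_l from Corollary 3.3; the graph, labels, and seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 24, 2
A = rng.integers(0, 2, (n, n)); A = np.triu(A, 1); A = A + A.T + np.eye(n)
d = A.sum(1)
A_sym = A / np.sqrt(np.outer(d, d))            # D^{-1/2} A D^{-1/2}
L = np.eye(n) - A_sym                          # symmetric normalized Laplacian

labels = np.arange(n) % K
Y = np.eye(K)[labels]                          # one-hot label matrix
R_diag = Y.sum(0)                              # class sizes R_ll

lam, U = np.linalg.eigh(L)
for l in range(K):
    gamma = U.T @ Y[:, l]                      # spectrum of the label vector y_l
    p = gamma**2 / np.sum(gamma**2)            # frequency distribution of y_l
    mean_f = np.sum(p * lam)                   # E[f_l]
    pi_tilde = Y[:, l] @ A_sym @ Y[:, l] / R_diag[l]   # symmetric self-interaction
    assert np.isclose(mean_f, 1.0 - pi_tilde)  # Corollary 3.3
```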
In this section, we provide a deep understanding of the performance of graph filters with respect to label prediction, based on the above theoretical analysis of graph information. In general, there are two major concerns: with a fixed graph structure, how does the input impact the performance of a given filter? And with a fixed input, how does the graph structure impact the performance of a given filter? We provide theoretical analyses of these two questions in Sections 4.1 and 4.2, respectively. The general formulation of the (l+1)-th layer of spectral GNNs is X^{(l+1)} = σ(g(Ã) X^{(l)} W^{(l+1)}), where σ is an activation function, X^{(l)} is the output of the l-th layer, X^{(0)} is the feature matrix, and W^{(l+1)} is a learnable transformation matrix. We call X^{(l)} W^{(l+1)} the input of g(Ã) in the (l+1)-th layer and denote by X the input of g(Ã) in the last layer. In the following sections, we discuss the prediction error of spectral GNNs with a given graph filter and without an activation function before the prediction. That is, in the last layer, with X as input, g(Ã)X is directly used for prediction. Definition 4.1 (Prediction error). Let X ∈ R^{n×K} be the input of the graph filter g(Ã) and Y ∈ R^{n×K} the label matrix. The prediction error is formulated as: Er(g, X) = ‖g(Ã)X − Y‖²_F = tr(X^⊤g²(Ã)X) − 2 tr(X^⊤g(Ã)Y) + ‖Y‖²_F. (3) Remark 2. For a label vector y_l, we denote by Er(g, x_l) = ‖g(Ã)x_l − y_l‖²_F the error of g(Ã) predicting class l. Obviously, Er(g, X) = Σ_{l∈T} Er(g, x_l), where x_l is the l-th column of X. In particular, we will apply our conclusions to specified filters and provide a concrete analysis. Definition 4.2. With ϵ ∈ [0, ϵ_0], where ϵ_0 is a small constant, and ϵ′ ∈ [−1, 1], we define low-pass filters g_{l(ϵ)}(Ã), high-pass filters g_{h(ϵ)}(Ã), and band-pass filters g_{b(ϵ′)}(Ã) as: g_{l(ϵ)}(Ã) = ϵI + Ã, g_{h(ϵ)}(Ã) = ϵI − Ã, g_{b(ϵ′)}(Ã) = I − (1 + |ϵ′|)^{−2}(ϵ′I − Ã)². For an eigenvalue λ of L̃, we have g_{l(ϵ)}(λ) ∈ [ϵ − 1, 1 + ϵ], g_{h(ϵ)}(λ) ∈ [ϵ − 1, 1 + ϵ], and g_{b(ϵ′)}(λ) ∈ [0, 1], since λ ∈ [0, 2]. In particular, g_{l(0)} is the GCN filter. 4.1 HOW INPUT MATTERS Denote X̃ = U^⊤X = (x̃_0, …, x̃_{K−1}) and Ỹ = U^⊤Y = (ỹ_0, …, ỹ_{K−1}), where U is the matrix of unit eigenvectors of L̃ (recall that the eigenvectors of Ã coincide with those of L̃). Revisiting Er(g, x_l) and Er(g, y_l) in the spectral domain, we have: Er(g, x_l) = ‖g(I − Λ)x̃_l − ỹ_l‖²_F = Σ_i (g(1 − λ_i)α_i − γ_i)² (4) and Er(g, y_l) = Σ_i γ_i²(1 − g(1 − λ_i))² = R_l Σ_i p_i (1 − g(1 − λ_i))² = R_l E[(1 − g(1 − f_l))²], (5) where Λ is the eigenvalue matrix of L̃, α_i and γ_i are the spectra of x_l and y_l respectively, p_i = Pr(f_l = λ_i), and f_l is the frequency of y_l. For a better comparison, we normalize the input x_l: ‖x_l‖²_F = ‖y_l‖²_F, i.e., Σ_i α_i² = Σ_i γ_i², and g is re-scaled so that g([0, 2]) concentrates in [−1, 1]. How does input information matter? With normalized features and graph filters, this indicates that the performance of graph filters greatly depends on the label spectra. In particular, when the frequency response of a graph filter does not fit the label frequency, it might be inferior to an all-pass filter, such as an MLP. On the other hand, this poses a principle of filter design: make the frequency response of the filter as consistent as possible with the main frequency band of the label frequency. In terms of input information, the input determines the performance of a filter - if the frequency distribution of the input vector is far from that of the label vector, even an ideal filter would fail.
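To make Eq. (5) concrete, the following sketch evaluates Er(g, y_l) = Σ_i γ_i²(1 − g(1 − λ_i))² for the three filter templates of Definition 4.2 on a toy graph. The filter parameters, label vector, and seed are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 24
A = rng.integers(0, 2, (n, n)); A = np.triu(A, 1); A = A + A.T + np.eye(n)
d = A.sum(1)
L = np.eye(n) - A / np.sqrt(np.outer(d, d))    # symmetric normalized Laplacian
lam, U = np.linalg.eigh(L)

y = (np.arange(n) < n // 2).astype(float)      # toy label vector y_l
gamma = U.T @ y                                # its spectrum

def err(g, lam, gamma):
    # Er(g, y_l) in the spectral domain, Eq. (5); g acts on mu = 1 - lambda
    return np.sum(gamma**2 * (1.0 - g(1.0 - lam))**2)

g_low  = lambda mu, e=0.0: e + mu                               # g_l(eps)
g_high = lambda mu, e=0.0: e - mu                               # g_h(eps)
g_band = lambda mu, e=0.5: 1 - (e - mu)**2 / (1 + abs(e))**2    # g_b(eps')

print(err(g_low, lam, gamma), err(g_high, lam, gamma), err(g_band, lam, gamma))
```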
This observation is identical to our assumption in Section 3.4 - the information of the label vector is all we need, and the distance between the frequency distributions of the input and label vectors reflects the input's usefulness. Therefore, Er(g, y_l) is the lower bound of Er(g, x_l) when g(Ã) is given. While an input vector may be useful for distinguishing one class, it may be unhelpful for another. Most GNNs tune the frequency distribution of the features with a learnable linear transformation to generate a more informative input. Here, we discuss Er(g, y_l) for the three types of filters: Er(g_{l(ϵ)}, y_l)/R_l = Var(f_l − ϵ) + E[f_l − ϵ]² = Var(f_l) + (E[f_l] − ϵ)², (6) Er(g_{h(ϵ)}, y_l)/R_l = Var(2 − f_l − ϵ) + E[2 − f_l − ϵ]² = Var(f_l) + (E[f_l] + ϵ − 2)², (7) Er(g_{b(ϵ′)}, y_l)/R_l ≈ ((E[f_l] + ϵ′ − 1)⁴ + 6 Var(f_l)(E[f_l] + ϵ′ − 1)² + 8(1 − ϵ′) Var(f_l) E[f_l]) / (1 + |ϵ′|)⁴, (8) where we use Var(f_l²) ≈ 4E[f_l]² Var(f_l), derived from the delta method. Discussion. An interesting observation is that for a class with a highly dispersive spectrum, the efforts of any single filter are to no avail. From Corollary 3.3, we know that E[f_l] = 1 − π̃_l and Var(f_l) = π̃²_l − (π̃_l)². This demonstrates that higher homophily means a lower E[f_l], a lower Var(f_l), and also a lower prediction error for low-pass filters. On the other hand, we indicate that, in most cases, band-pass filters are more powerful than low-pass filters, let alone high-pass filters. However, the prediction capacity of a single filter is very limited when the means of the spectra vary widely. 4.2 HOW STRUCTURE MATTERS Above, we caught a glimpse of the spectral explanation of the behavior of graph filters. Below, we expand this understanding. Assume that, with a learnable transformation, GNNs are able to generate an informative input. Here we discuss the prediction error of different graph filters under the optimal input Y. We revisit Er(g, y_l) using the symmetric interaction matrix and propose a lower bound er(g, y_l) by leveraging Proposition 3.1: Er(g, y_l) = y_l^⊤(I − g(Ã))²y_l = R_l(I − 2g̃(Π̃) + g̃²(Π̃))_ll ≥ er(g, y_l) = R_l(1 − (g̃(Π̃))_ll)². (9) How does structural information matter? We indicate that, from the spatial point of view, graph filters can be interpreted as weight-tuning mechanisms on the edges. The lower bound clearly demonstrates that a graph filter will have unsatisfactory prediction accuracy if it fails to make the homophily degree of the tuned graph large enough (i.e., if (g̃(Π̃))_ll is far from 1). Applying this prediction error lower bound to the aforementioned specified filters, we have: er(g_{l(ϵ)}, y_l) = (1 − π̃_l − ϵ)² R_l; er(g_{h(ϵ)}, y_l) = (1 + π̃_l − ϵ)² R_l; (10) er(g_{b(ϵ′)}, y_l) = (1 + |ϵ′|)^{−4} R_l (ϵ′² − 2ϵ′π̃_l + π̃²_l)² ≥ (1 + |ϵ′|)^{−4} R_l (ϵ′ − π̃_l)⁴. (11) Discussion. These error bounds indicate that: 1. a low-pass filter fails on classes with a low homophily degree - in turn, this confirms the importance of the homophily assumption for low-pass filters like GCN and is consistent with our spectral point of view; 2. high-pass filters perform poorly, particularly on high-homophily graphs; 3. for a graph whose classes have a consistent homophily degree (their self-interaction probabilities concentrate around a constant ϵ̄), g_{b(ϵ̄)} works better than the others. However, it is predictable that any single filter will fail on graphs with diverse self-interaction probabilities. 5 MODEL AND EMPIRICAL STUDY Our theoretical analysis of graph information demonstrates that: 1. when node classes have inconsistent homophily degrees or their label frequency distributions are far from each other, a single graph filter is prone to fail; 2.
in most cases, band-pass filters perform better than low-pass and high-pass filters; and 3. a feature may contribute to the classification of one class but hinder the discrimination of another. Inspired by these findings, we propose a disentangled multi band-pass filter framework (DEMUF), which can be applied to any type of graph no matter what kind of graph information it carries. The key point of our model is to learn multiple band-pass filters which are used to capture different disentangled feature information, respectively. 5.1 ARCHITECTURE OF TWO FRAMEWORKS OF DEMUF Our framework includes feature disentanglement and frequency filtering. As we have emphasized the limitations of single filters, it is natural to leverage multiple graph filters. Theoretically, piling up a sufficient number of graph filters to capture all the frequency components can improve the prediction performance. However, it is very expensive. To avoid this problem, we consider feature disentanglement - essentially, disentangling the frequency distributions of the features into different families. Features in the same family are expected to have similar spectral properties; that is, they have similar frequency distributions or overlap in their main frequency bands. Then, for each family, we apply a band-pass graph filter to capture its main frequency components. We propose two frameworks with different structures of filters: Plain-DEMUF and Tree-DEMUF (depicted in Fig. 1). The DISENTANGLE block and the FILTER block are formulated as follows: X_k = DISENTANGLE(X, Φ_k) = Φ_k(X), H_k = FILTER(X_k, ϵ_k, h_k) = (g_{b(ϵ_k)})^{h_k} X_k. (12) In our implementation, we provide two instances of the DISENTANGLE function Φ_k: one uses linear transformations, the other uses GUMBEL_SOFTMAX (Jang et al., 2017) to generate learnable masks for feature selection. In terms of the FILTER block, we use the band-pass filter defined in Definition 4.2, i.e., g_{b(ϵ)} = I − (1 + |ϵ|)^{−2}(Ã − ϵI)², as the common filter form. Here, ϵ is the parameter of the filter, constrained to [−1, 1]; note that 1 − ϵ is the center of the frequency response of g_{b(ϵ)}. In each FILTER block, h is the number of layers. The framework of Plain-DEMUF with N graph filters is: H = MLP(CONCAT({FILTER(DISENTANGLE(X, Φ_k), ϵ_k, h_k), ω_k | k = 1, …, N})). Based on this, we implement a simple model called P-DEMUF. Precisely, we leverage a GUMBEL_SOFTMAX to generate N learnable masks {M_1, …, M_N} for feature sampling at once, followed by different MLPs; that is, Φ_k(X) = MLP_k(X ⊙ M_k). Similarly, we develop a model, T-DEMUF, under the framework of Tree-DEMUF, formulated (with X_0 = X) by: H_k = FILTER(DISENTANGLE(X_{k−1}, Ψ_k), ϵ_k, h_k), X_k = DISENTANGLE(X_{k−1}, Φ_k), for k = 1, …, N, and H = MLP(CONCAT({ω_k H_k | k = 1, …, N})). In each T-DEMUF layer, we use GUMBEL_SOFTMAX with different parameters to generate two masks M_k and M′_k, and set Φ_k(X_{k−1}) = X_{k−1} ⊙ M_k and Ψ_k(X_{k−1}) = X_{k−1} ⊙ M′_k. In each layer, we stop the further disentanglement of the H_k branch by utilizing an additional constraint L(X_{k−1}, H_k) = ‖X_{k−1} ⊙ M′_k − H_k‖²₂. Noting that H_k = (g_{b(ϵ_k)})^{h_k}(X_{k−1} ⊙ M′_k), this constraint makes the main frequency bands of H_k consistent with the frequency response of (g_{b(ϵ_k)})^{h_k}. Model discussion. Compared with filter-bank learning methods, which directly apply an array of filters to all features, our models use subsets of the features. This can greatly reduce the amount of computation and the number of parameters and helps learn the filters more efficiently and effectively (a small numerical sketch of one DISENTANGLE-FILTER pair follows below).
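As referenced above, here is a minimal numpy sketch of a single forward pass of one DISENTANGLE + FILTER pair from Eq. (12), with a fixed 0/1 mask standing in for a learned Gumbel-softmax mask. All shapes and the parameters ϵ and h are illustrative assumptions; in DEMUF they are learned.

```python
import numpy as np

rng = np.random.default_rng(4)
n, f = 24, 8
A = rng.integers(0, 2, (n, n)); A = np.triu(A, 1); A = A + A.T + np.eye(n)
d = A.sum(1)
A_sym = A / np.sqrt(np.outer(d, d))            # D^{-1/2} A D^{-1/2}

def g_band(A_sym, eps):
    # band-pass filter of Definition 4.2: I - (1+|eps|)^{-2} (A~ - eps I)^2
    n = A_sym.shape[0]
    M = A_sym - eps * np.eye(n)
    return np.eye(n) - M @ M / (1 + abs(eps))**2

X = rng.standard_normal((n, f))                # toy node features
mask = (rng.random(f) < 0.5).astype(float)     # stand-in for a learned mask
X_k = X * mask                                 # DISENTANGLE block: Phi_k(X)
G = g_band(A_sym, eps=0.3)
H_k = X_k
for _ in range(3):                             # FILTER block with h_k = 3 layers
    H_k = G @ H_k                              # H_k = (g_b(eps_k))^{h_k} X_k
```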
In addition, T-DEMUF uses an additional constraint to guide the filter learning process, while P-DEMUF is a combination of multiple graph neural networks that do not interfere with each other. Therefore, P-DEMUF is more likely than T-DEMUF to learn similar filters and hence to require more filters to improve performance. The model visualization results in Fig. 2 validate this statement. 5.2 EXPERIMENTS To validate DEMUF, we compare the performance of P-DEMUF and T-DEMUF with that of spectral GNNs, spatial GNNs, and an MLP on extensive datasets. 5.2.1 EXPERIMENT SETTINGS Datasets. We use four types of real datasets - citation networks, WebKB, an actor co-occurrence network, and Wikipedia networks - to validate our proposed models. Cora and Citeseer (Sen et al., 2008) are widely used citation benchmarks which represent papers as nodes and citations between two papers as edges. Cornell, Texas, and Wisconsin (Pei et al., 2020) are three subgraphs of WebKB, a webpage network with web pages as nodes and hyperlinks between them as edges. Chameleon and Squirrel (Rozemberczki et al., 2021) are two Wikipedia networks with web pages as nodes and links between pages as edges. The nodes originally have five classes, while Bo et al. (2021) proposed a new classification criterion which divides the nodes into three main categories. In this paper, the relabeled networks are called Chameleon2 and Squirrel2. Actor (Tang et al., 2009) is a subgraph of the film-director-actor-writer network whose nodes only represent actors and whose edges represent their collaborations. For all data, we use 60% of the nodes for training, 20% for validation, and 20% for testing. To intuitively show the homophily degree of a dataset, we calculate the mean of the self-interaction probabilities (the diagonal of the interaction probability matrix) and show it in Table 1. This metric is similar to the node homophily in (Pei et al., 2020). More statistics of the datasets can be found in Appendix A. Baselines. We compare our models with four spectral GNNs: GCN (Kipf & Welling, 2017), ChebNet (Defferrard et al., 2016), GIN (Xu et al., 2019) (although a spatial GNN, its spectral form is easily obtained), and ARMA (Bianchi et al., 2021). We list their spectral filter forms in Appendix A. In short, GCN is a well-known low-pass filter. The filter shape of GIN depends on its parameter ϵ; in this paper, we fix ϵ = 0.3, so it is also a low-pass filter. ChebNet and ARMA are high-order polynomial filters. In addition, we also add three spatial GNNs (whose spectral forms are hard to analyze): GAT (Veličković et al., 2018), FAGCN (Bo et al., 2021), and Geom_GCN (Pei et al., 2020). Both GAT and FAGCN utilize attention mechanisms, and FAGCN takes high-frequency information into account. Geom_GCN is a novel aggregation method based on the geometry of the graph (it is related because it was also empirically studied on graphs with different levels of homophily (Pei et al., 2020)). Finally, we also compare with MLP, a baseline that does not use any graph information. Experimental Setup. For all experiments, we report the mean prediction accuracy on the testing data over 10 runs. We search the learning rate, hidden units, weight decay, and dropout for all models in the same search space. Finally, we choose a learning rate of 0.01, a dropout rate of 0.5, and 32 hidden units for all datasets. The number of filters is searched between 2 and 10, and the final setting is: for T-DEMUF, we use 4 filters with 7 layers for the citation networks, 2 filters with 15 layers for all WebKB and Wikipedia networks, and 5 filters with 1 layer for Actor.
The numbers of MLP layers are 2, 2, 3, and 4, respectively. P-DEMUF uses: 3 filters with 8 layers for the citation networks; 5 filters for Cornell, 4 filters for Wisconsin, and 3 filters for Texas - all of them with 1 layer; 7 filters with 9 layers for the Wikipedia networks; and 5 filters with 2 layers for Actor. P-DEMUF applies a 2-layer MLP to all benchmarks. In addition, as the benchmark setting is the same as that in Geom_GCN, we refer to the results reported in Pei et al. (2020). 5.3 RESULTS AND ANALYSIS The experimental results are summarized in Table 1. Our models consistently outperform the baselines on most benchmarks with significant improvements. On Cora and Citeseer, the datasets with a high level of homophily, our models are only comparable to GCN and the other baselines. However, on all other datasets, which have a lower level of homophily, both of our models obtain large performance gains. To understand the impact of graph homophily on different types of graph filters, let us analyze the performance of all spectral GNNs. On high-homophily datasets, all GNNs perform similarly and their accuracy is much higher than MLP's. This means the graph structure information is extremely useful in this case. However, on low-homophily datasets, many of them are even worse than MLP. GCN and GIN, the two models based on low-pass filters, perform worst. The two GNNs with high-order graph filters, ChebNet and ARMA, are clearly superior to the other models due to their higher spectrum coverage. However, they cannot beat our models with their specially designed multiple filters. The reason might be that the high complexity of their filters makes it more difficult to learn one optimal single filter. Finally, our model T-DEMUF yields over 18% higher accuracy than the best baseline (Geom_GCN) on Squirrel, and P-DEMUF yields almost 10% higher accuracy than MLP on Texas. In addition, we select some typical datasets and show the frequency distributions of these graphs in Fig. 2. We can clearly see that on Cora the spectrum is concentrated on low-frequency components. This explains why the models based on low-pass filters can also perform well on it. On the other datasets, the frequency distribution is more diverse, so low-pass filters can no longer match the important frequency components. In contrast, both of our models, T-DEMUF and P-DEMUF, learn graph filters corresponding well to those components (as shown in the last two rows of Fig. 2). T-DEMUF uses a smaller number of (more dispersed) filters but achieves comparable or better performance. 6 CONCLUSION In this paper, we propose a theoretical analysis of graph information with the introduction of interaction probability and frequency distribution. We develop a deep understanding of how different structures and inputs influence the performance of graph filters. We also design a simple framework to learn a filter bank. Empirical results on extensive datasets validate the power of our model. A BENCHMARKS AND MODEL DISCUSSION A.1 STATISTICS INFORMATION OF BENCHMARKS. We provide statistics of our benchmarks in Table A.1. A.2 SPECTRAL FILTERS. In our paper, we use four spectral GNNs as baselines, whose spectral filters are listed in Table A.2, and we define a band-pass filter g_b(ϵ) as a quadratic function of the adjacency matrix. Since g_b(ϵ)(Ã) = I − (1 + |ϵ|)^{−2}(ϵI − Ã)² = (1 + |ϵ|)^{−2}((1 + |ϵ| − ϵ)I + Ã)((1 + |ϵ| + ϵ)I − Ã), it is exactly the overlap of a low-pass filter ((1 + |ϵ| − ϵ)I + Ã) and a high-pass filter ((1 + |ϵ| + ϵ)I − Ã). That is why g_b(ϵ) is a band-pass filter. A.3 MODEL DISCUSSION.
A.3.1 MOTIVATION OF DISENTANGLEMENT. Overlap if not disentangled. Without disentanglement, it is highly possible that the learned filters have large overlaps if we do not impose any constraints on them. In our algorithm, we aim to train filters to capture the main frequency information of their input and to assign different weights to the captured information depending on how much it contributes to label prediction. Therefore, it is natural to assume that if the inputs of the filters are different, the filters are less likely to overlap. Disentanglement makes the "input" different and more adaptable to each filter. Disentanglement reduces the model complexity. With disentanglement, we divide the node features into several subsets by learnable masking, or map them into several low-dimensional spaces through linear transformations, which lowers the dimension of the corresponding features for each filter and at the same time makes the input features fit each filter better. A.3.2 MOTIVATION OF T-DEMUF. As we clarified in Section 5.1, our implementation of disentanglement is not random masking but learnable masking leveraging GUMBEL-SOFTMAX. These learnable maskings disentangle the node features into several subsets of features. With the constraint L(X_{k−1}, H_k) = ‖X_{k−1} ⊙ M′_k − H_k‖²₂, for each subset of features we train a band-pass filter to pass their main frequency band. As shown in Figure 2, this constraint guides the filter learning process, which helps reduce the overlap of the filters' frequency responses and thus reduces the number of graph filters needed. At the same time, to minimize the supervised loss, the maskings are trained to disentangle features whose frequency distributions are similar to those of the labels and to assign them a higher weight; captured feature information that is not useful is assigned a lower weight. The weights are also learnable. A.3.3 HOW CAN OUR FILTER BANK SELECTION BE DATA-DRIVEN? In our algorithm, although the form of our band-pass filter g_b(ϵ) is predefined, its parameters, including ϵ and the weight ω, are learned from the specific graph. Moreover, the parameters of our feature disentanglement blocks (the linear transformations and the learnable masking) are also learned from data, which in turn affects the learning of the filter bank. Therefore, our filter bank selection is data-driven. A.4 MORE EXPERIMENTAL RESULTS. A.4.1 ADDITIONAL BASELINE - GPRGNN. Here we compare our models with a related baseline, GPRGNN (Chien et al., 2021). It is worth noting that the splitting in our paper is different from that in GPRGNN. In GPRGNN, the training set consists of the same number of nodes from each class, while we just randomly choose our training data. We find that the WebKB datasets are sensitive to the way of splitting due to their uneven distribution of labels (the class sizes are: Cornell and Texas: 33/1/18/101/30, Wisconsin: 10/70/118/32/21). Although GPRGNN's splitting is likely better for model training, our model still outperforms it on the Wikipedia datasets, i.e., Chameleon and Squirrel. In GPRGNN's setting, the performance of MLP on WebKB is comparable to GPRGNN's, while in our setting, our proposed models' performance is much better than MLP's. We also test T-DEMUF on Actor, Cornell, and Texas following the splitting of GPRGNN. As shown in Table A.4.1, on most of the benchmarks, our model performs better than GPRGNN. A.4.2 ABLATION STUDY. To show the advantage of using disentanglement, we provide an ablation study on five benchmarks. Here, we propose two ablation models based on P-DEMUF (the filtering constraint discussed in A.3.2 is sketched below).
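As referenced above, the T-DEMUF constraint L(X_{k−1}, H_k) = ‖X_{k−1} ⊙ M′_k − H_k‖²₂ can be sketched in a few lines. The graph, mask, and parameters here are illustrative assumptions; in T-DEMUF the mask and ϵ_k are learned and the constraint is added to the training loss.

```python
import numpy as np

rng = np.random.default_rng(6)
n, f = 24, 8
A = rng.integers(0, 2, (n, n)); A = np.triu(A, 1); A = A + A.T + np.eye(n)
d = A.sum(1)
A_sym = A / np.sqrt(np.outer(d, d))
M = A_sym - 0.3 * np.eye(n)                      # g_b(0.3) = I - M^2 / (1+0.3)^2
G = np.eye(n) - M @ M / 1.3**2

X_prev = rng.standard_normal((n, f))             # X_{k-1}
mask_prime = (rng.random(f) < 0.5).astype(float) # stand-in for the learned M'_k
masked = X_prev * mask_prime                     # X_{k-1} ⊙ M'_k
H_k = G @ (G @ masked)                           # h_k = 2 band-pass layers

constraint = np.sum((masked - H_k) ** 2)         # small iff the filter's response
print(constraint)                                # covers the masked features' main band
```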
Recall that the disentanglement block of P-DEMUF consists of learnable masking and linear transformations; we design our ablation models by removing the masking and the linear transformation components, respectively. Also, for a fair and intuitive comparison, we simply fix the number of filters to 2. The results shown in Table A.4.2 validate that if we remove the disentanglement blocks of P-DEMUF, the results become worse on most of the benchmarks. B PROOF OF PROPOSITION Here, we provide the proof of Proposition 3.2. Proof. Since x = Σ_{i=0}^{n−1} α_i u_i, where u_i is the i-th unit eigenvector of L̃ and λ_i^n = u_i^⊤ L̃^n u_i, we have E[f^n] = Σ_{i=0}^{n−1} P(f = λ_i) λ_i^n = (Σ_i (α_i u_i)^⊤ L̃^n (α_i u_i)) / (Σ_i α_i²) = (x^⊤ L̃^n x)/(x^⊤ x) = (x^⊤ (I − Ã)^n x)/(x^⊤ x). (13) Below is the proof of Proposition 3.1. Proof. For P = D^{−1}A and Ã = D^{−1/2} A D^{−1/2}, and Π, Π̃ defined by Definition 3.1, the inequality can be represented as (RΠ^k + (Π^k)^⊤ R)_{lm} ≥ 2(R^{1/2} Π̃^k R^{1/2})_{lm}, which is equivalent to proving y_m^⊤ (P^k + (P^k)^⊤) y_l ≥ 2 y_m^⊤ Ã^k y_l. Noting that (P^k)^⊤ = D P^k D^{−1} and Ã^k = D^{1/2} P^k D^{−1/2}, with B = P^k + (P^k)^⊤ we have B_ij = P^k_ij + (d_i/d_j) P^k_ij ≥ 2 √(d_i/d_j) P^k_ij = 2 Ã^k_ij. Therefore, y_m^⊤ (B − 2Ã^k) y_l ≥ 0. Letting m = l, we get π^k_l ≥ π̃^k_l. To prove part b, we utilize Lemma B.1. Since g(Ã) is symmetric, we have (g̃²(Π̃))_ll = (y^⊤ (g(Ã))² y)/(y^⊤ y) ≥ ((y^⊤ g(Ã) y)/(y^⊤ y))² = ((g̃(Π̃))_ll)². Lemma B.1. Let B ∈ R^{n×n} be a symmetric matrix. For all y ∈ R^n, we have (y^⊤ B² y)/(y^⊤ y) ≥ ((y^⊤ B y)/(y^⊤ y))². Proof. Since B is symmetric, we have B = UΛU^⊤, where U is the matrix of unit eigenvectors of B. As in the proof of Proposition 3.2, we obtain (y^⊤ B² y)/(y^⊤ y) = (Σ_i (α_i λ_i)²)/(Σ_i α_i²) and ((y^⊤ B y)/(y^⊤ y))² = (Σ_i α_i² λ_i)² / (Σ_i α_i²)². From Hölder's inequality, we have (Σ_i (α_i λ_i)²)(Σ_i α_i²) ≥ (Σ_i α_i² λ_i)². Therefore, (Σ_i (α_i λ_i)²)/(Σ_i α_i²) ≥ (Σ_i α_i² λ_i)²/(Σ_i α_i²)².
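The inequality of Proposition 3.1(a) with m = l, i.e., π^k_l ≥ π̃^k_l, can also be verified numerically. The sketch below compares the diagonals of Π^k = R^{−1}Y^⊤P^kY and Π̃^k = R^{−1/2}Y^⊤Ã^kY R^{−1/2} on a random labeled graph; the graph, labels, and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 30, 3
A = rng.integers(0, 2, (n, n)); A = np.triu(A, 1); A = A + A.T + np.eye(n)
d = A.sum(1)
P = A / d[:, None]                               # random-walk transition matrix D^{-1} A
A_sym = A / np.sqrt(np.outer(d, d))              # D^{-1/2} A D^{-1/2}

labels = np.arange(n) % K                        # every class non-empty
Y = np.eye(K)[labels]                            # one-hot label matrix
R_diag = Y.sum(0)                                # class sizes

for k in (1, 2, 4):
    Pi = (Y.T @ np.linalg.matrix_power(P, k) @ Y) / R_diag[:, None]
    Pi_sym = (Y.T @ np.linalg.matrix_power(A_sym, k) @ Y
              / np.sqrt(np.outer(R_diag, R_diag)))
    assert np.all(np.diag(Pi) >= np.diag(Pi_sym) - 1e-12)   # pi_l^k >= pi~_l^k
```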
1. What is the focus of the paper regarding graph filters and their applications in graph neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in understanding homophily and its relationship to random walk transition matrices?
3. How does the reviewer assess the clarity and quality of the paper's content, including the proposition and definition presented?
4. Are there any concerns regarding the novelty of the paper's contributions, especially compared to prior works such as "Stability Properties of Graph Neural Networks"?
5. Do you have any questions or suggestions regarding the paper's framework, methodology, or conclusions?
Summary Of The Paper Review
Summary Of The Paper
This paper studies the applications of graph filters in graph neural networks, and attempts to understand the links between graph spectral analysis, random walks, and homophily. The authors introduce "interaction probability" as a metric for understanding homophily and its relationship to random walk transition matrices. In this framework, the authors claim to understand a notion of graph information, as well as an understanding of how the performance of graph filters depends on both graph structure and input signals, with applications to graph neural networks.

Review
This paper suffers on many fronts. I wish I could give a more thorough review of this work, but the writing is so poor that assessing the merits of this work would be quite difficult. The authors make many statements with very little explanation. For instance, Proposition 3.1 is largely a jumble of symbols with very little surrounding discussion: it is not explained why I should remotely care about this result. Section 3.4 states things that are largely obvious from a graph signal processing perspective. Definition 4.1 is a strange way to look at prediction error: why are you using a Frobenius norm to evaluate the error of a node classification problem? The conclusions reached in Section 4.1 are quite obvious and well understood in the broad literature. Of course filters should "match the true signal"; this is the most basic fact from signal processing! The conclusion that graph neural networks should use a filter bank rather than a single filter is nothing new. See, for instance, the work "Stability Properties of Graph Neural Networks" by Gama et al. (2020). Overall, the contributions of this paper are extremely lacking. And even if there were a notable contribution in this work, the writing quality obscures it too much for it to be at all useful. I would also like to point out that the authors failed to cite even the most widely-known introductory papers for graph signal processing, such as that of Ortega et al. (2018). The failure of the authors to do so indicates a severe lack of familiarity with the graph signal processing literature, which this paper claims to contribute to.
ICLR
Title Set Functions for Time Series Abstract Despite the eminent successes of deep neural networks, many architectures are often hard to transfer to irregularly-sampled and asynchronous time series that occur in many real-world datasets, such as healthcare applications. This paper proposes a novel framework for classifying irregularly sampled time series with unaligned measurements, focusing on high scalability and data efficiency. Our method SEFT (Set Functions for Time Series) is based on recent advances in differentiable set function learning, is extremely parallelizable, and scales well to very large datasets and online monitoring scenarios. We extensively compare our method to competitors on multiple healthcare time series datasets and show that it performs competitively whilst significantly reducing runtime. 1 INTRODUCTION With increasing digitalization, measurements over extensive time periods are becoming ubiquitous. Nevertheless, in many application domains, in particular healthcare (Yadav et al., 2018), measurements might not necessarily be observed at a regular rate or could be misaligned. Moreover, the presence or absence of a measurement and its observation frequency may carry information of its own (Little & Rubin, 2014), such that imputing the missing values is not always desired. While some algorithms can be readily applied to datasets with varying length, these methods usually assume regular sampling of the data and/or require the measurements across modalities to be aligned/synchronized, preventing their application to the aforementioned settings. Existing approaches for unaligned measurements, by contrast, typically rely on imputation to obtain a regularly-sampled version of a dataset for classification. Learning a suitable imputation scheme, however, requires understanding the underlying dynamics of a system; this task is significantly more complicated and not necessarily required when classification is the main goal. Furthermore, even though a decoupled imputation scheme followed by classification is generally more scalable, it may lose information (in terms of "missingness patterns") that could be crucial for prediction tasks. In addition, the fact that decoupled schemes perform worse than methods that are trained end-to-end has been empirically demonstrated by Li & Marlin (2016). Approaches that jointly optimize both tasks also add a large computational overhead, thus suffering from poor scalability or high memory requirements.
Our method is motivated by the understanding that, while RNNs and similar architectures are well suited for capturing and modelling the dynamics of a time series and thus excel at tasks such as forecasting, retaining the order of an input sequence can even be a disadvantage in classification scenarios (Vinyals et al., 2015). We show that by relaxing the condition that a sequence must be processed in order, we can naturally derive an architecture that directly accounts for (i) irregular sampling and (ii) unsynchronized measurements. Our method, SEFT (Set Functions for Time Series), extends recent advances in set function learning to irregularly-sampled time series classification tasks, yields state-of-the-art performance, is highly scalable, and improves over current approaches by almost an order of magnitude in terms of runtime. With SEFT, we propose to rephrase the problem of classifying time series as classifying a set of observations. We show how set functions can be exploited to learn classifiers that are naturally applicable to unaligned and irregularly sampled time series, leading to state-of-the-art performance in irregularly-sampled time series classification tasks. Our approach can be interpreted as learning dataset-specific summary statistics of time series which are optimized to separate instances by class. Furthermore, our method is highly parallelizable and can be readily extended to an online monitoring setup with up to thousands of patients. 2 RELATED WORK This paper focuses on classifying time series with irregular sampling and potentially unaligned measurements. We briefly discuss recent work in this field; all approaches can be broadly grouped into the following three categories. Irregular sampling as missing data While the problem of supervised classification in the presence of missing data is closely related to irregular sampling of time series, there are some core differences. Missing data is usually defined with respect to a number of features that could be observed, whereas time series themselves can have different lengths and a "typical" number of observed values might not exist. Generally, an irregularly-sampled time series can be converted into a missing data problem by discretizing the time axis into non-overlapping intervals and declaring intervals in which no data was sampled as missing. This approach is followed by Marlin et al. (2012), where a Gaussian Mixture Model was used to perform semi-supervised clustering of electronic health records. Similarly, Lipton et al. (2016) discretize the time series into intervals, aggregate multiple measurements within an interval, and add missingness indicators to the input of a Recurrent Neural Network. By contrast, Che et al. (2018) present several variants of the Gated Recurrent Unit (GRU) combined with imputation schemes. Most prominently, the GRU model was extended to include a decay term (GRU-D), such that the last observed value is decayed to the empirical mean of the time series via a learnable decay term. While these approaches are applicable to irregularly-sampled data, they either rely on imputation schemes or on empirical global estimates of the data distribution (our method, by contrast, requires neither), without directly exploiting the global structure of the time series. Frameworks supporting irregular sampling Some frameworks support missing data. For example, Lu et al.
(2008) directly defined a kernel on irregularly-sampled time series, permitting subsequent classification and regression with kernel-based classifiers or regression schemes. Furthermore, Gaussian Processes (Williams & Rasmussen, 2006) constitute a common probabilistic model for time series; they directly permit modelling of continuous-time data using mean and covariance functions. Along these lines, Li & Marlin (2015) derived a kernel on Gaussian Process posteriors, allowing the comparison and classification of irregularly-sampled time series using kernel-based classifiers. Nevertheless, all of these approaches still rely on separate tuning/training of the imputation method and the classifier, so that structures supporting the classification could potentially be missed in the imputation step. An emerging line of research employs Hawkes processes (Hawkes, 1971; Liniger, 2009), i.e. a specific class of self-exciting point processes, for time series modelling and forecasting (Mei & Eisner, 2017; Yang et al., 2017; Xiao et al., 2017). While Hawkes processes exhibit extraordinary performance in these domains, there is no standardised way of using them for classification. Previous work (Lukasik et al., 2016) trains multiple Hawkes processes (one for each label) and classifies a time series by assigning it the label that maximises the respective likelihood function. Since this approach does not scale to our datasets, we were unable to perform a fair comparison. We conjecture that further research will be required to make Hawkes processes applicable to general time series classification scenarios.

End-to-end learning of imputation schemes Methods of this type are composed of two modules with separate responsibilities, namely an imputation scheme and a classifier, where both components are trained discriminatively and end-to-end using gradient-based training. Recently, Li & Marlin (2016) proposed the Gaussian Process Adapters (GP Adapters) framework, where the parameters of a Gaussian Process kernel are trained alongside a classifier. The Gaussian Process gives rise to a fixed-size representation of the irregularly-sampled time series, making it possible to apply any differentiable classification architecture. This approach was further extended to multivariate time series by Futoma et al. (2017) using Multi-task Gaussian Processes (MGPs) (Bonilla et al., 2008), which allow correlations between the imputed channels. Moreover, Futoma et al. (2017) made the approach more compatible with time series of different lengths by applying a Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) classifier. Motivated by the limited scalability of approaches based on GP Adapters, Shukla & Marlin (2019) suggest an alternative imputation scheme, the interpolation prediction networks, which applies multiple semi-parametric interpolation schemes to obtain a regularly-sampled time series representation. The parameters of the interpolation network are trained with the classifier in an end-to-end setup.

3 PROPOSED METHOD

Our paper focuses on the classification of irregularly sampled and unaligned time series. We first define the required terms before describing our model.

3.1 NOTATION & REQUIREMENTS

Definition 1 (Time series). We describe a time series of an instance i as a set Si of M := len(Si) observations sj such that Si := {s1, . . . , sM}.
We assume each observation sj to be represented as a tuple (tj, zj, mj), consisting of a time tj ∈ R+, an observed value zj ∈ R, and a modality indicator mj ∈ {1, . . . , D}, where D represents the dimensionality of the time series. We write Ω ⊆ R+ × R × N+ to denote the domain of observations. An entire time series can thus be represented as

$$S_i := \{(t_1, z_1, m_1), \ldots, (t_M, z_M, m_M)\}, \qquad (1)$$

where for notational convenience we omitted the index i. We leave this definition very general on purpose, allowing the length of each time series (comprising all channels, such as “heart rate”, “respiratory rate”, etc. of one instance) to differ, since our models are capable of handling this. Likewise, we neither enforce nor expect all time series to be synchronized, i.e. sampled at the same time; rather, we permit unaligned or non-synchronized observations in the sense of not having to observe all modalities at each time point. Time series are collected in a dataset D.

Definition 2 (Dataset). We consider a dataset D to contain N time series. Elements of D are tuples, i.e. D := {(S1, y1), . . . , (SN, yN)}, where Si denotes the ith time series and yi ∈ {1, . . . , C} its associated class label.

Figure 1 gives a high-level overview of our method, including the individual steps required to perform classification. To get a more intuitive grasp of these definitions, we briefly illustrate our time series notation with an example. Let instance i be an in-hospital patient, while the time series represent measurements of two channels of vital parameters during a hospital stay, namely heart rate (HR) and mean arterial blood pressure (MAP). We enumerate those channels as modalities 1 and 2. Counting from admission time, a HR of 60 and 65 beats per minute was measured after 0.5 h and 3.0 h, respectively, whereas MAP values of 80, 85, and 87 mmHg were observed after 0.5 h, 1.7 h, and 2.5 h. According to Definition 1, the time series is thus represented as Si = {(0.5, 60, 1), (3, 65, 1), (0.5, 80, 2), (1.7, 85, 2), (2.5, 87, 2)}. In this example, observations are ordered by modality to increase readability; in practice, we are dealing with unordered sets.

Definition 3 (Non-synchronized time series). We call a D-dimensional time series non-synchronized if there is at least one time point tj ∈ R+ at which at least one modality is not observed, i.e. if there exists tj ∈ R+ such that |{(tk, zk, mk) | tk = tj}| ≠ D. Furthermore, we assume that no two measurements of the same modality mk occur at the same time, i.e. ti ≠ tj for i ≠ j has to be satisfied for all measurements in mk. This assumption is not required for technical reasons but for consistency; it also makes it possible to interpret the results later on.

To summarize our generic setup, we do not require M, the number of observations per time series, to be the same, i.e. len(Si) ≠ len(Sj) for i ≠ j is permitted, nor do we assume that the time points and modalities of the observations are the same across time series. This setting is common in biomedical time series, for example. Since typical machine learning algorithms are designed to operate on data of a fixed dimension, novel approaches to this non-trivial problem are required.
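To make this notation concrete in code, the following minimal Python sketch encodes the HR/MAP example above as an unordered set of (time, value, modality) tuples; the helper names are ours and purely illustrative.

```python
# Minimal sketch: a time series as an unordered set of observations,
# following Definition 1. Names are illustrative only.

# Each observation is a tuple (t_j, z_j, m_j):
# time in hours, measured value, and modality indicator.
HR, MAP = 1, 2  # modality indicators for heart rate and blood pressure

S_i = {
    (0.5, 60.0, HR),   # HR of 60 bpm after 0.5 h
    (3.0, 65.0, HR),   # HR of 65 bpm after 3.0 h
    (0.5, 80.0, MAP),  # MAP of 80 mmHg after 0.5 h
    (1.7, 85.0, MAP),  # MAP of 85 mmHg after 1.7 h
    (2.5, 87.0, MAP),  # MAP of 87 mmHg after 2.5 h
}

# The set is unordered and unaligned: at t = 1.7 h only MAP is observed,
# so the series is non-synchronized in the sense of Definition 3.
M = len(S_i)                              # number of observations, M = 5
times = sorted({t for (t, _, _) in S_i})  # distinct observation times
print(M, times)                           # 5 [0.5, 1.7, 2.5, 3.0]
```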
3.2 OUR MODEL

In the following, we describe an approach inspired by differentiable learning of functions that operate on sets (Zaheer et al., 2017; Wagstaff et al., 2019). We phrase the problem of classifying time series on irregular grids as learning a function f on a set of arbitrarily many time series observations following Definition 1, i.e. S = {(t1, z1, m1), . . . , (tM, zM, mM)}, such that f : S → R^C, where S represents a generic time series of arbitrary cardinality and R^C corresponds to the logits of the C classes in the dataset. As we previously discussed, we interpret each time series as an unordered set of measurements, where all information is conserved because the observation time is included for each set element. Specifically, we define f to be a set function, i.e. a function that operates on a set and thus has to be invariant to the ordering of the elements in the set. Multiple architectures are applicable to constructing set functions, such as Transformers (Lee et al., 2019; Vaswani et al., 2017) or Deep Sets (Zaheer et al., 2017). Since Transformers suffered from lower generalization performance in our setting in preliminary experiments (see Section 4.4 for a quantification), we base this work on the framework of Zaheer et al. (2017). Intuitively, this can be seen as computing multivariate dataset-specific summary statistics, which are optimized to maximize classification performance. Thus, we sum-decompose the set function f into the form

$$f(S) = g\left(\frac{1}{|S|} \sum_{s_j \in S} h(s_j)\right) \qquad (2)$$

where h : Ω → R^d and g : R^d → R^C are neural networks, d ∈ N+ determines the dimensionality of the latent representation, and sj represents a single observation of the time series S. We can view the averaged representations 1/|S| Σ_{sj∈S} h(sj) in general as a dataset-specific summary statistic learned to best distinguish the class labels. Equation 2 also implies the beneficial scalability properties of our approach: each embedding can be calculated independently of the others; hence, the constant computational cost of passing a single observation through the function h is scaled by the number of observations, resulting in a runtime of O(M) for a time series of length M. Recently, Wagstaff et al. (2019) derived requirements for a practical universal function representation of sum-decomposable set functions, i.e. the requirements necessary for a sum-decomposable function to represent an arbitrary set function, given that h and g are arbitrarily expressive. In particular, they show that a universal function representation can only be guaranteed provided that d ≥ max_i len(Si) is satisfied. During hyperparameter search we thus independently sample the dimensionality of the aggregation space, and allow it to be on the order of the number of observations that are to be expected in the dataset. Further, we explored the utilization of max, sum, and mean as alternative aggregation functions, inspired by Zaheer et al. (2017); Garnelo et al. (2018).

Intuition Our method can be connected to Takens's embedding theorem (Takens, 1981) for dynamical systems: we also observe a set of samples from some unknown (but deterministic) dynamical process; provided the dimensionality of our architecture is sufficiently large (the theorem requires d > dB, where dB refers to the fractal box counting dimension (Liebovitch & Toth, 1989), which is typically well below the size of typical neural network architectures), we are capable of reconstructing the system up to diffeomorphism. The crucial difference is that we do not have to construct a time-delay embedding; rather, we let the network learn an embedding that is suitable for classification.
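As a minimal sketch of the sum-decomposition in Equation 2 (not the authors' implementation; the layer widths, initialization, and forward pass below are simplified assumptions), h and g can be written as small NumPy MLPs:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(d_in, d_hidden, d_out):
    """Random parameters for a one-hidden-layer MLP (illustrative sizes)."""
    return (rng.normal(0, 0.1, (d_in, d_hidden)),
            rng.normal(0, 0.1, (d_hidden, d_out)))

def mlp(x, params):
    W1, W2 = params
    return np.maximum(x @ W1, 0) @ W2  # ReLU hidden layer, linear output

d, C = 64, 2                       # latent width and number of classes
h_params = mlp_params(3, 128, d)   # h: Omega -> R^d, input (t, z, m)
g_params = mlp_params(d, 128, C)   # g: R^d -> R^C (class logits)

def f(S):
    """Set function of Equation 2: g applied to the mean of h over the set."""
    H = np.stack([mlp(np.asarray(s, dtype=float), h_params) for s in S])
    return mlp(H.mean(axis=0), g_params)

S = [(0.5, 60.0, 1), (3.0, 65.0, 1), (0.5, 80.0, 2)]
logits = f(S)
# Permutation invariance: any reordering of S yields identical logits.
assert np.allclose(logits, f(S[::-1]))
```

Because each h(s_j) is computed independently before the mean, the forward pass is trivially parallelizable over observations, which is the source of the O(M) runtime noted above.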
Time encoding In order to represent the time point of an observation on a normalized scale, we employ a variant of positional encodings, as introduced by Vaswani et al. (2017). Preliminary results indicated that this encoding scheme reduces the sensitivity of a model towards initialization and training hyperparameters. Specifically, the time encoding converts the one-dimensional time axis into a multi-dimensional input by passing the time t of each observation through multiple sine and cosine functions of varying frequencies. Given a dimensionality τ ∈ N+ of the time encoding, we refer to the encoded position as x ∈ R^τ, where

$$x_{2k}(t) := \sin\left(\frac{t}{\mathit{max\_ts}^{2k/\tau}}\right) \qquad (3)$$

$$x_{2k+1}(t) := \cos\left(\frac{t}{\mathit{max\_ts}^{2k/\tau}}\right) \qquad (4)$$

with k ∈ {0, . . . , τ/2} and max_ts representing the maximal time scale that is expected in the data. Intuitively, we select the wavelengths using a geometric progression from 2π to max_ts · 2π, and treat the number of steps and the maximum timescale max_ts as hyperparameters of the model. Time encodings were used for all experiments, such that an observation is represented as sj = (x(tj), zj, mj).

Loss function If not mentioned otherwise, we choose h and g in Equation 2 to be multilayer perceptrons, parametrized by weights θ and ψ, respectively. We thus denote these neural networks by hθ and gψ; their parameters are shared across all instances per dataset. In our training setup, we follow Zaheer et al. (2017) and apply the devised set function to the complete time series, i.e. to the set of all observations for each time series. Overall, we optimize a loss function that is defined as

$$\mathcal{L}(\theta, \psi) := \mathbb{E}_{(S, y) \in \mathcal{D}}\left[\ell\left(y;\; g_\psi\left(\frac{1}{|S|} \sum_{s_j \in S} h_\theta(s_j)\right)\right)\right] \qquad (5)$$

where ℓ(·) represents a task-specific loss function. In our setup, we utilize either the binary cross-entropy in combination with a sigmoid activation function in the last layer for binary classification and multi-label classification tasks, or the categorical cross-entropy in combination with a softmax activation function in the last layer for multi-class classification tasks.
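For concreteness, a NumPy sketch of the time encoding in Equations 3 and 4 follows; the values of τ and max_ts here are arbitrary choices, not the tuned ones, and we take k = 0, . . . , τ/2 − 1 so that x has exactly τ entries.

```python
import numpy as np

def time_encoding(t, tau=8, max_ts=100.0):
    """Equations 3-4: map a scalar time t to x in R^tau via sines/cosines
    with wavelengths in geometric progression from 2*pi to ~max_ts * 2*pi."""
    k = np.arange(tau // 2)
    freq = 1.0 / max_ts ** (2 * k / tau)  # one frequency per (sin, cos) pair
    x = np.empty(tau)
    x[0::2] = np.sin(t * freq)            # x_{2k}(t)
    x[1::2] = np.cos(t * freq)            # x_{2k+1}(t)
    return x

# An observation (t, z, m) becomes (x(t), z, m), e.g. for t = 1.7 h:
s_j = (time_encoding(1.7), 85.0, 2)
```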
3.3 ATTENTION-BASED AGGREGATION

So far, our method permits encoding sets of arbitrary sizes into a fixed-size representation. For increasingly large set sizes, however, many irrelevant observations could influence the result of the set function. The mean aggregation function is particularly susceptible to this because the influence of an observation on the embedding shrinks proportionally to the size of the set. We thus suggest using a weighted mean in order to allow the model to decide which observations are relevant and which should be considered irrelevant. This is equivalent to computing an attention a(S, sj) over the set input elements and subsequently computing the sum over all elements in the set. Our approach is based on scaled dot-product attention with multiple heads i ∈ {1, . . . , m} in order to be able to cover different aspects of the aggregated set (since we are dealing only with a single instance, i.e. one time series, in this section, we use i and j to denote a head and an observation, respectively). We define a(·), i.e. the attention weight function of an individual time series, to depend on the overall set of observations. This is achieved by computing an embedding of the set elements using a smaller set function f′, and projecting the concatenation of the set representation and the individual set elements into a d-dimensional space. Specifically, we have K_{j,i} = [f′(S), s_j]^T W_i, where W_i ∈ R^{(im(f′) + |s_j|) × d} and K ∈ R^{|S| × d}. Furthermore, we define a matrix of query points Q ∈ R^{m × d}, which allows the model to summarize different aspects of the dataset via

$$e_{j,i} = \frac{K_{j,i} \cdot Q_i}{\sqrt{d}} \quad \text{and} \quad a_{j,i} = \frac{\exp(e_{j,i})}{\sum_j \exp(e_{j,i})},$$

where a_{j,i} represents the amount of attention that head i gives to set element j. The head-specific row Q_i of the query matrix Q allows a head to focus on individual aspects (such as the distribution of one or multiple modalities) of a time series. For each head, we multiply the set element embeddings computed via the set function f with the attentions derived for the individual instances, i.e. r_i = Σ_j a_{j,i} f(s_j). The computed representations are concatenated and passed to the aggregation network gψ as in a regular set function, i.e. r* = [r_1, . . . , r_m]. In our setup, we initialize Q with zeros, such that at the beginning of training, the attention mechanism is equivalent to computing the unweighted mean over the set elements. Overall, this aggregation function is similar to Transformers (Vaswani et al., 2017), but differs from them in a few key aspects. Standard Transformer blocks would use the information from all set elements in order to compute the embedding of an individual set element, leading to a runtime and space complexity of O(n²). In contrast, our approach computes the embeddings of set elements independently, leading to a lower runtime and memory complexity of O(n). Further, we observed that computing embeddings with information from other set elements (as the Transformer does) actually decreases generalization performance (see Table 1 for details).
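The aggregation just described fits in a few lines; the sketch below is a simplified stand-in (random projections, the mean as a placeholder for the smaller set function f′, and hypothetical dimensions), intended only to make the shapes and the zero-initialization of Q explicit:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(e, axis=0):
    e = e - e.max(axis=axis, keepdims=True)
    return np.exp(e) / np.exp(e).sum(axis=axis, keepdims=True)

M, p = 5, 10          # number of observations, per-observation feature width
d, m = 16, 4          # key/query dimensionality and number of heads
S = rng.normal(size=(M, p))            # stand-in embedded observations s_j

f_prime = S.mean(axis=0)               # stand-in for the smaller set function f'
W = rng.normal(0, 0.1, size=(m, p + p, d))
Q = np.zeros((m, d))                   # zero init: starts as unweighted mean

# Keys: concatenate the set summary with each element, project per head.
K = np.stack([np.concatenate([np.broadcast_to(f_prime, (M, p)), S], axis=1) @ W[i]
              for i in range(m)], axis=1)      # shape (M, m, d)
e = (K * Q[None, :, :]).sum(-1) / np.sqrt(d)   # e_{j,i}, shape (M, m)
a = softmax(e, axis=0)                         # attention a_{j,i} over the set

f_S = S                                        # stand-in element embeddings f(s_j)
r = np.stack([(a[:, i:i + 1] * f_S).sum(0) for i in range(m)])  # r_i per head
r_star = r.reshape(-1)                         # concatenated representation r*
```

With Q initialized to zero, all e_{j,i} vanish and each head reduces to the unweighted mean over the set elements, exactly as stated above.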
4 EXPERIMENTS

We executed all experiments and implementations in a unified code base, which we also make available to the community (https://osf.io/2hg74/?view_only=8d45fdf237954948a02f1e2bf701cdf1). While some of the datasets used subsequently have access restrictions, anybody can gain access after satisfying the defined requirements. This ensures the reproducibility of our results. Please consult Appendix A.2 for further details.

4.1 DATASETS

In order to benchmark the proposed method, we selected 4 datasets with irregularly-sampled and non-synchronized measurements.

Healing MNIST The H-MNIST dataset was introduced by Krishnan et al. (2015) in order to simulate characteristics which typically occur in medical time series. In our setup, we use a variant of this dataset. Every instance of the dataset contains 10 frames, derived from a single instance of the MNIST dataset, where the digit is rotated according to an angle uniformly sampled between −90° and 90°. Furthermore, 3 randomly-selected consecutive frames are augmented by a square artefact in the top left corner of the image in order to indicate seasonality in the time series. Finally, 60 % of the data points are randomly discarded in order to yield a final high-dimensional irregularly-sampled time series with non-synchronized measurements. Using these settings, each instance has on average 3,136 observations.

MIMIC-III Tasks MIMIC-III (Johnson et al., 2016) is a widely-used, freely-accessible dataset containing around 50,000 distinct ICU stays. The median length of stay is 2.1 d, and a wide range of physiological measurements (e.g. arterial blood pressure, respiration rate, heart rate) are recorded with a resolution of 1 h. Furthermore, laboratory test results, collected at irregular time intervals, are available. Recently, Harutyunyan et al. (2019) defined a set of machine learning tasks, labels, and benchmarks using a subset of the MIMIC-III dataset. We trained and evaluated our method and competing methods on the binary mortality prediction task (M3-Mortality) and on the multiclass problem of phenotype classification (M3-Phenotyping), while applying additional filtering described in Appendix A.1. The goal of the mortality prediction task is to predict whether a patient will die during his/her hospital stay using only data from the first 48 hours of the ICU stay. This dataset contains around 21,000 stays, of which approximately 10 % result in death. The phenotype classification task consists of 40,000 patients, each of which can suffer from a multitude of 25 acute care conditions.

Physionet Mortality Prediction Challenge The 2012 Physionet challenge dataset (Goldberger et al., 2000), which we abbreviate P-Mortality, contains 12,000 ICU stays, each of which lasts at least 48 h. For each stay, a set of general descriptors (such as gender, age, height, weight) was collected at admission time. Depending on the course of the stay and patient status, up to 37 time series variables were measured (e.g. blood pressure, lactate, respiration rate, temperature). While some modalities might be measured in regular time intervals (e.g. hourly or daily), some are only collected when required. Not all variables are available for each stay. The goal of the challenge was to predict if—and with which certainty—a patient will die during the hospital stay. The training set consists of 8,000 stays, while the testing set comprises 4,000 ICU visits. Both datasets are similarly imbalanced, with a prevalence of around 14 %. For simplicity, the general descriptors (such as age and weight) were included as time points with a single observation at the beginning of the stay. This treatment is similar to the approach by Harutyunyan et al. (2019) in the MIMIC-III benchmarking datasets. Please refer to Table A.1, Table A.2, and Table A.3 in the appendix for a more detailed enumeration of sample sizes and label distributions. The total number of samples may slightly deviate from the originally published splits, as time series of excessive length prevented fitting some methods in reasonable time and were therefore excluded.

4.2 COMPETITOR METHODS

GRU-simple GRU-SIMPLE (Che et al., 2018) augments the input at time t of a Gated Recurrent Unit RNN with a measurement mask m_t^d and a matrix δ_t, which contains the time since the last measurement of the corresponding modality d, such that

$$\delta_t^d = \begin{cases} s_t - s_{t-1} + \delta_{t-1}^d & t > 1,\ m_{t-1}^d = 0 \\ s_t - s_{t-1} & t > 1,\ m_{t-1}^d = 1 \\ 0 & t = 0 \end{cases}$$

where s_t represents the time associated with time step t.

Phased-LSTM The PHASED-LSTM (Neil et al., 2016) introduced a biologically-inspired, time-dependent gating mechanism which regulates access to the hidden and cell state of an LSTM cell (Hochreiter & Schmidhuber, 1997). While this allows the network to handle event-based sequences with irregularly spaced observations, the approach does not support unaligned measurements. In order to still provide the architecture with all relevant information, we augment the input in a similar fashion as described for the GRU-SIMPLE approach.

GRU-D GRU-D or GRU-Decay (Che et al., 2018) contains modifications to the GRU RNN cell, allowing it to decay past observations to the mean imputation of a modality using a learnable decay rate. By additionally providing the measurement masks as an input, the recurrent neural network learns how fast to decay the last fed-in value back to the mean imputation of the missing modality.
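As an illustration of the δ recursion used by GRU-SIMPLE (and, in augmented form, by the other mask-based baselines), the following sketch computes the time since the last measurement for a single modality; it is our reading of the formula above, not the reference implementation:

```python
import numpy as np

def time_since_last_measurement(s, m):
    """delta_t for one modality d, following the GRU-SIMPLE recursion:
    s[t] is the time of step t, m[t] the measurement mask of the modality."""
    T = len(s)
    delta = np.zeros(T)
    for t in range(1, T):
        gap = s[t] - s[t - 1]
        # If the modality was unobserved at t-1, accumulate the previous delta.
        delta[t] = gap if m[t - 1] == 1 else gap + delta[t - 1]
    return delta

s = np.array([0.0, 0.5, 1.7, 2.5, 3.0])   # observation times
m = np.array([1, 0, 0, 1, 0])             # mask for one modality
print(time_since_last_measurement(s, m))  # [0.  0.5 1.7 2.5 0.5]
```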
Interpolation Prediction Networks IP-NETWORKS (Shukla & Marlin, 2019) apply multiple semi-parametric interpolation schemes to irregularly-sampled time series to obtain regularly-sampled representations that cover long-term trends, transients, and also sampling information. The method combines a univariate interpolation step with a subsequent multivariate interpolation; the parameters of the interpolation network are trained with the classifier in an end-to-end fashion.

Transformer In the TRANSFORMER architecture (Vaswani et al., 2017), the elements of a sequence are encoded simultaneously and information between sequence elements is captured using Multi-Head-Attention blocks. In our case, an individual sequence element corresponds to all measurements available at a given time point, augmented with a measurement indicator. Transformers are normally used for sequence-to-sequence modelling tasks; in our setup, they were adapted to classification tasks by mean-aggregating the final representation. This representation is then fed into a one-layer MLP to predict logits for the individual classes.

4.3 EXPERIMENTAL SETUP

To permit a fair comparison between the methods, we executed hyperparameter searches for each model on each dataset, consisting of 20 hyperparameter configurations sampled uniformly according to Appendix A.3. Training was stopped after 20 epochs without improvement of the validation loss; the hyperparameters with the best overall validation performance were selected for quantifying the performance on the test set. The train, validation, and test splits were the same for all models and all evaluations. Final performance on the test set was calculated from 3 independent runs of the models; evaluation took place after the model was restored to the state with the best validation loss. In all subsequent benchmarks, we use the standard deviation of the test performance of these runs as a generalization performance estimate.

4.4 RESULTS

The results are shown in Table 1. Overall, our proposed method exhibits the lowest per-epoch runtime on most datasets, while yielding either competitive or state-of-the-art performance. Further, the trade-off between runtime and performance of the proposed method is very good on all datasets (see Figure A.1 and Figure A.2 in the appendix for a visualization of this argument). In order to elucidate the contribution of individual model components, we also provide an ablation study in Table A.4. Here we see that the attention mechanism contributes more to the model performance, while the positional encoding seems to be beneficial for datasets with highly-varying time series lengths, in particular M3-Phenotyping.

Opening the black box In the medical domain, it is of particular interest to understand the decisions a model makes based on the input it is provided with. The formulation of our model and its per-observation perspective on time series give it the unique property of being able to quantify to what extent an individual observation contributed to the output of the model. We exemplify this in Figure 2 with a patient time series that was combined with our model's attention values, displayed for a set of clinically relevant variables. After reviewing these records with our medical expert, we find that our model is able to pick up regions with drastic changes in individual modalities. Moreover, it is able to inspect other modalities at the same associated time (for instance, at hour 20).
This behaviour is similar to what one would expect from an alerted clinician reviewing the logged medical records. Interestingly, we observe that the model attends to known trends that are consistent with domain knowledge about patient deterioration ultimately resulting in death, such as an increase in lactate or hemodynamic instability, as indicated by drops in blood pressure. Furthermore, the model appears to be alerted by persisting low urine output; after several hours, this can be indicative of kidney failure.

5 CONCLUSION

In this work, we presented a novel approach for classifying time series with irregularly-sampled and unaligned, that is non-synchronized, observations. Our approach yields state-of-the-art to strongly competitive performance on numerous simulated and real-world datasets, while reducing runtime by almost half. Moreover, we demonstrated that combining the perspective of individual observations with an attention mechanism permits increasing the interpretability of the model. This is particularly relevant for medical and healthcare applications. For future work, we reserve a more extensive exploration of the learned latent representation to evaluate its utility for clustering of time series or visualization of their similarity.

A APPENDIX

A.1 DATA FILTERING

Due to the memory requirements of some of the competitor methods, it was necessary to exclude time series with an extremely high number of measurements. For M3-Phenotyping, patients with more than 2000 distinct time points were discarded from training. For M3-Mortality, patients with more than 1000 time points were discarded, as they contained dramatically different measuring frequencies compared to the rest of the dataset.

A.2 IMPLEMENTATIONAL DETAILS

All experiments were run using TensorFlow 1.15.0rc0, and training was performed on NVIDIA GeForce GTX 1080 GPUs. In order to allow a fair comparison between methods, the input processing pipeline cached model-specific representations and transformations of the data. To further increase the efficiency of the RNNs, sequences were binned into buckets of jointly trained instances depending on their sequence length. The buckets were determined according to the (0.25, 0.5, 0.75) quantiles of the length distributions of the datasets.

A.3 TRAINING, MODEL ARCHITECTURES AND HYPERPARAMETER SEARCH

General All models were trained using the Adam optimizer, while randomly sampling the learning rate from (0.001, 0.0005, 0.00025, 0.0001). Further, the batch size of all methods was sampled from the values (32, 64, 128, 256).

Recurrent neural networks For the RNN-based methods (GRU-SIMPLE, PHASED-LSTM, GRU-D, and IP-NETS), the number of units was sampled from the values (16, 32, 64, 128, 256, 512). Further, recurrent dropout and input dropout were sampled from the values (0.0, 0.1, 0.2, 0.3). For the PHASED-LSTM method alone, we did not apply dropout to the recurrent state and the inputs, as the learnt frequencies were hypothesized to fulfill a similar function as dropout (Neil et al., 2016).

SEFT We vary the number of layers, the dropout between the layers, and the number of nodes per layer for both the encoding network hθ and the aggregation network gψ over the same ranges. The number of layers is randomly sampled between 1 and 5, the number of nodes in a layer is uniformly sampled from the values (16, 32, 64, 128, 256, 512), and the dropout fraction is sampled from the values (0.0, 0.1, 0.2, 0.3).
The width of the embedding space prior to aggregation is sampled from the values (32, 64, 128, 256, 512, 1024, 2048). The aggregation function is selected to be one of mean, sum, and max. The number of dimensions used for the positional embedding τ is selected uniformly from (4, 8, 16), and max_ts is selected from the values (10, 100, 1000).

SEFT-Attn The parameters for the encoding and aggregation networks are sampled in a similar fashion as for SEFT. In contrast, we set the aggregation function to be sum, as described in the text. Further, we use a constant architecture for the attention network f′ with 2 layers, 64 nodes per layer, 4 heads, and a dimensionality of the dot-product space d of 128. We only sample the amount of attention dropout, uniformly from the values (0.0, 0.1, 0.25, 0.5).

Transformer We utilize the same model architecture as defined in Vaswani et al. (2017), where we use a one-hidden-layer MLP as the feed-forward network, with the dimensionality of the hidden layer selected to be twice the model dimensionality. The parameters for the Transformer network were sampled according to the following criteria: the dimensionality of the model was sampled uniformly from the values (64, 128, 256, 512, 1024), the number of attention heads per layer from the values (2, 4, 8), and the number of layers from the range {1, . . . , 6}. Further, we sampled the amount of dropout of the residual connections and the amount of attention dropout uniformly from the values (0.0, 0.1, 0.2, 0.3, 0.5), and the maximal timescale for the time embedding from the values (10, 100, 1000) (similar to the SEFT approach).
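For illustration, the uniform sampling of configurations described in Section 4.3 can be sketched as follows; the dictionary mirrors the SEFT grids listed above, while the helper function itself is hypothetical:

```python
import random

# Search space for SEFT as listed in Appendix A.3; structure is illustrative.
SEFT_SPACE = {
    "learning_rate": [0.001, 0.0005, 0.00025, 0.0001],
    "batch_size": [32, 64, 128, 256],
    "n_layers": [1, 2, 3, 4, 5],
    "n_nodes": [16, 32, 64, 128, 256, 512],
    "dropout": [0.0, 0.1, 0.2, 0.3],
    "embedding_width": [32, 64, 128, 256, 512, 1024, 2048],
    "aggregation": ["mean", "sum", "max"],
    "tau": [4, 8, 16],
    "max_ts": [10, 100, 1000],
}

def sample_configs(space, n=20, seed=0):
    """Uniformly sample n hyperparameter configurations from a grid."""
    rng = random.Random(seed)
    return [{k: rng.choice(v) for k, v in space.items()} for _ in range(n)]

configs = sample_configs(SEFT_SPACE)  # 20 configurations, as in Section 4.3
```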
1. What is the focus of the paper regarding multi-modal time series classification? 2. What are the strengths and weaknesses of the proposed approach, particularly in its architecture formulation and attention mechanism? 3. Do you have any concerns or suggestions regarding the paper's evaluation, ablation studies, and experiment design? 4. How does the reviewer assess the clarity, motivation, and presentation of the paper's content? 5. What are the potential limitations and benefits of the proposed method, especially in clinical applications and other time series classification datasets?
Review
Review Summary: The work is focused on classification of irregularly sampled and unaligned multi-modal time series. Prior work has primarily focused on imputation methods, either end-to-end or otherwise. This paper approaches the problem as a set function mapping between the time-series tuples and the class label. The proposed method uses a set encoding of a multi-modal time series input, followed by mode-specific encoding of the tuples, which are then aggregated in multiple ways prior to classification. An attention mechanism is attached in order for the model to automatically weigh the relevance of tuples for the classification. The model is compared to imputation-based baselines on clinical ICU time series classification tasks. The performance mostly appears comparable across baselines, but the proposed method has much better run-times. The paper is for the most part well written, and related work well characterized. The formulation is interesting and clinically relevant as well, so the choice of data-sets makes some sense. I have a few concerns about the architecture formulation and a lack of clarification and intuition in what appears to be the main contribution of the paper (Sec 3.2 and 3.3), which I will detail below: a. In the evaluation, I really want to see a decoupling between the "time encoding step" and "attention based aggregation" on the performance, in order to isolate different sources of performance improvements. That is, can there be a SEFT without time encoding? If not, why not? I encourage more ablation-like studies that look at different sources of performance gains and demonstrate them in experiments. b. The description of Sec 3.3 is really missing key motivation for the choices made around how the attention formulation is designed. For example, why does the dot product include the set elements? What if it doesn't? What is Q supposed to capture? c. Is a_{j,i} shared across instances? Then, irrespective of the number of observations per instance, the $j^{th}$ tuple gets similar weights? If not, appropriate indexing will help clarify this. d. It would be useful to describe how exactly a label is inferred for a *new* test instance. I have some minor additional feedback (just for presentation and motivation purposes): 1. Authors make a claim in the introduction which should likely be qualified with a citation - "Furthermore, even though a decoupled imputation scheme followed by classification is generally more scalable, it may lose information that is relevant for prediction tasks". How does decoupled imputation imply loss of relevant information? By losing information about which observations are missing and relying on that for prediction? Does this clinically make sense? Or even generally? 2. In Sec 3.3, you probably mean $W_i \in R^{(im(f') + |s_j|) \times d}$. That is, parentheses are missing? 3. What are the ± std errors indicating? Is it cross-validation error on a held-out test set? 4. Initially $i$ indexes samples, by equations (3), (4) $i$ indexes time(?), and in Sec 3.3 $i$ indexes observations? How are observations defined here? Is it a measurement of a specific modality at a specific time instance? Can you clarify this in the introduction itself? ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- I have read the authors' updated draft and response. The experiments section looks much better now. 1.
The overall contribution has less clinical utility in my opinion, as generally a patient likely deteriorates over time before an adverse outcome, and therefore giving the model too much flexibility w.r.t. time ordering doesn't make quite as much sense. This is reflected in the fact that experimental results are not drastically better than other baselines. The authors might be able to show the utility of the method on other time series classification datasets where this is not a limitation of the data itself. However, in those settings, it may be a bit hard to beat transformers. Do the authors have a sense of where the benefits of this method really are? 2. Mortality tasks are generally on the simpler side of clinical prediction problems as well. Nonetheless, I think the contribution has some utility to the community. I do encourage the authors to try non-clinical datasets for a comparison. 3. Please add a discussion that covers limitations and where the benefits of your method really lie. A clear and thoughtful discussion is currently missing from your conclusions. With that said, I am updating my score to a 6.
ICLR
Title Set Functions for Time Series Abstract Despite the eminent successes of deep neural networks, many architectures are often hard to transfer to irregularly-sampled and asynchronous time series that occur in many real-world datasets, such as healthcare applications. This paper proposes a novel framework for classifying irregularly sampled time series with unaligned measurements, focusing on high scalability and data efficiency. Our method SEFT (Set Functions for Time Series) is based on recent advances in differentiable set function learning, extremely parallelizable, and scales well to very large datasets and online monitoring scenarios. We extensively compare our method to competitors on multiple healthcare time series datasets and show that it performs competitively whilst significantly reducing runtime. N/A Despite the eminent successes of deep neural networks, many architectures are often hard to transfer to irregularly-sampled and asynchronous time series that occur in many real-world datasets, such as healthcare applications. This paper proposes a novel framework for classifying irregularly sampled time series with unaligned measurements, focusing on high scalability and data efficiency. Our method SEFT (Set Functions for Time Series) is based on recent advances in differentiable set function learning, extremely parallelizable, and scales well to very large datasets and online monitoring scenarios. We extensively compare our method to competitors on multiple healthcare time series datasets and show that it performs competitively whilst significantly reducing runtime. 1 INTRODUCTION With the increasing digitalization, measurements over extensive time periods are becoming ubiquitous. Nevertheless, in many application domains, in particular healthcare (Yadav et al., 2018), measurements might not necessarily be observed at a regular rate or could be misaligned. Moreover, the presence or absence of a measurement and its observation frequency may carry information of its own (Little & Rubin, 2014), such that imputing the missing values is not always desired. While some algorithms can be readily applied to datasets with varying length, these methods usually assume regular sampling of the data and/or require the measurements across modalities to be aligned/synchronized, preventing their application to the aforementioned settings. Existing approaches for unaligned measurements, by contrast, typically rely on imputation to obtain a regularlysampled version of a dataset for classification. Learning a suitable imputation scheme, however, requires understanding the underlying dynamics of a system; this task is significantly more complicated and not necessarily required when classification is the main goal. Furthermore, even though a decoupled imputation scheme followed by classification is generally more scalable, it may lose information (in terms of “missingness patterns”) that could be crucial for prediction tasks. In addition, the fact that decoupled schemes perform worse than methods that are trained end-to-end has been has been empirically demonstrated by Li & Marlin (2016). Approaches that jointly optimize both tasks also add a large computational overhead, thus suffering from poor scalability or high memory requirements. 
Our method is motivated by the understanding that, while RNNs and similar architectures are well suited for capturing and modelling the dynamics of a time series and thus excel at tasks such as forecasting, retaining the order of an input sequence can even be a disadvantage in classification scenarios (Vinyals et al., 2015). We show that by relaxing the condition that a sequence must be processed in order, we can naturally derive an architecture that directly accounts for (i) irregular sampling, and (ii) unsynchronized measurements. Our method SEFT: Set Functions for Time Series, extends recent advances in set function learning to irregular sampled time series classification tasks, yields state-of-the-art performance, is highly scalable and improves over current approaches by almost an order of magnitude in terms of runtime. With SEFT, we propose to rephrase the problem of classifying time series as classifying a set of observations. We show how set functions can be exploited to learn classifiers that are naturally applicable to unaligned and irregularly sampled time series, leading to state-of-the-art performance in irregularly-sampled time series classification tasks. Our approach can be interpreted as learning dataset-specific summary statistics of time series which are optimized to separate instances by class. Furthermore, our method is highly parallelizable and can be readily extended to an online monitoring setup with up to thousands of patients. 2 RELATED WORK This paper focuses on classifying time series with irregular sampling and potentially unaligned measurements. We briefly discuss recent work in this field; all approaches can be broadly grouped into the following three categories. Irregular sampling as missing data While the problem of supervised classification in the presence of missing data is closely related to irregular sampling on time series, there are some core differences. Missing data is usually defined with respect to a number of features that could be observed, whereas time series themselves can have different lengths and a “typical” number of observed values might not exist. Generally, an irregularly-sampled time series can be converted into a missing data problem by discretizing the time axis into non-overlapping intervals, and declaring intervals in which no data was sampled as missing. This approach is followed by Marlin et al. (2012), where a Gaussian Mixture Model was used to do semi-supervised clustering on electronic health records. Similarly, Lipton et al. (2016) discretize the time series into intervals, aggregate multiple measurements within an interval, and add missingness indicators to the input of a Recurrent Neural Network. By contrast, Che et al. (2018) present several variants of the Gated Recurrent Unit (GRU) combined with imputation schemes. Most prominently, the GRU-model was extended to include a decay term (GRU-D), such that the last observed value is decayed to the empirical mean of the time series via a learnable decay term. While these approaches are applicable to irregularly-sampled data, they either rely on imputation schemes or empirical global estimates on the data distribution (our method, by contrast, requires neither), without directly exploiting the global structure of the time series. Frameworks supporting irregular sampling Some frameworks support missing data. For example, Lu et al. 
(2008) directly defined a kernel on irregularly-sampled time series, permitting subsequent classification and regression with kernel-based classifiers or regression schemes. Furthermore, Gaussian Processes (Williams & Rasmussen, 2006) constitute a common probabilistic model for time series; they directly permit modelling of continuous time data using mean and covariance functions. Along these lines, Li & Marlin (2015) derived a kernel on Gaussian Process Posteriors, allowing the comparison and classification of irregularly-sampled time series using kernel-based classifiers. Nevertheless, all of these approaches still rely on separate tuning/training of the imputation method and the classifier so that structures supporting the classification could be potentially missed in the imputation step. An emerging line of research employs Hawkes processes (Hawkes, 1971; Liniger, 2009), i.e. a specific class of self-exciting point processes, for time series modelling and forecasting (Mei & Eisner, 2017; Yang et al., 2017; Xiao et al., 2017). While Hawkes processes exhibit extraordinary performance in these domains, there is no standardised way of using them for classification. Previous work (Lukasik et al., 2016) trains multiple Hawkes processes (one for each label) and classifies a time series by assigning it the label that maximises the respective likelihood function. Since this approach does not scale to our datasets, we were unable to perform a fair comparison. We conjecture that further research will be required to make Hawkes processes applicable to general time series classification scenarios. End-to-end learning of imputation schemes Methods of this type are composed of two modules with separate responsibilities, namely an imputation scheme and a classifier, where both components are trained discriminatively and end-to-end using gradient-based training. Recently, Li & Marlin (2016) proposed the Gaussian Process Adapters (GP Adapters) framework, where the parameters of a Gaussian Process Kernel are trained alongside a classifier. The Gaussian Process gives rise to a fixed-size representation of the irregularly-sampled time series, making it possible to apply any differentiable classification architecture. This approach was further extended to multivariate time series by Futoma et al. (2017) using Multi-task Gaussian Processes (MGPs) (Bonilla et al., 2008), which allow correlations between the imputed channels. Moreover, Futoma et al. (2017) made the approach more compatible with time series of different lengths by applying a Long Short Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) classifier. Motivated by the limited scalability of approaches based on GP Adapters, Shukla & Marlin (2019) suggest an alternative imputation scheme, the interpolation prediction networks. It applies multiple semi-parametric interpolation schemes to obtain a regularly-sampled time series representation. The parameters of the interpolation network are trained with the classifier in an end-to-end setup. 3 PROPOSED METHOD Our paper focuses on the problem of time series classification of irregularly sampled and unaligned time series. We first define the required terms before describing our models 3.1 NOTATION & REQUIREMENTS Definition 1 (Time series). We describe a time series of an instance i as a set Si of M := len(Si) observations sj such that Si := {s1, . . . , sM}. 
We assume each observation sj to be represented as a tuple (tj , zj ,mj), consisting of a time tj ∈ R+, an observed value zj ∈ R, and a modality indicator mj ∈ {1 . . . D}, where D represents the dimensionality of the time series. We write Ω ⊆ R+ × R × N+ to denote the domain of observations. An entire time series can thus be represented as Si := {(t1, z1,m1) , . . . , (tM , zM ,mM )} , (1) where for notational convenience we omitted the index i. We leave this definition very general on purpose, allowing the length of each time series (comprising all channels, such as “heart rate”, “respiratory rate”, etc. of one instance) to differ, since our models are capable of handling this. Likewise, we neither enforce nor expect all time series to be synchronized, i.e. being sampled at the same time, but rather we permit unaligned or non-synchronized observations in the sense of not having to observe all modalities at each time point. Time series are collected in a dataset D. Definition 2 (Dataset). We consider a dataset D to contain n time series. Elements of D are tuples, i.e. D := {(S1, y1), . . . , (SN , yN )}, where Si denotes the ith time series and yi ∈ {1, . . . , C} its associated class label. Figure 1 gives a high-level overview of our method, including the individual steps required to perform classification. To get a more intuitive grasp of these definitions, we briefly illustrate our time series notation with an example. Let instance i be an in-hospital patient, while the time series represent measurements of two channels of vital parameters during a hospital stay, namely heart rate (HR) and mean arterial blood pressure (MAP). We enumerate those channels as modalities 1 and 2. Counting from admission time, a HR of 60 and 65 beats per minute was measured after 0.5 h and 3.0 h, respectively, whereas MAP values of 80, 85, and 87 mmHg were observed after 0.5 h, 1.7 h, and 2.5 h. According to Definition 1, the time series is thus represented as Si = {(0.5, 60, 1) , (3, 65, 1) , (0.5, 80, 2) , (1.7, 85, 2) , (3, 87, 2)}. In this example, observations are ordered by modality to increase readability; in practice, we are dealing with unordered sets. Definition 3 (Non-synchronized time series). We call a D-dimensional time series nonsynchronized if there is at least one time point tj ∈ R+ at which at least one modality is not observed, i.e. if there exists tj ∈ R+ such that |{(tk, zk,mk) | tk = tj}| 6= D. Furthermore, we assume that no two measurements of the same modalitymk occur at the same time, i.e. ti 6= tj for i 6= j has to be satisfied for all measurements in mk. This assumption is not required for technical reasons but for consistency. It also makes it possible to interpret the results later on. To summarize our generic setup, we do not requireM , the number of observations per time series, to be the same, i.e. len(Si) 6= len(Sj) for i 6= j is permitted, nor do we assume that the time points and modalities of the observations are the same across time series. This setting is common in biomedical time series, for example. Since typical machine learning algorithms are designed to operate on data of a fixed dimension, novel approaches to this non-trivial problem are required. 3.2 OUR MODEL In the following, we describe an approach inspired by differentiable learning of functions that operate on sets (Zaheer et al., 2017; Wagstaff et al., 2019). 
We phrase the problem of classifying time series on irregular grids as learning a function f on a set of arbitrarily many time series observations following Definition 1, i.e. S = {(t1, z1,m1), . . . , (tM , zM ,mM )}, such that f : S → RC , where S represents a generic time series of arbitrary cardinality and RC corresponds to the logits of the C classes in the dataset. As we previously discussed, we interpret each time series as an unordered set of measurements, where all information is conserved because the observation time is included for each set element. Specifically, we define f to be a set function, i.e. a function that operates on a set and thus has to be invariant to the ordering of the elements in the set. Multiple architectures are applicable to constructing set functions such as Transformers (Lee et al., 2019; Vaswani et al., 2017), or Deep Sets (Zaheer et al., 2017). Due to preliminary experiments, where Transformers suffered from lower generalization performance in our setting1, we base this work on the framework of Zaheer et al. (2017). Intuitively, this can be seen as computing multivariate dataset-specific summary statistics, which are optimized to maximize classification performance. Thus, we sum-decompose the set function f into the form f(S) = g 1 |S| ∑ sj∈S h(sj) (2) where h : Ω → Rd and g : Rd → RC are neural networks, d ∈ N+ determines the dimensionality of the latent representation, and sj represents a single observation of the time series S. We can view the averaged representations 1/|S| ∑ sj∈S h(sj) in general as a dataset-specific summary statistic learned to best distinguish the class labels. Equation 2 also implies the beneficial scalability properties of our approach: each embedding can be calculated independently of the others; hence, the constant computational cost of passing a single observation through the function h is scaled by the number of observations, resulting in a runtime of O(M) for a time series of length M . Recently, Wagstaff et al. (2019) derived requirements for a practical universal function representation of sum-decomposable set functions, i.e the requirements necessary for a sum-decomposable function to represent an arbitrary set-function given that h and g are arbitrarily expressive. In particular, they show that a universal function representation can only be guaranteed provided that d ≥ maxi len(Si) is satisfied. During hyperparameter search we thus independently sample the dimensionality of the aggregation space, and allow it to be in the order of the number of observations that are to be expected in the dataset. Further, we explored the utilization of max, sum, and mean as alternative aggregation functions inspired by Zaheer et al. (2017); Garnelo et al. (2018). 1Please see Section 4.4 for a quantification. Intuition Our method can be connected to Takens’s embedding theorem (Takens, 1981) for dynamical systems: we also observe a set of samples from some unknown (but deterministic) dynamical process; provided the dimensionality of our architecture is sufficiently large2, we are capable of reconstructing the system up to diffeomorphism. The crucial difference is that we do not have to construct a time-delay embedding but rather, we let the network learn an embedding that is suitable for classification. Time encoding In order to represent the time point of an observation on a normalized scale, we employ variant of positional encodings, as introduced by Vaswani et al. (2017). 
Preliminary results indicated that this encoding scheme reduces the sensitivity towards initialization and training hyperparameters of a model. Specifically, the time encoding converts the one-dimensional time axis into a multi-dimensional input by passing the time t of each observation through multiple sine and cosine functions of varying frequencies. Given a dimensionality τ ∈ N+ of the time encoding, we refer to the encoded position as x ∈ Rτ , where x2k(t) := sin ( t max ts2k/τ ) (3) x2k+1(t) := cos ( t max ts2k/τ ) (4) with k ∈ {0, . . . , τ/2} and max ts representing the maximal time scale that is expected in the data. Intuitively, we select the wavelengths using a geometric progression from 2π to max ts · 2π, and treat the number of steps and the maximum timescale max ts as hyperparameters of the model. For all experiments time encodings were used, such that an observation is represented as sj = (x (tj) , zj ,mj). Loss function If not mentioned otherwise, we choose h and g in Equation 2 to be multilayer perceptron deep neural networks, parametrized by weights θ and ψ, respectively. We thus denote these neural networks by hθ and gψ; their parameters are shared across all instances per dataset. In our training setup, we follow Zaheer et al. (2017) and apply the devised set function to the complete time series, i.e. to the set of all observations for each time series. Overall, we optimize a loss function that is defined as L(θ, ψ) := E(S,y)∈D ` y; gψ 1 |S| ∑ sj∈S hθ(sj) , (5) where `(·) represents a task-specific loss function. In out setup, we either utilize the binary crossentropy in combination with a sigmoid activation function in the last layer for binary classification or multi-label classification tasks and categorical cross-entropy in combination with a softmax activation function in the last layer for multi-class classification tasks. 3.3 ATTENTION-BASED AGGREGATION So far, our method permits encoding sets of arbitrary sizes into a fixed-size representation. For increasingly large set sizes, however, many irrelevant observations could influence the result of the set function. The mean aggregation function is particularly susceptible to this because the influence of an observation to the embedding shrinks proportionally to the size of the set. We thus suggest to use a weighted mean in order to allow the model to decide which observations are relevant and which should be considered irrelevant. This is equivalent to computing an attention a(S, sj) over the set input elements, and subsequently, computing the sum over all elements in the set. Our approach is based on scaled dot-product attention with multiple heads i ∈ {1, . . . ,m} in order to be able to cover different aspects of the aggregated set3. We define a(·), i.e. the attention weight function of an individual time series, to depend on the overall set of observations. This is achieved 2In Takens’s embedding theorem, d > dB is required, where dB refers to the fractal box counting dimension (Liebovitch & Toth, 1989), which is typically well below the size of typical neural network architectures. 3Since we are dealing only with a single instance (i.e. time series) in this section, we use i and j to denote a head and an observation, respectively. by computing an embedding of the set elements using a smaller set function f ′, and projecting the concatenation of the set representation and the individual set elements into a d-dimensional space. 
Specifically, we have Kj,i = [f ′(S), sj ]TWi where Wi ∈ R(im(f ′)+|sj |)×d and K ∈ R|S|×d. Furthermore, we define a matrix of query points Q ∈ Rm×d, which allow the model to summarize different aspects of the dataset via ej,i = Kj,i ·Qi√ d and aj,i = exp(ej,i)∑ j exp(ej,i) where aj,i represents the amount of attention that head i gives to set element j. The head-specific rowQi of the query matrixQ allows a head to focus on individual aspects (such as the distribution of one or multiple modalities) of a time series. For each head, we multiply the set element embeddings computed via the set function f with the attentions derived for the individual instances, i.e. ri =∑ j aj,if(sj). The computed representation is concatenated and passed to the aggregation network hθ as in a regular set function, i.e. r∗ = [r1 . . . rm]. In our setup, we initialize Q with zeros, such that at the beginning of training, the attention mechanism is equivalent to computing the unweighted mean over the set elements. Overall, this aggregation function is similar to Transformers (Vaswani et al., 2017), but differs from them in a few key aspects. Standard Transformer blocks would use the information from all set elements in order to compute the embedding of an individual set element, leading to a runtime and space complexity of O(n2). In contrast, our approach computes the embeddings of set elements independently, leading lower runtime and memory complexity of O(n). Further, we observed that computing embeddings with information from other set elements (as the Transformer does) actually decreases generalization performance (see Table 1 for details). 4 EXPERIMENTS We executed all experiments and implementations in a unified code base, which we also make available4 to the community. While some of the datasets used subsequently have access restrictions, anybody can gain access after satisfying the defined requirements. This ensures the reproducibility of our results. Please consult Appendix A.2 for further details. 4.1 DATASETS In order to benchmark the proposed method we selected 4 datasets with irregularly-sampled and non-synchronized measurements. Healing MNIST The H-MNIST dataset was introduced by Krishnan et al. (2015) in order to simulate characteristics which typically occur in medical time series. In our setup, we use a variant of this dataset. Every instance of the dataset contains 10 frames, derived from a single instance of MNIST dataset, where the digit is rotated according to an angle uniformly sampled between −90◦ to 90◦. Furthermore, 3 randomly-selected consecutive frames are augmented by a square artefact in the top left corner of the image in order to indicate seasonality in the time series. Finally, 60 % of the data points are randomly discarded in order to yield a final high-dimensional irregularly-sampled time series with non-synchronized measurements. Using these settings each instance has on average 3, 136 observations. MIMIC-III Tasks MIMIC-III (Johnson et al., 2016) is a widely-used, freely-accessible dataset containing around 50, 000 distinct ICU stays. The median length of stay is 2.1 d and a wide range of physiological measurements (e.g. arterial blood pressure, respiration rate, heart rate) are recorded with a resolution of 1 h. Furthermore, laboratory test results, collected at irregular time intervals are available. Recently, Harutyunyan et al. (2019) defined a set of machine learning tasks, labels, and benchmarks using a subset of the MIMIC-III dataset. 
4 EXPERIMENTS We executed all experiments and implementations in a unified code base, which we also make available to the community (https://osf.io/2hg74/?view_only=8d45fdf237954948a02f1e2bf701cdf1). While some of the datasets used subsequently have access restrictions, anybody can gain access after satisfying the defined requirements. This ensures the reproducibility of our results. Please consult Appendix A.2 for further details.
4.1 DATASETS In order to benchmark the proposed method, we selected 4 datasets with irregularly-sampled and non-synchronized measurements.
Healing MNIST The H-MNIST dataset was introduced by Krishnan et al. (2015) in order to simulate characteristics which typically occur in medical time series. In our setup, we use a variant of this dataset. Every instance of the dataset contains 10 frames, derived from a single instance of the MNIST dataset, where the digit is rotated by an angle uniformly sampled between −90° and 90°. Furthermore, 3 randomly-selected consecutive frames are augmented with a square artefact in the top left corner of the image in order to introduce seasonality into the time series. Finally, 60% of the data points are randomly discarded in order to yield a final high-dimensional irregularly-sampled time series with non-synchronized measurements. With these settings, each instance has on average 3,136 observations.
MIMIC-III Tasks MIMIC-III (Johnson et al., 2016) is a widely-used, freely-accessible dataset containing around 50,000 distinct ICU stays. The median length of stay is 2.1 days, and a wide range of physiological measurements (e.g. arterial blood pressure, respiration rate, heart rate) are recorded with a resolution of 1 h. Furthermore, laboratory test results, collected at irregular time intervals, are available. Recently, Harutyunyan et al. (2019) defined a set of machine learning tasks, labels, and benchmarks using a subset of the MIMIC-III dataset. We trained and evaluated our method and competing methods on the binary mortality prediction task (M3-Mortality) and on the multi-class problem of phenotype classification (M3-Phenotyping), while applying the additional filtering described in Appendix A.1. The goal of the mortality prediction task is to predict whether a patient will die during his/her hospital stay using only data from the first 48 hours of the ICU stay. This dataset contains around 21,000 stays, of which approximately 10% result in death. The phenotype classification task consists of 40,000 patients, each of whom can suffer from a multitude of 25 acute care conditions.
Physionet Mortality Prediction Challenge The 2012 Physionet challenge dataset (Goldberger et al., 2000), which we abbreviate P-Mortality, contains 12,000 ICU stays, each of which lasts at least 48 h. For each stay, a set of general descriptors (such as gender, age, height, weight) was collected at admission time. Depending on the course of the stay and patient status, up to 37 time series variables were measured (e.g. blood pressure, lactate, respiration rate, temperature). While some modalities might be measured at regular time intervals (e.g. hourly or daily), some are only collected when required; not all variables are available for each stay. The goal of the challenge was to predict if, and with which certainty, a patient will die during the hospital stay. The training set consists of 8,000 stays, while the testing set comprises 4,000 ICU visits. Both datasets are similarly imbalanced, with a prevalence of around 14%. For simplicity, the general descriptors (such as age and weight) were included as time points with a single observation at the beginning of the stay. This treatment is similar to the approach by Harutyunyan et al. (2019) in the MIMIC-III benchmarking datasets. Please refer to Table A.1, Table A.2, and Table A.3 in the appendix for a more detailed enumeration of sample sizes and label distributions. The total number of samples may slightly deviate from the originally published splits, as time series of excessive length prevented fitting some methods in reasonable time and were therefore excluded.
4.2 COMPETITOR METHODS
GRU-simple GRU-SIMPLE (Che et al., 2018) augments the input at time t of a Gated Recurrent Unit RNN with a measurement mask $m_t^d$ and a $\delta_t$ matrix, which contains the time since the last measurement of the corresponding modality d, such that

$\delta_t^d = \begin{cases} s_t - s_{t-1} + \delta_{t-1}^d & t > 1,\ m_{t-1}^d = 0 \\ s_t - s_{t-1} & t > 1,\ m_{t-1}^d = 1 \\ 0 & t = 0 \end{cases}$

where $s_t$ represents the time associated with time step t.
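The cases above translate directly into a short sketch; the following minimal NumPy version (function and variable names are ours) iterates once over the time axis:

```python
import numpy as np

def time_deltas(times, mask):
    """Per-modality time since the last measurement, as used to augment the
    GRU-SIMPLE input (see the cases above).

    times : (T,) observation times s_t
    mask  : (T, D) binary matrix, m_t^d = 1 iff modality d is observed at step t
    """
    T, D = mask.shape
    delta = np.zeros((T, D))
    for t in range(1, T):
        gap = times[t] - times[t - 1]
        # restart the counter where d was observed at t-1, otherwise accumulate
        delta[t] = np.where(mask[t - 1] == 1, gap, gap + delta[t - 1])
    return delta
```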
Phased-LSTM The PHASED-LSTM (Neil et al., 2016) introduced a biologically inspired, time-dependent gating mechanism which regulates access to the hidden state and cell state of a Long Short-Term Memory (LSTM) cell (Hochreiter & Schmidhuber, 1997). While this allows the network to handle event-based sequences with irregularly spaced observations, the approach does not support unaligned measurements. In order to still provide the architecture with all relevant information, we augment the input in a similar fashion as described for the GRU-SIMPLE approach.
GRU-D GRU-D or GRU-Decay (Che et al., 2018) contains modifications to the GRU RNN cell, allowing it to decay past observations to the mean imputation of a modality using a learnable decay rate. By additionally providing the measurement masks as an input, the recurrent neural network learns how fast to decay the last fed-in value back to the mean imputation of the missing data modality.
Interpolation Prediction Networks IP-NETWORKS (Shukla & Marlin, 2019) apply multiple semi-parametric interpolation schemes to irregularly-sampled time series to obtain regularly-sampled representations that cover long-term trends, transients, and also sampling information. The method combines a univariate interpolation step with a subsequent multivariate interpolation; the parameters of the interpolation network are trained with the classifier in an end-to-end fashion.
Transformer In the TRANSFORMER architecture (Vaswani et al., 2017), the elements of a sequence are encoded simultaneously, and information between sequence elements is captured using Multi-Head Attention blocks. In our case, an individual sequence element corresponds to all measurements available at a given time point, augmented with a measurement indicator. Transformers are normally used for sequence-to-sequence modelling tasks; in our setup, they were adapted to classification tasks by mean-aggregating the final representation. This representation is then fed into a one-layer MLP to predict the logits of the individual classes.
4.3 EXPERIMENTAL SETUP To permit a fair comparison between the methods, we executed a hyperparameter search for each model on each dataset, uniformly sampling 20 configurations according to Appendix A.3. Training was stopped after 20 epochs without improvement of the validation loss, and the hyperparameters with the best overall validation performance were selected for quantifying the performance on the test set. The train, validation, and test splits were the same for all models and all evaluations. Final performance on the test set was calculated from 3 independent runs of the models; evaluation took place after each model was restored to the state with the best validation loss. In all subsequent benchmarks, we use the standard deviation of the test performance across these runs as a generalization performance estimate.
4.4 RESULTS The results are shown in Table 1. Overall, our proposed method exhibits the lowest per-epoch runtime on most datasets, while yielding either competitive or state-of-the-art performance. Further, the trade-off between runtime and performance of the proposed method is very good on all datasets (see Figure A.1 and Figure A.2 in the appendix for a visualization of this argument). In order to elucidate the contribution of individual model components, we also provide an ablation study in Table A.4. Here we see that the attention mechanism contributes more to the model performance, while the positional encoding seems to be beneficial for datasets with highly-varying time series lengths, in particular M3-Phenotyping.
Opening the black box In the medical domain, it is of particular interest to understand the decisions a model makes based on the input it is provided with. The formulation of our model and its per-observation perspective on time series give it the unique property of being able to quantify the extent to which an individual observation contributed to the output of the model. We exemplify this in Figure 2 with a patient time series combined with our model's attention values, displayed for a set of clinically relevant variables. After reviewing these records with our medical expert, we find that our model is able to pick up regions with drastic changes in individual modalities. Moreover, it is able to inspect other modalities at the same associated time (for instance, at hour 20).
This is behaviour similar to what one would expect from an alerted clinician reviewing the logged medical records. Interestingly, we observe that the model attends to known trends (consistent with domain knowledge about patient deterioration ultimately resulting in death), such as an increase in lactate or hemodynamic instability, as indicated by drops in blood pressure. Furthermore, the model appears to be alerted by persisting low urine output; after several hours, this can be indicative of kidney failure.
5 CONCLUSION In this work, we presented a novel approach for classifying time series with irregularly-sampled and unaligned, that is, non-synchronized, observations. Our approach yields state-of-the-art to strongly competitive performance on numerous simulated and real-world datasets, while reducing runtime by almost half. Moreover, we demonstrated that combining the perspective of individual observations with an attention mechanism permits increasing the interpretability of the model. This is particularly relevant for medical and healthcare applications. For future work, we reserve a more extensive exploration of the learned latent representation to evaluate its utility for clustering of time series or visualization of their similarity.
A APPENDIX
A.1 DATA FILTERING Due to the memory requirements of some of the competitor methods, it was necessary to exclude time series with an extremely high number of measurements. For M3-Phenotyping, patients with more than 2000 distinct time points were discarded from training. For M3-Mortality, patients with more than 1000 time points were discarded, as they exhibited dramatically different measuring frequencies compared to the rest of the dataset.
A.2 IMPLEMENTATION DETAILS All experiments were run using tensorflow 0.15.0rc0, and training was performed on NVIDIA GeForce GTX 1080 GPUs. In order to allow a fair comparison between methods, the input processing pipeline cached model-specific representations and transformations of the data. To further increase the efficiency of the RNNs, sequences were binned into buckets of jointly trained instances depending on their sequence length. The buckets were determined according to the (0.25, 0.5, 0.75) quantiles of the length distributions of the datasets.
A.3 TRAINING, MODEL ARCHITECTURES AND HYPERPARAMETER SEARCH
General All models were trained using the Adam optimizer, while randomly sampling the learning rate from (0.001, 0.0005, 0.00025, 0.0001). Further, the batch size of all methods was sampled from the values (32, 64, 128, 256).
Recurrent neural networks For the RNN-based methods (GRU-SIMPLE, PHASED-LSTM, GRU-D and IP-NETS), the number of units was sampled from the values (16, 32, 64, 128, 256, 512). Further, recurrent dropout and input dropout were sampled from the values (0.0, 0.1, 0.2, 0.3). Only for the PHASED-LSTM method did we not apply dropout to the recurrent state and the inputs, as the learnt frequencies were hypothesized to fulfill a similar function as dropout (Neil et al., 2016).
SEFT We vary the number of layers, the dropout between the layers, and the number of nodes per layer for both the encoding network hθ and the aggregation network gψ over the same ranges. The number of layers is randomly sampled between 1 and 5, the number of nodes in a layer is uniformly sampled from the values (16, 32, 64, 128, 256, 512), and the dropout fraction is sampled from the values (0.0, 0.1, 0.2, 0.3).
The width of the embedding space prior to aggregation is sampled from the values (32, 64, 128, 256, 512, 1024, 2048). The aggregation function is selected to be one of mean, sum, and max. The number of dimensions used for the positional embedding τ is selected uniformly from (4, 8, 16), and max_ts is selected from the values (10, 100, 1000).
SEFT-Attn The parameters for the encoding and aggregation networks are sampled in a similar fashion as for SEFT. In contrast, we set the aggregation function to be sum, as described in the text. Further, we use a constant architecture for the attention network f′ with 2 layers, 64 nodes per layer, 4 heads, and a dimensionality of the dot-product space d of 128. We only sample the amount of attention dropout, uniformly from the values (0.0, 0.1, 0.25, 0.5).
Transformer We utilize the same model architecture as defined in Vaswani et al. (2017), where we use a one-hidden-layer MLP as the feed-forward network, with the dimensionality of the hidden layer selected to be twice the model dimensionality. The parameters for the Transformer network were sampled according to the following criteria: the dimensionality of the model was sampled uniformly from the values (64, 128, 256, 512, 1024), the number of attention heads per layer from the values (2, 4, 8), and the number of layers from the integer range [1, 6]. Further, we sampled the amount of dropout of the residual connections and the amount of attention dropout uniformly from the values (0.0, 0.1, 0.2, 0.3, 0.5), and the maximal timescale for the time embedding from the values (10, 100, 1000) (similar to the SEFT approach).
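To make the search space above concrete, here is a hedged sketch of the random sampling described in Appendix A.3; the dictionary keys are illustrative names of ours, not identifiers from the released code base:

```python
import random

def sample_seft_config():
    """Draw one configuration from the SEFT search space of Appendix A.3."""
    return {
        "learning_rate": random.choice([0.001, 0.0005, 0.00025, 0.0001]),
        "batch_size": random.choice([32, 64, 128, 256]),
        "n_layers": random.randint(1, 5),            # inclusive on both ends
        "nodes_per_layer": random.choice([16, 32, 64, 128, 256, 512]),
        "dropout": random.choice([0.0, 0.1, 0.2, 0.3]),
        "aggregation": random.choice(["mean", "sum", "max"]),
        "tau": random.choice([4, 8, 16]),             # positional embedding dims
        "max_ts": random.choice([10, 100, 1000]),     # maximal timescale
    }

configs = [sample_seft_config() for _ in range(20)]   # 20 sampled configurations
```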
1. What is the main contribution of the paper regarding time series classification?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application to healthcare data?
3. Do you have any concerns about the choice of modeling time series as a set?
4. How does the paper compare with other approaches in the literature, especially those based on RNN and point-process based methods?
5. Are there any parts of the paper that need improvement in terms of clarity and detail?
Review
Review This paper considers the problem of supervised classification of time-series data that are irregularly sampled and asynchronous, with a special focus on healthcare applications in the experiments. Inspired by recent progress on differentiable set function learning, the paper proposes an approach called Set Functions for Time Series (SEFT), which views a time series as a set and uses a parametrized sum-decomposable function f as the model for representing the probabilities of the different classes, with the sets as the inputs. The problem then reduces to learning the finite-dimensional parametrization of the function f under a given loss, which is a differentiable optimization problem that can be solved via standard optimization methods. Together with a positional embedding of the timestamps and an attention-based aggregation, the paper reports improved performance of the proposed approach on a few healthcare time series with asynchronous and irregularly sampled data. In particular, the runtime is largely shortened, while the final accuracy remains competitive with the other methods compared in the paper. The idea of SEFT is novel and the results also show its promise. In addition, the interpretability shown in Section 4.3 is attractive. However, there are several issues that limit the contribution and maturity of this paper. Firstly, the paper proposes to model a time series as a set. But this loses the information about the order of the time series, which can be extremely important in datasets with long history dependence. In such cases, I'm not convinced that the set modeling would work. The authors should double-check the characteristics of the datasets that are used, and see if they intuitively lack long-history-dependence properties. If so, this should be mentioned clearly. The authors should also make a fairer comparison with other approaches (like those based on RNNs) on datasets with strong history dependence, e.g., the Memetracker dataset of web postings and limit-order-book datasets. Otherwise, it would not be clear whether this set modeling is generally applicable to general time series data. Secondly, the authors missed a large amount of related literature on approaching asynchronous and irregularly sampled time series, namely (marked) point-process based approaches. See papers like [1, 2, 3], to name just a few. The authors should at least include some of the recent approaches in this direction for comparison before claiming the superiority of SEFT. Thirdly, there are a few parts that are not very clear. 1) The discussion about complexity (order m and m\log m) at the bottom of page 1 is weird -- what does this complexity refer to? Does it include the learning of the unknown parameters in the models (like the training of the neural networks in this paper)? 2) The loss function in formula (5) is not specified later in the paper (at least it is hard to find). 3) Table 1 should be explained in much more detail. In particular, why is SEFT-ATTN not included for H-MNIST? The comment after * is also not clear to me -- is it relevant to why SEFT-ATTN is not included? And what are MICRO/MACRO/WEIGHTED AUC? And why are different sets of performance criteria used for the first two and the last two datasets? Finally, some minor comments: 1) on page 2, "the following methods" should be "the above methods"; 2) on page 3, the meaning of "channels" should be specified more clearly; 3) on page 4, in formulae (3) and (4), should there be \pi or 2\pi in the formula?
[1] Mei, Hongyuan, and Jason M. Eisner. "The Neural Hawkes Process: A neurally self-modulating multivariate point process." Advances in Neural Information Processing Systems. 2017. [2] Xiao, Shuai, et al. "Joint modeling of event sequence and time series with attentional twin recurrent neural networks." arXiv preprint arXiv:1703.08524 (2017). [3] Yang, Yingxiang, et al. "Online learning for multivariate Hawkes processes." Advances in Neural Information Processing Systems. 2017. ############## post rebuttal ############### After reading the authors' rebuttal, I have decided to raise the rating to 5 (reflected as 6 due to the ICLR rating system limitation this year).
ICLR
Title Set Functions for Time Series
Abstract Despite the eminent successes of deep neural networks, many architectures are often hard to transfer to the irregularly-sampled and asynchronous time series that occur in many real-world datasets, such as healthcare applications. This paper proposes a novel framework for classifying irregularly sampled time series with unaligned measurements, focusing on high scalability and data efficiency. Our method SEFT (Set Functions for Time Series) is based on recent advances in differentiable set function learning, is extremely parallelizable, and scales well to very large datasets and online monitoring scenarios. We extensively compare our method to competitors on multiple healthcare time series datasets and show that it performs competitively whilst significantly reducing runtime.
1 INTRODUCTION With increasing digitalization, measurements over extensive time periods are becoming ubiquitous. Nevertheless, in many application domains, in particular healthcare (Yadav et al., 2018), measurements might not necessarily be observed at a regular rate or could be misaligned. Moreover, the presence or absence of a measurement and its observation frequency may carry information of their own (Little & Rubin, 2014), such that imputing the missing values is not always desired. While some algorithms can be readily applied to datasets with varying length, these methods usually assume regular sampling of the data and/or require the measurements across modalities to be aligned/synchronized, preventing their application to the aforementioned settings. Existing approaches for unaligned measurements, by contrast, typically rely on imputation to obtain a regularly-sampled version of a dataset for classification. Learning a suitable imputation scheme, however, requires understanding the underlying dynamics of a system; this task is significantly more complicated and not necessarily required when classification is the main goal. Furthermore, even though a decoupled imputation scheme followed by classification is generally more scalable, it may lose information (in terms of "missingness patterns") that could be crucial for prediction tasks. In addition, the fact that decoupled schemes perform worse than methods that are trained end-to-end has been empirically demonstrated by Li & Marlin (2016). Approaches that jointly optimize both tasks, in turn, add a large computational overhead, thus suffering from poor scalability or high memory requirements.
Our method is motivated by the understanding that, while RNNs and similar architectures are well suited for capturing and modelling the dynamics of a time series and thus excel at tasks such as forecasting, retaining the order of an input sequence can even be a disadvantage in classification scenarios (Vinyals et al., 2015). We show that by relaxing the condition that a sequence must be processed in order, we can naturally derive an architecture that directly accounts for (i) irregular sampling and (ii) unsynchronized measurements. Our method, SEFT (Set Functions for Time Series), extends recent advances in set function learning to irregularly sampled time series classification tasks, yields state-of-the-art performance, is highly scalable, and improves over current approaches by almost an order of magnitude in terms of runtime. With SEFT, we propose to rephrase the problem of classifying time series as classifying a set of observations. We show how set functions can be exploited to learn classifiers that are naturally applicable to unaligned and irregularly sampled time series, leading to state-of-the-art performance in irregularly-sampled time series classification tasks. Our approach can be interpreted as learning dataset-specific summary statistics of time series which are optimized to separate instances by class. Furthermore, our method is highly parallelizable and can be readily extended to an online monitoring setup with up to thousands of patients.
2 RELATED WORK This paper focuses on classifying time series with irregular sampling and potentially unaligned measurements. We briefly discuss recent work in this field; all approaches can be broadly grouped into the following three categories.
Irregular sampling as missing data While the problem of supervised classification in the presence of missing data is closely related to irregular sampling on time series, there are some core differences. Missing data is usually defined with respect to a number of features that could be observed, whereas time series themselves can have different lengths and a "typical" number of observed values might not exist. Generally, an irregularly-sampled time series can be converted into a missing data problem by discretizing the time axis into non-overlapping intervals and declaring intervals in which no data was sampled as missing. This approach is followed by Marlin et al. (2012), where a Gaussian Mixture Model was used for semi-supervised clustering of electronic health records. Similarly, Lipton et al. (2016) discretize the time series into intervals, aggregate multiple measurements within an interval, and add missingness indicators to the input of a Recurrent Neural Network. By contrast, Che et al. (2018) present several variants of the Gated Recurrent Unit (GRU) combined with imputation schemes. Most prominently, the GRU model was extended to include a decay term (GRU-D), such that the last observed value is decayed to the empirical mean of the time series via a learnable decay term. While these approaches are applicable to irregularly-sampled data, they either rely on imputation schemes or on empirical global estimates of the data distribution (our method, by contrast, requires neither), without directly exploiting the global structure of the time series.
Frameworks supporting irregular sampling Some frameworks support missing data. For example, Lu et al.
(2008) directly defined a kernel on irregularly-sampled time series, permitting subsequent classification and regression with kernel-based classifiers or regression schemes. Furthermore, Gaussian Processes (Williams & Rasmussen, 2006) constitute a common probabilistic model for time series; they directly permit the modelling of continuous-time data using mean and covariance functions. Along these lines, Li & Marlin (2015) derived a kernel on Gaussian Process posteriors, allowing the comparison and classification of irregularly-sampled time series using kernel-based classifiers. Nevertheless, all of these approaches still rely on separate tuning/training of the imputation method and the classifier, so that structures supporting the classification could potentially be missed in the imputation step. An emerging line of research employs Hawkes processes (Hawkes, 1971; Liniger, 2009), i.e. a specific class of self-exciting point processes, for time series modelling and forecasting (Mei & Eisner, 2017; Yang et al., 2017; Xiao et al., 2017). While Hawkes processes exhibit extraordinary performance in these domains, there is no standardised way of using them for classification. Previous work (Lukasik et al., 2016) trains multiple Hawkes processes (one for each label) and classifies a time series by assigning it the label that maximises the respective likelihood function. Since this approach does not scale to our datasets, we were unable to perform a fair comparison. We conjecture that further research will be required to make Hawkes processes applicable to general time series classification scenarios.
End-to-end learning of imputation schemes Methods of this type are composed of two modules with separate responsibilities, namely an imputation scheme and a classifier, where both components are trained discriminatively and end-to-end using gradient-based training. Recently, Li & Marlin (2016) proposed the Gaussian Process Adapters (GP Adapters) framework, where the parameters of a Gaussian Process kernel are trained alongside a classifier. The Gaussian Process gives rise to a fixed-size representation of the irregularly-sampled time series, making it possible to apply any differentiable classification architecture. This approach was further extended to multivariate time series by Futoma et al. (2017) using Multi-task Gaussian Processes (MGPs) (Bonilla et al., 2008), which allow correlations between the imputed channels. Moreover, Futoma et al. (2017) made the approach more compatible with time series of different lengths by applying a Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) classifier. Motivated by the limited scalability of approaches based on GP Adapters, Shukla & Marlin (2019) suggest an alternative imputation scheme, the interpolation prediction networks, which applies multiple semi-parametric interpolation schemes to obtain a regularly-sampled time series representation. The parameters of the interpolation network are trained with the classifier in an end-to-end setup.
3 PROPOSED METHOD Our paper focuses on the classification of irregularly sampled and unaligned time series. We first define the required terms before describing our model.
3.1 NOTATION & REQUIREMENTS
Definition 1 (Time series). We describe a time series of an instance i as a set $S_i$ of $M := \mathrm{len}(S_i)$ observations $s_j$, such that $S_i := \{s_1, \ldots, s_M\}$.
We assume each observation $s_j$ to be represented as a tuple $(t_j, z_j, m_j)$, consisting of a time $t_j \in \mathbb{R}^+$, an observed value $z_j \in \mathbb{R}$, and a modality indicator $m_j \in \{1, \ldots, D\}$, where D represents the dimensionality of the time series. We write $\Omega \subseteq \mathbb{R}^+ \times \mathbb{R} \times \mathbb{N}^+$ to denote the domain of observations. An entire time series can thus be represented as

$S_i := \{(t_1, z_1, m_1), \ldots, (t_M, z_M, m_M)\}$,  (1)

where for notational convenience we omit the index i. We leave this definition very general on purpose, allowing the length of each time series (comprising all channels, such as "heart rate" and "respiratory rate", of one instance) to differ, since our models are capable of handling this. Likewise, we neither enforce nor expect all time series to be synchronized, i.e. sampled at the same time, but rather permit unaligned or non-synchronized observations in the sense of not having to observe all modalities at each time point. Time series are collected in a dataset.
Definition 2 (Dataset). We consider a dataset $\mathcal{D}$ to contain n time series. Elements of $\mathcal{D}$ are tuples, i.e. $\mathcal{D} := \{(S_1, y_1), \ldots, (S_N, y_N)\}$, where $S_i$ denotes the ith time series and $y_i \in \{1, \ldots, C\}$ its associated class label.
Figure 1 gives a high-level overview of our method, including the individual steps required to perform classification. To get a more intuitive grasp of these definitions, we briefly illustrate our time series notation with an example (a short code rendering of this example follows at the end of this subsection). Let instance i be an in-hospital patient, while the time series represent measurements of two channels of vital parameters during a hospital stay, namely heart rate (HR) and mean arterial blood pressure (MAP). We enumerate those channels as modalities 1 and 2. Counting from admission time, a HR of 60 and 65 beats per minute was measured after 0.5 h and 3.0 h, respectively, whereas MAP values of 80, 85, and 87 mmHg were observed after 0.5 h, 1.7 h, and 3.0 h. According to Definition 1, the time series is thus represented as $S_i = \{(0.5, 60, 1), (3, 65, 1), (0.5, 80, 2), (1.7, 85, 2), (3, 87, 2)\}$. In this example, observations are ordered by modality to increase readability; in practice, we are dealing with unordered sets.
Definition 3 (Non-synchronized time series). We call a D-dimensional time series non-synchronized if there is at least one time point $t_j \in \mathbb{R}^+$ at which at least one modality is not observed, i.e. if there exists a $t_j \in \mathbb{R}^+$ such that $|\{(t_k, z_k, m_k) \mid t_k = t_j\}| \neq D$. Furthermore, we assume that no two measurements of the same modality $m_k$ occur at the same time, i.e. $t_i \neq t_j$ for $i \neq j$ has to be satisfied for all measurements of $m_k$. This assumption is not required for technical reasons but for consistency; it also makes it possible to interpret the results later on.
To summarize our generic setup: we do not require M, the number of observations per time series, to be the same, i.e. $\mathrm{len}(S_i) \neq \mathrm{len}(S_j)$ for $i \neq j$ is permitted, nor do we assume that the time points and modalities of the observations are the same across time series. This setting is common in biomedical time series, for example. Since typical machine learning algorithms are designed to operate on data of a fixed dimension, novel approaches to this non-trivial problem are required.
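As a small illustration (not from the paper's code base), the worked example above can be written as a Python set of (time, value, modality) tuples:

```python
# Worked example from Section 3.1: heart rate (modality 1) and mean arterial
# blood pressure (modality 2) of one in-hospital patient as an unordered set.
S_i = {
    (0.5, 60.0, 1), (3.0, 65.0, 1),                   # HR at 0.5 h and 3.0 h
    (0.5, 80.0, 2), (1.7, 85.0, 2), (3.0, 87.0, 2),   # MAP at 0.5 h, 1.7 h, 3.0 h
}
assert len(S_i) == 5  # lengths may differ per instance; modalities need not align
```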
3.2 OUR MODEL In the following, we describe an approach inspired by differentiable learning of functions that operate on sets (Zaheer et al., 2017; Wagstaff et al., 2019). We phrase the problem of classifying time series on irregular grids as learning a function f on a set of arbitrarily many time series observations following Definition 1, i.e. $S = \{(t_1, z_1, m_1), \ldots, (t_M, z_M, m_M)\}$, such that $f: \mathcal{S} \to \mathbb{R}^C$, where $\mathcal{S}$ represents a generic time series of arbitrary cardinality and $\mathbb{R}^C$ corresponds to the logits of the C classes in the dataset. As previously discussed, we interpret each time series as an unordered set of measurements, where all information is conserved because the observation time is included for each set element. Specifically, we define f to be a set function, i.e. a function that operates on a set and thus has to be invariant to the ordering of the elements in the set. Multiple architectures are applicable to constructing set functions, such as Transformers (Lee et al., 2019; Vaswani et al., 2017) or Deep Sets (Zaheer et al., 2017). Due to preliminary experiments in which Transformers suffered from lower generalization performance in our setting (see Section 4.4 for a quantification), we base this work on the framework of Zaheer et al. (2017). Intuitively, this can be seen as computing multivariate dataset-specific summary statistics, which are optimized to maximize classification performance. Thus, we sum-decompose the set function f into the form

$f(S) = g\left(\frac{1}{|S|} \sum_{s_j \in S} h(s_j)\right)$  (2)

where $h: \Omega \to \mathbb{R}^d$ and $g: \mathbb{R}^d \to \mathbb{R}^C$ are neural networks, $d \in \mathbb{N}^+$ determines the dimensionality of the latent representation, and $s_j$ represents a single observation of the time series S. We can view the averaged representations $\frac{1}{|S|} \sum_{s_j \in S} h(s_j)$ as dataset-specific summary statistics learned to best distinguish the class labels. Equation 2 also implies the beneficial scalability properties of our approach: each embedding can be calculated independently of the others; hence, the constant computational cost of passing a single observation through the function h is scaled by the number of observations, resulting in a runtime of O(M) for a time series of length M. Recently, Wagstaff et al. (2019) derived requirements for a practical universal function representation of sum-decomposable set functions, i.e. the requirements necessary for a sum-decomposable function to represent an arbitrary set function, given that h and g are arbitrarily expressive. In particular, they show that a universal function representation can only be guaranteed provided that $d \geq \max_i \mathrm{len}(S_i)$ is satisfied. During the hyperparameter search, we thus independently sample the dimensionality of the aggregation space and allow it to be on the order of the number of observations expected in the dataset. Further, we explored the utilization of max, sum, and mean as alternative aggregation functions, inspired by Zaheer et al. (2017); Garnelo et al. (2018).
Intuition Our method can be connected to Takens's embedding theorem (Takens, 1981) for dynamical systems: we also observe a set of samples from some unknown (but deterministic) dynamical process; provided the dimensionality of our architecture is sufficiently large (in Takens's embedding theorem, $d > d_B$ is required, where $d_B$ refers to the fractal box-counting dimension (Liebovitch & Toth, 1989), which is typically well below the size of typical neural network architectures), we are capable of reconstructing the system up to diffeomorphism. The crucial difference is that we do not have to construct a time-delay embedding; rather, we let the network learn an embedding that is suitable for classification.
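A minimal NumPy sketch of the sum-decomposed set function in Equation 2; h and g below are linear stand-ins for the MLPs $h_\theta$ and $g_\psi$, and all names are illustrative:

```python
import numpy as np

def set_function(S, h, g):
    """Sum-decomposed set function f(S) = g(1/|S| * sum_j h(s_j)), Eq. 2.

    S    : iterable of observation feature vectors s_j
    h, g : callables standing in for the encoding and aggregation networks
    """
    embeddings = np.stack([h(s) for s in S])  # each h(s_j) computed independently: O(M)
    pooled = embeddings.mean(axis=0)          # permutation-invariant aggregation
    return g(pooled)                          # logits for the C classes

# Toy usage with random linear maps for h and g (illustrative only).
rng = np.random.default_rng(0)
A, B = rng.normal(size=(8, 3)), rng.normal(size=(2, 8))
logits = set_function([rng.normal(size=3) for _ in range(5)],
                      h=lambda s: A @ s, g=lambda z: B @ z)
```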
Time encoding In order to represent the time point of an observation on a normalized scale, we employ a variant of the positional encodings introduced by Vaswani et al. (2017). Preliminary results indicated that this encoding scheme reduces the model's sensitivity towards initialization and training hyperparameters. Specifically, the time encoding converts the one-dimensional time axis into a multi-dimensional input by passing the time t of each observation through multiple sine and cosine functions of varying frequencies. Given a dimensionality $\tau \in \mathbb{N}^+$ of the time encoding, we refer to the encoded position as $x \in \mathbb{R}^\tau$, where

$x_{2k}(t) := \sin\left(t / \text{max\_ts}^{2k/\tau}\right)$  (3)
$x_{2k+1}(t) := \cos\left(t / \text{max\_ts}^{2k/\tau}\right)$  (4)

with $k \in \{0, \ldots, \tau/2\}$ and max_ts representing the maximal time scale that is expected in the data. Intuitively, we select the wavelengths using a geometric progression from $2\pi$ to $\text{max\_ts} \cdot 2\pi$, and treat the number of steps and the maximum timescale max_ts as hyperparameters of the model. Time encodings were used for all experiments, such that an observation is represented as $s_j = (x(t_j), z_j, m_j)$.
Loss function If not mentioned otherwise, we choose h and g in Equation 2 to be multilayer perceptron deep neural networks, parametrized by weights $\theta$ and $\psi$, respectively. We thus denote these neural networks by $h_\theta$ and $g_\psi$; their parameters are shared across all instances per dataset. In our training setup, we follow Zaheer et al. (2017) and apply the devised set function to the complete time series, i.e. to the set of all observations for each time series. Overall, we optimize a loss function that is defined as

$\mathcal{L}(\theta, \psi) := \mathbb{E}_{(S,y) \in \mathcal{D}}\left[\ell\left(y;\; g_\psi\left(\tfrac{1}{|S|} \sum_{s_j \in S} h_\theta(s_j)\right)\right)\right]$  (5)

where $\ell(\cdot)$ represents a task-specific loss function. In our setup, we utilize either binary cross-entropy combined with a sigmoid activation in the last layer for binary and multi-label classification tasks, or categorical cross-entropy combined with a softmax activation in the last layer for multi-class classification tasks.
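The objective in Equation 5 is straightforward to estimate on a mini-batch; the following is a hedged NumPy sketch with categorical cross-entropy as the task-specific loss (names and signatures are ours):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def seft_loss(batch, h_theta, g_psi):
    """Empirical estimate of Eq. 5 with categorical cross-entropy as l(.).

    batch            : list of (S, y) pairs, S a list of observation vectors
    h_theta, g_psi   : encoding / aggregation networks (callables)
    """
    losses = []
    for S, y in batch:
        pooled = np.mean([h_theta(s) for s in S], axis=0)  # 1/|S| sum_j h_theta(s_j)
        probs = softmax(g_psi(pooled))                     # class probabilities
        losses.append(-np.log(probs[y] + 1e-12))           # cross-entropy term
    return float(np.mean(losses))
```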
3.3 ATTENTION-BASED AGGREGATION So far, our method permits encoding sets of arbitrary sizes into a fixed-size representation. For increasingly large set sizes, however, many irrelevant observations could influence the result of the set function. The mean aggregation function is particularly susceptible to this, because the influence of an observation on the embedding shrinks proportionally to the size of the set. We therefore suggest using a weighted mean, which allows the model to decide which observations are relevant and which should be considered irrelevant. This is equivalent to computing an attention $a(S, s_j)$ over the set input elements and subsequently computing the sum over all elements in the set. Our approach is based on scaled dot-product attention with multiple heads $i \in \{1, \ldots, m\}$ in order to cover different aspects of the aggregated set (since we are dealing only with a single instance, i.e. time series, in this section, we use i and j to denote a head and an observation, respectively). We define $a(\cdot)$, i.e. the attention weight function of an individual time series, to depend on the overall set of observations. This is achieved by computing an embedding of the set elements using a smaller set function $f'$, and projecting the concatenation of the set representation and the individual set elements into a d-dimensional space. Specifically, we have $K_{j,i} = [f'(S), s_j]^T W_i$, where $W_i \in \mathbb{R}^{(\dim \mathrm{im}(f') + |s_j|) \times d}$ and $K \in \mathbb{R}^{|S| \times d}$. Furthermore, we define a matrix of query points $Q \in \mathbb{R}^{m \times d}$, which allows the model to summarize different aspects of the dataset via

$e_{j,i} = \frac{K_{j,i} \cdot Q_i}{\sqrt{d}}$ and $a_{j,i} = \frac{\exp(e_{j,i})}{\sum_{j'} \exp(e_{j',i})}$,

where $a_{j,i}$ represents the amount of attention that head i gives to set element j. The head-specific row $Q_i$ of the query matrix $Q$ allows a head to focus on individual aspects (such as the distribution of one or multiple modalities) of a time series. For each head, we multiply the set element embeddings computed via the set function $f$ with the attentions derived for the individual instances, i.e. $r_i = \sum_j a_{j,i} f(s_j)$. The computed representations are concatenated and passed to the aggregation network $g_\psi$ as in a regular set function, i.e. $r^* = [r_1, \ldots, r_m]$. In our setup, we initialize $Q$ with zeros, such that at the beginning of training, the attention mechanism is equivalent to computing the unweighted mean over the set elements. Overall, this aggregation function is similar to Transformers (Vaswani et al., 2017), but differs from them in a few key aspects. Standard Transformer blocks would use the information from all set elements to compute the embedding of an individual set element, leading to a runtime and space complexity of $O(n^2)$. In contrast, our approach computes the embeddings of set elements independently, leading to a lower runtime and memory complexity of $O(n)$. Further, we observed that computing embeddings with information from other set elements (as the Transformer does) actually decreases generalization performance (see Table 1 for details).
4 EXPERIMENTS We executed all experiments and implementations in a unified code base, which we also make available to the community (https://osf.io/2hg74/?view_only=8d45fdf237954948a02f1e2bf701cdf1). While some of the datasets used subsequently have access restrictions, anybody can gain access after satisfying the defined requirements. This ensures the reproducibility of our results. Please consult Appendix A.2 for further details.
4.1 DATASETS In order to benchmark the proposed method, we selected 4 datasets with irregularly-sampled and non-synchronized measurements.
Healing MNIST The H-MNIST dataset was introduced by Krishnan et al. (2015) in order to simulate characteristics which typically occur in medical time series. In our setup, we use a variant of this dataset. Every instance of the dataset contains 10 frames, derived from a single instance of the MNIST dataset, where the digit is rotated by an angle uniformly sampled between −90° and 90°. Furthermore, 3 randomly-selected consecutive frames are augmented with a square artefact in the top left corner of the image in order to introduce seasonality into the time series. Finally, 60% of the data points are randomly discarded in order to yield a final high-dimensional irregularly-sampled time series with non-synchronized measurements. With these settings, each instance has on average 3,136 observations.
MIMIC-III Tasks MIMIC-III (Johnson et al., 2016) is a widely-used, freely-accessible dataset containing around 50,000 distinct ICU stays. The median length of stay is 2.1 days, and a wide range of physiological measurements (e.g. arterial blood pressure, respiration rate, heart rate) are recorded with a resolution of 1 h. Furthermore, laboratory test results, collected at irregular time intervals, are available. Recently, Harutyunyan et al. (2019) defined a set of machine learning tasks, labels, and benchmarks using a subset of the MIMIC-III dataset.
We trained and evaluated our method and competing methods on the binary mortality prediction task (M3-Mortality) and on the multi-class problem of phenotype classification (M3-Phenotyping), while applying the additional filtering described in Appendix A.1. The goal of the mortality prediction task is to predict whether a patient will die during his/her hospital stay using only data from the first 48 hours of the ICU stay. This dataset contains around 21,000 stays, of which approximately 10% result in death. The phenotype classification task consists of 40,000 patients, each of whom can suffer from a multitude of 25 acute care conditions.
Physionet Mortality Prediction Challenge The 2012 Physionet challenge dataset (Goldberger et al., 2000), which we abbreviate P-Mortality, contains 12,000 ICU stays, each of which lasts at least 48 h. For each stay, a set of general descriptors (such as gender, age, height, weight) was collected at admission time. Depending on the course of the stay and patient status, up to 37 time series variables were measured (e.g. blood pressure, lactate, respiration rate, temperature). While some modalities might be measured at regular time intervals (e.g. hourly or daily), some are only collected when required; not all variables are available for each stay. The goal of the challenge was to predict if, and with which certainty, a patient will die during the hospital stay. The training set consists of 8,000 stays, while the testing set comprises 4,000 ICU visits. Both datasets are similarly imbalanced, with a prevalence of around 14%. For simplicity, the general descriptors (such as age and weight) were included as time points with a single observation at the beginning of the stay. This treatment is similar to the approach by Harutyunyan et al. (2019) in the MIMIC-III benchmarking datasets. Please refer to Table A.1, Table A.2, and Table A.3 in the appendix for a more detailed enumeration of sample sizes and label distributions. The total number of samples may slightly deviate from the originally published splits, as time series of excessive length prevented fitting some methods in reasonable time and were therefore excluded.
4.2 COMPETITOR METHODS
GRU-simple GRU-SIMPLE (Che et al., 2018) augments the input at time t of a Gated Recurrent Unit RNN with a measurement mask $m_t^d$ and a $\delta_t$ matrix, which contains the time since the last measurement of the corresponding modality d, such that

$\delta_t^d = \begin{cases} s_t - s_{t-1} + \delta_{t-1}^d & t > 1,\ m_{t-1}^d = 0 \\ s_t - s_{t-1} & t > 1,\ m_{t-1}^d = 1 \\ 0 & t = 0 \end{cases}$

where $s_t$ represents the time associated with time step t.
Phased-LSTM The PHASED-LSTM (Neil et al., 2016) introduced a biologically inspired, time-dependent gating mechanism which regulates access to the hidden state and cell state of a Long Short-Term Memory (LSTM) cell (Hochreiter & Schmidhuber, 1997). While this allows the network to handle event-based sequences with irregularly spaced observations, the approach does not support unaligned measurements. In order to still provide the architecture with all relevant information, we augment the input in a similar fashion as described for the GRU-SIMPLE approach.
GRU-D GRU-D or GRU-Decay (Che et al., 2018) contains modifications to the GRU RNN cell, allowing it to decay past observations to the mean imputation of a modality using a learnable decay rate. By additionally providing the measurement masks as an input, the recurrent neural network learns how fast to decay the last fed-in value back to the mean imputation of the missing data modality.
Interpolation Prediction Networks IP-NETWORKS (Shukla & Marlin, 2019) apply multiple semi-parametric interpolation schemes to irregularly-sampled time series to obtain regularly-sampled representations that cover long-term trends, transients, and also sampling information. The method combines a univariate interpolation step with a subsequent multivariate interpolation; the parameters of the interpolation network are trained with the classifier in an end-to-end fashion.
Transformer In the TRANSFORMER architecture (Vaswani et al., 2017), the elements of a sequence are encoded simultaneously, and information between sequence elements is captured using Multi-Head Attention blocks. In our case, an individual sequence element corresponds to all measurements available at a given time point, augmented with a measurement indicator. Transformers are normally used for sequence-to-sequence modelling tasks; in our setup, they were adapted to classification tasks by mean-aggregating the final representation. This representation is then fed into a one-layer MLP to predict the logits of the individual classes.
4.3 EXPERIMENTAL SETUP To permit a fair comparison between the methods, we executed a hyperparameter search for each model on each dataset, uniformly sampling 20 configurations according to Appendix A.3. Training was stopped after 20 epochs without improvement of the validation loss, and the hyperparameters with the best overall validation performance were selected for quantifying the performance on the test set. The train, validation, and test splits were the same for all models and all evaluations. Final performance on the test set was calculated from 3 independent runs of the models; evaluation took place after each model was restored to the state with the best validation loss. In all subsequent benchmarks, we use the standard deviation of the test performance across these runs as a generalization performance estimate.
4.4 RESULTS The results are shown in Table 1. Overall, our proposed method exhibits the lowest per-epoch runtime on most datasets, while yielding either competitive or state-of-the-art performance. Further, the trade-off between runtime and performance of the proposed method is very good on all datasets (see Figure A.1 and Figure A.2 in the appendix for a visualization of this argument). In order to elucidate the contribution of individual model components, we also provide an ablation study in Table A.4. Here we see that the attention mechanism contributes more to the model performance, while the positional encoding seems to be beneficial for datasets with highly-varying time series lengths, in particular M3-Phenotyping.
Opening the black box In the medical domain, it is of particular interest to understand the decisions a model makes based on the input it is provided with. The formulation of our model and its per-observation perspective on time series give it the unique property of being able to quantify the extent to which an individual observation contributed to the output of the model. We exemplify this in Figure 2 with a patient time series combined with our model's attention values, displayed for a set of clinically relevant variables. After reviewing these records with our medical expert, we find that our model is able to pick up regions with drastic changes in individual modalities. Moreover, it is able to inspect other modalities at the same associated time (for instance, at hour 20).
This is behaviour similar to what one would expect from an alerted clinician reviewing the logged medical records. Interestingly, we observe that the model attends to known trends (consistent with domain knowledge about patient deterioration ultimately resulting in death), such as an increase in lactate or hemodynamic instability, as indicated by drops in blood pressure. Furthermore, the model appears to be alerted by persisting low urine output; after several hours, this can be indicative of kidney failure.
5 CONCLUSION In this work, we presented a novel approach for classifying time series with irregularly-sampled and unaligned, that is, non-synchronized, observations. Our approach yields state-of-the-art to strongly competitive performance on numerous simulated and real-world datasets, while reducing runtime by almost half. Moreover, we demonstrated that combining the perspective of individual observations with an attention mechanism permits increasing the interpretability of the model. This is particularly relevant for medical and healthcare applications. For future work, we reserve a more extensive exploration of the learned latent representation to evaluate its utility for clustering of time series or visualization of their similarity.
A APPENDIX
A.1 DATA FILTERING Due to the memory requirements of some of the competitor methods, it was necessary to exclude time series with an extremely high number of measurements. For M3-Phenotyping, patients with more than 2000 distinct time points were discarded from training. For M3-Mortality, patients with more than 1000 time points were discarded, as they exhibited dramatically different measuring frequencies compared to the rest of the dataset.
A.2 IMPLEMENTATION DETAILS All experiments were run using tensorflow 0.15.0rc0, and training was performed on NVIDIA GeForce GTX 1080 GPUs. In order to allow a fair comparison between methods, the input processing pipeline cached model-specific representations and transformations of the data. To further increase the efficiency of the RNNs, sequences were binned into buckets of jointly trained instances depending on their sequence length. The buckets were determined according to the (0.25, 0.5, 0.75) quantiles of the length distributions of the datasets.
A.3 TRAINING, MODEL ARCHITECTURES AND HYPERPARAMETER SEARCH
General All models were trained using the Adam optimizer, while randomly sampling the learning rate from (0.001, 0.0005, 0.00025, 0.0001). Further, the batch size of all methods was sampled from the values (32, 64, 128, 256).
Recurrent neural networks For the RNN-based methods (GRU-SIMPLE, PHASED-LSTM, GRU-D and IP-NETS), the number of units was sampled from the values (16, 32, 64, 128, 256, 512). Further, recurrent dropout and input dropout were sampled from the values (0.0, 0.1, 0.2, 0.3). Only for the PHASED-LSTM method did we not apply dropout to the recurrent state and the inputs, as the learnt frequencies were hypothesized to fulfill a similar function as dropout (Neil et al., 2016).
SEFT We vary the number of layers, the dropout between the layers, and the number of nodes per layer for both the encoding network hθ and the aggregation network gψ over the same ranges. The number of layers is randomly sampled between 1 and 5, the number of nodes in a layer is uniformly sampled from the values (16, 32, 64, 128, 256, 512), and the dropout fraction is sampled from the values (0.0, 0.1, 0.2, 0.3).
The width of the embedding space prior to aggregation is sampled from the values (32, 64, 128, 256, 512, 1024, 2048). The aggregation function is selected to be one of mean, sum, and max. The number of dimensions used for the positional embedding τ is selected uniformly from (4, 8, 16), and max_ts is selected from the values (10, 100, 1000).
SEFT-Attn The parameters for the encoding and aggregation networks are sampled in a similar fashion as for SEFT. In contrast, we set the aggregation function to be sum, as described in the text. Further, we use a constant architecture for the attention network f′ with 2 layers, 64 nodes per layer, 4 heads, and a dimensionality of the dot-product space d of 128. We only sample the amount of attention dropout, uniformly from the values (0.0, 0.1, 0.25, 0.5).
Transformer We utilize the same model architecture as defined in Vaswani et al. (2017), where we use a one-hidden-layer MLP as the feed-forward network, with the dimensionality of the hidden layer selected to be twice the model dimensionality. The parameters for the Transformer network were sampled according to the following criteria: the dimensionality of the model was sampled uniformly from the values (64, 128, 256, 512, 1024), the number of attention heads per layer from the values (2, 4, 8), and the number of layers from the integer range [1, 6]. Further, we sampled the amount of dropout of the residual connections and the amount of attention dropout uniformly from the values (0.0, 0.1, 0.2, 0.3, 0.5), and the maximal timescale for the time embedding from the values (10, 100, 1000) (similar to the SEFT approach).
1. What is the main idea behind the paper's approach to processing irregular time series data?
2. How does the proposed method differ from traditional sequential algorithms like RNNs?
3. What is the significance of the Transformer architecture in relation to the paper's proposal?
4. Why should the authors include the Transformers in their baselines?
5. What is the issue with the number of data points in the MIMIC-III Mortality benchmark?
Review
Review The idea of this paper is straightforward and clear: treat the irregular time series as a bag of events, augment them with time information using positional encoding, and process the events in parallel. The approach is certainly faster than sequential algorithms such as RNNs and their extensions. However, as shown in the experiments, because it does not encode a "sequentialness prior" into the model, it is less accurate. Compared to RNNs, the proposed model has better access to the entire length of a sequence and does not suffer from the limited-memory issues of RNNs and their variants. The proposed idea in this paper can be considered a simplified version of the Transformer. Like Transformers, time and order are only provided to the model through the positional encoding, and attention is central to the aggregation over the sequence. Realizing the relationship with Transformers not only decreases the degree of novelty of this paper but also requires the authors to include Transformers in the baselines. Finally, the results reported in the experiments are nice, especially for the baseline GRU-D! However, the MIMIC-III mortality benchmark has a lot more than 21,000 stays to the best of my recollection. Can you please elaborate on how the number of data points was decreased?
ICLR
Title Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning
Abstract We formally study how ensembles of deep learning models can improve test accuracy, and how the superior performance of an ensemble can be distilled into a single model using knowledge distillation. We consider the challenging case where the ensemble is simply an average of the outputs of a few independently trained neural networks with the same architecture, trained using the same algorithm on the same data set, differing only in the random seeds used for initialization. We show that ensemble/knowledge distillation in deep learning works very differently from traditional learning theory (such as boosting or NTKs). We develop a theory showing that when data has a structure we refer to as "multi-view", an ensemble of independently trained neural networks can provably improve test accuracy, and such superior test accuracy can also be provably distilled into a single model. Our result sheds light on how ensemble works in deep learning in a way that is completely different from traditional theorems, and how the "dark knowledge" is hidden in the outputs of the ensemble and can be used in distillation.
1 INTRODUCTION Ensemble (Dietterich, 2000; Hansen & Salamon, 1990; Polikar, 2006) is one of the most powerful techniques in practice to improve the performance of deep learning. By simply averaging the outputs of merely a few (like 3 or 10) independently trained neural networks of the same architecture, trained with the same method over the same training data, one can significantly boost the prediction accuracy on the test set compared to the individual models. The only difference is the randomness used to initialize these networks and/or the randomness during training. Moreover, it was discovered by Hinton et al. (2015) that such superior performance of the ensemble can be transferred into a single model (of the same size as the individual models) using a technique called knowledge distillation: simply train a single model to match the output of the ensemble (such as "90% cat + 10% car", also known as soft labels), as opposed to the true data labels, over the same training data. On the theory side, there are many works studying the superior performance of ensembles from principled perspectives (see the full version for citations). However, most of these works only apply to: (1) boosting, where the coefficients associated with the combinations of the single models are actually trained, instead of simply taking an average; (2) bootstrapping/bagging, where the training data are different for each single model; (3) ensembles of models of different types and architectures; or (4) ensembles of random features or decision trees. To the best of our knowledge, none of these cited works apply to the particular type of ensemble that is widely used in deep learning: simply taking a uniform average of the outputs of the learners, which are neural networks with the same architecture, trained by stochastic gradient descent (SGD) over the same training set. In fact, very critically, for deep learning models:
• TRAINING AVERAGE DOES NOT WORK: if one directly trains to learn an average of individual neural networks initialized by different seeds, the performance is much worse than that of the ensemble.
• KNOWLEDGE DISTILLATION WORKS: the superior performance of ensemble in deep learning can be distilled into a single model (Hinton et al., 2015).
¹ Full version of this paper can be found at https://arxiv.org/abs/2012.09816. • SELF-DISTILLATION WORKS: even when distilling a single model into another of the same size, there is a performance boost (Furlanello et al., 2018; Mobahi et al., 2020; Zhang et al., 2019). We are unaware of any satisfactory theoretical explanation for the phenomena above. For instance, as we shall argue, some traditional views of why ensemble works, such as 'ensemble can enlarge the feature space in random feature mappings', even give contradictory explanations of the above phenomena, and thus cannot explain knowledge distillation or ensemble in deep learning. Motivated by this gap between theory and practice, we study the following question for multi-class classification: Our theoretical questions: How does ensemble improve the test-time performance in deep learning when we simply (unweightedly) average over a few independently trained neural networks? – Especially when all the neural networks have the same architecture, are trained over the same data set using the same standard training algorithm, and only differ by the random seeds, and even when all single models already have 100% training accuracy? How can such superior test-time performance of ensemble be later "distilled" into a single neural network of the same architecture, simply by training the single model to match the output of the ensemble over the same training data set? Our results. We prove, for certain multi-class classification tasks with a special structure we refer to as multi-view, with a training set $\mathcal{Z}$ consisting of $N$ i.i.d. samples from some unknown distribution $\mathcal{D}$, for a certain two-layer convolutional network $f$ with (smoothed-)ReLU activation as learner: • (Single model has bad test accuracy): there is a value $\mu > 0$ such that when a single model $f$ is trained over $\mathcal{Z}$ using the cross-entropy loss, via gradient descent (GD) starting from random Gaussian initialization, the model can reach zero training error efficiently. However, w.h.p. the prediction (classification) error of $f$ over $\mathcal{D}$ is between $0.49\mu$ and $0.51\mu$. • (Ensemble provably improves test accuracy): let $f_1, f_2, \cdots, f_L$ be $L = \tilde{\Omega}(1)$ independently trained single models as above; then w.h.p. $G = \frac{1}{L}\sum_{\ell} f_\ell$ has prediction error $\le 0.01\mu$ over $\mathcal{D}$. • (Ensemble can be distilled into a single model): if we further train (using GD from random initialization) another single model $f_0$ (same architecture as each $f_\ell$) to match the output of $G = \frac{1}{L}\sum_{\ell} f_\ell$ merely over the same training data set $\mathcal{Z}$, then $f_0$ can be trained efficiently and w.h.p. $f_0$ will have prediction error $\le 0.01\mu$ over $\mathcal{D}$ as well. • (Self-distillation also improves test accuracy): if we further train (using GD from random initialization) another single model $f'$ (same architecture as $f_1$) to match the output of the single model $f_1$ merely over the same training data set $\mathcal{Z}$, then $f'$ can be trained efficiently and w.h.p. has prediction error at most $0.26\mu$ over $\mathcal{D}$. The main idea is that self-distillation is performing "implicit ensemble + knowledge distillation", as we shall argue in Section 4.2. We defer discussion of our empirical results to Section 5. However, we highlight some of the empirical findings, as they confirm and justify our theoretical approach to studying ensemble and knowledge distillation in deep learning. Specifically, we give empirical evidence showing that: • Knowledge distillation does not work for random feature mappings; and ensemble in deep learning is very different from ensemble in random feature mappings (see Figure 1).
• Special structures in data (such as the "multi-view" structure we shall introduce) are needed for ensembles of neural networks to work. • The variance due to label noise or the non-convex landscape of training, in the independently trained models, may not be connected to the superior performance of ensemble in deep learning. 2 OUR METHODOLOGY AND INTUITION 2.1 A FAILED ATTEMPT USING RANDOM FEATURE MAPPINGS The recent advance in deep learning theory shows that, under certain circumstances, neural networks can be treated as linear functions over random feature mappings — see (Allen-Zhu et al., 2019b; Arora et al., 2019b; Daniely et al., 2016; Du et al., 2018b; Jacot et al., 2018; Zou et al., 2018) and the references therein. In particular, the theory shows that when $f : \mathbb{R}^{D+d} \to \mathbb{R}$ is a neural network with inputs $x \in \mathbb{R}^d$ and weights $W \in \mathbb{R}^D$, in some cases $f(W, x)$ can be approximated by $f(W, x) \approx f(W_0, x) + \langle W - W_0, \nabla_W f(W_0, x) \rangle$, where $W_0$ is the random initialization of the neural network and $\Phi_{W_0}(x) := \nabla_W f(W_0, x)$ is the neural tangent kernel (NTK) feature mapping. This is known as the NTK approach. If this approximation holds, then training a neural network can be approximated by learning a linear function over the random features $\Phi_{W_0}(x)$, which is very theory-friendly. Ensemble works for random features / NTK. Traditional theorems (Alhamdoosh & Wang, 2014; Brown et al., 2005a; Bryll et al., 2003; Tsymbal et al., 2005) suggest that an ensemble of independently trained random feature models can indeed significantly improve test-time performance, as it enlarges the feature space from $\Phi_{W_0}(x)$ to $\{\Phi_{W_0^{(i)}}(x)\}_{i \in [L]}$ for $L$ many independently sampled $W_0^{(i)}$ (a small numeric sketch of this appears at the end of this subsection). This can be viewed as a feature selection process (Alvarez et al., 2012; Cai et al., 2018; Oliveira et al., 2003; Opitz, 1999; Rokach, 2010), and we have confirmed it for NTK in practice; see Figure 1. However, can we understand ensemble and knowledge distillation in DL as feature selection using NTK? Unfortunately, our empirical results provide many counterexamples to those arguments; see the discussions below and Figure 1. Contradiction 1: training the average works even better. Although an ensemble of linear functions over NTK features with different random seeds, $f_i(x) = \langle W^{(i)}, \Phi_{W_0^{(i)}}(x) \rangle$, does improve test accuracy, such improvement is mainly due to the use of a larger set of random features, whose combinations contain functions that generalize better. To see this, we observe that an even superior performance (than the ensemble) can simply be obtained by directly training $F(x) = \frac{1}{L}(f_1 + f_2 + \cdots + f_L)$ from random initialization. In contrast, recall that if the $f_i(x)$'s are multi-layer neural networks with different random seeds, then training their average barely gives any better performance compared to the individual networks $f_i$, as now all the $f_i$'s are capable of learning the same set of features. Contradiction 2: knowledge distillation does not work. For NTK feature mappings, we observe that the result obtained by ensemble cannot be distilled at all into individual models, indicating that the features selected by the ensemble are not contained in the features $\Phi_{W_0^{(i)}}(x)$ of any individual model. In contrast, in actual deep learning, ensemble does not enlarge the feature space: an individual neural network is capable of learning the features of the ensemble model. In sum, ensemble in deep learning may be very different from ensemble in random features.
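To make the NTK picture above concrete, here is a minimal numpy sketch (our own toy construction, not the paper's experiments; the one-hidden-layer ReLU network, its fixed second layer a, and all sizes are illustrative assumptions). For f(W, x) = Σ_r a_r·ReLU(⟨w_r, x⟩), the NTK feature map Φ_{W0}(x) = ∇_W f(W0, x) has the closed form a_r·ReLU′(⟨w_r, x⟩)·x per hidden unit, and an ensemble of L linear models over independently drawn Φ_{W0^{(i)}} is exactly one linear model over the L-fold concatenated features, which is why, in this regime, ensembling amounts to feature-space enlargement and directly training the average can only do better.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, L, n = 20, 50, 4, 100   # input dim, width, number of random inits, samples

def ntk_features(X, W0, a):
    # Phi_{W0}(x) = grad_W f(W0, x) for f(W, x) = sum_r a_r * relu(<w_r, x>):
    # the block for hidden unit r is a_r * relu'(<w_r, x>) * x, flattened per sample.
    gate = (X @ W0.T > 0).astype(float)              # relu'(<w_r, x>), shape (n, m)
    return ((gate * a)[:, :, None] * X[:, None, :]).reshape(len(X), -1)

X = rng.standard_normal((n, d))
models = []
for _ in range(L):                                   # L independent initializations
    W0 = rng.standard_normal((m, d)) / np.sqrt(d)
    a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)
    w = rng.standard_normal(m * d)                   # a linear model over Phi_{W0}
    models.append((ntk_features(X, W0, a), w))

# Ensemble of the L linear NTK models ...
ens_pred = np.mean([Phi @ w for Phi, w in models], axis=0)
# ... is exactly ONE linear model over the L-fold concatenated (enlarged) features:
Phi_cat = np.concatenate([Phi for Phi, _ in models], axis=1)
w_cat = np.concatenate([w for _, w in models]) / L
print(np.allclose(ens_pred, Phi_cat @ w_cat))        # True
print("feature dim per model:", m * d, " after 'ensembling':", Phi_cat.shape[1])
```

The printed True confirms the algebraic identity; the enlarged dimension is the "feature selection" resource that the traditional ensemble theorems cited above exploit.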
It may be more accurate to study ensemble / knowledge distillation in deep learning as a feature learning process, instead of a feature selection process. But still, we point out a fundamental difficulty: Key challenge: If a single deep learning model is capable, through knowledge distillation, of learning the features of the ensemble model and achieving better test accuracy compared to training the single model directly (and the same training accuracy, typically at the global optimum of 100%), then why can the single model not learn these features directly when we train it to match the true data labels? What is the dark knowledge hidden in the output of the ensemble (i.e., the soft label: for a k-class classification problem, the output of a model g(x) is usually k-dimensional and represents a softmax probability distribution over the k target classes) compared to the original hard label? 2.2 ENSEMBLE IN DEEP LEARNING: A FEATURE LEARNING PROCESS Before addressing the key challenge, we point out that prior works are very limited with respect to studying neural network training as a feature learning process. Most of the existing works proving that neural networks can learn features only focus on the case when the input is Gaussian or Gaussian-like — see for instance (Kawaguchi, 2016; Soudry & Carmon, 2016; Xie et al., 2016) and many others. However, as we demonstrate in Figure 7 in the full version, ensemble in DL might not improve test accuracy when inputs are Gaussian-like: empirically, ensemble does not improve test accuracy in deep learning in certain scenarios where the distribution of the input data is Gaussian or even a mixture of Gaussians. This is true over various learner network structures (fully-connected, residual, and convolutional neural networks) and various labeling functions (when the labels are generated by linear functions, fully-connected, residual, or convolutional networks, with/without label noise, with/without a classification margin). Bias-variance view of ensemble: Some prior works also try to attribute the benefit of ensemble to reducing the variance of individual solutions due to label noise or the non-convex landscape of the training objective. However, reducing such variance can reduce a convex test loss (typically cross-entropy), but not necessarily the test classification error. Concretely, the synthetic experiments in Figure 7 show that, after applying ensemble over Gaussian-like inputs, the variance of the model outputs is reduced but the test accuracy is not improved. We give much more empirical evidence that the variance (either from label noise or from the non-convex landscape) is usually not the cause of why ensemble works in deep learning; see Section 5 (a small numeric sketch of this point follows at the end of this subsection). Hence, to understand the true benefit of ensemble in deep learning in theory, we would like to study a setting that can approximate practical deep learning, where: • The input distribution is more structured than standard Gaussian and there is no label noise. (From the above discussions, ensemble cannot work for deep learning in a distribution-free manner.) • The individual neural networks are all well-trained, in the sense that the training accuracy in the end is 100%, and there is nearly no variance in the test accuracy for individual models. (So training never fails.) In this work, we propose to study a setting of data that we refer to as multi-view, where the above two conditions both hold when we train a two-layer neural network with (smoothed-)ReLU activations.
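A toy numeric illustration of the bias-variance point above (our own example, not the paper's Figure 7; all constants are arbitrary). Every model is biased toward the wrong class and carries independent noise; averaging shrinks the output variance and slightly reduces the convex cross-entropy, yet the classification error is not improved; here it even grows, because the error comes from the shared bias, not from the variance.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# True label is class 0, but every model puts slightly more mass on class 1:
# same bias, independent noise. Shape: (n_models, n_points, n_classes).
n_models, n_points = 10, 1000
base = np.array([0.9, 1.1])                        # biased toward the wrong class
logits = base + 0.3 * rng.standard_normal((n_models, n_points, 2))

single = logits[0]
ens = logits.mean(axis=0)                          # ensemble = average of outputs

def ce_and_err(z):
    p = softmax(z)
    return -np.log(p[:, 0]).mean(), (z.argmax(axis=1) != 0).mean()

for name, z in [("single model", single), ("ensemble", ens)]:
    ce, err = ce_and_err(z)
    print(f"{name}: output variance {z.var():.3f}, "
          f"cross-entropy {ce:.3f}, classification error {err:.2f}")
```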
We also argue that the multi-view structure we consider is fairly common in the data sets used in practice, in particular for vision tasks. We give more details below. 2.3 OUR APPROACH: LEARNING MULTI-VIEW DATA Let us first give a thought experiment to illustrate our approach; we present the precise mathematical definition of the "multi-view" structure in Section 3. Consider a binary classification problem and four "features" $v_1, v_2, v_3, v_4$. The first two features correspond to the first class label, and the next two features correspond to the second class label. In the data distribution: • When the label is class 1, then: both $v_1, v_2$ appear with weight 1 and one of $v_3, v_4$ appears with weight 0.1, w.p. 80%; only $v_1$ appears with weight 1 and one of $v_3, v_4$ appears with weight 0.1, w.p. 10%; only $v_2$ appears with weight 1 and one of $v_3, v_4$ appears with weight 0.1, w.p. 10%. • When the label is class 2, then: both $v_3, v_4$ appear with weight 1 and one of $v_1, v_2$ appears with weight 0.1, w.p. 80%; only $v_3$ appears with weight 1 and one of $v_1, v_2$ appears with weight 0.1, w.p. 10%; only $v_4$ appears with weight 1 and one of $v_1, v_2$ appears with weight 0.1, w.p. 10%. (One can for simplicity think of "$v$ appears with weight $\alpha$ and $w$ appears with weight $\beta$" as data $= \alpha v + \beta w + \text{noise}$.) We call the 80% of the data multi-view data: these are the data where multiple features exist and can be used to classify them correctly. We call the remaining 20% of the data single-view data: some features for the correct label are missing. How individual neural networks learn. Under the multi-view data defined above, if we train a neural network using the cross-entropy loss via gradient descent (GD) from random initialization, then during the training process of the individual networks we show that: • The network will quickly pick up one of the features $v \in \{v_1, v_2\}$ for the first label, and one of the features $v' \in \{v_3, v_4\}$ for the second label. So 90% of the training examples, consisting of all the multi-view data and half of the single-view data (those with feature $v$ or $v'$), are classified correctly. Once classified correctly (with a large margin), these data begin to contribute negligibly to the gradient, by the nature of the cross-entropy loss. • Next, the network will memorize (using, e.g., the noise in the data) the remaining 10% of the training examples without learning any new features, due to the insufficient number of left-over samples after the first phase, thus achieving training accuracy 100% but test accuracy 90%. How ensemble improves test accuracy. It is simple to see why ensemble works. Depending on the randomness of initialization, each individual network picks up $v_1$ or $v_2$, each w.p. 50%. Hence, as long as we ensemble $\tilde{O}(1)$ many independently trained models, w.h.p. their ensemble will pick up both features $\{v_1, v_2\}$ and both features $\{v_3, v_4\}$. Thus, all the data will be classified correctly. How knowledge distillation works. Perhaps less obvious is how knowledge distillation works. Since the ensemble learns all the features $v_1, v_2, v_3, v_4$, given a multi-view data point with label 1, the ensemble will actually output $\propto (2, 0.1)$, where the 2 comes from features $v_1, v_2$ and the 0.1 comes from one of $v_3, v_4$. On the other hand, an individual model learning only one of $v_3, v_4$ will actually output $\propto (2, 0)$ when the feature $v_3$ or $v_4$ present in the data does not match the one learned by the model (a small numeric sketch of these soft labels follows below).
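A small numeric sketch of the soft labels just described (a toy computation under footnote 3's data = αv + βw + noise convention; taking the class scores to be the summed weights of the features each model has learned is our simplification). The ensemble's soft label places visibly more mass on class 2 than the single model's, and that gap, absent from the hard label (1, 0), is precisely the signal that forces the student to pick up the missing feature.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Class-1 multi-view input containing v1, v2 (weight 1) and v3 (weight 0.1).
# Scores = summed weights of the features a model has learned, per class.
ensemble_scores = np.array([2.0, 0.1])  # knows v1, v2, v3, v4 -> sees v3's 0.1
single_scores   = np.array([2.0, 0.0])  # learned only v4 -> blind to v3

p_ens, p_single = softmax(ensemble_scores), softmax(single_scores)
print("ensemble soft label:", p_ens.round(3))     # ~[0.870, 0.130]
print("single-model label :", p_single.round(3))  # ~[0.881, 0.119]
print("distillation signal on class 2:", (p_ens - p_single)[1].round(3))
```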
Hence, by training the individual model to match the output of the ensemble, the individual model is forced to learn both features $v_3, v_4$, even though it has already perfectly classified the training data. This is the "dark knowledge" hidden in the output of the ensemble model. (This theoretical finding is consistent with practice: Figure 8 in the full paper suggests that models trained via knowledge distillation should have learned most of the features, and further computing their ensemble does not give much of a performance boost.) Significance of our technique. Our work belongs to the generic framework of feature learning in DL in which one proves that certain aspects of the algorithm (e.g., the randomness) affect the order in which features are learned. This is fundamentally different from convex optimization, such as kernel methods, where (with $\ell_2$ regularization) there is a unique global minimum, so the choice of the random seed does not matter (and thus ensemble does not help). There are other works that consider other aspects, such as the choice of learning rate, that can affect the order in which features are learned (Li et al., 2019). Our work is fundamentally different: they only focus on the NTK setting where the features are not learned; we study a feature learning process. Recall that the NTK setting cannot be used to explain ensemble and distillation in DL. Our work extends the reach of traditional machine learning theory, where typically "generalization" is separated from "optimization". Such "separate" treatment might not be enough to understand how deep learning works. Meaningfulness of our multi-view hypothesis. Such "multi-view" structure is very common in many of the data sets where deep learning excels. In vision data sets in particular, as illustrated in Figure 2, a car image can be classified as a car by looking at the headlights, the wheels, or the windows. For a typical placement of a car in images, we can observe all these features and use any of them to classify the image as a car. However, there are car images taken from a particular angle where one or more features can be missing. For example, an image of a car facing forward might be missing the wheel feature. Moreover, some cars might also have a small fraction of "cat features": for example, the headlights might appear similar to cat eyes or the ears of a cat. This can be used as the "dark knowledge" by the single model to learn from the ensemble. In Figure 3, we visualize the learned features from an actual neural network to show that they can indeed capture different views. In Figure 5, we plot the "heatmap" for some car images to illustrate that single models (trained from different random seeds) indeed pick up different parts of the input image to classify it as a car. In Figure 9, we manually delete, for instance, 7/8 of the channels in some intermediate layer of a ResNet, and show that the test accuracy may not be affected by much after ensembling — thus supporting that the multi-view hypothesis can indeed exist even in the intermediate layers of a neural network and that ensemble is indeed collecting all these views. 3 PROBLEM SETUP The "multi-view" data distribution is a straightforward generalization of the intuitive setting in Section 2.3. For simplicity, in the main body we use example choices of the parameters, mainly as functions of $k$ (such as $P = k^2$, $\gamma = 1/k^{1.5}$, $\mu = k^{1.2}/N$, $\rho = k^{-0.01}$, $\sigma_0 = 1/\sqrt{k}$, as we shall see), and we consider the case when $k$ is sufficiently large.
In our full version, we give a much larger range of parameters for which the theorems hold. 3.1 DATA DISTRIBUTION AND NOTATIONS We consider learning a $k$-class classification problem over $P$-patch inputs, where each patch has dimension $d$. In symbols, each labelled data point is represented by $(X, y)$, where $X = (x_1, x_2, \cdots, x_P) \in (\mathbb{R}^d)^P$ is the data vector and $y \in [k]$ is the data label. For simplicity, we focus on the case when $P = k^2$ and $d = \mathrm{poly}(k)$ for a large polynomial, and we consider the setting when $k$ is sufficiently large. (If one wants to work with a fixed $k$, say $k = 2$, our theorem can also be modified to that setting by increasing the number of features per class; we keep our current setting with two features to simplify the notation.) We use "w.h.p." to denote with probability at least $1 - e^{-\Omega(\log^2 k)}$, and use the $\tilde{O}, \tilde{\Theta}, \tilde{\Omega}$ notations to hide polylogarithmic factors in $k$. We first assume that each label class $j \in [k]$ has multiple associated features, say two features for simplicity of the math, represented by unit feature vectors $v_{j,1}, v_{j,2} \in \mathbb{R}^d$. For notational simplicity, we assume that all the features are orthogonal, namely, $\forall j, j' \in [k]$ and $\forall \ell, \ell' \in [2]$, $\|v_{j,\ell}\|_2 = 1$ and $v_{j,\ell} \perp v_{j',\ell'}$ when $(j, \ell) \neq (j', \ell')$, although our work also extends trivially to the "incoherent" case. We denote by $\mathcal{V} := \{v_{j,1}, v_{j,2}\}_{j \in [k]}$ the set of all features. We consider the following data and label distribution. Let $C_p$ be a global constant and $s \in [1, k^{0.2}]$ be a sparsity parameter. To be concise, we define the multi-view distribution $\mathcal{D}_m$ and the single-view distribution $\mathcal{D}_s$ together. Due to space limitations, we hide the specification of the random "noise" here and defer it to the full version. (At a high level, we allow such "noise" to be any feature noise plus Gaussian noise, such as $\mathrm{noise} = \sum_{v' \in \mathcal{V}} \alpha_{p,v'} v' + \xi_p \in \mathbb{R}^d$, where each $\alpha_{p,v'} \in [0, \gamma]$ can be arbitrary and $\xi_p \sim \mathcal{N}(0, \sigma_p^2 I)$.) Definition 3.1 (data distributions $\mathcal{D}_m$ and $\mathcal{D}_s$). Given $\mathcal{D} \in \{\mathcal{D}_m, \mathcal{D}_s\}$, we define $(X, y) \sim \mathcal{D}$ as follows. First choose the label $y \in [k]$ uniformly at random. Then, the data vector $X$ is generated as follows (also illustrated in Figure 4). 1. Denote by $\mathcal{V}(X) = \{v_{y,1}, v_{y,2}\} \cup \mathcal{V}'$ the set of feature vectors used in this data vector $X$, where $\mathcal{V}'$ is a set of features uniformly sampled from $\{v_{j',1}, v_{j',2}\}_{j' \in [k] \setminus \{y\}}$, each with probability $s/k$. 2. For each $v \in \mathcal{V}(X)$, pick $C_p$ many disjoint patches in $[P]$ and denote them by $\mathcal{P}_v(X) \subset [P]$ (the distribution of these patches can be arbitrary). We denote $\mathcal{P}(X) = \cup_{v \in \mathcal{V}(X)} \mathcal{P}_v(X)$. 3. If $\mathcal{D} = \mathcal{D}_s$ is the single-view distribution, pick a value $\hat{\ell} = \hat{\ell}(X) \in [2]$ uniformly at random. 4. For each $v \in \mathcal{V}(X)$ and $p \in \mathcal{P}_v(X)$, we set $x_p = z_p v + \text{"noise"} \in \mathbb{R}^d$, where the random coefficients $z_p \geq 0$ satisfy the following. In the case of the multi-view distribution $\mathcal{D} = \mathcal{D}_m$: • $\sum_{p \in \mathcal{P}_v(X)} z_p \in [1, O(1)]$ when $v \in \{v_{y,1}, v_{y,2}\}$ (for instance, the marginal distribution of $Z = \sum_{p \in \mathcal{P}_v(X)} z_p$ can be uniform over $[1, 2]$); • $\sum_{p \in \mathcal{P}_v(X)} z_p \in [\Omega(1), 0.4]$ when $v \in \mathcal{V}(X) \setminus \{v_{y,1}, v_{y,2}\}$ (for instance, uniform over $[0.2, 0.4]$). In the case of the single-view distribution $\mathcal{D} = \mathcal{D}_s$: • $\sum_{p \in \mathcal{P}_v(X)} z_p \in [1, O(1)]$ when $v = v_{y,\hat{\ell}}$; • $\sum_{p \in \mathcal{P}_v(X)} z_p \in [\rho, O(\rho)]$ when $v = v_{y,3-\hat{\ell}}$; • $\sum_{p \in \mathcal{P}_v(X)} z_p \in [\Omega(\Gamma), \Gamma]$ when $v \in \mathcal{V}(X) \setminus \{v_{y,1}, v_{y,2}\}$. 5. For each $p \in [P] \setminus \mathcal{P}(X)$, we set $x_p$ to consist only of "noise". Remark 3.2. The distribution of how to pick $\mathcal{P}(X)$ and assign $\sum_{p \in \mathcal{P}_v(X)} z_p$ to each patch $p \in \mathcal{P}_v(X)$ can be arbitrary (and can depend on other randomness in the data as well). In particular, we have allowed different features $v_{j,1}, v_{j,2}$ to show up with different weights in the data (for example, for multi-view data, some view $v_{y,1}$ can consistently have larger $z_p$ compared to $v_{y,2}$). A toy sampler implementing this definition is sketched below.
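A toy sampler for Definition 3.1 (a sketch only: the paper defers the exact "noise" specification to the full version, so plain Gaussian patches stand in for it here, and the constants Cp, s, ρ, Γ, σp and the weights z below are illustrative choices within the stated ranges, not the paper's parameter regime P = k², d = poly(k)).

```python
import numpy as np

rng = np.random.default_rng(3)
k, d, Cp, s = 4, 32, 2, 2            # toy sizes; the paper takes P = k^2, d = poly(k)
P = k**2
rho, Gamma, sigma_p = 0.3, 0.5, 0.01  # illustrative constants

# Orthogonal unit features v_{j,l}: rows taken from a random orthonormal basis.
Q, _ = np.linalg.qr(rng.standard_normal((d, 2 * k)))
V = Q.T.reshape(k, 2, d)              # V[j, l] = v_{j, l+1}

def sample(single_view: bool):
    y = rng.integers(k)
    X = sigma_p * rng.standard_normal((P, d))        # background "noise" patches
    feats = [(y, 0), (y, 1)]                         # v_{y,1}, v_{y,2} always in V(X)
    feats += [(j, l) for j in range(k) if j != y
              for l in range(2) if rng.random() < s / k]
    l_hat = rng.integers(2)                          # only used in the single-view case
    free = rng.permutation(P)                        # disjoint patches for the features
    for t, (j, l) in enumerate(feats):
        if single_view:
            z = 1.5 if (j, l) == (y, l_hat) else (rho if j == y else Gamma / 2)
        else:
            z = 1.5 if j == y else 0.3               # on-class in [1,O(1)], off in [0.2,0.4]
        for p in free[Cp * t: Cp * (t + 1)]:         # split z over the Cp patches
            X[p] += (z / Cp) * V[j, l]
    return X, y

X, y = sample(single_view=False)
print("label:", y, " data shape:", X.shape)
```

Multi-view samples carry both v_{y,1} and v_{y,2} at full weight; single-view samples keep only v_{y,ℓ̂} at full weight and shrink the other to ρ.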
Yet, we shall prove that the order in which these features are learned by the learner network can still be flipped depending on the randomness of the network initialization. Interpretation of our data distribution. As we argue more in the full paper, our setting can be tied to a down-sized version of convolutional networks applied to image classification data. With a small kernel size, good features in an image typically appear only at a few patches, and most other patches are random noise or low-magnitude feature noise. More importantly, our noise parameters shall ensure that the concept class is not learnable by linear classifiers or constant-degree polynomials. We believe a (convolutional) neural network with ReLU-like activation is somewhat necessary. Our final data distribution $\mathcal{D}$ and the training data set $\mathcal{Z}$ are formally given as follows. Definition 3.3 ($\mathcal{D}$ and $\mathcal{Z}$). The distribution $\mathcal{D}$ consists of data from $\mathcal{D}_m$ w.p. $1 - \mu$ and from $\mathcal{D}_s$ w.p. $\mu$. We are given $N$ training samples from $\mathcal{D}$, and denote the training data set as $\mathcal{Z} = \mathcal{Z}_m \cup \mathcal{Z}_s$, where $\mathcal{Z}_m$ and $\mathcal{Z}_s$ respectively represent the multi-view and single-view training data. We write $(X, y) \sim \mathcal{Z}$ for $(X, y)$ sampled uniformly at random from the empirical data set, and denote $N_s = |\mathcal{Z}_s|$. We again for simplicity focus on the setting when $\mu = \frac{1}{\mathrm{poly}(k)}$ and we are given $N = k^{1.2}/\mu$ samples, so each label $i$ appears at least $\tilde{\Omega}(1)$ times in $\mathcal{Z}_s$. Our result trivially applies to many other choices of $N$. 3.2 LEARNER NETWORK We consider a learner network using the following smoothed ReLU activation function $\widetilde{\mathrm{ReLU}}$: Definition 3.4. For an integer $q \geq 2$ and a threshold $\varrho = \frac{1}{\mathrm{polylog}(k)}$, the smoothed function is $\widetilde{\mathrm{ReLU}}(z) := 0$ for $z \leq 0$; $\widetilde{\mathrm{ReLU}}(z) := \frac{z^q}{q \varrho^{q-1}}$ for $z \in [0, \varrho]$; and $\widetilde{\mathrm{ReLU}}(z) := z - (1 - \frac{1}{q})\varrho$ for $z \geq \varrho$. Since $\widetilde{\mathrm{ReLU}}$ is smooth, we denote its gradient by $\widetilde{\mathrm{ReLU}}'(z)$. We focus on $q = 4$, while our result applies to other constants $q \geq 3$ (see the full version) or most other forms of smoothing. The learner network $F(X) = (F_1(X), \ldots, F_k(X)) \in \mathbb{R}^k$ is a two-layer convolutional network parameterized by $w_{i,r} \in \mathbb{R}^d$ for $i \in [k]$, $r \in [m]$, satisfying $\forall i \in [k]: F_i(X) = \sum_{r \in [m]} \sum_{p \in [P]} \widetilde{\mathrm{ReLU}}(\langle w_{i,r}, x_p \rangle)$. Although there exists a network with $m = 2$ that can classify the data correctly (e.g., $w_{i,r} = v_{i,r}$ for $r \in [2]$), in this paper, for efficient optimization purposes, it is convenient to work with a moderate level of over-parameterization: $m \in [\mathrm{polylog}(k), k]$. Our lower bounds hold for any $m$ in this range, and our upper bounds hold even for the small over-parameterization $m = \mathrm{polylog}(k)$. Training a single model. We learn the concept class (namely, the labeled data distribution) using gradient descent with learning rate $\eta > 0$ over the cross-entropy loss function $L$, using $N$ training data points $\mathcal{Z} = \{(X_i, y_i)\}_{i \in [N]}$. We denote the empirical loss as $L(F) = \frac{1}{N} \sum_{i \in [N]} L(F; X_i, y_i) = \mathbb{E}_{(X,y) \sim \mathcal{Z}}[L(F; X, y)]$, where $L(F; X, y) = -\log \frac{e^{F_y(X)}}{\sum_{j \in [k]} e^{F_j(X)}}$. We randomly initialize the network $F$ by letting each $w^{(0)}_{i,r} \sim \mathcal{N}(0, \sigma_0^2 I)$ for $\sigma_0^2 = 1/k$, which is the most standard initialization used in practice. To train a single model, at each iteration $t$ we update using gradient descent (GD) (our result also extends to the case with weight decay, discussed in the full version): $w^{(t+1)}_{i,r} \leftarrow w^{(t)}_{i,r} - \eta\, \mathbb{E}_{(X,y) \sim \mathcal{Z}}\, \nabla_{w_{i,r}} L(F^{(t)}; X, y)$. (3.1) We run the algorithm for $T = \mathrm{poly}(k)/\eta$ iterations, and use $F^{(t)}$ to denote the model $F$ with hidden weights $\{w^{(t)}_{i,r}\}$ at iteration $t$ (a toy rendering of this learner and one GD step is sketched below).
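A minimal numpy rendering of Definition 3.4 and the learner above (toy sizes and a single-sample update, not the paper's asymptotic regime; all constants are illustrative). It implements the smoothed ReLU with q = 4, the network F_i(X) = Σ_{r,p} ReLU~(⟨w_{i,r}, x_p⟩), and one step of update (3.1) using the closed-form cross-entropy gradient stated in the Notations paragraph that follows.

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, P, d, q, eta = 3, 8, 5, 16, 4, 0.1      # toy sizes, not the paper's regime
rho = 0.25                                     # threshold (1/polylog(k) in the paper)

def smoothed_relu(z):
    # Definition 3.4: 0 for z<=0, z^q/(q*rho^(q-1)) on [0, rho], z-(1-1/q)*rho after.
    return np.where(z <= 0, 0.0,
           np.where(z <= rho, z**q / (q * rho**(q - 1)), z - (1 - 1/q) * rho))

def smoothed_relu_grad(z):
    return np.where(z <= 0, 0.0, np.where(z <= rho, z**(q - 1) / rho**(q - 1), 1.0))

def F(W, X):
    # F_i(X) = sum_{r,p} smoothed_relu(<w_{i,r}, x_p>);  W: (k, m, d), X: (P, d).
    pre = np.einsum('imd,pd->imp', W, X)
    return smoothed_relu(pre).sum(axis=(1, 2))            # (k,)

def gd_step(W, X, y):
    # Single-sample version of (3.1):
    # grad_{w_{i,r}} L = (logit_i - 1{i=y}) * sum_p smoothed_relu'(<w_{i,r},x_p>) x_p
    pre = np.einsum('imd,pd->imp', W, X)
    logits = smoothed_relu(pre).sum(axis=(1, 2))
    p = np.exp(logits - logits.max()); p /= p.sum()       # logit_i(F, X)
    err = p.copy(); err[y] -= 1.0                         # logit_i - 1{i=y}
    grad = err[:, None, None] * np.einsum('imp,pd->imd', smoothed_relu_grad(pre), X)
    return W - eta * grad

W = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, m, d))     # w ~ N(0, (1/k) I)
X, y = rng.standard_normal((P, d)), 1
print("F(X) before:", F(W, X).round(3))
W = gd_step(W, X, y)
print("F(X) after :", F(W, X).round(3))
```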
Notations. We denote by $\mathrm{logit}_i(F, X) := \frac{e^{F_i(X)}}{\sum_{j \in [k]} e^{F_j(X)}}$. Using this, we can write down, for all $i \in [k]$ and $r \in [m]$: $-\nabla_{w_{i,r}} L(F; X, y) = (\mathbb{1}_{i = y} - \mathrm{logit}_i(F, X))\, \nabla_{w_{i,r}} F_i(X)$. 4 MAIN THEOREMS AND EXPLANATIONS We now state the main theorems (the one for self-distillation is in the full paper; we shall also restate these theorems in the full version with more details and a wider range of parameters). Theorem 1 (single model). For every sufficiently large $k > 0$, every $m \in [\mathrm{polylog}(k), k]$, and every $\eta \leq \frac{1}{\mathrm{poly}(k)}$, suppose we train a single model using the gradient descent update (3.1) starting from the random initialization defined in Section 3.2. Then, after $T = \frac{\mathrm{poly}(k)}{\eta}$ many iterations, with probability $\geq 1 - e^{-\Omega(\log^2 k)}$, the model $F^{(T)}$ satisfies: • (training is perfect): for all $(X, y) \in \mathcal{Z}$ and all $i \in [k] \setminus \{y\}$: $F^{(T)}_y(X) > F^{(T)}_i(X)$. • (test accuracy is consistently bad): $\Pr_{(X,y) \sim \mathcal{D}}[\exists i \in [k] \setminus \{y\}: F^{(T)}_y(X) < F^{(T)}_i(X)] \in [0.49\mu, 0.51\mu]$. We give technical intuitions for why Theorem 1 holds in the full version. At a high level, we construct a "lottery winning" set $\mathcal{M} \subseteq [k] \times [2]$ of cardinality $|\mathcal{M}| \in [k(1 - o(1)), k]$ that depends only on the random initialization of $F$. Then, with some effort, we can prove that for every $(i, \ell) \in \mathcal{M}$, at the end of training $F^{(T)}$ will have learned feature $v_{i,\ell}$ but not feature $v_{i,3-\ell}$. This means that for those single-view data $(X, y)$ with $y = i$ and $\hat{\ell}(X) = 3 - \ell$, the final network $F^{(T)}$ will predict the label incorrectly. This is why the final test accuracy is around $0.5\mu$. Note that the property that the test accuracy consistently lies in the range $[0.49\mu, 0.51\mu]$ should be reminiscent of message ⑤ in Figure 6: multiple single models, although starting from different random initializations, do in practice have a relatively small variance in test accuracies. Ensemble. Suppose $\{F^{[\ell]}\}_{\ell \in [K]}$ are $K = \tilde{\Omega}(1)$ independently trained models of $F$ with $m = \mathrm{polylog}(k)$, trained for $T = O(\frac{\mathrm{poly}(k)}{\eta})$ iterations (i.e., the same setting as Theorem 1, except we only need a small over-parameterization $m = \mathrm{polylog}(k)$). Let us define their ensemble as $G(X) = \frac{\tilde{\Theta}(1)}{K} \sum_{\ell} F^{[\ell]}(X)$. (4.1) Theorem 2 (ensemble). In the same setting as Theorem 1, except that now we only need a small $m = \mathrm{polylog}(k)$, the ensemble model $G$ in (4.1) satisfies, with probability at least $1 - e^{-\Omega(\log^2 k)}$: • (training is perfect): for all $(X, y) \in \mathcal{Z}$ and all $i \in [k] \setminus \{y\}$: $G_y(X) > G_i(X)$. • (test accuracy is almost perfect): $\Pr_{(X,y) \sim \mathcal{D}}[\exists i \in [k] \setminus \{y\}: G_y(X) < G_i(X)] \leq 0.001\mu$. As we discussed in Section 2.3, the reason Theorem 2 holds is that the lottery-winning sets $\mathcal{M}$ depend on the random initialization of the networks; therefore, when multiple models are put together, the union of their sets $\mathcal{M}$ covers all possible features $\{v_{i,\ell}\}_{(i,\ell) \in [k] \times [2]}$. Moreover, our theorem only requires $K = \tilde{\Omega}(1)$ individual models for the ensemble, which is indeed "averaging the outputs of a few independently trained models". 4.1 KNOWLEDGE DISTILLATION FOR ENSEMBLE We consider a knowledge distillation algorithm given the existing ensemble model $G$ (see (4.1)) as follows. For every label $i \in [k]$, we define the truncated scaled logit as (for $\tau = \frac{1}{\log^2 k}$): $\mathrm{logit}^\tau_i(F, X) = \frac{e^{\min\{\tau^2 F_i(X),\, 1\}/\tau}}{\sum_{j \in [k]} e^{\min\{\tau^2 F_j(X),\, 1\}/\tau}}$. (4.2) (This should be reminiscent of the logit function with temperature used by the original knowledge distillation work (Hinton et al., 2015); we use truncation instead, which is easier to analyze. It is transcribed in the short sketch below.)
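The truncated scaled logit (4.2) transcribes directly into a few lines (a sketch; the outputs F_i(X) and the values of τ below are toy numbers, while the paper takes τ = 1/log² k). Truncating τ²F_i(X) at 1 caps how much any single coordinate can dominate before the 1/τ rescaling, playing the role of the distillation temperature.

```python
import numpy as np

def truncated_logit(F_X, tau):
    # Eq. (4.2): logit^tau_i = exp(min{tau^2 F_i, 1}/tau) / sum_j exp(min{tau^2 F_j, 1}/tau)
    z = np.minimum(tau**2 * F_X, 1.0) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

F_X = np.array([9.0, 3.0, 0.5])          # raw outputs F_i(X) of some model
for tau in (1.0, 0.5, 0.2):              # the paper takes tau = 1/log^2 k
    print(f"tau={tau}: {truncated_logit(F_X, tau).round(3)}")
```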
Now, we train a new network $F$ from random initialization (where the randomness is independent of all of that used in the $F^{[\ell]}$). At every iteration $t$, we update each weight $w_{i,r}$ by: $w^{(t+1)}_{i,r} = w^{(t)}_{i,r} - \eta \nabla_{w_{i,r}} L(F^{(t)}) - \eta'\, \mathbb{E}_{(X,y) \sim \mathcal{Z}}\big[\big(\mathrm{logit}^\tau_i(F^{(t)}, X) - \mathrm{logit}^\tau_i(G, X)\big)^{-}\, \nabla_{w_{i,r}} F^{(t)}_i(X)\big]$. (4.3) Notation. Throughout the paper we denote $[a]^+ = \max\{0, a\}$ and $[a]^- = \min\{0, a\}$. This knowledge distillation method (4.3) is almost identical to the one used in the original work (Hinton et al., 2015), except that we use a truncation during training to make it more (theoretically) stable. Moreover, we update the distillation objective using a larger learning rate $\eta'$ compared to the learning rate $\eta$ of the cross-entropy objective. This is also consistent with the training schedule used in (Hinton et al., 2015) (a toy single-sample rendering of (4.3) is sketched at the end of this subsection). Let $F^{(t)}$ be the resulting network obtained by (4.3) at iteration $t$. We have the following theorem: Theorem 3 (ensemble distillation). Consider the distillation algorithm (4.3) in which $G$ is the ensemble model defined in (4.1). For every $k > 0$, for $m = \mathrm{polylog}(k)$, for every $\eta \leq \frac{1}{\mathrm{poly}(k)}$, and setting $\eta' = \eta \cdot \mathrm{poly}(k)$, after $T = \frac{\mathrm{poly}(k)}{\eta}$ many iterations, with probability at least $1 - e^{-\Omega(\log^2 k)}$, for at least 90% of the iterations $t \leq T$: • (training is perfect): for all $(X, y) \in \mathcal{Z}$ and all $i \in [k] \setminus \{y\}$: $F^{(t)}_y(X) > F^{(t)}_i(X)$. • (test accuracy is almost perfect): $\Pr_{(X,y) \sim \mathcal{D}}[\exists i \in [k] \setminus \{y\}: F^{(t)}_y(X) < F^{(t)}_i(X)] \leq 0.001\mu$. Remark. Theorem 3 necessarily means that the distilled model $F$ has learned all the features $\{v_{i,\ell}\}_{(i,\ell) \in [k] \times [2]}$ from the ensemble model $G$. This is consistent with our empirical findings in Figure 8: if one trains multiple individual models using knowledge distillation with different random seeds, then their ensemble gives no further performance boost.
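Finally, a self-contained toy rendering of the distillation update (4.3) (a sketch under the same illustrative sizes as in the earlier learner sketch; the teacher ensemble uses (4.1) with the Θ̃(1) scaling taken as 1, and a single-sample step stands in for the expectation over Z). The [·]⁻ clipping means the distillation term only pushes a coordinate up when the student's truncated logit falls below the ensemble teacher's; this is the mechanism by which the missing features get learned.

```python
import numpy as np

rng = np.random.default_rng(2)
k, m, P, d, q, rho, tau = 3, 8, 5, 16, 4, 0.25, 0.5   # toy sizes, illustrative

def srelu(z):   # smoothed ReLU, Definition 3.4
    return np.where(z <= 0, 0.0,
           np.where(z <= rho, z**q / (q * rho**(q - 1)), z - (1 - 1/q) * rho))

def srelu_grad(z):
    return np.where(z <= 0, 0.0, np.where(z <= rho, z**(q - 1) / rho**(q - 1), 1.0))

def outputs(W, X):  # F_i(X) = sum_{r,p} srelu(<w_{i,r}, x_p>)
    return srelu(np.einsum('imd,pd->imp', W, X)).sum(axis=(1, 2))

def tlogit(F_X):    # truncated scaled logit, eq. (4.2)
    z = np.minimum(tau**2 * F_X, 1.0) / tau
    e = np.exp(z - z.max()); return e / e.sum()

def distill_step(W, teachers, X, y, eta=0.1, eta_p=1.0):
    # One single-sample step of (4.3); the teacher G is the ensemble (4.1).
    pre = np.einsum('imd,pd->imp', W, X)
    F_X = srelu(pre).sum(axis=(1, 2))
    G_X = np.mean([outputs(Wt, X) for Wt in teachers], axis=0)
    p = np.exp(F_X - F_X.max()); p /= p.sum()
    err_ce = p.copy(); err_ce[y] -= 1.0                 # logit_i - 1{i=y}
    err_kd = np.minimum(tlogit(F_X) - tlogit(G_X), 0.0) # [a]^- = min{0, a}
    dF = np.einsum('imp,pd->imd', srelu_grad(pre), X)   # grad_{w_{i,r}} F_i(X)
    return W - (eta * err_ce + eta_p * err_kd)[:, None, None] * dF

teachers = [rng.normal(0, 1/np.sqrt(k), (k, m, d)) for _ in range(3)]
student = rng.normal(0, 1/np.sqrt(k), (k, m, d))
X, y = rng.standard_normal((P, d)), 0
for _ in range(5):
    student = distill_step(student, teachers, X, y)
print("student outputs:", outputs(student, X).round(3))
```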
1. What is the focus and contribution of the paper regarding ensemble and knowledge distillation? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis? 3. Do you have any concerns or questions about the paper, especially regarding its experiments and conclusions? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper takes a step toward understanding ensemble and knowledge distillation. The authors consider the challenging setting where the teacher model is an average of several models of the same structure, or where the teacher even has a structure identical to the student model. The authors develop a theory showing that distillation from several independently trained neural networks can improve performance when the data has a "multi-view" structure. Strengths And Weaknesses Strength: The authors present theoretical results showing that a single model is guaranteed to have zero training error while, with high probability, having a high testing error; the ensemble can provably improve the testing accuracy. The authors show that the ensemble can be efficiently distilled into a single model. This understanding is fundamentally different from the standard NTK settings. The idea of "multi-view data" is intuitive and provides a natural and convincing explanation for various empirical observations about distillation. Questions and weaknesses: One major concern is that the supplementary material cannot be opened due to a file format error, and I am not able to see the results of self-distillation and the empirical evaluation, which seem to be important aspects of this work. In the example of Section 2.3 and for a single model, why can the network pick up only one of the features in v1, v2 using 90% of the data, and merely memorize the remaining 10% of the data, instead of also learning the other relevant feature in v1, v2? Clarity, Quality, Novelty And Reproducibility The idea of "multi-view" is interesting and provides an intuitive explanation for various observations in distillation.
ICLR
Title Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning Abstract We formally study how ensemble of deep learning models can improve test accuracy, and how the superior performance of ensemble can be distilled into a single model using knowledge distillation. We consider the challenging case where the ensemble is simply an average of the outputs of a few independently trained neural networks with the same architecture, trained using the same algorithm on the same data set, and they only differ by the random seeds used in the initialization. We show that ensemble/knowledge distillation in deep learning works very differently from traditional learning theory (such as boosting or NTKs). We develop a theory showing that when data has a structure we refer to as “multi-view”, then ensemble of independently trained neural networks can provably improve test accuracy, and such superior test accuracy can also be provably distilled into a single model. Our result sheds light on how ensemble works in deep learning in a way that is completely different from traditional theorems, and how the “dark knowledge” is hidden in the outputs of the ensemble and can be used in distillation.1 1 INTRODUCTION Ensemble (Dietterich, 2000; Hansen & Salamon, 1990; Polikar, 2006) is one of the most powerful techniques in practice to improve the performance of deep learning. By simply averaging the outputs of merely a few (like 3 or 10) independently-trained neural networks of the same architecture, using the same training method over the same training data, it can significantly boost the prediction accuracy over the test set comparing to individual models. The only difference is the randomness used to initialize these networks and/or the randomness during training. Moreover, it is discovered by Hinton et al. (2015) that such superior performance of the ensemble can be transferred into a single model (of the same size as the individual models) using a technique called knowledge distillation: that is, simply train a single model to match the output of the ensemble (such as “90% cat + 10% car”, also known as soft labels) as opposite to the true data labels, over the same training data. On the theory side, there are lots of works studying the superior performance of ensemble from principled perspectives (see full version for citations). However, most of these works only apply to: (1). Boosting: where the coefficients associated with the combinations of the single models are actually trained, instead of simply taking average; (2). Bootstrapping/Bagging: the training data are different for each single model; (3). Ensemble of models of different types and architectures; or (4). Ensemble of random features or decision trees. To the best of our knowledge, none of these cited works apply to the particular type of ensemble that is widely used in deep learning: simply take a uniform average of the output of the learners, which are neural networks with the same architecture and are trained by stochastic gradient descent (SGD) over the same training set. In fact, very critically, for deep learning models: • TRAINING AVERAGE DOES NOT WORK: if one directly trains to learn an average of individual neural networks initialized by different seeds, the performance is much worse than ensemble. • KNOWLEDGE DISTILLATION WORKS: the superior performance of ensemble in deep learning can be distilled into a single model (Hinton et al., 2015). 
1Full version of this paper can be found on https://arxiv.org/abs/2012.09816. • SELF-DISTILLATION WORKS: even distilling a single model into another of the same size, there is performance boost. (Furlanello et al., 2018; Mobahi et al., 2020; Zhang et al., 2019) We are unaware of any satisfactory theoretical explanation for the phenomena above. For instance, as we shall argue, some traditional view for why ensemble works, such as ‘ensemble can enlarge the feature space in random feature mappings’, even give contradictory explanations to the above phenomena, thus cannot explain knowledge distillation or ensemble in deep learning. Motivated by this gap between theory and practice we study the following question for multi-class classification: Our theoretical questions: How does ensemble improve the test-time performance in deep learning when we simply (unweightedly) average over a few independently trained neural networks? – Especially when all the neural networks have the same architecture, are trained over the same data set using the same standard training algorithm and only differ by the random seeds, and even when all single models already have 100% training accuracy? How can such superior test-time performance of ensemble be later “distilled” into a single neural network of the same architecture, simply by training the single model to match the output of the ensemble over the same training data set? Our results. We prove for certain multi-class classification tasks with a special structure we refer to as multi-view, with a training set Z consisting of N i.i.d. samples from some unknown distribution D, for certain two-layer convolutional network f with (smoothed-)ReLU activation as learner: • (Single model has bad test accuracy): there is a value µ > 0 such that when a single model f is trained over Z using the cross-entropy loss, via gradient descent (GD) starting from random Gaussian initialization, the model can reach zero training error efficiently. However, w.h.p. the prediction (classification) error of f over D is between 0.49µ and 0.51µ. • (Ensemble provably improves test accuracy): let f1, f2, · · · , fL be L = Ω̃(1) independently trained single models as above, then w.h.p. G = 1L ∑ ℓ fℓ has prediction error ≤ 0.01µ over D. • (Ensemble can be distilled into a single model): if we further train (using GD from random initialization) another single model f0 (same architecture as each fℓ) to match the output of G = 1L ∑ ℓ fℓ merely over the same training data set Z , then f0 can be trained efficiently and w.h.p. f0 will have prediction error ≤ 0.01µ over D as well. • (Self-distillation also improves test accuracy): if we further train (using GD from random ini- tialization) another single model f ′ (same architecture as f1) to match the output of the single model f1 merely over the same training data set Z , then f ′ can be trained efficiently and w.h.p. has prediction error at most≤ 0.26µ overD. The main idea is that self-distillation is performing “implicit ensemble + knowledge distillation”, as we shall argue in Section 4.2. We defer discussions of our empirical results to Section 5. However, we highlight some of the empirical findings, as they shall confirm and justify our theoretical approach studying ensemble and knowledge distillation in deep learning. Specifically, we give empirical evidences showing that: • Knowledge distillation does not work for random feature mappings; and ensemble in deep learning is very different from ensemble in random feature mappings (see Figure 1). 
• Special structures in data (such as the “multi-view” structure we shall introduce) is needed for ensemble of neural networks to work. • The variance due to label noise or the non-convex landscape of training, in the independentlytrained models, may not be connected to the superior performance of ensemble in deep learning. 2 OUR METHODOLOGY AND INTUITION 2.1 A FAILURE ATTEMPT USING RANDOM FEATURE MAPPINGS The recent advance in deep learning theory shows that under certain circumstances, neural networks can be treated as a linear function over random feature mappings — see (Allen-Zhu et al., 2019b; Arora et al., 2019b; Daniely et al., 2016; Du et al., 2018b; Jacot et al., 2018; Zou et al., 2018) and the references therein. In particular, the theory shows when f : RD+d → R is a neural network with inputs x ∈ Rd and weights W ∈ RD, in some cases, f(W,x) can be approximated by: f(W,x) ≈ f(W0, x) + ⟨W −W0,∇W f(W0, x)⟩ where W0 is the random initialization of the neural network, and ΦW0(x) := ∇W f(W0, x) is the neural tangent kernel (NTK) feature mapping. This is known as the NTK approach. If this approximation holds, then training a neural network can be approximated by learning a linear function over random features ΦW0(x), which is very theory-friendly. Ensemble works for random features / NTK. Traditional theorems (Alhamdoosh & Wang, 2014; Brown et al., 2005a; Bryll et al., 2003; Tsymbal et al., 2005) suggest that the ensemble of independently trained random feature models can indeed significantly improve test-time performance, as it enlarges the feature space from ΦW0(x) to {ΦW (i)0 (x)}i∈[L] for L many independently sampled W (i) 0 . This can be viewed as a feature selection process (Alvarez et al., 2012; Cai et al., 2018; Oliveira et al., 2003; Opitz, 1999; Rokach, 2010), and we have confirmed it for NTK in practice, see Figure 1. However, can we understand ensemble and knowledge distillation in DL as feature selections using NTK? Unfortunately, our empirical results provide many counter examples towards those arguments, see discussions below and Figure 1. Contradiction 1: training average works even better. Although ensemble of linear functions over NTK features with different random seeds: fi(x) = ⟨W (i),ΦW (i)0 (x)⟩ does improve test accuracy, however, such improvement is mainly due to the use of a larger set of random features, whose combinations contain functions that generalize better. To see this, we observe that an even superior performance (than the ensemble) can simply be obtained by directly training F (x) = 1L ( f1+f2+· · ·+fL ) from random initialization. In contrast, recall if fi(x)’s are multi-layer neural networks with different random seeds, then training their average barely gives any better performance comparing to individual networks fi, as now all the fi’s are capable of learning the same set of features. Contradiction 2: knowledge distillation does not work. For NTK feature mappings, we observe that the result obtained by ensemble cannot be distilled at all into individual models, indicating the features selected by ensemble is not contained in the feature Φ W (i) 0 (x) of any individual model. In contrast, in actual deep learning, ensemble does not enlarge feature space: so an individual neural network is capable of learning the features of the ensemble model. In sum, ensemble in deep learning may be very different from ensemble in random features. 
It may be more accurate to study ensemble / knowledge distillation in deep learning as a feature learning process, instead of a feature selection process. But still, we point out a fundamental difficulty: Key challenge: If a single deep learning model is capable of — through knowledge distillation — learning the features of the ensemble model and achieving better test accuracy comparing to training the single model directly (and the same training accuracy, typically at global optimal of 100%), then why the single model cannot learn these features directly when we train the model to match the true data labels? What is the dark knowledge hidden in the output of ensemble (a.k.a. soft label)2 comparing to the original hard label? 2.2 ENSEMBLE IN DEEP LEARNING: A FEATURE LEARNING PROCESS Before addressing the key challenge, we point out that prior works are very limited with respect to studying neural network training as a feature learning process. Most of the existing works proving that neural networks can learn features only focus on the case when the input is Gaussian or 2For a k-class classification problem, the output of a model g(x) is usually k-dimensional, and represents a soft-max probability distribution over the k target classes. This is known as the soft label. Gaussian-like — see for instance (Kawaguchi, 2016; Soudry & Carmon, 2016; Xie et al., 2016) and many others. However, as we demonstrate in Figure 7 in the full version, Ensemble in DL might not improve test accuracy when inputs are Gaussian-like: Empirically, ensemble does not improve test accuracy in deep learning, in certain scenarios when the distribution of the input data is Gaussian or even mixture of Gaussians. This is true over various learner network structures (fully-connected, residual, convolution neural networks) and various labeling functions (when the labels are generated by linear functions, fully-connected, residual, convolutional networks, with/without label noise, with/without classification margin). Bias variance view of ensemble: Some prior works also try to attribute the benefit of ensemble as reducing the variance of individual solutions due to label noise or non-convex landscape of the training objective. However, reducing such variance can reduce a convex test loss (typically crossentropy), but not necessarily the test classification error. Concretely, the synthetic experiments in Figure 7 show that, after applying ensemble over Gaussian-like inputs, the variance of the model outputs is reduced but the test accuracy is not improved. We give many more empirical evidences to show that the variance (either from label noise or from the non-convex landscape) is usually not the cause for why ensemble works in deep learning, see Section 5. Hence, to understand the true benefit of ensemble in deep learning in theory, we would like to study a setting that can approximate practical deep learning, where: • The input distribution is more structured than standard Gaussian and there is no label noise. (From above discussions, ensemble cannot work for deep learning distribution-freely). • The individual neural networks all are well-trained, in the sense that the training accuracy in the end is 100%, and there is nearly no variance in the test accuracy for individual models. (So training never fails.) In this work, we propose to study a setting of data that we refer to as multi-view, where the above two conditions both hold when we train a two-layer neural networks with (smoothed-)ReLU activations. 
We also argue that the multi-view structure we consider is fairly common in the data sets used in practice, in particular for vision tasks. We give more details below. 2.3 OUR APPROACH: LEARNING MULTI-VIEW DATA Let us first give a thought experiment to illustrate our approach, and we present the precise mathematical definition of the “multi-view” structure in Section 3. Consider a binary classification problem and four “features” v1, v2, v3, v4. The first two features correspond to the first class label, and the next two features correspond to the second class label. In the data distribution: • When the label is class 1, then:3{ both v1, v2 appears with weight 1, one of v3, v4 appears with weight 0.1 w.p. 80%; only v1 appears with weight 1, one of v3, v4 appears with weight 0.1 w.p. 10%; only v2 appears with weight 1, one of v3, v4 appears with weight 0.1 w.p. 10%. • When the label is class 2, then{ both v3, v4 appears with weight 1, one of v1, v2 appears with weight 0.1 w.p. 80%; only v3 appears with weight 1, one of v1, v2 appears with weight 0.1 w.p. 10%; only v4 appears with weight 1, one of v1, v2 appears with weight 0.1 w.p. 10%. 3One can for simplicity think of “v appears with weight α and w appears with weight β” as data = αv + βw + noise. We call the 80% of the data multi-view data: these are the data where multiple features exist and can be used to classify them correctly. We call the rest 20% of the data single-view data: some features for the correct labels are missing. 4 How individual neural networks learn. Under the multi-view data defined above, if we train a neural network using the cross-entropy loss via gradient descent (GD) from random initialization, during the training process of the individual networks, we show that: • The network will quickly pick up one of the feature v ∈ {v1, v2} for the first label, and one of the features v′ ∈ {v3, v4} for the second label. So, 90% of the training examples, consisting of all the multi-view data and half of the single-view data (those with feature v or v′), are classified correctly. Once classified correctly (with a large margin), these data begin to contribute negligible to gradient by the nature of the cross-entropy loss. • Next, the network will memorize (using e.g. the noise in the data) the remaining 10% of the training examples without learning any new features, due to insufficient amount of left-over samples after the first phase, thus achieving training accuracy 100% but test accuracy 90%. How ensemble improves test accuracy. It is simple why ensemble works. Depending on the randomness of initialization, each individual network will pick up v1 or v2 each w.p. 50%. Hence, as long as we ensemble Õ(1) many independently trained models, w.h.p. their ensemble will pick up both features {v1, v2} and both features {v3, v4}. Thus, all the data will be classified correctly. How knowledge distillation works. Perhaps less obvious is how knowledge distillation works. Since ensemble learns all the features v1, v2, v3, v4, given a multi-view data with label 1, the ensemble will actually output ∝ (2, 0.1), where the 2 comes from features v1, v2 and 0.1 comes from one of v3, v4. On the other hand, an individual model learning only one of v3, v4 will actually output ∝ (2, 0) when the feature v3 or v4 in the data does not match the one learned by the model. 
Hence, by training the individual model to match the output of the ensemble, the individual model is forced to learn both features v3, v4, even though it has already perfectly classified the training data. This is the “dark knowledge” hidden in the output of the ensemble model. (This theoretical finding is consistent with practice: Figure 8 in the full paper suggests that models trained from knowledge distillation should have learned most of the features, and further computing their ensemble does not give much performance boost.) Significance of our technique. Our work belongs to the generic framework of feature learning in DL where one proves that certain aspects of the algorithm (e.g. the randomness) affects the order 4Meaningfulness of our multi-view hypothesis. Such “multi-view” structure is very common in many of the datasets where deep learning excels. In vision datasets in particular, as illustrated in Figure 2, a car image can be classified as a car by looking at the headlights, the wheels, or the windows. For a typical placement of a car in images, we can observe all these features and use any of these features to classify it as a car. However, there are car images taken from a particular angle, where one or more features can be missing. For example, an image of a car facing forward might be missing the wheel feature. Moreover, some car might also have a small fraction of “cat features”: for example, the headlight might appear similar to cat eyes the ear of a cat. This can be used as the “dark knowledge” by the single model to learn from the ensemble. In Figure 3, we visualize the learned features from an actual neural network to show that they can indeed capture different views. In Figure 5, we plot the “heatmap” for some car images to illustrate that single models (trained from different random seeds) indeed pick up different parts of the input image to classify it as a car. In Figure 9, we manually delete for instance 7/8 of the channels in some intermediate layer of a ResNet, and show that the test accuracy may not be affected by much after ensemble — thus supporting that the multi-view hypothesis can indeed exist even in the intermediate layers of a neural network and ensemble is indeed collecting all these views. where features are learned. This is fundamentally different from convex optimization, such as kernel method, where (with ℓ2 regularization) there is an unique global minimum so the choice of the random seed does not matter (thus, ensemble does not help). There are other works that consider other aspects, such as the choice of learning rate, that can affect the order where the features are learned (Li et al., 2019). Our work is fundamentally different: they only focus on the NTK setting where the features are not learned; we study a feature learning process. Recall, the NTK setting cannot be used to explain ensemble and distillation in DL. Our work extends the reach of traditional machine learning theory, where typically the “generalization” is separated from “optimization.” Such “separate” treatment might not be enough to understand how deep learning works. 3 PROBLEM SETUP The “multi-view” data distribution is a straight-forward generalization of the intuitive setting in Section 2.3. For simplicity, in the main body, we use example choices of the parameters mainly a function of k (such as P = k2, γ = 1k1.5 , µ = k1.2 N , ρ = k −0.01, σ0 = 1/ √ k as we shall see), and we consider the case when k is sufficiently large. 
In our full version, we shall give a much larger range of parameters for the theorems to hold. 3.1 DATA DISTRIBUTION AND NOTATIONS We consider learning a k-class classification problem over P -patch inputs, where each patch has dimension d. In symbols, each labelled data is represented by (X, y) where X = (x1, x2, · · · , xP ) ∈ (Rd)P is the data vector and y ∈ [k] is the data label. For simplicity, we focus on the case when P = k2, and d = poly(k) for a large polynomial. We consider the setting when k is sufficiently large.5 We use “w.h.p.” to denote with probability at least 1− e−Ω(log2 k), and use Õ, Θ̃, Ω̃ notions to hide polylogarithmic factors in k. We first assume that each label class j ∈ [k] has multiple associated features, say two features for the simplicity of math, represented by unit feature vectors vj,1, vj,2 ∈ Rd. For notation simplicity, we assume that all the features are orthogonal, namely, ∀j, j′ ∈ [k], ∀ℓ, ℓ′ ∈ [2], ∥vj,ℓ∥2 = 1 and vj,ℓ⊥vj′,ℓ′ when (j, ℓ) ̸= (j′, ℓ′) although our work also extends to the “incoherent” case trivially. We denote by V := {vj,1, vj,2}j∈[k] the set of all features. We consider the following data and label distribution. Let Cp be a global constant, s ∈ [1, k0.2] be a sparsity parameter. To be concise, we define the multi-view distribution Dm and single-view distribution Ds together. Due to space limitation, here we hide the specification of the random “noise” and defer it to the full version.6 Definition 3.1 (data distributions Dm and Ds). Given D ∈ {Dm,Ds}, we define (X, y) ∼ D as follows. First choose the label y ∈ [k] uniformly at random. Then, the data vector X is generated 5If we want to work with fixed k, say k = 2, our theorem can also be modified to that setting by increasing the number of features per class. We keep our current setting with two features to simplify the notations. 6At a high level, we shall allow such “noise” to be any feature noise plus Gaussian noise, such as noise =∑ v′∈V αp,v′v ′ + ξp ∈ Rd, where each αp,v′ ∈ [0, γ] can be arbitrary, and ξp ∼ N (0, σ2pI). as follows (also illustrated in Figure 4). 1. Denote V(X) = {vy,1, vy,2} ∪ V ′ as the set of feature vectors used in this data vector X , where V ′ is a set of features uniformly sampled from {vj′,1, vj′,2}j′∈[k]\{y}, each with probability sk . 2. For each v ∈ V(X), pick Cp many disjoint patches in [P ] and denote it as Pv(X) ⊂ [P ] (the distribution of these patches can be arbitrary). We denote P(X) = ∪v∈V(X)Pv(X). 3. If D = Ds is the single-view distribution, pick a value ℓ̂ = ℓ̂(X) ∈ [2] uniformly at random. 4. For each v ∈ V(X) and p ∈ Pv(X), we set xp = zpv + “noise” ∈ Rd, where, the random coefficients zp ≥ 0 satisfy that: In the case of multi-view distribution D = Dm, • ∑ p∈Pv(X) zp ∈ [1, O(1)] when v ∈ {vy,1, vy,2}, 7 • ∑ p∈Pv(X) zp ∈ [Ω(1), 0.4] when v ∈ V(X) \ {vy,1, vy,2}, 8 In the case of single-view distribution D = Ds, • ∑ p∈Pv(X) zp ∈ [1, O(1)] when v = vy,ℓ̂, • ∑ p∈Pv(X) zp ∈ [ρ,O(ρ)] when v = vy,3−ℓ̂, • ∑ p∈Pv(X) zp ∈ [Ω(Γ),Γ] when v ∈ V(X) \ {vy,1, vy,2}. 5. For each p ∈ [P ] \ P(X), we set xp to consist only of “noise”. Remark 3.2. The distribution of how to pick P(X) and assign ∑ p∈Pv(X) zp to each patch in p ∈ Pv(X) can be arbitrary (and can depend on other randomness in the data as well). In particular, we have allowed different features vj,1, vj,2 to show up with different weights in the data (for example, for multi-view data, some view vy,1 can consistently have larger zp comparing to vy,2). 
Yet, we shall prove that the order to learn these features by the learner network can still be flipped depending on the randomness of network initialization. Interpretation of our data distribution. As we argue more in the full paper, our setting can be tied to a down-sized version of convolutional networks applied to image classification data. With a small kernel size, good features in an image typically appear only at a few patches, and most other patches are random noise or low-magnitude feature noises. More importantly, our noise parameters shall ensure that, the concept class is not learnable by linear classifiers or constant degree polynomials. We believe a (convolutional) neural network with ReLU-like activation is somewhat necessary. Our final data distribution D, and the training data set Z are formally given as follows. Definition 3.3 (D and Z). The distributionD consists of data fromDm w.p. 1−µ and fromDs w.p. µ. We are givenN training samples fromD, and denote the training data set asZ = Zm∪Zs where Zm and Zs respectively represent multi-view and single-view training data. We write (X, y) ∼ Z as (X, y) sampled uniformly at random from the empirical data set, and denote Ns = |Zs|. We again for simplicity focus on the setting when µ = 1poly(k) and we are given samples N = k 1.2/µ so each label i appears at least Ω̃(1) in Zs. Our result trivially applies to many other choices of N . 3.2 LEARNER NETWORK We consider a learner network using the following smoothed ReLU activation function R̃eLU: Definition 3.4. For integer q ≥ 2 and threshold ϱ = 1polylog(k) , the smoothed function R̃eLU(z) := 0 for z ≤ 0; R̃eLU(z) := z q qϱq−1 for z ∈ [0, ϱ]; and R̃eLU(z) := z − (1− 1 q )ϱ for z ≥ ϱ. Since R̃eLU is smooth we denote its gradient as R̃eLU ′ (z). We focus on q = 4 while our result applies to other constants q ≥ 3 (see full version) or most other forms of smoothing. 7For instance, the marginal distribution of Z = ∑ p∈Pv(X) zp can be uniform over [1, 2]. 8For instance, the marginal distribution of Z = ∑ p∈Pv(X) zp can be uniform over [0.2, 0.4]. The learner network F (X) = (F1(X), . . . , Fk(X)) ∈ Rk is a two-layer convolutional network parameterized by wi,r ∈ Rd for i ∈ [k], r ∈ [m], satisfying ∀i ∈ [k] : Fi(X) = ∑ r∈[m] ∑ p∈[P ] R̃eLU(⟨wi,r, xp⟩) Although there exists network with m = 2 that can classify the data correctly (e.g. wi,r = vi,r for r ∈ [2]), in this paper, for efficient optimization purpose it is convenient to work on a moderate level of over-parameterization: m ∈ [polylog(k), k]. Our lower bounds hold for any m in this range and upper bounds hold even for small over-parameterization m = polylog(k). Training a single model. We learn the concept class (namely, the labeled data distribution) using gradient descent with learning rate η > 0, over the cross-entropy loss function L using N training data points Z = {(Xi, yi)}i∈[N ]. We denote the empirical loss as: L(F ) = 1N ∑ i∈[N ] L(F ;Xi, yi) = E(X,y)∼Z [L(F ;X, y)] where L(F ;X, y) = − log e Fy(X)∑ j∈[k] e Fj(X) . We randomly initialize the network F by letting each w (0) i,r ∼ N (0, σ20I) for σ20 = 1/k, which is the most standard initialization people use in practice. To train a single model, at each iteration t we update using gradient descent (GD):9 w (t+1) i,r ← w (t) i,r − η E(X,y)∼Z ∇wi,rL(F (t);X, y) (3.1) We run the algorithm for T = poly(k)/η iterations. We use F (t) to denote the model F with hidden weights {w(t)i,r} at iteration t. Notations. We denote by logiti(F,X) := e Fi(X)∑ j∈[k] e Fj(X) . 
Notations. We denote by $\mathrm{logit}_i(F, X) := \frac{e^{F_i(X)}}{\sum_{j \in [k]} e^{F_j(X)}}$. Using this, we can write down
$$\forall i \in [k], r \in [m]: \quad -\nabla_{w_{i,r}} L(F; X, y) = \left(\mathbb{1}_{i = y} - \mathrm{logit}_i(F, X)\right) \nabla_{w_{i,r}} F_i(X).$$

4 MAIN THEOREMS AND EXPLANATIONS

We now state the main theorems (the one for self-distillation is in the full paper); we shall restate them in the full version with more details and a wider range of parameters.

Theorem 1 (single model). For every sufficiently large $k > 0$, every $m \in [\mathrm{polylog}(k), k]$, and every $\eta \leq \frac{1}{\mathrm{poly}(k)}$, suppose we train a single model using the gradient descent update (3.1) starting from the random initialization defined in Section 3.2. Then, after $T = \frac{\mathrm{poly}(k)}{\eta}$ many iterations, with probability $\geq 1 - e^{-\Omega(\log^2 k)}$, the model $F^{(T)}$ satisfies:
• (training is perfect): for all $(X, y) \in \mathcal{Z}$ and all $i \in [k] \setminus \{y\}$: $F^{(T)}_y(X) > F^{(T)}_i(X)$.
• (test accuracy is consistently bad): $\Pr_{(X,y) \sim \mathcal{D}}\left[\exists i \in [k] \setminus \{y\}: F^{(T)}_y(X) < F^{(T)}_i(X)\right] \in [0.49\mu, 0.51\mu]$.

We give technical intuitions about why Theorem 1 holds in the full version. At a high level, we construct a "lottery winning" set $\mathcal{M} \subseteq [k] \times [2]$ of cardinality $|\mathcal{M}| \in [k(1 - o(1)), k]$ that depends only on the random initialization of $F$. Then, with some effort, we can prove that for every $(i, \ell) \in \mathcal{M}$, at the end of training $F^{(T)}$ will have learned feature $v_{i,\ell}$ but not feature $v_{i,3-\ell}$. This means that for single-view data $(X, y)$ with $y = i$ and $\hat{\ell}(X) = 3 - \ell$, the final network $F^{(T)}$ will predict the label incorrectly; this is why the final test accuracy is around $0.5\mu$. Note that the property that the test accuracy consistently belongs to the range $[0.49\mu, 0.51\mu]$ should be reminiscent of message ⑤ in Figure 6: multiple single models, although starting from different random initializations, do in practice have a relatively small variance in test accuracies.

Ensemble. Suppose $\{F^{[\ell]}\}_{\ell \in [K]}$ are $K = \tilde{\Omega}(1)$ independently trained copies of $F$ with $m = \mathrm{polylog}(k)$, trained for $T = O\big(\frac{\mathrm{poly}(k)}{\eta}\big)$ iterations (i.e., the same setting as Theorem 1 except we only need a small over-parameterization $m = \mathrm{polylog}(k)$). Let us define their ensemble
$$G(X) = \frac{\tilde{\Theta}(1)}{K} \sum_{\ell} F^{[\ell]}(X) \tag{4.1}$$

Theorem 2 (ensemble). In the same setting as Theorem 1, except that now we only need a small $m = \mathrm{polylog}(k)$, the ensemble model $G$ in (4.1) satisfies, with probability at least $1 - e^{-\Omega(\log^2 k)}$:
• (training is perfect): for all $(X, y) \in \mathcal{Z}$ and all $i \in [k] \setminus \{y\}$: $G_y(X) > G_i(X)$.
• (test accuracy is almost perfect): $\Pr_{(X,y) \sim \mathcal{D}}\left[\exists i \in [k] \setminus \{y\}: G_y(X) < G_i(X)\right] \leq 0.001\mu$.

As we discussed in Section 2.3, the reason Theorem 2 holds is that the lottery winning sets $\mathcal{M}$ depend on the random initialization of the networks; therefore, when multiple models are put together, the "union" of their sets $\mathcal{M}$ covers all possible features $\{v_{i,\ell}\}_{(i,\ell) \in [k] \times [2]}$. Moreover, our theorem only requires $K = \tilde{\Omega}(1)$ individual models for the ensemble, which is indeed "averaging the outputs of a few independently trained models".

4.1 KNOWLEDGE DISTILLATION FOR ENSEMBLE

We consider a knowledge distillation algorithm given the existing ensemble model $G$ (see (4.1)) as follows. For every label $i \in [k]$, we define the truncated scaled logit as (for $\tau = \frac{1}{\log^2 k}$):
$$\mathrm{logit}^\tau_i(F, X) = \frac{e^{\min\{\tau^2 F_i(X), 1\}/\tau}}{\sum_{j \in [k]} e^{\min\{\tau^2 F_j(X), 1\}/\tau}} \tag{4.2}$$
(This should be reminiscent of the logit function with temperature used by the original knowledge distillation work (Hinton et al., 2015); we use truncation instead, which is easier to analyze.)
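Here is a minimal sketch of one GD step (3.1) using the closed-form gradient above, together with the ensemble (4.1) and the truncated scaled logit (4.2). It reuses `forward` and `smoothed_relu` from the earlier sketch; the helper names, the plain loop over a batch (approximating the full-batch expectation), and the concrete `scale` constant are our own illustrative choices.

```python
def smoothed_relu_grad(z, q=4, rho=0.1):
    """Derivative of ReLU~: z^{q-1}/rho^{q-1} on the ramp, 1 beyond it."""
    return np.where(z <= 0, 0.0, np.where(z <= rho, z**(q - 1) / rho**(q - 1), 1.0))

def logits(W, X):
    """logit_i(F, X): the softmax of the k scores, computed stably."""
    F = forward(W, X)
    e = np.exp(F - F.max())
    return e / e.sum()

def gd_step(W, batch, eta):
    """One step of (3.1): grad L = -(1_{i=y} - logit_i) * grad F_i, averaged over the batch."""
    grad = np.zeros_like(W)
    for X, y in batch:
        p = logits(W, X)
        pre = np.einsum('imd,pd->imp', W, X)
        gF = np.einsum('imp,pd->imd', smoothed_relu_grad(pre), X)   # grad of F_i wrt w_{i,r}
        coeff = (np.arange(len(p)) == y).astype(float) - p          # 1_{i=y} - logit_i
        grad -= coeff[:, None, None] * gF
    return W - eta * grad / len(batch)

def ensemble(models, X, scale=1.0):
    """Eq. (4.1), with the Theta~(1) factor taken as `scale`."""
    return scale * np.mean([forward(W, X) for W in models], axis=0)

def truncated_logit(F_out, tau):
    """Eq. (4.2): truncated scaled logits of a k-dimensional score vector F_out."""
    e = np.exp(np.minimum(tau**2 * F_out, 1.0) / tau)
    return e / e.sum()
```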
Now, we train a new network $F$ from random initialization (where the randomness is independent of all of that used in the $F^{[\ell]}$). At every iteration $t$, we update each weight $w_{i,r}$ by:
$$w^{(t+1)}_{i,r} = w^{(t)}_{i,r} - \eta \nabla_{w_{i,r}} L(F^{(t)}) - \eta' \, \mathbb{E}_{(X,y) \sim \mathcal{Z}} \left[ \left( \mathrm{logit}^\tau_i(F^{(t)}, X) - \mathrm{logit}^\tau_i(G, X) \right)^{-} \nabla_{w_{i,r}} F^{(t)}_i(X) \right] \tag{4.3}$$

Notation. Throughout the paper we denote $[a]^+ = \max\{0, a\}$ and $[a]^- = \min\{0, a\}$.

This knowledge distillation method (4.3) is almost identical to the one used in the original work (Hinton et al., 2015), except that we use a truncation during training to make it more (theoretically) stable. Moreover, we update the distillation objective using a larger learning rate $\eta'$ compared to the learning rate $\eta$ of the cross-entropy objective; this is also consistent with the training schedule used in (Hinton et al., 2015). Let $F^{(t)}$ be the network obtained by (4.3) at iteration $t$. We have the following theorem:

Theorem 3 (ensemble distillation). Consider the distillation algorithm (4.3) in which $G$ is the ensemble model defined in (4.1). For every $k > 0$, for $m = \mathrm{polylog}(k)$, and for every $\eta \leq \frac{1}{\mathrm{poly}(k)}$, setting $\eta' = \eta \cdot \mathrm{poly}(k)$, after $T = \frac{\mathrm{poly}(k)}{\eta}$ many iterations, with probability at least $1 - e^{-\Omega(\log^2 k)}$, for at least 90% of the iterations $t \leq T$:
• (training is perfect): for all $(X, y) \in \mathcal{Z}$ and all $i \in [k] \setminus \{y\}$: $F^{(t)}_y(X) > F^{(t)}_i(X)$.
• (test accuracy is almost perfect): $\Pr_{(X,y) \sim \mathcal{D}}\left[\exists i \in [k] \setminus \{y\}: F^{(t)}_y(X) < F^{(t)}_i(X)\right] \leq 0.001\mu$.

Remark. Theorem 3 necessarily means that the distilled model $F$ has learned all the features $\{v_{i,\ell}\}_{(i,\ell) \in [k] \times [2]}$ from the ensemble model $G$. This is consistent with our empirical findings in Figure 8: if one trains multiple individual models using knowledge distillation with different random seeds, then their ensemble gives no further performance boost.
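Finally, to make the distillation update (4.3) concrete, here is a sketch reusing the helpers from the previous sketches. The `teacher_logits_fn` argument (closing over the fixed ensemble $G$) and the batch loop are illustrative assumptions. Note how the $[a]^-$ truncation means the distillation term only pushes up logits where the student underestimates the teacher.

```python
def distill_step(W, teacher_logits_fn, batch, eta, eta_p, tau):
    """One iteration of (4.3). teacher_logits_fn(X) should return the fixed
    ensemble's truncated logits logit^tau(G, X); eta_p plays the role of eta'."""
    delta_ce = gd_step(W, batch, eta) - W            # the -eta * grad L(F^{(t)}) part
    grad = np.zeros_like(W)
    for X, _ in batch:                               # labels are unused by the distillation term
        diff = truncated_logit(forward(W, X), tau) - teacher_logits_fn(X)
        coeff = np.minimum(diff, 0.0)                # [a]^- = min{0, a}
        pre = np.einsum('imd,pd->imp', W, X)
        gF = np.einsum('imp,pd->imd', smoothed_relu_grad(pre), X)
        grad += coeff[:, None, None] * gF
    return W + delta_ce - eta_p * grad / len(batch)  # both terms evaluated at w^{(t)}
```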
1. What is the focus of the paper regarding theoretical explanations in deep learning?
2. What are the strengths of the proposed approach, particularly in terms of novelty and matching theory with practice?
3. What are the weaknesses of the paper, especially regarding artificial setups and the need for further simplification?
4. How would you assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper studies the theoretical explanation of why ensemble and knowledge distillation work for deep learning models. It shows the different behaviors of the classic parameterization of neural networks and the random feature model (e.g., NTK). It then explains, via the idea of multi-view data, that the network makes its predictions by generalizably learning a subset of features and memorizing another subset of the data. Such analysis differs from the classic analysis that decomposes optimization and generalization. Overall, I find the insight of the paper quite interesting, and the intuition is well supported by empirical evidence on practical networks. The construction of the multi-view data for analysis is closely related to practical settings.

Strengths And Weaknesses
Strengths:
• The novelty of the paper is quite strong. I believe such analysis is new and sound.
• There is quite a good match between theory and practice; in particular, the construction of the multi-view dataset is an exciting and meaningful proxy for the practical case while being theory-friendly.
• The writing of the main text is well organized. It is easy for the reader to grasp the most critical intuitions of the theory.
• I like the experiments used in this paper: they are concise but support the theory well.
Weaknesses:
• I was not able to open the supplementary material of the paper, but reading the appendix is necessary for reviewing it, so I read the appendix via the arXiv version.
• The structure of the appendix is good, but I feel a more simplified overall technical overview/intuition would be desirable.
• To obtain such a theory, there are still a lot of 'artificial' choices in the problem setup, such as the use of the smoothed ReLU and the specific restrictions on the data distribution. These slightly widen the gap between theory and practice. But this does not downgrade my assessment: understanding deep learning in theory is very hard, and this paper has done excellent work toward that goal.

Clarity, Quality, Novelty And Reproducibility
Clarity: Good
Quality: Good
Novelty: Good
ICLR
1. What are the key contributions and novel aspects introduced by the paper in ensemble and knowledge distillation?
2. What are the strengths of the paper, particularly in its writing, presentation, empirical results, and theoretical analysis?
3. Are there any potential future works to better exploit the theories presented in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper studies the theory of ensemble and knowledge distillation. Insightful studies regarding NTKs and deep neural networks are performed, and novel theoretical results based on the multi-view structure of data are established. I gained many new insights from reading this paper.

Strengths And Weaknesses
Strengths:
• The writing and presentation are good; it is a pleasure to read this paper.
• The empirical results on neural tangent features and deep neural networks are novel and interesting. The authors then provide thorough explanations for them, which is appreciated.
• The theoretical results are rigorous.
Weaknesses:
• New theories usually motivate new algorithms. I would like to know whether there is potential future work that could better exploit these theories.

Clarity, Quality, Novelty And Reproducibility
All aspects are good.
ICLR
Title Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning Abstract We formally study how ensemble of deep learning models can improve test accuracy, and how the superior performance of ensemble can be distilled into a single model using knowledge distillation. We consider the challenging case where the ensemble is simply an average of the outputs of a few independently trained neural networks with the same architecture, trained using the same algorithm on the same data set, and they only differ by the random seeds used in the initialization. We show that ensemble/knowledge distillation in deep learning works very differently from traditional learning theory (such as boosting or NTKs). We develop a theory showing that when data has a structure we refer to as “multi-view”, then ensemble of independently trained neural networks can provably improve test accuracy, and such superior test accuracy can also be provably distilled into a single model. Our result sheds light on how ensemble works in deep learning in a way that is completely different from traditional theorems, and how the “dark knowledge” is hidden in the outputs of the ensemble and can be used in distillation.1 1 INTRODUCTION Ensemble (Dietterich, 2000; Hansen & Salamon, 1990; Polikar, 2006) is one of the most powerful techniques in practice to improve the performance of deep learning. By simply averaging the outputs of merely a few (like 3 or 10) independently-trained neural networks of the same architecture, using the same training method over the same training data, it can significantly boost the prediction accuracy over the test set comparing to individual models. The only difference is the randomness used to initialize these networks and/or the randomness during training. Moreover, it is discovered by Hinton et al. (2015) that such superior performance of the ensemble can be transferred into a single model (of the same size as the individual models) using a technique called knowledge distillation: that is, simply train a single model to match the output of the ensemble (such as “90% cat + 10% car”, also known as soft labels) as opposite to the true data labels, over the same training data. On the theory side, there are lots of works studying the superior performance of ensemble from principled perspectives (see full version for citations). However, most of these works only apply to: (1). Boosting: where the coefficients associated with the combinations of the single models are actually trained, instead of simply taking average; (2). Bootstrapping/Bagging: the training data are different for each single model; (3). Ensemble of models of different types and architectures; or (4). Ensemble of random features or decision trees. To the best of our knowledge, none of these cited works apply to the particular type of ensemble that is widely used in deep learning: simply take a uniform average of the output of the learners, which are neural networks with the same architecture and are trained by stochastic gradient descent (SGD) over the same training set. In fact, very critically, for deep learning models: • TRAINING AVERAGE DOES NOT WORK: if one directly trains to learn an average of individual neural networks initialized by different seeds, the performance is much worse than ensemble. • KNOWLEDGE DISTILLATION WORKS: the superior performance of ensemble in deep learning can be distilled into a single model (Hinton et al., 2015). 
1Full version of this paper can be found on https://arxiv.org/abs/2012.09816. • SELF-DISTILLATION WORKS: even distilling a single model into another of the same size, there is performance boost. (Furlanello et al., 2018; Mobahi et al., 2020; Zhang et al., 2019) We are unaware of any satisfactory theoretical explanation for the phenomena above. For instance, as we shall argue, some traditional view for why ensemble works, such as ‘ensemble can enlarge the feature space in random feature mappings’, even give contradictory explanations to the above phenomena, thus cannot explain knowledge distillation or ensemble in deep learning. Motivated by this gap between theory and practice we study the following question for multi-class classification: Our theoretical questions: How does ensemble improve the test-time performance in deep learning when we simply (unweightedly) average over a few independently trained neural networks? – Especially when all the neural networks have the same architecture, are trained over the same data set using the same standard training algorithm and only differ by the random seeds, and even when all single models already have 100% training accuracy? How can such superior test-time performance of ensemble be later “distilled” into a single neural network of the same architecture, simply by training the single model to match the output of the ensemble over the same training data set? Our results. We prove for certain multi-class classification tasks with a special structure we refer to as multi-view, with a training set Z consisting of N i.i.d. samples from some unknown distribution D, for certain two-layer convolutional network f with (smoothed-)ReLU activation as learner: • (Single model has bad test accuracy): there is a value µ > 0 such that when a single model f is trained over Z using the cross-entropy loss, via gradient descent (GD) starting from random Gaussian initialization, the model can reach zero training error efficiently. However, w.h.p. the prediction (classification) error of f over D is between 0.49µ and 0.51µ. • (Ensemble provably improves test accuracy): let f1, f2, · · · , fL be L = Ω̃(1) independently trained single models as above, then w.h.p. G = 1L ∑ ℓ fℓ has prediction error ≤ 0.01µ over D. • (Ensemble can be distilled into a single model): if we further train (using GD from random initialization) another single model f0 (same architecture as each fℓ) to match the output of G = 1L ∑ ℓ fℓ merely over the same training data set Z , then f0 can be trained efficiently and w.h.p. f0 will have prediction error ≤ 0.01µ over D as well. • (Self-distillation also improves test accuracy): if we further train (using GD from random ini- tialization) another single model f ′ (same architecture as f1) to match the output of the single model f1 merely over the same training data set Z , then f ′ can be trained efficiently and w.h.p. has prediction error at most≤ 0.26µ overD. The main idea is that self-distillation is performing “implicit ensemble + knowledge distillation”, as we shall argue in Section 4.2. We defer discussions of our empirical results to Section 5. However, we highlight some of the empirical findings, as they shall confirm and justify our theoretical approach studying ensemble and knowledge distillation in deep learning. Specifically, we give empirical evidences showing that: • Knowledge distillation does not work for random feature mappings; and ensemble in deep learning is very different from ensemble in random feature mappings (see Figure 1). 
• Special structures in data (such as the “multi-view” structure we shall introduce) is needed for ensemble of neural networks to work. • The variance due to label noise or the non-convex landscape of training, in the independentlytrained models, may not be connected to the superior performance of ensemble in deep learning. 2 OUR METHODOLOGY AND INTUITION 2.1 A FAILURE ATTEMPT USING RANDOM FEATURE MAPPINGS The recent advance in deep learning theory shows that under certain circumstances, neural networks can be treated as a linear function over random feature mappings — see (Allen-Zhu et al., 2019b; Arora et al., 2019b; Daniely et al., 2016; Du et al., 2018b; Jacot et al., 2018; Zou et al., 2018) and the references therein. In particular, the theory shows when f : RD+d → R is a neural network with inputs x ∈ Rd and weights W ∈ RD, in some cases, f(W,x) can be approximated by: f(W,x) ≈ f(W0, x) + ⟨W −W0,∇W f(W0, x)⟩ where W0 is the random initialization of the neural network, and ΦW0(x) := ∇W f(W0, x) is the neural tangent kernel (NTK) feature mapping. This is known as the NTK approach. If this approximation holds, then training a neural network can be approximated by learning a linear function over random features ΦW0(x), which is very theory-friendly. Ensemble works for random features / NTK. Traditional theorems (Alhamdoosh & Wang, 2014; Brown et al., 2005a; Bryll et al., 2003; Tsymbal et al., 2005) suggest that the ensemble of independently trained random feature models can indeed significantly improve test-time performance, as it enlarges the feature space from ΦW0(x) to {ΦW (i)0 (x)}i∈[L] for L many independently sampled W (i) 0 . This can be viewed as a feature selection process (Alvarez et al., 2012; Cai et al., 2018; Oliveira et al., 2003; Opitz, 1999; Rokach, 2010), and we have confirmed it for NTK in practice, see Figure 1. However, can we understand ensemble and knowledge distillation in DL as feature selections using NTK? Unfortunately, our empirical results provide many counter examples towards those arguments, see discussions below and Figure 1. Contradiction 1: training average works even better. Although ensemble of linear functions over NTK features with different random seeds: fi(x) = ⟨W (i),ΦW (i)0 (x)⟩ does improve test accuracy, however, such improvement is mainly due to the use of a larger set of random features, whose combinations contain functions that generalize better. To see this, we observe that an even superior performance (than the ensemble) can simply be obtained by directly training F (x) = 1L ( f1+f2+· · ·+fL ) from random initialization. In contrast, recall if fi(x)’s are multi-layer neural networks with different random seeds, then training their average barely gives any better performance comparing to individual networks fi, as now all the fi’s are capable of learning the same set of features. Contradiction 2: knowledge distillation does not work. For NTK feature mappings, we observe that the result obtained by ensemble cannot be distilled at all into individual models, indicating the features selected by ensemble is not contained in the feature Φ W (i) 0 (x) of any individual model. In contrast, in actual deep learning, ensemble does not enlarge feature space: so an individual neural network is capable of learning the features of the ensemble model. In sum, ensemble in deep learning may be very different from ensemble in random features. 
It may be more accurate to study ensemble / knowledge distillation in deep learning as a feature learning process, instead of a feature selection process. But still, we point out a fundamental difficulty: Key challenge: If a single deep learning model is capable of — through knowledge distillation — learning the features of the ensemble model and achieving better test accuracy comparing to training the single model directly (and the same training accuracy, typically at global optimal of 100%), then why the single model cannot learn these features directly when we train the model to match the true data labels? What is the dark knowledge hidden in the output of ensemble (a.k.a. soft label)2 comparing to the original hard label? 2.2 ENSEMBLE IN DEEP LEARNING: A FEATURE LEARNING PROCESS Before addressing the key challenge, we point out that prior works are very limited with respect to studying neural network training as a feature learning process. Most of the existing works proving that neural networks can learn features only focus on the case when the input is Gaussian or 2For a k-class classification problem, the output of a model g(x) is usually k-dimensional, and represents a soft-max probability distribution over the k target classes. This is known as the soft label. Gaussian-like — see for instance (Kawaguchi, 2016; Soudry & Carmon, 2016; Xie et al., 2016) and many others. However, as we demonstrate in Figure 7 in the full version, Ensemble in DL might not improve test accuracy when inputs are Gaussian-like: Empirically, ensemble does not improve test accuracy in deep learning, in certain scenarios when the distribution of the input data is Gaussian or even mixture of Gaussians. This is true over various learner network structures (fully-connected, residual, convolution neural networks) and various labeling functions (when the labels are generated by linear functions, fully-connected, residual, convolutional networks, with/without label noise, with/without classification margin). Bias variance view of ensemble: Some prior works also try to attribute the benefit of ensemble as reducing the variance of individual solutions due to label noise or non-convex landscape of the training objective. However, reducing such variance can reduce a convex test loss (typically crossentropy), but not necessarily the test classification error. Concretely, the synthetic experiments in Figure 7 show that, after applying ensemble over Gaussian-like inputs, the variance of the model outputs is reduced but the test accuracy is not improved. We give many more empirical evidences to show that the variance (either from label noise or from the non-convex landscape) is usually not the cause for why ensemble works in deep learning, see Section 5. Hence, to understand the true benefit of ensemble in deep learning in theory, we would like to study a setting that can approximate practical deep learning, where: • The input distribution is more structured than standard Gaussian and there is no label noise. (From above discussions, ensemble cannot work for deep learning distribution-freely). • The individual neural networks all are well-trained, in the sense that the training accuracy in the end is 100%, and there is nearly no variance in the test accuracy for individual models. (So training never fails.) In this work, we propose to study a setting of data that we refer to as multi-view, where the above two conditions both hold when we train a two-layer neural networks with (smoothed-)ReLU activations. 
We also argue that the multi-view structure we consider is fairly common in the data sets used in practice, in particular for vision tasks. We give more details below. 2.3 OUR APPROACH: LEARNING MULTI-VIEW DATA Let us first give a thought experiment to illustrate our approach, and we present the precise mathematical definition of the “multi-view” structure in Section 3. Consider a binary classification problem and four “features” v1, v2, v3, v4. The first two features correspond to the first class label, and the next two features correspond to the second class label. In the data distribution: • When the label is class 1, then:3{ both v1, v2 appears with weight 1, one of v3, v4 appears with weight 0.1 w.p. 80%; only v1 appears with weight 1, one of v3, v4 appears with weight 0.1 w.p. 10%; only v2 appears with weight 1, one of v3, v4 appears with weight 0.1 w.p. 10%. • When the label is class 2, then{ both v3, v4 appears with weight 1, one of v1, v2 appears with weight 0.1 w.p. 80%; only v3 appears with weight 1, one of v1, v2 appears with weight 0.1 w.p. 10%; only v4 appears with weight 1, one of v1, v2 appears with weight 0.1 w.p. 10%. 3One can for simplicity think of “v appears with weight α and w appears with weight β” as data = αv + βw + noise. We call the 80% of the data multi-view data: these are the data where multiple features exist and can be used to classify them correctly. We call the rest 20% of the data single-view data: some features for the correct labels are missing. 4 How individual neural networks learn. Under the multi-view data defined above, if we train a neural network using the cross-entropy loss via gradient descent (GD) from random initialization, during the training process of the individual networks, we show that: • The network will quickly pick up one of the feature v ∈ {v1, v2} for the first label, and one of the features v′ ∈ {v3, v4} for the second label. So, 90% of the training examples, consisting of all the multi-view data and half of the single-view data (those with feature v or v′), are classified correctly. Once classified correctly (with a large margin), these data begin to contribute negligible to gradient by the nature of the cross-entropy loss. • Next, the network will memorize (using e.g. the noise in the data) the remaining 10% of the training examples without learning any new features, due to insufficient amount of left-over samples after the first phase, thus achieving training accuracy 100% but test accuracy 90%. How ensemble improves test accuracy. It is simple why ensemble works. Depending on the randomness of initialization, each individual network will pick up v1 or v2 each w.p. 50%. Hence, as long as we ensemble Õ(1) many independently trained models, w.h.p. their ensemble will pick up both features {v1, v2} and both features {v3, v4}. Thus, all the data will be classified correctly. How knowledge distillation works. Perhaps less obvious is how knowledge distillation works. Since ensemble learns all the features v1, v2, v3, v4, given a multi-view data with label 1, the ensemble will actually output ∝ (2, 0.1), where the 2 comes from features v1, v2 and 0.1 comes from one of v3, v4. On the other hand, an individual model learning only one of v3, v4 will actually output ∝ (2, 0) when the feature v3 or v4 in the data does not match the one learned by the model. 
Hence, returning to the thought experiment: by training the individual model to match the output of the ensemble, the individual model is forced to learn both features $v_3, v_4$, even though it has already perfectly classified the training data. This is the “dark knowledge” hidden in the output of the ensemble model. (This theoretical finding is consistent with practice: Figure 8 in the full paper suggests that models trained via knowledge distillation have learned most of the features, so further computing their ensemble does not give much of a performance boost.)

Meaningfulness of our multi-view hypothesis. Such a “multi-view” structure is very common in many of the datasets where deep learning excels. In vision datasets in particular, as illustrated in Figure 2, a car image can be classified as a car by looking at the headlights, the wheels, or the windows. For a typical placement of a car in an image, we can observe all these features and use any of them to classify it as a car. However, there are car images taken from a particular angle where one or more features are missing; for example, an image of a car facing forward might be missing the wheel feature. Moreover, some cars may also carry a small fraction of “cat features”: for example, the headlights might appear similar to cat eyes or the ears of a cat. This can be used as the “dark knowledge” by the single model when learning from the ensemble. In Figure 3, we visualize the learned features from an actual neural network to show that they can indeed capture different views. In Figure 5, we plot the “heatmap” for some car images to illustrate that single models (trained from different random seeds) indeed pick up different parts of the input image to classify it as a car. In Figure 9, we manually delete, for instance, 7/8 of the channels in some intermediate layer of a ResNet, and show that the test accuracy may not be affected by much after ensemble — thus supporting that the multi-view hypothesis can exist even in the intermediate layers of a neural network and that ensemble is indeed collecting all these views.

Significance of our technique. Our work belongs to the generic framework of feature learning in DL, where one proves that certain aspects of the algorithm (e.g., the randomness) affect the order in which features are learned. This is fundamentally different from convex optimization, such as kernel methods, where (with $\ell_2$ regularization) there is a unique global minimum, so the choice of the random seed does not matter (thus, ensemble does not help). There are other works considering other aspects, such as the choice of learning rate, that can affect the order in which features are learned (Li et al., 2019). Our work is fundamentally different: they focus only on the NTK setting, where features are not learned; we study a feature learning process. Recall that the NTK setting cannot be used to explain ensemble and distillation in DL. Our work extends the reach of traditional machine learning theory, where “generalization” is typically treated separately from “optimization.” Such a “separate” treatment might not be enough to understand how deep learning works.

3 PROBLEM SETUP

The “multi-view” data distribution is a straightforward generalization of the intuitive setting in Section 2.3. For simplicity, in the main body we use example choices of the parameters, mainly as functions of $k$ (such as $P = k^2$, $\gamma = \frac{1}{k^{1.5}}$, $\mu = \frac{k^{1.2}}{N}$, $\rho = k^{-0.01}$, $\sigma_0 = 1/\sqrt{k}$, as we shall see), and we consider the case when $k$ is sufficiently large.
In our full version, we shall give a much larger range of parameters for which the theorems hold.

3.1 DATA DISTRIBUTION AND NOTATIONS

We consider learning a $k$-class classification problem over $P$-patch inputs, where each patch has dimension $d$. In symbols, each labelled data point is represented by $(X, y)$, where $X = (x_1, x_2, \cdots, x_P) \in (\mathbb{R}^d)^P$ is the data vector and $y \in [k]$ is the data label. For simplicity, we focus on the case when $P = k^2$ and $d = \mathrm{poly}(k)$ for a large polynomial, and we consider the setting when $k$ is sufficiently large. (If we want to work with fixed $k$, say $k = 2$, our theorem can also be modified to that setting by increasing the number of features per class; we keep the current setting with two features to simplify the notation.) We use “w.h.p.” to denote with probability at least $1 - e^{-\Omega(\log^2 k)}$, and use the $\tilde{O}, \tilde{\Theta}, \tilde{\Omega}$ notions to hide polylogarithmic factors in $k$.

We first assume that each label class $j \in [k]$ has multiple associated features, say two features for mathematical simplicity, represented by unit feature vectors $v_{j,1}, v_{j,2} \in \mathbb{R}^d$. For notational simplicity, we assume that all the features are orthogonal, namely, $\forall j, j' \in [k]$, $\forall \ell, \ell' \in [2]$, $\|v_{j,\ell}\|_2 = 1$ and $v_{j,\ell} \perp v_{j',\ell'}$ when $(j, \ell) \neq (j', \ell')$, although our work also extends trivially to the “incoherent” case. We denote by $\mathcal{V} := \{v_{j,1}, v_{j,2}\}_{j \in [k]}$ the set of all features.

We consider the following data and label distribution. Let $C_p$ be a global constant and $s \in [1, k^{0.2}]$ be a sparsity parameter. To be concise, we define the multi-view distribution $\mathcal{D}_m$ and the single-view distribution $\mathcal{D}_s$ together. Due to space limitations, we hide the specification of the random “noise” and defer it to the full version; at a high level, we allow such “noise” to be any feature noise plus Gaussian noise, such as $\text{noise} = \sum_{v' \in \mathcal{V}} \alpha_{p,v'} v' + \xi_p \in \mathbb{R}^d$, where each $\alpha_{p,v'} \in [0, \gamma]$ can be arbitrary and $\xi_p \sim \mathcal{N}(0, \sigma_p^2 I)$.

Definition 3.1 (data distributions $\mathcal{D}_m$ and $\mathcal{D}_s$). Given $\mathcal{D} \in \{\mathcal{D}_m, \mathcal{D}_s\}$, we define $(X, y) \sim \mathcal{D}$ as follows. First choose the label $y \in [k]$ uniformly at random. Then, the data vector $X$ is generated as follows (also illustrated in Figure 4).
1. Denote by $\mathcal{V}(X) = \{v_{y,1}, v_{y,2}\} \cup \mathcal{V}'$ the set of feature vectors used in this data vector $X$, where $\mathcal{V}'$ is a set of features uniformly sampled from $\{v_{j',1}, v_{j',2}\}_{j' \in [k] \setminus \{y\}}$, each with probability $\frac{s}{k}$.
2. For each $v \in \mathcal{V}(X)$, pick $C_p$ many disjoint patches in $[P]$ and denote them by $\mathcal{P}_v(X) \subset [P]$ (the distribution of these patches can be arbitrary). We denote $\mathcal{P}(X) = \cup_{v \in \mathcal{V}(X)} \mathcal{P}_v(X)$.
3. If $\mathcal{D} = \mathcal{D}_s$ is the single-view distribution, pick a value $\hat{\ell} = \hat{\ell}(X) \in [2]$ uniformly at random.
4. For each $v \in \mathcal{V}(X)$ and $p \in \mathcal{P}_v(X)$, we set $x_p = z_p v + \text{“noise”} \in \mathbb{R}^d$, where the random coefficients $z_p \geq 0$ satisfy the following. In the case of the multi-view distribution $\mathcal{D} = \mathcal{D}_m$:
• $\sum_{p \in \mathcal{P}_v(X)} z_p \in [1, O(1)]$ when $v \in \{v_{y,1}, v_{y,2}\}$ (for instance, the marginal distribution of $Z = \sum_{p \in \mathcal{P}_v(X)} z_p$ can be uniform over $[1, 2]$);
• $\sum_{p \in \mathcal{P}_v(X)} z_p \in [\Omega(1), 0.4]$ when $v \in \mathcal{V}(X) \setminus \{v_{y,1}, v_{y,2}\}$ (for instance, uniform over $[0.2, 0.4]$).
In the case of the single-view distribution $\mathcal{D} = \mathcal{D}_s$:
• $\sum_{p \in \mathcal{P}_v(X)} z_p \in [1, O(1)]$ when $v = v_{y,\hat{\ell}}$;
• $\sum_{p \in \mathcal{P}_v(X)} z_p \in [\rho, O(\rho)]$ when $v = v_{y,3-\hat{\ell}}$;
• $\sum_{p \in \mathcal{P}_v(X)} z_p \in [\Omega(\Gamma), \Gamma]$ when $v \in \mathcal{V}(X) \setminus \{v_{y,1}, v_{y,2}\}$.
5. For each $p \in [P] \setminus \mathcal{P}(X)$, we set $x_p$ to consist only of “noise”.

Remark 3.2. The distribution of how to pick $\mathcal{P}(X)$ and how to assign $\sum_{p \in \mathcal{P}_v(X)} z_p$ to each patch $p \in \mathcal{P}_v(X)$ can be arbitrary (and can depend on other randomness in the data as well). In particular, we have allowed different features $v_{j,1}, v_{j,2}$ to show up with different weights in the data (for example, for multi-view data, some view $v_{y,1}$ can consistently have larger $z_p$ compared to $v_{y,2}$).
Yet, we shall prove that the order in which these features are learned by the learner network can still be flipped, depending on the randomness of the network initialization.

Interpretation of our data distribution. As we argue further in the full paper, our setting can be tied to a down-sized version of convolutional networks applied to image classification data. With a small kernel size, good features in an image typically appear only at a few patches, and most other patches are random noise or low-magnitude feature noise. More importantly, our noise parameters ensure that the concept class is not learnable by linear classifiers or constant-degree polynomials; we believe a (convolutional) neural network with a ReLU-like activation is somewhat necessary.

Our final data distribution $\mathcal{D}$ and the training data set $\mathcal{Z}$ are formally given as follows.

Definition 3.3 ($\mathcal{D}$ and $\mathcal{Z}$). The distribution $\mathcal{D}$ consists of data from $\mathcal{D}_m$ w.p. $1 - \mu$ and from $\mathcal{D}_s$ w.p. $\mu$. We are given $N$ training samples from $\mathcal{D}$, and denote the training data set as $\mathcal{Z} = \mathcal{Z}_m \cup \mathcal{Z}_s$, where $\mathcal{Z}_m$ and $\mathcal{Z}_s$ respectively represent the multi-view and single-view training data. We write $(X, y) \sim \mathcal{Z}$ for $(X, y)$ sampled uniformly at random from the empirical data set, and denote $N_s = |\mathcal{Z}_s|$.

We again, for simplicity, focus on the setting where $\mu = \frac{1}{\mathrm{poly}(k)}$ and we are given $N = k^{1.2}/\mu$ samples, so each label $i$ appears at least $\tilde{\Omega}(1)$ times in $\mathcal{Z}_s$. Our result trivially applies to many other choices of $N$.

3.2 LEARNER NETWORK

We consider a learner network using the following smoothed ReLU activation function $\widetilde{\mathrm{ReLU}}$:

Definition 3.4. For an integer $q \geq 2$ and a threshold $\varrho = \frac{1}{\mathrm{polylog}(k)}$, the smoothed function is $\widetilde{\mathrm{ReLU}}(z) := 0$ for $z \leq 0$; $\widetilde{\mathrm{ReLU}}(z) := \frac{z^q}{q \varrho^{q-1}}$ for $z \in [0, \varrho]$; and $\widetilde{\mathrm{ReLU}}(z) := z - (1 - \frac{1}{q})\varrho$ for $z \geq \varrho$.

Since $\widetilde{\mathrm{ReLU}}$ is smooth, we denote its gradient by $\widetilde{\mathrm{ReLU}}'(z)$. We focus on $q = 4$, while our result applies to other constants $q \geq 3$ (see the full version) or to most other forms of smoothing.

The learner network $F(X) = (F_1(X), \ldots, F_k(X)) \in \mathbb{R}^k$ is a two-layer convolutional network parameterized by $w_{i,r} \in \mathbb{R}^d$ for $i \in [k]$, $r \in [m]$, satisfying
$$\forall i \in [k]: \quad F_i(X) = \sum_{r \in [m]} \sum_{p \in [P]} \widetilde{\mathrm{ReLU}}(\langle w_{i,r}, x_p \rangle).$$
Although there exists a network with $m = 2$ that can classify the data correctly (e.g., $w_{i,r} = v_{i,r}$ for $r \in [2]$), in this paper, for efficient optimization purposes, it is convenient to work with a moderate level of over-parameterization: $m \in [\mathrm{polylog}(k), k]$. Our lower bounds hold for any $m$ in this range, and our upper bounds hold even for small over-parameterization $m = \mathrm{polylog}(k)$.

Training a single model. We learn the concept class (namely, the labeled data distribution) using gradient descent with learning rate $\eta > 0$ over the cross-entropy loss function $L$, using $N$ training data points $\mathcal{Z} = \{(X_i, y_i)\}_{i \in [N]}$. We denote the empirical loss as
$$L(F) = \frac{1}{N} \sum_{i \in [N]} L(F; X_i, y_i) = \mathbb{E}_{(X,y) \sim \mathcal{Z}}[L(F; X, y)], \quad \text{where } L(F; X, y) = -\log \frac{e^{F_y(X)}}{\sum_{j \in [k]} e^{F_j(X)}}.$$
We randomly initialize the network $F$ by letting each $w^{(0)}_{i,r} \sim \mathcal{N}(0, \sigma_0^2 I)$ for $\sigma_0^2 = 1/k$, which is the most standard initialization used in practice. To train a single model, at each iteration $t$ we update using gradient descent (GD):
$$w^{(t+1)}_{i,r} \leftarrow w^{(t)}_{i,r} - \eta \, \mathbb{E}_{(X,y) \sim \mathcal{Z}} \nabla_{w_{i,r}} L(F^{(t)}; X, y) \tag{3.1}$$
(Our result also extends to the case with weight decay, discussed in the full version.) We run the algorithm for $T = \mathrm{poly}(k)/\eta$ iterations, and use $F^{(t)}$ to denote the model $F$ with hidden weights $\{w^{(t)}_{i,r}\}$ at iteration $t$.

Notations. We denote by $\mathrm{logit}_i(F, X) := \frac{e^{F_i(X)}}{\sum_{j \in [k]} e^{F_j(X)}}$.
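To fix notation, here is a small numpy sketch of Definition 3.4 and of this learner's forward pass, together with the logit just defined. The sizes k, P, d, m and the threshold rho are small illustrative choices, nowhere near the paper's asymptotic regime.

```python
# Smoothed ReLU (Definition 3.4) and the two-layer learner
# F_i(X) = sum_{r,p} ReLU~(<w_{i,r}, x_p>), plus logit_i(F, X).
import numpy as np

def smoothed_relu(z, q=4, rho=0.1):
    out = np.where(z >= rho, z - (1 - 1 / q) * rho, 0.0)   # linear regime
    mid = (z > 0) & (z < rho)                              # polynomial regime
    return np.where(mid, np.clip(z, 0, None) ** q / (q * rho ** (q - 1)), out)

k, P, d, m = 6, 36, 30, 8                        # classes, patches, dim, width
rng = np.random.default_rng(0)
W = rng.normal(0, 1 / np.sqrt(k), (k, m, d))     # w_{i,r} ~ N(0, (1/k) I)

def F(X, W):
    """X: (P, d) patch matrix -> (k,) outputs, summing over r and p."""
    pre = np.einsum('imd,pd->imp', W, X)         # inner products <w_{i,r}, x_p>
    return smoothed_relu(pre).sum(axis=(1, 2))

def logit(FX):
    e = np.exp(FX - FX.max())                    # numerically stable softmax
    return e / e.sum()

X = rng.standard_normal((P, d))
print(F(X, W).shape, logit(F(X, W)).sum())       # (6,) ~1.0
```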
Using this notation, we can write down, for all $i \in [k]$, $r \in [m]$:
$$-\nabla_{w_{i,r}} L(F; X, y) = \left(\mathbb{1}_{i=y} - \mathrm{logit}_i(F, X)\right) \nabla_{w_{i,r}} F_i(X).$$

4 MAIN THEOREMS AND EXPLANATIONS

We now state the main theorems (the one for self-distillation is in the full paper; we shall restate these theorems in the full version with more details and a wider range of parameters).

Theorem 1 (single model). For every sufficiently large $k > 0$, every $m \in [\mathrm{polylog}(k), k]$, and every $\eta \leq \frac{1}{\mathrm{poly}(k)}$, suppose we train a single model using the gradient descent update (3.1) starting from the random initialization defined in Section 3.2. Then, after $T = \frac{\mathrm{poly}(k)}{\eta}$ many iterations, with probability $\geq 1 - e^{-\Omega(\log^2 k)}$, the model $F^{(T)}$ satisfies:
• (training is perfect): for all $(X, y) \in \mathcal{Z}$ and all $i \in [k] \setminus \{y\}$: $F^{(T)}_y(X) > F^{(T)}_i(X)$.
• (test accuracy is consistently bad): $\Pr_{(X,y) \sim \mathcal{D}}\big[\exists i \in [k] \setminus \{y\} : F^{(T)}_y(X) < F^{(T)}_i(X)\big] \in [0.49\mu, 0.51\mu]$.

We shall give technical intuitions about why Theorem 1 holds in the full version. At a high level, we construct a “lottery winning” set $\mathcal{M} \subseteq [k] \times [2]$ of cardinality $|\mathcal{M}| \in [k(1 - o(1)), k]$, which depends only on the random initialization of $F$. Then, with some effort, we can prove that for every $(i, \ell) \in \mathcal{M}$, at the end of training $F^{(T)}$ will have learned feature $v_{i,\ell}$ but not feature $v_{i,3-\ell}$. This means that for those single-view data $(X, y)$ with $y = i$ and $\hat{\ell}(X) = 3 - \ell$, the final network $F^{(T)}$ will predict the label wrong. This is why the final test accuracy is around $0.5\mu$. Note that the property that the test accuracy consistently belongs to the range $[0.49\mu, 0.51\mu]$ should be reminiscent of message ⑤ in Figure 6, where multiple single models, although starting from different random initializations, in practice do have a relatively small variance in test accuracies.

Ensemble. Suppose $\{F^{[\ell]}\}_{\ell \in [K]}$ are $K = \tilde{\Omega}(1)$ independently trained copies of $F$ with $m = \mathrm{polylog}(k)$, trained for $T = O\big(\frac{\mathrm{poly}(k)}{\eta}\big)$ iterations (i.e., the same setting as Theorem 1, except we only need a small over-parameterization $m = \mathrm{polylog}(k)$). Let us define their ensemble
$$G(X) = \frac{\tilde{\Theta}(1)}{K} \sum_{\ell} F^{[\ell]}(X). \tag{4.1}$$

Theorem 2 (ensemble). In the same setting as Theorem 1, except that now we only need a small $m = \mathrm{polylog}(k)$, the ensemble model $G$ in (4.1) satisfies, with probability at least $1 - e^{-\Omega(\log^2 k)}$:
• (training is perfect): for all $(X, y) \in \mathcal{Z}$ and all $i \in [k] \setminus \{y\}$: $G_y(X) > G_i(X)$.
• (test accuracy is almost perfect): $\Pr_{(X,y) \sim \mathcal{D}}\big[\exists i \in [k] \setminus \{y\} : G_y(X) < G_i(X)\big] \leq 0.001\mu$.

As discussed in Section 2.3, the reason Theorem 2 holds is that the lottery winning sets $\mathcal{M}$ depend on the random initialization of the networks; therefore, when multiple models are put together, the “union” of their sets $\mathcal{M}$ covers all possible features $\{v_{i,\ell}\}_{(i,\ell) \in [k] \times [2]}$. Moreover, our theorem only requires $K = \tilde{\Omega}(1)$ individual models for the ensemble, which is indeed “averaging the outputs of a few independently trained models”.

4.1 KNOWLEDGE DISTILLATION FOR ENSEMBLE

We consider a knowledge distillation algorithm given the existing ensemble model $G$ (see (4.1)) as follows. For every label $i \in [k]$, let us define the truncated scaled logit as (for $\tau = \frac{1}{\log^2 k}$):
$$\mathrm{logit}^\tau_i(F, X) = \frac{e^{\min\{\tau^2 F_i(X),\, 1\}/\tau}}{\sum_{j \in [k]} e^{\min\{\tau^2 F_j(X),\, 1\}/\tau}} \tag{4.2}$$
(This should be reminiscent of the logit function with temperature used in the original knowledge distillation work (Hinton et al., 2015); we use truncation instead, which is easier to analyze.)
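In code, (4.2) is a one-line transformation of the model outputs; a sketch follows, with tau left as a free parameter (the paper takes tau = 1/log^2 k).

```python
# A sketch of the truncated scaled logit in equation (4.2).
import numpy as np

def truncated_logit(FX, tau):
    """logit^tau_i(F, X) = softmax_i( min(tau^2 * F_i(X), 1) / tau )."""
    s = np.minimum(tau ** 2 * FX, 1.0) / tau
    e = np.exp(s - s.max())                    # stabilized softmax
    return e / e.sum()

print(truncated_logit(np.array([3.0, -1.0, 0.5]), tau=0.25))
```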
Now, we train a new network $F$ from random initialization (where the randomness is independent of all of that used in the $F^{[\ell]}$). At every iteration $t$, we update each weight $w_{i,r}$ by:
$$w^{(t+1)}_{i,r} = w^{(t)}_{i,r} - \eta \nabla_{w_{i,r}} L(F^{(t)}) - \eta' \, \mathbb{E}_{(X,y) \sim \mathcal{Z}}\Big[\big(\mathrm{logit}^\tau_i(F^{(t)}, X) - \mathrm{logit}^\tau_i(G, X)\big)^{-} \, \nabla_{w_{i,r}} F^{(t)}_i(X)\Big] \tag{4.3}$$
Notation. Throughout the paper, we denote $[a]^+ = \max\{0, a\}$ and $[a]^- = \min\{0, a\}$.

This knowledge distillation method (4.3) is almost identical to the one used in the original work (Hinton et al., 2015), except that we use a truncation during training to make it more (theoretically) stable. Moreover, we update the distillation objective with a larger learning rate $\eta'$ compared to the rate $\eta$ of the cross-entropy objective; this is also consistent with the training schedule used in (Hinton et al., 2015). Let $F^{(t)}$ be the resulting network obtained by (4.3) at iteration $t$. We have the following theorem:

Theorem 3 (ensemble distillation). Consider the distillation algorithm (4.3) in which $G$ is the ensemble model defined in (4.1). For every $k > 0$, for $m = \mathrm{polylog}(k)$, and for every $\eta \leq \frac{1}{\mathrm{poly}(k)}$, setting $\eta' = \eta \, \mathrm{poly}(k)$, after $T = \frac{\mathrm{poly}(k)}{\eta}$ many iterations, with probability at least $1 - e^{-\Omega(\log^2 k)}$, for at least 90% of the iterations $t \leq T$:
• (training is perfect): for all $(X, y) \in \mathcal{Z}$ and all $i \in [k] \setminus \{y\}$: $F^{(t)}_y(X) > F^{(t)}_i(X)$.
• (test accuracy is almost perfect): $\Pr_{(X,y) \sim \mathcal{D}}\big[\exists i \in [k] \setminus \{y\} : F^{(t)}_y(X) < F^{(t)}_i(X)\big] \leq 0.001\mu$.

Remark. Theorem 3 necessarily means that the distilled model $F$ has learned all the features $\{v_{i,\ell}\}_{(i,\ell) \in [k] \times [2]}$ from the ensemble model $G$. This is consistent with our empirical findings in Figure 8: if one trains multiple individual models using knowledge distillation with different random seeds, then their ensemble gives no further performance boost.
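To make (4.3) concrete, here is a sketch of a single distillation step for the two-layer learner. Helper definitions are repeated from the earlier sketches so the block runs on its own; the step sizes eta and eta_p (standing for eta'), tau, and rho are illustrative values, and the expectation over Z is replaced by one sample.

```python
import numpy as np

def smoothed_relu(z, q=4, rho=0.1):
    out = np.where(z >= rho, z - (1 - 1 / q) * rho, 0.0)
    mid = (z > 0) & (z < rho)
    return np.where(mid, np.clip(z, 0, None) ** q / (q * rho ** (q - 1)), out)

def smoothed_relu_grad(z, q=4, rho=0.1):
    g = np.where(z >= rho, 1.0, 0.0)
    mid = (z > 0) & (z < rho)
    return np.where(mid, np.clip(z, 0, None) ** (q - 1) / rho ** (q - 1), g)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def truncated_logit(FX, tau):                          # equation (4.2)
    return softmax(np.minimum(tau ** 2 * FX, 1.0) / tau)

def distill_step(W, X, y, G_out, eta=0.1, eta_p=1.0, tau=0.25, rho=0.1):
    """One update of (4.3); W: (k, m, d) weights, X: (P, d) patches,
    y: true label, G_out: (k,) ensemble outputs G_i(X)."""
    pre = np.einsum('imd,pd->imp', W, X)               # <w_{i,r}, x_p>
    FX = smoothed_relu(pre, rho=rho).sum(axis=(1, 2))  # F_i(X)
    # grad_F[i, r, :] = sum_p ReLU~'(<w_{i,r}, x_p>) x_p
    grad_F = np.einsum('imp,pd->imd', smoothed_relu_grad(pre, rho=rho), X)
    ce = softmax(FX)                                   # cross-entropy part:
    ce[y] -= 1.0                                       # logit_i - 1_{i=y}
    gap = truncated_logit(FX, tau) - truncated_logit(G_out, tau)
    neg = np.minimum(gap, 0.0)                         # [.]^- in (4.3)
    return (W - eta * ce[:, None, None] * grad_F
              - eta_p * neg[:, None, None] * grad_F)

rng = np.random.default_rng(0)
k, m, d, P = 6, 8, 30, 36
W = rng.normal(0, 1 / np.sqrt(k), (k, m, d))
W = distill_step(W, rng.standard_normal((P, d)), y=2, G_out=rng.standard_normal(k))
print(W.shape)  # (6, 8, 30)
```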
1. What is the focus of the paper regarding ensemble, knowledge distillation, and self-distillation in deep learning?
2. What are the strengths of the proposed approach, particularly in terms of its ability to explain the phenomenon in deep learning?
3. Do you have any concerns or questions regarding the applicability of the proposed method to other types of data structures, such as text or graphs?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper investigated the superior performance of ensemble, knowledge distillation, and self-distillation in the field of deep learning. As previous principles of ensemble for traditional machine learning algorithms do not apply to deep learning methods, a novel perspective of "multi-view" data was proposed to explain the phenomenon. Both empirical and theoretical analyses support the multi-view theory for ensemble in deep learning.
Strengths And Weaknesses
Strengths:
The paper analyzed an underexplored problem of how ensemble works in deep learning methods. It pointed out contradictions when applying previous principles of traditional algorithms and shed light on understanding deep learning.
On the theory side, the analysis was rigorous, with clear definitions and concrete examples to help better digest the theorems.
Weaknesses:
I am just curious whether the hypothesis of multi-view data structure can be extended to different data structures such as texts and graphs. Those data are very different from the image data discussed in this paper, and 'multi-view' might not be applicable to them.
It is not good practice to include "self-distillation" in the title but defer all details about it to the appendix, which was somewhat misleading even though it was due to the page limit.
Clarity, Quality, Novelty And Reproducibility
Overall, the paper is clearly written and well organized. As it was primarily a theory paper, it was fine to include only a few empirical results in the main paper. The multi-view data structure was novel to the community and made the first attempt to interpret how ensemble and knowledge distillation work in the field of deep learning. In addition, the theoretical analysis was rigorous, except for some minor typos in Contradiction 2 on page 2, where $f_i$ and $g_i$ were used inconsistently.
ICLR
Title
Maximum Likelihood Estimation for Multimodal Learning with Missing Modality
Abstract
Multimodal learning has achieved great successes in many scenarios. Compared with unimodal learning, it can effectively combine the information from different modalities to improve the performance of learning tasks. In reality, the multimodal data may have missing modalities due to various reasons, such as sensor failure and data transmission error. In previous works, the information of the modality-missing data has not been well exploited. To address this problem, we propose an efficient approach based on maximum likelihood estimation to incorporate the knowledge in the modality-missing data. Specifically, we design a likelihood function to characterize the conditional distributions of the modality-complete data and the modality-missing data, which is theoretically optimal. Moreover, we develop a generalized form of the softmax function to effectively implement maximum likelihood estimation in an end-to-end manner. Such a training strategy guarantees the computability of our algorithm. Finally, we conduct a series of experiments on real-world multimodal datasets. Our results demonstrate the effectiveness of the proposed approach, even when 95% of the training data has missing modality.

1 INTRODUCTION
Multimodal learning is an important research area, which builds models to process and relate information between different modalities (Ngiam et al., 2011; Srivastava & Salakhutdinov, 2014; Baltrušaitis et al., 2018). Compared with unimodal learning, multimodal learning can achieve better performance by properly utilizing the multimodal data. It has been successfully used in many applications, such as multimodal emotion recognition (Soleymani et al., 2011; Mittal et al., 2020), multimedia event detection (Li et al., 2020), and visual question-answering (Yu et al., 2019). With the emergence of big data, multimodal learning becomes more and more important for combining the multimodal data from different sources.
A number of previous works (Tzirakis et al., 2017; Zhang et al., 2017; Elliott et al., 2017; Kim et al., 2020; Zhang et al., 2020) have achieved great successes based on complete observations during the training process. However, in practice, the multimodal data may have missing modalities (Du et al., 2018; Ma et al., 2021a;b). This may be caused by various reasons; for instance, the sensor that collects the multimodal data is damaged, or the network transmission fails. Examples of such multimodal data are shown in Figure 1.
In the past years, different approaches have been proposed to deal with modality missing. A simple and typical way (Hastie et al., 2009) is to directly discard the data with missing modalities. Since the information contained in the modality-missing data is neglected, such a method often has limited performance. In addition, researchers (Tran et al., 2017; Chen & Zhang, 2020; Liu et al., 2021; Ma et al., 2021b) have proposed approaches to heuristically combine the information of the modality-missing data. However, most of these works lack theoretical explanations, and these empirical methods are often implemented using multiple training stages rather than in an end-to-end manner, which leads to the information of the modality-missing data not being well exploited.
To tackle the above issues, we propose an efficient approach based on maximum likelihood estimation to effectively utilize the modality-missing data.
To be specific, we present a likelihood function to characterize the conditional distributions of the modality-complete data and the modality-missing data, which is theoretically optimal. Furthermore, we adopt a generalized form of the softmax function to efficiently implement our maximum likelihood estimation algorithm. Such a training strategy guarantees the computability of our framework in an end-to-end scheme. In this way, our approach can effectively leverage the information of the modality-missing data during the training process. Finally, we perform several experiments on real-world multimodal datasets, including eNTERFACE'05 (Martin et al., 2006) and RAVDESS (Livingstone & Russo, 2018). The results show the effectiveness of our approach in handling the problem of modality missing. To summarize, our contribution is three-fold:
• We design a likelihood function to learn the conditional distributions of the modality-complete data and the modality-missing data, which is theoretically optimal.
• We develop a generalized form of the softmax function to implement our maximum likelihood estimation framework in an end-to-end manner, which is more effective than previous works.
• We conduct a series of experiments on real-world multimodal datasets. The results validate the effectiveness of our approach, even when 95% of the training data has missing modality.

2 METHODOLOGY
Our goal is to deal with the problem of modality missing in multimodal learning based on maximum likelihood estimation. In the following, we first show the problem formulation, and then describe the details of our framework.

2.1 PROBLEM FORMULATION
In this paper, we consider multimodal data with two modalities. Here, the random variables corresponding to these two modalities and their category labels are denoted as $X$, $Y$, and $Z$, respectively. In the training process, we assume that there are two independently observed datasets: modality-complete and modality-missing. We use $\mathcal{D}_{XYZ} = \{(x^{(i)}_c, y^{(i)}_c, z^{(i)}_c) \mid z^{(i)}_c \in \mathcal{Z} = \{1, 2, \cdots, |\mathcal{Z}|\}\}_{i=1}^{n_c}$ to represent the modality-complete dataset, where $x^{(i)}_c$ and $y^{(i)}_c$ represent the two modalities of the $i$-th sample of $\mathcal{D}_{XYZ}$, $z^{(i)}_c$ is their corresponding category label, and the size of $\mathcal{D}_{XYZ}$ is $n_c$. We then use $\mathcal{D}_{XZ} = \{(x^{(i)}_m, z^{(i)}_m) \mid z^{(i)}_m \in \mathcal{Z} = \{1, 2, \cdots, |\mathcal{Z}|\}\}_{i=1}^{n_m}$ to represent the modality-missing dataset, where the size of $\mathcal{D}_{XZ}$ is $n_m$. In addition, we adopt $[\mathcal{D}_{XYZ}]_{XY}$ to represent $\{(x^{(i)}_c, y^{(i)}_c)\}_{i=1}^{n_c}$; $[\mathcal{D}_{XYZ}]_Z$, $[\mathcal{D}_{XZ}]_X$, and $[\mathcal{D}_{XZ}]_Z$ are expressed in the same way. The multimodal data of $\mathcal{D}_{XYZ}$ and $\mathcal{D}_{XZ}$ are assumed to be i.i.d. generated from an unknown underlying joint distribution. By utilizing the knowledge of the modality-complete data and the modality-missing data, we hope our framework can predict the category labels correctly.

2.2 MAXIMUM LIKELIHOOD ESTIMATION FOR MISSING MODALITY
In this section, we first present how to design a likelihood function to learn the conditional distributions of the modality-complete data and the modality-missing data. Then, we show how, by adopting a generalized form of the softmax function, we design a training strategy to implement our algorithm.

2.2.1 LIKELIHOOD FUNCTION ANALYSES
Maximum likelihood estimation is a statistical method that uses the observed data to estimate a distribution by maximizing the likelihood function; the estimated distribution makes the observed data most likely (Myung, 2003). With this idea, we study the likelihood function on the datasets $\mathcal{D}_{XYZ}$ and $\mathcal{D}_{XZ}$.
For the classification task, the conditional likelihood is commonly used. Inspired by this, we use a model $Q_{XYZ}$ to learn the underlying joint distribution of $\mathcal{D}_{XYZ}$ and $\mathcal{D}_{XZ}$. The conditional likelihood can be represented as:
$$\ell \triangleq P([\mathcal{D}_{XYZ}]_Z, [\mathcal{D}_{XZ}]_Z \mid [\mathcal{D}_{XYZ}]_{XY}, [\mathcal{D}_{XZ}]_X ; Q_{XYZ}) \overset{(a)}{=} P([\mathcal{D}_{XYZ}]_Z \mid [\mathcal{D}_{XYZ}]_{XY}; Q_{XYZ}) \cdot P([\mathcal{D}_{XZ}]_Z \mid [\mathcal{D}_{XZ}]_X ; Q_{XYZ}) \overset{(b)}{=} \prod_{(x,y,z) \in \mathcal{D}_{XYZ}} Q_{Z|XY}(z|xy) \cdot \prod_{(x,z) \in \mathcal{D}_{XZ}} Q_{Z|X}(z|x) \quad (1)$$
where step (a) follows from the fact that the datasets $\mathcal{D}_{XYZ}$ and $\mathcal{D}_{XZ}$ are observed independently, and step (b) is due to the samples in each dataset being i.i.d. Here $Q_{Z|XY}$ and $Q_{Z|X}$ are conditional distributions of $Q_{XYZ}$. In this way, we obtain the likelihood function using the information of $\mathcal{D}_{XYZ}$ and $\mathcal{D}_{XZ}$. Then, we use the negative log-likelihood as the loss function to train our deep learning model, i.e.,
$$L \triangleq -\log \ell = -\sum_{(x,y,z) \in \mathcal{D}_{XYZ}} \log Q_{Z|XY}(z|xy) - \sum_{(x,z) \in \mathcal{D}_{XZ}} \log Q_{Z|X}(z|x) \quad (2)$$
It is worth noting that in (Daniels, 1961; Lehmann, 2004), maximum likelihood estimation is proved to be an asymptotically efficient strategy, which guarantees the theoretical optimality of our method for dealing with modality missing. To optimize $L$, we use deep neural networks to extract $k$-dimensional feature representations from the observation $(x, y, z)$, represented as $f(x) = [f_1(x), f_2(x), \cdots, f_k(x)]^{\mathrm{T}}$, $g(y) = [g_1(y), g_2(y), \cdots, g_k(y)]^{\mathrm{T}}$, and $h(z) = [h_1(z), h_2(z), \cdots, h_k(z)]^{\mathrm{T}}$, respectively. We then utilize these features to learn $Q_{Z|XY}$ and $Q_{Z|X}$ in $L$. Our framework is shown in Figure 2.
In this way, we obtain the log-likelihood function $L$. By characterizing the conditional distributions of the modality-complete data and the modality-missing data, it leverages the underlying structure information behind the multimodal data, which constitutes the theoretical basis of our framework.

2.2.2 MAXIMUM LIKELIHOOD ESTIMATION IMPLEMENTATION
In fact, it is not easy to optimize the log-likelihood function $L$ in Equation (2) by designing neural networks, mainly for two reasons. Firstly, the representations of the high-dimensional data and the procedure to model them are complicated. Secondly, since $Q_{Z|XY}$ and $Q_{Z|X}$ in $L$ are related, it is difficult to build models to learn their relationship. To address these two issues, we develop a generalized form of the softmax function to describe $Q_{XYZ}$ as follows (strictly speaking, $R_X$ and $R_Y$ are probability density functions and $R_Z$ is a probability mass function, so the denominator of Equation (3) should be integrated over $R_X$ and $R_Y$; we use summation here for simplicity of exposition):
$$Q_{XYZ}(x, y, z) = \frac{R_X(x) R_Y(y) R_Z(z) \exp(\varphi^{\mathrm{T}}(f(x), g(y)) h(z))}{\sum_{x', y', z'} R_X(x') R_Y(y') R_Z(z') \exp(\varphi^{\mathrm{T}}(f(x'), g(y')) h(z'))} \quad (3)$$
where $\varphi(f, g)$ represents the function used to fuse the features $f$ and $g$. We study three forms of $\varphi$ to investigate its effect in our framework, as shown in Figure 3. $R_X$, $R_Y$, and $R_Z$ represent the underlying marginal distributions of the variables $X$, $Y$, and $Z$, respectively. Their use makes the denominator of Equation (3) an average over $R_X$, $R_Y$, and $R_Z$, which serves as the normalization making $Q_{XYZ}$ a valid distribution and is helpful for our further derivation. In addition, the generalized softmax function we propose can be regarded as a generalization of softmax learning in (Xu et al., 2018) from unimodal learning to multimodal learning.
In this way, we express the distribution $Q_{XYZ}$ by adopting a generalized form of the softmax function, which has the following two benefits. Firstly, by depicting the representation of $Q_{XYZ}$, we can further derive $Q_{Z|XY}$ and $Q_{Z|X}$. This makes our approach a unified framework to combine the information of the modality-complete data and the modality-missing data.
Secondly, it avoids modeling the relationship between $Q_{Z|XY}$ and $Q_{Z|X}$; in fact, the correlation between high-dimensional data can be rather complex. We then derive the conditional distributions $Q_{Z|XY}$ and $Q_{Z|X}$ from Equation (3):
$$Q_{Z|XY}(z|xy) = \frac{R_Z(z) \exp(\varphi^{\mathrm{T}}(f(x), g(y)) h(z))}{\sum_{z'} R_Z(z') \exp(\varphi^{\mathrm{T}}(f(x), g(y)) h(z'))} \quad (4)$$
and
$$Q_{Z|X}(z|x) = \frac{R_Z(z) \sum_{y'} R_Y(y') \exp(\varphi^{\mathrm{T}}(f(x), g(y')) h(z))}{\sum_{z'} R_Z(z') \sum_{y'} R_Y(y') \exp(\varphi^{\mathrm{T}}(f(x), g(y')) h(z'))} \quad (5)$$
We can observe that, by introducing $R_X$, $R_Y$, and $R_Z$ into $Q_{XYZ}$, the derived $Q_{Z|XY}$ and $Q_{Z|X}$ are expressed as means over $R_Y$ and $R_Z$. In practice, we can use the empirical mean as an estimate. Correspondingly, by plugging Equations (4) and (5) into Equation (2), we can summarize the detailed steps to compute our objective function $L$, as shown in Algorithm 1. It is worth pointing out that when we compute $Q_{Z|X}$, we need to use the information of the modality $y$. Since the modality $y$ of the dataset $\mathcal{D}_{XZ}$ is missing during training, we utilize samples of the modality $y$ from the dataset $\mathcal{D}_{XYZ}$ to compute $Q_{Z|X}$. Finally, we utilize neural networks to extract the features $f$, $g$, and $h$ from the modality-complete data and the modality-missing data to optimize our log-likelihood function $L$. The framework performs classification directly; it does not need to explicitly impute the modality-missing data before the classification task.

Algorithm 1: Compute our objective function on a mini-batch.
Input: a modality-complete batch $\{(x^{(i)}_c, y^{(i)}_c, z^{(i)}_c)\}_{i=1}^{n_1}$, where $n_1$ is the batch size; a modality-missing batch $\{(x^{(i)}_m, z^{(i)}_m)\}_{i=1}^{n_2}$, where $n_2$ is the batch size; neural networks with $k$ output units: $f$, $g$, and $h$.
Output: the value of our objective $L$.
1. Compute the empirical label distribution $\hat{R}_Z$: $\hat{R}_Z(z) \leftarrow \frac{\sum_{i=1}^{n_1} \mathbb{1}(z^{(i)}_c = z) + \sum_{i=1}^{n_2} \mathbb{1}(z^{(i)}_m = z)}{n_1 + n_2}$, for $z = 1, 2, \cdots, |\mathcal{Z}|$.
2. Compute $Q_{Z|XY}$: $Q_{Z|XY}(z^{(i)}_c \mid x^{(i)}_c, y^{(i)}_c) \leftarrow \frac{\hat{R}_Z(z^{(i)}_c) \exp(\varphi^{\mathrm{T}}(f(x^{(i)}_c), g(y^{(i)}_c)) h(z^{(i)}_c))}{\sum_{z'=1}^{|\mathcal{Z}|} \hat{R}_Z(z') \exp(\varphi^{\mathrm{T}}(f(x^{(i)}_c), g(y^{(i)}_c)) h(z'))}$, for $i = 1, \cdots, n_1$.
3. Compute $Q_{Z|X}$: $Q_{Z|X}(z^{(i)}_m \mid x^{(i)}_m) \leftarrow \frac{\hat{R}_Z(z^{(i)}_m) \frac{1}{n_1} \sum_{j=1}^{n_1} \exp(\varphi^{\mathrm{T}}(f(x^{(i)}_m), g(y^{(j)}_c)) h(z^{(i)}_m))}{\sum_{z'=1}^{|\mathcal{Z}|} \hat{R}_Z(z') \frac{1}{n_1} \sum_{j=1}^{n_1} \exp(\varphi^{\mathrm{T}}(f(x^{(i)}_m), g(y^{(j)}_c)) h(z'))}$, for $i = 1, \cdots, n_2$.
4. Compute our empirical objective $L$: $-\sum_{i=1}^{n_1} \log Q_{Z|XY}(z^{(i)}_c \mid x^{(i)}_c, y^{(i)}_c) - \sum_{i=1}^{n_2} \log Q_{Z|X}(z^{(i)}_m \mid x^{(i)}_m)$.

3 EXPERIMENTS
In this section, we first describe the real-world multimodal datasets used in our experiments, then explain the experimental settings and baseline methods, and finally give the experimental results showing the effectiveness of our approach.

3.1 DATASETS
We perform experiments on two public real-world multimodal datasets: eNTERFACE'05 (Martin et al., 2006) and RAVDESS (Livingstone & Russo, 2018). eNTERFACE'05 is an audio-visual emotion database in English. It contains 42 subjects eliciting the six basic emotions: anger, disgust, fear, happiness, sadness, and surprise. There are 213 videos for happiness and 216 videos for each of the remaining emotions. Following (Ma et al., 2020), we extract 30 segment samples from each video and obtain a processed dataset with 38,790 samples. RAVDESS is a multimodal database of emotional speech and song, which consists of 24 professional actors speaking in a neutral North American accent.
Here, we use the speech part, which includes calm, happy, sad, angry, fearful, surprised, and disgusted expressions. Each recording is also in video form. As with the eNTERFACE'05 dataset, we only consider six basic emotions, each of which has 5,760 segment samples.

3.2 EXPERIMENTAL SETTINGS
We perform experiments on the processed eNTERFACE'05 and RAVDESS datasets. Each segment of these two datasets has a duration of 0.5 seconds. As shown in (Ma et al., 2020), consecutive frames within 0.5 seconds usually express the same emotion in a similar way, which inspired us to choose the central frame of each segment as the visual modality. This technique not only ensures that the visual data contains enough emotional information, but also avoids the redundancy in multiple frames. Besides, the log Mel-spectrogram, which is similar to an RGB image, is extracted from each segment as the audio modality. We then feed these data into our framework to obtain the classification result. ResNet-50 (He et al., 2016) is used as the backbone of the visual network $f$ and the audio network $g$ to extract features from the visual and audio modalities, respectively. In addition, we transform the corresponding label into one-hot form and then extract the label feature using the label network $h$ with a fully connected layer. $f$, $g$, and $h$ are trained together.
On each processed dataset, we split all data into three parts: training set, validation set, and test set, with proportions of 70%, 15%, and 15%. In practice, modality missing often occurs with a high missing rate (Suo et al., 2019; Ma et al., 2021b). Here, in the training stage, we study three missing rates: 80%, 90%, and 95%. The case where the audio modality is missing and the case where the visual modality is missing are investigated separately. Following (Yu et al., 2020; Chen & Zhang, 2020; Du et al., 2021), we let modality missing arise during the training phase to show that a large amount of unimodal data can assist the training of our multimodal learning framework. In the inference phase, we use Equation (4) to predict the class label of the given test data. Finally, we run each experiment five times and report average test accuracies to evaluate the performance of our approach and the baseline methods. The Adam optimizer (Kingma & Ba, 2015) is used to train the neural networks with a learning rate of 0.0001. Both the size of the modality-complete batch and the size of the modality-missing batch are set to 90. The number of epochs is set to 100. All experiments are implemented in PyTorch (Paszke et al., 2019) on an NVIDIA TITAN V GPU card.

3.3 BASELINE METHODS
To show the effectiveness of our method, we compare our approach with the following methods, which can also handle modality missing to some extent.
• Discarding Modality-incomplete Data (Lower Bound): One simple strategy to handle modality missing is to directly discard the modality-incomplete data and then only use the modality-complete data for the classification task. This method does not use the information of the data with missing modalities. In our maximum likelihood estimation model, this is equivalent to calculating $Q_{Z|XY}$ without calculating $Q_{Z|X}$; therefore, this method can also serve as an ablation study of our approach.
• Hirschfeld-Gebelein-Rényi Maximal Correlation (Hirschfeld, 1935; Gebelein, 1941; Rényi, 1959) (HGR MC): HGR MC is a statistical measure that quantifies the dependence between different random variables.
It has been successfully used for multimodal learning (Ma et al., 2021a; 2020; Wang et al., 2019; Xu & Huang, 2020). Here, we further use it to deal with modality missing. For the modality-complete data, we learn the maximal correlation between $x$, $y$, and $z$; for the modality-missing data, we learn the maximal correlation between $x$ and $z$.
• Zero Padding (ZP): Padding the feature representation of the missing modality with zeros is another widely used way to cope with incomplete modalities (Jo et al., 2019; Chen et al., 2020; Shen et al., 2020). For ZP, we consider two forms of $\varphi$ to fuse the features $f$ and $g$: addition and concatenation. The reason the outer-product form is not studied here is that if the feature of one modality is zero, its outer product with the non-zero feature of the other modality is also zero, which makes the modality-missing data useless.
• Autoencoder (AE): An autoencoder is a neural network framework used to learn representations from the training data. Some previous approaches apply autoencoders to complement the data with missing modalities.
For a fair comparison, we ensure that each method has the same network architecture and training strategy, and we report the classification results of each method after the same number of repeated experiments.

3.4 EXPERIMENTAL RESULTS
We first conduct classification experiments on the eNTERFACE'05 and RAVDESS datasets by comparing our framework with the other methods. The experimental setting is described in Section 3.2. We report the classification accuracy of each method in each setting; the results are shown in Table 1 and Table 2. We draw the following conclusions from Table 1 and Table 2: (1) The methods AE, HGR MC, ZP, and ours all improve the classification accuracy compared to the Lower Bound method, which only uses the modality-complete data. Our method achieves the highest classification performance among all methods under the different settings, and the higher the missing rate, the more pronounced the gap between the other methods and ours. This shows that our maximum likelihood estimation approach is more effective at tackling modality missing than the other methods. (2) Different forms of $\varphi$ affect the classification performance. For example, for our approach, addition and outer product perform better than concatenation on the eNTERFACE'05 dataset; however, on the RAVDESS dataset, the concatenation form of $\varphi$ achieves higher classification performance than the addition and outer-product forms under some settings. This indicates that the discrimination ability of the learned feature representations differs across settings, and we need to design the appropriate form of $\varphi$ to fuse the features of the multimodal data. (3) When the visual modality is missing, the classification accuracy is lower than when the audio modality is missing, indicating that the visual modality makes a more significant contribution to the classification performance, which is consistent with previous works (Zhang et al., 2017; Ma et al., 2020).
In addition, we show the classification confusion matrices for the methods AE, HGR MC, ZP, and ours when the missing rate of the visual modality reaches 95% on the eNTERFACE'05 dataset, as shown in Figure 4. It can be seen that the classification accuracy for each emotion using AE or HGR MC is not high, which indicates that they can only deal with modality missing to a certain extent. The overall classification performance of ZP is lower than ours, but its classification accuracy for "happiness" is slightly higher than ours.
This shows that different emotions provide different cues for the classification task.
We then investigate the effect of the backbone in coping with modality missing. In the above experiments, we use ResNet-50 as the backbone of the different methods to extract feature representations. Here, we replace ResNet-50 with ResNet-34 (He et al., 2016) and VGG-16 (Simonyan & Zisserman, 2015), respectively, and conduct experiments to compare the performance of the different backbones when 95% of the training data has missing visual modality on the RAVDESS dataset, as shown in Figure 5. We observe that, compared with VGG-16 and ResNet-34, ResNet-50 achieves the highest performance. In addition, no matter which backbone is used, the classification accuracy of our method is the highest, followed by AE, ZP, and HGR MC, with Lower Bound the lowest, which shows that our approach is effective across different backbones.

4 RELATED WORKS
Multimodal learning has achieved great successes in many applications. An important topic in this field is multimodal representations (Baltrušaitis et al., 2018; Zhu et al., 2020), which learn feature representations from the multimodal data by using the information of different modalities. How to learn good representations is investigated in (Ngiam et al., 2011; Wu et al., 2014; Pan et al., 2016; Xu et al., 2015). Another important topic is multimodal fusion (Atrey et al., 2010; Poria et al., 2017), which combines the information from different modalities to make predictions. Feature-based fusion is one of the most common types of multimodal fusion; it concatenates the feature representations extracted from different modalities. This fusion approach is adopted by previous works (Tzirakis et al., 2017; Zhang et al., 2017; Castellano et al., 2008; Zhang et al., 2016).
Modality missing is a key challenge in applying multimodal learning to the real world, and a few methods have been proposed to cope with it. For example, Ma et al. (2021b) propose a Bayesian meta-learning framework to perturb the latent feature space so that embeddings of a single modality can approximate embeddings of the full modality. Tran et al. (2017) propose a cascaded residual autoencoder for imputation with missing modalities, composed of a set of stacked residual autoencoders that iteratively model the residuals. Chen & Zhang (2020) propose a heterogeneous graph-based multimodal fusion approach to enable multimodal fusion of incomplete data within a heterogeneous graph structure. Liu et al. (2021) propose an autoencoder framework to complement the missing data in the kernel space while taking into account the structural information of the data and the inherent associations between multiple views. The above approaches can combine the information of the modality-missing data to some extent. Our work is significantly different from them, for the following two reasons. Firstly, by exploiting the likelihood function to learn the conditional distributions of the modality-complete data and the modality-missing data, our method has a theoretical guarantee, which previous works lack. Secondly, the training process of our approach is end-to-end, while the training processes of most of the above methods are relatively cumbersome.

5 CONCLUSION
Multimodal learning is a hot topic in the academic and industrial communities, and a key challenge within it is modality missing. In practice, the multimodal data may not be complete due to various reasons.
Most previous works cannot effectively utilize the modality-missing data for the learning task. To address this problem, we propose an efficient approach to leverage the knowledge in the modality-missing data during the training stage. Specifically, we present a framework based on maximum likelihood estimation to characterize the conditional distributions of the modality-complete data and the modality-missing data, which has a theoretical guarantee. Furthermore, we develop a generalized form of the softmax function to effectively implement our maximum likelihood estimation framework in an end-to-end way. We conduct experiments on the eNTERFACE'05 dataset and the RAVDESS dataset for multimodal learning to demonstrate the effectiveness of our approach. In the future, we can further extend our framework to other multimodal learning domains.

REPRODUCIBILITY STATEMENT
We provide our code in "supplement.zip". In this folder, "eNTERFACE_preprocess.py" and "RAVDESS_preprocess.py" extract segment samples from the original videos of the eNTERFACE'05 dataset and the RAVDESS dataset, respectively. "mle.py" shows the function to compute our maximum likelihood estimation algorithm.
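For readers without access to the supplement, the following is a minimal numpy sketch of what such a function might compute, following Algorithm 1. The addition form of phi, all shapes, and all variable names are assumptions for illustration; the released "mle.py" is the authoritative implementation.

```python
# A sketch in the spirit of Algorithm 1, assuming the fusion phi is
# elementwise addition and that the encoder outputs f(x), g(y) and the
# label features h(z) have already been computed.
import numpy as np

def mle_objective(Fc, Gc, zc, Fm, zm, H):
    """Fc, Gc: (n1, k) features of the complete batch; zc: (n1,) labels;
    Fm: (n2, k) features of the missing-modality batch; zm: (n2,) labels;
    H: (|Z|, k) label features h(z). Returns the empirical loss L."""
    n1, n2, nZ = len(zc), len(zm), len(H)
    counts = np.bincount(np.concatenate([zc, zm]), minlength=nZ)
    Rz = counts / (n1 + n2)                          # step 1: empirical R_Z
    # Step 2: Q_{Z|XY} on the modality-complete batch, phi = f + g.
    s_c = (Fc + Gc) @ H.T                            # (n1, |Z|) scores
    q_c = Rz * np.exp(s_c - s_c.max(axis=1, keepdims=True))
    q_c /= q_c.sum(axis=1, keepdims=True)
    # Step 3: Q_{Z|X}, averaging over the complete batch's y-features g(y_j).
    s_m = (Fm[:, None, :] + Gc[None, :, :]) @ H.T    # (n2, n1, |Z|)
    e_m = np.exp(s_m - s_m.max(axis=(1, 2), keepdims=True)).mean(axis=1)
    q_m = Rz * e_m
    q_m /= q_m.sum(axis=1, keepdims=True)
    # Step 4: negative log-likelihood over both batches.
    return -(np.log(q_c[np.arange(n1), zc]).sum()
             + np.log(q_m[np.arange(n2), zm]).sum())

rng = np.random.default_rng(0)
k, nZ = 16, 6
L = mle_objective(rng.normal(size=(8, k)), rng.normal(size=(8, k)),
                  rng.integers(0, nZ, 8), rng.normal(size=(4, k)),
                  rng.integers(0, nZ, 4), rng.normal(size=(nZ, k)))
print(float(L))
```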
1. What is the main contribution of the paper regarding the classification accuracy improvement?
2. What are the concerns regarding the approaches used in the paper?
3. How does the reviewer assess the baseline comparison provided in the paper?
4. Are there any questions about the experimental results presented in the paper?
5. Do you have any suggestions for further investigation to support the claims made in the paper?
Summary Of The Paper Review
Summary Of The Paper
The authors propose a probabilistic framework to improve the classification accuracy in instances where there is missing data in multi-modality datasets (where one of the modalities is the predictive label; however, this label is not assumed missing). To this end, they propose a generalized softmax function as the joint distribution of all modalities and the label, from which conditional distributions are derived for computing the maximum likelihood estimate (MLE). Experimental results on the eNTERFACE and RAVDESS datasets demonstrate improvements in classification accuracy over baselines. In addition, the authors investigate the influence of the backbone models and the fusion functions.
Review
The main contribution of the paper is the proposal of the generalized softmax function to model the joint distribution of all modalities and the label. The generalized softmax function consists of the product of the marginal distributions of the modalities and the label, as if they were independent, subsequently compensating for this (the dependence among the modalities and the label) via an exponential function enhanced with feature extraction models. This joint distribution leads to computationally efficient conditional distributions. However, there are a few concerns about the approaches and the evaluations in this paper:
The most significant concern is about the baseline comparison. The authors set these baselines as instances of specifically defined simpler models (or their own model) in order to highlight the specific manner in which those models deal with the missing modality. However, there are many prior works focused on solving the exact missing-modality problem (see below). The authors should, thus, compare against those baselines instead of deriving their own baseline model instances. [1] Multimodal Generative Models for Scalable Weakly-Supervised Learning (Wu and Goodman); [2] Private-Shared Disentangled Multimodal VAE for Learning of Latent Representations (Lee and Pavlovic); [3] MHVAE: a Human-Inspired Deep Hierarchical Generative Model for Multimodal Representation Learning (Vasco, Melo, and Paiva).
In the experiments (Tab. 1 and Tab. 2), the visual and audio missing rates are likely those in the training set. It is not clear what the missing rates are for the testing set, if any. The authors should clarify this.
In Fig. 4 (c) and (d), the classification accuracy of the “happiness” emotion by ZP is higher than that of the proposed method. This single value may be caused by several factors, hidden or accidental. Thus, it may not be sound to claim that “the proposed method is more efficient to exploit the information in most categories”. To support that claim, more investigation is needed, and the authors should present it.
ICLR
Title Maximum Likelihood Estimation for Multimodal Learning with Missing Modality Abstract Multimodal learning has achieved great successes in many scenarios. Compared with unimodal learning, it can effectively combine the information from different modalities to improve the performance of learning tasks. In reality, the multimodal data may have missing modalities due to various reasons, such as sensor failure and data transmission error. In previous works, the information of the modalitymissing data has not been well exploited. To address this problem, we propose an efficient approach based on maximum likelihood estimation to incorporate the knowledge in the modality-missing data. Specifically, we design a likelihood function to characterize the conditional distributions of the modality-complete data and the modality-missing data, which is theoretically optimal. Moreover, we develop a generalized form of the softmax function to effectively implement maximum likelihood estimation in an end-to-end manner. Such training strategy guarantees the computability of our algorithm capably. Finally, we conduct a series of experiments on real-world multimodal datasets. Our results demonstrate the effectiveness of the proposed approach, even when 95% of the training data has missing modality. 1 INTRODUCTION Multimodal learning is an important research area, which builds models to process and relate information between different modalities (Ngiam et al., 2011; Srivastava & Salakhutdinov, 2014; Baltrušaitis et al., 2018). Compared with unimodal learning, multimodal learning can achieve better performance by properly utilizing the multimodal data. It has been successfully used in many applications, such as multimodal emotion recognition (Soleymani et al., 2011; Mittal et al., 2020), multimedia event detection (Li et al., 2020), and visual question-answering (Yu et al., 2019). With the emergence of big data, multimodal learning becomes more and more important to combine the multimodal data from different sources. A number of previous works (Tzirakis et al., 2017; Zhang et al., 2017; Elliott et al., 2017; Kim et al., 2020; Zhang et al., 2020) have achieved great successes based on complete observations during the training process. However, in practice, the multimodal data may have missing modalities (Du et al., 2018; Ma et al., 2021a;b). This may be caused by various reasons. For instance, the sensor that collects the multimodal data is damaged or the network transmission fails. Examples of the multimodal data are shown in Figure 1. In the past years, different approaches have been proposed to deal with modality missing. A simple and typical way (Hastie et al., 2009) is to directly discard the data with missing modalities. Since the information contained in the modality-missing data is neglected, such method often has limited performance. In addition, researchers (Tran et al., 2017; Chen & Zhang, 2020; Liu et al., 2021; Ma et al., 2021b) have proposed approaches to heuristically combine the information of the modalitymissing data. However, most of these works lack theoretical explanations, and these empirical methods are often implemented using multiple training stages rather than an end-to-end manner, which lead to the information of the modality-missing data not being well exploited. To tackle above issues, we propose an efficient approach based on maximum likelihood estimation to effectively utilize the modality-missing data. 
To be specific, we present a likelihood function to characterize the conditional distributions of the modality-complete data and the modality-missing data, which is theoretically optimal. Furthermore, we adopt a generalized form of the softmax function to efficiently implement our maximum likelihood estimation algorithm. Such training strategy guarantees the computability of our framework in an end-to-end scheme. In this way, our approach can effectively leverage the information of the modality-missing data during the training process, Finally, we perform several experiments on real-world multimodal datasets, including eNTERFACE’05 (Martin et al., 2006) and RAVDESS (Livingstone & Russo, 2018). The results show the effectiveness of our approach in handling the problem of modality missing. To summarize, our contribution is three-fold: • We design a likelihood function to learn the conditional distributions of the modalitycomplete data and the modality-missing data, which is theoretically optimal. • We develop a generalized form of the softmax function to implement our maximum likelihood estimation framework in an end-to-end manner, which is more effective than previous works. • We conduct a series of experiments on real-world multimodal datasets. The results validate the effectiveness of our approach, even when 95% of the training data has missing modality. 2 METHODOLOGY Our goal is to deal with the problem of modality missing in multimodal learning based on maximum likelihood estimation. In the following, we first show the problem formulation, and then describe the details of our framework. 2.1 PROBLEM FORMULATION In this paper, we consider that the multimodal data has two modalities. Here, the random variables corresponding to these two modalities and their category labels are denoted as X , Y , and Z, respectively. In the training process, we assume that there are two independently observed datasets: modality-complete and modality-missing. We use DXY Z = { (x (i) c , y (i) c , z (i) c ) | z(i)c ∈ Z = {1, 2, · · · , |Z|} }nc i=1 to represent the modality-complete dataset, where x(i)c and y (i) c represent the two modalities of the i-th sample of DXY Z respectively, z (i) c is their corresponding category label, and the size of DXY Z is nc. We then use DXZ = { (x (i) m , z (i) m ) | z(i)m ∈ Z = {1, 2, · · · , |Z|} }nm i=1 to represent the modality-missing dataset, where the size of DXZ is nm. In addition, we adopt [DXY Z ]XY to represent { (x (i) c , y (i) c ) }nc i=1 . [DXY Z ]Z , [DXZ ]X , and [DXZ ]Z are expressed in the same way. The multimodal data of DXY Z and DXZ are assumed to be i.i.d. generated from an unknown underlying joint distribution. By utilizing the knowledge of the modality-complete data and the modality-missing data, we hope our framework can predict the category labels correctly. 2.2 MAXIMUM LIKELIHOOD ESTIMATION FOR MISSING MODALITY In this section, we first present how to design a likelihood function to learn the conditional distributions of the modality-complete data and the modality-missing data. Then, we show that by adopting a generalized form of the softmax function, we design a training strategy to implement our algorithm. 2.2.1 LIKELIHOOD FUNCTION ANALYSES Maximum likelihood estimation is a statistical method of using the observed data to estimate the distribution by maximizing the likelihood function. The estimated distribution makes the observed data most likely (Myung, 2003). With this idea, we study the likelihood function on datasets DXY Z and DXZ . 
For the classification task, the conditional likelihood is commonly used. Inspired by this, we use a model QXY Z to learn the underlying joint distribution of DXY Z and DXZ . The conditional likelihood can be represented as: ` , P ([DXY Z ]Z , [DXZ ]Z | [DXY Z ]XY , [DXZ ]X ;QXY Z) a = P ([DXY Z ]Z | [DXY Z ]XY ;QXY Z) · P ([DXZ ]Z | [DXZ ]X ;QXY Z) b = ∏ (x,y,z)∈DXY Z QZ|XY (z|xy) · ∏ (x,z)∈DXZ QZ|X(z|x) (1) where the step a follows from the fact that datasets DXY Z and DXZ are observed independently, and the step b is due to that samples in each dataset are i.i.d. QZ|XY and QZ|X are conditional distributions of QXY Z . In this way, we show the likelihood function using the information of DXY Z and DXZ . Then, we use the negative log-likelihood as the loss function to train our deep learning model, i.e., L , − log ` = − ∑ (x,y,z)∈DXY Z logQZ|XY (z|xy)− ∑ (x,z)∈DXZ logQZ|X(z|x) (2) It is worth noting that in (Daniels, 1961; Lehmann, 2004), maximum likelihood estimation is proved to be an asymptotically-efficient strategy, which guarantees the theoretical optimality of our method to deal with modality missing. To optimize L, we use deep neural networks to extract the k-dimensional feature representations from the observation (x, y, z), which are represented as f(x) = [f1(x), f2(x), · · · , fk(x)]T, g(y) = [g1(y), g2(y), · · · , gk(y)]T, and h(z) = [h1(z), h2(z), · · · , hk(z)]T, respectively. We then utilize these features to learn QZ|XY and QZ|X in L. Our framework is shown in Figure 2. In this way, we show the log-likelihood function L. By characterizing the conditional distributions of the modality-complete data and the modality-missing data, it leverages the underlying structure information behind the multimodal data, which constitutes the theoretical basis of our framework. 2.2.2 MAXIMUM LIKELIHOOD ESTIMATION IMPLEMENTATION In fact, it is not easy to optimize the log-likelihood function L in Equation (2) by designing neural networks, which is mainly due to two reasons. Firstly, the representations of the high-dimensional data and the procedure to model them are complicated. Secondly, since QZ|XY and QZ|X in L are related, how to build models to learn their relationships is difficult. To address these two issues, we develop a generalized form of the softmax function to describe QXY Z as follows 1: QXY Z(x, y, z) = RX(x)RY (y)RZ(z) exp(φ T(f(x), g(y))h(z))∑ x′,y′,z′ RX(x ′)RY (y′)RZ(z′) exp(φT(f(x′), g(y′))h(z′)) (3) where φ(f , g) represents the function to fuse features f and g. We study three forms of φ to investigate its effect in our framework, as shown in Figure 3. RX , RY , and RZ represent the underlying marginal distributions of the variables X , Y , and Z, respectively. Their use makes the denominator of Equation (3) expressed in the form of the mean over RX , RY , and RZ , which serves as the normalization to make QXY Z a valid distribution and is helpful for our further derivation. In addition, the generalized softmax function we propose can be regarded as a generalization of softmax learning in (Xu et al., 2018) from unimodal learning to multimodal learning. In this way, we show the distribution QXY Z by adopting a generalized form of the softmax function, which has the following two benefits. Firstly, by depicting the representation of QXY Z , we can further derive QZ|XY and QZ|X . It makes our approach a unified framework to combine the information of the modality-complete data and the modality-missing data. 
In this way, we express the distribution $Q_{XYZ}$ through a generalized form of the softmax function, which has two benefits. Firstly, by depicting the representation of $Q_{XYZ}$, we can further derive $Q_{Z|XY}$ and $Q_{Z|X}$; this makes our approach a unified framework that combines the information of the modality-complete data and the modality-missing data. Secondly, it avoids modeling the relationship between $Q_{Z|XY}$ and $Q_{Z|X}$ directly; in fact, the correlation between high-dimensional data can be rather complex. We derive the conditional distributions $Q_{Z|XY}$ and $Q_{Z|X}$ from Equation (3):

$Q_{Z|XY}(z|x,y) = \dfrac{R_Z(z) \exp(\phi^{\mathrm{T}}(f(x), g(y)) h(z))}{\sum_{z'} R_Z(z') \exp(\phi^{\mathrm{T}}(f(x), g(y)) h(z'))} \quad (4)$

and

$Q_{Z|X}(z|x) = \dfrac{R_Z(z) \sum_{y'} R_Y(y') \exp(\phi^{\mathrm{T}}(f(x), g(y')) h(z))}{\sum_{z'} R_Z(z') \sum_{y'} R_Y(y') \exp(\phi^{\mathrm{T}}(f(x), g(y')) h(z'))} \quad (5)$

We can observe that by introducing $R_X$, $R_Y$, and $R_Z$ into $Q_{XYZ}$, the derived $Q_{Z|XY}$ and $Q_{Z|X}$ are expressed as means over $R_Y$ and $R_Z$. In practice, we can use the empirical mean as an estimation. Correspondingly, by plugging Equations (4) and (5) into Equation (2), we can summarize the detailed steps to compute our objective function $\mathcal{L}$, as shown in Algorithm 1. It is worth pointing out that when we compute $Q_{Z|X}$, we need to use the information of the modality $y$. Since the modality $y$ of the dataset $\mathcal{D}_{XZ}$ is missing during training, we utilize samples of the modality $y$ from the dataset $\mathcal{D}_{XYZ}$ to compute $Q_{Z|X}$. Finally, we utilize neural networks to extract the features $f$, $g$, and $h$ from the modality-complete data and the modality-missing data to optimize our log-likelihood function $\mathcal{L}$. The framework performs classification directly; it does not need to explicitly complete the modality-missing data before the classification task.

¹Strictly speaking, $R_X$ and $R_Y$ are probability density functions, and $R_Z$ is a probability mass function. The denominator of Equation (3) should be integrated over $R_X$ and $R_Y$. We use summation here for simplicity of exposition.

Algorithm 1: Compute our objective function on a mini-batch.
Input: A modality-complete batch $\{(x_c^{(i)}, y_c^{(i)}, z_c^{(i)})\}_{i=1}^{n_1}$, where $n_1$ is the batch size; a modality-missing batch $\{(x_m^{(i)}, z_m^{(i)})\}_{i=1}^{n_2}$, where $n_2$ is the batch size; neural networks with $k$ output units: $f$, $g$, and $h$.
Output: The value of our objective $\mathcal{L}$.
1: Compute the empirical label distribution $\hat{R}_Z$:
$\hat{R}_Z(z) \leftarrow \dfrac{\sum_{i=1}^{n_1} \mathbb{1}(z_c^{(i)} = z) + \sum_{i=1}^{n_2} \mathbb{1}(z_m^{(i)} = z)}{n_1 + n_2}, \quad z = 1, 2, \cdots, |\mathcal{Z}|$
2: Compute $Q_{Z|XY}$:
$Q_{Z|XY}(z_c^{(i)}|x_c^{(i)}, y_c^{(i)}) \leftarrow \dfrac{\hat{R}_Z(z_c^{(i)}) \exp(\phi^{\mathrm{T}}(f(x_c^{(i)}), g(y_c^{(i)})) h(z_c^{(i)}))}{\sum_{z'=1}^{|\mathcal{Z}|} \hat{R}_Z(z') \exp(\phi^{\mathrm{T}}(f(x_c^{(i)}), g(y_c^{(i)})) h(z'))}, \quad i = 1, \cdots, n_1$
3: Compute $Q_{Z|X}$:
$Q_{Z|X}(z_m^{(i)}|x_m^{(i)}) \leftarrow \dfrac{\hat{R}_Z(z_m^{(i)}) \frac{1}{n_1} \sum_{j=1}^{n_1} \exp(\phi^{\mathrm{T}}(f(x_m^{(i)}), g(y_c^{(j)})) h(z_m^{(i)}))}{\sum_{z'=1}^{|\mathcal{Z}|} \hat{R}_Z(z') \frac{1}{n_1} \sum_{j=1}^{n_1} \exp(\phi^{\mathrm{T}}(f(x_m^{(i)}), g(y_c^{(j)})) h(z'))}, \quad i = 1, \cdots, n_2$
4: Compute the empirical objective:
$\mathcal{L} \leftarrow -\sum_{i=1}^{n_1} \log Q_{Z|XY}(z_c^{(i)}|x_c^{(i)}, y_c^{(i)}) - \sum_{i=1}^{n_2} \log Q_{Z|X}(z_m^{(i)}|x_m^{(i)})$
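The following is a minimal PyTorch rendering of Algorithm 1, written from the equations above rather than taken from the authors' released "mle.py"; the additive fusion $\phi(f, g) = f + g$ and all tensor shapes are our own illustrative assumptions:

```python
import math
import torch

def mle_objective(f_xc, g_yc, z_c, f_xm, z_m, h_emb, num_classes):
    """Negative log-likelihood of Algorithm 1.

    f_xc, g_yc : (n1, k) features of the modality-complete batch
    z_c        : (n1,)  labels of the modality-complete batch
    f_xm       : (n2, k) features of the modality-missing batch
    z_m        : (n2,)  labels of the modality-missing batch
    h_emb      : (|Z|, k) label features h(z) for every class z
    """
    n1 = f_xc.shape[0]

    # Step 1: empirical label distribution over both batches.
    counts = torch.bincount(torch.cat([z_c, z_m]), minlength=num_classes).float()
    log_Rz = torch.log(counts / counts.sum() + 1e-12)

    # Step 2: Q_{Z|XY} for the modality-complete batch (additive fusion).
    scores_c = (f_xc + g_yc) @ h_emb.T                        # (n1, |Z|)
    log_q_c = torch.log_softmax(log_Rz + scores_c, dim=1)     # log Q_{Z|XY}(z'|x,y)

    # Step 3: Q_{Z|X}; the missing modality y is averaged over the complete batch.
    fused = f_xm[:, None, :] + g_yc[None, :, :]               # (n2, n1, k)
    scores_m = torch.einsum("ijk,ck->ijc", fused, h_emb)      # (n2, n1, |Z|)
    # log of (1/n1) * sum_j exp(score), computed stably.
    log_mean = torch.logsumexp(scores_m, dim=1) - math.log(n1)  # (n2, |Z|)
    log_q_m = torch.log_softmax(log_Rz + log_mean, dim=1)       # log Q_{Z|X}(z'|x)

    # Step 4: negative log-likelihood of the observed labels.
    nll_c = -log_q_c[torch.arange(len(z_c)), z_c].sum()
    nll_m = -log_q_m[torch.arange(len(z_m)), z_m].sum()
    return nll_c + nll_m
```

The concatenation and outer-product forms of $\phi$ would change only the lines that build `scores_c` and `fused`.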
3 EXPERIMENTS

In this section, we first describe the real-world multimodal datasets used in our experiments, then explain the experimental settings and baseline methods, and finally give the experimental results to show the effectiveness of our approach.

3.1 DATASETS

We perform experiments on two public real-world multimodal datasets: eNTERFACE’05 (Martin et al., 2006) and RAVDESS (Livingstone & Russo, 2018). eNTERFACE’05 is an audio-visual emotion database in English. It contains 42 subjects eliciting the six basic emotions: anger, disgust, fear, happiness, sadness, and surprise. There are 213 videos for happiness and 216 videos for each of the remaining emotions. Following (Ma et al., 2020), we extract 30 segment samples from each video and obtain a processed dataset with 38,790 samples. RAVDESS is a multimodal database of emotional speech and song, which consists of 24 professional actors speaking in a neutral North American accent. Here, we use the speech part, which includes calm, happy, sad, angry, fearful, surprised, and disgusted expressions. Each recording is also in video form. Similar to the eNTERFACE’05 dataset, we only consider the six basic emotions, each of which has 5,760 segment samples.

3.2 EXPERIMENTAL SETTINGS

We perform experiments on the processed eNTERFACE’05 and RAVDESS datasets. Each segment in these two datasets has a duration of 0.5 seconds. As shown in (Ma et al., 2020), consecutive frames within 0.5 seconds usually express the same emotion in a similar way, which motivates us to choose the central frame of each segment as the visual modality. This choice ensures that the visual data contains enough emotional information while avoiding the redundancy of multiple frames. In addition, the log Mel-spectrogram, which is similar in structure to an RGB image, is extracted from each segment as the audio modality (a code sketch of this preprocessing is given at the end of this subsection). We then feed these data into our framework to obtain the classification result. ResNet-50 (He et al., 2016) is used as the backbone of the visual network $f$ and the audio network $g$ to extract features from the visual and audio modalities, respectively. In addition, we transform the corresponding label into one-hot form and extract the label feature using a label network $h$ with one fully connected layer. $f$, $g$, and $h$ are trained together.

On each processed dataset, we split the data into three parts: training set, validation set, and test set, with proportions 70%, 15%, and 15%. In practice, modality missing often occurs with a high missing rate (Suo et al., 2019; Ma et al., 2021b). Here, in the training stage, we study three missing rates: 80%, 90%, and 95%. The case where the audio modality is missing and the case where the visual modality is missing are investigated separately. Following (Yu et al., 2020; Chen & Zhang, 2020; Du et al., 2021), we let modality missing arise during the training phase to show that a large amount of unimodal data can assist the training of our multimodal learning framework. In the inference phase, we use Equation (4) to predict the class label of the given test data. Finally, we run each experiment five times and report average test accuracies to evaluate the performance of our approach and the baseline methods. The Adam optimizer (Kingma & Ba, 2015) is used to train the neural networks with a learning rate of 0.0001. Both the modality-complete batch size and the modality-missing batch size are set to 90. The number of epochs is set to 100. All experiments are implemented in PyTorch (Paszke et al., 2019) on an NVIDIA TITAN V GPU card.
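As a concrete illustration of the 0.5-second segment preprocessing described above, here is a minimal sketch of our own; the sample rate, FFT sizes, and frame indexing are assumptions, and the paper's released preprocessing scripts may differ:

```python
import torch
import torchaudio

SAMPLE_RATE = 16_000                 # assumed audio sample rate
SEG_SECONDS = 0.5                    # segment duration from the paper

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=1024, hop_length=256, n_mels=64
)
to_db = torchaudio.transforms.AmplitudeToDB()

def preprocess_segment(frames: torch.Tensor, waveform: torch.Tensor):
    """frames: (T, 3, H, W) video frames of one 0.5 s segment.
    waveform: (1, SEG_SECONDS * SAMPLE_RATE) audio of the same segment."""
    visual = frames[len(frames) // 2]      # central frame as the visual modality
    audio = to_db(mel(waveform))           # log Mel-spectrogram, image-like (1, 64, T')
    return visual, audio
```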
3.3 BASELINE METHODS

To show the effectiveness of our method, we compare our approach with the following methods, which can also handle modality missing to some extent.

• Discarding Modality-incomplete Data (Lower Bound): One simple strategy for handling modality missing is to directly discard the modality-incomplete data and then use only the modality-complete data for the classification task. This method does not use the information of the data with missing modalities. In our maximum likelihood estimation model, this is equivalent to calculating $Q_{Z|XY}$ without calculating $Q_{Z|X}$; the method can therefore also be viewed as an ablation of our approach.

• Hirschfeld-Gebelein-Rényi Maximal Correlation (Hirschfeld, 1935; Gebelein, 1941; Rényi, 1959) (HGR MC): HGR MC is a statistical measure of the dependence between different random variables. It has been successfully used for multimodal learning (Ma et al., 2021a; 2020; Wang et al., 2019; Xu & Huang, 2020). Here, we use it to deal with modality missing: for the modality-complete data, we learn the maximal correlation between $x$, $y$, and $z$; for the modality-missing data, we learn the maximal correlation between $x$ and $z$.

• Zero Padding (ZP): Padding the feature representation of the missing modality with zeros is another widely used way to cope with incomplete modalities (Jo et al., 2019; Chen et al., 2020; Shen et al., 2020). For ZP, we consider two forms of $\phi$ to fuse the features $f$ and $g$: addition and concatenation. The outer-product form is not studied here because, if the feature of one modality is zero, its outer product with the non-zero feature of the other modality is also zero, which makes the modality-missing data useless.

• Autoencoder (AE): An autoencoder is a neural network framework used to learn representations from the training data. Some previous approaches apply autoencoders to complement the data of the missing modality; we use such an autoencoder as the AE baseline here.

For a fair comparison, we ensure that each method has the same network architecture and training strategy, and we report the classification results of each method over the same number of repeated experiments.

3.4 EXPERIMENTAL RESULTS

We first conduct classification experiments on the eNTERFACE’05 and RAVDESS datasets, comparing our framework with the other methods under the experimental setting of Section 3.2. We report the classification accuracy of each method in each setting; the results are shown in Table 1 and Table 2. We draw the following observations from Table 1 and Table 2: (1) AE, HGR MC, ZP, and our method all improve the classification accuracy over the Lower Bound method, which only uses the modality-complete data. Our method achieves the highest classification performance among all methods under all settings, and the higher the missing rate, the larger the gap between the other methods and ours. This shows that our maximum likelihood estimation approach is more effective at tackling modality missing than the other methods. (2) Different forms of $\phi$ affect the classification performance. For example, for our approach, addition and outer product perform better than concatenation on the eNTERFACE’05 dataset, whereas on the RAVDESS dataset the concatenation form of $\phi$ achieves higher classification performance than the addition and outer-product forms under some settings. This indicates that the discrimination ability of the learned feature representations differs across settings, and the appropriate form of $\phi$ must be chosen to fuse the features of the multimodal data. (3) When the visual modality is missing, the classification accuracy is lower than when the audio modality is missing, indicating that the visual modality contributes more to the classification performance, which is consistent with previous works (Zhang et al., 2017; Ma et al., 2020).

In addition, we show the classification confusion matrices of AE, HGR MC, ZP, and our method when the missing rate of the visual modality reaches 95% on the eNTERFACE’05 dataset, as shown in Figure 4. The classification accuracy of each emotion using AE or HGR MC is not high, which indicates that they can only deal with modality missing to a certain extent. The overall classification performance of ZP is lower than ours, although its classification accuracy on “happiness” is slightly higher than ours.
This shows that different emotions provide different cues for the classification task.

We then investigate the effect of the backbone in coping with modality missing. In the above experiments, we use ResNet-50 as the backbone of the different methods to extract feature representations. Here, we replace ResNet-50 with ResNet-34 (He et al., 2016) and VGG-16 (Simonyan & Zisserman, 2015), respectively, and compare the performance of the different backbones when 95% of the training data has a missing visual modality on the RAVDESS dataset, as shown in Figure 5. We observe that, compared with VGG-16 and ResNet-34, ResNet-50 achieves the highest performance. In addition, regardless of the backbone, the classification accuracy of our method is the highest, followed by AE, ZP, and HGR MC, with Lower Bound the lowest, which shows that our approach remains effective across different backbones.

4 RELATED WORKS

Multimodal learning has achieved great success in many applications. An important topic in this field is multimodal representations (Baltrušaitis et al., 2018; Zhu et al., 2020), which learn feature representations from the multimodal data by using the information of different modalities. How to learn good representations is investigated in (Ngiam et al., 2011; Wu et al., 2014; Pan et al., 2016; Xu et al., 2015). Another important topic is multimodal fusion (Atrey et al., 2010; Poria et al., 2017), which combines the information from different modalities to make predictions. Feature-based fusion, which concatenates the feature representations extracted from different modalities, is one of the most common types of multimodal fusion and is adopted by previous works (Tzirakis et al., 2017; Zhang et al., 2017; Castellano et al., 2008; Zhang et al., 2016).

Modality missing is a key challenge in applying multimodal learning to the real world, and several methods have been proposed to cope with it. For example, Ma et al. (2021b) propose a Bayesian meta-learning framework that perturbs the latent feature space so that embeddings of a single modality can approximate embeddings of the full modality. Tran et al. (2017) propose a cascaded residual autoencoder for imputation with missing modalities, composed of a set of stacked residual autoencoders that iteratively model the residuals. Chen & Zhang (2020) propose a heterogeneous graph-based multimodal fusion approach to enable multimodal fusion of incomplete data within a heterogeneous graph structure. Liu et al. (2021) propose an autoencoder framework that complements the missing data in the kernel space while taking into account the structural information of the data and the inherent association between multiple views. These approaches can exploit the information of the modality-missing data to some extent. Our work differs from them in two respects. Firstly, by exploiting the likelihood function to learn the conditional distributions of the modality-complete data and the modality-missing data, our method has a theoretical guarantee, which previous works lack. Secondly, the training process of our approach is end-to-end, while the training processes of most of the above methods are comparatively cumbersome.

5 CONCLUSION

Multimodal learning is a hot topic in the academic and industrial communities, and a key challenge within it is modality missing: in practice, the multimodal data may not be complete for various reasons.
Most previous works cannot effectively utilize the modality-missing data for the learning task. To address this problem, we propose an efficient approach that leverages the knowledge in the modality-missing data during the training stage. Specifically, we present a framework based on maximum likelihood estimation to characterize the conditional distributions of the modality-complete data and the modality-missing data, which has a theoretical guarantee. Furthermore, we develop a generalized form of the softmax function to effectively implement our maximum likelihood estimation framework in an end-to-end way. We conduct experiments on the eNTERFACE’05 and RAVDESS datasets for multimodal learning to demonstrate the effectiveness of our approach. In the future, we can further extend our framework to other multimodal learning domains.

REPRODUCIBILITY STATEMENT

We provide our code in “supplement.zip”. In this folder, “eNTERFACE_preprocess.py” and “RAVDESS_preprocess.py” extract segment samples from the original videos of the eNTERFACE’05 dataset and the RAVDESS dataset, respectively. “mle.py” implements the function that computes our maximum likelihood estimation algorithm.
1. How does the proposed approach utilize multimodal data effectively?
2. Are there any limitations or exceptions where unimodal learning may be more effective than multimodal learning?
3. Can you provide more information about implementing the empirical distribution in the algorithm?
4. Why was a 100% missing rate condition not considered in the experiments on eNTERFACE’05?
5. How does the proposed method differ from other approaches mentioned in related work?
6. Are there any plans to experimentally compare the performance of the proposed method with other frameworks that handle missing modality?
Summary Of The Paper Review
Summary Of The Paper
This submission proposes a maximum likelihood estimation framework, combined with a generalized softmax function, to address multimodal emotion recognition with missing modalities. Two emotion recognition datasets are used in experiments to compare against several baseline methods. The results suggest that the proposed approach outperforms the compared methods. Moreover, according to the authors, the end-to-end nature of the framework makes it more efficient than previous works.

Review
In the Introduction, the authors state that "Compared with unimodal learning, multimodal learning can effectively utilize the multimodal data to achieve better performance." In fact, in some cases the multimodal data must be utilized properly for multimodal learning to be more effective than unimodal learning. For example, researchers have found that the best unimodal model can outperform its multimodal counterpart in this paper: "W. Wang, et al. What Makes Training Multi-modal Classification Networks Hard?" The authors should try to make such statements more accurate.
The authors mention on Page 4 that "In the following, we will show that we use empirical distribution to implement these underlying marginal distribution in our algorithm.", but the reviewer could not find any such description in the following paragraphs.
In the experiments on eNTERFACE’05, a condition with a 100% missing rate should be considered, which would help demonstrate whether the remaining 5% of modality-complete data in the 95%-missing case is actually being used for the task, or whether performance comes only from the other, complete modality.
The authors mention several different methods for dealing with missing modality in the related works, but there are no experiments comparing the performance of the proposed method with those frameworks. At the same time, the comparison methods in this submission are not very persuasive.
ICLR
Title
Maximum Likelihood Estimation for Multimodal Learning with Missing Modality

Abstract
Multimodal learning has achieved great success in many scenarios. Compared with unimodal learning, it can effectively combine the information from different modalities to improve the performance of learning tasks. In reality, the multimodal data may have missing modalities due to various reasons, such as sensor failure and data transmission error. In previous works, the information of the modality-missing data has not been well exploited. To address this problem, we propose an efficient approach based on maximum likelihood estimation to incorporate the knowledge in the modality-missing data. Specifically, we design a likelihood function to characterize the conditional distributions of the modality-complete data and the modality-missing data, which is theoretically optimal. Moreover, we develop a generalized form of the softmax function to effectively implement maximum likelihood estimation in an end-to-end manner. Such a training strategy guarantees the computability of our algorithm. Finally, we conduct a series of experiments on real-world multimodal datasets. Our results demonstrate the effectiveness of the proposed approach, even when 95% of the training data has missing modality.

1 INTRODUCTION
Multimodal learning is an important research area, which builds models to process and relate information between different modalities (Ngiam et al., 2011; Srivastava & Salakhutdinov, 2014; Baltrušaitis et al., 2018). Compared with unimodal learning, multimodal learning can achieve better performance by properly utilizing the multimodal data. It has been successfully used in many applications, such as multimodal emotion recognition (Soleymani et al., 2011; Mittal et al., 2020), multimedia event detection (Li et al., 2020), and visual question answering (Yu et al., 2019). With the emergence of big data, multimodal learning becomes more and more important for combining multimodal data from different sources. A number of previous works (Tzirakis et al., 2017; Zhang et al., 2017; Elliott et al., 2017; Kim et al., 2020; Zhang et al., 2020) have achieved great success based on complete observations during the training process. However, in practice, the multimodal data may have missing modalities (Du et al., 2018; Ma et al., 2021a;b). This may be caused by various reasons; for instance, the sensor that collects the multimodal data is damaged, or the network transmission fails. Examples of such multimodal data are shown in Figure 1. In the past years, different approaches have been proposed to deal with modality missing. A simple and typical way (Hastie et al., 2009) is to directly discard the data with missing modalities. Since the information contained in the modality-missing data is neglected, such a method often has limited performance. In addition, researchers (Tran et al., 2017; Chen & Zhang, 2020; Liu et al., 2021; Ma et al., 2021b) have proposed approaches to heuristically combine the information of the modality-missing data. However, most of these works lack theoretical explanations, and these empirical methods are often implemented in multiple training stages rather than in an end-to-end manner, which leads to the information of the modality-missing data not being well exploited. To tackle the above issues, we propose an efficient approach based on maximum likelihood estimation to effectively utilize the modality-missing data.
1. What is the main contribution of the paper on multimodal learning?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its novelty and limitations?
3. How does the reviewer assess the experimental analysis and its validity regarding the used datasets, tasks, and comparison with prior art?
4. Are there any concerns regarding the data processing and the choice of splitting strategy?
5. How does the reviewer evaluate the discussion and claims made by the authors regarding the addressed challenge and future scenarios?
Summary Of The Paper Review
Summary Of The Paper
This paper deals with multimodal learning with a modality missing during training. Specifically, the proposed method is based on maximum likelihood estimation to obtain the conditional distributions of the so-called "modality-complete data" and "modality-missing data", in which a multimodal softmax function is defined to implement the framework in an end-to-end manner.

Review
Strengths:
- Multimodal learning has achieved great success in many applications, and handling missing modalities is an important challenge to be tackled. This paper presents a simple, end-to-end method that is somewhat novel and contributes to the field by being based on maximum likelihood, which is not presented by prior art.

Weaknesses:
- The biggest limitation of this work is that it considers only multimodal data with two modalities, and does not even discuss how the proposed method behaves when more modalities are available.
- Although the paper presents a task-free method, the experimental analysis is limited to two datasets, both addressing the same task: emotion recognition. As also mentioned in the introduction, there are several other tasks the proposed method could have been tested on. Indeed, related work (such as Ma et al., "SMIL: Multimodal Learning with Severely Missing Modality") was tested on several different tasks. I suggest the authors either revise the paper, including the title, abstract, and related work, to target categorical emotion recognition, or comprehensively extend the experimental analysis so that the proposed method is tested and validated on several other tasks.
- It is also important to mention that the proposed method was tested only for categorical emotion recognition, while emotion datasets are typically multi-labeled (as humans cannot elicit only one emotion at a time) and also include continuous values; therefore, a regression task might be targeted too.
- I also find the datasets used limited in size. If the authors keep emotion recognition as the testbed, I suggest they use the much larger CMU-MOSEI dataset, which also has modalities other than video and audio. Indeed, some related work was tested on CMU-MOSEI and/or CMU-MOSI, such as Ma et al., "SMIL: Multimodal Learning with Severely Missing Modality".
- Another limitation of the experimental analysis is that only visual and audio data were used as modalities. Testing on combinations of several other modalities (text, depth data, motion-capture data, accelerometer, gyroscope) would improve the validity of the proposed method.
- "In addition, the generalized softmax function we propose…." The term "generalized softmax function" might be misleading: it sounds as if the softmax has tolerance to the diversity of samples belonging to different classes, or as if some domain adaptation is being applied, but neither is the case.
- It is unclear why the authors consider the multimodal softmax a contribution. Eq. 3 and the following equations look like a standard softmax written for multimodal data, instead of first fusing the data and representing the fused data as a single feature vector. Moreover, the fusion of the data is performed through standard strategies: addition, concatenation, and multiplication. I expect the authors to clarify the contribution in this respect.
- There is also a lack of information regarding how the data are processed. In detail: a) What does "we take the central frame" as the visual modality mean?
Do you take a set of frames and use only the central one? If so, what is the motivation behind this, and what is the window size? In fact, it is more common to apply spatio-temporal processing, for example, processing motion and appearance in facial images for emotion recognition; I do not understand the rationale behind discarding the temporal information. b) Another issue is how the audio data are read; it is not clear which audio chunk is selected to calculate the log Mel-spectrogram. c) "On each processed dataset, we split all data into three parts: training set, validation set, and test set. Their proportions are 70%, 15%, and 15%." Are you randomly picking these splits and applying a sort of k-fold cross-validation, or are these splits obtained only once and fixed? Do you guarantee that exactly the same split is used for all baseline methods? This matters because the datasets used are relatively small. I am aware that prior art on emotion recognition uses 5- or 10-fold cross-validation for the same datasets, and I am not sure why the authors selected a different data-splitting strategy.
- The proposed method was compared with some relatively simple baselines such as zero padding, but the comparative study should include SOTA methods, e.g., Ma et al. (2021b), Tran et al. (2017), Chen & Zhang (2020), Liu et al. (2021), Suo et al. (2019). Given this lack of comparison, I believe the authors' claim "……which lead to the information of the modality-missing data not being well exploited" is not justified either.
- I believe the authors should better discuss why they tackle missing data only during training and never consider that modalities could be missing at test time as well. In a practical scenario, it is more likely that a model is trained with a full set of modalities while, during testing, some modalities are missing, either completely or only for some test samples.
- Tables should include the results of unimodal processing, to let the reader see which modality performs better when used alone, and the results on complete data (i.e., no missing modality) as an upper bound.
- "When the visual modality is missing, the classification accuracy is lower than that when the audio modality is missing, indicating that the visual modality has a more significant contribution to the classification performance, which is consistent with previous works (Zhang et al., 2017; Ma et al., 2020)." I believe the citations in this sentence are somewhat irrelevant: the authors use neither the same feature sets nor the same datasets as the cited works.
ICLR
Title
BAM: Bayes with Adaptive Memory

Abstract
Online learning via Bayes’ theorem allows new data to be continuously integrated into an agent’s current beliefs. However, a naive application of Bayesian methods in non-stationary environments leads to slow adaptation and results in state estimates that may converge confidently to the wrong parameter value. A common solution when learning in changing environments is to discard/downweight past data; however, this simple mechanism of “forgetting” fails to account for the fact that many real-world environments involve revisiting similar states. We propose a new framework, Bayes with Adaptive Memory (BAM), that takes advantage of past experience by allowing the agent to choose which past observations to remember and which to forget. We demonstrate that BAM generalizes many popular Bayesian update rules for non-stationary environments. Through a variety of experiments, we demonstrate the ability of BAM to continuously adapt in an ever-changing world.

1 INTRODUCTION
The ability of an agent to continuously modulate its belief while interacting with a non-stationary environment is a hallmark of intelligence and has garnered a lot of attention in recent years (Zhang et al., 2020; Ebrahimi et al., 2020; Xie et al., 2020). The Bayesian framework enables online learning by providing a principled way to incorporate new observations into an agent’s model of the world (Jaynes, 2003; Gelman et al., 2013). Through the use of Bayes’ theorem, the agent can combine its own (subjective) a priori knowledge with data to achieve an updated belief encoded by the posterior distribution. The Bayesian framework is a particularly appealing option for online learning because Bayes’ theorem is closed under recursion, enabling continuous updates in what is commonly referred to as the recursive Bayes method (Wakefield, 2013). As an example, suppose the agent first observes a batch of data, $\mathcal{D}_1$, and then later observes another batch of data, $\mathcal{D}_2$. We can express the agent’s posterior distribution over the world, where the world is represented by $\theta$, as

$p(\theta|\mathcal{D}_1, \mathcal{D}_2) = \dfrac{p(\mathcal{D}_2|\theta) \, p(\theta|\mathcal{D}_1)}{p(\mathcal{D}_2|\mathcal{D}_1)}, \quad (1)$

where

$p(\mathcal{D}_2|\mathcal{D}_1) = \int p(\mathcal{D}_2|\theta) \, p(\theta|\mathcal{D}_1) \, d\theta. \quad (2)$

Equation 1 demonstrates the elegance and simplicity of recursive Bayes: at time $t$, the agent recycles its previous posterior, $p(\theta|\mathcal{D}_{<t})$, where $\mathcal{D}_{<t} = \{\mathcal{D}_1, \cdots, \mathcal{D}_{t-1}\}$, into its current prior and then combines it with a newly observed batch of data, $\mathcal{D}_t$, to obtain an updated posterior, $p(\theta|\mathcal{D}_{\leq t})$.

At first glance, it would appear that a naive application of recursive Bayes would suffice for most online learning tasks. However, the recursive Bayes method relies on the assumption that the world is stationary, i.e. $\mathcal{D}_1, \mathcal{D}_2, \cdots$ are all independent and identically distributed. When this assumption is violated, recursive Bayes can fail catastrophically. As an illustration, consider the law of total variance:

$\operatorname{Var}(\theta|\mathcal{D}_{<t}) = \mathbb{E}[\operatorname{Var}(\theta|\mathcal{D}_{<t}, \mathcal{D}_t) \mid \mathcal{D}_{<t}] + \operatorname{Var}(\mathbb{E}[\theta|\mathcal{D}_{<t}, \mathcal{D}_t] \mid \mathcal{D}_{<t}). \quad (3)$

Since both terms on the right-hand side are positive, equation 3 reveals that, in expectation, the variance of the posterior decreases as more data is seen, regardless of the actual distribution of $\mathcal{D}_t$, i.e.

$\operatorname{Var}(\theta|\mathcal{D}_{<t}) \geq \mathbb{E}[\operatorname{Var}(\theta|\mathcal{D}_{<t}, \mathcal{D}_t) \mid \mathcal{D}_{<t}]. \quad (4)$

In fact, for some models equation 4 holds with probability 1; we demonstrate examples in Appendix A. Thus, if the parameters of the environment, $\theta$, were to change, the variance of the posterior would still decrease, becoming more certain of a potentially obsolete parameter estimate.
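As a self-contained illustration of this failure mode (a toy example of our own, not one of the paper's Appendix A examples), consider recursive Bayes on a Bernoulli coin whose bias changes halfway through the stream:

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 1.0, 1.0                      # Beta(1, 1) prior on the coin's bias theta
flips = np.concatenate([rng.binomial(1, 0.5, 200),    # theta = 0.5, then ...
                        rng.binomial(1, 0.9, 200)])   # ... the world changes

for t, x in enumerate(flips, 1):
    a, b = a + x, b + (1 - x)        # recursive Bayes: conjugate Beta update
    if t % 100 == 0:
        mean = a / (a + b)
        var = a * b / ((a + b) ** 2 * (a + b + 1))
        print(f"t={t:3d}  mean={mean:.3f}  var={var:.2e}")

# The posterior variance keeps shrinking after the change at t = 200, so the
# agent grows ever more confident in an estimate (~0.7) that matches neither
# the old nor the new bias.
```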
Modeling the environment as stationary when it is actually changing also keeps the learning speed of the agent artificially low, as tighter posteriors prevent large jumps in learning. This is the opposite of what an intelligent agent should do in such an event: if the environment changes, we would expect the agent’s uncertainty and learning speed to increase in response. As was elegantly stated by Monton (2002), the problem with naive use of recursive Bayes is that "Such a Bayesian never forgets."

Previous approaches to making recursive Bayes work in non-stationary settings have primarily focused on forgetting past experience, either through the use of changepoint detection (Adams & MacKay, 2007; Li et al., 2021) or by exponentially weighting past experiences (Moens, 2018; Moens & Zénon, 2019; Masegosa et al., 2020). While empirically successful, their focus on forgetting the past means that revisited states are treated as novel. In this work we take an alternative approach to online Bayesian learning in non-stationary environments by endowing an agent with an explicit memory module. Crucially, the addition of a memory buffer equips the agent with the ability to modulate its uncertainty by choosing which past experiences to both forget and remember. We call our approach Bayes with Adaptive Memory (BAM) and demonstrate its wide applicability and effectiveness on a number of non-stationary learning tasks.

2 BAYES WITH ADAPTIVE MEMORY
The generative model is assumed to evolve according to

$\theta_t \sim p(\theta_t|\theta_{t-1}, t), \quad (5)$
$\mathcal{D}_t \sim p_t(\mathcal{D}) \equiv p(\mathcal{D}|\theta_t), \quad (6)$

where equation 5 describes the latent dynamics that dictate the evolution of the environment parameters, $\theta_t$, and equation 6 is the likelihood, whose parametric form is fixed throughout time, e.g. $p_t(\mathcal{D}) = \mathcal{N}(\theta_t, \sigma^2)$. Equations 5 and 6 define a state-space model, which allows one to infer $\theta_t$ through Bayesian filtering (Särkkä, 2013):

$p(\theta_t|\mathcal{D}_{\leq t}) \propto p(\mathcal{D}_t|\theta_t) \, p(\theta_t|\mathcal{D}_{<t}), \quad (7)$
$p(\theta_t|\mathcal{D}_{<t}) = \int p(\theta_t|\theta_{t-1}, t) \, p(\theta_{t-1}|\mathcal{D}_{<t}) \, d\theta_{t-1}. \quad (8)$

The parameterization of equations 5 and 6 dictates the tractability of equations 7 and 8. If the agent knew a priori that equation 5 is a linear dynamical system with additive white Gaussian noise and that equation 6 is also Gaussian with a conditional mean that is a linear function of $\theta_t$, then the Kalman filter could be used (Kalman, 1960). For more complicated latent dynamics and/or likelihood models, methods such as particle filtering (Doucet & Johansen, 2009) and unscented Kalman filtering (Julier & Uhlmann, 1997) can be used.

Crucially, Bayesian filtering methods assume that the latent dynamics governed by equation 5 are known; however, this is rarely the case in practice. Instead of making assumptions on the parametric form of equation 5, we take a different approach. In BAM, the agent maintains a memory buffer, $\mathcal{D}_{<t}$, that stores previous observations of the environment. At time $t$ the agent obtains a new batch of data, $\mathcal{D}_t \sim p_t(\mathcal{D})$. How should the agent combine the newly observed data, $\mathcal{D}_t$, with its stored memory, $\mathcal{D}_{<t}$, to update its belief as encoded by the posterior distribution? In recursive Bayes, the posterior distribution is computed according to¹

$p(\theta_t|\mathcal{D}_t, \mathcal{D}_{<t}) \propto p(\mathcal{D}_t|\theta_t) \, p(\theta_t|\mathcal{D}_{<t}), \quad (9)$
$p(\theta_t|\mathcal{D}_{<t}) \propto p(\theta_t) \prod_{j=1}^{t-1} p(\mathcal{D}_j|\theta_t), \quad (10)$

where we refer to $p(\theta_t)$ as the base prior. Equation 10 allows us to interpret recursive Bayes as the agent constructing a dynamic prior, $p(\theta_t|\mathcal{D}_{<t})$, using all the experiences stored in its memory buffer.
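For intuition about the role of the latent dynamics, recall that recursive Bayes is Bayesian filtering with frozen dynamics (see footnote 1 below). The scalar sketch below, a toy example of our own rather than anything from the paper, implements equations 7 and 8 for a random-walk $\theta_t$ with Gaussian observations; the process noise q keeps the posterior variance from collapsing, and setting q = 0 recovers recursive Bayes:

```python
import numpy as np

def kalman_step(mu, var, obs, q=0.1, r=1.0):
    """One step of equations 7-8 for a scalar random walk.

    Predict (eq. 8): theta_t = theta_{t-1} + N(0, q) widens the prior.
    Update  (eq. 7): condition on obs ~ N(theta_t, r).
    """
    var_pred = var + q                    # process noise prevents variance collapse
    gain = var_pred / (var_pred + r)      # standard Kalman gain, scalar case
    mu_new = mu + gain * (obs - mu)
    var_new = (1.0 - gain) * var_pred
    return mu_new, var_new

mu, var = 0.0, 1.0
for obs in [0.1, -0.2, 2.0, 2.2, 1.9]:    # environment jumps mid-stream
    mu, var = kalman_step(mu, var, obs)
    print(f"mu={mu:+.2f}  var={var:.3f}")  # var plateaus instead of -> 0
```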
This construction works under the stationarity assumption; when this assumption is violated, the application of Bayes' theorem can lead to confidently wrong results, as the "distance" between p_i(D) and p_j(D) can be vast. An alternative is for the agent to completely forget all of its past experiences

p(θ_t|D_t) ∝ p(D_t|θ_t) p(θ_t).   (11)

While equation 11 may be viable in situations where D_t is sufficiently informative, it is wasteful when experiences in the memory buffer may help infer θ_t.

BAM dynamically finds a middle ground between these two extremes of remembering (equation 10) and forgetting (equation 11) everything by allowing the agent to choose which data to use from its memory buffer to construct the prior. Specifically, the agent is endowed with a time-dependent readout weight, W_t = [w_{t,1}, w_{t,2}, ..., w_{t,t-1}], where w_{t,j} ∈ [0, 1]. Given a new datum D_t, BAM constructs its posterior according to

p(θ_t|D_t, D_{<t}, W_t) ∝ p(θ_t) p(D_t|θ_t) ∏_{j=1}^{t-1} p(D_j|θ_t)^{w_{t,j}}.   (12)

We can rewrite equation 12 as

p(θ_t|D_t, D_{<t}, W_t) = p(D_t|θ_t) p(θ_t|D_{<t}, W_t) / p(D_t|D_{<t}, W_t),   (13)

where

p(θ_t|D_{<t}, W_t) ∝ p(θ_t) ∏_{j=1}^{t-1} p(D_j|θ_t)^{w_{t,j}},   (14)

and

p(D_t|D_{<t}, W_t) = ∫ p(D_t|θ_t) p(θ_t|D_{<t}, W_t) dθ_t.   (15)

The prior construction in equation 14 is akin to recursive Bayes, but now the agent can dynamically and adaptively change its prior by using the readout weights, W_t, to weigh the importance of previous experience, where at the extreme it can choose to completely forget a previous experience, w_{t,j} = 0, or fully remember it, w_{t,j} = 1. For simplicity, we restrict the readout weights to be binary, i.e. w_{t,j} ∈ {0, 1}.

The combination of a memory buffer, D_{<t}, with a time-dependent readout weight, W_t, allows BAM to generalize many previously proposed approaches. By setting w_{t,1} = w_{t,2} = ... = w_{t,t-1} = 1, we recover recursive Bayes (equation 10). By setting w_{t,1} = w_{t,2} = ... = w_{t,t-1} = α, where 0 ≤ α ≤ 1, we recover the power priors approach of Ibrahim et al. (2015). By setting w_{t,j} = α^{t-1-j}, where 0 ≤ α ≤ 1, we recover exponential forgetting (Moens, 2018; Moens & Zénon, 2019; Masegosa et al., 2020). Lastly, by setting a particular subset of the readout weights to 0, we recover Bayesian unlearning (Nguyen et al., 2020).

The ability to adaptively change its prior implies that BAM can increase/decrease its uncertainty as the situation demands; subsequently, this modulates the agent's learning speed. Using variance as a proxy for uncertainty, one would expect that the variance of the prior used in BAM (equation 14) is always at least as large as the variance of the prior used in recursive Bayes (equation 10). We formalize this for the case of binary readout weights in the following proposition.

Proposition 1. Let p(θ|D_{<t}, W_t) be the prior used by BAM, defined in equation 14, and let p(θ|D_{<t}) be the recursive Bayes prior, defined in equation 10. Then

E[Var(θ|D_{<t}, W_t) | W_t] ≥ E[Var(θ|D_{<t})],  ∀ W_t ∈ {0, 1}^{t-1}.   (16)

Proof. Proof is in Appendix B.

¹ Recursive Bayes is equivalent to Bayesian filtering when p(θ_t|θ_{t-1}, t) = δ(θ_t − θ_{t-1}).
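A minimal sketch of the BAM posterior (equations 12 and 14) for the same Beta–Binomial setting, with binary readout weights selecting which pseudo-counts enter the prior; the buffer contents and weights are illustrative:

```python
from scipy.stats import beta

a0, b0 = 1.0, 1.0                                # base prior Beta(a0, b0)
buffer = [(12, 15), (11, 15), (3, 15), (4, 15)]  # illustrative (successes, trials) batches

def bam_posterior(new_datum, weights):
    """BAM posterior (eq. 12): base prior x new likelihood x weighted past likelihoods."""
    k, n = new_datum
    a = a0 + k + sum(w * kj for w, (kj, nj) in zip(weights, buffer))
    b = b0 + (n - k) + sum(w * (nj - kj) for w, (kj, nj) in zip(weights, buffer))
    return a, b

# Remember the first two batches, forget the last two: W_t = [1, 1, 0, 0].
a, b = bam_posterior((13, 15), [1, 1, 0, 0])
print("posterior mean:", a / (a + b), " posterior variance:", beta.var(a, b))
```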
2.1 SELECTION OF READOUT WEIGHTS VIA BAYESIAN MODEL-SELECTION

While the previous section demonstrated the flexibility of BAM, the question remains: how should the readout weights, W_t, be set? Equation 13 allows us to view different readout weights as different models. Through this lens, we can follow the spirit of Bayesian model selection (Gelman et al., 2013) and compute a posterior over the readout weights

p(W_t|D_t, D_{<t}) ∝ p(W_t|D_{<t}) p(D_t|W_t, D_{<t}).   (17)

For practicality, we compute the maximum a posteriori (MAP) estimate of equation 17 (Gelman et al., 2013) and use that as the value of the readout weight

W_t = argmax_{W ∈ {0,1}^{t-1}} log p(D_t|W, D_{<t}) + log p(W|D_{<t})   (18)
    = argmax_{W ∈ {0,1}^{t-1}} log ∫ p(D_t|θ_t) p(θ_t|W, D_{<t}) dθ_t + log p(W|D_{<t}).   (19)

The first term of equation 18 is the log marginal likelihood, which measures the likelihood of D_t being distributed according to the predictive distribution, p(D|W, D_{<t}), while the prior, log p(W|D_{<t}), acts as a regularizer. This procedure of constantly updating the readout weights through equation 18 can be interpreted as providing Bayes a feedback mechanism: equation 18 allows the agent to directly measure its ability to fit the observed data using different combinations of experiences in its buffer via the readout weight, and then to choose the readout weight that leads to the best fit. In contrast, standard Bayesian inference is an open-loop procedure: data, likelihood and prior are given and a posterior is spat out, irrespective of the fit of the model to the data (Simpson et al., 2017).

Still left is the question of how to design the prior, p(W|D_{<t}). In certain scenarios, using an uninformative prior, i.e. p(W|D_{<t}) ∝ 1, may suffice if the data is very informative and/or the number of data points in D_t is large. In scenarios where these conditions are not met, it is important to use an informative prior, as it reduces the chance of overfitting. In general, the design of priors is highly nontrivial (Winkler, 1967; Gelman et al., 2013; Simpson et al., 2017). While there exist many potential options, we use the penalized model complexity priors proposed by Simpson et al. (2017), as they are designed to reduce the chance of overfitting. Following Simpson et al. (2017), we parameterize the prior as

p(W|D_{<t}) ∝ exp(−λ √(2 D_KL[p(θ_t|W, D_{<t}) ‖ p(θ_t)])),   (20)

where λ ∈ [0, ∞) is a hyperparameter that controls the strength of the prior.² Equation 20 encodes our prior belief that we favor values of W_t that produce simpler models, where simplicity is quantified as the Kullback-Leibler divergence between p(θ_t|W_t, D_{<t}) and the base prior, p(θ_t). Plugging equation 20 into equation 18, we get

W_t = argmax_{W ∈ {0,1}^{t-1}} log p(D_t|W, D_{<t}) − λ √(2 D_KL[p(θ_t|W, D_{<t}) ‖ p(θ_t)]).   (21)

In general, solving equation 21 is difficult, as the number of possible readout weights is 2^{t-1}, making brute-force solutions practically infeasible. While there exist many approaches for performing discrete optimization, we found that a simple greedy approach sufficed for our experiments; in the interest of space, we defer discussion regarding this to Appendix C.

² λ = 0 recovers the uninformative prior case, p(W_t|D_{<t}) ∝ 1.
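For concreteness, here is a minimal sketch of the objective in equation 21 for a Beta–Binomial model, where both the log marginal likelihood and the KL divergence between Beta distributions are available in closed form; the hyperparameter value and helper names are our own assumptions:

```python
import numpy as np
from scipy.special import betaln, digamma

a0, b0 = 1.0, 1.0   # base prior Beta(a0, b0)

def log_marginal(k, n, a, b):
    """log p(D_t | W, D_<t) for a Binomial datum under a Beta(a, b) prior
    (Beta-Binomial; the binomial coefficient is omitted as it is constant in W)."""
    return betaln(a + k, b + n - k) - betaln(a, b)

def kl_beta(a, b):
    """Closed-form KL( Beta(a, b) || Beta(a0, b0) )."""
    return (betaln(a0, b0) - betaln(a, b)
            + (a - a0) * digamma(a) + (b - b0) * digamma(b)
            + (a0 - a + b0 - b) * digamma(a + b))

def objective(weights, buffer, new_datum, lam=0.1):
    """Equation 21: log marginal likelihood minus the PC-prior penalty."""
    a = a0 + sum(w * kj for w, (kj, nj) in zip(weights, buffer))
    b = b0 + sum(w * (nj - kj) for w, (kj, nj) in zip(weights, buffer))
    k, n = new_datum
    return log_marginal(k, n, a, b) - lam * np.sqrt(2.0 * kl_beta(a, b))
```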
3 RELATED WORKS

A variety of approaches have been proposed for learning in non-stationary environments. In signal processing, adaptive filtering techniques such as recursive least squares (RLS) and least mean square filtering (LMS) are the de facto approaches for filtering in non-stationary environments (Haykin, 2008). While empirically successful, RLS and LMS are only applicable to a limited range of models, i.e. linear models. In contrast, BAM is a general-purpose algorithm that can be deployed on a wide variety of models.

If the latent dynamics are known (or assumed to be known), then Bayesian filtering can be employed. A popular approach is to model the latent dynamics (equation 5) as an autoregressive process (Kurle et al., 2020; Rimella & Whiteley, 2020). While this approach has been popular, it is only applicable to models whose parameters are real-valued. A seminal work on Bayesian filtering is the Bayesian online changepoint detection (BOCD) algorithm of Adams & MacKay (2007), where the latent dynamics (equation 5) are modeled as piece-wise constant. While BOCD is broadly applicable and has seen empirical success, the caveat is that the agent forgets all previous experience when a change is detected; thus, previously visited states appear novel to the agent and learning must begin from scratch. An extension to BOCD was proposed by Li et al. (2021), where, when a change is detected, a scaled version of the previous posterior is used as the prior. While similar in spirit to BAM, we note that the approach proposed in Li et al. (2021) is designed for Gaussian distributions, while BAM can work with arbitrary distributions. Moreover, the approach in Li et al. (2021) can only increase the uncertainty by a fixed, pre-determined amount, while BAM can adaptively modulate its uncertainty.

Previous works have proposed solutions for making recursive Bayes more suited for use in non-stationary environments through exponential forgetting of past data (Moens, 2018; Moens & Zénon, 2019; Masegosa et al., 2020). While these models have also seen empirical success, their focus has been on forgetting past experiences, which prevents the agent from leveraging relevant past experiences. In BAM, the agent is focused not only on forgetting irrelevant experiences but on remembering relevant experiences as well.

The use of readout weights in BAM can be seen as an instance of likelihood tempering, which has been used to perform robust Bayesian inference (Wang et al., 2017) and to help with approximate Bayesian inference schemes (Neal, 1996; 2001; Mandt et al., 2016). While previous works focus on the offline case where data has already been collected, BAM focuses on the online case where the agent adaptively tempers the likelihood.

The concept of an external memory buffer has recently been explored in machine learning (Gemici et al., 2017; Wu et al., 2018; Marblestone et al., 2020). While similar in spirit to BAM, most works use a softmax as their readout weight. As a byproduct, the agent must select an element from the buffer even if it isn't applicable to the task at hand! BAM has no such restriction, and can ignore all the previous data in the buffer, resetting back to the base prior.

4 EXPERIMENTS

To demonstrate the versatility of BAM, we apply it in a variety of scenarios. As BAM is a learning paradigm, it can be implemented as a module in a larger framework, allowing it to be easily used in settings such as control/reinforcement learning and domain adaptation (Thompson, 1933; Osband et al., 2018; Lowrey et al., 2018; Yoon et al., 2018). BAM requires the ability to construct the posterior, p(θ_t|D_{<t}, W_t), and evaluate the log marginal likelihood, log p(D_t|D_{<t}, W_t). In general, the posterior and log marginal likelihood are only available analytically for conjugate priors (Gelman et al., 2013). While approaches exist for approximating the posterior (Robert et al., 2004; Brooks et al., 2011; Blei et al., 2017) and the log marginal likelihood (Robert et al., 2004; Gelman et al., 2013; Grosse et al., 2015), we restrict ourselves to conjugate priors to ensure that any benefits of BAM are not due to the uncertain effects of approximations.
The use of conjugate priors also allows us to use sufficient statistics to compute posteriors, allowing BAM to scale gracefully when the number of data points in a batch is large (Casella & Berger, 2021).

4.1 EXPERIMENT 1: INFERENCE IN A NON-STATIONARY ENVIRONMENT

To evaluate BAM on online inference in a non-stationary environment, we generate data from the following model

θ_t = a sin(2πt/100) + b,   (22)
p(D_t|θ_t) = Binomial(15, θ_t),   (23)

where a = 0.3 and b = 0.5 are chosen such that the lower and upper bounds for θ_t are 0.2 and 0.8, respectively. We evaluate BAM with no regularization, λ = 0, and with regularization, where λ = 0.1; as the data is discrete, there is a possibility that BAM could overfit, thus a priori we would expect the regularized BAM to perform better. We compare against recursive Bayes, Bayesian exponential forgetting (BF) and Bayesian online changepoint detection (BOCD).³

³ The timescale parameter for BOCD is 1/100, which is the frequency of the sinusoid. The weighting term for BF is 0.8.

Figure 1 demonstrates the weakness of recursive Bayes; as it views more data, the posterior gets more confident. This reduces the learning speed of the agent, preventing it from accurately tracking θ_t and causing it to converge to the average with an extremely low posterior variance. BOCD tracks the parameter relatively well, though its estimates are slightly delayed. As BOCD lacks the ability to recall useful data from the past, its posterior variance resets every time a changepoint is detected. BAM is able to track θ_t and doesn't suffer from the temporal lag seen in the BOCD results, though the lack of regularization leads to estimates that are not as smooth as BOCD's. The posterior variance of BAM reflects that the agent remembers relevant history and forgets irrelevant history, reducing the jump in posterior variance when revisiting a previously seen state. Lastly, we can see that BAM with regularization leads to smoother estimates but tends to be less confident compared to the other methods.

4.2 EXPERIMENT 2: CONTROLS

In this section we illustrate the benefit of memory by applying BAM to the task of learning a model of non-linear dynamics for control. The task is an analytical version of Cartpole (Barto et al., 1983), where the goal is to swing up a pole on a moving cart. Non-stationarity is introduced by changing the environment's gravity over time. We explore the performance of BAM under two different information models. In the episodic setting, the agent is told when a change occurs, but not the value of the new gravity parameter. In the continual learning setting, the agent is not informed of environmental changes.⁴ The reward for the task is the cosine of the angle of the pole on the cart, where an angle of 0° is the vertical 'up' position.

⁴ For both settings, the number of data points in a batch is relatively large, leading the log marginal likelihood to overtake the prior in equation 21. As regularization has little effect, results are shown for λ = 0.

To make the problem amenable to analytical posterior and log marginal likelihood computations, we model the nonlinear dynamics using linear regression with random Fourier features (RFF) (Rahimi & Recht, 2007)

x_t = x_{t-1} + M φ(x_{t-1}, a_t) + ε_t,  ε_t ∼ N(0, σ²I),   (24)

where x_t ∈ R^{d_x} is the state vector, a_t ∈ R^{d_a} is the action vector, ε_t ∈ R^{d_x} is state noise and φ is our RFF function. For simplicity, we assume a fixed noise variance of σ² = 10^{-6}.
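A minimal sketch of this RFF dynamics parameterization follows; the feature construction follows Rahimi & Recht (2007), with the feature count and bandwidth taken from Appendix D.1 and the remaining sizes illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d_x, d_a, n_feat, bandwidth = 4, 1, 200, 6.0   # cartpole-like sizes; bandwidth as in Appendix D.1

# Random Fourier features approximating an RBF kernel (Rahimi & Recht, 2007).
Omega = rng.normal(0.0, 1.0 / bandwidth, size=(n_feat, d_x + d_a))
phase = rng.uniform(0.0, 2.0 * np.pi, size=n_feat)

def phi(x, a):
    z = np.concatenate([x, a])
    return np.sqrt(2.0 / n_feat) * np.cos(Omega @ z + phase)

def step(x, a, M, sigma2=1e-6):
    """One step of the dynamics model in eq. 24 for a weight matrix M of shape (d_x, n_feat)."""
    return x + M @ phi(x, a) + rng.normal(0.0, np.sqrt(sigma2), size=d_x)
```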
This parameterization allows us to perform Bayesian linear regression over M, which is analytically tractable (Gelman et al., 2013). Full details can be found in Appendix D.1.

4.2.1 EPISODIC ONE-SHOT

In this setting our simulated Cartpole hypothetically moves between different planets (hence a change in gravity) while still being asked to swing the pole up. In an episode, gravity is fixed for the duration of 15 trials, where each trial resets the Cartpole to a random initial state, x_0. Each trial produces a trajectory of states and actions of length H that are batched into one unit of data, such that each episode contributes 15 experiences; thus the datum for trial t is D_t = {([x_j, a_j], x_j − x_{j−1})}_{j=1}^{H}.

We compare BAM to recursive Bayes in a one-shot manner: after the first trial of a new episode, BAM computes a weight vector over all previously encountered trial data to inform a posterior for the duration of the episode. Recursive Bayes is reset to the base prior at the beginning of a new episode. Both proceed to update their beliefs every trial in an episode. We show in Figure 2 results over 5 random seeds, with the expected score of a ground-truth model shown as a reference. The first time BAM encounters a novel planet, it resets its prior to the base prior and starts learning from scratch, similar to recursive Bayes. On subsequent visits, however, BAM is able to leverage its past experiences to quickly adapt and recover high levels of performance. As recursive Bayes starts from scratch, it will again need multiple trials to develop a competent model.

4.2.2 CONTINUAL LEARNING

In addition to the challenge of adapting to a different environment, we also test BAM when the agent is not informed of the change, such that adaptation must happen continually. In this scenario without explicit episodes, the gravity of the world can change after a trial, unbeknownst to the agent. Similar to the previous setting, a datum for trial t is D_t = {([x_j, a_j], x_j − x_{j−1})}_{j=1}^{H}. While it is straightforward to run BAM in this setting, we also investigate combining BAM with BOCD, which we denote as BAM + BOCD. In BOCD, the detection of a changepoint causes the posterior distribution to be reset to the base prior. In BAM + BOCD, the detection of a changepoint is used as a signal for when the agent should adapt its prior by computing W_t to obtain p(θ_t|W_t, D_{<t}); this avoids rerunning the optimization procedure after each trial.

We show in Figure 3 that while BOCD works as intended, without BAM the Cartpole has to relearn a model from the prior belief, leading to significant dips in the expected reward. While all methods are able to adapt when the environment is in a constant state, the use of past information allows BAM and BAM + BOCD to quickly adapt. We can see that BAM and BAM + BOCD perform very similarly to each other, suggesting that we can bypass unnecessary computation.

4.3 EXPERIMENT 3: NON-STATIONARY BANDIT

A common environment for testing the performance of online learning algorithms is the bandit setting (Sutton & Barto, 2018). We study a non-stationary version of the bandit setting where each arm switches between two values asynchronously, such that the best arm could be, at any point in time, a previously low-value arm. Gaussian noise with σ = 0.25 is added to the current arm value. Sample arm values can be found in Figure 5.
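A minimal sketch of this arm-switching process is below; the switching times and value ranges are illustrative assumptions, as only the noise level σ = 0.25 is specified above:

```python
import numpy as np

rng = np.random.default_rng(3)

def make_arms(n_arms=10, horizon=2000, sigma=0.25):
    """Each arm switches asynchronously between two fixed values; noise is added per pull."""
    lo = rng.uniform(0.0, 0.5, size=n_arms)
    hi = rng.uniform(0.5, 1.0, size=n_arms)
    period = rng.integers(100, 500, size=n_arms)   # illustrative per-arm switching times
    means = np.empty((horizon, n_arms))
    for t in range(horizon):
        phase = (t // period) % 2                  # 0 -> low value, 1 -> high value
        means[t] = np.where(phase == 0, lo, hi)
    return means, sigma

means, sigma = make_arms()
reward = means[0, 3] + rng.normal(0.0, sigma)      # noisy pull of arm 3 at t = 0
```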
For stationary bandits, a popular algorithm is Thompson sampling (Thompson, 1933), in which a posterior over each arm is continually updated via recursive Bayes. These posteriors are then leveraged to decide which arm the agent should pull, where the posterior uncertainty allows the agent to automatically switch between exploration and exploitation. In the non-stationary setting, we would expect vanilla Thompson sampling to fail, as the arm posteriors would continue becoming more certain, as is evident from section 4.1. While there are many approaches for adapting BAM to perform well in the non-stationary bandit setting, we take a simple approach and combine BAM with the upper confidence bound (UCB) bandit algorithm (Agrawal, 1995), which we call UCBAM; in the interest of space, we provide an algorithm table in Appendix D.3.1. We compare UCBAM against UCB, Thompson sampling, Bayesian exponential forgetting + Thompson sampling, and a BOCD + Thompson sampling scheme proposed by Mellor & Shapiro (2013); hyperparameter values can be found in Appendix D.3. From Figure 4, we see that UCBAM outperforms the other methods for both 10 and 50 arms. Thompson sampling fails to capture the true current values of the arms and suffers a large penalty, while the exploration afforded by UCB enables better performance. BOCD achieves low regret in the 10-arm setting, but reverts to its prior too often to perform well with 50 arms.

4.4 EXPERIMENT 4: DOMAIN ADAPTATION WITH ROTATED MNIST

In the image classification setting, we often want to operate across a variety of domains. Traditional approaches include learning a single high-capacity model or encoding assumptions about the domain structure into the system (Jaderberg et al., 2015; Worrall et al., 2017). Instead, we use a simple multivariate linear regression model where the targets are one-hot encoded labels, taking the highest output as the selected class. We consider a setting where the distribution of domains is known and is the same at both train and test time, and evaluate BAM's ability to classify given a small number of labeled examples from the domains to adapt its belief. To achieve this, we create a rotated MNIST dataset. 32 domains were created, where each domain comprised 1,875 images sampled randomly without replacement from the training set. In a domain, the images are rotated by an angle sampled uniformly at random from 0 to π. Each domain is treated as one batch of data in the memory buffer, i.e. D_i = {(x_{ij}, y_{ij})}_{j=1}^{1875}. We split and rotate the test set similarly into 8 domains and give 10 labeled examples from each to find readout weights over the training data. We calculate the average accuracy over all test domains and collect results over 10 random seeds. While OLS trained over all domains achieves an accuracy of 55% ± 3.7% (mean ± standard deviation), BAM achieves a test-set accuracy of 71.8% ± 5.2%, showing that BAM is able to leverage previous experiences to adapt to novel domains.

5 CONCLUSION AND FUTURE WORK

In this work we present BAM, a flexible Bayesian framework that allows agents to adapt to non-stationary environments. Our key contribution is the addition of a memory buffer to the Bayesian framework, which allows the agent to adaptively change its prior by choosing which past experiences to remember and which to forget. Empirically, we show the proposed approach is general enough to be deployed in a variety of problem settings such as online inference, control, non-stationary bandits and domain adaptation.
To ensure that we isolated the benefits of BAM, the experiments focused on conjugate-prior distributions, as this allowed us to compute the prior/posterior and the log marginal likelihood in closed form. Future work will focus on leveraging advances in streaming variational inference (Broderick et al., 2013; Kurle et al., 2020) to allow BAM to be deployed on more complicated models, e.g. Bayesian deep neural networks. For simplicity, we focused on binary values for the readout weights, as this allowed a simple greedy discrete optimization algorithm to be used. We imagine that allowing the weights to take any value between 0 and 1 will increase performance in certain settings and allow BAM to construct a much larger repertoire of priors, as well as suggest different optimization algorithms to use within the framework. Finally, efficient memory buffer schemes will be explored to avoid the 'infinite memory' problem of continual learning, enabling BAM to operate efficiently indefinitely.

6 ACKNOWLEDGMENTS

The authors thank Ayesha Vermani, Matthew Dowling and Il Memming Park for insightful discussions and feedback.

A EXAMPLES OF POSTERIORS WITH DECREASING VARIANCE

In this section we provide two cases where the variance of the posterior is non-increasing with probability 1 as more data is collected, regardless of the observed data. For simplicity we stick to 1D, though we are confident these results extend to the multi-dimensional setting.

A.1 BAYESIAN ESTIMATION OF THE MEAN OF A NORMAL DISTRIBUTION

The likelihood is of the form

p(y|θ) = N(θ, σ²),   (25)

where σ² > 0 is known. We use a normal prior

p(θ) = N(θ̄_0, τ_0),   (26)

where τ_0 > 0. Given arbitrary data y_1, ..., y_N ∼ p(y_{1:N}), we get that the posterior is of the form

p(θ|y_{1:N}) = N(θ̄_N, τ_N),   (27)

where

τ_N = (τ_0^{-1} + N σ^{-2})^{-1} = σ²τ_0 / (σ² + N τ_0),   (28)
θ̄_N = τ_N (τ_0^{-1} θ̄_0 + σ^{-2} ∑_{n=1}^{N} y_n).   (29)

We observe that the posterior variance, equation 28, is not a function of the observed data. In fact, the posterior variance is deterministic given N, τ_0 and σ². In this particular setting, we can show that τ_N is a strictly decreasing function of N. To prove that τ_0 > τ_1 > ... > τ_n > ... > τ_N, it suffices to show that

τ_{n-1} > τ_n,  ∀ n ∈ {1, ..., N},   (30)

which is equivalent to showing that

τ_n / τ_{n-1} < 1,  ∀ n ∈ {1, ..., N}.   (31)

Before proceeding, we note that as Bayes' theorem is closed under recursion, we can always express the posterior variance as

τ_n = (τ_{n-1}^{-1} + σ^{-2})^{-1} = σ²τ_{n-1} / (σ² + τ_{n-1}).   (32)

Computing τ_n / τ_{n-1},

τ_n / τ_{n-1} = [σ²τ_{n-1} / (σ² + τ_{n-1})] × (1 / τ_{n-1})   (33)
             = σ² / (σ² + τ_{n-1}).   (34)

Because

τ_n > 0,  ∀ n ∈ {0, ..., N},   (35)

we have that σ² < σ² + τ_{n-1}, and conclude that τ_n / τ_{n-1} < 1.

A.2 BAYESIAN LINEAR REGRESSION

Next, we consider the setting of Bayesian linear regression with known variance. The likelihood is of the form

p(y_i|x_i, θ) = N(θ x_i, σ²),  x_i ∈ R,   (36)

where σ² > 0 is known. We use a normal prior

p(θ) = N(θ̄_0, τ_0),   (37)

where τ_0 > 0. Given arbitrary observations (x_1, y_1), ..., (x_N, y_N), we have that the posterior is of the form

p(θ|x_{1:N}, y_{1:N}) = N(θ̄_N, τ_N),   (38)

where

τ_N = (τ_0^{-1} + σ^{-2} ∑_{n=1}^{N} x_n²)^{-1} = σ²τ_0 / (σ² + τ_0 ∑_{n=1}^{N} x_n²),   (39)
θ̄_N = τ_N (τ_0^{-1} θ̄_0 + σ^{-2} ∑_{n=1}^{N} x_n y_n).   (40)

To prove that τ_0 ≥ τ_1 ≥ ... ≥ τ_n ≥ ... ≥ τ_N, it suffices to show that

τ_n / τ_{n-1} ≤ 1,  ∀ x_n ∈ R, ∀ n ∈ {1, ..., N}.   (41)

Again, because Bayes' theorem is closed under recursion, we can always rewrite the posterior variance as

τ_n = (τ_{n-1}^{-1} + σ^{-2} x_n²)^{-1} = σ²τ_{n-1} / (σ² + τ_{n-1} x_n²).   (42)
So

τ_n / τ_{n-1} = [σ²τ_{n-1} / (σ² + τ_{n-1} x_n²)] × (1 / τ_{n-1})   (43)
             = σ² / (σ² + τ_{n-1} x_n²).   (44)

As x_n² ≥ 0, we have that τ_n / τ_{n-1} ≤ 1, which completes the proof.

B PROOF OF PROPOSITION 1

For clarity, we restate the proposition below.

Proposition. Let

p(θ|D_{<t}, W_t) ∝ p(θ) ∏_{j=1}^{t-1} p(D_j|θ)^{w_{t,j}},  w_{t,j} ∈ {0, 1},   (45)

be the prior used in BAM and let

p(θ|D_{<t}) ∝ p(θ) ∏_{j=1}^{t-1} p(D_j|θ)   (46)

be the recursive Bayes prior. Then

E[Var(θ|D_{<t}, W_t) | W_t] ≥ E[Var(θ|D_{<t})],  ∀ W_t ∈ {0, 1}^{t-1}.   (47)

Proof. We begin by describing some simple cases before presenting the proof for the general case.

Case 1: All the readout weights are 1. If all the readout weights are 1, i.e. W_t = 1, then

p(θ|D_{<t}, W_t = 1) = p(θ|D_{<t}),   (48)

recovering the recursive Bayes prior. Thus

E[Var(θ|D_{<t}, W_t = 1) | W_t = 1] = E[Var(θ|D_{<t})].   (49)

Case 2: All the readout weights are 0. If all the readout weights are 0, i.e. W_t = 0, then

p(θ|D_{<t}, W_t = 0) = p(θ),   (50)

recovering the base prior. The law of total variance states

Var(θ) = E[Var(θ|D_{<t})] + Var(E[θ|D_{<t}]).   (51)

As both terms on the right-hand side are non-negative, this implies that

E[Var(θ|D_{<t}, W_t = 0) | W_t = 0] = Var(θ) ≥ E[Var(θ|D_{<t})].   (52)

Case 3: General case. Let r be the indices of the readout weights set to 1 ("remembered") and f be the indices of the readout weights set to 0 ("forgotten"). We can express the memory buffer as D_{<t} = D_r ∪ D_f, where D_r are the data points selected by the readout weights and D_f are the data points that are ignored. We can rewrite the BAM prior as

p(θ|D_{<t}, W_t) = p(θ|D_r),   (53)

which is equivalent to applying Bayes' theorem using D_r. Similarly, we can rewrite the recursive Bayes prior as

p(θ|D_{<t}) = p(θ|D_r, D_f) ∝ p(D_f|θ) p(θ|D_r).   (54)

Using the law of total variance, we get

Var(θ|D_{<t}, W_t) = Var(θ|D_r) = E[Var(θ|D_{<t}) | D_r] + Var(E[θ|D_{<t}] | D_r),   (55)

where again, the above implies

Var(θ|D_r) ≥ E[Var(θ|D_{<t}) | D_r].   (56)

As the above inequality holds for all values of D_r, it also holds in expectation:

E[Var(θ|D_r) | W_t] ≥ E[Var(θ|D_{<t}) | W_t].   (57)

Since Var(θ|D_{<t}) is the variance under the recursive Bayes model, it is not a function of W_t, allowing the conditioning on W_t to be dropped:

E[Var(θ|D_{<t}) | W_t] = E[Var(θ|D_{<t})].   (58)

Applying our definition of D_r recovers the desired result:

E[Var(θ|D_{<t}, W_t) | W_t] ≥ E[Var(θ|D_{<t})].   (59)

C DISCUSSION OF GREEDY DISCRETE OPTIMIZATION

As the number of choices is 2^{t-1}, it is impractical to use brute-force methods for solving the discrete optimization problem defined in equation 21. For simplicity, we use two types of greedy approaches for discrete optimization. In both cases, each element in memory is evaluated against a target datum with the inner term of equation 19, i.e. the log marginal likelihood plus the regularization term.

The first is a bottom-up approach, where we start with all readout weights set to 0 and greedily add the most beneficial associated datum until the combined score no longer improves. Pseudo code is displayed in Algorithm 1. Note that this is similar in spirit to the stepwise selection approach used for selecting variables in linear regression (Hocking, 1976).
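A minimal runnable sketch of the bottom-up procedure for the Beta–Binomial model used earlier (with λ = 0, so the score reduces to the log marginal likelihood; buffer contents are illustrative):

```python
from scipy.special import betaln

a0, b0 = 1.0, 1.0   # base prior Beta(a0, b0)

def score(weights, buffer, new_datum):
    """Log marginal likelihood of the new datum under the BAM prior (lambda = 0)."""
    a = a0 + sum(w * k for w, (k, n) in zip(weights, buffer))
    b = b0 + sum(w * (n - k) for w, (k, n) in zip(weights, buffer))
    k, n = new_datum
    return betaln(a + k, b + n - k) - betaln(a, b)

def bottom_up(buffer, new_datum):
    """Start from all-zero weights; per pass, flip the best remaining weight while the score improves."""
    weights = [0] * len(buffer)
    best = score(weights, buffer, new_datum)
    while True:
        candidates = []
        for i, w in enumerate(weights):
            if w == 0:
                trial = weights.copy(); trial[i] = 1
                candidates.append((score(trial, buffer, new_datum), i))
        if not candidates:
            return weights
        s, i = max(candidates)
        if s <= best:
            return weights
        best, weights[i] = s, 1

buffer = [(12, 15), (11, 15), (3, 15)]
print(bottom_up(buffer, (13, 15)))   # expected to remember the high-count batches
```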
Algorithm 1: Bottom-Up Greedy for BAM
Data: memory D_{<t}, target D_t, prior p, regularizer strength λ
priorscore ← log p(D_t)
for size(D_{<t}) do
    for each D_i in D_{<t} do
        if W[i] = 0 then
            scores[i] ← log ∫ p(D_t|θ_t) p(θ_t|W ∪ {i}, D_{<t}) dθ_t + log p(W ∪ {i} | D_{<t})
        else
            scores[i] ← −∞
        end
    end
    score, idx ← findmax(scores)
    if score > priorscore then
        W[idx] ← 1 ;  priorscore ← score ;  p ← posterior(p, D_{<t}[idx])
    else
        return W
    end
end
Result: readout weights W
(Here W ∪ {i} denotes W with w_i set to 1.)

In the second approach, the readout weight starts at 0. The contribution of each datum in D_{<t} is evaluated independently (and can be done practically in parallel with either multi-core CPUs or GPUs). These scores are filtered to keep only those better than the base prior's likelihood. The top q-th percentile of the remaining scores are chosen, and their corresponding readout weight values are set to 1. Pseudo code is displayed in Algorithm 2. This approach is faster than bottom-up, as only one round of optimization is needed, but the combination of the individual experiences could potentially lead to sub-optimal performance. Additionally, the percentile cutoff may needlessly include or exclude weight values. In practice, we found that the two approaches performed similarly, with the main exception being the MNIST experiment, where the parallel approach was significantly worse than bottom-up.

Algorithm 2: Parallel selection for BAM
Data: memory D_{<t}, target D_t, regularizer strength λ, prior distribution p, cutoff q
priorscore ← log p(D_t)
for each D_i in D_{<t} do
    scores[i] ← log ∫ p(D_t|θ_t) p(θ_t|D_i) dθ_t + log p(W|D_i)
end
cutoff ← quantile(scores > priorscore, q)
for each i in scores do
    if scores[i] > cutoff then W[i] ← 1 else W[i] ← 0 end
end
Result: readout weights W

D EXPERIMENTAL SETTINGS

D.1 CONTROLS

For our controls experiments, we used Model Predictive Path Integral control (Williams et al., 2017), a model predictive control (MPC) algorithm, with a planning horizon of 50 timesteps and 32 sample trajectories. Our sampling covariance was 0.4 for each controlled joint; in the case of Cartpole, the action space is one-dimensional. The temperature parameter we used was 0.5. When planning with a probabilistic model, each sampled trajectory uses a different model drawn from the current belief (as opposed to a model sampled per timestep); planning rollouts include noise, such that

x_t = x_{t-1} + M' φ(x_{t-1}, a_t) + ε_t,  ε_t ∼ N(0, σ²I),   (60)

where M' is sampled from the current belief. φ is the random Fourier features function from Rahimi & Recht (2007), where we use 200 features with a bandwidth calculated as the mean pairwise distance of the inputs (states and actions), which is 6.0. To learn M, we use Bayesian linear regression, where each row of M is modeled as independent. We place a multivariate Normal prior on each of the rows with a prior mean of all 0s and a prior precision of 10^{-4} I.
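A minimal sketch of this regression step: with a shared feature matrix and row-independent Gaussian priors, the posterior over M has a closed form. The helper name is ours; the prior precision and noise variance are the values quoted above.

```python
import numpy as np

def posterior_over_M(Phi, Y, prior_prec=1e-4, sigma2=1e-6):
    """Bayesian linear regression for the dynamics weights M (eq. 24), one row at a time.
    Phi: (T, n_feat) stacked features; Y: (T, d_x) stacked deltas x_t - x_{t-1}."""
    n_feat = Phi.shape[1]
    prec = prior_prec * np.eye(n_feat) + Phi.T @ Phi / sigma2   # posterior precision, shared across rows
    cov = np.linalg.inv(prec)
    mean = cov @ (Phi.T @ Y) / sigma2                           # column j holds row j of M, transposed
    return mean, cov
```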
The Cartpole model's initial state distribution for positions and velocities was sampled uniformly from -0.05 to 0.05, with the pole angle set to π such that it points down. This sets up the swing-up problem. For the episodic one-shot experiment, we perform MPC for 200 timesteps as one trial. 15 trials make one episode, with the dynamical properties of the environment (i.e. gravity) fixed for the duration of the episode. We vary the gravity parameter of the model by selecting gravity values from celestial bodies of the Solar System; we used Earth, Mars, and Neptune at 9.81, 3.72, and 11.15 m/s², respectively. At the start of a new episode, each method's beliefs are reset to the base prior, and each method proceeds to update its beliefs accordingly. BAM retains each trial's datum in memory across episodes.

For the continual learning experiment, we do not inform the agent that the model dynamics have changed, i.e. we never reset the agent's belief to a prior. Instead, we use Bayesian online changepoint detection (BOCD) to discern whether the underlying model distribution has changed. BOCD is compared against BAM, both with and without changepoint detection; while BOCD resets to a prior when a change is detected, BAM optimizes for a weight vector over the previously experienced data. The BOCD switching parameter λ for its hazard function was set to 0.11. The agent attempts the task for 60 trials, with the environment changing 3 times during those trials.

D.2 DOMAIN ADAPTATION WITH ROTATED MNIST

We ran 10 independent Bayesian linear regressions, one for each dimension of the one-hot encoded target. As the prior, we use a multivariate Normal distribution with a prior mean of all 0s and a prior precision of 0.1 I. Similar to the controls experiment, we assume the additive noise is fixed and set to σ² = 10^{-4}. As regularization had little effect, we set λ = 0.

D.3 NON-STATIONARY BANDITS

For both UCB and UCBAM, we use a confidence-level function of f(t) = 1 + t log²(t). The timescale parameter for BOCD + Thompson sampling is 0.016, which is the expected frequency of the arm switches. The weighting term for Bayesian exponential forgetting + Thompson sampling is 0.8.

D.3.1 DESCRIPTION OF UCBAM

The challenge of bandit settings is the need to explore, especially in the non-stationary setting we devised. As such, UCB is a well-known algorithm for leveraging the uncertainty in the arm values to enable exploration. We combine this frequentist method with BAM as follows. When we assume to 'know' the current best arm value, we exploit it and keep a belief over its distribution with BAM. The signal for whether the best arm is 'known' is whether the likelihood of the current arm's value is higher under our current arm belief or under the naive base prior. If the base prior produces a higher likelihood, we assume the current arm distribution is incorrect (and will be updated with BAM), and we default to the UCB metric for arm selection. This simple combination of methods allows for the exploration benefits of UCB together with the quick recognition of high-value arms due to BAM, and their subsequent exploitation.

Algorithm 3: UCBAM
Data: prior distribution p
K ← number of arms
b ← copy(p), empty D, K times   # belief and memory per arm
known ← false
for each iteration do
    if known then
        arm ← thompson(b_{1..K})
    else
        arm ← UCB choice
    end
    v ← pull(arm)
    if log p(v) ≥ log b_arm(v) then
        known ← false
    else
        known ← true
    end
    b_arm ← BAM(p, D_{<t}^{arm}, v)   # BAM posterior update
    D_{<t} ← [D_{<t}, v]              # add value to memory
end
1. What is the focus and contribution of the paper regarding Bayesian learning in non-stationary environments? 2. What are the strengths of the proposed method, particularly its theoretical development and novelty? 3. Do you have any minor concerns or suggestions regarding the paper's content, such as equation notation, regularization, and algorithm naming?
Summary Of The Paper Review
Summary Of The Paper In this paper, a method BAM is proposed for Bayesian learning in non-stationary environments: basically, at each time step, each previous datum may or may not be incorporated into the new posterior, so that old data from different states can be ignored, while old data from the same or similar states is remembered. The method doesn't rely on parametric assumptions. Experiments demonstrate it to work well in various scenarios. Review Strengths: The theoretical development is principled; This approach seems to be novel (though I'm not familiar with the literature on Bayesian methods for non-stationary environments); The paper is clearly written. Concerns: None. Minor points: In equations (11) and (12), shouldn't θ_t be θ, as it is assumed not to change with t here? Equation (17) is just a normalisation constant. Since (14) and (16) are also given only up to constants, is it worthwhile to include? Several times in section 2.1: "preventing BAM from overfitting" sounds like BAM won't overfit at all. Could you change the wording to reflect that regularisation reduces the chance/severity of overfitting, without preventing it altogether? Below equation (23) and in appendix C: (t−1)! should be 2^{t−1}. Algorithm 1: While "Bottom's Up" [sic] is a nice name, I think the term you're looking for is "Bottom-Up" :)
ICLR
Title BAM: Bayes with Adaptive Memory Abstract Online learning via Bayes’ theorem allows new data to be continuously integrated into an agent’s current beliefs. However, a naive application of Bayesian methods in non-stationary environments leads to slow adaptation and results in state estimates that may converge confidently to the wrong parameter value. A common solution when learning in changing environments is to discard/downweight past data; however, this simple mechanism of “forgetting” fails to account for the fact that many real-world environments involve revisiting similar states. We propose a new framework, Bayes with Adaptive Memory (BAM), that takes advantage of past experience by allowing the agent to choose which past observations to remember and which to forget. We demonstrate that BAM generalizes many popular Bayesian update rules for non-stationary environments. Through a variety of experiments, we demonstrate the ability of BAM to continuously adapt in an ever-changing world. 1 INTRODUCTION The ability of an agent to continuously modulate its belief while interacting with a non-stationary environment is a hallmark of intelligence and has garnered a lot of attention in recent years (Zhang et al., 2020; Ebrahimi et al., 2020; Xie et al., 2020). The Bayesian framework enables online learning by providing a principled way to incorporate new observations into an agent’s model of the world (Jaynes, 2003; Gelman et al., 2013). Through the use of Bayes’ theorem, the agent can combine its own (subjective) a priori knowledge with data to achieve an updated belief encoded by the posterior distribution. The Bayesian framework is a particularly appealing option for online learning because Bayes’ theorem is closed under recursion, enabling continuous updates in what is commonly referred to as the recursive Bayes method (Wakefield, 2013). As an example, suppose the agent first observes a batch of data, D1, and then later observes another batch of data, D2. We can express the agent’s posterior distribution over the world, where the world is represented by θ, as p(θ|D1,D2) = p(D2|θ)p(θ|D1) p(D2|D1) , (1) where p(D2|D1) = ∫ p(D2|θ)p(θ|D1)dθ. (2) Equation 1 demonstrates the elegance and simplicity of recursive Bayes: at time t, the agent recycles its previous posterior, p(θ|D<t), where D<t = {D1, · · · ,Dt−1}, into its current prior and then combines it with a newly observed batch of data, Dt, to obtain an updated posterior, p(θ|D≤t). ∗corresponding authors At first glance, it would appear that a naive application of recursive Bayes would suffice for most online learning tasks. However, the recursive Bayes method relies on the assumption that the world is stationary, i.e. D1,D2, · · · are all independent and identically distributed. When this assumption is violated, recursive Bayes can fail catastrophically. As an illustration, consider the law of total variance: Var(θ|D<t) = E[Var(θ|D<t,Dt) ∣∣D<t] + Var(E[θ|D<t,Dt]∣∣D<t). (3) Since both terms on the right hand side are positive, equation 3 reveals that in expectation, the variance of the posterior decreases as more data is seen regardless of the actual distribution of Dt, i.e. Var(θ|D<t) ≥ E[Var(θ|D<t,Dt) ∣∣D<t]. (4) In fact, for some models equation 4 is true with probability 1; we demonstrate examples in Appendix A. Thus, if the parameters of the environment, θ, were to change, the variance of the posterior would still decrease, becoming more certain of a potentially obsolete parameter estimate. 
Modeling the environment as stationary when it is actually changing also keeps the learning speed of the agent artificially low, as tighter posteriors prevent large jumps in learning. This is the opposite of what an intelligent agent should do in such an event: if the environment changes, we would expect the agent’s uncertainty and learning speed to increase in response. As was elegantly stated by Monton (2002), the problem with naive use of recursive Bayes is that "Such a Bayesian never forgets." Previous approaches on enabling recursive Bayes to work in non-stationary settings have primarily focused on forgetting past experience either through the use of changepoint detection (Adams & MacKay, 2007; Li et al., 2021), or by exponentially weighting past experiences (Moens, 2018; Moens & Zénon, 2019; Masegosa et al., 2020). While empirically successful, their focus on forgetting the past means that revisited states are treated as novel. In this work we take an alternative approach to online Bayesian learning in non-stationary environments by endowing an agent with an explicit memory module. Crucially, the addition of a memory buffer equips the agent with the ability to modulate its uncertainty by choosing what past experiences to both forget and remember. We call our approach Bayes with Adaptive Memory (BAM) and demonstrate its wide applicability and effectiveness on a number of non-stationary learning tasks. 2 BAYES WITH ADAPTIVE MEMORY The generative model is assumed to evolve according to θt ∼ p(θt|θt−1, t), (5) Dt ∼ pt(D) ≡ p(D|θt), (6) where equation 5 is the latent dynamics that dictate the evolution of the environment parameters, θt, and equation 6 is the likelihood whose parametric form is fixed throughout time, i.e. pt(D) = N (θt, σ2). Equations 5 and 6 define a state-space model, which allows one to infer θt through Bayesian filtering (Särkkä, 2013) p(θt|D≤t) ∝ p(Dt|θt)p(θt|D<t), (7) p(θt|D<t) = ∫ p(θt|θt−1, t)p(θt−1|D<t)dθt−1. (8) The parameterization of equations 5 and 6 dictate the tractability of equations 7 and 8. If a priori an agent knew that equation 5 is a linear dynamical system with additive white Gaussian noise and equation 6 is also Gaussian whose conditional mean is a linear function of θt, then the Kalman filter can be used (Kalman, 1960). For more complicated latent dynamics and/or likelihood models, methods such as particle filtering (Doucet & Johansen, 2009) and unscented Kalman filtering (Julier & Uhlmann, 1997) can be used. Crucially, Bayesian filtering methods assume that the latent dynamics governed by equation 5 are known; however, this is rarely the case in practice. Instead of making assumptions on the parametric form of equation 5, we take a different approach. In BAM, the agent maintains a memory buffer, D<t, that stores previous observations of the environment. At time t the agent obtains a new batch of data, Dt ∼ pt(D). How should the agent combine the newly observed data, Dt, with its stored memory, D<t, to update its belief as encoded by the posterior distribution? In recursive Bayes, the posterior distribution is computed according to1 p(θt|Dt,D<t) ∝ p(Dt|θt)p(θt|D<t), (9) p(θt|D<t) ∝ p(θt) t−1∏ j=1 p(Dj |θt), (10) where we refer to p(θt) as the base prior. Equation 10 allows us to interpret recursive Bayes as the agent constructing a dynamic prior, p(θt|D<t), using all the experiences stored in its memory buffer. 
This works under the stationarity assumption; when this assumption is violated, the application of Bayes’ theorem can lead to confidently wrong results as the "distance" between pi(D) and pj(D) can be vast. An alternative is for the agent to completely forget all of its past experiences p(θt|Dt) ∝ p(Dt|θt)p(θt). (11) While equation 11 may be viable in situations where Dt is sufficiently informative, it is wasteful when experiences in the memory buffer may help infer θt. BAM dynamically finds a middle ground between these two extremes of remembering (equation 10) and forgetting (equation 11) everything by allowing the agent to choose which data to use from its memory buffer to construct the prior. Specifically, the agent is endowed with a time-dependent readout weight, Wt = [wt,1, wt,2, · · · , wt,t−1] where wt,j ∈ [0, 1]. Given a new datum Dt, BAM constructs its posterior according to p(θt|Dt,D<t,Wt) ∝ p(θt)p(Dt|θt) t−1∏ j=1 p(Dj |θt)wt,j . (12) We can rewrite equation 12 as p(θt|Dt,D<t,Wt) = p(Dt|θt)p(θt|D<t,Wt) p(Dt|D<t,Wt) , (13) where p(θt|D<t,Wt) ∝ p(θt) t−1∏ j=1 p(Dj |θt)wt,j , (14) and p(Dt|D<t,Wt) = ∫ p(Dt|θt)p(θt|D<t,Wt)dθt. (15) The prior construction in equation 14 is akin to recursive Bayes, but now the agent can dynamically and adaptively change its prior by using the readout weights, Wt, to weigh the importance of previous experience where at the extreme, it can choose to completely forget a previous experience, wt,j = 0, or fully remember it, wt,j = 1. For simplicity, we restrict the readout weights to be binary, i.e. wt,j ∈ {0, 1}. The combination of a memory buffer, D<t, with a time-dependent readout weight, Wt, allows BAM to generalize many previously proposed approaches. By setting wt,1 = wt,2 = · · · = wt,t−1 = 1, we recover recursive Bayes (equation 10). By setting wt,1 = wt,2 = · · · = wt,t−1 = α, where 0 ≤ α ≤ 1 we recover the power priors approach of Ibrahim et al. (2015). By setting wt,j = αt−1−j , where 0 ≤ α ≤ 1, we recover exponential forgetting (Moens, 2018; Moens & Zénon, 2019; Masegosa et al., 2020). Lastly, by setting a particular subset of the readout weights to be 0, we recover Bayesian unlearning (Nguyen et al., 2020). The ability to adaptively change its prior implies that BAM can increase/decrease its uncertainty as the situation demands; subsequently, this modulates the agent’s learning speed. Using variance as a proxy for uncertainty, one would expect that the variance of the prior used in BAM (equation 14) is always at least as large as the variance of the prior used in recursive Bayes (equation 10). We formalize this for the case of binary readout weights in the following proposition. Proposition 1. Let p(θ|D<t,Wt) be the prior used by BAM, defined in equation 14 and let p(θ|D<t) be the recursive Bayes prior, defined in equation 13. Then E[Var(θ|D<t,Wt) ∣∣Wt] ≥ E[Var(θ|D<t)], ∀Wt ∈ {0, 1}t−1. (16) Proof. Proof is in Appendix B. 1Recursive Bayes is equivalent to Bayesian filtering when p(θt|θt−1, t) = δ(θt = θt−1). 2.1 SELECTION OF READOUT WEIGHTS VIA BAYESIAN MODEL-SELECTION While the previous section demonstrated the flexibility of BAM, the question remains: how should the readout weights, Wt, be set? Equation 13 allows us to view different readout weights as different models. Through this lens, we can follow the spirit of Bayesian model selection (Gelman et al., 2013) and compute a posterior over the readout weights p(Wt|Dt,D<t) ∝ p(Wt|D<t)p(Dt|Wt,D<t). 
(17) For practicality, we compute the maximum a posteriori (MAP) estimate of equation 17 (Gelman et al., 2013) and use that as the value of the readout weight Wt = argmax W∈{0,1}t−1 log p(Dt|W,D<t) + log p(W |D<t), (18) = argmax W∈{0,1}t−1 log ∫ p(Dt|θt)p(θt|W,D<t)dθt + log p(W |D<t). (19) The first term of equation 18 is the log marginal likelihood, which measures the likelihood ofDt being distributed according to the predictive distribution, p(D|W,D<t) while the prior, log p(W |D<t), acts as a regularizer. This procedure of constantly updating the readout weights through equation 18 can be interpreted as providing Bayes a feedback mechanism: equation 18 allows the agent to directly measure its ability to fit the observed data using different combination of experiences in its buffer via the readout weight, and then choosing the readout weight that leads to best fit. In contrast, standard Bayesian inference is an open-loop procedure: data, likelihood and prior are given and a posterior is spat out, irrespective of the fit of the model to the data (Simpson et al., 2017). Still left is the question of how do we design the prior, p(W |D<t). In certain scenarios, using an uninformative prior, i.e. p(W |D<t) ∝ 1, may suffice if the data is very informative and/or the number of data points in Dt is large. In scenarios where these conditions are not met, it is important to use an informative prior as it reduces the chance of overfitting. In general, the design of priors is highly nontrivial (Winkler, 1967; Gelman et al., 2013; Simpson et al., 2017). While there exists many potential options, we use penalized model complexity priors proposed by Simpson et al. (2017) as they are designed to reduce the chance of overfitting. Following Simpson et al. (2017), we parameterize the prior as p(W |D<t) ∝ exp ( −λ √ 2DKL[p(θt|W,D<t)‖p(θt)] ) , (20) where λ ∈ [0,∞) is a hyperparameter that controls the strength of the prior.2 Equation 20 encodes our prior belief that we favor values ofWt that produce simpler models, where simplicity is quantified as the Kullback-Leibler divergence between p(θt|Wt,D<t) and the base prior, p(θt). Plugging equation 20 into equation 18 we get Wt = argmax W∈{0,1}t−1 log p(Dt|W,D<t)− λ √ 2DKL[p(θt|W,D<t)‖p(θt)]. (21) In general, solving equation 21 is difficult as the number of possible readout weights is 2(t−1), making brute force solutions practically infeasible. While there exists many approaches for performing discrete optimization, we found that using a simple greedy approach sufficed for our experiments; in the interest of space, we defer discussion regarding this to Appendix C. 3 RELATED WORKS A variety of approaches have been proposed for learning in non-stationary environments. In signal processing, adaptive filtering techniques such as recursive least squares (RLS) and least mean square filtering (LMS) are the de facto approaches for filtering in non-stationary environments (Haykin, 2008). While empirically successful, RLS and LMS are only applicable for a limited range of models, i.e. linear models. In contrast, BAM is a general purpose algorithm that can be deployed on a wide variety of models. 2λ = 0 recovers the uninformative prior case, p(Wt|D<t) ∝ 1. If the latent dynamics are known—or assumed to be known—then Bayesian filtering can be employed. A popular approach is to model the latent dynamics (equation 5) as an autoregressive process (Kurle et al., 2020; Rimella & Whiteley, 2020). 
While this approach has been popular, it is only applicable for models where the parameters are real-valued. A seminal work on Bayesian filtering is the Bayesian online changepoint detction (BOCD) algorithm of Adams & MacKay (2007), where the latent dynamics (equation 5) are modeled to be piece-wise constant. While BOCD is broadly applicable and has seen empirical success, the caveat is that an agent forgets all previous experience when a change is detected; thus, previously visited states appear novel to the agent and learning must begin from scratch. An extension to BOCD was proposed by Li et al. (2021), where when a change is detected a scaled version of the previous posterior is used as the prior. While similar in spirit to BAM, we note that the approach proposed in Li et al. (2021) is designed for Gaussian distributions, while BAM can work with arbitrary distributions. Moreover, the approach in Li et al. (2021) can only increase the uncertainty by a fixed pre-determined amount while BAM can adaptively modulate its uncertainty. Previous works have proposed solutions for making recursive Bayes more suited for use in nonstationary environments through exponential forgetting of past data (Moens, 2018; Moens & Zénon, 2019; Masegosa et al., 2020). While these models have also seen empirical success, their focus have been on forgetting past experiences which prevents the agent to leverage past experiences that are relevant. In BAM, the agent is focused not only on forgetting irrelevant experiences but remembering relevant experiences as well. The use of readout weights in BAM can be seen as an instance of likelihood tempering, which has been used to perform robust Bayesian inference (Wang et al., 2017) and to help with approximate Bayesian inference schemes (Neal, 1996; 2001; Mandt et al., 2016). While previous works focus on the offline case where data has already been collected, BAM focuses on the online case where the agent adaptively tempers the likelihood. The concept of an external memory buffer has recently been explored in machine learning (Gemici et al., 2017; Wu et al., 2018; Marblestone et al., 2020). While similar in spirit to BAM, most works use a softmax as their readout weight. As a byproduct, the agent must select an element from the buffer even if it isn’t applicable to the task at hand! BAM has no such restriction, and can ignore all the previous data in the buffer, resetting back to the base prior. 4 EXPERIMENTS To demonstrate the versatility of BAM, we apply it in a variety of scenarios. As BAM is a learning paradigm, it can be implemented as a module in a larger framework allowing it to be easily used in settings such as control/reinforcement learning and domain adaptation (Thompson, 1933; Osband et al., 2018; Lowrey et al., 2018; Yoon et al., 2018). BAM requires the ability to construct the posterior, p(θt|D<t,Wt), and evaluate the log marginal likelihood, log p(Dt|D<t,Wt). In general, the posterior and log marginal likelihood are only available analytically for conjugate priors (Gelman et al., 2013). While approaches exist for approximating the posterior (Robert et al., 2004; Brooks et al., 2011; Blei et al., 2017) and the log marginal likelihood (Robert et al., 2004; Gelman et al., 2013; Grosse et al., 2015), we restrict ourselves to only use conjugate priors to ensure any benefits of BAM are not due to uncertain effects of approximations. 
The use of conjugate priors also allows us to use sufficient statistics to compute posteriors, allowing BAM to scale amicably when the number of data points in a batch is large (Casella & Berger, 2021). 4.1 EXPERIMENT 1: INFERENCE IN A NON-STATIONARY ENVIRONMENT To evaluate BAM on online inference in a non-stationary environment, we generate data from the following model θt = a sin ( 2πt 100 ) + b, (22) p(Dt|θt) = Binomial(15, θt), (23) where a = 0.3 and b = 0.5 are chosen such that the lower and upper bounds for θt are 0.2 and 0.8, respectively. We evaluate BAM with no regularization, λ = 0, and with regularization, where λ = 0.1; as the data is discrete, there is a possibility that BAM could overfit, thus a priori we would expect the regularized BAM to perform better. We compare against recursive Bayes, Bayesian exponential forgetting (BF) and Bayesian online changepoint detection (BOCD). 3 Figure 1 demonstrates the weakness of recursive Bayes; as it views more data, the posterior gets more confident. This reduces the learning speed of the agent, preventing it from accurately tracking θt, and causing it to converge to the average with an extremely low posterior variance. BOCD tracks the parameter relatively well, though its estimates are slightly delayed. As BOCD lacks the ability to recall useful data from the past, its posterior variance resets every time a changepoint is detected. BAM is able to track θt and doesn’t suffer from temporal lag seen in the BOCD results, though the lack of regularization leads to estimates that are not as smooth as BOCD. The posterior variance of BAM reflects that the agent remembers relevant history and forgets irrelevant history, reducing the jump in posterior variance when revisiting a previously seen state. Lastly, we can see that BAM with regularization leads to smoother estimates but tends to be less confident compared to the other methods. 4.2 EXPERIMENT 2: CONTROLS In this section we illustrate the benefit of memory by applying BAM on a learning task to model non-linear dynamics for controls. The task is an analytical version of Cartpole (Barto et al., 1983), where the goal is to swing-up a pole on a moving cart. Non-stationarity is introduced by changing the environment’s gravity over time. We explore the performance of BAM under two different information models. In the episodic setting, the agent is told when a change occurs, but not the value of the new gravity parameter. In the continual learning setting, the agent is not informed of environmental changes.4 The reward for the task is the cosine of the angle of the pole on the cart, where an angle of 0◦ is the vertical ‘up’ position. To make the problem amenable for analytical posterior and log marginal likelihood computations, we model the nonlinear dynamics using linear regression with random Fourier features (RFF) (Rahimi & Recht, 2007) xt = xt−1 +Mφ(xt−1, at) + εt, εt ∼ N (0, σ2I), (24) 3The timescale parameter for BOCD is 1/100, which is the frequency of the sinusoid. The weighting term for BF is 0.8. 4For both settings, the number of data points in a batch is relatively large, leading the log marginal likelihood to overtake the prior in equation 21. As regularization has little effect, results are shown for λ = 0. where xt ∈ Rdx is the state vector, at ∈ Rda is the action vector, εt ∈ Rdx is state noise and φ is our RFF function. For simplicity, we assume a fixed noise variance of σ2 = 10−6. 
This parameterization allows us to perform Bayesian linear regression over M which is analytically tractable (Gelman et al., 2013). Full details can be found in Appendix D.1. 4.2.1 EPISODIC ONE-SHOT In this setting our simulated Cartpole hypothetically moves between different planets—hence a change in gravity—while still being asked to swing the pole up. In an episode, gravity is fixed for the duration of 15 trials, where each trial resets the Cartpole to a random initial state, x0. Each trial produces a trajectory of states and actions of length H that are batched into one unit of data, such that each episode contributes 15 experiences; thus the datum for trial t is Dt = {([xj , aj ], [xj − xj−1])}Hj=1. We compare BAM to recursive Bayes in a one-shot manner: after the first trial of a new episode, BAM computes a weight vector over all previously encountered trial data to inform a posterior for the duration of the episode. Recursive Bayes is reset to the base prior at the beginning of a new episode. Both proceed to update their belief every trial in an episode. We show in Figure 2 results over 5 random seeds where the expected score for a ground truth model is shown as a reference. The first time BAM encounters a novel planet, it resets its prior to the base prior and starts learning from scratch, similar to recursive Bayes. On subsequent visits however, BAM is able to leverage its past experiences to quickly adapt and recover high levels of performance. As recursive Bayes starts from scratch, it will again need multiple trials to develop a competent model. 4.2.2 CONTINUAL LEARNING In addition to the challenge of adapting to a different environment, we also test BAM when the agent is not informed of the change, such that adaption must happen continually. In this scenario without explicit episodes, the gravity of the world can change after a trial, unbeknownst to the agent. Similar to the previous setting, a datum for trial t is Dt = {([xj , aj ], [xj − xj−1])}Hj=1. While it is straightforward to run BAM in this setting, we also investigate combining BAM with BOCD, which we denote as BAM + BOCD. In BOCD, the detection of a changepoint causes the posterior distribution to be reset to the base prior. In BAM + BOCD, the detection of a changepoint is used as signal for when the agent should adapt its prior by computing Wt, to obtain p(θt|Wt,D<t); this avoids rerunning the optimization procedure after each trial. We show in Figure 3 that while BOCD works as intended, without BAM the Cartpole has to relearn a model from the prior belief, leading to significant dips in the expected reward. While all methods are able to adapt when the environment is in a constant state, the use of past information allows BAM and BAM + BOCD to quickly adapt. We can see that BAM and BAM + BOCD perform very similarly to each other, suggesting that we can bypass unnecessary computation. 4.3 EXPERIMENT 3: NON-STATIONARY BANDIT A common environment for testing the performance of online learning algorithms is the bandits setting (Sutton & Barto, 2018). We study a non-stationary version of the bandits setting where each arm switches between two values asynchronously, such that the best arm could be, at any point in time, a previously low value arm. Gaussian noise with σ = 0.25 is additionally added to the current arm value. Sample arm values can be found in Figure 5. 
For stationary bandits, a popular algorithm is Thompson sampling (Thompson, 1933), in which a posterior over each arm is continually updated via recursive Bayes. These posteriors are then leveraged to decide which arm the agent should pull, where the posterior uncertainty allows the agent to automatically switch between exploration and exploitation. In the non-stationary setting, we would expect vanilla Thompson sampling to fail, as the arm posteriors would continue becoming more certain, as is evident from Section 4.1. While there are many ways to adapt BAM to the non-stationary bandit setting, we take a simple approach and combine BAM with the upper confidence bound (UCB) bandit algorithm (Agrawal, 1995), which we call UCBAM; in the interest of space, we provide an algorithm table in Appendix D.3.1. We compare UCBAM against UCB, Thompson sampling, Bayesian exponential forgetting + Thompson sampling, and a BOCD + Thompson sampling scheme proposed by Mellor & Shapiro (2013); hyperparameter values can be found in Appendix D.3. From Figure 4, we see that UCBAM outperforms the other methods for both 10 and 50 arms. Thompson sampling fails to capture the true current values of the arms and suffers a large penalty, while the exploration afforded by UCB enables better performance. BOCD achieves low regret in the 10-arm setting, but reverts to its prior too often to perform well with 50 arms.

4.4 EXPERIMENT 4: DOMAIN ADAPTATION WITH ROTATED MNIST

In the image classification setting, we often want to operate across a variety of domains. Traditional approaches include learning a single high-capacity model or encoding assumptions about the domain structure into the system (Jaderberg et al., 2015; Worrall et al., 2017). Instead, we use a simple multivariate linear regression model where the targets are one-hot encoded labels, taking the highest output as the selected class. We consider a setting where the distribution of domains is known and is the same at both train and test time, and we evaluate BAM's ability to classify given a small number of labeled examples from each domain with which to adapt its belief. To achieve this, we create a rotated MNIST dataset. 32 domains were created, where each domain comprises 1,875 images sampled randomly without replacement from the training set. Within a domain, the images are rotated by an angle sampled uniformly at random from 0 to $\pi$. Each domain is treated as one batch of data in the memory buffer, i.e. $\mathcal{D}_i = \{(x_{ij}, y_{ij})\}_{j=1}^{1875}$. We split and rotate the test set similarly into 8 domains and give 10 labeled examples from each to find readout weights over the training data. We calculate the average accuracy over all test domains and collect results over 10 random seeds. While OLS trained over all domains achieves a mean and standard deviation accuracy of 55% ± 3.7%, BAM achieves a test set accuracy of 71.8% ± 5.2%, showing that BAM is able to leverage previous experiences to adapt to novel domains.
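To make this setup concrete, the following sketch builds one rotated domain and fits the Bayesian linear readout (a minimal reconstruction assuming `images` is an (N, 28, 28) array and `labels` an (N,) integer array; the prior precision 0.1·I and noise variance 1e-4 follow Appendix D.2, while all other names are ours):

import numpy as np
from scipy.ndimage import rotate

def make_domain(images, labels, rng, n=1875):
    # Rotate a random subset of MNIST by a single domain-specific angle in [0, pi).
    idx = rng.choice(len(images), size=n, replace=False)
    angle = np.degrees(rng.uniform(0.0, np.pi))
    X = np.stack([rotate(im, angle, reshape=False) for im in images[idx]])
    return X.reshape(n, -1), np.eye(10)[labels[idx]]   # flattened pixels, one-hot targets

def blr_posterior(X, Y, prior_prec=0.1, noise_var=1e-4):
    # Independent Bayesian linear regression for each one-hot output dimension.
    d = X.shape[1]
    prec = prior_prec * np.eye(d) + X.T @ X / noise_var   # posterior precision
    mean = np.linalg.solve(prec, X.T @ Y / noise_var)     # posterior mean, shape (d, 10)
    return mean, prec

# Prediction takes the argmax over the 10 regression outputs:
# y_hat = np.argmax(X_test @ mean, axis=1)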
5 CONCLUSION AND FUTURE WORK

In this work we present BAM, a flexible Bayesian framework that allows agents to adapt to non-stationary environments. Our key contribution is the addition of a memory buffer to the Bayesian framework, which allows the agent to adaptively change its prior by choosing which past experiences to remember and which to forget. Empirically, we show the proposed approach is general enough to be deployed in a variety of problem settings such as online inference, control, non-stationary bandits, and domain adaptation.

To ensure that we isolated the benefits of BAM, the experiments focused on conjugate-prior distributions, as this allowed us to compute the prior/posterior and the log marginal likelihood in closed form. Future work will focus on leveraging advances in streaming variational inference (Broderick et al., 2013; Kurle et al., 2020) to allow BAM to be deployed on more complicated models, e.g. Bayesian deep neural networks. For simplicity, we focused on binary values for the readout weights, as this allowed a simple greedy discrete optimization algorithm to be used. We expect that allowing the weights to take any value between 0 and 1 will increase performance in certain settings and allow BAM to construct a much larger repertoire of priors, as well as suggest different optimization algorithms to use within the framework. Finally, efficient memory buffer schemes will be explored to avoid the 'infinite memory' problem of continual learning, enabling BAM to operate efficiently indefinitely.

6 ACKNOWLEDGMENTS

The authors thank Ayesha Vermani, Matthew Dowling and Il Memming Park for insightful discussions and feedback.

A EXAMPLES OF POSTERIORS WITH DECREASING VARIANCE

In this section we provide two cases where the variance of the posterior is non-increasing with probability 1 as more data is collected, regardless of the observed data. For simplicity we restrict ourselves to 1D, though these results extend to their multi-dimensional counterparts.

A.1 BAYESIAN ESTIMATION OF THE MEAN OF A NORMAL DISTRIBUTION

The likelihood is of the form
$$p(y|\theta) = \mathcal{N}(\theta, \sigma^2), \qquad (25)$$
where $\sigma^2 > 0$ is known. We use a normal prior
$$p(\theta) = \mathcal{N}(\bar{\theta}_0, \tau_0), \qquad (26)$$
where $\tau_0 > 0$. Given arbitrary data $y_1, \cdots, y_N \sim p(y_{1:N})$, the posterior is of the form
$$p(\theta|y_{1:N}) = \mathcal{N}(\bar{\theta}_N, \tau_N), \qquad (27)$$
where
$$\tau_N = (\tau_0^{-1} + N\sigma^{-2})^{-1} = \frac{\sigma^2 \tau_0}{\sigma^2 + N\tau_0}, \qquad (28)$$
$$\bar{\theta}_N = \tau_N \left( \tau_0^{-1}\bar{\theta}_0 + \sigma^{-2} \sum_{n=1}^{N} y_n \right). \qquad (29)$$
We observe that the posterior variance, equation 28, is not a function of the observed data. In fact, the posterior variance is deterministic given $N$, $\tau_0$ and $\sigma^2$. In this particular setting, we can show that $\tau_N$ is a strictly decreasing function of $N$. To prove that $\tau_0 > \tau_1 > \cdots > \tau_n > \cdots > \tau_N$, it suffices to show that
$$\tau_{n-1} > \tau_n, \quad \forall n \in \{1, \cdots, N\}, \qquad (30)$$
which is equivalent to showing that
$$\frac{\tau_n}{\tau_{n-1}} < 1, \quad \forall n \in \{1, \cdots, N\}. \qquad (31)$$
Before proceeding, we note that as Bayes' theorem is closed under recursion, we can always express the posterior variance as
$$\tau_n = (\tau_{n-1}^{-1} + \sigma^{-2})^{-1} = \frac{\sigma^2 \tau_{n-1}}{\sigma^2 + \tau_{n-1}}. \qquad (32)$$
Computing $\tau_n / \tau_{n-1}$:
$$\frac{\tau_n}{\tau_{n-1}} = \frac{\sigma^2 \tau_{n-1}}{\sigma^2 + \tau_{n-1}} \times \frac{1}{\tau_{n-1}} \qquad (33)$$
$$= \frac{\sigma^2}{\sigma^2 + \tau_{n-1}}. \qquad (34)$$
Because
$$\tau_n > 0, \quad \forall n \in \{0, \cdots, N\}, \qquad (35)$$
we have that $\sigma^2 < \sigma^2 + \tau_{n-1}$, and conclude that $\tau_n / \tau_{n-1} < 1$.

A.2 BAYESIAN LINEAR REGRESSION

Next, we consider the setting of Bayesian linear regression with known variance. The likelihood is of the form
$$p(y_i|x_i, \theta) = \mathcal{N}(\theta x_i, \sigma^2), \quad x_i \in \mathbb{R}, \qquad (36)$$
where $\sigma^2 > 0$ is known. We use a normal prior
$$p(\theta) = \mathcal{N}(\bar{\theta}_0, \tau_0), \qquad (37)$$
where $\tau_0 > 0$. Given arbitrary observations $(x_1, y_1), \ldots, (x_N, y_N)$, the posterior is of the form
$$p(\theta|x_{1:N}, y_{1:N}) = \mathcal{N}(\bar{\theta}_N, \tau_N), \qquad (38)$$
where
$$\tau_N = \left( \tau_0^{-1} + \sigma^{-2} \sum_{n=1}^{N} x_n^2 \right)^{-1} = \frac{\sigma^2 \tau_0}{\sigma^2 + \tau_0 \sum_{n=1}^{N} x_n^2}, \qquad (39)$$
$$\bar{\theta}_N = \tau_N \left( \tau_0^{-1}\bar{\theta}_0 + \sigma^{-2} \sum_{n=1}^{N} x_n y_n \right). \qquad (40)$$
To prove that $\tau_0 \geq \tau_1 \geq \cdots \geq \tau_n \geq \cdots \geq \tau_N$, it suffices to show that
$$\frac{\tau_n}{\tau_{n-1}} \leq 1, \quad \forall x_n \in \mathbb{R}, \; \forall n \in \{1, \cdots, N\}. \qquad (41)$$
Again, because Bayes' theorem is closed under recursion, we can always rewrite the posterior variance as
$$\tau_n = \left( \tau_{n-1}^{-1} + \sigma^{-2} x_n^2 \right)^{-1} = \frac{\sigma^2 \tau_{n-1}}{\sigma^2 + \tau_{n-1} x_n^2}. \qquad (42)$$
So
$$\frac{\tau_n}{\tau_{n-1}} = \frac{\sigma^2 \tau_{n-1}}{\sigma^2 + \tau_{n-1} x_n^2} \times \frac{1}{\tau_{n-1}} \qquad (43)$$
$$= \frac{\sigma^2}{\sigma^2 + \tau_{n-1} x_n^2}. \qquad (44)$$
As $x_n^2 \geq 0$, we have that $\tau_n / \tau_{n-1} \leq 1$, which completes the proof.

B PROOF OF PROPOSITION 1

For clarity, we restate the proposition below.

Proposition. Let
$$p(\theta|\mathcal{D}_{<t}, W_t) \propto p(\theta) \prod_{j=1}^{t-1} p(\mathcal{D}_j|\theta)^{w_{t,j}}, \quad w_{t,j} \in \{0, 1\}, \qquad (45)$$
be the prior used in BAM and let
$$p(\theta|\mathcal{D}_{<t}) \propto p(\theta) \prod_{j=1}^{t-1} p(\mathcal{D}_j|\theta) \qquad (46)$$
be the recursive Bayes prior. Then
$$\mathbb{E}\left[\mathrm{Var}(\theta|\mathcal{D}_{<t}, W_t) \,\middle|\, W_t\right] \geq \mathbb{E}[\mathrm{Var}(\theta|\mathcal{D}_{<t})], \quad \forall W_t \in \{0, 1\}^{t-1}. \qquad (47)$$

Proof. We begin by describing some simple cases before presenting the proof for the general case.

Case 1: All the readout weights are 1. If all the readout weights are 1, i.e. $W_t = \mathbf{1}$, then
$$p(\theta|\mathcal{D}_{<t}, W_t = \mathbf{1}) = p(\theta|\mathcal{D}_{<t}), \qquad (48)$$
recovering the recursive Bayes prior. Thus
$$\mathbb{E}\left[\mathrm{Var}(\theta|\mathcal{D}_{<t}, W_t = \mathbf{1}) \,\middle|\, W_t = \mathbf{1}\right] = \mathbb{E}[\mathrm{Var}(\theta|\mathcal{D}_{<t})]. \qquad (49)$$

Case 2: All the readout weights are 0. If all the readout weights are 0, i.e. $W_t = \mathbf{0}$, then
$$p(\theta|\mathcal{D}_{<t}, W_t = \mathbf{0}) = p(\theta), \qquad (50)$$
recovering the base prior. The law of total variance states
$$\mathrm{Var}(\theta) = \mathbb{E}[\mathrm{Var}(\theta|\mathcal{D}_{<t})] + \mathrm{Var}(\mathbb{E}[\theta|\mathcal{D}_{<t}]). \qquad (51)$$
As both terms on the right-hand side are positive, this implies that
$$\mathbb{E}\left[\mathrm{Var}(\theta|\mathcal{D}_{<t}, W_t = \mathbf{0}) \,\middle|\, W_t = \mathbf{0}\right] = \mathrm{Var}(\theta) \geq \mathbb{E}[\mathrm{Var}(\theta|\mathcal{D}_{<t})]. \qquad (52)$$

Case 3: General case. Let $r$ be the indices of the readout weights set to 1 ("remembered") and $f$ be the indices of the readout weights set to 0 ("forgotten"). We can express the memory buffer as $\mathcal{D}_{<t} = \mathcal{D}_r \cup \mathcal{D}_f$, where $\mathcal{D}_r$ are the data points selected by the readout weights and $\mathcal{D}_f$ are the data points that are ignored. We can rewrite the BAM prior as
$$p(\theta|\mathcal{D}_{<t}, W_t) = p(\theta|\mathcal{D}_r), \qquad (53)$$
which is equivalent to applying Bayes' theorem using $\mathcal{D}_r$. Similarly, we can rewrite the recursive Bayes prior as
$$p(\theta|\mathcal{D}_{<t}) = p(\theta|\mathcal{D}_r, \mathcal{D}_f) \propto p(\mathcal{D}_f|\theta) \, p(\theta|\mathcal{D}_r). \qquad (54)$$
Using the law of total variance, we get
$$\mathrm{Var}(\theta|\mathcal{D}_{<t}, W_t) = \mathrm{Var}(\theta|\mathcal{D}_r) = \mathbb{E}\left[\mathrm{Var}(\theta|\mathcal{D}_{<t}) \,\middle|\, \mathcal{D}_r\right] + \mathrm{Var}\left(\mathbb{E}[\theta|\mathcal{D}_{<t}] \,\middle|\, \mathcal{D}_r\right), \qquad (55)$$
where again, the above implies
$$\mathrm{Var}(\theta|\mathcal{D}_r) \geq \mathbb{E}\left[\mathrm{Var}(\theta|\mathcal{D}_{<t}) \,\middle|\, \mathcal{D}_r\right]. \qquad (56)$$
As the above inequality holds for all values of $\mathcal{D}_r$, it also holds in expectation:
$$\mathbb{E}\left[\mathrm{Var}(\theta|\mathcal{D}_r) \,\middle|\, W_t\right] \geq \mathbb{E}\left[\mathrm{Var}(\theta|\mathcal{D}_{<t}) \,\middle|\, W_t\right]. \qquad (57)$$
Since $\mathrm{Var}(\theta|\mathcal{D}_{<t})$ is the variance under the recursive Bayes model, it is not a function of $W_t$, allowing the conditioning on $W_t$ to be dropped:
$$\mathbb{E}\left[\mathrm{Var}(\theta|\mathcal{D}_{<t}) \,\middle|\, W_t\right] = \mathbb{E}[\mathrm{Var}(\theta|\mathcal{D}_{<t})]. \qquad (58)$$
Applying our definition of $\mathcal{D}_r$ recovers the desired result:
$$\mathbb{E}\left[\mathrm{Var}(\theta|\mathcal{D}_{<t}, W_t) \,\middle|\, W_t\right] \geq \mathbb{E}[\mathrm{Var}(\theta|\mathcal{D}_{<t})]. \qquad (59)$$
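As a quick empirical sanity check of Proposition 1, the following Monte-Carlo snippet compares the expected posterior variance under an arbitrary fixed binary readout weight with that of recursive Bayes for a Beta-Binomial model (our own illustration; the model and all constants are assumptions):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a0, b0, n_trials, T = 1.0, 1.0, 15, 8          # Beta(1,1) prior, Binomial(15, .) likelihood
W = np.array([1, 0, 1, 0, 0, 1, 0, 1])         # an arbitrary fixed binary readout weight

def posterior_var(ks, mask):
    # Beta posterior from the (masked) sufficient statistics of the batches.
    a = a0 + np.sum(mask * ks)
    b = b0 + np.sum(mask * (n_trials - ks))
    return stats.beta(a, b).var()

vars_bam, vars_rb = [], []
for _ in range(2000):
    theta = rng.beta(a0, b0)
    ks = rng.binomial(n_trials, theta, size=T)  # T batches from a stationary world
    vars_bam.append(posterior_var(ks, W))
    vars_rb.append(posterior_var(ks, np.ones(T)))

print(np.mean(vars_bam) >= np.mean(vars_rb))   # True (up to Monte-Carlo error)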
C DISCUSSION OF GREEDY DISCRETE OPTIMIZATION

As the number of choices is $2^{(t-1)}$, it is impractical to use brute-force methods for solving the discrete optimization problem defined in equation 21. For simplicity, we use two types of greedy approaches for discrete optimization. In both cases, each element in memory is evaluated against a target datum with the inner term of equation 19, i.e. the log marginal likelihood plus the regularization term.

The first is a bottom-up approach, where we start with all readout weights set to 0 and greedily add the most beneficial associated datum until the combined score decreases; candidates are the data whose readout weight is still 0, and already-selected data are excluded from the search. Pseudocode is displayed in Algorithm 1. Note that this is similar in spirit to the stepwise selection approach used for selecting variables in linear regression (Hocking, 1976).

Algorithm 1: Bottom-Up Greedy for BAM
Data: memory D<t, target Dt, prior p, regularizer strength λ
priorscore ← log p(Dt)
for size(D<t) iterations do
    for each Di in D<t do
        if W[i] = 0 then
            scores[i] ← log ∫ p(Dt|θt) p(θt|W⁺ᵢ, D<t) dθt + log p(W⁺ᵢ|D<t)   # W⁺ᵢ: W with W[i] set to 1
        else
            scores[i] ← −∞
        end
    end
    score, idx ← findmax(scores)
    if score > priorscore then
        W[idx] ← 1 ; priorscore ← score ; p ← posterior(p, D<t[idx])
    else
        return W
    end
end
Result: readout weights W

In the second approach, the readout weights start at 0. The contribution of each datum in D<t is evaluated independently (in practice this can be done in parallel on multi-core CPUs or GPUs). These scores are filtered to keep only those better than the base prior's likelihood. The top q-th percentile of the remaining scores are chosen and their corresponding readout weights are set to 1. Pseudocode is displayed in Algorithm 2. This approach is faster than bottom-up, as only one round of optimization is needed, but combining the individually-selected experiences could potentially lead to sub-optimal performance. Additionally, the percentile cutoff may needlessly include or exclude weight values. In practice, we found that the two approaches performed similarly, with the main exception being the MNIST experiment, where the parallel approach was significantly worse than bottom-up.

Algorithm 2: Parallel Selection for BAM
Data: memory D<t, target Dt, regularizer strength λ, prior distribution p, cutoff q
priorscore ← log p(Dt)
for each Di in D<t do
    scores[i] ← log ∫ p(Dt|θt) p(θt|Di) dθt + log p(W|Di)
end
cutoff ← quantile({scores[i] : scores[i] > priorscore}, q)
for each i in scores do
    if scores[i] > cutoff then
        W[i] ← 1
    else
        W[i] ← 0
    end
end
Result: readout weights W
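For a conjugate Beta-Binomial model, Algorithm 1 can be rendered as runnable code; the sketch below (our own, with λ = 0 so the PC-prior term drops out) replaces the generic integral with the closed-form Beta-Binomial marginal likelihood:

import numpy as np
from scipy.special import betaln, comb

def log_marglik(k, n, a, b):
    # log Beta-Binomial marginal likelihood of k successes in n trials under Beta(a, b).
    return np.log(comb(n, k)) + betaln(a + k, b + n - k) - betaln(a, b)

def bottom_up_greedy(memory, target, a0=1.0, b0=1.0, n=15):
    # memory: list of past success counts; target: new success count (lambda = 0 assumed).
    W = np.zeros(len(memory), dtype=int)
    a, b = a0, b0                                    # running posterior from selected data
    best = log_marglik(target, n, a, b)              # score under the current prior
    while True:
        scores = np.full(len(memory), -np.inf)
        for i, k in enumerate(memory):
            if W[i] == 0:                            # only unselected data are candidates
                scores[i] = log_marglik(target, n, a + k, b + n - k)
        idx = int(np.argmax(scores))
        if scores[idx] <= best:
            return W
        W[idx], best = 1, scores[idx]
        a, b = a + memory[idx], b + n - memory[idx]  # absorb the chosen datum

W = bottom_up_greedy(memory=[12, 3, 11, 2], target=13)
print(W)   # selects the high-count batches, e.g. [1 0 1 0]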
D EXPERIMENTAL SETTINGS

D.1 CONTROLS

For our controls experiments, we used Model Predictive Path Integral control (Williams et al., 2017), a model predictive control (MPC) algorithm, with a planning horizon of 50 timesteps and 32 sample trajectories. Our sampling covariance was 0.4 for each controlled joint; in the case of Cartpole, the action space is one-dimensional. The temperature parameter we used was 0.5. When planning with a probabilistic model, each sampled trajectory uses a different model drawn from the current belief (as opposed to a sampled model per timestep); planning rollouts included noise, such that
$$x_t = x_{t-1} + M'\phi(x_{t-1}, a_t) + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \sigma^2 I), \qquad (60)$$
where $M'$ is sampled from the current belief. $\phi$ is the random Fourier features function from Rahimi & Recht (2007), where we use 200 features with a bandwidth calculated as the mean pairwise distance of the inputs (states and actions), which is 6.0. To learn $M$, we use Bayesian linear regression where each row of $M$ is modeled as independent. We place a multivariate Normal prior on each of the rows with a prior mean of all 0s and a prior precision of $10^{-4} I$.

The Cartpole model's initial positions and velocities were sampled uniformly from -0.05 to 0.05, with the angle of the pole being $\pi$ such that it points down. This sets up the swing-up problem. For the episodic one-shot experiment, we perform MPC for 200 timesteps as one trial. 15 trials make one episode, with the dynamical properties of the environment (i.e. gravity) fixed for the duration of the episode. We vary the gravity parameter of the model by selecting gravity values from celestial bodies of the Solar System; we used Earth, Mars, and Neptune at 9.81, 3.72, and 11.15 m/s², respectively.

At the start of a new episode, each method's beliefs are reset to the base prior, and each method proceeds to update its respective beliefs accordingly. BAM retains each trial's datum in memory across episodes. For the continual learning experiment, we do not inform our agent that the model dynamics have changed, i.e. we never reset the agent's belief to a prior. Instead, we use Bayesian Online Changepoint Detection (BOCD) to discern whether the underlying model distribution has changed. BOCD is compared against BAM, both with and without changepoint detection; while BOCD resets to a prior when a change is detected, BAM optimizes for a weight vector over the previously experienced data. The BOCD switching parameter λ for its hazard function was set to 0.11. The agent attempts the task for 60 trials, with the environment changing 3 times during said trials.

D.2 DOMAIN ADAPTATION WITH ROTATED MNIST

We ran 10 independent Bayesian linear regressions, one for each dimension of the one-hot encoded target. As the prior, we use a multivariate Normal distribution with a prior mean of all 0s and a prior precision of $0.1 I$. Similar to the controls experiment, we assume the additive noise is fixed and set to $\sigma^2 = 10^{-4}$. As regularization had little effect, we set λ = 0.

D.3 NON-STATIONARY BANDITS

For both UCB and UCBAM, we use a confidence-level function of $f(t) = 1 + t \log^2(t)$. The timescale parameter for BOCD + Thompson sampling is 0.016, which is the expected frequency of the arm switches. The weighting term for Bayesian exponential forgetting + Thompson sampling is 0.8.

D.3.1 DESCRIPTION OF UCBAM

The challenge of bandit settings is the need to explore, especially in the non-stationary setting we devised. UCB is a well-known algorithm for leveraging the uncertainty in the arm values to enable exploration. We combine this frequentist method with BAM as follows. When we assume we 'know' the current best arm value, we exploit it and keep a belief over its distribution with BAM. The signal for whether the best arm is 'known' is whether the likelihood of the current arm's value is higher under our current arm belief or under the naive base prior. If the base prior produces a higher likelihood, we assume the current arm distribution is incorrect (and will be updated with BAM), and we default to the UCB metric for arm selection. This simple combination of methods allows for the exploration benefits of UCB together with the quick recognition of high-value arms, and subsequent exploitation, due to BAM.

Algorithm 3: UCBAM
Data: prior distribution p
K ← number of arms
b ← copy(p), empty D, K times        # belief and memory per arm
known ← false
for each iteration do
    if known then
        arm ← thompson(b1...K)
    else
        arm ← UCB choice
    end
    v ← pull(arm)
    if log p(v) ≥ log b_arm(v) then
        known ← false
    else
        known ← true
    end
    b_arm ← BAM(p, D_arm<t, v)        # BAM posterior update
    D<t ← [D<t, v]                     # add value to memory
end
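A compact Python rendering of Algorithm 3's control flow is sketched below (our own reconstruction with assumed Gaussian arm beliefs; `bam_update` is a plain conjugate stand-in for BAM's reweighted update, and `env` can be any bandit exposing a `step(arm)` method, e.g. the SwitchingBandit sketched in Section 4.3):

import numpy as np

def gauss_ll(x, m, var):
    # Gaussian log likelihood.
    return -0.5 * np.log(2 * np.pi * var) - (x - m) ** 2 / (2 * var)

def bam_update(prior, data, noise_var=0.0625):
    # Stand-in for BAM's reweighted update: a plain conjugate Gaussian update over
    # the arm's memory (the real method would first select a subset via W_t).
    m0, v0 = prior
    if not data:
        return [m0, v0]
    n, xbar = len(data), float(np.mean(data))
    v = 1.0 / (1.0 / v0 + n / noise_var)
    return [v * (m0 / v0 + n * xbar / noise_var), v]

def ucbam(env, n_arms, horizon, prior=(0.0, 1.0), seed=0):
    rng = np.random.default_rng(seed)
    beliefs = [list(prior) for _ in range(n_arms)]
    memory = [[] for _ in range(n_arms)]
    counts, known, rewards = np.zeros(n_arms), False, []
    for t in range(1, horizon + 1):
        if known:   # Thompson-style draw from the current beliefs
            arm = int(np.argmax([rng.normal(m, np.sqrt(v)) for m, v in beliefs]))
        else:       # UCB choice with confidence level f(t) = 1 + t log^2(t)
            f_t = 1.0 + t * np.log(t) ** 2
            bonus = np.sqrt(2.0 * np.log(f_t) / np.maximum(counts, 1.0))
            arm = int(np.argmax(np.array([m for m, _ in beliefs]) + bonus))
        v_obs = env.step(arm)
        rewards.append(v_obs)
        counts[arm] += 1
        m, var = beliefs[arm]
        known = gauss_ll(v_obs, m, var) > gauss_ll(v_obs, *prior)  # is the belief credible?
        memory[arm].append(v_obs)
        beliefs[arm] = bam_update(prior, memory[arm])
    return rewards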
1. What is the focus and contribution of the paper regarding Bayesian update rules?
2. What are the strengths of the proposed framework, particularly in adapting to non-stationary environments?
3. Do you have any concerns or limitations regarding the practical applications of the approach?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper

In this paper, the authors propose a new framework, Bayes with Adaptive Memory (BAM), that takes advantage of past experience by allowing the agent to choose which past observations to remember and which to forget, and demonstrate that BAM generalizes many popular Bayesian update rules for non-stationary environments.

Review

The variety of experiments demonstrates the ability of BAM to continuously adapt in an ever-changing world. To the best of my knowledge, this is generally a good paper with a clear central idea. I have only two minor concerns:

1. For simplicity, the paper focuses on binary values for the readout weights, as this allows a simple greedy discrete optimization algorithm to be used. This assumption limits its practical application scenarios.
2. Although a simple greedy discrete optimization algorithm is used, the proposed BAM still has a relatively high time complexity. The paper could be improved if the authors provided the time complexity of the proposed BAM. If that is difficult, comparison results in terms of running time are needed.
ICLR
Title: BAM: Bayes with Adaptive Memory

Abstract

Online learning via Bayes' theorem allows new data to be continuously integrated into an agent's current beliefs. However, a naive application of Bayesian methods in non-stationary environments leads to slow adaptation and results in state estimates that may converge confidently to the wrong parameter value. A common solution when learning in changing environments is to discard/downweight past data; however, this simple mechanism of "forgetting" fails to account for the fact that many real-world environments involve revisiting similar states. We propose a new framework, Bayes with Adaptive Memory (BAM), that takes advantage of past experience by allowing the agent to choose which past observations to remember and which to forget. We demonstrate that BAM generalizes many popular Bayesian update rules for non-stationary environments. Through a variety of experiments, we demonstrate the ability of BAM to continuously adapt in an ever-changing world.

1 INTRODUCTION

The ability of an agent to continuously modulate its belief while interacting with a non-stationary environment is a hallmark of intelligence and has garnered a lot of attention in recent years (Zhang et al., 2020; Ebrahimi et al., 2020; Xie et al., 2020). The Bayesian framework enables online learning by providing a principled way to incorporate new observations into an agent's model of the world (Jaynes, 2003; Gelman et al., 2013). Through the use of Bayes' theorem, the agent can combine its own (subjective) a priori knowledge with data to achieve an updated belief encoded by the posterior distribution. The Bayesian framework is a particularly appealing option for online learning because Bayes' theorem is closed under recursion, enabling continuous updates in what is commonly referred to as the recursive Bayes method (Wakefield, 2013). As an example, suppose the agent first observes a batch of data, $\mathcal{D}_1$, and then later observes another batch of data, $\mathcal{D}_2$. We can express the agent's posterior distribution over the world, where the world is represented by $\theta$, as
$$p(\theta|\mathcal{D}_1, \mathcal{D}_2) = \frac{p(\mathcal{D}_2|\theta) \, p(\theta|\mathcal{D}_1)}{p(\mathcal{D}_2|\mathcal{D}_1)}, \qquad (1)$$
where
$$p(\mathcal{D}_2|\mathcal{D}_1) = \int p(\mathcal{D}_2|\theta) \, p(\theta|\mathcal{D}_1) \, d\theta. \qquad (2)$$
Equation 1 demonstrates the elegance and simplicity of recursive Bayes: at time $t$, the agent recycles its previous posterior, $p(\theta|\mathcal{D}_{<t})$, where $\mathcal{D}_{<t} = \{\mathcal{D}_1, \cdots, \mathcal{D}_{t-1}\}$, into its current prior and then combines it with a newly observed batch of data, $\mathcal{D}_t$, to obtain an updated posterior, $p(\theta|\mathcal{D}_{\leq t})$.

At first glance, it would appear that a naive application of recursive Bayes would suffice for most online learning tasks. However, the recursive Bayes method relies on the assumption that the world is stationary, i.e. $\mathcal{D}_1, \mathcal{D}_2, \cdots$ are all independent and identically distributed. When this assumption is violated, recursive Bayes can fail catastrophically. As an illustration, consider the law of total variance:
$$\mathrm{Var}(\theta|\mathcal{D}_{<t}) = \mathbb{E}\left[\mathrm{Var}(\theta|\mathcal{D}_{<t}, \mathcal{D}_t) \,\middle|\, \mathcal{D}_{<t}\right] + \mathrm{Var}\left(\mathbb{E}[\theta|\mathcal{D}_{<t}, \mathcal{D}_t] \,\middle|\, \mathcal{D}_{<t}\right). \qquad (3)$$
Since both terms on the right-hand side are positive, equation 3 reveals that, in expectation, the variance of the posterior decreases as more data is seen, regardless of the actual distribution of $\mathcal{D}_t$, i.e.
$$\mathrm{Var}(\theta|\mathcal{D}_{<t}) \geq \mathbb{E}\left[\mathrm{Var}(\theta|\mathcal{D}_{<t}, \mathcal{D}_t) \,\middle|\, \mathcal{D}_{<t}\right]. \qquad (4)$$
In fact, for some models equation 4 holds with probability 1; we demonstrate examples in Appendix A. Thus, if the parameters of the environment, $\theta$, were to change, the variance of the posterior would still decrease, becoming more certain of a potentially obsolete parameter estimate. Modeling the environment as stationary when it is actually changing also keeps the learning speed of the agent artificially low, as tighter posteriors prevent large jumps in learning. This is the opposite of what an intelligent agent should do in such an event: if the environment changes, we would expect the agent's uncertainty and learning speed to increase in response. As was elegantly stated by Monton (2002), the problem with naive use of recursive Bayes is that "Such a Bayesian never forgets."
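The shrinkage in equation 4 is easy to verify numerically. The snippet below (our own illustration) runs the Normal-mean recursion of Appendix A.1 while the true parameter drifts sinusoidally; the posterior variance falls deterministically regardless:

import numpy as np

rng = np.random.default_rng(0)
sigma2, mean, var = 1.0, 0.0, 1.0        # known noise variance; prior N(0, 1)
for t in range(1, 201):
    theta_t = np.sin(2 * np.pi * t / 100)        # the environment drifts ...
    y = rng.normal(theta_t, np.sqrt(sigma2))
    new_var = 1.0 / (1.0 / var + 1.0 / sigma2)   # ... yet the variance shrinks deterministically
    mean = new_var * (mean / var + y / sigma2)
    var = new_var
print(var)   # ~ 1/201: tiny, despite theta_t having moved through full cycles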
Previous approaches to enabling recursive Bayes to work in non-stationary settings have primarily focused on forgetting past experience, either through the use of changepoint detection (Adams & MacKay, 2007; Li et al., 2021) or by exponentially weighting past experiences (Moens, 2018; Moens & Zénon, 2019; Masegosa et al., 2020). While empirically successful, their focus on forgetting the past means that revisited states are treated as novel. In this work we take an alternative approach to online Bayesian learning in non-stationary environments by endowing an agent with an explicit memory module. Crucially, the addition of a memory buffer equips the agent with the ability to modulate its uncertainty by choosing which past experiences to both forget and remember. We call our approach Bayes with Adaptive Memory (BAM) and demonstrate its wide applicability and effectiveness on a number of non-stationary learning tasks.

2 BAYES WITH ADAPTIVE MEMORY

The generative model is assumed to evolve according to
$$\theta_t \sim p(\theta_t|\theta_{t-1}, t), \qquad (5)$$
$$\mathcal{D}_t \sim p_t(\mathcal{D}) \equiv p(\mathcal{D}|\theta_t), \qquad (6)$$
where equation 5 is the latent dynamics that dictate the evolution of the environment parameters, $\theta_t$, and equation 6 is the likelihood, whose parametric form is fixed throughout time, e.g. $p_t(\mathcal{D}) = \mathcal{N}(\theta_t, \sigma^2)$. Equations 5 and 6 define a state-space model, which allows one to infer $\theta_t$ through Bayesian filtering (Särkkä, 2013):
$$p(\theta_t|\mathcal{D}_{\leq t}) \propto p(\mathcal{D}_t|\theta_t) \, p(\theta_t|\mathcal{D}_{<t}), \qquad (7)$$
$$p(\theta_t|\mathcal{D}_{<t}) = \int p(\theta_t|\theta_{t-1}, t) \, p(\theta_{t-1}|\mathcal{D}_{<t}) \, d\theta_{t-1}. \qquad (8)$$
The parameterization of equations 5 and 6 dictates the tractability of equations 7 and 8. If a priori an agent knew that equation 5 is a linear dynamical system with additive white Gaussian noise and that equation 6 is also Gaussian with a conditional mean that is a linear function of $\theta_t$, then the Kalman filter can be used (Kalman, 1960). For more complicated latent dynamics and/or likelihood models, methods such as particle filtering (Doucet & Johansen, 2009) and unscented Kalman filtering (Julier & Uhlmann, 1997) can be used. Crucially, Bayesian filtering methods assume that the latent dynamics governed by equation 5 are known; however, this is rarely the case in practice.

Instead of making assumptions on the parametric form of equation 5, we take a different approach. In BAM, the agent maintains a memory buffer, $\mathcal{D}_{<t}$, that stores previous observations of the environment. At time $t$ the agent obtains a new batch of data, $\mathcal{D}_t \sim p_t(\mathcal{D})$. How should the agent combine the newly observed data, $\mathcal{D}_t$, with its stored memory, $\mathcal{D}_{<t}$, to update its belief as encoded by the posterior distribution? In recursive Bayes, the posterior distribution is computed according to¹
$$p(\theta_t|\mathcal{D}_t, \mathcal{D}_{<t}) \propto p(\mathcal{D}_t|\theta_t) \, p(\theta_t|\mathcal{D}_{<t}), \qquad (9)$$
$$p(\theta_t|\mathcal{D}_{<t}) \propto p(\theta_t) \prod_{j=1}^{t-1} p(\mathcal{D}_j|\theta_t), \qquad (10)$$
where we refer to $p(\theta_t)$ as the base prior. Equation 10 allows us to interpret recursive Bayes as the agent constructing a dynamic prior, $p(\theta_t|\mathcal{D}_{<t})$, using all the experiences stored in its memory buffer.
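For a conjugate likelihood, equations 9-10 reduce to bookkeeping over sufficient statistics. A minimal Beta-Binomial sketch (our own code, chosen to match the model used later in Section 4.1):

import numpy as np

class RecursiveBayesBetaBinomial:
    # Recursive Bayes for Binomial(n, theta) batches under a Beta prior (eqs. 9-10).
    def __init__(self, a0=1.0, b0=1.0, n=15):
        self.a, self.b, self.n = a0, b0, n

    def update(self, k):
        self.a += k                         # successes accumulate into the dynamic prior
        self.b += self.n - k
        return self.a / (self.a + self.b)   # posterior mean

model = RecursiveBayesBetaBinomial()
for k in [9, 10, 4, 3]:
    print(model.update(k))   # each posterior becomes the next step's prior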
This recursive construction works under the stationarity assumption; when this assumption is violated, the application of Bayes' theorem can lead to confidently wrong results, as the "distance" between $p_i(\mathcal{D})$ and $p_j(\mathcal{D})$ can be vast. An alternative is for the agent to completely forget all of its past experiences:
$$p(\theta_t|\mathcal{D}_t) \propto p(\mathcal{D}_t|\theta_t) \, p(\theta_t). \qquad (11)$$
While equation 11 may be viable in situations where $\mathcal{D}_t$ is sufficiently informative, it is wasteful when experiences in the memory buffer may help infer $\theta_t$. BAM dynamically finds a middle ground between these two extremes of remembering (equation 10) and forgetting (equation 11) everything by allowing the agent to choose which data to use from its memory buffer to construct the prior. Specifically, the agent is endowed with a time-dependent readout weight, $W_t = [w_{t,1}, w_{t,2}, \cdots, w_{t,t-1}]$, where $w_{t,j} \in [0, 1]$. Given a new datum $\mathcal{D}_t$, BAM constructs its posterior according to
$$p(\theta_t|\mathcal{D}_t, \mathcal{D}_{<t}, W_t) \propto p(\theta_t) \, p(\mathcal{D}_t|\theta_t) \prod_{j=1}^{t-1} p(\mathcal{D}_j|\theta_t)^{w_{t,j}}. \qquad (12)$$
We can rewrite equation 12 as
$$p(\theta_t|\mathcal{D}_t, \mathcal{D}_{<t}, W_t) = \frac{p(\mathcal{D}_t|\theta_t) \, p(\theta_t|\mathcal{D}_{<t}, W_t)}{p(\mathcal{D}_t|\mathcal{D}_{<t}, W_t)}, \qquad (13)$$
where
$$p(\theta_t|\mathcal{D}_{<t}, W_t) \propto p(\theta_t) \prod_{j=1}^{t-1} p(\mathcal{D}_j|\theta_t)^{w_{t,j}} \qquad (14)$$
and
$$p(\mathcal{D}_t|\mathcal{D}_{<t}, W_t) = \int p(\mathcal{D}_t|\theta_t) \, p(\theta_t|\mathcal{D}_{<t}, W_t) \, d\theta_t. \qquad (15)$$
The prior construction in equation 14 is akin to recursive Bayes, but now the agent can dynamically and adaptively change its prior by using the readout weights, $W_t$, to weigh the importance of previous experience, where at the extremes it can choose to completely forget a previous experience, $w_{t,j} = 0$, or fully remember it, $w_{t,j} = 1$. For simplicity, we restrict the readout weights to be binary, i.e. $w_{t,j} \in \{0, 1\}$.

The combination of a memory buffer, $\mathcal{D}_{<t}$, with a time-dependent readout weight, $W_t$, allows BAM to generalize many previously proposed approaches. By setting $w_{t,1} = w_{t,2} = \cdots = w_{t,t-1} = 1$, we recover recursive Bayes (equation 10). By setting $w_{t,1} = w_{t,2} = \cdots = w_{t,t-1} = \alpha$, where $0 \leq \alpha \leq 1$, we recover the power priors approach of Ibrahim et al. (2015). By setting $w_{t,j} = \alpha^{t-1-j}$, where $0 \leq \alpha \leq 1$, we recover exponential forgetting (Moens, 2018; Moens & Zénon, 2019; Masegosa et al., 2020). Lastly, by setting a particular subset of the readout weights to 0, we recover Bayesian unlearning (Nguyen et al., 2020).

The ability to adaptively change its prior implies that BAM can increase/decrease its uncertainty as the situation demands; subsequently, this modulates the agent's learning speed. Using variance as a proxy for uncertainty, one would expect that the variance of the prior used in BAM (equation 14) is always at least as large as the variance of the prior used in recursive Bayes (equation 10). We formalize this for the case of binary readout weights in the following proposition.

Proposition 1. Let $p(\theta|\mathcal{D}_{<t}, W_t)$ be the prior used by BAM, defined in equation 14, and let $p(\theta|\mathcal{D}_{<t})$ be the recursive Bayes prior, defined in equation 10. Then
$$\mathbb{E}\left[\mathrm{Var}(\theta|\mathcal{D}_{<t}, W_t) \,\middle|\, W_t\right] \geq \mathbb{E}[\mathrm{Var}(\theta|\mathcal{D}_{<t})], \quad \forall W_t \in \{0, 1\}^{t-1}. \qquad (16)$$

Proof. Proof is in Appendix B.

¹ Recursive Bayes is equivalent to Bayesian filtering when $p(\theta_t|\theta_{t-1}, t) = \delta(\theta_t = \theta_{t-1})$.
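Extending the recursive update sketched above, the BAM prior of equation 14 for the same Beta-Binomial model simply weights each batch's sufficient statistics by its binary readout weight (again our own illustrative code):

import numpy as np

def bam_posterior(a0, b0, counts, weights, n, k_new):
    # BAM posterior (eq. 12) for Binomial data: binary weights select which past
    # success counts contribute to the Beta prior's sufficient statistics.
    counts, weights = np.asarray(counts), np.asarray(weights)
    a = a0 + np.sum(weights * counts) + k_new
    b = b0 + np.sum(weights * (n - counts)) + (n - k_new)
    return a, b   # parameters of the Beta posterior

# Remembering only the first and third batches:
a, b = bam_posterior(1.0, 1.0, counts=[9, 2, 11], weights=[1, 0, 1], n=15, k_new=12)
print(a / (a + b))   # posterior mean uses only the 'remembered' experiences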
2.1 SELECTION OF READOUT WEIGHTS VIA BAYESIAN MODEL-SELECTION

While the previous section demonstrated the flexibility of BAM, the question remains: how should the readout weights, $W_t$, be set? Equation 13 allows us to view different readout weights as different models. Through this lens, we can follow the spirit of Bayesian model selection (Gelman et al., 2013) and compute a posterior over the readout weights:
$$p(W_t|\mathcal{D}_t, \mathcal{D}_{<t}) \propto p(W_t|\mathcal{D}_{<t}) \, p(\mathcal{D}_t|W_t, \mathcal{D}_{<t}). \qquad (17)$$
For practicality, we compute the maximum a posteriori (MAP) estimate of equation 17 (Gelman et al., 2013) and use that as the value of the readout weights:
$$W_t = \operatorname*{argmax}_{W \in \{0,1\}^{t-1}} \log p(\mathcal{D}_t|W, \mathcal{D}_{<t}) + \log p(W|\mathcal{D}_{<t}) \qquad (18)$$
$$= \operatorname*{argmax}_{W \in \{0,1\}^{t-1}} \log \int p(\mathcal{D}_t|\theta_t) \, p(\theta_t|W, \mathcal{D}_{<t}) \, d\theta_t + \log p(W|\mathcal{D}_{<t}). \qquad (19)$$
The first term of equation 18 is the log marginal likelihood, which measures the likelihood of $\mathcal{D}_t$ being distributed according to the predictive distribution, $p(\mathcal{D}|W, \mathcal{D}_{<t})$, while the prior, $\log p(W|\mathcal{D}_{<t})$, acts as a regularizer. This procedure of constantly updating the readout weights through equation 18 can be interpreted as providing Bayes with a feedback mechanism: equation 18 allows the agent to directly measure its ability to fit the observed data using different combinations of experiences in its buffer via the readout weights, and then to choose the readout weights that lead to the best fit. In contrast, standard Bayesian inference is an open-loop procedure: data, likelihood and prior are given and a posterior is returned, irrespective of the fit of the model to the data (Simpson et al., 2017).

Still left is the question of how to design the prior, $p(W|\mathcal{D}_{<t})$. In certain scenarios, an uninformative prior, i.e. $p(W|\mathcal{D}_{<t}) \propto 1$, may suffice if the data is very informative and/or the number of data points in $\mathcal{D}_t$ is large. In scenarios where these conditions are not met, it is important to use an informative prior, as it reduces the chance of overfitting. In general, the design of priors is highly nontrivial (Winkler, 1967; Gelman et al., 2013; Simpson et al., 2017). While many options exist, we use the penalized model complexity priors proposed by Simpson et al. (2017), as they are designed to reduce the chance of overfitting. Following Simpson et al. (2017), we parameterize the prior as
$$p(W|\mathcal{D}_{<t}) \propto \exp\left( -\lambda \sqrt{2 D_{\mathrm{KL}}[p(\theta_t|W, \mathcal{D}_{<t}) \,\|\, p(\theta_t)]} \right), \qquad (20)$$
where $\lambda \in [0, \infty)$ is a hyperparameter that controls the strength of the prior.² Equation 20 encodes our prior belief that we favor values of $W_t$ that produce simpler models, where simplicity is quantified as the Kullback-Leibler divergence between $p(\theta_t|W_t, \mathcal{D}_{<t})$ and the base prior, $p(\theta_t)$. Plugging equation 20 into equation 18, we get
$$W_t = \operatorname*{argmax}_{W \in \{0,1\}^{t-1}} \log p(\mathcal{D}_t|W, \mathcal{D}_{<t}) - \lambda \sqrt{2 D_{\mathrm{KL}}[p(\theta_t|W, \mathcal{D}_{<t}) \,\|\, p(\theta_t)]}. \qquad (21)$$
In general, solving equation 21 is difficult, as the number of possible readout weights is $2^{(t-1)}$, making brute-force solutions practically infeasible. While many approaches exist for performing discrete optimization, we found that a simple greedy approach sufficed for our experiments; in the interest of space, we defer discussion of this to Appendix C.

² λ = 0 recovers the uninformative prior case, $p(W_t|\mathcal{D}_{<t}) \propto 1$.

3 RELATED WORKS

A variety of approaches have been proposed for learning in non-stationary environments. In signal processing, adaptive filtering techniques such as recursive least squares (RLS) and least mean squares (LMS) filtering are the de facto approaches for filtering in non-stationary environments (Haykin, 2008). While empirically successful, RLS and LMS are only applicable to a limited range of models, i.e. linear models. In contrast, BAM is a general-purpose algorithm that can be deployed on a wide variety of models.

If the latent dynamics are known (or assumed to be known), then Bayesian filtering can be employed. A popular approach is to model the latent dynamics (equation 5) as an autoregressive process (Kurle et al., 2020; Rimella & Whiteley, 2020).
While this approach has been popular, it is only applicable to models whose parameters are real-valued. A seminal work on Bayesian filtering is the Bayesian online changepoint detection (BOCD) algorithm of Adams & MacKay (2007), where the latent dynamics (equation 5) are modeled as piecewise constant. While BOCD is broadly applicable and has seen empirical success, the caveat is that the agent forgets all previous experience when a change is detected; thus, previously visited states appear novel to the agent and learning must begin from scratch. An extension to BOCD was proposed by Li et al. (2021), where, when a change is detected, a scaled version of the previous posterior is used as the prior. While similar in spirit to BAM, we note that the approach proposed in Li et al. (2021) is designed for Gaussian distributions, while BAM can work with arbitrary distributions. Moreover, the approach in Li et al. (2021) can only increase the uncertainty by a fixed, pre-determined amount, while BAM can adaptively modulate its uncertainty.

Previous works have proposed solutions for making recursive Bayes more suited to non-stationary environments through exponential forgetting of past data (Moens, 2018; Moens & Zénon, 2019; Masegosa et al., 2020). While these models have seen empirical success, their focus has been on forgetting past experiences, which prevents the agent from leveraging past experiences that are relevant. In BAM, the agent is focused not only on forgetting irrelevant experiences but on remembering relevant experiences as well.

The use of readout weights in BAM can be seen as an instance of likelihood tempering, which has been used to perform robust Bayesian inference (Wang et al., 2017) and to aid approximate Bayesian inference schemes (Neal, 1996; 2001; Mandt et al., 2016). While previous works focus on the offline case where data has already been collected, BAM focuses on the online case where the agent adaptively tempers the likelihood.

The concept of an external memory buffer has recently been explored in machine learning (Gemici et al., 2017; Wu et al., 2018; Marblestone et al., 2020). While similar in spirit to BAM, most works use a softmax as their readout weight. As a byproduct, the agent must select an element from the buffer even if it isn't applicable to the task at hand! BAM has no such restriction and can ignore all the previous data in the buffer, resetting back to the base prior.

4 EXPERIMENTS

To demonstrate the versatility of BAM, we apply it in a variety of scenarios. As BAM is a learning paradigm, it can be implemented as a module in a larger framework, allowing it to be easily used in settings such as control/reinforcement learning and domain adaptation (Thompson, 1933; Osband et al., 2018; Lowrey et al., 2018; Yoon et al., 2018). BAM requires the ability to construct the posterior, $p(\theta_t|\mathcal{D}_{<t}, W_t)$, and evaluate the log marginal likelihood, $\log p(\mathcal{D}_t|\mathcal{D}_{<t}, W_t)$. In general, the posterior and log marginal likelihood are available analytically only for conjugate priors (Gelman et al., 2013). While approaches exist for approximating the posterior (Robert et al., 2004; Brooks et al., 2011; Blei et al., 2017) and the log marginal likelihood (Robert et al., 2004; Gelman et al., 2013; Grosse et al., 2015), we restrict ourselves to conjugate priors to ensure that any benefits of BAM are not confounded by the effects of approximations.
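To make the selection objective concrete, the snippet below evaluates equation 21 for the Beta-Binomial case (our own code; the closed-form Beta KL and the Beta-Binomial marginal likelihood are standard results, while λ, the prior parameters, and all names are assumptions):

import numpy as np
from scipy.special import betaln, digamma, comb

def kl_beta(a1, b1, a0, b0):
    # KL( Beta(a1, b1) || Beta(a0, b0) ) in closed form.
    return (betaln(a0, b0) - betaln(a1, b1)
            + (a1 - a0) * digamma(a1) + (b1 - b0) * digamma(b1)
            + (a0 - a1 + b0 - b1) * digamma(a1 + b1))

def objective(W, counts, k_new, n=15, a0=1.0, b0=1.0, lam=0.1):
    # Equation 21: log marginal likelihood of the new datum minus the PC-prior penalty.
    W, counts = np.asarray(W), np.asarray(counts)
    a = a0 + np.sum(W * counts)                       # BAM prior from selected data (eq. 14)
    b = b0 + np.sum(W * (n - counts))
    logml = (np.log(comb(n, k_new))
             + betaln(a + k_new, b + n - k_new) - betaln(a, b))
    return logml - lam * np.sqrt(2.0 * kl_beta(a, b, a0, b0))

print(objective(W=[1, 0, 1], counts=[12, 2, 14], k_new=13))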
1. What is the focus of the paper regarding Bayesian online learning in non-stationary environments?
2. What is the contribution of the proposed method, BAM, in constructing an informative prior distribution?
3. What are the weaknesses of the paper regarding scalability, related work discussion, continual learning, and large-scale experiments?
4. How does the reviewer suggest improving the paper by addressing the issues mentioned?
5. Are there any additional comments or suggestions provided by the reviewer?
Summary Of The Paper Review
Summary Of The Paper
The paper aims to solve the downside of posterior shrinkage in Bayesian online learning when applied in a non-stationary environment. Due to posterior shrinkage, the previous posterior (a.k.a. the current prior) could be confidently misspecified for the current observations once the environment changes. To solve this misspecification problem, the proposed method constructs an adaptive prior that is correctly specified for current observations. This adaptive prior distribution is constructed by selecting the relevant previous data samples, which requires storing the whole history. The method is evaluated on various analytical experiments.
Review
The paper tackles an important problem of Bayesian online learning in a non-stationary environment. The main contribution is BAM, an algorithm that constructs an informative prior distribution for current observations based on stored previous data. While the method seems technically sound, it still falls short on the following points:
Scalability: the algorithm remembers all the data so far, which requires infinite memory in an ever-running system. The same applies to the computation. The algorithm will fail in an online learning environment. An immediate improvement would be to use a fixed memory. A similar idea occurs in continual learning with a memory system, where people only select representative data into a fixed memory (see the following point).
Proper related work discussion: at least two highly related papers are not mentioned in the current version: Li, Aodong, et al., "Detecting and Adapting to Irregular Distribution Shifts in Bayesian Online Learning", NeurIPS 2021 (previously appeared in workshops). Kurle, Richard, et al., "Continual learning with Bayesian neural networks for non-stationary data." ICLR 2020.
(Possibly) related work in other fields: Continual learning with a fixed-size memory: Aljundi, Rahaf, et al. "Gradient-based sample selection for online continual learning." Advances in Neural Information Processing Systems 32 (2019): 11816-11825. Bayesian inference with weighted likelihood: Wang, Yixin, Alp Kucukelbir, and David M. Blei. "Robust probabilistic modeling with Bayesian data reweighting." International Conference on Machine Learning. PMLR, 2017. Mandt, Stephan, et al. "Variational tempering." Artificial Intelligence and Statistics. PMLR, 2016.
Large-scale experiments: current experiments are all analytical. However, practical systems could be complex, intractable, and large-scale in time. Without demonstrating applicability in such environments, the approach may not be convincing.
Baseline comparisons: the paper conducted thorough experiments with baselines including ordinary Bayesian online learning and BOCD. However, the existing baselines have obvious limitations and cannot properly illustrate the benefits of the additional memory and re-visited states. More adaptive baselines like Bayesian forgetting or a Bayesian filter would be desirable to exemplify the usefulness of memory in a changing environment.
Other additional comments/suggestions: p(W_t|D_{<t}) ∝ 1: should W_t take some value? Consider changing the colors in Fig. 1 (left); it is hard to distinguish the methods. Fig. 5 is out of the main text.
ICLR
Title BAM: Bayes with Adaptive Memory

Abstract Online learning via Bayes' theorem allows new data to be continuously integrated into an agent's current beliefs. However, a naive application of Bayesian methods in non-stationary environments leads to slow adaptation and results in state estimates that may converge confidently to the wrong parameter value. A common solution when learning in changing environments is to discard/downweight past data; however, this simple mechanism of "forgetting" fails to account for the fact that many real-world environments involve revisiting similar states. We propose a new framework, Bayes with Adaptive Memory (BAM), that takes advantage of past experience by allowing the agent to choose which past observations to remember and which to forget. We demonstrate that BAM generalizes many popular Bayesian update rules for non-stationary environments. Through a variety of experiments, we demonstrate the ability of BAM to continuously adapt in an ever-changing world.

1 INTRODUCTION

The ability of an agent to continuously modulate its belief while interacting with a non-stationary environment is a hallmark of intelligence and has garnered a lot of attention in recent years (Zhang et al., 2020; Ebrahimi et al., 2020; Xie et al., 2020). The Bayesian framework enables online learning by providing a principled way to incorporate new observations into an agent's model of the world (Jaynes, 2003; Gelman et al., 2013). Through the use of Bayes' theorem, the agent can combine its own (subjective) a priori knowledge with data to achieve an updated belief encoded by the posterior distribution. The Bayesian framework is a particularly appealing option for online learning because Bayes' theorem is closed under recursion, enabling continuous updates in what is commonly referred to as the recursive Bayes method (Wakefield, 2013). As an example, suppose the agent first observes a batch of data, D_1, and then later observes another batch of data, D_2. We can express the agent's posterior distribution over the world, where the world is represented by θ, as

p(θ|D_1, D_2) = p(D_2|θ) p(θ|D_1) / p(D_2|D_1),  (1)

where

p(D_2|D_1) = ∫ p(D_2|θ) p(θ|D_1) dθ.  (2)

Equation 1 demonstrates the elegance and simplicity of recursive Bayes: at time t, the agent recycles its previous posterior, p(θ|D_{<t}), where D_{<t} = {D_1, · · · , D_{t−1}}, into its current prior and then combines it with a newly observed batch of data, D_t, to obtain an updated posterior, p(θ|D_{≤t}).

At first glance, it would appear that a naive application of recursive Bayes would suffice for most online learning tasks. However, the recursive Bayes method relies on the assumption that the world is stationary, i.e. D_1, D_2, · · · are all independent and identically distributed. When this assumption is violated, recursive Bayes can fail catastrophically. As an illustration, consider the law of total variance:

Var(θ|D_{<t}) = E[Var(θ|D_{<t}, D_t) | D_{<t}] + Var(E[θ|D_{<t}, D_t] | D_{<t}).  (3)

Since both terms on the right hand side are non-negative, equation 3 reveals that in expectation, the variance of the posterior decreases as more data is seen regardless of the actual distribution of D_t, i.e.

Var(θ|D_{<t}) ≥ E[Var(θ|D_{<t}, D_t) | D_{<t}].  (4)

In fact, for some models equation 4 is true with probability 1; we demonstrate examples in Appendix A. Thus, if the parameters of the environment, θ, were to change, the variance of the posterior would still decrease, becoming more certain of a potentially obsolete parameter estimate.
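The shrinkage argument can be checked numerically; the following minimal sketch (an illustration using the Normal-mean model analyzed in Appendix A.1, not code from the experiments) shows the posterior variance decreasing deterministically even though the environment switches halfway through.

import numpy as np

sigma2, tau, mu = 1.0, 1.0, 0.0     # known noise var, prior var, prior mean
rng = np.random.default_rng(0)
for n in range(1, 41):
    theta = 0.0 if n <= 20 else 5.0            # the environment switches here
    y = rng.normal(theta, np.sqrt(sigma2))
    tau_new = 1.0 / (1.0 / tau + 1.0 / sigma2)     # variance always shrinks
    mu = tau_new * (mu / tau + y / sigma2)         # recursive Bayes mean
    tau = tau_new
# After the switch, tau keeps shrinking while mu lags far behind theta = 5:
# the posterior grows confident around an obsolete estimate.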
Modeling the environment as stationary when it is actually changing also keeps the learning speed of the agent artificially low, as tighter posteriors prevent large jumps in learning. This is the opposite of what an intelligent agent should do in such an event: if the environment changes, we would expect the agent's uncertainty and learning speed to increase in response. As was elegantly stated by Monton (2002), the problem with naive use of recursive Bayes is that "Such a Bayesian never forgets."

Previous approaches to enabling recursive Bayes to work in non-stationary settings have primarily focused on forgetting past experience, either through the use of changepoint detection (Adams & MacKay, 2007; Li et al., 2021) or by exponentially weighting past experiences (Moens, 2018; Moens & Zénon, 2019; Masegosa et al., 2020). While empirically successful, their focus on forgetting the past means that revisited states are treated as novel. In this work we take an alternative approach to online Bayesian learning in non-stationary environments by endowing an agent with an explicit memory module. Crucially, the addition of a memory buffer equips the agent with the ability to modulate its uncertainty by choosing which past experiences to both forget and remember. We call our approach Bayes with Adaptive Memory (BAM) and demonstrate its wide applicability and effectiveness on a number of non-stationary learning tasks.

2 BAYES WITH ADAPTIVE MEMORY

The generative model is assumed to evolve according to

θ_t ∼ p(θ_t|θ_{t−1}, t),  (5)
D_t ∼ p_t(D) ≡ p(D|θ_t),  (6)

where equation 5 is the latent dynamics that dictate the evolution of the environment parameters, θ_t, and equation 6 is the likelihood whose parametric form is fixed throughout time, e.g. p_t(D) = N(θ_t, σ²). Equations 5 and 6 define a state-space model, which allows one to infer θ_t through Bayesian filtering (Särkkä, 2013):

p(θ_t|D_{≤t}) ∝ p(D_t|θ_t) p(θ_t|D_{<t}),  (7)
p(θ_t|D_{<t}) = ∫ p(θ_t|θ_{t−1}, t) p(θ_{t−1}|D_{<t}) dθ_{t−1}.  (8)

The parameterization of equations 5 and 6 dictates the tractability of equations 7 and 8. If a priori an agent knew that equation 5 is a linear dynamical system with additive white Gaussian noise and equation 6 is also Gaussian whose conditional mean is a linear function of θ_t, then the Kalman filter can be used (Kalman, 1960). For more complicated latent dynamics and/or likelihood models, methods such as particle filtering (Doucet & Johansen, 2009) and unscented Kalman filtering (Julier & Uhlmann, 1997) can be used. Crucially, Bayesian filtering methods assume that the latent dynamics governed by equation 5 are known; however, this is rarely the case in practice.

Instead of making assumptions on the parametric form of equation 5, we take a different approach. In BAM, the agent maintains a memory buffer, D_{<t}, that stores previous observations of the environment. At time t the agent obtains a new batch of data, D_t ∼ p_t(D). How should the agent combine the newly observed data, D_t, with its stored memory, D_{<t}, to update its belief as encoded by the posterior distribution? In recursive Bayes, the posterior distribution is computed according to¹

p(θ_t|D_t, D_{<t}) ∝ p(D_t|θ_t) p(θ_t|D_{<t}),  (9)
p(θ_t|D_{<t}) ∝ p(θ_t) ∏_{j=1}^{t−1} p(D_j|θ_t),  (10)

where we refer to p(θ_t) as the base prior. Equation 10 allows us to interpret recursive Bayes as the agent constructing a dynamic prior, p(θ_t|D_{<t}), using all the experiences stored in its memory buffer.
This works under the stationarity assumption; when this assumption is violated, the application of Bayes' theorem can lead to confidently wrong results as the "distance" between p_i(D) and p_j(D) can be vast. An alternative is for the agent to completely forget all of its past experiences:

p(θ_t|D_t) ∝ p(D_t|θ_t) p(θ_t).  (11)

While equation 11 may be viable in situations where D_t is sufficiently informative, it is wasteful when experiences in the memory buffer may help infer θ_t. BAM dynamically finds a middle ground between these two extremes of remembering (equation 10) and forgetting (equation 11) everything by allowing the agent to choose which data to use from its memory buffer to construct the prior. Specifically, the agent is endowed with a time-dependent readout weight, W_t = [w_{t,1}, w_{t,2}, · · · , w_{t,t−1}] where w_{t,j} ∈ [0, 1]. Given a new datum D_t, BAM constructs its posterior according to

p(θ_t|D_t, D_{<t}, W_t) ∝ p(θ_t) p(D_t|θ_t) ∏_{j=1}^{t−1} p(D_j|θ_t)^{w_{t,j}}.  (12)

We can rewrite equation 12 as

p(θ_t|D_t, D_{<t}, W_t) = p(D_t|θ_t) p(θ_t|D_{<t}, W_t) / p(D_t|D_{<t}, W_t),  (13)

where

p(θ_t|D_{<t}, W_t) ∝ p(θ_t) ∏_{j=1}^{t−1} p(D_j|θ_t)^{w_{t,j}},  (14)

and

p(D_t|D_{<t}, W_t) = ∫ p(D_t|θ_t) p(θ_t|D_{<t}, W_t) dθ_t.  (15)

The prior construction in equation 14 is akin to recursive Bayes, but now the agent can dynamically and adaptively change its prior by using the readout weights, W_t, to weigh the importance of previous experience; at the extreme, it can choose to completely forget a previous experience, w_{t,j} = 0, or fully remember it, w_{t,j} = 1. For simplicity, we restrict the readout weights to be binary, i.e. w_{t,j} ∈ {0, 1}.

The combination of a memory buffer, D_{<t}, with a time-dependent readout weight, W_t, allows BAM to generalize many previously proposed approaches. By setting w_{t,1} = w_{t,2} = · · · = w_{t,t−1} = 1, we recover recursive Bayes (equation 10). By setting w_{t,1} = w_{t,2} = · · · = w_{t,t−1} = α, where 0 ≤ α ≤ 1, we recover the power priors approach of Ibrahim et al. (2015). By setting w_{t,j} = α^{t−1−j}, where 0 ≤ α ≤ 1, we recover exponential forgetting (Moens, 2018; Moens & Zénon, 2019; Masegosa et al., 2020). Lastly, by setting a particular subset of the readout weights to 0, we recover Bayesian unlearning (Nguyen et al., 2020).

The ability to adaptively change its prior implies that BAM can increase/decrease its uncertainty as the situation demands; subsequently, this modulates the agent's learning speed. Using variance as a proxy for uncertainty, one would expect that the variance of the prior used in BAM (equation 14) is always at least as large as the variance of the prior used in recursive Bayes (equation 10). We formalize this for the case of binary readout weights in the following proposition.

Proposition 1. Let p(θ|D_{<t}, W_t) be the prior used by BAM, defined in equation 14, and let p(θ|D_{<t}) be the recursive Bayes prior, defined in equation 10. Then

E[Var(θ|D_{<t}, W_t) | W_t] ≥ E[Var(θ|D_{<t})],  ∀ W_t ∈ {0, 1}^{t−1}.  (16)

Proof. Proof is in Appendix B.

¹ Recursive Bayes is equivalent to Bayesian filtering when p(θ_t|θ_{t−1}, t) = δ(θ_t = θ_{t−1}).

2.1 SELECTION OF READOUT WEIGHTS VIA BAYESIAN MODEL-SELECTION

While the previous section demonstrated the flexibility of BAM, the question remains: how should the readout weights, W_t, be set? Equation 13 allows us to view different readout weights as different models. Through this lens, we can follow the spirit of Bayesian model selection (Gelman et al., 2013) and compute a posterior over the readout weights:

p(W_t|D_t, D_{<t}) ∝ p(W_t|D_{<t}) p(D_t|W_t, D_{<t}).  (17)
For practicality, we compute the maximum a posteriori (MAP) estimate of equation 17 (Gelman et al., 2013) and use that as the value of the readout weight:

W_t = argmax_{W ∈ {0,1}^{t−1}} log p(D_t|W, D_{<t}) + log p(W|D_{<t})  (18)
    = argmax_{W ∈ {0,1}^{t−1}} log ∫ p(D_t|θ_t) p(θ_t|W, D_{<t}) dθ_t + log p(W|D_{<t}).  (19)

The first term of equation 18 is the log marginal likelihood, which measures the likelihood of D_t being distributed according to the predictive distribution, p(D|W, D_{<t}), while the prior, log p(W|D_{<t}), acts as a regularizer. This procedure of constantly updating the readout weights through equation 18 can be interpreted as providing Bayes a feedback mechanism: equation 18 allows the agent to directly measure its ability to fit the observed data using different combinations of experiences in its buffer via the readout weight, and then to choose the readout weight that leads to the best fit. In contrast, standard Bayesian inference is an open-loop procedure: data, likelihood and prior are given and a posterior is spat out, irrespective of the fit of the model to the data (Simpson et al., 2017).

Still left is the question of how we design the prior, p(W|D_{<t}). In certain scenarios, using an uninformative prior, i.e. p(W|D_{<t}) ∝ 1, may suffice if the data is very informative and/or the number of data points in D_t is large. In scenarios where these conditions are not met, it is important to use an informative prior as it reduces the chance of overfitting. In general, the design of priors is highly nontrivial (Winkler, 1967; Gelman et al., 2013; Simpson et al., 2017). While there exist many potential options, we use the penalized model complexity priors proposed by Simpson et al. (2017) as they are designed to reduce the chance of overfitting. Following Simpson et al. (2017), we parameterize the prior as

p(W|D_{<t}) ∝ exp(−λ √(2 D_KL[p(θ_t|W, D_{<t}) ‖ p(θ_t)])),  (20)

where λ ∈ [0, ∞) is a hyperparameter that controls the strength of the prior.² Equation 20 encodes our prior belief that we favor values of W_t that produce simpler models, where simplicity is quantified as the Kullback-Leibler divergence between p(θ_t|W_t, D_{<t}) and the base prior, p(θ_t). Plugging equation 20 into equation 18 we get

W_t = argmax_{W ∈ {0,1}^{t−1}} log p(D_t|W, D_{<t}) − λ √(2 D_KL[p(θ_t|W, D_{<t}) ‖ p(θ_t)]).  (21)

In general, solving equation 21 is difficult as the number of possible readout weights is 2^{t−1}, making brute-force solutions practically infeasible. While there exist many approaches for performing discrete optimization, we found that a simple greedy approach sufficed for our experiments; in the interest of space, we defer discussion of this to Appendix C.

3 RELATED WORKS

A variety of approaches have been proposed for learning in non-stationary environments. In signal processing, adaptive filtering techniques such as recursive least squares (RLS) and least mean square (LMS) filtering are the de facto approaches for filtering in non-stationary environments (Haykin, 2008). While empirically successful, RLS and LMS are only applicable to a limited range of models, i.e. linear models. In contrast, BAM is a general-purpose algorithm that can be deployed on a wide variety of models.

² λ = 0 recovers the uninformative prior case, p(W_t|D_{<t}) ∝ 1.

If the latent dynamics are known (or assumed to be known), then Bayesian filtering can be employed. A popular approach is to model the latent dynamics (equation 5) as an autoregressive process (Kurle et al., 2020; Rimella & Whiteley, 2020).
While this approach has been popular, it is only applicable for models where the parameters are real-valued. A seminal work on Bayesian filtering is the Bayesian online changepoint detection (BOCD) algorithm of Adams & MacKay (2007), where the latent dynamics (equation 5) are modeled as piece-wise constant. While BOCD is broadly applicable and has seen empirical success, the caveat is that an agent forgets all previous experience when a change is detected; thus, previously visited states appear novel to the agent and learning must begin from scratch. An extension to BOCD was proposed by Li et al. (2021), where, when a change is detected, a scaled version of the previous posterior is used as the prior. While similar in spirit to BAM, we note that the approach proposed in Li et al. (2021) is designed for Gaussian distributions, while BAM can work with arbitrary distributions. Moreover, the approach in Li et al. (2021) can only increase the uncertainty by a fixed pre-determined amount, while BAM can adaptively modulate its uncertainty.

Previous works have proposed solutions for making recursive Bayes more suited for use in non-stationary environments through exponential forgetting of past data (Moens, 2018; Moens & Zénon, 2019; Masegosa et al., 2020). While these models have also seen empirical success, their focus has been on forgetting past experiences, which prevents the agent from leveraging past experiences that are relevant. In BAM, the agent is focused not only on forgetting irrelevant experiences but on remembering relevant experiences as well.

The use of readout weights in BAM can be seen as an instance of likelihood tempering, which has been used to perform robust Bayesian inference (Wang et al., 2017) and to help with approximate Bayesian inference schemes (Neal, 1996; 2001; Mandt et al., 2016). While previous works focus on the offline case where data has already been collected, BAM focuses on the online case where the agent adaptively tempers the likelihood.

The concept of an external memory buffer has recently been explored in machine learning (Gemici et al., 2017; Wu et al., 2018; Marblestone et al., 2020). While similar in spirit to BAM, most works use a softmax as their readout weight. As a byproduct, the agent must select an element from the buffer even if it isn't applicable to the task at hand! BAM has no such restriction, and can ignore all the previous data in the buffer, resetting back to the base prior.

4 EXPERIMENTS

To demonstrate the versatility of BAM, we apply it in a variety of scenarios. As BAM is a learning paradigm, it can be implemented as a module in a larger framework, allowing it to be easily used in settings such as control/reinforcement learning and domain adaptation (Thompson, 1933; Osband et al., 2018; Lowrey et al., 2018; Yoon et al., 2018). BAM requires the ability to construct the posterior, p(θ_t|D_{<t}, W_t), and evaluate the log marginal likelihood, log p(D_t|D_{<t}, W_t). In general, the posterior and log marginal likelihood are only available analytically for conjugate priors (Gelman et al., 2013). While approaches exist for approximating the posterior (Robert et al., 2004; Brooks et al., 2011; Blei et al., 2017) and the log marginal likelihood (Robert et al., 2004; Gelman et al., 2013; Grosse et al., 2015), we restrict ourselves to conjugate priors to ensure that any benefits of BAM are not due to uncertain effects of approximations.
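As a concrete conjugate instance, the sketch below (an illustration; the Beta(1, 1) base prior and all names are our choices) computes both required quantities for the Beta-Binomial pairing used in the first experiment below.

import numpy as np
from scipy.special import betaln, gammaln

def bam_beta_binomial(k_new, n_new, memory, w, a0=1.0, b0=1.0):
    # memory: list of (k, n) Binomial batches D_{<t}; w: binary readout weights.
    a = a0 + sum(wi * k for wi, (k, n) in zip(w, memory))
    b = b0 + sum(wi * (n - k) for wi, (k, n) in zip(w, memory))
    # Log marginal likelihood log p(D_t | W, D_{<t}) used to score W (eq. 19).
    log_ml = (gammaln(n_new + 1) - gammaln(k_new + 1)
              - gammaln(n_new - k_new + 1)
              + betaln(a + k_new, b + n_new - k_new) - betaln(a, b))
    posterior = (a + k_new, b + n_new - k_new)   # Beta parameters after D_t
    return posterior, log_ml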
The use of conjugate priors also allows us to use sufficient statistics to compute posteriors, allowing BAM to scale amicably when the number of data points in a batch is large (Casella & Berger, 2021).

4.1 EXPERIMENT 1: INFERENCE IN A NON-STATIONARY ENVIRONMENT

To evaluate BAM on online inference in a non-stationary environment, we generate data from the following model:

θ_t = a sin(2πt/100) + b,  (22)
p(D_t|θ_t) = Binomial(15, θ_t),  (23)

where a = 0.3 and b = 0.5 are chosen such that the lower and upper bounds for θ_t are 0.2 and 0.8, respectively. We evaluate BAM with no regularization, λ = 0, and with regularization, where λ = 0.1; as the data is discrete, there is a possibility that BAM could overfit, thus a priori we would expect the regularized BAM to perform better. We compare against recursive Bayes, Bayesian exponential forgetting (BF) and Bayesian online changepoint detection (BOCD).³

Figure 1 demonstrates the weakness of recursive Bayes; as it views more data, the posterior gets more confident. This reduces the learning speed of the agent, preventing it from accurately tracking θ_t, and causing it to converge to the average with an extremely low posterior variance. BOCD tracks the parameter relatively well, though its estimates are slightly delayed. As BOCD lacks the ability to recall useful data from the past, its posterior variance resets every time a changepoint is detected. BAM is able to track θ_t and doesn't suffer from the temporal lag seen in the BOCD results, though the lack of regularization leads to estimates that are not as smooth as BOCD's. The posterior variance of BAM reflects that the agent remembers relevant history and forgets irrelevant history, reducing the jump in posterior variance when revisiting a previously seen state. Lastly, we can see that BAM with regularization leads to smoother estimates but tends to be less confident compared to the other methods.

4.2 EXPERIMENT 2: CONTROLS

In this section we illustrate the benefit of memory by applying BAM on a learning task to model non-linear dynamics for controls. The task is an analytical version of Cartpole (Barto et al., 1983), where the goal is to swing up a pole on a moving cart. Non-stationarity is introduced by changing the environment's gravity over time. We explore the performance of BAM under two different information models. In the episodic setting, the agent is told when a change occurs, but not the value of the new gravity parameter. In the continual learning setting, the agent is not informed of environmental changes.⁴

The reward for the task is the cosine of the angle of the pole on the cart, where an angle of 0° is the vertical 'up' position. To make the problem amenable to analytical posterior and log marginal likelihood computations, we model the nonlinear dynamics using linear regression with random Fourier features (RFF) (Rahimi & Recht, 2007):

x_t = x_{t−1} + M φ(x_{t−1}, a_t) + ε_t,  ε_t ∼ N(0, σ²I),  (24)

where x_t ∈ R^{d_x} is the state vector, a_t ∈ R^{d_a} is the action vector, ε_t ∈ R^{d_x} is state noise and φ is our RFF function. For simplicity, we assume a fixed noise variance of σ² = 10⁻⁶.

³ The timescale parameter for BOCD is 1/100, which is the frequency of the sinusoid. The weighting term for BF is 0.8.
⁴ For both settings, the number of data points in a batch is relatively large, leading the log marginal likelihood to overtake the prior in equation 21. As regularization has little effect, results are shown for λ = 0.
This parameterization allows us to perform Bayesian linear regression over M, which is analytically tractable (Gelman et al., 2013). Full details can be found in Appendix D.1.

4.2.1 EPISODIC ONE-SHOT

In this setting our simulated Cartpole hypothetically moves between different planets (hence a change in gravity) while still being asked to swing the pole up. In an episode, gravity is fixed for the duration of 15 trials, where each trial resets the Cartpole to a random initial state, x_0. Each trial produces a trajectory of states and actions of length H that are batched into one unit of data, such that each episode contributes 15 experiences; thus the datum for trial t is D_t = {([x_j, a_j], [x_j − x_{j−1}])}_{j=1}^{H}.

We compare BAM to recursive Bayes in a one-shot manner: after the first trial of a new episode, BAM computes a weight vector over all previously encountered trial data to inform a posterior for the duration of the episode. Recursive Bayes is reset to the base prior at the beginning of a new episode. Both proceed to update their beliefs every trial in an episode. We show in Figure 2 results over 5 random seeds, where the expected score for a ground-truth model is shown as a reference. The first time BAM encounters a novel planet, it resets its prior to the base prior and starts learning from scratch, similar to recursive Bayes. On subsequent visits, however, BAM is able to leverage its past experiences to quickly adapt and recover high levels of performance. As recursive Bayes starts from scratch, it will again need multiple trials to develop a competent model.

4.2.2 CONTINUAL LEARNING

In addition to the challenge of adapting to a different environment, we also test BAM when the agent is not informed of the change, such that adaptation must happen continually. In this scenario without explicit episodes, the gravity of the world can change after a trial, unbeknownst to the agent. Similar to the previous setting, a datum for trial t is D_t = {([x_j, a_j], [x_j − x_{j−1}])}_{j=1}^{H}.

While it is straightforward to run BAM in this setting, we also investigate combining BAM with BOCD, which we denote as BAM + BOCD. In BOCD, the detection of a changepoint causes the posterior distribution to be reset to the base prior. In BAM + BOCD, the detection of a changepoint is used as a signal for when the agent should adapt its prior by computing W_t to obtain p(θ_t|W_t, D_{<t}); this avoids rerunning the optimization procedure after each trial.

We show in Figure 3 that while BOCD works as intended, without BAM the Cartpole has to relearn a model from the prior belief, leading to significant dips in the expected reward. While all methods are able to adapt when the environment is in a constant state, the use of past information allows BAM and BAM + BOCD to quickly adapt. We can see that BAM and BAM + BOCD perform very similarly to each other, suggesting that we can bypass unnecessary computation.

4.3 EXPERIMENT 3: NON-STATIONARY BANDIT

A common environment for testing the performance of online learning algorithms is the bandit setting (Sutton & Barto, 2018). We study a non-stationary version of the bandit setting where each arm switches between two values asynchronously, such that the best arm could be, at any point in time, a previously low-value arm. Gaussian noise with σ = 0.25 is additionally added to the current arm value. Sample arm values can be found in Figure 5.
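For concreteness, a minimal simulator of this environment might look as follows; the per-arm value ranges and the switch probability are assumptions for illustration, while σ = 0.25 matches the noise level stated above.

import numpy as np

def switching_arm_values(K, T, switch_prob=0.016, seed=0):
    rng = np.random.default_rng(seed)
    base = rng.uniform(0.0, 1.0, size=(K, 2))   # the two values per arm
    state = rng.integers(0, 2, size=K)          # which value is active
    values = np.empty((T, K))
    for t in range(T):
        flips = rng.random(K) < switch_prob     # asynchronous switches
        state = np.where(flips, 1 - state, state)
        values[t] = base[np.arange(K), state] + rng.normal(0.0, 0.25, K)
    return values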
For stationary bandits, a popular algorithm is Thompson sampling (Thompson, 1933), in which a posterior over each arm is continually updated via recursive Bayes. These posteriors are then leveraged to decide which arm the agent should pull, where the posterior uncertainty allows the agent to automatically switch between exploration and exploitation. In the non-stationary setting, we would expect vanilla Thompson sampling to fail as the arm posteriors would continue becoming more certain, as is evident from Section 4.1.

While there are many approaches for adapting BAM to perform well in the non-stationary bandit setting, we take a simple approach and combine BAM with the upper confidence bound (UCB) bandit algorithm (Agrawal, 1995), which we call UCBAM; in the interest of space, we provide an algorithm table in Appendix D.3.1. We compare UCBAM against UCB, Thompson sampling, Bayesian exponential forgetting + Thompson sampling, and a BOCD + Thompson sampling scheme proposed by Mellor & Shapiro (2013); hyperparameter values can be found in Appendix D.3. From Figure 4, we see that UCBAM outperforms the other methods for both 10 and 50 arms. Thompson sampling fails to capture the true current values of the arms and suffers a large penalty, while the exploration afforded by UCB enables better performance. BOCD achieves low regret in the 10-arm setting, but reverts to its prior too often to perform well with 50 arms.

4.4 EXPERIMENT 4: DOMAIN ADAPTATION WITH ROTATED MNIST

In the image classification setting, we often want to operate across a variety of domains. Traditional approaches include learning a single high-capacity model or encoding assumptions about the domain structure into the system (Jaderberg et al., 2015; Worrall et al., 2017). Instead, we use a simple multivariate linear regression model where the targets are one-hot encoded labels, taking the highest output as the selected class. We consider a setting where the distribution of domains is known and is the same at both train and test time, and evaluate BAM's ability to classify given a small number of labeled examples from the domains to adapt its belief.

To achieve this, we create a rotated MNIST dataset. 32 domains were created, where each domain comprised 1,875 images randomly sampled without replacement from the training set. In a domain, the images are rotated by an angle sampled uniformly at random from 0 to π. Each domain is treated as one batch of data in the memory buffer, i.e. D_i = {(x_j^i, y_j^i)}_{j=1}^{1875}. We split and rotate the test set similarly into 8 domains and give 10 labeled examples from each to find readout weights over the training data. We calculate the average accuracy over all test domains and collect results over 10 random seeds. While OLS trained over all domains gets a mean and standard deviation accuracy of 55% ± 3.7%, BAM achieves a test-set accuracy of 71.8% ± 5.2%, showing that BAM is able to leverage previous experiences to adapt to novel domains.

5 CONCLUSION AND FUTURE WORK

In this work we present BAM, a flexible Bayesian framework that allows agents to adapt to non-stationary environments. Our key contribution is the addition of a memory buffer to the Bayesian framework, which allows the agent to adaptively change its prior by choosing which past experiences to remember and which to forget. Empirically, we show the proposed approach is general enough to be deployed in a variety of problem settings such as online inference, control, non-stationary bandits and domain adaptation.
To ensure that we isolated the benefits of BAM, the experiments focused on conjugate-prior distributions as this allowed us to compute the prior/posterior and the log marginal likelihood in closed form. Future work will focus on leveraging advances in streaming variational inference (Broderick et al., 2013; Kurle et al., 2020) to allow BAM to be deployed on more complicated models, e.g. Bayesian deep neural networks. For simplicity, we focused on binary values for the readout weights as this allowed a simple greedy discrete optimization algorithm to be used. We imagine that allowing the weights to take any value between 0 and 1 will increase performance in certain settings and allow BAM to have a much larger repertoire of priors that it can construct, as well as suggest different optimization algorithms to use within the framework. Finally, efficient memory buffer schemes will be explored to avoid the 'infinite memory' problem of continual learning, enabling BAM to operate with efficiency indefinitely.

6 ACKNOWLEDGMENTS

The authors thank Ayesha Vermani, Matthew Dowling and Il Memming Park for insightful discussions and feedback.

A EXAMPLES OF POSTERIORS WITH DECREASING VARIANCE

In this section we provide two cases where the variance of the posterior is non-increasing with probability 1 as more data is collected, regardless of the observed data. For simplicity we stick to 1D, though we are confident these results extend to the multi-dimensional setting.

A.1 BAYESIAN ESTIMATION OF THE MEAN OF A NORMAL DISTRIBUTION

The likelihood is of the form

p(y|θ) = N(θ, σ²),  (25)

where σ² > 0 is known. We use a normal prior

p(θ) = N(θ̄_0, τ_0),  (26)

where τ_0 > 0. Given arbitrary data y_1, · · · , y_N ∼ p(y_{1:N}), we get that the posterior is of the form

p(θ|y_{1:N}) = N(θ̄_N, τ_N),  (27)

where

τ_N = (τ_0⁻¹ + N σ⁻²)⁻¹ = σ² τ_0 / (σ² + N τ_0),  (28)
θ̄_N = τ_N (τ_0⁻¹ θ̄_0 + σ⁻² Σ_{n=1}^{N} y_n).  (29)

We observe that the posterior variance, equation 28, is not a function of the observed data. In fact, the posterior variance is deterministic given N, τ_0 and σ². In this particular setting, we can show that τ_N is a strictly decreasing function of N. To prove that τ_0 > τ_1 > · · · > τ_n > · · · > τ_N, it suffices to show that

τ_{n−1} > τ_n,  ∀ n ∈ {1, · · · , N},  (30)

which is equivalent to showing that

τ_n / τ_{n−1} < 1,  ∀ n ∈ {1, · · · , N}.  (31)

Before proceeding, we note that as Bayes' theorem is closed under recursion, we can always express the posterior variance as

τ_n = (τ_{n−1}⁻¹ + σ⁻²)⁻¹ = σ² τ_{n−1} / (σ² + τ_{n−1}).  (32)

Computing τ_n / τ_{n−1}:

τ_n / τ_{n−1} = σ² τ_{n−1} / (σ² + τ_{n−1}) × 1 / τ_{n−1}  (33)
             = σ² / (σ² + τ_{n−1}).  (34)

Because

τ_n > 0,  ∀ n ∈ {0, · · · , N},  (35)

we have that σ² < σ² + τ_{n−1}, and conclude that τ_n / τ_{n−1} < 1.

A.2 BAYESIAN LINEAR REGRESSION

Next, we consider the setting of Bayesian linear regression with known variance. The likelihood is of the form

p(y_i|x_i, θ) = N(θ x_i, σ²),  x_i ∈ R,  (36)

where σ² > 0 is known. We use a normal prior

p(θ) = N(θ̄_0, τ_0),  (37)

where τ_0 > 0. Given arbitrary observations (x_1, y_1), . . . , (x_N, y_N), we have that the posterior is of the form

p(θ|x_{1:N}, y_{1:N}) = N(θ̄_N, τ_N),  (38)

where

τ_N = (τ_0⁻¹ + σ⁻² Σ_{n=1}^{N} x_n²)⁻¹ = σ² τ_0 / (σ² + τ_0 Σ_{n=1}^{N} x_n²),  (39)
θ̄_N = τ_N (τ_0⁻¹ θ̄_0 + σ⁻² Σ_{n=1}^{N} x_n y_n).  (40)

To prove that τ_0 ≥ τ_1 ≥ · · · ≥ τ_n ≥ · · · ≥ τ_N, it suffices to show that

τ_n / τ_{n−1} ≤ 1,  ∀ x_n ∈ R, ∀ n ∈ {1, · · · , N}.  (41)

Again, because Bayes' theorem is closed under recursion, we can always rewrite the posterior variance as

τ_n = (τ_{n−1}⁻¹ + σ⁻² x_n²)⁻¹ = σ² τ_{n−1} / (σ² + τ_{n−1} x_n²).  (42)
So

τ_n / τ_{n−1} = σ² τ_{n−1} / (σ² + τ_{n−1} x_n²) × 1 / τ_{n−1}  (43)
             = σ² / (σ² + τ_{n−1} x_n²).  (44)

As x_n² ≥ 0, we have that τ_n / τ_{n−1} ≤ 1, which completes the proof.

B PROOF OF PROPOSITION 1

For clarity, we rewrite the proposition below.

Proposition. Let

p(θ|D_{<t}, W_t) ∝ p(θ) ∏_{j=1}^{t−1} p(D_j|θ)^{w_{t,j}},  w_{t,j} ∈ {0, 1},  (45)

be the prior used in BAM and let

p(θ|D_{<t}) ∝ p(θ) ∏_{j=1}^{t−1} p(D_j|θ),  (46)

be the recursive Bayes prior. Then

E[Var(θ|D_{<t}, W_t) | W_t] ≥ E[Var(θ|D_{<t})],  ∀ W_t ∈ {0, 1}^{t−1}.  (47)

Proof. We begin by describing some simple cases before presenting the proof for the general case.

Case 1: All the readout weights are 1. If all the readout weights are 1, i.e. W_t = 1, then

p(θ|D_{<t}, W_t = 1) = p(θ|D_{<t}),  (48)

recovering the recursive Bayes prior. Thus

E[Var(θ|D_{<t}, W_t = 1) | W_t = 1] = E[Var(θ|D_{<t})].  (49)

Case 2: All the readout weights are 0. If all the readout weights are 0, i.e. W_t = 0, then

p(θ|D_{<t}, W_t = 0) = p(θ),  (50)

recovering the base prior. The law of total variance states

Var(θ) = E[Var(θ|D_{<t})] + Var(E[θ|D_{<t}]).  (51)

As both terms on the right-hand side are non-negative, this implies that

E[Var(θ|D_{<t}, W_t = 0) | W_t = 0] = Var(θ) ≥ E[Var(θ|D_{<t})].  (52)

Case 3: General case. Let r be the indices of the readout weights set to 1 ("remembered") and f be the indices of the readout weights set to 0 ("forgotten"). We can express the memory buffer as D_{<t} = D_r ∪ D_f, where D_r are the data points selected by the readout weights and D_f are the data points that are ignored. We can rewrite the BAM prior as

p(θ|D_{<t}, W_t) = p(θ|D_r),  (53)

which is equivalent to applying Bayes' theorem using D_r. Similarly, we can rewrite the recursive Bayes prior as

p(θ|D_{<t}) = p(θ|D_r, D_f) ∝ p(D_f|θ) p(θ|D_r).  (54)

Using the law of total variance, we get

Var(θ|D_{<t}, W_t) = Var(θ|D_r) = E[Var(θ|D_{<t}) | D_r] + Var(E[θ|D_{<t}] | D_r),  (55)

where again, the above implies

Var(θ|D_r) ≥ E[Var(θ|D_{<t}) | D_r].  (56)

As the above inequality holds for all values of D_r, it also holds under expectation as well:

E[Var(θ|D_r) | W_t] ≥ E[Var(θ|D_{<t}) | W_t].  (57)

Since Var(θ|D_{<t}) is the variance under the recursive Bayes model, it is not a function of W_t, allowing the conditioning on W_t to be dropped:

E[Var(θ|D_{<t}) | W_t] = E[Var(θ|D_{<t})].  (58)

Applying our definition of D_r recovers the desired result:

E[Var(θ|D_{<t}, W_t) | W_t] ≥ E[Var(θ|D_{<t})].  (59)

C DISCUSSION OF GREEDY DISCRETE OPTIMIZATION

As the number of choices is 2^{t−1}, it is impractical to use brute-force methods for solving the discrete optimization problem defined in equation 21. For simplicity, we use two types of greedy approaches for discrete optimization. In both cases, each element in memory is evaluated against a target datum with the inner term of equation 19, the log marginal likelihood and regularization term. The first is a bottom-up approach, where we start with all readout weights set to 0 and greedily add the most beneficial associated datum until the combined score decreases. Pseudo code is displayed in Algorithm 1. Note that this is similar in spirit to the stepwise selection approach used for selecting variables in linear regression (Hocking, 1976).
Algorithm 1: Bottom-Up Greedy for BAM
Data: memory D_{<t}, target D_t, prior p, regularizer strength λ
priorscore ← log p(D_t)
for size(D_{<t}) iterations do
    for each D_i in D_{<t} do
        if W[i] = 0 then
            scores[i] ← log ∫ p(D_t|θ_t) p(θ_t|W, D_{<t}) dθ_t + log p(W|D_{<t})  # evaluated with w_i temporarily set to 1
        else
            scores[i] ← -Inf
        end
    end
    score, idx = findmax(scores)
    if score > priorscore then
        W[idx] ← 1 ; priorscore ← score ; p = posterior(p, D_{<t}[idx])
    else
        return W
    end
end
Result: Readout weights W

In the second approach, all readout weights start at 0. The contribution of each datum in D_{<t} is evaluated independently (and can practically be done in parallel with either multi-core CPUs or GPUs). These scores are filtered to keep only those better than the base prior's likelihood. The top q-th percentile of the remaining scores is chosen and the corresponding readout weights are set to 1. Pseudo code is displayed in Algorithm 2. This approach is faster than bottom-up as only one round of optimization is needed, but combining the individually selected experiences could potentially lead to sub-optimal performance. Additionally, the percentile cutoff may needlessly include or exclude weight values. In practice, we found that the two approaches performed similarly, with the main exception being the MNIST experiment, where the parallel approach was significantly worse than bottom-up.

Algorithm 2: Parallel selection for BAM
Data: memory D_{<t}, target D_t, regularizer strength λ, prior distribution p, cutoff q
priorscore = log p(D_t)
for each D_i in D_{<t} do
    scores[i] ← log ∫ p(D_t|θ_t) p(θ_t|D_i) dθ_t + log p(W|D_i)
end
cutoff = quantile(scores > priorscore, q)
for each i in scores do
    if scores[i] > cutoff then
        W[i] ← 1
    else
        W[i] ← 0
    end
end
Result: Readout weights W

D EXPERIMENTAL SETTINGS

D.1 CONTROLS

For our controls experiments, we used Model Predictive Path Integral control (Williams et al., 2017), a model predictive control (MPC) algorithm with a planning horizon of 50 timesteps and 32 sample trajectories. Our sampling covariance was 0.4 for each controlled joint; in the case of Cartpole, the action space is one-dimensional. The temperature parameter we used was 0.5. Planning with a probabilistic model has each sampled trajectory use a different model drawn from the current belief (as opposed to a sampled model per timestep); planning rollouts include noise, such that

x_t = x_{t−1} + M′ φ(x_{t−1}, a_t) + ε_t,  ε_t ∼ N(0, σ²I),  (60)

where M′ is sampled from the current belief. φ is the random Fourier feature function from Rahimi & Recht (2007); we use 200 features with a bandwidth calculated as the mean pairwise distance of the inputs (states and actions), which is 6.0. To learn M, we use Bayesian linear regression where each row of M is modeled as independent. We place a multivariate Normal prior on each of the rows with a prior mean of all 0s and a prior precision of 10⁻⁴ I.

The Cartpole model's initial state distribution for positions and velocities is sampled uniformly from -0.05 to 0.05, with the angle of the pole being π such that it points down. This sets up the swing-up problem. For the episodic one-shot experiment, we perform MPC for 200 timesteps as one trial. 15 trials make one episode, with the dynamical properties of the environment (i.e. gravity) fixed for the duration of the episode. We vary the gravity parameter of the model by selecting gravity values from celestial bodies of the Solar System; we used Earth, Mars, and Neptune at 9.81, 3.72, and 11.15 m/s², respectively.
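To make the planning-rollout step in equation 60 concrete, a minimal sketch is given below; the helper names (sample_M, phi) are assumptions for illustration, and the implementation used for the experiments may differ.

import numpy as np

def rollout_with_sampled_model(x0, actions, sample_M, phi, sigma2, rng):
    # One MPPI planning rollout: a single model M' is drawn per trajectory
    # from the current belief, not per timestep.
    M = sample_M()
    x, traj = x0, [x0]
    for a in actions:                           # planning horizon of 50 above
        noise = rng.normal(0.0, np.sqrt(sigma2), size=x.shape)
        x = x + M @ phi(x, a) + noise           # equation (60)
        traj.append(x)
    return np.stack(traj)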
At the start of a new episode, each method's beliefs are reset to the base prior, and each method proceeds to update its respective beliefs accordingly. BAM retains each trial's datum in memory across episodes.

For the continual learning experiment, we do not inform our agent that the model dynamics have changed, i.e. we never reset the agent's belief to a prior. Instead, we use Bayesian Online Changepoint Detection (BOCD) to discern if the underlying model distribution has changed. BOCD is compared against BAM, both with and without changepoint detection; while BOCD resets to a prior when a change is detected, BAM optimizes for a weight vector over the previously experienced data. The BOCD switching parameter λ for its hazard function was set to 0.11. The agent attempts the task for 60 trials, with the environment experiencing changes 3 times during said trials.

D.2 DOMAIN ADAPTATION WITH ROTATED MNIST

We ran 10 independent Bayesian linear regressions, one for each dimension of the one-hot encoded target. As the prior, we use a multivariate Normal distribution with a prior mean of all 0s and a prior precision of 0.1 I. Similar to the controls experiment, we assume the additive noise is fixed and set to σ² = 10⁻⁴. As regularization had little effect, we set λ = 0.

D.3 NON-STATIONARY BANDITS

For both UCB and UCBAM, we use a confidence-level function of f(t) = 1 + t log²(t). The timescale parameter for BOCD + Thompson sampling is 0.016, which is the expected frequency of the arm switches. The weighting term for Bayesian exponential forgetting + Thompson sampling is 0.8.

D.3.1 DESCRIPTION OF UCBAM

The challenge of bandit settings is the need to explore, especially in the non-stationary setting we devised. As such, UCB is a well-known algorithm for leveraging the uncertainty in the arm values to enable exploration. We combine this frequentist method with BAM as follows. When we assume to 'know' the current best arm value, we exploit it and keep a belief over its distribution with BAM. The signal for whether the best arm is 'known' is whether the likelihood of the current arm's value is higher under our current arm belief or under the naive base prior. If the base prior produces a higher likelihood, we assume the current arm distribution is incorrect (and will be updated with BAM), and we default to the UCB metric for arm selection. This simple combination of methods allows for the exploration benefits of UCB with the quick recognition of high-value arms due to BAM and subsequent exploitation.

Algorithm 3: UCBAM
Data: prior distribution p
K ← number of arms
b = copy(p), empty D, K times  # belief and memory per arm
known ← false
for each iteration do
    if known then
        arm ← thompson(b_{1...K})
    else
        arm ← UCB choice
    end
    v ← pull(arm)
    if log p(v) ≥ log b_arm(v) then
        known ← false
    else
        known ← true
    end
    b_arm = BAM(p, D_{<t}^{arm}, v)  # BAM posterior update
    D_{<t} = [D_{<t}, v]  # add value to memory
end
1. What is the main contribution of the paper, and how does it address the problem of non-stationary environments?
2. How does the proposed framework, Bayes with Adaptive Memory (BAM), differ from traditional recursive Bayesian methods?
3. What are the concerns regarding the practicality of BAM, especially in terms of computational complexity and applicability to real-world scenarios?
4. How does BAM compare to other continual learning techniques, such as variational continual learning (Nguyen et al., 2018)?
5. What is the significance of the name "Bayes with Adaptive Memory," and what does it represent in the context of the paper?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a learning framework called Bayes with Adaptive Memory (BAM), where a recursive adaptation of the Bayes formula is augmented with selection variables W that allow an agent to adaptively choose past experiences to forget. The original recursive Bayes formula assumes stationarity of the data-generating distribution, so it keeps all the past data for updating the posteriors for newly arriving data, and thus always tends to decrease the variance of the posteriors, as it is meant to. However, this can severely fail when the stationarity assumption is violated; when an agent encounters a change in the environment, it should somehow discard the past experience and quickly adapt to new data. BAM explicitly models this procedure with the binary selection variables, and by greedily optimizing those selection variables at each arrival of new data, it quickly adapts to changes of the environment while still leveraging the past experiences. Various experiments with non-stationary environments demonstrate the usefulness of the proposed framework.
Review
The paper is well written and easy to follow. The motivation is clear and the problem is well set. The proposed learning framework is a reasonable solution in principle. The experiments clearly demonstrate the benefit of BAM, especially its ability to adapt to changes of environment. However, I am a bit skeptical about the practicability of the proposed framework. All the elements of the algorithm, including the posterior updates and the greedy selection procedure, assume tractability of the marginal likelihood, which is not true in most non-trivial real-world applications. As the authors stated in the conclusion, one can introduce variational approximations or MCMC to conduct approximate inference, but these are way harder than they look, since each evaluation of p(θ_t|W, D_{<t}) would require an iterative procedure with heavy computation until convergence, so the greedy selection procedure can be prohibitively time-consuming. As the authors pointed out when relating BAM to existing works, and also suggested as future work, one can allow the selection variables W to take any real value in [0, 1]. Then we can resort to gradient-based approximate optimization techniques (e.g., based on variational approximation with Gumbel-softmax), but at this time it is not clear how accurate all such approximate inference techniques would be (especially for high-dimensional models such as deep neural networks applied to large-scale data). One should also think about an alternative prior p(W|θ_t), because the KL divergence needed for the current prior is intractable. So my point is, there are many obstacles to be addressed if we are to extend the current BAM framework to realistic scenarios, and all such issues pose their own research problems that might require contributions significant enough to write standalone papers. Therefore, I think the current submission is quite incomplete, although I agree that the research direction itself is interesting. Another practical aspect worth pondering is how the proposed framework compares to existing continual learning techniques introduced for deep neural networks. There are plenty of works on deep continual learning, where the goal is to learn large-scale deep neural networks while taking a continuous stream of non-stationary task data.
Existing continual learning techniques learn to adaptively forget or retain past experiences, and also try to minimize the cost of learning and the complexity of the model (minimizing unnecessary expansion of the model). In particular, an important consideration in continual learning is that it typically assumes an agent does not have access to the previous data (D_{<t}), but only to the model learned from it. This is a reasonable assumption because keeping all the past data requires huge memory. So the difficulty in continual learning comes from the fact that it should learn a continuously adapting model which keeps a balance between 1) how much to forget and 2) how much to retain from past experiences, without access to the previous data. For instance, variational continual learning (Nguyen et al., 2018) constructs a lightweight summary of previous data that can be used as a representative for subsequent learning. As far as I can see, BAM assumes access to all the previous data whenever updating the posteriors, and this again can be a significant challenge for more realistic learning scenarios. I am also quite confused by the name "Bayes with Adaptive Memory". What does the "memory" stand for? If it is the past data being used for the update of the posteriors, isn't the vanilla recursive Bayes also equipped with memory? The difference between BAM and recursive Bayes is in BAM's use of the readout variable W, but I don't think the variable W itself stands for the "memory".
References
(Nguyen et al., 2018) Nguyen, C. V., Li, Y., Bui, T. D., and Turner, R. E. Variational continual learning. ICLR, 2018.
ICLR
Title The GAN Landscape: Losses, Architectures, Regularization, and Normalization

Abstract Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion. While they were successfully applied to many problems, training a GAN is a notoriously challenging task and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial amount of "tricks". The success in many practical applications coupled with the lack of a measure to quantify the failure modes of GANs resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We reproduce the current state of the art and go beyond it, fairly exploring the GAN landscape. We discuss common pitfalls and reproducibility issues, open-source our code on Github, and provide pre-trained models on TensorFlow Hub.

1 INTRODUCTION

Deep generative models are a powerful class of unsupervised machine learning models. The power of these models was recently harnessed in a variety of applications, including image generation, learned compression, and domain transfer (Isola et al., 2017; Radford et al., 2016; Agustsson et al., 2018; Tschannen et al., 2018). Generative adversarial networks (Goodfellow et al., 2014) are one of the main approaches to learning such models in a fully unsupervised fashion. The GAN framework can be viewed as a two-player game where the first player, the generator, is learning to transform some simple input distribution (usually a standard multivariate Normal or uniform) to a distribution on the space of images, such that the second player, the discriminator, cannot tell whether the samples belong to the true distribution or were synthesized. Both players aim to minimize their own loss and the solution to the game is the Nash equilibrium where neither player can improve their loss unilaterally. This powerful framework can also be derived by minimizing a divergence between the model distribution and the true distribution (Nowozin et al., 2016; Arjovsky et al., 2017).

Training GANs involves solving a minimax problem over the parameters of the generator and the discriminator, which are usually parameterized as deep convolutional neural networks. Consequently, this minimax problem is notoriously hard to solve in practice. As a result, a plethora of loss functions, regularization and normalization schemes, coupled with neural architecture choices, have been proposed (Goodfellow et al., 2014; Salimans et al., 2016; Miyato et al., 2018; Gulrajani et al., 2017; Arjovsky et al., 2017; Mao et al., 2016).

Our contributions. In this work we provide a thorough empirical analysis of these competing approaches, and help researchers and practitioners navigate this space. We first define the GAN landscape – the set of loss functions, normalization and regularization schemes, and the most commonly used architectures. We explore this search space on several modern large-scale data sets by means of hyperparameter optimization, considering both "good" sets of hyperparameters reported in the literature, as well as ones obtained by Gaussian Process regression. By analyzing the impact of the loss function, we conclude that the non-saturating loss is sufficiently stable across data sets, architectures and hyperparameters.
We then proceed to decompose the effect of various normalization and regularization schemes, as well as varying architectures. We show that both the gradient penalty (Gulrajani et al., 2017) and spectral normalization (Miyato et al., 2018) are useful in the context of high-capacity architectures. Finally, we discuss some common pitfalls, reproducibility issues, and practical considerations. We provide reference implementations, including training and evaluation code on Github¹ and provide pre-trained models on TensorFlow Hub.²

2 THE GAN LANDSCAPE

2.1 LOSS FUNCTIONS

Let P denote the target (true) distribution and Q the model distribution. Goodfellow et al. (2014) suggest two loss functions: the minimax GAN and the non-saturating (NS) GAN. In the former the discriminator minimizes the negative log-likelihood for the binary classification task. In the latter the generator maximizes the probability of generated samples being real. In this work we consider the non-saturating loss as it is known to outperform the minimax variant. The corresponding loss functions are

L_D = −E_{x∼P}[log(D(x))] − E_{x̂∼Q}[log(1 − D(x̂))] and L_G = −E_{x̂∼Q}[log(D(x̂))].

In Wasserstein GAN (WGAN) (Arjovsky et al., 2017) the authors propose to consider the Wasserstein divergence instead of the original Jensen-Shannon (JS). In particular, under the optimal discriminator, minimizing the proposed value function with respect to the generator minimizes the Wasserstein distance between P and Q. The drawback is that one has to ensure a 1-Lipschitz discriminator due to the exploited Kantorovich-Rubinstein duality. The corresponding loss functions are

L_D = −E_{x∼P}[D(x)] + E_{x̂∼Q}[D(x̂)] and L_G = −E_{x̂∼Q}[D(x̂)].

Finally, we consider the least-squares loss (LS) which corresponds to minimizing the Pearson χ² divergence between P and Q (Mao et al., 2016). The intuition is that this loss function is smooth and saturates slower than the sigmoid cross-entropy loss of the JS formulation. The corresponding loss functions are

L_D = E_{x∼P}[(D(x) − 1)²] + E_{x̂∼Q}[D(x̂)²] and L_G = E_{x̂∼Q}[(D(x̂) − 1)²].

2.2 REGULARIZATION AND NORMALIZATION OF THE DISCRIMINATOR

Gradient norm penalty. In the context of Wasserstein GANs this penalty can be interpreted as a soft penalty for the violation of 1-Lipschitzness (WGAN GP) (Gulrajani et al., 2017). Hereby, the gradient is evaluated on a linear interpolation between training points and generated samples as a proxy to the optimal coupling. The gradient penalty can also be evaluated around the data manifold, which encourages the discriminator to be piece-wise linear in that region (Dragan) (Kodali et al., 2017). However, the gradient norm penalty can be considered purely as a regularizer for the discriminator, and it was shown that it can improve the performance for other losses (Fedus et al., 2018). Furthermore, the penalty can be scaled by the "confidence" of the discriminator in the context of f-divergences (Roth et al., 2017). A drawback of the gradient penalty (GP) regularization scheme is that it can depend on the model distribution Q, which changes during training. One drawback of Dragan is that it is unclear to what extent the Gaussian assumption for the manifold holds. Finally, computing the gradient norms implies a non-trivial running time penalty – essentially doubling the running time. We also investigate the impact of a regularizer ubiquitous in supervised learning – the L2 penalty on all the weights of the network.
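To make the gradient norm penalty concrete, here is a minimal PyTorch-style sketch of the WGAN GP formulation described above (an illustration, not the reference code of the cited papers):

import torch

def gradient_penalty(D, real, fake):
    # Gradient norm at random interpolates between real and generated samples.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)),
                     device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    norm = grad.flatten(1).norm(2, dim=1)
    return ((norm - 1.0) ** 2).mean()   # penalize deviation from 1-Lipschitz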
Discriminator normalization. Normalizing the discriminator can be useful from both the optimization perspective (more efficient gradient flow, a more stable optimization) and the representation perspective – the representation richness of the layers in a neural network depends on the spectral structure of the corresponding weight matrices (Miyato et al., 2018).

From the optimization point of view, several techniques have found their way into the GAN literature, namely batch normalization (BN) (Ioffe and Szegedy, 2015) and layer normalization (LN) (Ba et al., 2016). Batch normalization in the context of GANs was suggested by Denton et al. (2015) and further popularized by Radford et al. (2016). It normalizes the pre-activations of nodes in a layer to mean β and standard deviation γ, where both β and γ are parameters learned for each node in the layer. The normalization is done on the batch level and for each node separately. In contrast, with layer normalization, all the hidden units in a layer share the same normalization terms β and γ, but different samples are normalized differently (Ba et al., 2016). Layer normalization was first applied in the context of GANs in Gulrajani et al. (2017).

From the representation point of view, one has to consider the neural network as a composition of (possibly non-linear) mappings and analyze their spectral properties. In particular, for the discriminator to be a bounded operator it suffices to control the maximum singular value of each layer's weight matrix. This approach is followed in Miyato et al. (2018), where the authors suggest dividing each weight matrix, including the matrices representing convolutional kernels, by its spectral norm. Furthermore, the authors argue that a key advantage of spectral normalization over competing approaches is that it results in discriminators of higher rank.
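The three normalization schemes can likewise be sketched in NumPy for two-dimensional activations and weight matrices; this is an illustrative sketch under our own naming, not the code used in our experiments. Spectral normalization uses the standard power-iteration approximation, with u a persistent estimate of the leading left singular vector:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Per-node statistics over the batch dimension (Ioffe and Szegedy, 2015).
    mu, var = x.mean(axis=0, keepdims=True), x.var(axis=0, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def layer_norm(x, gamma, beta, eps=1e-5):
    # Per-sample statistics shared across all hidden units (Ba et al., 2016).
    mu, var = x.mean(axis=1, keepdims=True), x.var(axis=1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def spectral_normalize(w, u, n_iters=1):
    # Divide a (reshaped, 2-D) weight matrix by an estimate of its largest
    # singular value obtained by power iteration (Miyato et al., 2018).
    for _ in range(n_iters):
        v = w.T @ u
        v = v / (np.linalg.norm(v) + 1e-12)
        u = w @ v
        u = u / (np.linalg.norm(u) + 1e-12)
    sigma = u @ w @ v  # approximates the spectral norm of w
    return w / sigma, u
```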
2.3 GENERATOR AND DISCRIMINATOR ARCHITECTURE

We explore two classes of architectures in this study: deep convolutional generative adversarial networks (DCGAN) (Radford et al., 2016) and residual networks (ResNet) (He et al., 2016), both of which are ubiquitous in GAN research. Recently, Miyato et al. (2018) defined a variation of DCGAN, the so-called SNDCGAN. Apart from minor updates (cf. Section 4) the main difference to DCGAN is the use of an eight-layer discriminator network. The details of both networks are summarized in Table 3. The other architecture, ResNet19, is an architecture with five ResNet blocks in the generator and six ResNet blocks in the discriminator, that can operate on 128 × 128 images. We follow the ResNet setup from Miyato et al. (2018), with the small difference that we simplified the design of the discriminator. The detailed parameters of the discriminator and generator are summarized in Table 4a and Table 4b. With this setup we were able to reproduce the current state-of-the-art results. An ablation study on various ResNet modifications is available in the Appendix.

2.4 EVALUATION METRICS

We focus on several recently proposed metrics well suited to the image domain. For an in-depth overview of quantitative metrics we refer the reader to (Borji, 2018).

Inception Score (IS). Proposed by Salimans et al. (2016), IS offers a way to quantitatively evaluate the quality of generated samples. Intuitively, the conditional label distribution of samples containing meaningful objects should have low entropy, and the variability of the samples should be high, which can be expressed as
$$\mathrm{IS} = \exp\bigl(\mathbb{E}_{x\sim Q}[d_{KL}(p(y \mid x),\, p(y))]\bigr).$$
The authors found that this score is well-correlated with scores from human annotators. Drawbacks include insensitivity to the prior distribution over labels and not being a proper distance.

Fréchet Inception Distance (FID). As an alternative, Heusel et al. (2017) proposed the Fréchet Inception Distance (FID). Samples from P and Q are first embedded into a feature space (a specific layer of InceptionNet). Then, assuming that the embedded data follows a multivariate Gaussian distribution, the mean and covariance are estimated. Finally, the Fréchet distance between these two Gaussians is computed, i.e.
$$\mathrm{FID} = \lVert\mu_x - \mu_y\rVert_2^2 + \mathrm{Tr}\bigl(\Sigma_x + \Sigma_y - 2(\Sigma_x \Sigma_y)^{1/2}\bigr),$$
where $(\mu_x, \Sigma_x)$ and $(\mu_y, \Sigma_y)$ are the mean and covariance of the embedded samples from P and Q, respectively. The authors argue that FID is consistent with human judgment and more robust to noise than IS. Furthermore, the score is sensitive to the visual quality of generated samples – introducing noise or artifacts in the generated samples will increase the FID. In contrast to IS, FID can detect intra-class mode dropping, i.e. a model that generates only one image per class can score a perfect IS, but will suffer from a high FID (Lucic et al., 2018). Bińkowski et al. (2018) argued that FID has no unbiased estimator and suggest the Kernel Inception Distance (KID) instead. In Appendix B we empirically compare KID to FID and observe that both metrics are very strongly correlated (Spearman rank-order correlation coefficient of 0.994 for the LSUN-BEDROOM and 0.995 for the CELEBA-HQ-128 data sets). As a result we focus on FID as it is likely to result in the same ranking.

Multi-scale Structural Similarity for Image Quality (MS-SSIM) and Diversity. Critical issues in GANs are mode collapse and mode dropping – failing to capture a mode, or low diversity of generated samples from a given mode. The MS-SSIM score (Wang et al., 2003) is used for measuring the similarity of two images, where a higher MS-SSIM score indicates more similar images. Several recent works suggest using the average pairwise MS-SSIM score within a given class as a proxy for the diversity of generated samples (Odena et al., 2017; Fedus et al., 2018). The drawback of this approach is that we do not know the class corresponding to the generated sample, so it is usually applied to one-class data sets, such as CELEBA-HQ-128. In this work we use the same setup as in Fedus et al. (2018). In particular, given a batch size b, we compute the average pairwise MS-SSIM score on 5 batches, i.e. $5 \cdot b(b-1)/2$ image pairs in total. We stress that the diversity should only be taken into account together with the FID and IS metrics.
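Both scores follow directly from the formulas above. The sketch below is an illustrative NumPy/SciPy version under our own naming, assuming the InceptionNet class probabilities and embeddings have been computed elsewhere; it is not the evaluation code behind the reported numbers:

```python
import numpy as np
from scipy import linalg

def inception_score(probs, eps=1e-12):
    # probs: [N, n_classes] rows p(y | x) from a pretrained classifier.
    p_y = probs.mean(axis=0, keepdims=True)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

def fid(feat_real, feat_fake):
    # feat_*: [N, d] InceptionNet embeddings of real and generated samples.
    mu_x, mu_y = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    sigma_x = np.cov(feat_real, rowvar=False)
    sigma_y = np.cov(feat_fake, rowvar=False)
    covmean = linalg.sqrtm(sigma_x @ sigma_y).real  # drop tiny imaginary parts
    return float(np.sum((mu_x - mu_y) ** 2)
                 + np.trace(sigma_x + sigma_y - 2.0 * covmean))
```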
2.5 DATA SETS

We consider three data sets, namely CIFAR10, CELEBA-HQ-128, and LSUN-BEDROOM. The LSUN-BEDROOM data set (Yu et al., 2015) contains slightly more than 3 million images.³ We randomly partition the images into a train and test set, whereby we use 30588 images as the test set. Secondly, we use the CELEBA-HQ data set of 30k images (Karras et al., 2018). We use the 128 × 128 × 3 version obtained by running the code provided by the authors.⁴ We use 3000 examples as the test set and the remaining examples as the training set. Finally, we also include the CIFAR10 data set, which contains 70K images (32 × 32 × 3), partitioned into 60000 training instances and 10000 testing instances. The baseline FID scores are 12.6 for CELEBA-HQ-128, 3.8 for LSUN-BEDROOM, and 5.19 for CIFAR10. Details on FID computation are presented in Section 4.

³The images are preprocessed to 128 × 128 × 3 using TensorFlow resize_image_with_crop_or_pad. ⁴Available online at https://github.com/tkarras/progressive_growing_of_gans.

2.6 EXPLORING THE GAN LANDSCAPE

The search space for GANs is prohibitively large: exploring all combinations of all losses, normalization and regularization schemes, and architectures is outside of the practical realm. Instead, in this study we analyze several slices of this tensor for each data set. In particular, to ensure that we can reproduce existing results, we perform a study over a subset of this tensor on CIFAR10. We then proceed to analyze the performance of these models across CELEBA-HQ-128 and LSUN-BEDROOM. In Section 3.1 we fix everything but the loss. In Section 3.2 we fix everything but the regularization and normalization scheme. Finally, in Section 3.3 we fix everything but the architecture. This allows us to decouple some of these design choices and provide some insight on what matters most.

As noted in Lucic et al. (2018), one major issue preventing further progress is hyperparameter tuning – currently, the community has converged to a small set of parameter values which work on some data sets, and may completely fail on others. In this study we combine the best hyperparameter settings found in the literature (Miyato et al., 2018), and perform Gaussian Process regression in the bandit setting (Srinivas et al., 2010) to possibly uncover better hyperparameter settings. We then consider the top performing models and discuss the impact of the computational budget. We summarize the fixed hyperparameter settings in Table 1a, which contains the "good" parameters reported in recent publications (Fedus et al., 2018; Miyato et al., 2018; Gulrajani et al., 2017). In particular, we consider the cross product of these parameters to obtain 24 hyperparameter settings to reduce the bias. Finally, to provide a fair comparison, we perform Gaussian Process optimization in the bandit setting (Srinivas et al., 2010) on the parameter ranges provided in Table 1b. We run 12 rounds of the optimization (i.e. we communicate with the oracle 12 times), each with a batch of 10 hyperparameter sets selected based on the FID scores from the results of the previous iterations. As we explore the number of discriminator updates per generator update (1 or 5), this leads to an additional 240 hyperparameter settings, which in some cases outperform the previously known settings. The batch size is set to 64 for all experiments. We use a fixed number of discriminator update steps: 100K for the LSUN-BEDROOM and CELEBA-HQ-128 data sets, and 200K for the CIFAR10 data set. We apply the Adam optimizer (Kingma and Ba, 2015).

3 RESULTS AND DISCUSSION

Given that there are four major components (loss, architecture, regularization, normalization) to analyze for each data set, it is infeasible to explore the whole landscape. Hence, we opt for a more pragmatic solution – we keep some dimensions fixed, and vary the others. For each experiment we highlight three aspects: (1) the FID distribution of the top 5% of the trained models, (2) the corresponding sample diversity score, and (3) the tradeoff between the computational budget (i.e. the number of models to train) and model quality in terms of FID. Each model was retrained 5 times with a different random seed and we report the median score. The variance for models obtained by Gaussian Process regression is handled implicitly, so we train each model once.
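A minimal sketch of this experimental protocol is given below; the grid values are placeholders rather than the exact "good" settings of Table 1a, and `disc_step`, `gen_step`, and `run` stand in for the actual Adam updates and the train-and-evaluate routine:

```python
import itertools
import numpy as np

# Placeholder stand-in for the cross product over Table 1a.
GRID = {
    "learning_rate": [1e-4, 2e-4],
    "beta1": [0.5, 0.9],
    "lambda_reg": [1.0, 10.0],
    "n_disc": [1, 5],  # discriminator updates per generator update
}

def settings(grid):
    # Enumerate the cross product of all hyperparameter values.
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

def train(setting, disc_step, gen_step, sample_real, sample_noise,
          n_steps=100_000, batch_size=64):
    # Alternating updates with a fixed discriminator step budget (Section 2.6).
    for _ in range(n_steps):
        for _ in range(setting["n_disc"]):
            disc_step(sample_real(batch_size), sample_noise(batch_size))
        gen_step(sample_noise(batch_size))

def median_fid(run, setting, n_seeds=5):
    # Reporting protocol of Section 3: median FID over five random seeds.
    return float(np.median([run(setting, seed) for seed in range(n_seeds)]))
```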
3.1 IMPACT OF THE LOSS FUNCTION

Here the loss is either the non-saturating loss (NS) (Goodfellow et al., 2014), the least-squares loss (LS) (Mao et al., 2016), or the Wasserstein loss (WGAN) (Arjovsky et al., 2017). We use ResNet19 with the generator and discriminator architectures detailed in Table 4a and Table 4b. We consider the most prominent normalization and regularization approaches: the gradient penalty (Gulrajani et al., 2017) and spectral normalization (Miyato et al., 2018). Both studies were performed on CELEBA-HQ-128 and LSUN-BEDROOM with the hyperparameter settings shown in Table 1a.

The results are presented in Figure 1. We observe that the non-saturating loss is stable over both data sets. Spectral normalization improves the quality of the model on both data sets. Similarly, the gradient penalty can help improve the quality of the model, but finding a good regularization tradeoff is non-trivial and requires a high computational budget. Models using the GP penalty benefit from a 5:1 ratio of discriminator to generator updates, as suggested by Gulrajani et al. (2017). We also performed a study on the hinge loss (Miyato et al., 2018) and present it in the Appendix.

3.2 IMPACT OF REGULARIZATION AND NORMALIZATION

The goal of this study is to compare the relative performance of various regularization and normalization methods presented in the literature. To this end, and based on the loss study, we fix the loss to the non-saturating loss (Goodfellow et al., 2014). We use ResNet19 with the generator and discriminator architectures described in Table 4a and Table 4b. Finally, we consider batch normalization (BN) (Ioffe and Szegedy, 2015), layer normalization (LN) (Ba et al., 2016), spectral normalization (SN), the gradient penalty (GP) (Gulrajani et al., 2017), the Dragan penalty (DR) (Kodali et al., 2017), and L2 regularization. We consider both CELEBA-HQ-128 and LSUN-BEDROOM with the hyperparameter settings shown in Table 1a and Table 1b.

The results are presented in Figure 2. We observe that adding batch normalization to the discriminator hurts the performance. Secondly, the gradient penalty can help, but it does not stabilize the training. In fact, it is non-trivial to strike a balance between the loss and the regularization strength. Spectral normalization helps improve the model quality and is more computationally efficient than the gradient penalty. This is consistent with recent results in Zhang et al. (2018). Similarly to the loss study, models using the GP penalty benefit from a 5:1 ratio of discriminator to generator updates. Furthermore, in a separate ablation study we observed that running the optimization procedure for an additional 100K steps is likely to increase the performance of the models with the GP penalty.

Impact of Simultaneous Regularization and Normalization. Given the folklore that the Lipschitz constant of the discriminator is critical for the performance, one may expect that simultaneous regularization and normalization could improve model quality. To quantify this effect, we fix the loss to the non-saturating loss (Goodfellow et al., 2014), use the ResNet19 architecture (as above), and combine several normalization and regularization schemes, with the hyperparameter settings shown in Table 1a coupled with 24 randomly selected parameters. The results are presented in Figure 3. We observe that one may benefit from additional regularization and normalization. However, a lot of computational effort has to be invested for somewhat marginal gains in FID. Nevertheless, given enough computational budget we advocate simultaneous regularization and normalization – spectral normalization and layer normalization seem to perform well in practice.
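The budget/quality tradeoff highlighted in these studies can be quantified, for instance, as the expected best FID among k randomly chosen trained models. The sketch below is our own formulation of such a curve, not necessarily the exact procedure behind the figures:

```python
import numpy as np

def min_fid_vs_budget(fids, budgets, n_trials=1000, seed=0):
    # Expected best FID when one can afford to train only k of the models,
    # estimated by repeatedly subsampling k results without replacement.
    rng = np.random.default_rng(seed)
    fids = np.asarray(fids)
    return {k: float(np.mean([rng.choice(fids, size=k, replace=False).min()
                              for _ in range(n_trials)]))
            for k in budgets}
```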
3.3 IMPACT OF GENERATOR AND DISCRIMINATOR ARCHITECTURES

An interesting practical question is whether our findings also hold for a different model capacity. To this end, we also perform a study on SNDCGAN from Miyato et al. (2018). We consider the non-saturating GAN loss, the gradient penalty, and spectral normalization. While for smaller architectures regularization is not essential (Lucic et al., 2018), the regularization and normalization effects might become more relevant due to deeper architectures and optimization considerations. The results are presented in Figure 4. We observe that both architectures achieve comparable results and benefit from regularization and normalization. Spectral normalization strongly outperforms the baseline for both architectures.

4 COMMON PITFALLS

In this section we focus on several pitfalls we encountered while trying to reproduce existing results and provide a fair and accurate comparison.

Metrics. There already seems to be a divergence in how the FID score is computed: (1) some authors report the score on training data, yielding a FID between 50k training and 50k generated samples (Unterthiner et al., 2018); (2) some opt to report the FID based on 10k test samples and 5k generated samples and use a custom implementation (Miyato et al., 2018); (3) finally, Lucic et al. (2018) report the score with respect to the test data, in particular the FID between 10k test samples and 10k generated samples. These subtle differences will result in a mismatch between the reported FIDs, in some cases of more than 10%. We argue that FID should be computed with respect to the test data set, and we use 10k test samples and 10k generated samples on CIFAR10 and LSUN-BEDROOM, and 3k vs 3k on CELEBA-HQ-128, as in Lucic et al. (2018). Similarly, there are several ways to compute a diversity score using MS-SSIM and we follow the approach from Fedus et al. (2018). We provide the implementation details in Section G of the Appendix.

Details of neural architectures. Even in popular architectures, like ResNet, there is still a number of design decisions one needs to make that are often omitted from the reported results. These include the exact design of the ResNet cell (order of layers, when ReLU is applied, when to upsample and downsample, how many filters to use). Some of these differences might lead to potentially unfair comparisons. As a result, we suggest using the architectures presented within this work as a solid baseline. An ablation study on various ResNet modifications is available in the Appendix.

Data sets. A common issue is related to data set processing – does LSUN-BEDROOM always correspond to the same data set? In most cases the precise algorithm for upscaling or cropping is not clear, which introduces inconsistencies between results on the "same" data set.

Implementation details and non-determinism. One major issue is the mismatch between the algorithm presented in a paper and the code provided online. We are aware that there is an embarrassingly large gap between a good implementation and a bad implementation of a given model. Hence, when no code is available, one is forced to guess which modifications were done. Another particularly tricky issue is removing randomness from the training process. After one fixes the data ordering and the initial weights, obtaining the same score by training the same model twice is non-trivial due to randomness present in certain GPU operations (Chetlur et al., 2014). Disabling the optimizations causing the non-determinism often results in an order of magnitude running time penalty. While each of these issues taken in isolation seems minor, they compound to create a mist which introduces friction in practical applications and the research process (Sculley et al., 2018).
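As an illustration of how much has to be pinned down, the seeding boilerplate below fixes the sources of randomness under the user's control in the TensorFlow 1.x API (a sketch of common practice, not our released code); as the comment notes, fixed seeds alone do not make GPU training deterministic:

```python
import os
import random

import numpy as np
import tensorflow as tf

def fix_seeds(seed=0):
    # Fixes data ordering and initial weights. Certain cuDNN kernels remain
    # non-deterministic, so two runs can still diverge unless those
    # optimizations are disabled at a large runtime cost.
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    tf.set_random_seed(seed)  # tf.random.set_seed(seed) in TensorFlow 2
```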
5 RELATED WORK

A recent large-scale study on GANs and Variational Autoencoders was presented in Lucic et al. (2018). The authors consider several loss functions and regularizers, and study the effect of the loss function on the FID score, with low-to-medium complexity data sets (MNIST, CIFAR10, CELEBA) and a single (InfoGAN style) architecture. In this limited setting, the authors found that there is no statistically significant difference between recently introduced models and the original non-saturating GAN. A study of the effects of gradient-norm regularization in GANs was recently presented in Fedus et al. (2018). The authors posit that the gradient penalty can also be applied to the non-saturating GAN, and that, to a limited extent, it reduces the sensitivity to hyperparameter selection. In a recent work on spectral normalization, the authors perform a small study of the competing regularization and normalization approaches (Miyato et al., 2018). We are happy to report that we could reproduce these results and we present them in the Appendix. Inspired by these works and building on the available open-source code from Lucic et al. (2018), we take one additional step in all dimensions considered therein: more complex neural architectures, more complex data sets, and more involved regularization and normalization schemes.

6 CONCLUSION

In this work we study the GAN landscape: losses, regularization and normalization schemes, and neural architectures, and their impact on the quality of generated samples, which we assess by recently introduced quantitative metrics. Our fair and thorough empirical evaluation suggests that one should consider the non-saturating GAN loss and spectral normalization as default choices when applying GANs to a new data set. Given additional computational budget, we suggest adding the gradient penalty from Gulrajani et al. (2017) and training the model until convergence. Furthermore, additional marginal gains can be obtained by combining normalization and regularization, empirically confirming the importance of the Lipschitz constant of the discriminator. Moreover, both types of architectures proposed up to this point perform reasonably well. A separate ablation study uncovered that most of the tricks applied in the ResNet style architectures lead to marginal changes in the quality and should be avoided due to the high computational cost. As a result of this large-scale study we identify the common pitfalls standing in the way of accurate and fair comparison and propose concrete actions to demystify future results – issues with metrics, data set preprocessing, non-determinism, and missing implementation details are particularly striking. We hope that this work, together with the open-sourced reference implementations and trained models, will serve as a solid baseline for future GAN research.

A FID AND INCEPTION SCORES ON CIFAR10

We present an empirical study with the SNDCGAN and ResNet CIFAR architectures on CIFAR10 in Figure 5 and Figure 6.
In addition to Section 3.1, we evaluate one more kind of loss on CIFAR10. Here HG, NS, and WGAN stand for the hinge loss, non-saturating loss, and Wasserstein loss, respectively. We observe that the hinge loss performs very similarly to the non-saturating loss.

B COMPARISON OF FID AND KID METRICS

The KID metric introduced by Bińkowski et al. (2018) is an alternative to FID. We use models from our regularization and normalization study (see Section 3.2) to compare both metrics. Here, by model we denote everything that needs to be specified for the training – including all hyperparameters, like the learning rate, λ, Adam's β, etc. The Spearman rank-order correlation coefficient between the KID and FID scores is approximately 0.994 for the LSUN-BEDROOM and 0.995 for the CELEBA-HQ-128 data sets. To evaluate the practical setting of selecting several best models, we compare the intersection between the set of "best K models by FID" and the set of "best K models by KID" for K ∈ {5, 10, 20, 50, 100}. The results are summarized in Table 2. This experiment suggests that the FID and KID metrics are very strongly correlated, and for practical applications one can choose either of them. Also, the conclusions from our studies based on FID should transfer to studies based on KID.

C ARCHITECTURES

C.1 SNDCGAN

We used the same architecture as Miyato et al. (2018), with the parameters copied from the GitHub page.⁵ In Table 3a and Table 3b, we describe the operations in the layer column in order. The kernel size is described in the format [filter_h, filter_w, stride], the input shape is h × w, and the output shape is h × w × channels. The slopes of all lReLU functions are set to 0.1. The input shape h × w is 128 × 128 for CELEBA-HQ-128 and LSUN-BEDROOM, and 32 × 32 for CIFAR10.

⁵https://github.com/pfnet-research/chainer-gan-lib

C.2 RESNET ARCHITECTURE

The ResNet19 architecture is described in Table 4. The RS column stands for the resampling of the residual block, with downscale (D) / upscale (U) / none (-) settings. MP stands for mean pooling and BN for batch normalization. The ResBlock is defined in Table 5. The addition layer merges two paths by adding them. The first path is a shortcut layer with exactly one convolution operation, while the second path consists of two convolution operations. The downscale layer and upscale layer are marked in Table 5. We used average pooling with kernel [2, 2, 2] for downscaling, after the convolution operation. We used unpool from https://github.com/tensorflow/tensorflow/issues/2169 for upscaling, before the convolution operation. h and w are the input shape to the ResNet block; the output shape depends on the RS parameter. ci and co are the input and output channels for a ResNet block. Table 6 describes the ResNet CIFAR architecture we used in Figure 5 for reproducing the existing results. Note that RS is set to none for the third and fourth ResBlock in the discriminator. In this case, we used the same ResNet block defined in Table 5 without resampling.
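A simplified functional sketch of this two-path block follows; the convolutions, activation, and resampling operations are passed in as placeholders, and the exact layer ordering (given in Table 5) is only approximated here:

```python
def res_block(x, shortcut_conv, conv_a, conv_b, act, down=None, up=None):
    # Two-path ResBlock: a one-convolution shortcut added to a
    # two-convolution main path; upscaling happens before a convolution,
    # downscaling (average pooling) after it.
    h = up(x) if up is not None else x
    h = conv_b(act(conv_a(act(h))))
    s = shortcut_conv(up(x) if up is not None else x)
    if down is not None:
        h, s = down(h), down(s)
    return h + s
```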
D RESNET ARCHITECTURE ABLATION STUDY

We have noticed six minor differences in the ResNet architecture compared to the implementation from https://github.com/pfnet-research/chainer-gan-lib/blob/master/common/net.py (Miyato et al., 2018). We performed an ablation study to verify the impact of these differences. Figure 7 shows the impact of the ablation study, with the details described as follows.

• DEFAULT: ResNet CIFAR architecture with spectral normalization and the non-saturating GAN loss.
• SKIP: Use the input as the output for the shortcut connection in the discriminator ResBlock. By default it was a conv layer with a 3x3 kernel.
• CIN: Use ci for the discriminator ResBlock hidden-layer output channels. By default it was co in our setup, while Miyato et al. (2018) used co for the first ResBlock and ci for the rest.
• OPT: Use an optimized setup for the first discriminator ResBlock, which includes: (1) no ReLU, (2) a conv layer for the shortcut connection, (3) using co instead of ci in the ResBlock.
• CIN OPT: Use CIN and OPT together. This means the first ResBlock is optimized while the remaining ResBlocks use ci for the hidden output channels.
• SUM: Use reduce_sum for the discriminator output. By default it was reduce_mean.
• TAN: Use tanh for the generator output, as well as the range [-1, 1] for the discriminator input. By default it was sigmoid with discriminator input range [0, 1].
• EPS: Use a bigger epsilon of 2e-5 for generator batch normalization. By default it was 1e-5 in TensorFlow.
• ALL: Apply all of the above differences together.

In the ablation study, the CIN experiment obtained the worst FID score. Combined with OPT, the CIN results improved to the same level as the others, which is reasonable because the first block has three input channels, which become a bottleneck for the optimization. Hence, using OPT and CIN together performs as well as the others. Overall, the impact of these differences is minor according to the study on CIFAR10.

E RECOMMENDED HYPERPARAMETER SETTINGS

To make future GAN training simpler, we propose a set of best parameters for three setups: (1) best parameters without any regularizer, (2) best parameters with only one regularizer, and (3) best parameters with at most two regularizers. Table 7, Table 8 and Table 9 summarize the top 2 parameter settings for the SNDCGAN, ResNet19, and ResNet CIFAR architectures, respectively. Models are ranked according to the median FID score over five different random seeds with the fixed hyperparameters in Table 1a. Note that ranking models according to the best FID score across seeds would achieve better but less stable results. Gaussian Process optimization hyperparameters are not included in this table. For the ResNet19 architecture with at most two regularizers, we ran each setting only once due to the computational overhead. To show model stability, we list the best FID score out of five seeds from the same parameters in the column "best". Spectral normalization clearly outperforms the other normalizers on the SNDCGAN and ResNet CIFAR architectures, while on ResNet19 both layer normalization and spectral normalization work well. To visualize the FID scores on each data set, Figure 8, Figure 9 and Figure 10 show examples generated by the GANs. We select the examples from the run with the best FID, and then show two more plots at increasing FID scores.

F WHICH PARAMETERS REALLY MATTER?

For each architecture and hyperparameter we estimate its impact on the final FID. Figure 11 presents heatmaps for the hyperparameters, namely the learning rate, β1, β2, n_disc, and λ, for each combination of neural architecture and data set.

G VARIATIONS OF MS-SSIM

We used the MS-SSIM scorer from TensorFlow with the default power factors (Wang et al., 2003). Note that the default filter size for each scale layer is 11, so the minimum image edge is 11 × 2⁴ = 176. To adapt it to the CELEBA-HQ-128 data set with size 128 × 128, we used the minimum of the filter size 11 and the image size in the last scale layer to allow the computation, following previous work (Fedus et al., 2018).
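The diversity protocol of Section 2.4 then amounts to averaging a pairwise scorer over batches. In the sketch below, `msssim_fn` is a placeholder for the TensorFlow MS-SSIM scorer with the filter-size adaptation described above:

```python
import itertools
import numpy as np

def avg_pairwise_msssim(batches, msssim_fn):
    # Average pairwise MS-SSIM over 5 batches of size b, i.e.
    # 5 * b * (b - 1) / 2 image pairs in total (Fedus et al., 2018).
    scores = [msssim_fn(batch[i], batch[j])
              for batch in batches
              for i, j in itertools.combinations(range(len(batch)), 2)]
    return float(np.mean(scores))
```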
1. What is the main contribution of the paper regarding GAN training? 2. What are the strengths and weaknesses of the proposed approach? 3. How does the reviewer assess the clarity and focus of the paper's content? 4. Are there any concerns regarding the experimental design and results? 5. Does the paper provide sufficient explanations and mathematical formulations of the techniques used?
Review
Review The paper studies several different techniques for training GANs: the architecture chosen, the loss function of the discriminator and generator, and training techniques: normalization methods, the ratio between updates of the discriminator and generator, and regularization. The method performs an empirical training study on three image datasets, modifying the training procedure (e.g. changing one of the parameters) and using different metrics to evaluate the performance of the trained network. Since the space of possible hyper-parameters, training algorithms, loss functions and network architectures is huge, the authors set a default training procedure, and in each numerical experiment freeze all techniques and parameters except for one or two, which they modify and evaluate.

The results of the paper do not give major insights into what the preferred techniques for training GANs are, and certainly not into why and under what circumstances they will work. The authors recommend using the non-saturating GAN loss and spectral normalization when training on new datasets, because these techniques achieved good performance metrics in most experiments. But there is no attempt to generalize the findings (e.g. new datasets not from the original study, changing other parameters and then evaluating again whether these techniques help, etc.), and it is not clear whether the improvement in performance is statistically significant, how robust it is to changes in other parameters, etc. The authors also rely mostly on the FID metric, but do not show if and how there is improvement upon visual inspection of the generated images (i.e. is resolution improved, is the fraction of images that look clearly 'unnatural' reduced, etc.).

The writing is understandable for the most part, but the paper seems to lack focus - there is no clear take-home message. The authors use numerous jargon terms to describe the techniques studied (e.g. Dragan penalty, gradient penalty, spectral normalization, Gaussian Process regression in the bandit setting) but do not explain them, give mathematical formulations, or offer insights into their advantages/disadvantages, making it hard for the non-expert reader to understand what these techniques are and why they are introduced.

With a lack of clear novel insights, or at least a more systematic study of the 'winning' techniques on additional datasets and a sensitivity analysis, the paper does not give a valuable enough contribution to the field to merit publication.
ICLR
Title The GAN Landscape: Losses, Architectures, Regularization, and Normalization Abstract Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion. While they were successfully applied to many problems, training a GAN is a notoriously challenging task and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial amount of “tricks”. The success in many practical applications coupled with the lack of a measure to quantify the failure modes of GANs resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We reproduce the current state of the art and go beyond fairly exploring the GAN landscape. We discuss common pitfalls and reproducibility issues, open-source our code on Github, and provide pre-trained models on TensorFlow Hub. 1 INTRODUCTION Deep generative models are a powerful class of unsupervised machine learning models. The power of these models was recently harnessed in a variety of applications, including image generation, learned compression, and domain transfer (Isola et al., 2017; Radford et al., 2016; Agustsson et al., 2018; Tschannen et al., 2018). Generative adversarial networks (Goodfellow et al., 2014) are one of the main approaches to learning such models in a fully unsupervised fashion. The GAN framework can be viewed as a two-player game where the first player, the generator, is learning to transform some simple input distribution (usually a standard multivariate Normal or uniform) to a distribution on the space of images, such that the second player, the discriminator, cannot tell whether the samples belong to the true distribution or were synthesized. Both players aim to minimize their own loss and the solution to the game is the Nash equilibrium where neither player can improve their loss unilaterally. This powerful framework can also be derived by minimizing a divergence between the model distribution and the true distribution (Nowozin et al., 2016; Arjovsky et al., 2017). Training GANs involves solving a minimax problem over the parameters of the generator and the discriminator which are usually parameterized as deep convolutional neural networks. Consequently, this minimax problem is notoriously hard to solve in practice. As a result, a plethora of loss functions, regularization and normalization schemes, coupled with neural architecture choices, have been proposed (Goodfellow et al., 2014; Salimans et al., 2016; Miyato et al., 2018; Gulrajani et al., 2017; Arjovsky et al., 2017; Mao et al., 2016). Our contributions. In this work we provide a thorough empirical analysis of these competing approaches, and help the researchers and practitioners navigate this space. We first define the GAN landscape – the set of loss functions, normalization and regularization schemes, and the most commonly used architectures. We explore this search space on several modern large-scale data sets by means of hyperparameter optimization, considering both “good” sets of hyperparameters reported in the literature, as well as ones obtained by Gaussian Process regression. By analyzing the impact of the loss function, we conclude that the non-saturating loss is sufficiently stable across data sets, architectures and hyperparameters. 
We then proceed to decompose the effect of various normalization and regularization schemes, as well as varying architectures. We show that both gradient penalty (Gulrajani et al., 2017) as well as spectral normalization (Miyato et al., 2018) are useful in the context of high-capacity architectures. Finally, we discuss some common pitfalls, reproducibility issues, and practical considerations. We provide reference implementations, including training and evaluation code on Github1 and provide pre-trained models on TensorFlow Hub.2 2 THE GAN LANDSCAPE 2.1 LOSS FUNCTIONS Let P denote the target (true) distribution and Q the model distribution. Goodfellow et al. (2014) suggest two loss functions: the minimax GAN and the non-saturating (NS) GAN. In the former the discriminator minimizes the negative log-likelihood for the binary classification task. In the latter the generator maximizes the probability of generated samples being real. In this work we consider the non-saturating loss as it is known to outperform the minimax variant. The corresponding loss functions are LD = −Ex∼P [log(D(x))]− Ex̂∼Q[log(1−D(x̂))] and LG = −Ex̂∼Q[log(D(x̂))]. In Wasserstein GAN (WGAN) (Arjovsky et al., 2017) the authors propose to consider the Wasserstein divergence instead of the original Jensen-Shannon (JS). In particular, under the optimal discriminator, minimizing the proposed value function with respect to the generator minimizes the Wasserstein distance between P and Q. The drawback is that one has to ensure a 1-Lipschitz discriminator due to exploited Kantorovich-Rubenstein duality. The corresponding loss functions are LD = −Ex∼P [D(x)] + Ex̂∼Q[D(x̂)] and LG = −Ex̂∼Q[D(x̂)]. Finally, we consider the least-squares loss (LS) which corresponds to minimizing the Pearson χ2 divergence between P and Q (Mao et al., 2016). The intuition is that this loss function is smooth and saturates slower than the sigmoid cross-entropy loss of the JS formulation. The corresponding loss functions are LD = −Ex∼P [(D(x)− 1)2] + Ex̂∼Q[D(x̂)2] and LG = −Ex̂∼Q[(D(x̂)− 1)2]. 2.2 REGULARIZATION AND NORMALIZATION OF THE DISCRIMINATOR Gradient norm penalty. In the context of Wasserstein GANs this penalty can be interpreted as a soft penalty for the violation of 1-Lipschitzness (WGAN GP) (Gulrajani et al., 2017). Hereby, the gradient is evaluated on a linear interpolation between training points and generated samples as a proxy to the optimal coupling. The gradient penalty can also be evaluated around the data manifold which encourages the discriminator to be piece-wise linear in that region (Dragan) (Kodali et al., 2017). However, the gradient norm penalty can be considered purely as a regularizer for the discriminator and it was shown that it can improve the performance for other losses (Fedus et al., 2018). Furthermore, the penalty can be scaled by the “confidence” of the discriminator in the context of f-divergences (Roth et al., 2017). A drawback of gradient penalty (GP) regularization scheme is that it can depend on the model distribution Q which changes during training. One drawback of Dragan is that it is unclear to which extent the Gaussian assumption for the manifold holds. Finally, computing the gradient norms implies a non-trivial running time penalty – essentially doubling the running time. We also investigate the impact of a regularizer ubiquitous in supervised learning – the L2 penalty on all the weights of the network. Discriminator normalization. 
Normalizing the discriminator can be useful from both the optimization perspective (more efficient gradient flow, a more stable optimization), as well as from the representation perspective – the representation richness of the layers in a neural network depends on the spectral structure of the corresponding weight matrices (Miyato et al., 2018). From the optimization point of view, several techniques have found their way into the GAN literature, namely batch normalization (BN) (Ioffe and Szegedy, 2015) and layer normalization (LN) (Ba et al., 2016). Batch normalization in the context of GANs was suggested by Denton et al. (2015) and further popularized by Radford et al. (2016). It normalizes the pre-activations of nodes in a layer to mean β and standard deviation γ, where both β and γ are parameters learned for each node in the layer. The normalization is done on the batch level and for each node separately. In contrast, with Layer normalization, all the hidden units in a layer share the same normalization terms β and γ, but different 1Link removed to preserve anonymity. 2Link removed to preserve anonymity. samples are normalized differently (Ba et al., 2016). Layer normalization was first applied in the context of GANs in Gulrajani et al. (2017). From the representation point of view, one has to consider the neural network as a composition of (possibly non-linear) mappings and analyze their spectral properties. In particular, for the discriminator to be a bounded linear operator it suffices to control the maximum singular value. This approach is followed in Miyato et al. (2018) where the authors suggest dividing each weight matrix, including the matrices representing convolutional kernels, by their spectral norm. Furthermore, the authors argue that a key advantage of spectral normalization over competing approaches is that it results in discriminators of higher rank. 2.3 GENERATOR AND DISCRIMINATOR ARCHITECTURE We explore two classes of architectures in this study: deep convolutional generative adversarial networks (DCGAN) (Radford et al., 2016) and residual networks (ResNet) (He et al., 2016), both of which are ubiquitous in GAN research. Recently, Miyato et al. (2018) defined a variation of DCGAN, so called SNDCGAN. Apart from minor updates (cf. Section 4) the main difference to DCGAN is the use of an eight-layer discriminator network. The details of both networks are summarized in Table 3. The other architecture, ResNet19, is an architecture with five ResNet blocks in the generator and six ResNet blocks in the discriminator, that can operate on 128 × 128 images. We follow the ResNet setup from Miyato et al. (2018), with the small difference that we simplified the design of the discriminator. The detailed parameters of discriminator and generator are summarized in Table 4a and Table 4b. With this setup we were able to reproduce the current state of the art results. An ablation study on various ResNet modifications is available in the Appendix. 2.4 EVALUATION METRICS We focus on several recently proposed metrics well suited to the image domain. For an in-depth overview of quantitative metrics we refer the reader to (Borji, 2018). Inception Score (IS). Proposed by Salimans et al. (2016), IS offers a way to quantitatively evaluate the quality of generated samples. Intuitively, the conditional label distribution of samples containing meaningful objects should have low entropy, and the variability of the samples should be high. which can be expressed as IS = exp(Ex∼Q[dKL(p(y | x), p(y))]). 
The authors found that this score is well-correlated with scores from human annotators. Drawbacks include insensitivity to the prior distribution over labels and not being a proper distance. As an alternative Heusel et al. (2017) proposed the Frechet Inception Distance (FID). Samples from P and Q are first embedded into a feature space (a specific layer of InceptionNet). Then, assuming that the embedded data follows a multivariate Gaussian distribution, the mean and covariance are estimated. Finally, the Fréchet distance between these two Gaussians is computed, i.e. FID = ||µx − µy||22 + Tr(Σx + Σy − 2(ΣxΣy) 1 2 ), where (µx,Σx), and (µy,Σy) are the mean and covariance of the embedded samples from P and Q, respectively. The authors argue that FID is consistent with human judgment and more robust to noise than IS. Furthermore, the score is sensitive to the visual quality of generated samples – introducing noise or artifacts in the generated samples will reduce the FID. In contrast to IS, FID can detect intra-class mode dropping, i.e. a model that generates only one image per class can score a perfect IS, but will suffer from have a high FID (Lucic et al., 2018). Bińkowski et al. (2018) argued that FID has no unbiased estimator and suggest Kernel Inception distance (KID) instead. In Appendix B we empirically compare KID to FID and observe that both metrics are very strongly correlated (Spearman rank-order correlation coefficient of 0.994 for LSUN-BEDROOM and 0.995 for CELEBA-HQ-128 datasets). As a result we focus on FID as it is likely to result in the same ranking. Multi-scale Structural Similarity for Image Quality (MS-SSIM) and Diversity. A critical issue in GANs are mode collapse and mode-dropping – failing to capture a mode, or low-diversity of generated samples from a given mode. The MS-SSIM score (Wang et al., 2003) is used for measuring the similarity of two images where higher MS-SSIM score indicates more similar images. Several recent works suggest using the average pairwise MS-SSIM score within a given class as a proxy for the diversity of generated samples (Odena et al., 2017; Fedus et al., 2018). The drawback of this approach is that we do not know the class corresponding to the generated sample, so it is usually applied on one-class data sets, such as CELEBA-HQ-128. In this work we use the same setup as in Fedus et al. (2018). In particular, given a batch size b, we compute the average pairwise MS-SSIM score on 5 batches, of 5× b× (b− 1)/2 image pairs in total. We stress that the diversity should only be taken into account together with the FID and IS metrics. 2.5 DATA SETS We consider three data sets, namely CIFAR10, CELEBA-HQ-128, and LSUN-BEDROOM. The LSUN-BEDROOM data set (Yu et al., 2015) contains slightly more than 3 million images3. We randomly partition the images into a train and test set whereby we use 30588 images as the test set. Secondly, we use the CELEBA-HQ data set of 30k images (Karras et al., 2018). We use the 128× 128× 3 version obtained by running the code provided by the authors.4 We use 3000 examples as the test set and the remaining examples as the training set. Finally, we also include the CIFAR10 data set which contains 70K images (32x32x3), partitioned into 60000 training instances and 10000 testing instances. The baseline FID scores are 12.6 for CELEBA-HQ-128, 3.8 for LSUN-BEDROOM, and 5.19 for CIFAR10. Details on FID computation are presented in Section 4. 
2.6 EXPLORING THE GAN LANDSCAPE The search space for GANs is prohibitively expensive: exploring all combinations of all losses, normalization and regularization schemes, and architectures is outside of the practical realm. Instead, in this study we analyze several slices of this tensor for each data set. In particular, to ensure that we can reproduce existing results, we perform a study over the subset of this tensor on CIFAR10. We then proceed to analyze the performance of these models across CELEBA-HQ-128 and LSUN-BEDROOM. In Section 3.1 we fix everything but the loss. In Section 3.2 we fix everything but the regularization and normalization scheme. Finally, in Section 3.3 we fix everything but the architecture. This allows us to decouple some of these design choices and provide some insight on what matters most. As noted in Lucic et al. (2018), one major issue preventing further progress is the hyperparameter tuning – currently, the community has converged to a small set of parameter values which work on some data sets, and may completely fail on others. In this study we combine the best hyperparameter settings found in the literature (Miyato et al., 2018), and perform Gaussian Process regression in the bandit setting (Srinivas et al., 2010) to possibly uncover better hyperparameter settings. We then consider the top performing models and discuss the impact of the computational budget. We summarize the fixed hyperparameter settings in Table 1a which contains the “good” parameters reported in recent publications (Fedus et al., 2018; Miyato et al., 2018; Gulrajani et al., 2017). In particular, we consider the cross product of these parameters to obtain 24 hyperparameter settings to reduce the bias. Finally, to provide a fair comparison, we perform Gaussian Process optimization in the bandit setting (Srinivas et al., 2010) on the parameter ranges provided in Table 1b. We run 12 rounds (i.e. we communicate with the oracle 12 times) of the optimization, each with a batch of 10 hyperparameter sets selected based on the FID scores from the results of the previous iterations. 3The images are preprocessed to 128× 128× 3 using TensorFlow resize image with crop or pad. 4Available online at https://github.com/tkarras/progressive_growing_of_gans. As we explore the number of discriminator updates per generator update (1 or 5), this leads to an additional 240 hyperparameter settings which in some cases outperform the previously known hyperparameter settings. Batch size is set to 64 for all the experiments. We use a fixed the number of discriminator update steps of 100K for LSUN-BEDROOM data set and CELEBA-HQ-128 data set, and 200K for CIFAR10 data set. We apply the Adam optimizer (Kingma and Ba, 2015). 3 RESULTS AND DISCUSSION Given that there are 4 major components (loss, architecture, regularization, normalization) to analyze for each data set, it is infeasible to explore the whole landscape. Hence, we opt for a more pragmatic solution – we keep some dimensions fixed, and vary the others. For each experiment we highlight three aspects: (1) FID distribution of the top 5% of the trained models, (2) the corresponding sample diversity score, and (3) the tradeoff between the computational budget (i.e. number of models to train) and model quality in terms of FID. Each model was retrained 5 times with a different random seed and we report the median score. The variance for models obtained by Gaussian Process regression is handled implicitly so we train each model once. 
3.1 IMPACT OF THE LOSS FUNCTION Here the loss is either the non-saturating loss (NS) (Goodfellow et al., 2014), the least-squares loss (LS) (Mao et al., 2016), or the Wasserstein loss (WGAN) (Arjovsky et al., 2017). We use the ResNet19 with generator and discriminator architectures detailed in Table 4a. We consider the most prominent normalization and regularization approaches: gradient penalty (Gulrajani et al., 2017), and spectral normalization (Miyato et al., 2018). Both studies were performed on CELEBA-HQ-128 and LSUN-BEDROOM with hyperparameter settings shown in Table 1a. The results are presented in Figure 1. We observe that the non-saturating loss is stable over both data sets. Spectral normalization improves the quality of the model on both data sets. Similarly, the gradient penalty can help improve the quality of the model, but finding a good regularization tradeoff is non-trivial and requires a high computational budget. Models using the GP penalty benefit from 5:1 ratio of discriminator to generator updates as suggested by (Gulrajani et al., 2017). We also performed a study on hinge loss (Miyato et al., 2018) and present it in the Appendix. 3.2 IMPACT OF REGULARIZATION AND NORMALIZATION The goal of this study is to compare the relative performance of various regularization and normalization methods presented in the literature. To this end, and based on the loss study, we fix the loss to non-saturating loss (Goodfellow et al., 2014). We use the ResNet19 with generator and discriminator architectures described in Table 4a. Finally, we consider batch normalization (BN) (Ioffe and Szegedy, 2015), layer normalization (LN) (Ba et al., 2016), spectral normalization (SN), gradient penalty (GP) (Gulrajani et al., 2017), dragan penalty (DR) (Kodali et al., 2017), or L2 regularization. We consider both CELEBA-HQ-128 and LSUN-BEDROOM with the hyperparameter settings shown in Table 1a and Table 1b. The results are presented in Figure 2. We observe that adding batch norm to the discriminator hurts the performance. Secondly, gradient penalty can help, but it doesn’t stabilize the training. In fact, it is non-trivial to strike a balance of the loss and regularization strength. Spectral normalization helps improve the model quality and is more computationally efficient than gradient penalty. This is consistent with recent results in Zhang et al. (2018). Similarly to the loss study, models using GP penalty benefit from 5:1 ratio of discriminator to generator updates. Furthermore, in a separate ablation study we observed that running the optimization procedure for an additional 100K steps is likely to increase the performance of the models with GP penalty. Impact of Simultaneous Regularization and Normalization. Given the folklore that the Lipschitz constant of the discriminator is critical for the performance, one may expect simultaneous regularization and normalization could improve model quality. To quantify this effect, we fix the loss to non-saturating loss (Goodfellow et al., 2014), use the Resnet19 architecture (as above), and combine several normalization and regularization schemes, with hyperparameter settings shown in Table 1a coupled with 24 randomly selected parameters. The results are presented in Figure 3. We observe that one may benefit from additional regularization and normalization. However, a lot of computational effort has to be invested for somewhat marginal gains in FID. 
Nevertheless, given enough computational budget we advocate simultaneous regularization and normalization – spectral normalization and layer normalization seem to perform well in practice. 3.3 IMPACT OF GENERATOR AND DISCRIMINATOR ARCHITECTURES An interesting practical question is whether our findings also hold for a different model capacity. To this end, we also perform a study on SNDCGAN from Miyato et al. (2018). We consider the non-saturating GAN loss, gradient penalty and spectral normalization. While for smaller architectures regularization is not essential (Lucic et al., 2018), the regularization and normalization effects might become more relevant due to deeper architectures and optimization considerations. The results are presented in Figure 4. We observe that both architectures achieve comparable results and benefit from regularization and normalization. Spectral normalization strongly outperforms the baseline for both architectures. 4 COMMON PITFALLS In this section we focus on several pitfalls we encountered while trying to reproduce existing results and provide a fairly and accurate comparison. Metrics. There already seems to be a divergence in how the FID score is computed: (1) Some authors report the score on training data, yielding a FID between 50k training and 50k generated samples (Unterthiner et al., 2018). Some opt to report the FID based on 10k test samples and 5k generated samples and use a custom implementation (Miyato et al., 2018). Finally, Lucic et al. (2018) report the score with respect to the test data, in particular FID between 10k test samples, and 10k generated samples. The subtle differences will result in a mismatch between the reported FIDs, in some cases of more than 10%. We argue that FID should be computed with respect to the test data set as and use 10k test samples and 10k generated samples on CIFAR10 and LSUN-BEDROOM, and 3k vs 3k on CELEBA-HQ-128 as in in Lucic et al. (2018). Similarly, there are several ways to compute a diversity score using MS-SSIM and we follow the approach from Fedus et al. (2018). We provide the implementation details in Section G of the Appendix. Details of neural architectures. Even in popular architectures, like ResNet, there is still a number of design decision one needs to make, that are often omitted from the reported results. Those include the exact design of the ResNet cell (order of layers, when is ReLu applied, when to upsample and downsample, how many filters to use). Some of these differences might lead to potentially unfair comparison. As a result, we suggest to use the architectures presented within this work as a solid baseline. An ablation study on various ResNet modifications is available in the Appendix. Data sets. A common issue is related to data set processing – does LSUN-BEDROOM always correspond to the same data set? In most cases the precise algorithm for upscaling or cropping is not clear which introduces inconsistencies between results on the “same” data set. Implementation details and non-determinism. One major issue is the mismatch between the algorithm presented in a paper and the code provided online. We are aware that there is an embarrassingly large gap between a good implementation and a bad implementation of a given model. Hence, when no code is available, one is forced to guess which modifications were done. Another particularly tricky issue is removing randomness from the training process. 
After one fixes the data ordering and the initial weights, obtaining the same score by training the same model twice is non-trivial due to randomness present in certain GPU operations (Chetlur et al., 2014). Disabling the optimizations causing the non-determinism often results in an order of magnitude running time penalty. While each of these issues taken in isolation seems minor, they compound to create a mist which introduces friction in practical applications and the research process (Sculley et al., 2018). 5 RELATED WORK A recent large-scale study on GANs and Variational Autoencoders was presented in Lucic et al. (2018). The authors consider several loss functions and regularizers, and study the effect of the loss function on the FID score, with low-to-medium complexity data sets (MNIST, CIFAR10, CELEBA), and a single (InfoGAN style) architecture. In this limited setting, the authors found that there is no statistically significant difference between recently introduced models and the original non-saturating GAN. A study of the effects of gradient-norm regularization in GANs was recently presented in Fedus et al. (2018). The authors posit that the gradient penalty can also be applied to the non-saturating GAN, and that, to a limited extent, it reduces the sensitivity to hyperparameter selection. In a recent work on spectral normalization, the authors perform a small study of the competing regularization and normalization approaches (Miyato et al., 2018). We are happy to report that we could reproduce these results and we present them in the Appendix. Inspired by these works and building on the available open-source code from Lucic et al. (2018), we take one additional step in all dimensions considered therein: more complex neural architectures, more complex data sets, and more involved regularization and normalization schemes. 6 CONCLUSION In this work we study the GAN landscape: losses, regularization and normalization schemes, and neural architectures, and their impact on the on the quality of generated samples which we assess by recently introduced quantitative metrics. Our fair and thorough empirical evaluation suggests that one should consider non-saturating GAN loss and spectral normalization as default choices when applying GANs to a new data set. Given additional computational budget, we suggest adding the gradient penalty from Gulrajani et al. (2017) and train the model until convergence. Furthermore, additional marginal gains can be obtained by combining normalization and regularization empirically confirming the importance of the Lipschitz constant of the discriminator. Furthermore, both types of architectures proposed up-to this point perform reasonably well. A separate ablation study uncovered that most of the tricks applied in the ResNet style architectures lead to marginal changes in the quality and should be avoided due to the high computational cost. As a result of this large-scale study we identify the common pitfalls standing in the way of accurate and fair comparison and propose concrete actions to demystify the future results – issues with metrics, data set preprocessing, non-determinism, and missing implementation details are particularly striking. We hope that this work, together with the open-sourced reference implementations and trained models, will serve as a solid baseline for future GAN research. A FID AND INCEPTION SCORES ON CIFAR10 We present an empirical study with SNDCGAN and ResNet CIFAR architectures on CIFAR10 in figure 5 and figure 6. 
In addition to Section 3.1, we evaluate one more kind of loss on CIFAR10. Here HG, NS and WGAN stand for the hinge loss, the non-saturating loss and the Wasserstein loss, respectively. We observe that the hinge loss performs very similarly to the non-saturating loss. B COMPARISON OF FID AND KID METRICS The KID metric introduced by Bińkowski et al. (2018) is an alternative to FID. We use models from our Regularization and Normalization study (see Section 3.2) to compare both metrics. Here, by model we denote everything that needs to be specified for the training – including all hyper-parameters, like learning rate, λ, Adam’s β, etc. The Spearman rank-order correlation coefficient between KID and FID scores is approximately 0.994 for LSUN-BEDROOM and 0.995 for CELEBA-HQ-128. To evaluate a practical setting of selecting several best models, we compare the intersection between the set of “best K models by FID” and the set of “best K models by KID” for K ∈ {5, 10, 20, 50, 100}. The results are summarized in Table 2. This experiment suggests that the FID and KID metrics are very strongly correlated, and for practical applications one can choose either of them. Also, the conclusions from our studies based on FID should transfer to studies based on KID. C ARCHITECTURES C.1 SNDCGAN We used the same architecture as Miyato et al. (2018), with the parameters copied from the GitHub page5. In Table 3a and Table 3b, we describe the operations in the layer column in order. The kernel size is described in the format [filter h, filter w, stride], the input shape is h × w and the output shape is h × w × channels. The slopes of all lReLU functions are set to 0.1. The input shape h × w is 128 × 128 for CELEBA-HQ-128 and LSUN-BEDROOM, and 32 × 32 for CIFAR10. C.2 RESNET ARCHITECTURE The ResNet19 architecture is described in Table 4. The RS column stands for the resampling of the residual block, with downscale(D)/upscale(U)/none(-) settings. MP stands for mean pooling and BN for batch normalization. The ResBlock is defined in Table 5. The addition layer merges two paths by adding them. The first path is a shortcut layer with exactly one convolution operation, while the second path consists of two convolution operations. The downscale layer and upscale layer are marked in Table 5. We used average pooling with kernel [2, 2, 2] for downscaling, after the convolution operation. We used unpool from https://github.com/tensorflow/tensorflow/issues/2169 for upscaling, before the convolution operation. h and w are the input shape to the ResNet block; the output shape depends on the RS parameter. ci and co are the input channels and output channels for a ResNet block. Table 6 describes the ResNet CIFAR architecture we used in Figure 5 for reproducing the existing results. Note that RS is set to none for the third ResBlock and fourth ResBlock in the discriminator. In this case, we used the same ResNet block defined in Table 5 without resampling. 5https://github.com/pfnet-research/chainer-gan-lib D RESNET ARCHITECTURE ABLATION STUDY We noticed six minor differences in the ResNet architecture compared to the implementation from https://github.com/pfnet-research/chainer-gan-lib/blob/master/common/net.py (Miyato et al., 2018). We performed an ablation study to verify the impact of these differences. Figure 7 shows the impact of the ablation study, with the details described as follows. • DEFAULT: ResNet CIFAR architecture with spectral normalization and the non-saturating GAN loss. • SKIP: Use the input as the output for the shortcut connection in the discriminator ResBlock. By default it was a conv layer with a 3x3 kernel.
• CIN: Use ci for the discriminator ResBlock hidden layer output channels. By default it was co in our setup, while Miyato et al. (2018) used co for the first ResBlock and ci for the rest. • OPT: Use an optimized setup for the first discriminator ResBlock, which includes: (1) no ReLU, (2) a conv layer for the shortcut connections, (3) using co instead of ci in the ResBlock. • CIN OPT: Use CIN and OPT together. This means the first ResBlock is optimized while the remaining ResBlocks use ci for the hidden output channels. • SUM: Use reduce sum for the discriminator output. By default it was reduce mean. • TAN: Use tanh for the generator output, as well as the range [-1, 1] for the discriminator input. By default it was sigmoid and a discriminator input range of [0, 1]. • EPS: Use a bigger epsilon of 2e-5 for generator batch normalization. By default it was 1e-5 in TensorFlow. • ALL: Apply all the above differences together. In the ablation study, the CIN experiment obtained the worst FID score. Combined with OPT, the CIN results were improved to the same level as the others, which is reasonable because the first block has three input channels, which becomes a bottleneck for the optimization. Hence, using OPT and CIN together performs as well as the others. Overall, the impact of these differences is minor according to the study on CIFAR10. E RECOMMENDED HYPERPARAMETER SETTINGS To make future GAN training simpler, we propose a set of best parameters for three setups: (1) Best parameters without any regularizer. (2) Best parameters with only one regularizer. (3) Best parameters with at most two regularizers. Table 7, Table 8 and Table 9 summarize the top 2 parameters for the SNDCGAN architecture, the ResNet19 architecture and the ResNet CIFAR architecture, respectively. Models are ranked according to the median FID score of five different random seeds with the fixed hyper-parameters in Table 1a. Note that ranking models according to the best FID score across seeds would achieve better but less stable results. Gaussian Process optimization hyper-parameters are not included in this table. For the ResNet19 architecture with at most two regularizers, we ran it only once due to the computational overhead. To show the model stability, we list the best FID score out of five seeds with the same parameters in the column best. Spectral normalization clearly outperforms the other normalizers on the SNDCGAN and ResNet CIFAR architectures, while on ResNet19 both layer normalization and spectral normalization work well. To visualize the FID score on each data set, Figure 8, Figure 9 and Figure 10 show examples generated by the GANs. We select the examples from the best FID run, and then increase the FID score for two more plots. F WHICH PARAMETERS REALLY MATTER? For each architecture and hyper-parameter we estimate its impact on the final FID. Figure 11 presents heatmaps for the hyperparameters, namely the learning rate, β1, β2, ndisc, and λ, for each combination of neural architecture and data set. G VARIATIONS OF MS-SSIM We used the MS-SSIM scorer from TensorFlow with the default power factors (Wang et al., 2003). Note that the default filter size for each scale layer is 11, so the minimum image edge is 11 × 2^4 = 176. To adapt it to the CELEBA-HQ-128 data set of size 128 × 128, we used the minimum of the filter size 11 and the image size in the last scale layer to allow the computation, following the previous work (Fedus et al., 2018).
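A minimal sketch of this diversity score might look as follows, assuming a TensorFlow 2 runtime in which tf.image.ssim_multiscale accepts a filter_size argument, and a batch of images already scaled to [0, 1]; the function name and the pairwise loop are our own rendering, not the exact evaluation code.

```python
import itertools
import tensorflow as tf

def avg_pairwise_ms_ssim(batch, max_val=1.0, filter_size=11):
    # batch: (b, h, w, c) images in [0, max_val]. Averages the MS-SSIM over
    # the b * (b - 1) / 2 pairs of one batch; the paper averages this over
    # 5 such batches. For 128x128 inputs the default filter is too large at
    # the coarsest scale, so a smaller filter_size must be passed (cf. above).
    b = int(batch.shape[0])
    scores = []
    for i, j in itertools.combinations(range(b), 2):
        scores.append(tf.image.ssim_multiscale(
            batch[i:i + 1], batch[j:j + 1], max_val=max_val,
            filter_size=filter_size))
    return float(tf.reduce_mean(tf.concat(scores, axis=0)))
```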
1. What is the main contribution of the paper in the field of Generative Adversarial Networks (GANs)? 2. What are the strengths and weaknesses of the paper regarding its exposition and target audience? 3. How could the paper be improved regarding the level of detail and clarity in certain sections? 4. Are there any questions or concerns regarding the presentation of the results and their interpretation? 5. Are there any suggestions for improving the readability and understanding of the paper for non-experts in GANs?
Review
Review (As a disclaimer I want to point out I'm not an expert in GANs and have only a basic understanding of the sub-field, but arguably this would make me the target audience of this paper). The authors present a large scale study comparing a large number of GAN experiments; in this study they compare various choices of architectures, losses and hyperparameters. The first part of the paper describes the various losses, architectures, regularization and normalization schemes; and the second part describes the results of the comparison experiments. While I wish there were more such studies -- as I believe reproducing past results experimentally is important, and so is providing practical advice for practitioners -- this work is in many parts hard to follow, and it is hard to get a lot of new insight from the results, or a better understanding of GANs. As far as I can see, the most important take-home message of the paper can be summarized in "one should consider non-saturating GAN loss and spectral normalization as default choices [...] Given additional computational budget, we suggest adding the gradient penalty [...] and train the model until convergence". Pros: - available source code - large number of experiments Cons: - the exposition could be improved, in particular the description of the plots is not very clear, I'm still not sure exactly what they show - not clear what the target audience of the first part (section 2) is, it is too technical for a survey intended for outsiders, and discusses subtle points that are not easy to understand without more knowledge, but at the same time seems unlikely to give additional insight to an insider - limited amount of new insight, which is limiting as new and better understanding of GANs and practical guidelines are arguably the main contribution of a work of this type Some suggestions that I think could make the paper stronger - I believe that in particular section 2 goes into too many mathematical details and subtleties that do not really add a lot. I think that either the reader already understands those concepts well (which I admit, I don't really, I'm merely curious about GANs and have been following the action from a distance, hence my low confidence rating for this review), or if they do not, it will be very hard to get much out of it. I would leave out some of the details, shortening the whole section, and focus more on making a few of the concepts more understandable, and potentially leaving more space for a clearer description of the results - it is not really clear to me what data the graphs show: the boxplots show 5% of what data? does it also include the models obtained by Gaussian Process regression? and what about the line plots, is it the best model so far as you train more and more models? if so, how are those models chosen and ordered? are they the results of single models or averages of multiple ones? - "the variance of models obtained by Gaussian Process regression is handled implicitly so we train each model once"? I do not understand what this means, and I work with hyper-parameter tuning using Gaussian processes daily. It should probably be rephrased - at the start of section 3: what is an "experiment"? - in 3.1 towards the end of the first paragraph, what is a "study", is that the same as an experiment or something different? - (minor) stating that lower is better in the graphs might be useful - (minor) typo in page 5 "We use a fixed the number"
ICLR
Title The GAN Landscape: Losses, Architectures, Regularization, and Normalization Abstract Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion. While they were successfully applied to many problems, training a GAN is a notoriously challenging task and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial amount of “tricks”. The success in many practical applications coupled with the lack of a measure to quantify the failure modes of GANs resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We reproduce the current state of the art and go beyond it, fairly exploring the GAN landscape. We discuss common pitfalls and reproducibility issues, open-source our code on Github, and provide pre-trained models on TensorFlow Hub. 1 INTRODUCTION Deep generative models are a powerful class of unsupervised machine learning models. The power of these models was recently harnessed in a variety of applications, including image generation, learned compression, and domain transfer (Isola et al., 2017; Radford et al., 2016; Agustsson et al., 2018; Tschannen et al., 2018). Generative adversarial networks (Goodfellow et al., 2014) are one of the main approaches to learning such models in a fully unsupervised fashion. The GAN framework can be viewed as a two-player game where the first player, the generator, is learning to transform some simple input distribution (usually a standard multivariate Normal or uniform) to a distribution on the space of images, such that the second player, the discriminator, cannot tell whether the samples belong to the true distribution or were synthesized. Both players aim to minimize their own loss and the solution to the game is the Nash equilibrium where neither player can improve their loss unilaterally. This powerful framework can also be derived by minimizing a divergence between the model distribution and the true distribution (Nowozin et al., 2016; Arjovsky et al., 2017). Training GANs involves solving a minimax problem over the parameters of the generator and the discriminator, which are usually parameterized as deep convolutional neural networks. Consequently, this minimax problem is notoriously hard to solve in practice. As a result, a plethora of loss functions, regularization and normalization schemes, coupled with neural architecture choices, have been proposed (Goodfellow et al., 2014; Salimans et al., 2016; Miyato et al., 2018; Gulrajani et al., 2017; Arjovsky et al., 2017; Mao et al., 2016). Our contributions. In this work we provide a thorough empirical analysis of these competing approaches, and help researchers and practitioners navigate this space. We first define the GAN landscape – the set of loss functions, normalization and regularization schemes, and the most commonly used architectures. We explore this search space on several modern large-scale data sets by means of hyperparameter optimization, considering both “good” sets of hyperparameters reported in the literature and ones obtained by Gaussian Process regression. By analyzing the impact of the loss function, we conclude that the non-saturating loss is sufficiently stable across data sets, architectures and hyperparameters.
We then proceed to decompose the effect of various normalization and regularization schemes, as well as varying architectures. We show that both the gradient penalty (Gulrajani et al., 2017) and spectral normalization (Miyato et al., 2018) are useful in the context of high-capacity architectures. Finally, we discuss some common pitfalls, reproducibility issues, and practical considerations. We provide reference implementations, including training and evaluation code, on Github (link removed to preserve anonymity) and provide pre-trained models on TensorFlow Hub (link removed to preserve anonymity). 2 THE GAN LANDSCAPE 2.1 LOSS FUNCTIONS Let P denote the target (true) distribution and Q the model distribution. Goodfellow et al. (2014) suggest two loss functions: the minimax GAN and the non-saturating (NS) GAN. In the former the discriminator minimizes the negative log-likelihood for the binary classification task. In the latter the generator maximizes the probability of generated samples being real. In this work we consider the non-saturating loss as it is known to outperform the minimax variant. The corresponding loss functions are L_D = −E_{x∼P}[log(D(x))] − E_{x̂∼Q}[log(1 − D(x̂))] and L_G = −E_{x̂∼Q}[log(D(x̂))]. In Wasserstein GAN (WGAN) (Arjovsky et al., 2017) the authors propose to consider the Wasserstein divergence instead of the original Jensen-Shannon (JS). In particular, under the optimal discriminator, minimizing the proposed value function with respect to the generator minimizes the Wasserstein distance between P and Q. The drawback is that one has to ensure a 1-Lipschitz discriminator due to the exploited Kantorovich-Rubinstein duality. The corresponding loss functions are L_D = −E_{x∼P}[D(x)] + E_{x̂∼Q}[D(x̂)] and L_G = −E_{x̂∼Q}[D(x̂)]. Finally, we consider the least-squares loss (LS) which corresponds to minimizing the Pearson χ2 divergence between P and Q (Mao et al., 2016). The intuition is that this loss function is smooth and saturates slower than the sigmoid cross-entropy loss of the JS formulation. The corresponding loss functions are L_D = E_{x∼P}[(D(x) − 1)^2] + E_{x̂∼Q}[D(x̂)^2] and L_G = E_{x̂∼Q}[(D(x̂) − 1)^2]. 2.2 REGULARIZATION AND NORMALIZATION OF THE DISCRIMINATOR Gradient norm penalty. In the context of Wasserstein GANs this penalty can be interpreted as a soft penalty for the violation of 1-Lipschitzness (WGAN GP) (Gulrajani et al., 2017). Hereby, the gradient is evaluated on a linear interpolation between training points and generated samples as a proxy to the optimal coupling. The gradient penalty can also be evaluated around the data manifold, which encourages the discriminator to be piece-wise linear in that region (Dragan) (Kodali et al., 2017). However, the gradient norm penalty can be considered purely as a regularizer for the discriminator and it was shown that it can improve the performance for other losses (Fedus et al., 2018). Furthermore, the penalty can be scaled by the “confidence” of the discriminator in the context of f-divergences (Roth et al., 2017). A drawback of the gradient penalty (GP) regularization scheme is that it can depend on the model distribution Q which changes during training. One drawback of Dragan is that it is unclear to which extent the Gaussian assumption for the manifold holds. Finally, computing the gradient norms implies a non-trivial running time penalty – essentially doubling the running time. We also investigate the impact of a regularizer ubiquitous in supervised learning – the L2 penalty on all the weights of the network.
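To make the Section 2.1 objectives and the gradient penalty concrete, here is a minimal per-batch sketch. This is our own illustrative code rather than the reference implementation: real_logits/fake_logits (and real_scores/fake_scores) stand for the discriminator outputs on a real and a generated batch, and the sigmoid is folded into the cross-entropy so that D(x) = σ(logits).

```python
import torch
import torch.nn.functional as F

def ns_gan_losses(real_logits, fake_logits):
    # Non-saturating GAN: D solves binary classification with D(x) = sigmoid(logits);
    # G maximizes log D(G(z)) rather than minimizing log(1 - D(G(z))).
    ones = torch.ones_like(real_logits)
    zeros = torch.zeros_like(fake_logits)
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, ones)
              + F.binary_cross_entropy_with_logits(fake_logits, zeros))
    g_loss = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    return d_loss, g_loss

def wgan_losses(real_scores, fake_scores):
    # Wasserstein GAN: unbounded critic scores; 1-Lipschitzness must be
    # enforced externally (gradient penalty or spectral normalization).
    return -real_scores.mean() + fake_scores.mean(), -fake_scores.mean()

def ls_gan_losses(real_scores, fake_scores):
    # Least-squares GAN: Pearson chi^2 formulation, targets 1 (real) / 0 (fake).
    d_loss = ((real_scores - 1.0) ** 2).mean() + (fake_scores ** 2).mean()
    g_loss = ((fake_scores - 1.0) ** 2).mean()
    return d_loss, g_loss

def gradient_penalty(D, real, fake, lam=10.0):
    # WGAN-GP regularizer: penalize deviation of the gradient norm from 1 on
    # linear interpolations between real and generated samples.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    return lam * ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
```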
Discriminator normalization. Normalizing the discriminator can be useful both from the optimization perspective (more efficient gradient flow, a more stable optimization) and from the representation perspective – the representation richness of the layers in a neural network depends on the spectral structure of the corresponding weight matrices (Miyato et al., 2018). From the optimization point of view, several techniques have found their way into the GAN literature, namely batch normalization (BN) (Ioffe and Szegedy, 2015) and layer normalization (LN) (Ba et al., 2016). Batch normalization in the context of GANs was suggested by Denton et al. (2015) and further popularized by Radford et al. (2016). It normalizes the pre-activations of nodes in a layer to mean β and standard deviation γ, where both β and γ are parameters learned for each node in the layer. The normalization is done on the batch level and for each node separately. In contrast, with layer normalization, all the hidden units in a layer share the same normalization terms β and γ, but different samples are normalized differently (Ba et al., 2016). Layer normalization was first applied in the context of GANs in Gulrajani et al. (2017). From the representation point of view, one has to consider the neural network as a composition of (possibly non-linear) mappings and analyze their spectral properties. In particular, for the discriminator to be a bounded linear operator it suffices to control the maximum singular value. This approach is followed in Miyato et al. (2018) where the authors suggest dividing each weight matrix, including the matrices representing convolutional kernels, by their spectral norm. Furthermore, the authors argue that a key advantage of spectral normalization over competing approaches is that it results in discriminators of higher rank. 2.3 GENERATOR AND DISCRIMINATOR ARCHITECTURE We explore two classes of architectures in this study: deep convolutional generative adversarial networks (DCGAN) (Radford et al., 2016) and residual networks (ResNet) (He et al., 2016), both of which are ubiquitous in GAN research. Recently, Miyato et al. (2018) defined a variation of DCGAN, the so-called SNDCGAN. Apart from minor updates (cf. Section 4) the main difference to DCGAN is the use of an eight-layer discriminator network. The details of both networks are summarized in Table 3. The other architecture, ResNet19, is an architecture with five ResNet blocks in the generator and six ResNet blocks in the discriminator, that can operate on 128 × 128 images. We follow the ResNet setup from Miyato et al. (2018), with the small difference that we simplified the design of the discriminator. The detailed parameters of the discriminator and generator are summarized in Table 4a and Table 4b. With this setup we were able to reproduce the current state of the art results. An ablation study on various ResNet modifications is available in the Appendix.
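Before turning to evaluation, a concrete illustration of the spectral normalization used by SNDCGAN above: a minimal numpy sketch of the standard power-iteration estimate of the largest singular value, after which the weight matrix is divided by it. The function name, the single-iteration default, and the persistent vector u are our own rendering, not Miyato et al.'s exact code.

```python
import numpy as np

def spectral_normalize(W, u=None, n_iters=1):
    # One (or a few) power-iteration steps to estimate sigma_max(W); the
    # discriminator layer then uses W / sigma. Convolution kernels are
    # assumed to be reshaped to 2-D beforehand. `u` is carried over between
    # training steps, so a single iteration per step suffices in practice.
    W = np.asarray(W, dtype=np.float64)
    if u is None:
        u = np.random.randn(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ W @ v  # estimate of the largest singular value
    return W / sigma, u
```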
2.4 EVALUATION METRICS We focus on several recently proposed metrics well suited to the image domain. For an in-depth overview of quantitative metrics we refer the reader to (Borji, 2018). Inception Score (IS). Proposed by Salimans et al. (2016), IS offers a way to quantitatively evaluate the quality of generated samples. Intuitively, the conditional label distribution of samples containing meaningful objects should have low entropy, and the variability of the samples should be high, which can be expressed as IS = exp(E_{x∼Q}[d_{KL}(p(y | x), p(y))]). The authors found that this score is well-correlated with scores from human annotators. Drawbacks include insensitivity to the prior distribution over labels and not being a proper distance. As an alternative, Heusel et al. (2017) proposed the Frechet Inception Distance (FID). Samples from P and Q are first embedded into a feature space (a specific layer of InceptionNet). Then, assuming that the embedded data follows a multivariate Gaussian distribution, the mean and covariance are estimated. Finally, the Fréchet distance between these two Gaussians is computed, i.e. FID = ||µ_x − µ_y||_2^2 + Tr(Σ_x + Σ_y − 2(Σ_xΣ_y)^{1/2}), where (µ_x, Σ_x) and (µ_y, Σ_y) are the mean and covariance of the embedded samples from P and Q, respectively. The authors argue that FID is consistent with human judgment and more robust to noise than IS. Furthermore, the score is sensitive to the visual quality of generated samples – introducing noise or artifacts in the generated samples will reduce the FID. In contrast to IS, FID can detect intra-class mode dropping, i.e. a model that generates only one image per class can score a perfect IS, but will suffer from a high FID (Lucic et al., 2018). Bińkowski et al. (2018) argued that FID has no unbiased estimator and suggest the Kernel Inception Distance (KID) instead. In Appendix B we empirically compare KID to FID and observe that both metrics are very strongly correlated (Spearman rank-order correlation coefficient of 0.994 for LSUN-BEDROOM and 0.995 for CELEBA-HQ-128). As a result we focus on FID as it is likely to result in the same ranking. Multi-scale Structural Similarity for Image Quality (MS-SSIM) and Diversity. Critical issues in GANs are mode collapse and mode dropping – failing to capture a mode, or low diversity of generated samples from a given mode. The MS-SSIM score (Wang et al., 2003) is used for measuring the similarity of two images, where a higher MS-SSIM score indicates more similar images. Several recent works suggest using the average pairwise MS-SSIM score within a given class as a proxy for the diversity of generated samples (Odena et al., 2017; Fedus et al., 2018). The drawback of this approach is that we do not know the class corresponding to the generated sample, so it is usually applied on one-class data sets, such as CELEBA-HQ-128. In this work we use the same setup as in Fedus et al. (2018). In particular, given a batch size b, we compute the average pairwise MS-SSIM score on 5 batches, i.e. 5 × b × (b − 1)/2 image pairs in total. We stress that the diversity should only be taken into account together with the FID and IS metrics. 2.5 DATA SETS We consider three data sets, namely CIFAR10, CELEBA-HQ-128, and LSUN-BEDROOM. The LSUN-BEDROOM data set (Yu et al., 2015) contains slightly more than 3 million images (preprocessed to 128 × 128 × 3 using TensorFlow resize_image_with_crop_or_pad). We randomly partition the images into a train and test set, whereby we use 30588 images as the test set. Secondly, we use the CELEBA-HQ data set of 30k images (Karras et al., 2018). We use the 128 × 128 × 3 version obtained by running the code provided by the authors (available online at https://github.com/tkarras/progressive_growing_of_gans). We use 3000 examples as the test set and the remaining examples as the training set. Finally, we also include the CIFAR10 data set which contains 70k images (32 × 32 × 3), partitioned into 60000 training instances and 10000 testing instances. The baseline FID scores are 12.6 for CELEBA-HQ-128, 3.8 for LSUN-BEDROOM, and 5.19 for CIFAR10. Details on FID computation are presented in Section 4.
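For concreteness, the FID formula above can be sketched as follows from pre-extracted InceptionNet activations; this is our own minimal rendering under the Gaussian assumption, not the evaluation code used for the experiments.

```python
import numpy as np
from scipy import linalg

def fid(act_real, act_fake):
    # act_real, act_fake: (n_samples, dim) arrays of InceptionNet activations.
    mu_x, mu_y = act_real.mean(axis=0), act_fake.mean(axis=0)
    sigma_x = np.cov(act_real, rowvar=False)
    sigma_y = np.cov(act_fake, rowvar=False)
    covmean = linalg.sqrtm(sigma_x @ sigma_y)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard numerical-noise imaginary parts
    diff = mu_x - mu_y
    return float(diff @ diff + np.trace(sigma_x + sigma_y - 2.0 * covmean))
```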
2.6 EXPLORING THE GAN LANDSCAPE The search space for GANs is prohibitively expensive: exploring all combinations of all losses, normalization and regularization schemes, and architectures is outside of the practical realm. Instead, in this study we analyze several slices of this tensor for each data set. In particular, to ensure that we can reproduce existing results, we perform a study over the subset of this tensor on CIFAR10. We then proceed to analyze the performance of these models across CELEBA-HQ-128 and LSUN-BEDROOM. In Section 3.1 we fix everything but the loss. In Section 3.2 we fix everything but the regularization and normalization scheme. Finally, in Section 3.3 we fix everything but the architecture. This allows us to decouple some of these design choices and provide some insight on what matters most. As noted in Lucic et al. (2018), one major issue preventing further progress is hyperparameter tuning – currently, the community has converged to a small set of parameter values which work on some data sets, and may completely fail on others. In this study we combine the best hyperparameter settings found in the literature (Miyato et al., 2018), and perform Gaussian Process regression in the bandit setting (Srinivas et al., 2010) to possibly uncover better hyperparameter settings. We then consider the top performing models and discuss the impact of the computational budget. We summarize the fixed hyperparameter settings in Table 1a, which contains the “good” parameters reported in recent publications (Fedus et al., 2018; Miyato et al., 2018; Gulrajani et al., 2017). In particular, we consider the cross product of these parameters to obtain 24 hyperparameter settings to reduce the bias. Finally, to provide a fair comparison, we perform Gaussian Process optimization in the bandit setting (Srinivas et al., 2010) on the parameter ranges provided in Table 1b. We run 12 rounds (i.e. we communicate with the oracle 12 times) of the optimization, each with a batch of 10 hyperparameter sets selected based on the FID scores from the results of the previous iterations. As we explore the number of discriminator updates per generator update (1 or 5), this leads to an additional 240 hyperparameter settings which in some cases outperform the previously known hyperparameter settings. The batch size is set to 64 for all the experiments. We use a fixed number of discriminator update steps of 100K for the LSUN-BEDROOM and CELEBA-HQ-128 data sets, and 200K for the CIFAR10 data set. We apply the Adam optimizer (Kingma and Ba, 2015). 3 RESULTS AND DISCUSSION Given that there are 4 major components (loss, architecture, regularization, normalization) to analyze for each data set, it is infeasible to explore the whole landscape. Hence, we opt for a more pragmatic solution – we keep some dimensions fixed, and vary the others. For each experiment we highlight three aspects: (1) the FID distribution of the top 5% of the trained models, (2) the corresponding sample diversity score, and (3) the tradeoff between the computational budget (i.e. number of models to train) and model quality in terms of FID. Each model was retrained 5 times with a different random seed and we report the median score. The variance for models obtained by Gaussian Process regression is handled implicitly, so we train each model once.
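The seed-handling and ranking protocol just described can be sketched as below. The grid values are placeholders in the spirit of Table 1a (not the paper's actual settings), and train_and_eval is a hypothetical callback that trains one model and returns its FID.

```python
import itertools
import statistics

# Placeholder grid in the spirit of Table 1a; the concrete values are ours.
grid = {
    "learning_rate": [1e-4, 2e-4],
    "beta1": [0.0, 0.5],
    "beta2": [0.9, 0.999],
    "n_disc": [1, 5],  # discriminator updates per generator update
}
settings = [dict(zip(grid, vals)) for vals in itertools.product(*grid.values())]

def rank_by_median_fid(settings, train_and_eval, seeds=(0, 1, 2, 3, 4)):
    # Each fixed setting is retrained once per seed and ranked by median FID;
    # `train_and_eval(setting, seed)` is a hypothetical train-then-score call.
    scored = [(statistics.median(train_and_eval(s, seed) for seed in seeds), s)
              for s in settings]
    return sorted(scored, key=lambda pair: pair[0])  # lower FID is better
```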
3.1 IMPACT OF THE LOSS FUNCTION Here the loss is either the non-saturating loss (NS) (Goodfellow et al., 2014), the least-squares loss (LS) (Mao et al., 2016), or the Wasserstein loss (WGAN) (Arjovsky et al., 2017). We use ResNet19 with the generator and discriminator architectures detailed in Table 4a. We consider the most prominent normalization and regularization approaches: the gradient penalty (Gulrajani et al., 2017) and spectral normalization (Miyato et al., 2018). Both studies were performed on CELEBA-HQ-128 and LSUN-BEDROOM with the hyperparameter settings shown in Table 1a. The results are presented in Figure 1. We observe that the non-saturating loss is stable over both data sets. Spectral normalization improves the quality of the model on both data sets. Similarly, the gradient penalty can help improve the quality of the model, but finding a good regularization tradeoff is non-trivial and requires a high computational budget. Models using the GP penalty benefit from a 5:1 ratio of discriminator to generator updates, as suggested by Gulrajani et al. (2017). We also performed a study on the hinge loss (Miyato et al., 2018) and present it in the Appendix. 3.2 IMPACT OF REGULARIZATION AND NORMALIZATION The goal of this study is to compare the relative performance of various regularization and normalization methods presented in the literature. To this end, and based on the loss study, we fix the loss to the non-saturating loss (Goodfellow et al., 2014). We use ResNet19 with the generator and discriminator architectures described in Table 4a. Finally, we consider batch normalization (BN) (Ioffe and Szegedy, 2015), layer normalization (LN) (Ba et al., 2016), spectral normalization (SN), the gradient penalty (GP) (Gulrajani et al., 2017), the Dragan penalty (DR) (Kodali et al., 2017), and L2 regularization. We consider both CELEBA-HQ-128 and LSUN-BEDROOM with the hyperparameter settings shown in Table 1a and Table 1b. The results are presented in Figure 2. We observe that adding batch norm to the discriminator hurts the performance. Secondly, the gradient penalty can help, but it doesn't stabilize the training. In fact, it is non-trivial to strike a balance between the loss and the regularization strength. Spectral normalization helps improve the model quality and is more computationally efficient than the gradient penalty. This is consistent with recent results in Zhang et al. (2018). Similarly to the loss study, models using the GP penalty benefit from a 5:1 ratio of discriminator to generator updates. Furthermore, in a separate ablation study we observed that running the optimization procedure for an additional 100K steps is likely to increase the performance of the models with the GP penalty. Impact of Simultaneous Regularization and Normalization. Given the folklore that the Lipschitz constant of the discriminator is critical for the performance, one may expect that simultaneous regularization and normalization could improve model quality. To quantify this effect, we fix the loss to the non-saturating loss (Goodfellow et al., 2014), use the ResNet19 architecture (as above), and combine several normalization and regularization schemes, with the hyperparameter settings shown in Table 1a coupled with 24 randomly selected parameters. The results are presented in Figure 3. We observe that one may benefit from additional regularization and normalization. However, a lot of computational effort has to be invested for somewhat marginal gains in FID.
Nevertheless, given enough computational budget we advocate simultaneous regularization and normalization – spectral normalization and layer normalization seem to perform well in practice. 3.3 IMPACT OF GENERATOR AND DISCRIMINATOR ARCHITECTURES An interesting practical question is whether our findings also hold for a different model capacity. To this end, we also perform a study on SNDCGAN from Miyato et al. (2018). We consider the non-saturating GAN loss, gradient penalty and spectral normalization. While for smaller architectures regularization is not essential (Lucic et al., 2018), the regularization and normalization effects might become more relevant due to deeper architectures and optimization considerations. The results are presented in Figure 4. We observe that both architectures achieve comparable results and benefit from regularization and normalization. Spectral normalization strongly outperforms the baseline for both architectures. 4 COMMON PITFALLS In this section we focus on several pitfalls we encountered while trying to reproduce existing results and provide a fair and accurate comparison. Metrics. There already seems to be a divergence in how the FID score is computed: (1) Some authors report the score on training data, yielding a FID between 50k training and 50k generated samples (Unterthiner et al., 2018). (2) Some opt to report the FID based on 10k test samples and 5k generated samples and use a custom implementation (Miyato et al., 2018). (3) Finally, Lucic et al. (2018) report the score with respect to the test data, in particular the FID between 10k test samples and 10k generated samples. These subtle differences result in a mismatch between the reported FIDs, in some cases of more than 10%. We argue that FID should be computed with respect to the test data set, and we use 10k test samples and 10k generated samples on CIFAR10 and LSUN-BEDROOM, and 3k vs 3k on CELEBA-HQ-128, as in Lucic et al. (2018). Similarly, there are several ways to compute a diversity score using MS-SSIM and we follow the approach from Fedus et al. (2018). We provide the implementation details in Section G of the Appendix. Details of neural architectures. Even in popular architectures, like ResNet, there is still a number of design decisions one needs to make that are often omitted from the reported results. Those include the exact design of the ResNet cell (order of layers, when ReLU is applied, when to upsample and downsample, how many filters to use). Some of these differences might lead to potentially unfair comparisons. As a result, we suggest using the architectures presented within this work as a solid baseline. An ablation study on various ResNet modifications is available in the Appendix. Data sets. A common issue is related to data set processing – does LSUN-BEDROOM always correspond to the same data set? In most cases the precise algorithm for upscaling or cropping is not clear, which introduces inconsistencies between results on the “same” data set. Implementation details and non-determinism. One major issue is the mismatch between the algorithm presented in a paper and the code provided online. We are aware that there is an embarrassingly large gap between a good implementation and a bad implementation of a given model. Hence, when no code is available, one is forced to guess which modifications were done. Another particularly tricky issue is removing randomness from the training process.
After one fixes the data ordering and the initial weights, obtaining the same score by training the same model twice is non-trivial due to randomness present in certain GPU operations (Chetlur et al., 2014). Disabling the optimizations causing the non-determinism often results in an order of magnitude running time penalty. While each of these issues taken in isolation seems minor, they compound to create a mist which introduces friction in practical applications and the research process (Sculley et al., 2018). 5 RELATED WORK A recent large-scale study on GANs and Variational Autoencoders was presented in Lucic et al. (2018). The authors consider several loss functions and regularizers, and study the effect of the loss function on the FID score, with low-to-medium complexity data sets (MNIST, CIFAR10, CELEBA), and a single (InfoGAN style) architecture. In this limited setting, the authors found that there is no statistically significant difference between recently introduced models and the original non-saturating GAN. A study of the effects of gradient-norm regularization in GANs was recently presented in Fedus et al. (2018). The authors posit that the gradient penalty can also be applied to the non-saturating GAN, and that, to a limited extent, it reduces the sensitivity to hyperparameter selection. In a recent work on spectral normalization, the authors perform a small study of the competing regularization and normalization approaches (Miyato et al., 2018). We are happy to report that we could reproduce these results and we present them in the Appendix. Inspired by these works and building on the available open-source code from Lucic et al. (2018), we take one additional step in all dimensions considered therein: more complex neural architectures, more complex data sets, and more involved regularization and normalization schemes. 6 CONCLUSION In this work we study the GAN landscape: losses, regularization and normalization schemes, and neural architectures, and their impact on the quality of generated samples, which we assess by recently introduced quantitative metrics. Our fair and thorough empirical evaluation suggests that one should consider the non-saturating GAN loss and spectral normalization as default choices when applying GANs to a new data set. Given additional computational budget, we suggest adding the gradient penalty from Gulrajani et al. (2017) and training the model until convergence. Furthermore, additional marginal gains can be obtained by combining normalization and regularization, empirically confirming the importance of the Lipschitz constant of the discriminator. Both types of architectures proposed up to this point perform reasonably well. A separate ablation study uncovered that most of the tricks applied in the ResNet style architectures lead to marginal changes in the quality and should be avoided due to the high computational cost. As a result of this large-scale study we identify the common pitfalls standing in the way of accurate and fair comparison and propose concrete actions to demystify future results – issues with metrics, data set preprocessing, non-determinism, and missing implementation details are particularly striking. We hope that this work, together with the open-sourced reference implementations and trained models, will serve as a solid baseline for future GAN research. A FID AND INCEPTION SCORES ON CIFAR10 We present an empirical study with SNDCGAN and ResNet CIFAR architectures on CIFAR10 in Figure 5 and Figure 6.
In addition to Section 3.1, we evaluate one more kind of loss on CIFAR10. Here HG, NS and WGAN stand for the hinge loss, the non-saturating loss and the Wasserstein loss, respectively. We observe that the hinge loss performs very similarly to the non-saturating loss. B COMPARISON OF FID AND KID METRICS The KID metric introduced by Bińkowski et al. (2018) is an alternative to FID. We use models from our Regularization and Normalization study (see Section 3.2) to compare both metrics. Here, by model we denote everything that needs to be specified for the training – including all hyper-parameters, like learning rate, λ, Adam’s β, etc. The Spearman rank-order correlation coefficient between KID and FID scores is approximately 0.994 for LSUN-BEDROOM and 0.995 for CELEBA-HQ-128. To evaluate a practical setting of selecting several best models, we compare the intersection between the set of “best K models by FID” and the set of “best K models by KID” for K ∈ {5, 10, 20, 50, 100}. The results are summarized in Table 2. This experiment suggests that the FID and KID metrics are very strongly correlated, and for practical applications one can choose either of them. Also, the conclusions from our studies based on FID should transfer to studies based on KID. C ARCHITECTURES C.1 SNDCGAN We used the same architecture as Miyato et al. (2018), with the parameters copied from the GitHub page5. In Table 3a and Table 3b, we describe the operations in the layer column in order. The kernel size is described in the format [filter h, filter w, stride], the input shape is h × w and the output shape is h × w × channels. The slopes of all lReLU functions are set to 0.1. The input shape h × w is 128 × 128 for CELEBA-HQ-128 and LSUN-BEDROOM, and 32 × 32 for CIFAR10. C.2 RESNET ARCHITECTURE The ResNet19 architecture is described in Table 4. The RS column stands for the resampling of the residual block, with downscale(D)/upscale(U)/none(-) settings. MP stands for mean pooling and BN for batch normalization. The ResBlock is defined in Table 5. The addition layer merges two paths by adding them. The first path is a shortcut layer with exactly one convolution operation, while the second path consists of two convolution operations. The downscale layer and upscale layer are marked in Table 5. We used average pooling with kernel [2, 2, 2] for downscaling, after the convolution operation. We used unpool from https://github.com/tensorflow/tensorflow/issues/2169 for upscaling, before the convolution operation. h and w are the input shape to the ResNet block; the output shape depends on the RS parameter. ci and co are the input channels and output channels for a ResNet block. Table 6 describes the ResNet CIFAR architecture we used in Figure 5 for reproducing the existing results. Note that RS is set to none for the third ResBlock and fourth ResBlock in the discriminator. In this case, we used the same ResNet block defined in Table 5 without resampling. 5https://github.com/pfnet-research/chainer-gan-lib D RESNET ARCHITECTURE ABLATION STUDY We noticed six minor differences in the ResNet architecture compared to the implementation from https://github.com/pfnet-research/chainer-gan-lib/blob/master/common/net.py (Miyato et al., 2018). We performed an ablation study to verify the impact of these differences. Figure 7 shows the impact of the ablation study, with the details described as follows. • DEFAULT: ResNet CIFAR architecture with spectral normalization and the non-saturating GAN loss. • SKIP: Use the input as the output for the shortcut connection in the discriminator ResBlock. By default it was a conv layer with a 3x3 kernel.
• CIN: Use ci for the discriminator ResBlock hidden layer output channels. By default it was co in our setup, while Miyato et al. (2018) used co for the first ResBlock and ci for the rest. • OPT: Use an optimized setup for the first discriminator ResBlock, which includes: (1) no ReLU, (2) a conv layer for the shortcut connections, (3) using co instead of ci in the ResBlock. • CIN OPT: Use CIN and OPT together. This means the first ResBlock is optimized while the remaining ResBlocks use ci for the hidden output channels. • SUM: Use reduce sum for the discriminator output. By default it was reduce mean. • TAN: Use tanh for the generator output, as well as the range [-1, 1] for the discriminator input. By default it was sigmoid and a discriminator input range of [0, 1]. • EPS: Use a bigger epsilon of 2e-5 for generator batch normalization. By default it was 1e-5 in TensorFlow. • ALL: Apply all the above differences together. In the ablation study, the CIN experiment obtained the worst FID score. Combined with OPT, the CIN results were improved to the same level as the others, which is reasonable because the first block has three input channels, which becomes a bottleneck for the optimization. Hence, using OPT and CIN together performs as well as the others. Overall, the impact of these differences is minor according to the study on CIFAR10. E RECOMMENDED HYPERPARAMETER SETTINGS To make future GAN training simpler, we propose a set of best parameters for three setups: (1) Best parameters without any regularizer. (2) Best parameters with only one regularizer. (3) Best parameters with at most two regularizers. Table 7, Table 8 and Table 9 summarize the top 2 parameters for the SNDCGAN architecture, the ResNet19 architecture and the ResNet CIFAR architecture, respectively. Models are ranked according to the median FID score of five different random seeds with the fixed hyper-parameters in Table 1a. Note that ranking models according to the best FID score across seeds would achieve better but less stable results. Gaussian Process optimization hyper-parameters are not included in this table. For the ResNet19 architecture with at most two regularizers, we ran it only once due to the computational overhead. To show the model stability, we list the best FID score out of five seeds with the same parameters in the column best. Spectral normalization clearly outperforms the other normalizers on the SNDCGAN and ResNet CIFAR architectures, while on ResNet19 both layer normalization and spectral normalization work well. To visualize the FID score on each data set, Figure 8, Figure 9 and Figure 10 show examples generated by the GANs. We select the examples from the best FID run, and then increase the FID score for two more plots. F WHICH PARAMETERS REALLY MATTER? For each architecture and hyper-parameter we estimate its impact on the final FID. Figure 11 presents heatmaps for the hyperparameters, namely the learning rate, β1, β2, ndisc, and λ, for each combination of neural architecture and data set. G VARIATIONS OF MS-SSIM We used the MS-SSIM scorer from TensorFlow with the default power factors (Wang et al., 2003). Note that the default filter size for each scale layer is 11, so the minimum image edge is 11 × 2^4 = 176. To adapt it to the CELEBA-HQ-128 data set of size 128 × 128, we used the minimum of the filter size 11 and the image size in the last scale layer to allow the computation, following the previous work (Fedus et al., 2018).
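Relatedly, the FID–KID comparison of Appendix B reduces to a rank correlation plus a best-K set intersection; a minimal sketch, assuming aligned per-model score arrays (lower is better for both metrics), could look as follows.

```python
import numpy as np
from scipy import stats

def best_k_overlap(fid_scores, kid_scores, ks=(5, 10, 20, 50, 100)):
    # Spearman rank correlation between the two metrics, plus the size of the
    # intersection of the "best K by FID" and "best K by KID" model sets.
    rho = stats.spearmanr(fid_scores, kid_scores).correlation
    by_fid = np.argsort(fid_scores)  # ascending: best models first
    by_kid = np.argsort(kid_scores)
    overlaps = {k: len(set(by_fid[:k]) & set(by_kid[:k])) for k in ks}
    return rho, overlaps
```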
1. What are the primary performance-affecting aspects of generative adversarial networks (GANs)? 2. How do hyperparameters and architectural components affect GAN performance? 3. What are the most salient metrics for evaluating GAN performance? 4. How does the choice of activation function affect GAN performance? 5. What are some challenges in comparing and reproducing results in GAN research? 6. How can we improve the visual parsing of graphs in GAN research papers? 7. What are some best practices for sharing source code in GAN research papers while maintaining anonymity during the review process?
Review
Review This paper seems to be an exposition on the primary performance-affecting aspects of generative adversarial networks (GANs). This can possibly affect our understanding of GANs, helping practitioners get the most out of their applications, and perhaps leading to innovations that positively affect GAN performance. Normally, expositions such as this I find difficult to recommend for publication. In these times, one can find "best practices" with a reasonable amount of rigor on data science blogs and such. An exposition that I would recommend for publication would need to exhibit a high sense of depth and rigor for me to deem it publication-worthy. This paper, for me, achieves this level of quality. The authors start off by giving a precise, constrained list of hyperparameters and architectural components that they would explore. This is listed in the title and explained in detail in the beginning of the paper. The authors are right in explaining that they could not cover all hyperparameters and chose what I feel are quite salient ones. My one ask would have been a survey of how activations might affect performance. I sense that everyone has settled upon LeakyReLUs for internal layers, but a survey of that work and experimentation within the authors' framework would have been nice. The authors then explain the metrics for evaluation and datasets. The datasets offered a healthy variety for typical image recognition tasks. It would be interesting to see what these metrics would reveal when applied to other types of data (e.g. scientific images). The authors explain, with graphs, the results of the loss, normalization, and architectures. I feel the discussion on loss was rushed, and I gained no insight on what the authors thought was a prominent difference between the three losses studied. Perhaps the authors had no salient observations for loss, but explicitly stating such would be useful to the reader. The only observation I gained as far as this is that non-saturating loss would possibly be stable across various datasets. Regularization and normalization are discussed in much more detail, and I think the authors made helpful and interesting observations, such as the benefits of spectral normalization and the fact that batch normalization in the discriminator might be a harmful thing. These are good takeaways that could be useful to a vast number of GANs researchers. For architectures to be a main pillar of the paper, I feel that this area could have been explored in greater detail. I feel that this discussion devolved into a discussion, again, about normalization rather than the architectural differences in performance. Unless I am misunderstanding something, it seems that the authors simply tested one more architecture, for the express purpose of testing whether their observations about normalization would hold. As a bonus, the authors bring up some problems they had in making comparisons and reproducing results. I think this is an extremely important discussion to have, and I am glad that the authors detailed the obstacles in their journey. Hopefully this will inspire other researchers to avoid adding to the complications in this field. The graphs were difficult to parse. I was able to make them out, but perhaps separating the top row (FID and diversity graphs) into separate figures, separate lines, or something would have reduced some confusion.
In addition, different charts presenting only one loss function, with their spectral normalization and gradient penalty variants, would have made the effects of the normalization more obvious on the FID distribution graphs. If this can be changed before publication, I would strongly suggest it. I appreciate that the authors provided source code via GitHub. However, in the future, the authors should be careful to provide an anonymous repository for review purposes. I had to be careful not to allow myself to focus on the author names which are prominent in the repository readme, and one of whom has his/her name in the GitHub URL itself. I didn't immediately recognize the names and thus it was easy for me not to retain them or focus on them. However, if it had been otherwise, it might have risked biasing the review. In all, I think this is a good and useful paper from which I have learned and to which I will refer in the future as I continue my research into GANs and VAEs. I would suggest changing the title to be more appropriate and accurate (the researchers are primarily focused on showing the positive and negative effects of normalization across various loss functions and architectures). But altogether, I believe this is a paper worth publishing at ICLR.
ICLR
Title Learning Disconnected Manifolds: Avoiding The No GAN's Land by Latent Rejection Abstract Standard formulations of GANs, where a continuous function deforms a connected latent space, have been shown to be misspecified when fitting disconnected manifolds. In particular, when covering different classes of images, the generator will necessarily sample some low quality images in between the modes. Rather than modify the learning procedure, a line of works aims at improving the sampling quality from trained generators. Thus, it is now common to introduce a rejection step within the generation procedure. Building on this, we propose to train an additional network and transform the latent space via an adversarial learning of importance weights. This idea has several advantages: 1) it provides a way to inject disconnectedness into any GAN architecture, 2) since the rejection happens in the latent space, it avoids going through both the generator and the discriminator, saving computation time, 3) this importance weights formulation provides a principled way to reduce the Wasserstein distance to the target distribution. We demonstrate the effectiveness of our method on different datasets, both synthetic and high dimensional. 1 INTRODUCTION GANs (Goodfellow et al., 2014) are an effective way to learn complex and high-dimensional distributions, leading to state-of-the-art models for image synthesis in both unconditional (Karras et al., 2019) and conditional settings (Brock et al., 2019). However, it is well-known that a single generator with a unimodal latent variable cannot recover a distribution composed of disconnected sub-manifolds (Khayatkhoei et al., 2018). This leads to a common problem for practitioners: the necessary existence of very low quality samples when covering different modes. This is formalized by Tanielian et al. (2020), who refer to this area as the no GAN's land and provide impossibility theorems on the learning of disconnected manifolds with standard formulations of GANs. Fitting a disconnected target distribution requires an additional mechanism inserting disconnectedness in the modeled distribution. A first solution is to add some expressivity to the model: Khayatkhoei et al. (2018) propose to train a mixture of generators while Gurumurthy et al. (2017) make use of a multi-modal latent distribution. A second solution is to improve the quality of a trained generative model by avoiding its poorest samples (Tao et al., 2018; Azadi et al., 2019; Turner et al., 2019; Grover et al., 2019; Tanaka, 2019). This second line of research relies heavily on a variety of Monte-Carlo algorithms, such as Rejection Sampling or Metropolis-Hastings. These methods aim at sampling from a target distribution, while having only access to samples generated from a proposal distribution. This idea was successfully applied to GANs, using the previously learned generative distribution as the proposal distribution. However, one of the main drawbacks is that Monte-Carlo algorithms only guarantee sampling from the target distribution under strong assumptions. First, we need access to the density ratios between the proposal and target distributions, or equivalently to a perfect discriminator (Azadi et al., 2019). Second, the support of the proposal distribution must fully cover the one of the target distribution, which means no mode collapse.
This is known to be very demanding in high dimension since the intersection of supports between the proposal and target distributions is likely to be negligible (Arjovsky and Bottou, 2017, Lemma 3). In this setting, an optimal discriminator would give null acceptance probabilities for almost any generated point, leading to a lower performance. To tackle the aforementioned issue, we propose a novel method aiming at reducing the Wasserstein distance between the previously trained generative model and the target distribution. This is done via the adversarial training of a third network that learns importance weights in the latent space. The goal is to learn the redistribution of mass of the modeled distribution that best fits the target distribution. To better understand our approach, we first consider a simple 2D motivational example where the real data lies on four disconnected manifolds. To approximate this, the generator splits the latent space into four distinct areas and maps data points located in the frontiers, areas in orange in Figure 1b, out of the true manifold (see Figure 1a). Our method consequently aims at learning latent importance weights that can identify these frontiers and simply avoid them. This is highlighted in Figure 1d where the importance weighter has identified these four frontiers. When sampling from the new latent distribution, we can now perfectly fit the mixture of four Gaussians (see Figure 1c). Our contributions are the following: • We discuss works improving the sampling quality of GANs and identify their limitations. • We propose a novel approach that directly modifies the latent space distribution. It provides a principled way to reduce the Wasserstein distance to the target distribution. • We thoroughly compare our method with a large set of previous approaches on a variety of datasets and distributions. We empirically show that our solution significantly reduces the computational cost of inference while demonstrating an equal efficiency. Notation. Before moving to the related work section, we shortly present the notation needed in the paper. The goal of the generator is to generate data points that are “similar” to samples collected from some target probability measure µ⋆. The measure µ⋆ is defined on a potentially high dimensional space R^D, equipped with the euclidean norm ‖ · ‖. To approach µ⋆, we use a parametric family of generative distributions where each distribution is the push-forward measure of a latent distribution Z and a continuous function modeled by a neural network. In most practical applications, the random variable Z defined on a low dimensional space R^d is either a multivariate Gaussian distribution or a uniform distribution. The generator is a parameterized class of functions from R^d to R^D, say G = {G_θ : θ ∈ Θ}, where Θ ⊆ R^p is the set of parameters describing the model. Each function G_θ takes input from Z and outputs “fake” observations with distribution µ_θ = G_θ♯Z. On the other hand, the discriminator is described by a family of functions from R^D to R, say D = {D_α : α ∈ Λ}, where Λ ⊆ R^Q. Finally, for any given distribution µ, we denote by S_µ its support. 2 RELATED WORK 2.1 DISCONNECTED MANIFOLD LEARNING: HOW TO TRAIN AND EVALUATE GANS Goodfellow et al. (2014) already stated that when training vanilla GANs, the generator could ignore modes of the target distribution: this is the mode collapse phenomenon.
2 RELATED WORK 2.1 DISCONNECTED MANIFOLD LEARNING: HOW TO TRAIN AND EVALUATE GANS Goodfellow et al. (2014) already stated that when training vanilla GANs, the generator could ignore modes of the target distribution: this is known as mode collapse. A significant step towards understanding this phenomenon was made by Arjovsky and Bottou (2017), who explained that the standard formulation of GANs leads to vanishing or unstable gradients. The authors proposed the Wasserstein GANs (WGANs) architecture (Arjovsky et al., 2017) where, in particular, discriminative functions are restricted to the class of 1-Lipschitz functions. WGANs aim at solving:

$$\sup_{\alpha \in \Lambda} \inf_{\theta \in \Theta} \; \mathbb{E}_{x \sim \mu_\star} D_\alpha(x) - \mathbb{E}_{z \sim Z} D_\alpha(G_\theta(z)) \quad (1)$$

The broader drawback of standard GANs is that, since any modeled distribution is the push-forward of a unimodal distribution by a continuous transformation, it consequently has a connected support. This means that when the generator covers multiple disconnected modes of the target distribution, it necessarily generates samples out of the real data manifold (Khayatkhoei et al., 2018). Consequently, any thorough evaluation of GANs should assess simultaneously both the quality and the variety of the generated samples. Sajjadi et al. (2018) argue that a single-valued metric such as the Inception Score (Salimans et al., 2016) or the Fréchet Inception Distance (Heusel et al., 2017) is thus not adequate to compare generative models. To solve this issue, the authors propose a Precision/Recall metric that aims at measuring both the mode dropping and the mode inventing. In the Improved Precision/Recall (Kynkäänniemi et al., 2019), the precision refers to the portion of generated points that belong to the target manifold, while the recall measures how much of the target distribution can be reconstructed by the model distribution. Building on this metric, Tanielian et al. (2020) highlighted the trade-off property of GANs by deriving upper bounds on the precision of standard GANs. To solve this problem, a common direction of research consists in over-parameterizing the generative model. Khayatkhoei et al. (2018) enforce diversity by using a mixture of generators while Gurumurthy et al. (2017) suggest that a mixture of Gaussians in the latent space is effective for learning diverse and limited data. 2.2 IMPROVING THE QUALITY OF TRAINED GENERATORS To better fit disconnected manifolds with standard GAN architectures, another line of research consists in inserting disconnectedness into a previously learned generative distribution µθ. Tanielian et al. (2020) proposed a heuristic to remove the no GAN's land (i.e. samples mapped out of the true manifold): rejecting data points with a high Jacobian Frobenius norm. Another possibility would be to use one of the different Monte-Carlo methods (Robert and Casella, 2013) and apply it to GANs. Building on classical inference theory, Azadi et al. (2019) suggest the use of rejection sampling to improve the quality of the proposal distribution µθ. One can compute density ratios using either a classifier trained from scratch or the discriminator obtained at the end of the training. Consequently, in this Discriminator Rejection Sampling (DRS), any generated data point x ∼ µθ is accepted with the following acceptance probability P_a:

$$P_a(x) = \frac{\mu_\star(x)}{M \mu_\theta(x)} \quad \text{where} \quad M = \max_{x \in S_{\mu_\theta}} \frac{\mu_\star(x)}{\mu_\theta(x)}, \quad (2)$$

where µ⋆ and µθ here refer to the density functions. Similarly, Turner et al. (2019) use the same density ratios and derive MH-GAN, an adaptation of the Metropolis-Hastings algorithm (Hastings, 1970), that improves the sampling from µθ.
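To make the DRS acceptance step of equation 2 concrete, here is a minimal sketch. The `density_ratio` argument is a hypothetical callable standing for any estimate of µ⋆(x)/µθ(x) (for instance derived from a fine-tuned discriminator); it is not part of a specific library.

```python
# Sketch of the DRS acceptance step (equation 2), assuming density ratio
# estimates r(x) ~ mu_star(x)/mu_theta(x) are available for generated samples.
import numpy as np

def drs_accept(x_batch, density_ratio, M, rng=np.random.default_rng(0)):
    """Return the subset of generated samples kept by rejection sampling."""
    r = density_ratio(x_batch)           # estimated ratios, shape (n,)
    p_accept = r / M                     # equation 2: P_a(x) = r(x) / M
    keep = rng.random(len(x_batch)) < p_accept
    return x_batch[keep]
```

In practice, M is unknown and is typically estimated as the largest ratio observed on a pilot batch of samples, which is one reason the acceptance rate is hard to control.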
Finally, Grover et al. (2019) use these density ratios r as importance weights and define an importance resampled generative model whose density is now defined by µ̂θ ∝ µθ × r(x). In order to perform discrete sampling from µ̂θ, the authors rely on the Sampling-Importance-Resampling (SIR) algorithm (Rubin, 1988; Liu and Chen, 1998). This defines a new distribution µ̂θ^SIR:

$$\hat{\mu}_\theta^{SIR}(x_i) = \frac{r(x_i)}{\sum_{j=1}^{n} r(x_j)} \quad \text{where} \quad x_1, \ldots, x_n \sim \mu_\theta^n.$$

Note that these algorithms rely on the same density ratios and an acceptance-rejection scheme. In Rejection Sampling, the acceptance rate is uncontrollable but sampling from µ⋆ is assured. With SIR and MH, the acceptance rate is controllable but sampling from µ⋆ is no longer guaranteed. 3 ADVERSARIAL LEARNING OF LATENT IMPORTANCE WEIGHTS 3.1 OUR APPROACH Similar to previous works, our method consists in improving the performance of a given generative model, post-training. Given a trained WGAN (Gθ, Dα), we now propose to learn importance weights in the latent space. To do so, we use a feed-forward neural network from R^d to R^+, say Ω = {wϕ : ϕ ∈ Φ}. The neural network wϕ is trained using an adversarial process with the discriminator Dα, whilst keeping the weights of Gθ frozen. We now want to solve the following:

$$\sup_{\alpha \in \Lambda} \inf_{\phi \in \Phi} \; \mathbb{E}_{x \sim \mu_\star} D_\alpha(x) - \mathbb{E}_{z \sim Z} \left[ w_\phi(z) \times D_\alpha(G_\theta(z)) \right] \quad (3)$$

Note that our formulation can also be plugged on top of many different objective functions. Interestingly, the use of the predictor wϕ defines a new latent space distribution whose density γ̂ is defined by γ̂(z) ∝ wϕ(z) × γ(z). Consequently, the newly defined modeled distribution µ̂θ is the push-forward µ̂θ = Gθ♯γ̂. The proposed method can be seen as minimizing the Wasserstein distance to the target distribution over an increased class of generative distributions. The network wϕ thus learns how to redistribute the mass of µθ such that µ̂θ is closer to µ⋆ in terms of Wasserstein distance. However, as in the field of counterfactual estimation, a naive optimization of importance weights by gradient descent can lead to trivial solutions. First, if, for example, the Wasserstein critic Dα outputs negative values for all generated samples, the network wϕ could simply learn to avoid the dataset and output 0 everywhere. To avoid this issue, we follow Swaminathan and Joachims (2015c) and scale the output of the discriminator such that the reward is always positive. A second problem comes from the fact that equation 3 can now be minimized not only by putting large importance weights wϕ(z) on the examples with high likelihoods Dα(G(z)), but also by maximizing the sum of the weights: this is the propensity overfitting (Swaminathan and Joachims, 2015a). To stabilize the optimization process, we consequently introduce two important regularization techniques: Self-normalization. Similarly to Swaminathan and Joachims (2015a), we advocate the use of a normalization of the importance weights. To be more precise, we enforce the expectation of the importance weights to be close to 1 by adding a penalty term. By doing so, we prohibit the propensity overfitting since the sum of the importance weights in the batch is bounded. Soft-clipping. To avoid cases where small areas of the latent space have very large wϕ(z) values, which would lead to mode collapse, we enforce a soft-clipping on the weights (Bottou et al., 2013; Grover et al., 2019). Note that this constraint on wϕ(z) could also be implemented with a bounded activation function on the final layer, such as a re-scaled sigmoid or tanh activation. Finally, we thus get the following objective function:

$$\sup_{\phi \in \Phi} \; \underbrace{\mathbb{E}_{z \sim Z} \left[ w_\phi(z) \left( D_\alpha(G_\theta(z)) - \nabla \right) \right]}_{\text{discriminator reward}} - \lambda_1 \underbrace{\left( \mathbb{E}_{z \sim Z} w_\phi(z) - 1 \right)^2}_{\text{self-normalization}} - \lambda_2 \underbrace{\mathbb{E}_{z \sim Z} \max\left(0, w_\phi(z) - m\right)^2}_{\text{soft-clipping}}, \quad (4)$$

where ∇ = min_{z∼Z} Dα(Gθ(z)), and λ1, λ2, and m are hyper-parameters (values displayed in the Appendix).
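As an illustration, a minimal PyTorch sketch (not the authors' code) of one update of wϕ under equation 4 could look as follows. The population minimum ∇ is approximated by the batch minimum, the module names are placeholders, and the hyper-parameter values mirror Appendix B for the MNIST-like datasets. In the full procedure (Appendix B), this step alternates with one update of Dα.

```python
# One gradient step on the importance weighter w_phi under equation 4.
# G (generator) and D (critic) are frozen torch modules; w maps latents
# to non-negative scalars.
import torch

lambda1, lambda2, m = 10.0, 2.0, 3.0

def weighter_loss(w, G, D, z):
    with torch.no_grad():
        d_out = D(G(z)).squeeze()
        reward = d_out - d_out.min()          # shift by the batch minimum so the reward stays positive
    wz = w(z).squeeze()                       # importance weights w_phi(z) >= 0
    gain = (wz * reward).mean()               # discriminator reward term
    self_norm = (wz.mean() - 1.0) ** 2        # keeps E[w_phi(z)] close to 1
    soft_clip = (torch.clamp(wz - m, min=0.0) ** 2).mean()  # penalizes w_phi(z) > m
    return -(gain - lambda1 * self_norm - lambda2 * soft_clip)  # negated for a minimizer

# Example step:
#   opt = torch.optim.Adam(w.parameters())
#   weighter_loss(w, G, D, torch.randn(batch, d)).backward(); opt.step()
```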
3.2 SAMPLING FROM THE NEW DISTRIBUTION As mentioned above, the scale and variance of the learned importance weights are actively controlled, as is done in counterfactual estimation (Bottou et al., 2013; Swaminathan and Joachims, 2015b; Faury et al., 2020). Doing so, we explicitly control the acceptance rates P_a(z) of the rejection sampling algorithm performed on γ̂, since for any given z ∼ Z, we have:

$$P_a(z) = \frac{\hat{\gamma}(z)}{m \gamma(z)} \quad \text{and} \quad \mathbb{E}_{Z} P_a(z) = \int_{\mathbb{R}^d} \frac{\hat{\gamma}(z)}{m \gamma(z)} \gamma(z) \, dz = \int_{\mathbb{R}^d} \frac{\hat{\gamma}(z)}{m} \, dz = \frac{1}{m},$$

where m is the maximum output of the importance weighter as defined in equation 4. We define as Latent Rejection Sampling (latentRS) the method that performs the rejection sampling algorithm on top of the learned importance weights. Since exact sampling from the distribution γ̂ is now tractable with a rejection sampling algorithm, we need to implement neither the Metropolis-Hastings nor the Sampling-Importance-Resampling algorithm. Inspired by the literature on latent space optimal transport (Salimans et al., 2018; Agustsson et al., 2019; Tanaka, 2019), we also propose a second method where we perform gradient ascent in the latent space. To be more precise, for any given sample in the latent space, we follow the path maximizing the learned importance weights. This method is denoted latent Gradient Ascent (latentGA). In high dimension, similarly to Tanaka (2019, Algorithm 2), gradients are projected to restrict z to the training support. Note that the learning rate and the number of updates used for this method are hyper-parameters that need to be tuned.
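As a sketch under these definitions, the two procedures can be written as follows. Since the self-normalization penalty drives E[wϕ(z)] towards 1, the acceptance probability γ̂(z)/(mγ(z)) is approximated below by wϕ(z)/m; the high-dimensional support projection of Tanaka (2019, Algorithm 2) is omitted for brevity, and the step count and learning rate follow Appendix B.

```python
# latentRS and latentGA, assuming w and G are trained torch modules.
import torch

def latent_rs(w, n, d, m=3.0):
    """latentRS: rejection sampling in the latent space with P_a(z) = w_phi(z)/m."""
    kept = []
    while sum(t.shape[0] for t in kept) < n:
        z = torch.randn(4 * n, d)                           # proposals z ~ Z
        with torch.no_grad():
            accept = torch.rand(z.shape[0]) < w(z).squeeze() / m
        kept.append(z[accept])
    return torch.cat(kept)[:n]

def latent_ga(w, z, n_steps=10, eps=0.1):
    """latentGA: follow the gradient ascent path of w_phi from each latent z."""
    z = z.clone().requires_grad_(True)
    for _ in range(n_steps):
        w(z).sum().backward()                               # d w_phi / d z for each latent
        with torch.no_grad():
            z += eps * z.grad
        z.grad = None
    return z.detach()

# Usage: x = G(latent_rs(w, 64, d))  or  x = G(latent_ga(w, torch.randn(64, d)))
```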
3.3 ADVANTAGES OF THE PROPOSED APPROACH We now discuss, in detail, the flaws of previous Monte-Carlo based approaches: 1) Computational cost. By using sampling algorithms in the latent space, we avoid going through both the generator and the discriminator, leading to a significant computational speed-up. This is of particular interest when dealing with high-dimensional spaces since we do not need to pass through deep CNN generators and discriminators (Brock et al., 2019). 2) Existence of density functions. All Monte-Carlo based methods assume that both µ⋆ and µθ are probability distributions with associated density functions. However, in high dimension, the hypothesis that data tend to lie near a low dimensional manifold (Fefferman et al., 2016) is now commonly accepted. Besides, it is often the case that GANs are defined as the push-forward from a much lower dimensional space, that is d ≪ D. In that case, neither µ⋆ nor µθ has a density function in R^D. Note that our method, based on the Wasserstein distance, does not require this assumption. 3) Covering of the support Sµ⋆. First, Monte-Carlo methods are well-known to suffer from the curse of dimensionality (Mengersen et al., 1996; Robert and Casella, 2013). Besides, in the context of GANs, Arjovsky and Bottou (2017, Theorem 2.2) have shown that the intersection Sµ⋆ ∩ Sµθ is likely to be a negligible set under µθ. In this specific case, the density ratios would evaluate close to 0 almost everywhere on Sµθ, increasing the time complexity. More generally, Monte-Carlo based methods tend to avoid any area within Sµθ \ Sµ⋆, which could lead to a deteriorated sampling quality. To better illustrate this phenomenon, we represent in Figure 2a a synthetic experiment where Sµθ does not recover Sµ⋆ (by slightly shifting the mean of two modes after training the WGAN). In this setting, we clearly see in Figure 2b that Monte-Carlo based methods worsen the WGAN: when Sµ⋆ ⊄ Sµθ, density ratios focus on local information and lead to non-optimal solutions. On the contrary, our method learns the optimal re-weighting of mass within the support Sµθ. Interestingly, on this synthetic dataset, it significantly reduces the Wasserstein distance to µ⋆, see Figure 2c. 4) Non-optimal discriminators. Knowing that optimal discriminators would lead to non-optimal objectives (very low acceptance probabilities), previous approaches made sure that their obtained classifier is sufficiently far from the optimal classifier (Section 3.2 in Azadi et al. (2019)). Authors have thus come up with heuristics to approximate density ratios: for example, Azadi et al. (2019) fine-tune a regularized discriminator, while Grover et al. (2019) use a neural network pre-trained on ImageNet classification and only fine-tune the final layers for the binary classification task. In our method, on the contrary, we are still looking for the discriminator maximizing the Integral Probability Metric (Müller, 1997) in equation 3, linked to optimal transport. 4 EXPERIMENTS In the following section, we illustrate the efficiency of the proposed methods, latentRS and latentGA, on synthetic datasets. Then, we compare their performances with previous works on image datasets. On these image generation tasks, we empirically stress that both the latentRS and latentGA methods slightly surpass density-ratio based methods while significantly reducing the time complexity. 4.1 EVALUATION METRICS To measure the performance of GANs when dealing with low dimensional applications - as with synthetic datasets - we equip our space with the standard Euclidean distance. However, for high dimensional applications such as image generation, Brock et al. (2019); Kynkäänniemi et al. (2019) have shown that embedding images into a feature space with a pre-trained convolutional classifier provides more semantic information. In this setting, we consequently use the Euclidean distance between the images' embeddings from a classifier. For a pair of images (a, b), we define the distance d(a, b) as d(a, b) = ‖φ(a) − φ(b)‖₂, where φ is a pre-softmax layer of a supervised classifier trained specifically on each dataset. Doing so, the metrics more easily separate images sampled from the target distribution µ⋆ from the ones sampled from the distribution µθ. We compare the performance of the different methods with a panel of evaluation metrics. To begin with, we use the Improved Precision/Recall (Improved PR) metric (Kynkäänniemi et al., 2019), a more robust version of the Precision/Recall metric which was first applied to the context of GANs by Sajjadi et al. (2018). The Improved PR metric is based on a non-parametric estimation of the support of both generated and real distributions using k-Nearest Neighbors. Besides, we also report two well-known metrics: the Earth Mover's Distance (EMD), the discrete version of the Wasserstein distance, and the average Hausdorff distance (Hausd.). EMD is a distance between probability distributions while Hausd. focuses on support estimation. These two measures are particularly interesting in GANs since one can compute them on collections of discrete points. Let X = (x1, . . . , xn) and Y = (y1, . . . , yn) be two collections of n data points and S be the set of permutations of [1, n]; then:

$$\mathrm{EMD}(X, Y) = \min_{\sigma \in S} \sum_{i=1}^{n} \left\| x_i - y_{\sigma(i)} \right\| \quad \text{and} \quad \text{average } \mathrm{Hausd}(X, Y) = \frac{1}{n} \sum_{x_i \in X} \min_{y_j \in Y} \left\| x_i - y_j \right\|.$$
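As a sketch of these two discrete metrics (assuming NumPy arrays of point coordinates, or classifier embeddings φ(a) in the image case), the optimal matching in the EMD can be solved with the Hungarian algorithm:

```python
# EMD as an optimal assignment between two point clouds, and the average
# Hausdorff distance, following the definitions above.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def emd(x, y):
    """min over permutations sigma of sum_i ||x_i - y_sigma(i)||."""
    cost = cdist(x, y)                      # pairwise Euclidean distances
    row, col = linear_sum_assignment(cost)  # optimal one-to-one matching
    return cost[row, col].sum()

def avg_hausdorff(x, y):
    """(1/n) * sum_i min_j ||x_i - y_j||."""
    return cdist(x, y).min(axis=1).mean()
```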
Besides, we argue that the Wasserstein distance should be a metric of reference when evaluating WGANs since it is directly linked to their objective function. Finally, for completeness, we report the FID (Heusel et al., 2017). 4.2 MIXTURE OF GAUSSIANS Further experiments were run on synthetic datasets with mixtures of 2D Gaussians, with either 4, 9, 16 or 25 components. When dealing with 2D mixtures of Gaussians, we used MLPs with 4 hidden layers of 30 nodes for the generator, the discriminator, and the importance weighter. As expected in this setting, a standard WGAN-GP combined with a connected latent space (i.e. multivariate normal or uniform) necessarily generates samples in between two modes. Both Figure 1 and Figure 2a have stressed how the importance weighter can truncate latent space areas that are mapped outside the real data manifold and improve the EMD metric. More figures and details on the different evaluation metrics are given in the Appendix. 4.3 IMAGE DATASETS MNIST, F-MNIST and Stacked MNIST. We further study the efficiency of the proposed methods on three image datasets: MNIST (LeCun et al., 1998), FashionMNIST (F-MNIST) (Xiao et al., 2017), and Stacked MNIST (Metz et al., 2016), a highly disconnected dataset with 1,000 classes. For MNIST, F-MNIST and Stacked MNIST, we follow Khayatkhoei et al. (2018) and use a standard CNN architecture composed of a sequence of 3×3 convolution layers with ReLU activations and nearest-neighbor upsampling. To exhibit the efficiency of the proposed methods in different settings, we use a hinge loss with gradient penalty (Hinge-GP) (Miyato et al., 2018) on MNIST and F-MNIST, and a Wasserstein loss with gradient penalty (Gulrajani et al., 2017) on Stacked MNIST. For the importance weighter wϕ, we use an MLP architecture with fully-connected layers and ReLU activations. wϕ has 4 hidden layers, each with a width four times larger than the dimension of the latent space. For completeness, we compare latentRS and latentGA with previous works leveraging density ratios. In particular, we implemented a wide set of post-processing methods for GANs: DRS (Azadi et al., 2019), MH-GAN (Turner et al., 2019), SIR-GAN (Grover et al., 2019) and DOT (Tanaka, 2019). Similarly to Azadi et al. (2019), we take the discriminator at the end of the adversarial training, fine-tune it with the binary cross-entropy loss, and select the best model in terms of EMD. During fine-tuning, we keep the gradient penalty or spectral normalization; otherwise, the classifier easily separates real from generated data, which leads to a degraded performance, as shown in Figure 2a. Following Azadi et al. (2019); Grover et al. (2019), we do not include an explicit mechanism to calibrate the classifier. To the best of our knowledge, we are the first to empirically compare such a wide variety of Monte-Carlo methods on different datasets and metrics. The main results of this comparison are shown in Table 1 (see Appendix for more details). We see that, except for Stacked MNIST, both of our methods outperform every other method on the precision, average Hausdorff, and EMD metrics. Interestingly, latentGA seems to be the strongest one. In Figure 3, we show how samples evolve when performing latent gradient ascent on the importance weights.
As expected, as the importance weights increase, the quality of the generated images significantly improves. Besides, a strong contribution of the paper also resides in notably speeding up the inference procedure. As shown in Table 1, the inference time for a given data point is 25 times lower with latentRS than with SIR-GAN. CelebA is a large-scale dataset of faces covering a variety of poses. We train the models at 64×64 resolution. Following recent studies (Brock et al., 2019), the discriminator is trained with the hinge loss and spectral normalization (Miyato et al., 2018). For the generator network, residual connections (He et al., 2016) are used alongside self-modulation layers (Chen et al., 2019). The importance weighter is a simple 4 hidden-layer MLP with a width 10 times larger than the latent space dimension. On this one-class high-dimensional dataset, the importance weighter still managed to learn some meaningful features. First, Figure 3 highlights a subtle improvement of the generated images when performing latentGA. Second, when ranking generated images with the importance weighter and comparing the top-5% vs the worst-5% in Figure 4, we observe some differences in quality. However, on a broader scale, the importance weighter does not bring a clear improvement on either the EMD or the Hausdorff metric. Interestingly, this is also the case for all of the different rejection methods (see Appendix for details). We argue that in this one-class generation task, post-processing the generated samples is not as efficient as in a multi-modal setting (e.g. MNIST, F-MNIST, Stacked MNIST). Intuitively, it is a much easier task to remove generated samples that are out of the target manifold than to discriminate between samples that already share similarities with training samples. It further stresses that this family of methods is useful if one needs to insert disconnectedness in the modeled distribution. However, when the target distribution is a single-class distribution with a connected support, their efficiency decreases. To illustrate this, we added in the Appendix a figure highlighting samples generated by a trained WGAN on CelebA 64×64, ranked by the discriminator. We observe that, on these images, the discriminator does not correlate well with human judgement, preventing the importance weighter from learning a meaningful signal. 5 CONCLUSION In this paper, we provide insights on improving the learning of disconnected manifolds with GANs. Given the existence of the no GAN's land, latent space areas mapped outside the target manifold, we provide two methods to truncate them. Contrary to previous works focusing on learning density ratios in the output space, both of our methods are based on adversarially training a neural network that learns importance weights in the latent space. On the task of image generation, both of the proposed methods were shown to be empirically efficient while significantly reducing the inference time (latentRS by a factor of about 20) when compared to density-ratio based methods. This paper has specifically stressed the efficiency of post-training methods when dealing with highly disconnected target distributions. However, when dealing with single-class connected distributions or class-conditional generative models, the efficiency of such methods is not clear. We argue that one of the reasons is that, once the generator maps correctly inside the target manifold, it is a much harder task to discriminate between realistic and fake samples.
A potential future work would therefore be to investigate how we can help the discriminator better classify among the set of generated images. A EVALUATION DETAILS Precision/recall metric. For the precision-recall metric, we use the algorithm from Khayatkhoei et al. (2018). Namely, when comparing the set of real data points (x1, ..., xn) with the set of fake data points (y1, ..., yn): A point xi has a recall r(xi) = 1 if there exists yj such that ‖xi − yj‖ ≤ ‖yj − yj(k)‖, where yj(k) is the k-th nearest neighbor of yj among the fake points. The recall is then the average of the individual recalls: (1/n) Σi r(xi). A point yi has a precision p(yi) = 1 if there exists xj such that ‖yi − xj‖ ≤ ‖xj − xj(k)‖, where xj(k) is the k-th nearest neighbor of xj among the real points. The precision is then the average of the individual precisions: (1/n) Σi p(yi). Images' embeddings. As mentioned earlier, for images we use the distance between embeddings of the images in a neural network trained specifically for classification on the corresponding dataset. For Stacked MNIST, we use a MNIST classifier on each output channel and simply stack the three embedding vectors. Parameters. For all datasets, we use k = 3 (3rd nearest neighbor). For MNIST, F-MNIST and Stacked MNIST, we use a set of n = 2048 points. For CelebA, we use a set of n = 1024 points. This also holds for the other metrics used: EMD and average Hausd. For FID on CelebA, we use the standard evaluation protocol with Inception Net and 50k data points. B HYPER-PARAMETERS SIR: Model selection: we fine-tune the discriminator from the end of the adversarial training with a binary cross-entropy loss and select the best model in terms of EMD. We then use the Sampling-Importance-Resampling algorithm with sets of n = 40 points. DRS: Model selection: we fine-tune the discriminator from the end of the adversarial training with a binary cross-entropy loss and select the best model in terms of EMD. We use the standard rejection sampling algorithm, without artificially increasing the acceptance rate as Azadi et al. (2019) do. We use a regularized discriminator (with gradient penalty or spectral normalization), which avoids the acceptance rate falling to almost zero. MH-GAN: Model selection: we fine-tune the discriminator from the end of the adversarial training with a binary cross-entropy loss and select the best model in terms of EMD. We use the independence Metropolis-Hastings algorithm with Markov chains of 40 points, and select the last point. DOT: Model selection: we fine-tune the discriminator from the end of the adversarial training with the dual Wasserstein loss and select the best model in terms of EMD. We then perform a projected gradient descent as described in Tanaka (2019) with SGD, with Nsteps = 10 and ε = 0.01. LRS: For MNIST, F-MNIST and Stacked MNIST, we use the same hyper-parameters: λ1 = 10, λ2 = 2 and m = 3. wϕ is a standard MLP with 4 hidden layers, each having 600 nodes (6× the dimension of the latent space), and ReLU activations. The output layer is 1-dimensional with a ReLU activation. For CelebA, we use: λ1 = 50, λ2 = 2 and m = 3. wϕ is a standard MLP with 4 hidden layers, each having 1280 nodes (10× the dimension of the latent space), and ReLU activations. The output layer is 1-dimensional with a ReLU activation. For the adversarial training of the importance weights, we use the discriminator from the end of the standard adversarial training (generator vs discriminator). We then alternate between 1 step of wϕ and 1 update of Dα. LGA: We use the same neural network as in LRS.
The hyper-parameters for this method are the same as for DOT: the number of gradient ascent steps Nsteps and the learning rate ε. We choose Nsteps = 10 and ε = 0.1. C VISUALIZATION AND RESULTS FOR SYNTHETIC DATASETS [Figure: (a)-(c) WGAN samples for the mixtures of 9, 16, and 25 Gaussians; (d)-(f) heatmaps in the latent space of the distance between a generated sample and its nearest neighbor for the same three mixtures.] D COMPARISONS WITH CONCURRENT METHODS ON SYNTHETIC AND REAL-WORLD DATASETS
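For concreteness, the kNN-based precision/recall estimate described in Appendix A can be sketched as follows (with k = 3 as in the paper; the array inputs are raw coordinates or classifier embeddings):

```python
# Improved precision/recall via k-NN balls, following Appendix A: a fake
# point counts as precise if it falls inside some real point's k-NN ball,
# and a real point counts as recalled symmetrically.
import numpy as np
from scipy.spatial.distance import cdist

def knn_radii(pts, k=3):
    """Distance from each point to its k-th nearest neighbor in the same set."""
    d = cdist(pts, pts)
    return np.sort(d, axis=1)[:, k]  # column 0 is the point itself

def precision_recall(real, fake, k=3):
    real_r, fake_r = knn_radii(real, k), knn_radii(fake, k)
    d = cdist(fake, real)                                   # shape (n_fake, n_real)
    precision = (d <= real_r[None, :]).any(axis=1).mean()   # fake inside real manifold
    recall = (d.T <= fake_r[None, :]).any(axis=1).mean()    # real covered by fakes
    return precision, recall
```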
1. What is the main contribution of the paper, and how does it address the problem of disconnected support in GANs?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its formulation and empirical evaluation?
3. Do you have any concerns or questions about the arguments made in the paper, such as the claim regarding the adversarial training procedure?
4. How does the paper compare to other works that address mode collapse in GANs, such as PAC-GAN and the algorithms considered in Table 5 of the linked paper?
5. Are there any unsubstantiated claims or unclear aspects in the paper, such as the remark about optimal transport at the bottom of page 5?
Review
Review Summary: The paper proposes a new algorithm for improved sampling from GANs. Since GANs are continuous functions that act on a connected latent space, they will have trouble learning distributions whose support is disconnected (e.g., clustered data). The proposed method tries to fix this issue and is motivated by rejection sampling. However, instead of using density-based algorithms for rejecting samples, the authors take a fixed pre-trained generative model and train a neural network that learns to reject samples from the latent space.

Significance: The problem is well motivated and seems significant. Learning distributions with disjoint support can be difficult using traditional GAN training, and this paper addresses this problem by learning which areas in the latent space must be avoided.

Quality: While the problem is significant, I find that the proposed method has some weaknesses in its formulation and empirical evaluation. Please see the section "Cons" below.

Originality: The proposed method seems sufficiently novel, but I am not familiar enough with this area to know if something closely related has been done before.

Pros: On synthetic data, the proposed method can capture modes better than the considered baselines. The proposed technique produces better quality samples on GANs trained on CelebA.

Cons: While some of the experiments are convincing, I do not buy some of the arguments made in the paper. Specifically, under eqn (3), it is argued that if the GAN G is kept fixed and the following adversarial training is performed for a classifier wϕ and discriminator Dα, then the procedure sup_{α∈A} inf_{ϕ∈Φ} E_{x∼µ} Dα(x) − E_{z∼Z}[wϕ(z) · Dα(G(z))] will produce a new distribution on z which is γ̂(z) ∝ γ(z)wϕ(z). There is no proof of this claim, and I further suspect that this claim is not correct. There exist several baseline methods that consider the problem of mode collapse. Examples like PAC-GAN [Lin et al 2017] have been shown to be effective, and are also provably good. Other examples include those considered in Table 5 of https://arxiv.org/pdf/2010.00654v1.pdf. For the Stacked-MNIST dataset, the algorithms listed in that Table 5 seem to have much better mode coverage than the algorithms in Table 1 of this submission. While the paper I have linked is very recent, the baseline algorithms considered in it are published works from 2017 onwards. Comparing with these algorithms would make this paper much stronger. There are a lot of unsubstantiated claims. Some include: modifying the loss in equation 3 gives a new distribution γ̂ defined underneath equation 3; at the bottom of page 5, the authors remark "In our method, on the contrary, we are still looking for the discriminator maximizing the Integral Probability Metric (Müller, 1997) in equation 3, linked to optimal transport." In equation 3, the authors take the dual optimization problem for estimating the Wasserstein distance and modify it. With this modification, optimizing equation 3 will no longer give the Wasserstein distance.

Minor: Should there be a quantifier in eqn (3) restricting Dα to 1-Lipschitz models?
ICLR
Title Learning Disconnected Manifolds: Avoiding The No Gan's Land by Latent Rejection Abstract Standard formulations of GANs, where a continuous function deforms a connected latent space, have been shown to be misspecified when fitting disconnected manifolds. In particular, when covering different classes of images, the generator will necessarily sample some low quality images in between the modes. Rather than modify the learning procedure, a line of works aims at improving the sampling quality from trained generators. Thus, it is now common to introduce a rejection step within the generation procedure. Building on this, we propose to train an additional network and transform the latent space via an adversarial learning of importance weights. This idea has several advantages: 1) it provides a way to inject disconnectedness on any GAN architecture, 2) since the rejection happens in the latent space, it avoids going through both the generator and the discriminator saving computation time, 3) this importance weights formulation provides a principled way to reduce the Wasserstein’s distance to the target distribution. We demonstrate the effectiveness of our method on different datasets, both synthetic and high dimensional. 1 INTRODUCTION GANs (Goodfellow et al., 2014) are an effective way to learn complex and high-dimensional distributions, leading to state-of-the-art models for image synthesis in both unconditional (Karras et al., 2019) and conditional settings (Brock et al., 2019). However, it is well-known that a single generator with a unimodal latent variable cannot recover a distribution composed of disconnected sub-manifolds (Khayatkhoei et al., 2018). This leads to a common problem for practitioners: the necessary existence of very-low quality samples when covering different modes. This is formalized by Tanielian et al. (2020) which refers to this area as the no GAN’s land and provides impossibility theorems on the learning of disconnected manifolds with standard formulations of GANs. Fitting a disconnected target distribution requires an additional mechanism inserting disconnectedness in the modeled distribution. A first solution is to add some expressivity to the model: Khayatkhoei et al. (2018) propose to train a mixture of generators while Gurumurthy et al. (2017) make use of a multi-modal latent distribution. A second solution is to improve the quality of a trained generative model by avoiding its poorest samples (Tao et al., 2018; Azadi et al., 2019; Turner et al., 2019; Grover et al., 2019; Tanaka, 2019). This second line of research relies heavily on a variety of Monte-Carlo algorithms, such as Rejection Sampling or the Metropolis-Hastings. These methods aim at sampling from a target distribution, while having only access to samples generated from a proposal distribution. This idea was successfully applied to GANs, using the previously learned generative distribution as a proposal distribution. However, one of the main drawback is that Monte-Carlo algorithms only guarantee to sample from the target distribution under strong assumptions. First, we need access to the density ratios between the proposal and target distributions or equivalently to a perfect discriminator (Azadi et al., 2019). Second, the support of the proposal distribution must fully cover the one of the target distribution, which means no mode collapse. 
This is known to be very demanding in high dimension since the intersection of supports between the proposal and target distribution is likely to be negligible (Arjovsky and Bottou, 2017, Lemma 3). In this setting, an optimal discriminator would give null acceptance probabilities for almost any generated points, leading to a lower performance. To tackle the aforementioned issue, we propose a novel method aiming at reducing the Wasserstein distance between the previously trained generative model and the target distribution. This is done via the adversarial training of a third network that learns importance weights in the latent space. The goal is to learn the redistribution of mass of the modeled distribution that best fits the target distribution. To better understand our approach, we first consider a simple 2D motivational example where the real data lies on four disconnected manifolds. To approximate this, the generator splits the latent space into four distinct areas and maps data points located in the frontiers, areas in orange in Figure 1b, out of the true manifold (see Figure 1a). Our method consequently aims at learning latent importance weights that can identify these frontiers and simply avoid them. This is highlighted in Figure 1d where the importance weighter has identified these four frontiers. When sampling from the new latent distribution, we can now perfectly fit the mixture of four gaussians (see Figure 1c). Our contributions are the following: • We discuss works improving the sampling quality of GANs and identify their limitations. • We propose a novel approach that directly modifies the latent space distribution. It provides a principled way to reduce the Wasserstein distance to the target distribution. • We thorougly compare our method with a large set of previous approaches on a variety of datasets and distributions. We empirically show that our solution significantly reduces the computational cost of inference while demonstrating an equal efficiency. Notation. Before moving to the related work section, we shortly present notation needed in the paper. The goal of the generator is to generate data points that are “similar” to samples collected from some target probability measure µ?. The measure µ? is defined on a potentially high dimensional spaceRD, equipped with the euclidean norm ‖ · ‖. To approach µ?, we use a parametric family of generative distribution where each distribution is the push-forward measure of a latent distribution Z and a continuous function modeled by a neural network. In most of all practical applications, the random variable Z defined on a low dimensional space Rd is either a multivariate Gaussian distribution or uniform distribution. The generator is a parameterized class of functions from Rd to RD, say G = {Gθ : θ ∈Θ}, where Θ ⊆ Rp is the set of parameters describing the model. Each function Gθ takes input from Z and outputs “fake” observations with distribution µθ = Gθ ]Z. On the other hand, the discriminator is described by a family of functions from RD to R, say D = {Dα : α ∈Λ}, Λ ⊆RQ, where each Dα . Finally, for any given distribution µ , we note Sµ its support. 2 RELATED WORK 2.1 DISCONNECTED MANIFOLD LEARNING: HOW TO TRAIN AND EVALUATE GANS Goodfellow et al. (2014) already stated that when training vanilla GANs, the generator could ignore modes of the target distribution: this is the mode collapse. 
A significant step towards understanding this phenomenon was made by Arjovsky and Bottou (2017) who explained that the standard formulation of GANs leads to vanishing or unstable gradients. The authors proposed the Wasserstein GANs (WGANs) architecture (Arjovsky et al., 2017) where, in particular, discriminative functions are restricted to the class of 1-Lipschitz functions. WGANs aim at solving: sup α∈A inf θ∈Θ Ex∼µ? Dα(x)−Ez∼Z Dα(Gθ (z))) (1) The broader drawback of standard GANs is that, since any modeled distribution is the push-forward of a unimodal distribution by a continuous transformation, it consequently has a connected support. This means that when the generator covers multiple disconnected modes of the target distribution, it necessarily generates samples out of the real data manifold (Khayatkhoei et al., 2018). Consequently, any thorough evaluation of GANs should assess simultaneously both the quality and the variety of the generated samples. Sajjadi et al. (2018) argue that a single-digit metric such as the Inception Score (Salimans et al., 2016) or the Frechet Inception distance (Heusel et al., 2017) is thus not adequate to compare generative models. To solve this issue, the authors propose a Precision/Recall metric that aims at measuring both the mode dropping and the mode inventing. In the Improved Precision/Recall (Kynkäänniemi et al., 2019), the precision refers to the portion of generated points that belongs to the target manifold, while the recall measures how much of the target distribution can be re-constructed by the model distribution. Building on this metric, Tanielian et al. (2020) highlighted the trade-off property of GANs deriving upper-bounds on the precision of standard GANs. To solve this problem, a common direction of research consists in over-parameterizing the generative model. Khayatkhoei et al. (2018) enforce diversity by using a mixture of generators while Gurumurthy et al. (2017) suggest that a mixture of Gaussians in the latent space is efficient to learn diverse and limited data. 2.2 IMPROVING THE QUALITY OF TRAINED GENERATORS To better fit disconnected manifolds with standard GANs architectures, another line of research consists in inserting disconnectedness into a previously learned generative distribution µθ . Tanielian et al. (2020) proposed an heuristic to remove the no GAN’s land (i.e. samples mapped out of the true manifold): rejecting data points with a high Jacobian Frobenius norm. Another possibility would be to use one of the different Monte-Carlo methods (Robert and Casella, 2013) and apply it to GANs. Building up on the well-known inference theory, Azadi et al. (2019) suggests the use of rejection sampling to improve the quality of the proposal distribution µθ . One can compute density ratios using either a classifier trained from scratch or the discriminator obtained at the end of the training. Consequently, in this Discriminator Rejection Sampling (DRS), any generated data point x∼ µθ is accepted with the following acceptance probability Pa: Pa(x) = µ?(x) Mµθ (x) where M = max x∈Sµθ µ?(x) µθ (x) , (2) where µ? and µθ here refers to the density functions. Similarly, Turner et al. (2019) use the same density ratios and derive MH-GAN, an adaptation of the Metropolis-Hasting algorithm (Hastings, 1970), that improves the sampling from µθ . Finally, Grover et al. (2019) use these density ratios r as importance weights and define an importance resampled generative model whose density is now defined by µ̂θ ∝ µθ × r(x). 
In order to perform discrete sampling from µ̂θ , authors rely on the Sampling-Importance-Resampling (SIR) algorithm (Rubin, 1988; Liu and Chen, 1998). This defines a new distribution µ̂θ SIR: µ̂SIRθ (xi) = r(xi) n ∑ j=1 r(x j) where x1, . . . ,xn ∼ µnθ . Note that these algorithms rely on the same density ratios and an acceptance-rejection scheme. In Rejection Sampling, the acceptance rate is uncontrollable but sampling from µ? is assured. With SIR and MH, the acceptance rate is controllable but sampling from µ? is no longer guaranteed. 3 ADVERSARIAL LEARNING OF LATENT IMPORTANCE WEIGHTS 3.1 OUR APPROACH Similar to previous works, our method consists in improving the performance of a given generative model, post-training. Given a trained WGANs (Gθ ,Dα), we now propose to learn importance weights in the latent space. To do so, we use a feed-forward neural network from Rd to R+, say Ω = {wϕ : ϕ ∈ Φ}. The neural network wϕ is trained using an adversarial process with the discriminator Dα , whilst keeping the weights of Gθ frozen. We now want to solve the following: sup α∈A inf ϕ∈Φ Ex∼µ?Dα(x)−Ez∼Z ( wϕ(z)×Dα(Gθ (z))) ) (3) Note that our formulation can also be plugged on top of many different objective functions. Interestingly, the use of the predictor wϕ defines a new latent space distribution whose density γ̂ is defined by γ̂(z) ∝ wϕ(z)× γ(z). Consequently, the newly defined modeled distribution µ̂θ is defined as the pushforward µ̂θ = Gθ ]γ̂ . The proposed method can be seen as minimizing the Wasserstein distance to the target distribution, over an increased class of generative distributions. The network wϕ thus learns how to redistribute the mass of µθ such that µ̂θ is closer to µ? in terms Wasserstein distance. However, as in the field of counterfactual estimation, a naive optimization of importance weights by gradient descent can lead to trivial solutions. First, if for example, the Wasserstein critic Dα outputs negative values for any generated samples, the network wϕ could simply learn to avoid the dataset and output 0 everywhere. To avoid this issue, we follow Swaminathan and Joachims (2015c) and scale the output of the discriminator such that the reward is always positive. A second problem comes from the fact that equation 3 can now be minimized not only by putting large importance weights wϕ(z) on the examples with high likelihoods Dα(G(z)), but also by maximizing the sum of the weights: this is the propensity overfitting (Swaminathan and Joachims, 2015a). To stabilize the optimisation process, we consequently introduce two important regularization techniques: Self-normalization. Similarly to Swaminathan and Joachims (2015a), we advocate the use of a normalization of the importance weights. To be more precise, we enforce the expectation of the importance weights to be close 1 by adding a penalty term. By doing so, we prohibit the propensity overfitting since the sum of the importance weights in the batch is bounded. Soft-Clipping To avoid cases where small areas of z have really high wϕ(z) values, which would lead to mode collapse, we enforce a soft-clipping on the weights (Bottou et al., 2013; Grover et al., 2019). Note that this constraint on wϕ(z) could also be implemented with a bounded activation function on the final layer, such as a re-scaled sigmoid or tanh activation. 
Finally, we thus get the following objective function: sup ϕ∈Φ Ez∼Z wϕ(z) ( Dα(Gθ (z)))−∇ )︸ ︷︷ ︸ discriminator reward −λ1 ( Ez∼Zwϕ(z)−1 )2︸ ︷︷ ︸ self-normalization −λ2Ez∼Z max ( 0,(wϕ(z)−m) )2︸ ︷︷ ︸ soft-clipping , (4) where ∇ = minz∼Z Dα(G(z)). λ1, λ2, and m are hyper-parameters (values displayed in Appendix). 3.2 SAMPLING FROM THE NEW DISTRIBUTION As mentionned above, the scale and variance of the learned importance weights are actively controlled, as it is done in counterfactual estimation (Bottou et al., 2013; Swaminathan and Joachims, 2015b; Faury et al., 2020). Doing so, we explicitly control the acceptance rates Pa(z) of the rejection sampling algorithm performed on γ̂ , since for any given z∼ Z, we have: Pa(z) = γ̂(z) mγ(z) and EZ Pa(z) = ∫ Rd γ̂(z) mγ(z) γ(z)dz = ∫ Rd γ̂(z) m dz = 1 m , where m is the maximum output of the importance weighter as defined in equation 4. We define as Latent Rejection Sampling (latentRS), the method that performs the Rejection Sampling algorithm on top of the learned importance weights. Since the exact sampling of the distribution γ̂ is now tractable with a rejection sampling algorithm, we do not need to implement neither the Metropolis-Hasting nor the Sampling Importance Resampling algorithm. Inspired from the literature on latent space optimal transport (Salimans et al., 2018; Agustsson et al., 2019; Tanaka, 2019), we also propose a second method where we perform gradient ascent in the latent space. To be more precise, for any given sample in the latent space, we follow the path maximizing the learned importance weights. This method is denoted latent Gradient Ascent (latentGA). In high-dimension, similarly to Tanaka (2019, Algorithm 2), gradients are projected to restrict z on the training support. Note that the learning rate and the number of updates used for this method are hyper-parameter that need to be tuned. 3.3 ADVANTAGES OF THE PROPOSED APPROACH We now discuss, in detail, the flaws of previous Monte-Carlo based approaches: 1) Computational cost. By using sampling algorithms in the latent space, we avoid going through both the generator and the discriminator, leading to a significant computational speed-up. This is of particular interest when dealing with high-dimensional spaces since we do not need to pass through deep CNNs generator and discriminator (Brock et al., 2019). 2) Existence of density functions. Every Monte-Carlo based methods assume that both µ? and µθ are probability distributions with associated density functions. However, in high dimension, the hypothesis that data tend to lie near a low dimensional manifold (Fefferman et al., 2016) is now commonly accepted. Besides, it is often the case that GANs be defined as the push-forward from much lower dimensional space, that is d << D. In that case, neither µ? nor µθ have density function in RD. Note that our method based on Wasserstein distance does not require this assumption. 3) Covering of the support Sµ? . First, Monte-Carlo methods are well-known to suffer from the curse of dimensionality (Mengersen et al., 1996; Robert and Casella, 2013). Besides, in the context of GANs, Arjovsky and Bottou (2017, Theorem 2.2) have shown that the intersection Sµ? ⋂ Sµθ is likely to be a negligible set under µθ . In this specific case, the density ratios would evaluate close to 0 almost everywhere on Sµθ , increasing the time complexity. More generally, Monte-Carlo based methods tend to avoid any area within Sµθ \Sµ? which could lead to a deteriorated sampling quality. 
To better illustrate this phenomenon, we represent in Figure 2a a synthetic experiment, where Sµθ does not recover Sµ? (by slightly shifting the mean of two modes after training the WGAN). In this setting, we clearly see in Figure 2b that Monte-carlo based methods worsen the WGAN. when Sµ? 6⊂ Sµθ , density ratios focus on local information and lead to non-optimal solutions. On the opposite, our method suggests to learn the optimal re-weighting of mass within the support Sµθ . Interestingly, on this synthetic dataset, it significantly reduces the Wasserstein distance to µ?, see Figure 2c. 4) Non-optimal discriminators. Knowing that optimal discriminators would lead to non-optimal objectives (very low acceptance probabilities), previous approaches made sure that their obtained classifier is sufficiently far from the optimal classifier (Section 3.2 in (Azadi et al., 2019)). Authors have thus come up with heuristics to approximate density ratios: for example Azadi et al. (2019) fine tune a regularized discriminator, Grover et al. (2019) use a pre-trained neural network on ImageNet classification and only fine-tune the final layers for the binary classification task. In our method, on the contrary, we are still looking for the discriminator maximizing the Integral Probability Metric (Müller, 1997) in equation 3, linked to optimal transport. 4 EXPERIMENTS In the following section, we illustrate the efficiency of the proposed methods, latentRS and latentGA on synthetic datasets. Then, we compare their performances with previous works on image datasets. On this image generation tasks, we empirically stress that both latentRS or latentGA methods slightly surpass density ratios based methods while significantly reducing the time complexity. 4.1 EVALUATION METRICS To measure performances of GANs when dealing with low dimensional applications - as with synthetic datasets - we equip our space with the standard Euclidean distance. However, for high dimensional applications such as image generation, Brock et al. (2019); Kynkäänniemi et al. (2019) have shown that embedding images into a feature space with a pre-trained convolutional classifier provides more semantic information. In this setting, we consequently use the euclidean distance between the images’ embeddings from a classifier. For a pair of images (a,b), we define the distance d(a,b) as d(a,b) = ‖φ(a)−φ(b)‖2 where φ is a pre-softmax layer of a supervised classifier, trained specifically on each dataset. Doing so, they will more easily separate images sampled from the target distribution µ? from the ones sampled by the distribution µθ . We compare the performance of the different methods with a panel of evaluation metrics. To begin with, we use the Improved Precision/Recall (Improved PR) metric (Kynkäänniemi et al., 2019), a more robust version of the Precision/Recall metric which was first applied to the context of GANs by Sajjadi et al. (2018). The Improved PR metric is based on a non-parametric estimation of the support of both generated and real distributions using k-Nearest Neighbors. Besides, we also report two well-known metrics: the Earth Mover’s Distance (EMD), the discrete version of the Wasserstein distance, and the average Hausdhorff distance (Hausd). EMD is a distance between probability distributions while Hausd. focuses on support estimations. These two measures are particularly interesting in GANs since one can compute them with collections of discrete points. Let (x1, . . . ,xn) and (y1, . . . 
,yn) be two collections of n data points and S be the set of permutations of [1,n], then: EMD(X ,Y ) = min σ∈S n ∑ i=1 ‖xi,yσi‖ and average Hausd(X ,Y ) = 1 n ∑xi∈X min y j∈Y ‖xi,y j‖ Besides, we argue that the Wasserstein distance should be a metric of reference when evaluating WGANs since it is directly linked to their objective function. Finally, for completeness, we report FID Heusel et al. (2017). 4.2 MIXTURE OF GAUSSIANS Further experiments were ran on synthetic datasets with mixtures of 2D Gaussians, with either 4, 9, 16 or 25 components. When dealing with 2D mixtures of Gaussians, we used MLP with 4 hidden layers of 30 nodes for the generator, the discriminator and the importance weighter. As expected in this setting, a standard WGAN-GP combined with a connected latent space (i.e. multivariate normal or uniform), necessarily generates samples in-between two modes. Both Figure 1 and Figure 2a have stressed how the importance weighter can truncate latent space areas that are mapped outside the real data manifold and improve the EMD metric. More figures and details of the different evaluation metrics are given in Appendix. 4.3 IMAGE DATASETS MNIST, F-MNIST and Stacked MNIST. We further study the efficiency of the proposed methods on three image datasets: MNIST (LeCun et al., 1998), FashionMNIST (F-MNIST) (Xiao et al., 2017), and Stacked MNIST (Metz et al., 2016) a highly disconnected datasets with 1,000 classes. For MNIST, F-MNIST and Stacked MNIST, we follow Khayatkhoei et al. (2018) and use a standard CNN architecture composed of a sequence of 3x3 convolution layer, relu activation with nearest neighbor upsampling. To exhibit the efficiency of the proposed methods in different settings, we use hinge loss with gradient penalty (Hinge-GP) (Miyato et al., 2018) on MNIST and F-MNIST, and a Wasserstein loss with gradient penalty (Gulrajani et al., 2017) on Stacked Mnist. For the importance weighter wϕ , we use an MLP architecture with fully-connected layers and relu activation. wϕ has 4 hidden layers, each having its width four times larger than the dimension of the latent space. For exhaustivity, we compare latentRS and latentGA with previous works leveraging density ratios. In particular, we implemented a wide set of post-processing methods for GANs: DRS (Azadi et al., 2019), MH-GAN (Turner et al., 2019), SIR-GAN (Grover et al., 2019) and DOT (Tanaka, 2019). Similarly to Azadi et al. (2019), we take the discriminator at the end of the adversarial training, fine-tune it with the binary cross-entropy loss and select the best model in terms of EMD. During fine-tuning, we keep the gradient penalty or spectral normalization, otherwise the classifier easily separates real from generated data, which leads to a degraded performance, as shown in Figure 2a. Following Azadi et al. (2019); Grover et al. (2019), we do not include explicit mechanism to calibrate the classifier. To the extent of our knowledge, we are the first to empirically compare such a wide variety of Monte-Carlo methods on different datasets and metrics. The main results of this comparison are shown in Table 1 (see Appendix for more details). We see that, except for Stacked MNIST, both of our methods outperform every other method on precision, av. Hausdhorff and the EMD metric. Interestingly, latentGA seems to be the strongest one. In Figure 3, we show how samples evolve when performing latent gradient ascent on the importance weights. 
As expected, as importance weights are increased, the quality of the generated images significantly improves. Besides, a strong contribution of the paper also resides in notably speeding-up the inference procedure. As shown in Table 1, the inference time of a given data point is 25 times faster with latentRS than with SIR-GAN. CelebA is a large-scale dataset of faces covering a variety of poses. We train the models at 64x64 resolution. Following recent studies (Brock et al., 2019), the discriminator is trained with the hinge loss and spectral normalization (Miyato et al., 2018). For the generator network, residual connections (He et al., 2016) are used alongside self-modulation layers (Chen et al., 2019). The importance weighter is a simple 4 hidden-layer MLP with a width 10 times larger than the latent space dimension. In this one-class high-dimensional dataset, the importance weighter still managed to learn some meaningful features. First, Figure 3 higlights a subtle improvement of the generated images when performing latentGA. Second, when ranking generated images with the importance weighter and comparing the top-5% vs worst-5% in Figure 4, we observe some differences in quality. However, on a broader scale, the importance weighter does not bring a clear improvement on neither the EMD nor the Hausdhorff metric. Interestingly, this is also the case for any of the different rejection methods (see Appendix for details). We argue that in this one-class generation task, post-processing the generated samples is not as efficient as in a multi-modal setting (e.g. MNIST, FMNIST, Stacked MNIST). Intuitively, it is a much easier task to remove generated samples that are out of the target manifold than to discriminate between samples that already share similarities with training samples. It further stresses that this family of methods is useful if one needs to insert disconectedness in the modeled distribution. However, when the target distribution is a single-class distribution with a connected support, their efficiency decrease. To illustrate this, we added in Appendix a figure highlighting samples generated by a trained WGAN on Celeba 64x64, ranked by the discriminator. We observe that on these images, the discriminator does not correlate well with human judgement prohibiting the importance weighter to learn a meaningful signal. 5 CONCLUSION In this paper, we provide insights on improving the learning of disconnected manifolds with GANs. Given the existence of the no GAN’s land, latent space areas mapping outside the target manifold, we provide two methods to truncate them. Contrary to previous works focusing on learning density ratios in the output space, both of our methods are based on training adversarially a neural network learning importance weights in the latent space. On the task of image generation, both of the proposed methods were shown to be empirically efficient while significantly reducing the inference time (latentRS by an order of 20), when compared to density ratio based methods. This paper has specifically stressed the efficiency of post-training methods when dealing with highly disconnected target distributions. However, when dealing with single-class connected distributions or class-conditioning generative models, the efficiency of such methods is not clear. We argue that one of the reason is that, once the generator maps correctly inside the target manifold, it is a much harder task to discriminate between realistic and fake samples. 
A potential future work would therefore be to investigate how can we help the discriminator better classify among the set of generated images. A EVALUATION DETAILS Precision recall metric. For the precision-recall metric, we use the algorithm from Khayatkhoei et al. (2018). Namely, when comparing the set of real data points (x1, ...,xn) with the set of fake data points (y1, ...,yn): A point xi has a recall r(xi) = 1 if there exists y j, such that ‖xi− y j‖ ≤ ‖y j− y j(k)‖, where y j(k) is the k-nearest neighbor of n. Finally, the recall is the average of individual recall: 1n ∑i r(xi). A point yi has a precision p(yi) = 1 if there exists x j, such that ‖yi− x j‖ ≤ ‖x j− x j(k)‖, where x j(k) is the k-nearest neighbor of n. Finally, the precision is the average of individual precision: 1n ∑i p(xi). Images’ embeddings. As mentioned earlier, for images we use the distance between embeddings of images in a neural network trained specifically for classification on this dataset. For Stacked Mnist, we use a MNIST classifier on each output channel and simply stack the three embedding vectors. Parameters. For all datasets, we use k = 3 (3rd nearest neighbor). For MNIST, F-MNIST and Stacked MNIST, we use a set of n = 2048 points. For CelebA, we use a set of n = 1024 points. This is also valid for the other metrics used: EMD, Av. Hausd. For FID on CelebA, we use the standard protocol evaluation with Inception Net and 50k data points. B HYPER-PARAMETERS. SIR: Model selection: we fine-tune with a binary cross-entropy loss the discriminator from the end of the adversarial training and select the best model in terms of EMD. We use then use Sampling-Importance-Resampling algorithm, with sets of n=40 points. DRS: Model selection: we fine-tune with a binary cross-entropy loss the discriminator from the end of the adversarial training and select the best model in terms of EMD. We use the standard Rejection Sampling algorithm, without artificially increasing the acceptance rate such as Azadi et al. (2019). We use regularized discriminator (with gradient penalty or spectral normalization), which avoids the acceptance rate falling to almost zero. MH-GAN: Model selection: we fine-tune with a binary cross-entropy loss the discriminator from the end of the adversarial training and select the best model in terms of EMD. We use the independance Metropolis-Hastings algorithm with Markov Chains of 40 points, and select the last point. DOT: Model selection: we fine-tune with the dual wasserstein loss the discriminator from the end of the adversarial training and select the best model in terms of EMD. We then perform a projected gradient descent as described in Tanaka (2019) with SGD, with Nsteps = 10 and ε = 0.01. LRS: For MNIST, F-MNIST and Stacked MNIST, we use the same hyper-parameters: λ1 = 10, λ2 = 2 and m = 3. wϕ is a standard MLP with 4 hidden layers, each having 600 nodes (6x dimension of latent space), and relu activation. The output layer is 1-dimensional and with a relu activation. For CelebA, we use: λ1 = 50, λ2 = 2 and m = 3. wϕ is a standard MLP with 4 hidden layers, each having 1280 nodes (10x dimension of latent space), and relu activation. The output layer is 1-dimensional and with a relu activation. For the adversarial training of importance weights, we use the discriminator from the end of the standard adversarial training (generator vs discriminator). We then alternate between 1 step of wϕ and 1 update of Dα . LGA: We use the same neural network than in LRS. 
The hyper-parameters for this method are the same as for DOT: the number of gradient-ascent steps Nsteps and the learning rate ε. We choose Nsteps = 10 and ε = 0.1. C VISUALIZATION AND RESULTS FOR SYNTHETIC DATASETS Figure panels (images omitted): (a) WGAN for the mixture of 9 Gaussians. (b) WGAN for the mixture of 16 Gaussians. (c) WGAN for the mixture of 25 Gaussians. (d) Heatmap in the latent space of the distance between a generated sample and its nearest neighbor, for the mixture of 9 Gaussians. (e) Same heatmap for the mixture of 16 Gaussians. (f) Same heatmap for the mixture of 25 Gaussians. D COMPARISONS WITH CONCURRENT METHODS ON SYNTHETIC AND REAL-WORLD DATASETS
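As an illustrative companion to the hyper-parameter listings in Appendix B, the following is a minimal sketch of the latentRS sampling loop of Section 3.2, written in PyTorch-style Python. The function and variable names are ours, not from an official implementation; G and w are assumed to be the trained, frozen generator and importance weighter. Because the weighter is soft-clipped at m and self-normalized so that E[wϕ(z)] ≈ 1, accepting each latent draw with probability wϕ(z)/m realizes rejection sampling on γ̂ with an expected acceptance rate of 1/m.

import torch

def latent_rejection_sampling(G, w, m, n_samples, latent_dim):
    # latentRS (sketch): rejection sampling directly in the latent space.
    # Accept z ~ N(0, I) with probability w(z) / m, where m is the
    # soft-clipping bound of Eq. (4); the expected acceptance rate is 1/m.
    accepted = []
    n_accepted = 0
    while n_accepted < n_samples:
        z = torch.randn(4 * n_samples, latent_dim)
        with torch.no_grad():
            p_accept = (w(z).squeeze(-1) / m).clamp(max=1.0)
        keep = torch.rand_like(p_accept) < p_accept
        accepted.append(z[keep])
        n_accepted += int(keep.sum())
    z = torch.cat(accepted)[:n_samples]
    with torch.no_grad():
        return G(z)  # samples approximately distributed as the reweighted model

Note that only the cheap MLP w is evaluated on rejected draws; the generator runs once, on the accepted latents, which is the source of the inference speed-up reported in Table 1.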
1. What are the strengths and weaknesses of the proposed method for improving sample quality in generative models? 2. How does the proposed method compare to existing methods in terms of computational complexity and performance metrics? 3. Are there any limitations or potential biases in the experimental design or results that may impact the validity of the claims made in the paper? 4. What additional experiments or analyses could be conducted to further support or refute the findings presented in the paper? 5. How might the proposed method be applied in real-world scenarios or combined with other techniques to enhance its effectiveness?
Review
Review This work aims at improving the sample quality of generative models through better sampling, which is a relevant problem that has brought about a line of work; see [1,2,3,4,5], to name a few. By leveraging the idea of importance sampling, the authors train an additional network. The latter uses the information contained in the learned discriminator to assign importance weights to the latent points, thus defining a new distribution in the latent space. Subsequently, rejection sampling on the newly defined latent distribution is applied to obtain inputs for a generator network. By treating the problem in the latent space, the paper introduces the latentRS method, which compares favourably to several existing methods in terms of the computational complexity of generating a sample. The authors propose one more method, latentGA, which follows the path in the latent space that maximizes the learned importance weights. The paper also discusses the limitations of the previously proposed methods and presents an empirical comparison on several datasets and metrics. The method is concise and straightforward to apply for a broad community of ML practitioners. One of the proposed methods, latentRS, offers a significant speedup at the inference stage compared to the analyzed methods while being similar in performance metrics. The paper also raises an interesting question of whether the existing enhanced sampling methods help when the target distribution is not sufficiently disconnected. However, there are several weaknesses in the experiments which lead to questioning the claims. While the paper's careful comparison with the existing methods and its discussion of those methods' limitations are indeed well presented, the paper neglects the already recurring standard experiments for such methods or relegates them to the Appendix. The generated samples in the Appendix figure for the mixture of Gaussians with n>9 show that the results are not as promising as the same experiments in the literature (the 'fake' clouds are not as nicely located on top of the true ones). n=25 is a recurring setting and seems to be a standard check for algorithms that refine GAN sampling (e.g., DRS, DOT, DDLS). Table 2 in the Appendix lacks computed metrics for existing methods, which would demonstrate a tangible, interpretable difference between comparable methods in this setting. The paper misses some essential experiments needed to be faithfully compared with existing methods. It would be helpful to see the Swiss Roll experiment and the statistics on recovered modes and quality on 25 Gaussians for all the considered methods. As for more realistic image spaces, CIFAR10 is a dataset that represents an undoubtedly disconnected manifold, and it has more potential to show the advantage of the proposed methods. The presented empirical studies also raise a number of questions. The IPR results in Table 1 (and Table 3 in the Appendix) do not show consistent advantages for the proposed methods over the existing ones. They favour either latentRS or latentGA in terms of precision or recall alone, not both at the same time.
It's understandable that when maximizing importance weights with latentGA we get higher precision: we force the generated samples to stay within the true points at the cost of their diversity (which can also be seen in the synthetic experiment with Gaussians), so I suspect the method is highly reliant on the hyperparameter m, which controls the 'conservativeness' of the trained importance-weight network. It would be helpful to see an ablation study for the hyperparameters. Given all the above, I am leaning towards rejection; my main concerns are as follows. The experiment with 25 Gaussians doesn't show as much improvement in sampling as existing methods, implying there might be little effect on real-world datasets. I believe that the proposed methods have not been faithfully compared to the existing methods. There is no ablation study on the hyperparameters of the proposed methods. While the argument about the one-class CelebA task has merit, the DRS technique shows that it improves face generation by producing fewer warped, nightmare-like faces. Thus, better GAN sampling techniques should ideally not only help avoid empty regions in the latent space between the modes (inject disconnectedness) but also grasp the shape of those modes. In this regard, using an energy-based model for the latent variable might be an apt direction [5]. The authors state that they use image embeddings from corresponding classification networks for each of their datasets, but how do they obtain image embeddings for CelebA? Some minor points: the notation for the proposal and true distributions we want to sample from in Section 3 is a bit confusing, as the hat is usually used to denote an approximation. Also, the submission has quite a few typos; it needs proofreading. References: [1] Azadi, Samaneh, et al. "Discriminator rejection sampling." arXiv preprint arXiv:1810.06758 (2018). [2] Turner, Ryan, et al. "Metropolis-Hastings generative adversarial networks." International Conference on Machine Learning. 2019. [3] Neklyudov, Kirill, Evgenii Egorov, and Dmitry P. Vetrov. "The Implicit Metropolis-Hastings Algorithm." Advances in Neural Information Processing Systems. 2019. [4] Tanaka, Akinori. "Discriminator optimal transport." Advances in Neural Information Processing Systems. 2019. [5] Che, Tong, et al. "Your GAN is Secretly an Energy-based Model and You Should use Discriminator Driven Latent Sampling." arXiv preprint arXiv:2003.06060 (2020).
ICLR
Title Learning Disconnected Manifolds: Avoiding The No GAN's Land by Latent Rejection Abstract Standard formulations of GANs, where a continuous function deforms a connected latent space, have been shown to be misspecified when fitting disconnected manifolds. In particular, when covering different classes of images, the generator will necessarily sample some low-quality images in between the modes. Rather than modify the learning procedure, a line of works aims at improving the sampling quality of trained generators. Thus, it is now common to introduce a rejection step within the generation procedure. Building on this, we propose to train an additional network and transform the latent space via an adversarial learning of importance weights. This idea has several advantages: 1) it provides a way to inject disconnectedness into any GAN architecture, 2) since the rejection happens in the latent space, it avoids going through both the generator and the discriminator, saving computation time, 3) this importance-weight formulation provides a principled way to reduce the Wasserstein distance to the target distribution. We demonstrate the effectiveness of our method on different datasets, both synthetic and high dimensional. 1 INTRODUCTION GANs (Goodfellow et al., 2014) are an effective way to learn complex and high-dimensional distributions, leading to state-of-the-art models for image synthesis in both unconditional (Karras et al., 2019) and conditional settings (Brock et al., 2019). However, it is well known that a single generator with a unimodal latent variable cannot recover a distribution composed of disconnected sub-manifolds (Khayatkhoei et al., 2018). This leads to a common problem for practitioners: the necessary existence of very low-quality samples when covering different modes. This is formalized by Tanielian et al. (2020), which refers to this area as the no GAN's land and provides impossibility theorems on the learning of disconnected manifolds with standard formulations of GANs. Fitting a disconnected target distribution requires an additional mechanism inserting disconnectedness into the modeled distribution. A first solution is to add some expressivity to the model: Khayatkhoei et al. (2018) propose to train a mixture of generators, while Gurumurthy et al. (2017) make use of a multi-modal latent distribution. A second solution is to improve the quality of a trained generative model by avoiding its poorest samples (Tao et al., 2018; Azadi et al., 2019; Turner et al., 2019; Grover et al., 2019; Tanaka, 2019). This second line of research relies heavily on a variety of Monte-Carlo algorithms, such as Rejection Sampling or Metropolis-Hastings. These methods aim at sampling from a target distribution while having access only to samples generated from a proposal distribution. This idea was successfully applied to GANs, using the previously learned generative distribution as a proposal distribution. However, one of the main drawbacks is that Monte-Carlo algorithms only guarantee sampling from the target distribution under strong assumptions. First, we need access to the density ratios between the proposal and target distributions, or equivalently to a perfect discriminator (Azadi et al., 2019). Second, the support of the proposal distribution must fully cover that of the target distribution, which means no mode collapse.
This is known to be very demanding in high dimension, since the intersection of the supports of the proposal and target distributions is likely to be negligible (Arjovsky and Bottou, 2017, Lemma 3). In this setting, an optimal discriminator would give null acceptance probabilities for almost any generated point, leading to lower performance. To tackle the aforementioned issue, we propose a novel method aiming at reducing the Wasserstein distance between the previously trained generative model and the target distribution. This is done via the adversarial training of a third network that learns importance weights in the latent space. The goal is to learn the redistribution of mass of the modeled distribution that best fits the target distribution. To better understand our approach, we first consider a simple 2D motivational example where the real data lie on four disconnected manifolds. To approximate this, the generator splits the latent space into four distinct areas and maps data points located on the frontiers, the areas in orange in Figure 1b, out of the true manifold (see Figure 1a). Our method consequently aims at learning latent importance weights that can identify these frontiers and simply avoid them. This is highlighted in Figure 1d, where the importance weighter has identified these four frontiers. When sampling from the new latent distribution, we can now perfectly fit the mixture of four Gaussians (see Figure 1c). Our contributions are the following: • We discuss works improving the sampling quality of GANs and identify their limitations. • We propose a novel approach that directly modifies the latent space distribution. It provides a principled way to reduce the Wasserstein distance to the target distribution. • We thoroughly compare our method with a large set of previous approaches on a variety of datasets and distributions. We empirically show that our solution significantly reduces the computational cost of inference while demonstrating equal efficiency. Notation. Before moving to the related work section, we shortly present the notation needed in the paper. The goal of the generator is to generate data points that are "similar" to samples collected from some target probability measure µ⋆. The measure µ⋆ is defined on a potentially high-dimensional space R^D, equipped with the Euclidean norm ‖ · ‖. To approach µ⋆, we use a parametric family of generative distributions, where each distribution is the push-forward measure of a latent distribution Z by a continuous function modeled by a neural network. In most practical applications, the random variable Z, defined on a low-dimensional space R^d, follows either a multivariate Gaussian distribution or a uniform distribution. The generator is a parameterized class of functions from R^d to R^D, say G = {Gθ : θ ∈ Θ}, where Θ ⊆ R^p is the set of parameters describing the model. Each function Gθ takes input from Z and outputs "fake" observations with distribution µθ = Gθ♯Z. On the other hand, the discriminator is described by a family of functions from R^D to R, say D = {Dα : α ∈ Λ}, where Λ ⊆ R^Q is the set of parameters describing the discriminator. Finally, for any given distribution µ, we denote by Sµ its support. 2 RELATED WORK 2.1 DISCONNECTED MANIFOLD LEARNING: HOW TO TRAIN AND EVALUATE GANS Goodfellow et al. (2014) already stated that when training vanilla GANs, the generator can ignore modes of the target distribution: this is mode collapse.
A significant step towards understanding this phenomenon was made by Arjovsky and Bottou (2017), who explained that the standard formulation of GANs leads to vanishing or unstable gradients. The authors proposed the Wasserstein GAN (WGAN) architecture (Arjovsky et al., 2017), where, in particular, discriminative functions are restricted to the class of 1-Lipschitz functions. WGANs aim at solving:
$$\sup_{\alpha \in \Lambda} \inf_{\theta \in \Theta} \; \mathbb{E}_{x \sim \mu_\star} D_\alpha(x) - \mathbb{E}_{z \sim Z}\, D_\alpha(G_\theta(z)) \quad (1)$$
The broader drawback of standard GANs is that, since any modeled distribution is the push-forward of a unimodal distribution by a continuous transformation, it consequently has a connected support. This means that when the generator covers multiple disconnected modes of the target distribution, it necessarily generates samples out of the real data manifold (Khayatkhoei et al., 2018). Consequently, any thorough evaluation of GANs should simultaneously assess both the quality and the variety of the generated samples. Sajjadi et al. (2018) argue that a single-digit metric such as the Inception Score (Salimans et al., 2016) or the Fréchet Inception Distance (Heusel et al., 2017) is thus not adequate to compare generative models. To solve this issue, the authors propose a Precision/Recall metric that aims at measuring both mode dropping and mode inventing. In the Improved Precision/Recall metric (Kynkäänniemi et al., 2019), the precision refers to the portion of generated points that belong to the target manifold, while the recall measures how much of the target distribution can be reconstructed by the model distribution. Building on this metric, Tanielian et al. (2020) highlighted the trade-off property of GANs, deriving upper bounds on the precision of standard GANs. To solve this problem, a common direction of research consists in over-parameterizing the generative model. Khayatkhoei et al. (2018) enforce diversity by using a mixture of generators, while Gurumurthy et al. (2017) suggest that a mixture of Gaussians in the latent space is efficient for learning diverse and limited data. 2.2 IMPROVING THE QUALITY OF TRAINED GENERATORS To better fit disconnected manifolds with standard GAN architectures, another line of research consists in inserting disconnectedness into a previously learned generative distribution µθ. Tanielian et al. (2020) proposed a heuristic to remove the no GAN's land (i.e., samples mapped out of the true manifold): rejecting data points with a high Jacobian Frobenius norm. Another possibility would be to use one of the different Monte-Carlo methods (Robert and Casella, 2013) and apply it to GANs. Building on well-known inference theory, Azadi et al. (2019) suggest the use of rejection sampling to improve the quality of the proposal distribution µθ. One can compute density ratios using either a classifier trained from scratch or the discriminator obtained at the end of the training. Consequently, in this Discriminator Rejection Sampling (DRS) scheme, any generated data point x ∼ µθ is accepted with the following acceptance probability P_a:
$$P_a(x) = \frac{\mu_\star(x)}{M \mu_\theta(x)}, \quad \text{where } M = \max_{x \in S_{\mu_\theta}} \frac{\mu_\star(x)}{\mu_\theta(x)}, \quad (2)$$
where µ⋆ and µθ here refer to the density functions. Similarly, Turner et al. (2019) use the same density ratios and derive MH-GAN, an adaptation of the Metropolis-Hastings algorithm (Hastings, 1970) that improves the sampling from µθ. Finally, Grover et al. (2019) use these density ratios r as importance weights and define an importance-resampled generative model whose density is defined by $\hat{\mu}_\theta(x) \propto \mu_\theta(x)\, r(x)$.
In order to perform discrete sampling from µ̂θ, the authors rely on the Sampling-Importance-Resampling (SIR) algorithm (Rubin, 1988; Liu and Chen, 1998). This defines a new distribution µ̂θ^SIR:
$$\hat{\mu}_\theta^{\mathrm{SIR}}(x_i) = \frac{r(x_i)}{\sum_{j=1}^{n} r(x_j)}, \quad \text{where } x_1, \dots, x_n \sim \mu_\theta^n.$$
Note that these algorithms rely on the same density ratios and an acceptance-rejection scheme. In Rejection Sampling, the acceptance rate is uncontrollable but sampling from µ⋆ is assured. With SIR and MH, the acceptance rate is controllable but sampling from µ⋆ is no longer guaranteed. 3 ADVERSARIAL LEARNING OF LATENT IMPORTANCE WEIGHTS 3.1 OUR APPROACH Similar to previous works, our method consists in improving the performance of a given generative model post-training. Given a trained WGAN (Gθ, Dα), we propose to learn importance weights in the latent space. To do so, we use a feed-forward neural network from R^d to R^+, say Ω = {wϕ : ϕ ∈ Φ}. The neural network wϕ is trained in an adversarial process with the discriminator Dα, while keeping the weights of Gθ frozen. We now want to solve the following:
$$\sup_{\alpha \in \Lambda} \inf_{\varphi \in \Phi} \; \mathbb{E}_{x \sim \mu_\star} D_\alpha(x) - \mathbb{E}_{z \sim Z}\big[ w_\varphi(z) \times D_\alpha(G_\theta(z)) \big] \quad (3)$$
Note that our formulation can also be plugged on top of many different objective functions. Interestingly, the use of the predictor wϕ defines a new latent space distribution whose density γ̂ is defined by γ̂(z) ∝ wϕ(z) × γ(z). Consequently, the newly defined modeled distribution µ̂θ is the push-forward µ̂θ = Gθ♯γ̂. The proposed method can be seen as minimizing the Wasserstein distance to the target distribution over an enlarged class of generative distributions. The network wϕ thus learns how to redistribute the mass of µθ such that µ̂θ is closer to µ⋆ in terms of Wasserstein distance. However, as in the field of counterfactual estimation, a naive optimization of importance weights by gradient descent can lead to trivial solutions. First, if, for example, the Wasserstein critic Dα outputs negative values for all generated samples, the network wϕ could simply learn to avoid the dataset and output 0 everywhere. To avoid this issue, we follow Swaminathan and Joachims (2015c) and scale the output of the discriminator so that the reward is always positive. A second problem comes from the fact that equation 3 can be minimized not only by putting large importance weights wϕ(z) on the examples with high likelihoods Dα(Gθ(z)), but also by maximizing the sum of the weights: this is propensity overfitting (Swaminathan and Joachims, 2015a). To stabilize the optimization process, we consequently introduce two important regularization techniques: Self-normalization. Similarly to Swaminathan and Joachims (2015a), we advocate the use of a normalization of the importance weights. To be more precise, we enforce the expectation of the importance weights to be close to 1 by adding a penalty term. By doing so, we prevent propensity overfitting, since the sum of the importance weights in the batch is bounded. Soft-clipping. To avoid cases where small areas of the latent space have very high wϕ(z) values, which would lead to mode collapse, we enforce a soft clipping on the weights (Bottou et al., 2013; Grover et al., 2019). Note that this constraint on wϕ(z) could also be implemented with a bounded activation function on the final layer, such as a re-scaled sigmoid or tanh activation.
Finally, we thus get the following objective function:
$$\sup_{\varphi \in \Phi} \; \underbrace{\mathbb{E}_{z \sim Z}\, w_\varphi(z)\big(D_\alpha(G_\theta(z)) - \nabla\big)}_{\text{discriminator reward}} \;-\; \lambda_1 \underbrace{\big(\mathbb{E}_{z \sim Z}\, w_\varphi(z) - 1\big)^2}_{\text{self-normalization}} \;-\; \lambda_2 \underbrace{\mathbb{E}_{z \sim Z}\, \max\big(0,\, w_\varphi(z) - m\big)^2}_{\text{soft-clipping}}, \quad (4)$$
where $\nabla = \min_{z \sim Z} D_\alpha(G_\theta(z))$, and λ1, λ2, and m are hyper-parameters (values displayed in the Appendix). 3.2 SAMPLING FROM THE NEW DISTRIBUTION As mentioned above, the scale and variance of the learned importance weights are actively controlled, as is done in counterfactual estimation (Bottou et al., 2013; Swaminathan and Joachims, 2015b; Faury et al., 2020). In doing so, we explicitly control the acceptance rate Pa(z) of the rejection sampling algorithm performed on γ̂, since for any given z ∼ Z we have:
$$P_a(z) = \frac{\hat{\gamma}(z)}{m\,\gamma(z)} \quad \text{and} \quad \mathbb{E}_{Z}\, P_a(z) = \int_{\mathbb{R}^d} \frac{\hat{\gamma}(z)}{m\,\gamma(z)}\, \gamma(z)\, dz = \int_{\mathbb{R}^d} \frac{\hat{\gamma}(z)}{m}\, dz = \frac{1}{m},$$
where m is the maximum output of the importance weighter, as defined in equation 4. We call Latent Rejection Sampling (latentRS) the method that performs the rejection sampling algorithm on top of the learned importance weights. Since exact sampling from the distribution γ̂ is now tractable with a rejection sampling algorithm, we need to implement neither the Metropolis-Hastings nor the Sampling-Importance-Resampling algorithm. Inspired by the literature on latent space optimal transport (Salimans et al., 2018; Agustsson et al., 2019; Tanaka, 2019), we also propose a second method, where we perform gradient ascent in the latent space. To be more precise, for any given sample in the latent space, we follow the path maximizing the learned importance weights. This method is denoted latent Gradient Ascent (latentGA). In high dimension, similarly to Tanaka (2019, Algorithm 2), gradients are projected to restrict z to the training support. Note that the learning rate and the number of updates used for this method are hyper-parameters that need to be tuned. 3.3 ADVANTAGES OF THE PROPOSED APPROACH We now discuss, in detail, the flaws of previous Monte-Carlo based approaches: 1) Computational cost. By using sampling algorithms in the latent space, we avoid going through both the generator and the discriminator, leading to a significant computational speed-up. This is of particular interest when dealing with high-dimensional spaces, since we do not need to pass through deep CNN generators and discriminators (Brock et al., 2019). 2) Existence of density functions. Every Monte-Carlo based method assumes that both µ⋆ and µθ are probability distributions with associated density functions. However, in high dimension, the hypothesis that data tend to lie near a low-dimensional manifold (Fefferman et al., 2016) is now commonly accepted. Besides, GANs are often defined as the push-forward from a much lower dimensional space, that is, $d \ll D$. In that case, neither µ⋆ nor µθ has a density function in R^D. Note that our method, based on the Wasserstein distance, does not require this assumption. 3) Covering of the support Sµ⋆. First, Monte-Carlo methods are well known to suffer from the curse of dimensionality (Mengersen et al., 1996; Robert and Casella, 2013). Besides, in the context of GANs, Arjovsky and Bottou (2017, Theorem 2.2) have shown that the intersection $S_{\mu_\star} \cap S_{\mu_\theta}$ is likely to be a negligible set under µθ. In this specific case, the density ratios would evaluate close to 0 almost everywhere on Sµθ, increasing the time complexity. More generally, Monte-Carlo based methods tend to avoid any area within $S_{\mu_\theta} \setminus S_{\mu_\star}$, which can lead to deteriorated sampling quality.
To better illustrate this phenomenon, we present in Figure 2a a synthetic experiment where Sµθ does not recover Sµ⋆ (obtained by slightly shifting the means of two modes after training the WGAN). In this setting, we clearly see in Figure 2b that Monte-Carlo based methods worsen the WGAN: when $S_{\mu_\star} \not\subset S_{\mu_\theta}$, density ratios focus on local information and lead to non-optimal solutions. On the contrary, our method learns the optimal re-weighting of mass within the support Sµθ. Interestingly, on this synthetic dataset, it significantly reduces the Wasserstein distance to µ⋆ (see Figure 2c). 4) Non-optimal discriminators. Knowing that optimal discriminators would lead to non-optimal objectives (very low acceptance probabilities), previous approaches made sure that the obtained classifier is sufficiently far from the optimal classifier (Section 3.2 in Azadi et al. (2019)). Authors have thus come up with heuristics to approximate density ratios: for example, Azadi et al. (2019) fine-tune a regularized discriminator, and Grover et al. (2019) use a neural network pre-trained on ImageNet classification and only fine-tune the final layers for the binary classification task. In our method, on the contrary, we are still looking for the discriminator maximizing the Integral Probability Metric (Müller, 1997) in equation 3, which is linked to optimal transport. 4 EXPERIMENTS In the following section, we illustrate the efficiency of the proposed methods, latentRS and latentGA, on synthetic datasets. Then, we compare their performance with previous works on image datasets. On these image generation tasks, we empirically show that both the latentRS and latentGA methods slightly surpass density-ratio based methods while significantly reducing the time complexity. 4.1 EVALUATION METRICS To measure the performance of GANs in low-dimensional applications, as with synthetic datasets, we equip our space with the standard Euclidean distance. However, for high-dimensional applications such as image generation, Brock et al. (2019); Kynkäänniemi et al. (2019) have shown that embedding images into a feature space with a pre-trained convolutional classifier provides more semantic information. In this setting, we consequently use the Euclidean distance between images' embeddings from a classifier. For a pair of images (a, b), we define the distance d(a, b) as $d(a,b) = \|\phi(a) - \phi(b)\|_2$, where φ is a pre-softmax layer of a supervised classifier trained specifically on each dataset. In doing so, such distances more easily separate images sampled from the target distribution µ⋆ from those sampled from the distribution µθ. We compare the performance of the different methods with a panel of evaluation metrics. To begin with, we use the Improved Precision/Recall (Improved PR) metric (Kynkäänniemi et al., 2019), a more robust version of the Precision/Recall metric, which was first applied to the context of GANs by Sajjadi et al. (2018). The Improved PR metric is based on a non-parametric estimation of the supports of both the generated and real distributions using k-nearest neighbors. Besides, we also report two well-known metrics: the Earth Mover's Distance (EMD), the discrete version of the Wasserstein distance, and the average Hausdorff distance (Hausd.). EMD is a distance between probability distributions, while Hausd. focuses on support estimation. These two measures are particularly interesting for GANs, since one can compute them with collections of discrete points. Let $(x_1, \dots, x_n)$ and $(y_1, \dots, y_n)$ be two collections of n data points and let S be the set of permutations of [1, n]; then:
$$\mathrm{EMD}(X, Y) = \min_{\sigma \in S} \sum_{i=1}^{n} \|x_i - y_{\sigma_i}\| \quad \text{and} \quad \text{average } \mathrm{Hausd}(X, Y) = \frac{1}{n} \sum_{x_i \in X} \min_{y_j \in Y} \|x_i - y_j\|.$$
Besides, we argue that the Wasserstein distance should be a metric of reference when evaluating WGANs, since it is directly linked to their objective function. Finally, for completeness, we report the FID (Heusel et al., 2017). 4.2 MIXTURE OF GAUSSIANS Further experiments were run on synthetic datasets with mixtures of 2D Gaussians, with either 4, 9, 16 or 25 components. When dealing with 2D mixtures of Gaussians, we used MLPs with 4 hidden layers of 30 nodes for the generator, the discriminator and the importance weighter. As expected in this setting, a standard WGAN-GP combined with a connected latent space (i.e., multivariate normal or uniform) necessarily generates samples in between two modes. Both Figure 1 and Figure 2a have stressed how the importance weighter can truncate latent space areas that are mapped outside the real data manifold and improve the EMD metric. More figures and details on the different evaluation metrics are given in the Appendix. 4.3 IMAGE DATASETS MNIST, F-MNIST and Stacked MNIST. We further study the efficiency of the proposed methods on three image datasets: MNIST (LeCun et al., 1998), Fashion-MNIST (F-MNIST) (Xiao et al., 2017), and Stacked MNIST (Metz et al., 2016), a highly disconnected dataset with 1,000 classes. For MNIST, F-MNIST and Stacked MNIST, we follow Khayatkhoei et al. (2018) and use a standard CNN architecture composed of a sequence of 3x3 convolution layers with ReLU activations and nearest-neighbor upsampling. To exhibit the efficiency of the proposed methods in different settings, we use a hinge loss with gradient penalty (Hinge-GP) (Miyato et al., 2018) on MNIST and F-MNIST, and a Wasserstein loss with gradient penalty (Gulrajani et al., 2017) on Stacked MNIST. For the importance weighter wϕ, we use an MLP architecture with fully connected layers and ReLU activations. wϕ has 4 hidden layers, each having a width four times larger than the dimension of the latent space. For exhaustiveness, we compare latentRS and latentGA with previous works leveraging density ratios. In particular, we implemented a wide set of post-processing methods for GANs: DRS (Azadi et al., 2019), MH-GAN (Turner et al., 2019), SIR-GAN (Grover et al., 2019) and DOT (Tanaka, 2019). Similarly to Azadi et al. (2019), we take the discriminator at the end of the adversarial training, fine-tune it with the binary cross-entropy loss, and select the best model in terms of EMD. During fine-tuning, we keep the gradient penalty or spectral normalization; otherwise, the classifier easily separates real from generated data, which leads to degraded performance, as shown in Figure 2a. Following Azadi et al. (2019); Grover et al. (2019), we do not include an explicit mechanism to calibrate the classifier. To the extent of our knowledge, we are the first to empirically compare such a wide variety of Monte-Carlo methods on different datasets and metrics. The main results of this comparison are shown in Table 1 (see Appendix for more details). We see that, except for Stacked MNIST, both of our methods outperform every other method on precision, average Hausdorff and the EMD metric. Interestingly, latentGA seems to be the strongest one. In Figure 3, we show how samples evolve when performing latent gradient ascent on the importance weights.
As expected, as the importance weights increase, the quality of the generated images significantly improves. Besides, a strong contribution of the paper also resides in notably speeding up the inference procedure. As shown in Table 1, the inference time for a given data point is 25 times faster with latentRS than with SIR-GAN. CelebA is a large-scale dataset of faces covering a variety of poses. We train the models at 64x64 resolution. Following recent studies (Brock et al., 2019), the discriminator is trained with the hinge loss and spectral normalization (Miyato et al., 2018). For the generator network, residual connections (He et al., 2016) are used alongside self-modulation layers (Chen et al., 2019). The importance weighter is a simple 4-hidden-layer MLP with a width 10 times larger than the latent space dimension. On this one-class high-dimensional dataset, the importance weighter still manages to learn some meaningful features. First, Figure 3 highlights a subtle improvement of the generated images when performing latentGA. Second, when ranking generated images with the importance weighter and comparing the top-5% vs. worst-5% in Figure 4, we observe some differences in quality. However, on a broader scale, the importance weighter does not bring a clear improvement on either the EMD or the Hausdorff metric. Interestingly, this is also the case for all of the different rejection methods (see Appendix for details). We argue that in this one-class generation task, post-processing the generated samples is not as efficient as in a multi-modal setting (e.g., MNIST, F-MNIST, Stacked MNIST). Intuitively, it is a much easier task to remove generated samples that are out of the target manifold than to discriminate between samples that already share similarities with training samples. This further stresses that this family of methods is useful when one needs to insert disconnectedness into the modeled distribution. However, when the target distribution is a single-class distribution with a connected support, their efficiency decreases. To illustrate this, we added in the Appendix a figure highlighting samples generated by a trained WGAN on CelebA 64x64, ranked by the discriminator. We observe that on these images, the discriminator does not correlate well with human judgement, preventing the importance weighter from learning a meaningful signal. 5 CONCLUSION In this paper, we provide insights on improving the learning of disconnected manifolds with GANs. Given the existence of the no GAN's land, latent space areas mapping outside the target manifold, we provide two methods to truncate these areas. Contrary to previous works focusing on learning density ratios in the output space, both of our methods are based on adversarially training a neural network that learns importance weights in the latent space. On the task of image generation, both of the proposed methods were shown to be empirically efficient while significantly reducing the inference time (latentRS by a factor of roughly 20) when compared to density-ratio-based methods. This paper has specifically stressed the efficiency of post-training methods when dealing with highly disconnected target distributions. However, when dealing with single-class connected distributions or class-conditional generative models, the efficiency of such methods is not clear. We argue that one of the reasons is that, once the generator maps correctly inside the target manifold, it is a much harder task to discriminate between realistic and fake samples.
A potential future work would therefore be to investigate how we can help the discriminator better classify among the set of generated images. A EVALUATION DETAILS Precision-recall metric. For the precision-recall metric, we use the algorithm from Khayatkhoei et al. (2018). Namely, when comparing the set of real data points $(x_1, \dots, x_n)$ with the set of fake data points $(y_1, \dots, y_n)$: a point $x_i$ has recall $r(x_i) = 1$ if there exists $y_j$ such that $\|x_i - y_j\| \le \|y_j - y_{j(k)}\|$, where $y_{j(k)}$ is the $k$-th nearest neighbor of $y_j$ among the fake points. The recall is then the average of the individual recalls: $\frac{1}{n}\sum_i r(x_i)$. A point $y_i$ has precision $p(y_i) = 1$ if there exists $x_j$ such that $\|y_i - x_j\| \le \|x_j - x_{j(k)}\|$, where $x_{j(k)}$ is the $k$-th nearest neighbor of $x_j$ among the real points. The precision is then the average of the individual precisions: $\frac{1}{n}\sum_i p(y_i)$. Images' embeddings. As mentioned earlier, for images we use the distance between embeddings of images in a neural network trained specifically for classification on each dataset. For Stacked MNIST, we use an MNIST classifier on each output channel and simply stack the three embedding vectors. Parameters. For all datasets, we use k = 3 (3rd nearest neighbor). For MNIST, F-MNIST and Stacked MNIST, we use a set of n = 2048 points. For CelebA, we use a set of n = 1024 points. This is also valid for the other metrics used: EMD and Av. Hausd. For FID on CelebA, we use the standard evaluation protocol with Inception Net and 50k data points. B HYPER-PARAMETERS. SIR: Model selection: we fine-tune the discriminator from the end of the adversarial training with a binary cross-entropy loss and select the best model in terms of EMD. We then use the Sampling-Importance-Resampling algorithm, with sets of n = 40 points. DRS: Model selection: we fine-tune the discriminator from the end of the adversarial training with a binary cross-entropy loss and select the best model in terms of EMD. We use the standard Rejection Sampling algorithm, without artificially increasing the acceptance rate as done by Azadi et al. (2019). We use a regularized discriminator (with gradient penalty or spectral normalization), which prevents the acceptance rate from falling to almost zero. MH-GAN: Model selection: we fine-tune the discriminator from the end of the adversarial training with a binary cross-entropy loss and select the best model in terms of EMD. We use the independence Metropolis-Hastings algorithm with Markov chains of 40 points, and select the last point. DOT: Model selection: we fine-tune the discriminator from the end of the adversarial training with the dual Wasserstein loss and select the best model in terms of EMD. We then perform a projected gradient descent as described in Tanaka (2019) with SGD, with Nsteps = 10 and ε = 0.01. LRS: For MNIST, F-MNIST and Stacked MNIST, we use the same hyper-parameters: λ1 = 10, λ2 = 2 and m = 3. wϕ is a standard MLP with 4 hidden layers, each having 600 nodes (6x the dimension of the latent space), and ReLU activations. The output layer is 1-dimensional with a ReLU activation. For CelebA, we use: λ1 = 50, λ2 = 2 and m = 3. wϕ is a standard MLP with 4 hidden layers, each having 1280 nodes (10x the dimension of the latent space), and ReLU activations. The output layer is 1-dimensional with a ReLU activation. For the adversarial training of importance weights, we use the discriminator from the end of the standard adversarial training (generator vs. discriminator). We then alternate between 1 step of wϕ and 1 update of Dα. LGA: We use the same neural network as in LRS.
The hyper-parameters for this method are the same as for DOT: the number of gradient-ascent steps Nsteps and the learning rate ε. We choose Nsteps = 10 and ε = 0.1. C VISUALIZATION AND RESULTS FOR SYNTHETIC DATASETS Figure panels (images omitted): (a) WGAN for the mixture of 9 Gaussians. (b) WGAN for the mixture of 16 Gaussians. (c) WGAN for the mixture of 25 Gaussians. (d) Heatmap in the latent space of the distance between a generated sample and its nearest neighbor, for the mixture of 9 Gaussians. (e) Same heatmap for the mixture of 16 Gaussians. (f) Same heatmap for the mixture of 25 Gaussians. D COMPARISONS WITH CONCURRENT METHODS ON SYNTHETIC AND REAL-WORLD DATASETS
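To complement the latentGA hyper-parameters above (Nsteps = 10, ε = 0.1), here is a minimal sketch of the latent gradient ascent of Section 3.2 in PyTorch-style Python. The names are ours, and the final projection step is an assumption: the paper defers to Tanaka (2019, Algorithm 2) for the exact projection onto the training support, which we approximate here by rescaling onto a ball of radius √d.

import torch

def latent_gradient_ascent(G, w, z, n_steps=10, lr=0.1):
    # latentGA (sketch): follow the path in the latent space that
    # maximizes the learned importance weights; G and w stay frozen.
    d = z.shape[1]
    radius = d ** 0.5  # typical norm of z ~ N(0, I_d); an assumption
    for _ in range(n_steps):
        z = z.detach().requires_grad_(True)
        score = w(z).sum()
        grad, = torch.autograd.grad(score, z)
        with torch.no_grad():
            z = z + lr * grad
            # project back towards the training support of the latent prior
            norms = z.norm(dim=1, keepdim=True).clamp(min=radius)
            z = z * (radius / norms)
    with torch.no_grad():
        return G(z)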
1. What is the main contribution of the paper, and how does it improve generative adversarial models? 2. What are the concerns regarding the theoretical novelty and empirical study of the proposed method? 3. How does the objective function in equation (3) correspond to the optimization of Wasserstein distance in the space of images? 4. Why is the rejection sampling scheme considered tractable, and how does it compare to the MH algorithm in terms of efficiency and acceptance rate? 5. How does the proposed method provide better support coverage than methods operating on the image space, and what are the implications for image quality? 6. How does DRS differ from MH in terms of guarantees provided for sampling from the target distribution? 7. Are there any minor comments or suggestions for improving the paper's presentation, such as rephrasing the phrase "inject disconnectedness" or correcting typos?
Review
Review The paper proposes a method for improving generative adversarial models via post-processing of their latent variable distribution. To be more precise, the method proposes to train an additional neural network that outputs an importance weight for each point of the latent space, thus reweighting the final distribution in the space of images. For the optimization of this network, the authors use the dual form of the Wasserstein distance, where they multiply the initial latent density by the output of the network. To fix the ill-behaved objective, the authors add two regularization terms to it. The proposed objective is then validated on 3 MNIST-like datasets quantitatively and on CelebA qualitatively. Review: My major concern is the limited theoretical novelty together with the modest empirical study. Let me clarify. I think the idea to put the filtering stage into the latent space is indeed worthwhile. However, the straightforward amortization of the discriminator network via a fully connected network is challenging due to the described computational problems and the usually high dimensionality of the latent space. Furthermore, the verification of the method on MNIST-like data does not seem convincing, especially when the relevant works provide a comparison on ImageNet (Azadi 2018, Neklyudov 2019). Additional comments: perhaps I'm missing something, but for me it is not clear why the objective in equation (3) corresponds to the optimization of the Wasserstein distance in the space of images w.r.t. the parameters alpha and phi. I mean that there are not even guarantees that \widehat{\gamma} is a distribution. "since the rejection sampling scheme is now tractable, we do not need to implement the MH algorithm or the importance sampling". Firstly, I do not understand why the rejection sampling is tractable. The regularization term does not provide any guarantees on the maximum value of the density ratio. Secondly, even if the rejection sampling is tractable, I still find the MH algorithm more efficient: it does not require the evaluation of the constant; given the same proposal, MH's acceptance rate is greater than or equal to the acceptance rate of rejection sampling. The authors claim that reweighting in the latent space allows for better support coverage than the methods operating on the image space. Although I believe that such an effect occurs, I wouldn't expect the quality of images to be high. Indeed, this additional coverage could be produced by sampling from the low-density regions of the latent distribution. It is clear that such regions are underrepresented during training. Moreover, there is empirical evidence of deteriorating image quality for latent distributions with higher variance (see Brock 2018). The bottom of page 3: DRS does not assure sampling from the target distribution, since it adjusts the constant and uses an approximation of the density ratio. In contrast, the MH algorithm provides some guarantees by upper-bounding the total variation distance between the stationary distribution and the target (see Neklyudov 2019). Minor comments: abstract: I would suggest finding a better alternative for the phrase "inject disconnectedness". It does not sound like a desirable feature of your model when we speak about GANs, especially at the beginning of the paper, where little context is given. I would propose something like "postselection" or "filtering". In eq. 4, the signs of the regularization terms are incorrect. There is a typo on page 5, item 2):
"every methods" -> "every method". References: (Azadi 2018) Azadi, Samaneh, Catherine Olsson, Trevor Darrell, Ian Goodfellow, and Augustus Odena. "Discriminator rejection sampling." arXiv preprint arXiv:1810.06758 (2018). (Brock 2018) Brock, Andrew, Jeff Donahue, and Karen Simonyan. "Large scale GAN training for high fidelity natural image synthesis." arXiv preprint arXiv:1809.11096 (2018). (Neklyudov 2019) Neklyudov, Kirill, Evgenii Egorov, and Dmitry P. Vetrov. "The Implicit Metropolis-Hastings Algorithm." In Advances in Neural Information Processing Systems, pp. 13954-13964. 2019.
ICLR
Title Limitations of Piecewise Linearity for Efficient Robustness Certification Abstract Certified defenses against small-norm adversarial examples have received growing attention in recent years, though certified accuracies of state-of-the-art methods remain far below their non-robust counterparts, despite the fact that benchmark datasets have been shown to be well-separated at far larger radii than the literature generally attempts to certify. In this work, we offer insights that identify potential factors in this performance gap. Specifically, our analysis reveals that piecewise linearity imposes fundamental limitations on the tightness of leading certification techniques. These limitations are felt in practical terms as a greater need for capacity in models that one hopes to certify efficiently. Moreover, this is in addition to the capacity necessary to learn a robust boundary, studied in prior work. However, we argue that addressing the limitations of piecewise linearity through scaling up model capacity may give rise to potential difficulties, particularly regarding robust generalization; therefore, we conclude by suggesting that developing smooth activation functions may be the way forward for advancing the performance of certified neural networks. 1 INTRODUCTION Since the discovery of adversarial examples (Szegedy et al., 2014), defenses against malicious input perturbations to deep learning systems have received notable attention. While many early-proposed defenses, such as adversarial training (Madry et al., 2018), are heuristic in nature, a growing body of work seeking provable defenses has arisen (Cohen et al., 2019; Croce et al., 2019; Fromherz et al., 2021; Huang et al., 2021; Jordan et al., 2019; Lee et al., 2020; Leino & Fredrikson, 2021; Leino et al., 2021; Li et al., 2019; Singla et al., 2022; Trockman & Kolter, 2021; Wong et al., 2018; Zhang et al., 2018). Generally, such defenses attempt to provide a certificate of local robustness (given formally in Definition 1), which guarantees that a network's prediction on a given point is stable under small perturbations (typically in Euclidean or sometimes ℓ∞ space); this precludes the possibility of small-norm adversarial examples on certified points. The success of a certified defense is typically measured empirically using verified robust accuracy (VRA), which reflects the fraction of points that are both (i) classified correctly and (ii) certified as locally robust. Despite the fact that perfect robust classification (i.e., 100% VRA) is known to be possible on standard datasets at the adversarial perturbation budgets used in the literature (Yang et al., 2020b), this possibility is far from realized in the current state of the art. For example, on the benchmark dataset CIFAR-10, state-of-the-art methods offering deterministic guarantees of ℓ2 robustness have remained at approximately 60% VRA (Huang et al., 2021; Leino et al., 2021; Singla et al., 2022; Trockman & Kolter, 2021), while non-robust models handily eclipse 95% accuracy. (In this work, we primarily consider certified defenses that provide a deterministic guarantee of local robustness, as opposed to a statistical guarantee; for further discussion of this point, see Section 4.) It is difficult to precisely account for this discrepancy, though, among other reasons, state-of-the-art methods typically use loose bounds to perform certification, as exact certification is NP-complete for general ReLU networks (Katz et al., 2017; Sinha et al., 2018); this conceivably leads to falsely flagging truly robust points or to over-regularization of the learned model.
While conservative approximations may be necessary to perform efficient certification (and to facilitate efficient robust training), it is certainly possible that they foil reasonable hopes for "optimality." In this work, we offer further insight into the shortcomings of modern certification techniques by analyzing their limitations in the context of the architectural settings in which they are conventionally employed. In particular, we find that piecewise linearity, a practically ubiquitous property of neural networks considered in the certification literature (e.g., the standard ReLU and the more recently popularized "MinMax" (Anil et al., 2019) activations are both piecewise linear), fundamentally limits the power of Lipschitz-based ℓ2 local robustness certification. In effect, we argue, this means that extra capacity is needed simply to facilitate efficient certification, in addition to whatever capacity may be required for learning a robust boundary (e.g., as examined by Bubeck & Sellke (2021)). On the other hand, perhaps surprisingly, we prove that, free from the constraint of piecewise linearity, Lipschitz-based certification is powerful enough to perform complete certification on any decision boundary, provided the implementation of the function giving rise to the boundary is under the learner's control (indeed, this is consistent with the fact that the highest-performing certified defenses incorporate Lipschitz-based certification into training). These latter findings suggest that continued progress towards improving state-of-the-art VRA may be enabled through carefully chosen smooth activation functions (or at least, activation functions that enable learning curved, as opposed to piecewise-linear, functions), which do not inherently limit the power of what are currently the most promising forms of efficient local robustness certification. In summary, the primary contributions of this work are as follows: (1) we show that piecewise linearity imposes inherent limitations on the tightness of efficient robustness certification; our primary focus is Lipschitz-based certification, but we discuss similar limitations of other methods in Appendix B; (2) we prove that Lipschitz-based certification is fundamentally powerful for tight robustness certification, provided (i) the robust learning procedure has power over the implementation of the classifier, and (ii) the hypothesis class is not limited to piecewise-linear networks; and (3) we demonstrate that tight Lipschitz-based certification may require significant capacity overhead in piecewise-linear networks. These findings offer a new perspective on the sticking points of modern certified training methods and suggest possible paths forward. We begin in Section 2 by introducing the limitations piecewise linearity imposes on robustness certification, starting generally and narrowing our focus specifically to Lipschitz-based certification. We then discuss the role that capacity plays in mitigating these limitations in Section 3, which concludes with a discussion of the implications of our findings, both retrospectively and prescriptively. Finally, we discuss related work in Section 4, and offer our concluding remarks in Section 5.
2 LIMITATIONS OF PIECEWISE LINEARITY The main insights in this work stem from the simple, yet crucial observation that the points lying at a fixed Euclidean distance from a piecewise-linear decision boundary do not, in general, themselves comprise a piecewise-linear surface. Therefore, in order for a certification procedure to precisely recover the set of robust points, those which lie at a distance of at least ε from the decision boundary, it must be capable of producing a boundary between robust and non-robust points that is not piecewise linear, even on networks that are. However, as we will see, Lipschitz-based certification, for example, is in fact constrained to produce a piecewise-linear "certified frontier" on piecewise-linear networks, as the set of just-certifiable points essentially corresponds to a level curve in the output of the network being certified. On the other hand, if the level curves of the function being certified correspond (up to some constant factor) to their distance from the decision boundary (and must therefore include smooth curves), Lipschitz-based certification identifies precisely the points that are truly ε-locally robust, provided a tight bound on the Lipschitz constant. As we will make clear, this has important implications regarding the power of Lipschitz-based certification in properly suited network architectures. In the remainder of this section, we formalize this intuition and discuss some of its implications. Section 2.1 introduces our main theorem regarding the limitations imposed by piecewise linearity, along with the necessary background and definitions. Section 2.2 narrows the focus to Lipschitz-based certification, showing that despite being powerful in general, it is fundamentally limited within the hypothesis class of piecewise-linear networks. Finally, Section 2.3 presents a thought experiment that provides basic intuition about the possible scale of the problems caused by these limitations. 2.1 FUNDAMENTAL LIMITATIONS TO CERTIFICATION COMPLETENESS For our purposes, we will consider a neural network to be a function f : R^n → R^m mapping n-dimensional inputs to logit values corresponding to m different classes. From the network function f, we derive a neural classifier, F : R^n → [m], by letting F(x) = argmax_{i∈[m]} f_i(x). When it is clear from the context which we are referring to, we will use the term "neural network" for both the network function f and its corresponding classifier F. Note that two different neural network functions, f and f′, may lead to the same predictions everywhere, i.e., ∀x . F(x) = F′(x). When this happens, we say that f and f′ share the same decision boundary, where the decision boundary is simply the set of points where f_i(x) = f_j(x) for some i ≠ j ∈ [m]. In this work, we consider the problem of local robustness certification. As in prior work, we define local robustness as a property of a point x and a classifier F, parameterized by a perturbation budget, or robustness radius, ε, as in Definition 1. Definition 1 (ε-Local Robustness). A classifier F : R^n → [m] is ε-locally robust at point x ∈ R^n, with respect to norm ‖ · ‖, if
$$\forall x' \in \mathbb{R}^n .\; \|x - x'\| \le \varepsilon \implies F(x) = F(x').$$
A certification procedure, cert, is a function that takes a neural network, f, a point, x, and a perturbation budget, ε, and produces a label in {0, 1}, where an output of 1 means that f is certified as ε-locally robust at x.
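Note that Definition 1 quantifies over every point in the ε-ball, so it can never be established by sampling; sampling can only falsify it. The following sketch (ours, in Python with NumPy; F is any classifier returning labels) makes this asymmetry concrete; it is the counterpart, not an instance, of a sound certification procedure:

import numpy as np

def falsify_local_robustness(F, x, eps, n_trials=1000, seed=0):
    # Returns True if a label-changing perturbation is found within the
    # eps-ball (so F is certainly NOT eps-locally robust at x);
    # returning False certifies nothing.
    rng = np.random.default_rng(seed)
    y = F(x)
    for _ in range(n_trials):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)
        d *= eps * rng.uniform() ** (1.0 / x.size)  # uniform in the ball
        if F(x + d) != y:
            return True  # adversarial example found
    return False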
A valid certification procedure must be sound, i.e., cert(f, x, ε) = 1 =⇒ F is ε-locally robust at x; however, it need not be complete, i.e., it may be the case that cert(f, x, ε) = 0 and yet F is in fact ε-locally robust at x. For a given certification procedure, let the certified regions of f, C_cert(f, ε) = {x : cert(f, x, ε)}, be the set of points that can be positively certified by cert. Similarly, let the robust regions of f be given by the set R(F, ε) = {x : F is ε-locally robust at x} of ε-locally robust points (note that, in contrast to C_cert, R does not depend on the implementation of f, only on its classification outputs, given by F). Soundness entails that ∀f . C_cert(f, ε) ⊆ R(F, ε), but clearly it is desirable for C_cert(f, ε) to match R(F, ε) as tightly as possible; when this is achieved perfectly, we can consider cert to be "complete." However, as C_cert(f, ε) can depend on the underlying function, f, which has a surjective mapping to classifiers, F, derived from the same hypothesis class, we must be careful in defining completeness precisely. Let F be a hypothesis class, a family of functions of type R^n → R^m, e.g., those captured by some neural network architecture. We will also use the slight abuse of notation, F ∈ F, to denote any F : R^n → [m] such that there exists a function f′ ∈ F which produces the same labels as F on all inputs, i.e., ∀x . F(x) = argmax_{i∈[m]} f′_i(x). We say that a certification procedure, cert, is complete on F if all possible decision boundaries achievable by functions in the hypothesis class have at least one implementation in F for which cert perfectly recovers the true robust regions. This is stated formally in Definition 2. Definition 2. A certification procedure, cert, is complete on hypothesis class, F, if for ε > 0,
$$\forall F \in \mathcal{F} .\; \exists f' \in \mathcal{F} .\; \Big( \forall x .\; F(x) = \operatorname*{argmax}_{i \in [m]} f'_i(x) \Big) \wedge \Big( C_{\mathrm{cert}}(f', \varepsilon) = R(F, \varepsilon) \Big)$$
Essentially, completeness over a hypothesis class entails a notion of compatibility between the certification procedure and the hypothesis class; specifically, it means that for any decision boundary expressible by the hypothesis class, it is possible for a learning procedure to produce a model that implements the decision boundary in a way that makes the certification procedure complete. Definition 2 provides a key relaxation of a stricter notion of completeness that would require C_cert(f, ε) = R(F, ε) for all f, as this would not be achievable by any polynomial certification procedure, assuming P ≠ NP (Katz et al., 2017; Sinha et al., 2018). By requiring tight certification only modulo the decision boundary, we avoid this limitation, splitting the responsibility for completeness between the certification procedure, the learning algorithm, and the hypothesis class. Next, we will also find it useful to define the certified frontier of F under cert (Definition 3); essentially, the set of points that are just barely certified, which lie at the frontier of the certified regions. We will similarly define the robust frontier as the set of points that are just barely ε-locally robust, which lie at the frontier of the robust regions. Definition 3 (Certified Frontier). The certified frontier of a neural network, F : R^n → [m], under certifier, cert, at perturbation budget, ε, is the set of points
$$\Delta\big(C_{\mathrm{cert}}(f, \varepsilon)\big) = \Big\{ x : \mathrm{cert}(f, x, \varepsilon) \wedge \big( \forall \delta > 0 .\; \neg\, \mathrm{cert}(f, x, \varepsilon + \delta) \big) \Big\}.$$
We now turn to the specifics of one of our main results, namely, that piecewise linearity is a limiting factor for tight certification.
Of course, as alluded to earlier, some certification procedures do achieve complete certification on piecewise-linear networks—e.g., (Jordan et al., 2019; Tjeng et al., 2019)—however, such methods are invariably exponential. Thus, we characterize the set of piecewise-linear limited (PLL) methods in Definition 4. Intuitively, a certification procedure is PLL if it is constrained to produce piecewise-linear certified frontiers on piecewise-linear models. Definition 4 (Piecewise-linear Limited Certification). A certification procedure, cert, is piecewise-linear limited (PLL) if ∀f . f is piecewise-linear =⇒ ∆(Ccert(f, ε)) is piecewise-linear. Note that the robust frontier of a network F is, in general, not piecewise linear, even if F (and thus its decision boundary) is piecewise linear. Thus, if the certified frontier of cert is piecewise linear, cert cannot be complete, i.e., Ccert(f, ε) ≠ R(F, ε). Moreover, this means that any piecewise-linear limited certification procedure cannot be complete on the hypothesis class of piecewise-linear networks (Theorem 1). The proof of Theorem 1 is given formally in Appendix A.1. Theorem 1. Any piecewise-linear limited certification procedure is incomplete on the hypothesis class of piecewise-linear networks. The proof of Theorem 1 relies on the fact that a piecewise-linear function cannot be equal to a function exhibiting smooth curves. However, it is known that neural networks, provided with enough capacity, can approximate any function with arbitrary precision (Hornik, 1991). We address this point in Section 3, where we discuss the implications of Theorem 1 regarding the capacity requirements of tightly certifiable networks. 2.2 THE POWER AND LIMITATIONS OF LIPSCHITZ-BASED CERTIFICATION We will now narrow our focus to consider the specific family of Lipschitz-based certification methods. Such methods perform certification by using an upper bound, K, on the network’s Lipschitz constant; essentially, a point is certified if the margin by which the top-predicted class exceeds all other classes is greater than εK. In our work, we will set aside the details around how the Lipschitz bound is obtained, though this is also a source of potential looseness in the general approach. That is, we will (optimistically) take for granted in our analysis that a tight bound is obtained. Lipschitz-based certification has proven effective in the literature, achieving state-of-the-art performance—when paired with an appropriate training routine—despite its simplicity (Leino et al., 2021; Trockman & Kolter, 2021). Lipschitz-based certification is advantageous in many ways; in addition to being easy to incorporate into a robust learning objective, it enables zero-cost certification at run time, as the Lipschitz constant does not need to be recomputed after training. On the other hand, it would seem that Lipschitz-based certification is fundamentally underpowered—the “global” Lipschitz constant is a conservative estimate of the local Lipschitz constant, which in turn gives a conservative estimate of how much the network output can change within a given neighborhood. If a primary sticking point for advancing certified accuracy is loose certification, it is fair to ask how promising Lipschitz-based certification will continue to be. The philosophy behind incorporating Lipschitz-based certification into training is essentially that the potential shortcomings of Lipschitz-based certification can be addressed by learning an easily certifiable network function.
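The certification rule just described is simple enough to state in a few lines of code. The following is a minimal sketch (ours, not from the paper's artifact; the function name and interface are illustrative), assuming the logits at x and a bound K on the Lipschitz constant of the inter-logit margins are given:

```python
import numpy as np

def lipschitz_certify(logits, K, eps):
    """Certify eps-local robustness from logits and a Lipschitz bound K.

    A point is certified when the margin by which the top class exceeds
    every other class is greater than eps * K, so that no perturbation of
    norm at most eps can close the gap.
    """
    logits = np.asarray(logits, dtype=float)
    j = int(np.argmax(logits))                         # predicted class F(x)
    margin = logits[j] - np.max(np.delete(logits, j))  # gap to the runner-up
    return margin > eps * K

# Example: with K = 1, logits (0.7, 0.2) certify at eps = 0.3 but not 0.6.
print(lipschitz_certify([0.7, 0.2], K=1.0, eps=0.3))  # True
print(lipschitz_certify([0.7, 0.2], K=1.0, eps=0.6))  # False
```

With this concrete picture of the rule in hand, the question is whether learning can produce network functions for which it is tight—the philosophy noted above.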
We show that this intuition is essentially correct. Perhaps surprisingly, we show that Lipschitz-based certification is sufficiently powerful to be complete on the hypothesis class of Lipschitz functions.⁴ However, we also show that Lipschitz-based certification is PLL, meaning this potential cannot be achieved with a hypothesis class constrained by piecewise linearity. ⁴I.e., with bounded Lipschitz constant. Note that this is not a meaningful constraint for neural networks, as any neural network with Lipschitz activation functions and finite weights is Lipschitz in this sense. 2.2.1 LIPSCHITZ-BASED CERTIFICATION IS POWERFUL We begin by showing that for any boundary achievable by a Lipschitz network function, when the learner is given control over the precise network function implementing the boundary, it is always possible to find an implementation that can be tightly certified using Lipschitz-based certification. This is stated formally in Theorem 2. Theorem 2 further entails that there exists a network function for any 2ε-separated data that achieves perfect VRA under Lipschitz-based certification. The proof of Theorem 2 is given in Appendix A.2. Theorem 2. When the hypothesis class, F, is given as the set of Lipschitz functions, Lipschitz-based certification is complete on F. 2.2.2 LIPSCHITZ-BASED CERTIFICATION IS LIMITED BY PIECEWISE-LINEARITY Despite the power of Lipschitz-based certification for general functions, when restricted to the hypothesis class of piecewise-linear networks, it becomes fundamentally limited. That is, formally, Lipschitz-based certification is PLL (Proposition 3). Proposition 3. Lipschitz-based certification is piecewise-linear limited. Proposition 3 follows essentially because the certified frontier of Lipschitz-based certification corresponds to a particular level curve of the network function, which is piecewise linear whenever the function is. As a direct consequence of Proposition 3 and Theorem 1, we arrive at Corollary 4. Corollary 4. Lipschitz-based certification is not complete on the hypothesis class of piecewise-linear networks. Note that taken in the context of Theorem 2, Corollary 4 means that in a sense, the fundamental limitation of Lipschitz-based certification is not intrinsic to its simplicity (e.g., because the local Lipschitz constant might be tighter than the global constant on some functions), but rather, it is related to the hypothesis class of networks being certified. Put differently, piecewise linearity imposes real limitations on Lipschitz-based certification that cannot be attributed to practical, but non-fundamental, issues, such as efficient computation of Lipschitz bounds, etc. 2.3 THE PROBLEM WITH CORNERS AND THE CURSE OF DIMENSIONALITY The incongruence between the piecewise-linear certified frontier of Lipschitz-based methods and the robust frontier of a piecewise-linear boundary, which features smooth curves, becomes relevant when the boundary comes to a “corner,” or relatively sharp inflection point. At corners, the robust frontier curves at a fixed radius of ε around the corner, while the certified frontier, absent aid from additional capacity (see Section 3), runs parallel to the facets forming the corner, offset by a fixed amount (see Figure 2 in Appendix D for an illustration). The sharper the corner, the larger the difference will be between the corresponding robust and certified regions.
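To quantify this in the simplest case, consider the 90° corner formed by the two-dimensional boundary max(x1, x2) = 0, and the ε × ε square just opposite the corner, {x : 0 ≤ x1, x2 ≤ ε} (a worked instance, ours, of the geometry described above). For any x in the open positive quadrant, the nearest boundary point is the corner itself, so the truly non-ε-robust points in the square are exactly those in the quarter-disc x1² + x2² ≤ ε², of area (π/4)ε²; meanwhile, a certified frontier running parallel to the two facets certifies no interior point of the square. The robust-but-uncertified region thus has area (1 − π/4)ε² ≈ 0.21ε²—about a fifth of the square already in two dimensions.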
Additionally, we will see that this effect is compounded the higher the dimension of the corner, i.e., the more independent half-spaces meet to create the corner. As a thought experiment, we will model a d-dimensional corner as the intersection of d orthogonal half-spaces. Assuming the level curves near the corner run parallel to the half-spaces, h ∈ H, forming the corner, in the best case, the certified region is given by the union of half-spaces obtained by flipping each h ∈ H and shifting it by ε. Consider the hypercube of width ε just opposite the corner. This hypercube lies entirely outside the certified region, meaning all points within it cannot be certified using Lipschitz-based certification. However, only the points intersecting the hypersphere of radius ε centered at the corner are truly non-ε-robust. We can compute the ratio of the volume of the intersecting portion of the hypersphere to that of the hypercube, given by Equation 1: π^(d/2) / (Γ(d/2 + 1) · 2^d) (1). As the dimension increases, this ratio tends to zero, meaning that in high dimensions, almost all points in this region opposite the corner are incorrectly uncertified. Furthermore, the maximum distance from an uncertified point within this region to the boundary is equal to the diagonal of the hypercube, which is given by √d · ε. This means that even points that are significantly more robust than required may yet be uncertified. 3 THE ROLE OF CAPACITY The primary limitation of Lipschitz-based certification in piecewise-linear networks derives from the fact that we cannot have smoothly curved level curves in such networks (or, more generally, that PLL certification methods cannot have smoothly curved certified frontiers in such networks). However, while this is true in the strictest sense, a function with smooth curves can be approximated with arbitrary precision, given sufficient capacity. In other words, increased network capacity may be one possible option to mitigate the fundamental limitations discussed throughout Section 2. In this section, we investigate the capacity requirements necessary for tight PLL certification in piecewise-linear networks. While the precise meaning of “capacity” in a quantifiable sense is a bit nebulous, for our purposes, we will consider capacity in a piecewise-linear network to correspond to the number of piecewise-linear regions. This grows with the number of internal neurons, though the relationship may vary depending on other aspects of the network architecture, e.g., the depth of the network. Previous work has studied the capacity implications for learning a robust decision boundary, finding that separating points while controlling Lipschitzness may require additional capacity beyond what would be necessary to simply separate them (Bubeck & Sellke, 2021). Beyond the capacity required to represent the decision boundary of a robust network, our work asks about the capacity required to tightly certify a given boundary. We find that in a piecewise-linear network, even if the boundary is optimal—in that all points in the distribution are indeed a distance of ε or more from it—the network may require additional capacity to be able to prove this using the Lipschitz constant. Setting the data distribution aside, we consider the goal of certifying all points that are sufficiently far from the boundary.
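Before turning to capacity, the trend implied by Equation 1 is easy to check numerically; the short sketch below (ours; the function name is illustrative) evaluates the ratio in log space for increasing d:

```python
import math

def corner_ratio(d):
    """Fraction of the eps-hypercube opposite a d-dimensional corner that is
    truly non-eps-robust (Equation 1); the remainder is robust but uncertified."""
    # work in log space for numerical stability at large d
    log_ratio = (d / 2) * math.log(math.pi) - math.lgamma(d / 2 + 1) - d * math.log(2)
    return math.exp(log_ratio)

for d in (1, 2, 3, 10, 50, 100):
    print(f"d = {d:3d}: ratio = {corner_ratio(d):.3e}")
# d = 2 gives pi/4 ~ 0.785; by d = 50 the ratio is below 1e-20, i.e., nearly
# every point in the hypercube is robust yet uncertified.
```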
As highlighted in Section 2.3, in places where the decision boundary forms high-dimensional “corners,” there may be relatively large volumes of points that are ε-far from the boundary but cannot be certified as long as the level curves simply run parallel to the boundary. In such cases, tight certification requires extra capacity specifically to round out the level curves around the corners in the decision boundary. We begin by demonstrating this concept via an illustrative example. We conclude by discussing the implications of our results and suggest avenues for future work. 3.1 AN ILLUSTRATIVE EXAMPLE OF HOW CAPACITY ENABLES TIGHT CERTIFICATION As an example of how Lipschitz-based certification can require excess capacity beyond what is necessary to learn a robust boundary, we consider a synthetic 2-D dataset that can be robustly separated by a simple piecewise-linear boundary. An illustration is provided in Figure 1a. We begin with a decision boundary given by B = {(x1, x2) : max(x1, x2) = 0}; this boundary separates points with negative x- and y-coordinates from points in the other three quadrants, and forms a 90° corner at the origin. The data are then generated such that all the points with label 0 lie a distance of at least ε below and to the left of the boundary, and the points with label 1 lie a distance of at least ε above and to the right of the boundary. Specifically, the 1-labeled points curve around the boundary such that there is a tight margin of exactly 2ε about the boundary. By construction, the function f(x) = [0, max(x1, x2)] produces logit values that yield the boundary B, with respect to which all points in the dataset are ε-locally robust. This function can be trivially implemented with minimal capacity by a simple MinMax network, f(x) = σ(xW¹)W², where σ is the MinMax activation function, and W¹ and W² are given by Equation 2: W¹ = [[1, 0], [0, 1]], W² = [[0, 0], [0, 1]] (2). Furthermore, the Lipschitz constant of f is 1;⁵ this can even be tightly obtained by taking the layer-wise product of the layer operator norms, as is typically done in practice. Hence, the points that can be certified will be those for which |f1(x) − f0(x)| ≥ ε; that is, the points outside the level curves max(x1, x2) = −ε and max(x1, x2) = ε. However, we see that this certified frontier fails to certify many points in the positive x-y quadrant, despite the fact that all the points are indeed robust with respect to the boundary of f. This is depicted in Figure 1b. In order to certify these points, we need the level curve corresponding to f1(x) − f0(x) = ε to bend smoothly around the boundary, rather than forming the same 90° angle. This requires more capacity. To gain a sense of how this plays out in practice, we consider adding capacity by expanding the number of neurons in the hidden layer (which contained only two neurons in our minimal example). In Figures 1d and 1e, we show the boundaries of two additional learned networks, g and h, with 20 and 200 internal neurons, respectively. We see that increasing the number of internal neurons by an order of magnitude yields a better set of level curves, but the network g still must compromise, as the level curves are not smooth enough to tightly follow the contour of the data. Finally, when we increase the number of internal neurons by two orders of magnitude, we at last obtain a function h that achieves nearly 100% VRA on our sample data. This function, as desired, forms essentially smooth level curves that bend around the boundary corner with a radius of ε.
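As an aside, the minimal construction of Equation 2 can be written out directly. The following sketch (ours, in plain NumPy; names and the choice ε = 0.5 are illustrative) implements f(x) = σ(xW¹)W² and exhibits a point in the positive quadrant that is ε-robust yet uncertified, matching the situation depicted in Figure 1b:

```python
import numpy as np

EPS = 0.5

def minmax(z):
    """MinMax activation on a pair: returns (min, max) of the two inputs."""
    return np.array([min(z[0], z[1]), max(z[0], z[1])])

W1 = np.array([[1.0, 0.0], [0.0, 1.0]])
W2 = np.array([[0.0, 0.0], [0.0, 1.0]])

def f(x):
    return minmax(np.asarray(x) @ W1) @ W2  # equals [0, max(x1, x2)]

# K = 1 (product of layer operator norms); certified radius = margin / K.
x = np.array([0.8 * EPS, 0.8 * EPS])        # a point in the positive quadrant
dist_to_boundary = np.linalg.norm(x)        # 0.8*eps*sqrt(2) > eps: robust
margin = abs(f(x)[1] - f(x)[0])             # 0.8*eps < eps: not certified
print(dist_to_boundary > EPS, margin / 1.0 >= EPS)  # True False
```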
Interestingly, h learns a boundary that is somewhat different from the boundary originally used to derive the data; however, both boundaries can be thought of as “equivalent” in the sense that they produce the same margin, reflecting that the optimal boundary for this dataset is not unique. Discussion. In our example, we needed 100 times more neurons than were necessary to construct an optimal decision boundary in order to tightly certify the boundary with the Lipschitz constant. While it is difficult to extrapolate from this toy example to a “real world” scenario, our results suggest that smoothing the level curves may require significant overhead beyond the capacity necessary to produce a truly robust boundary. Another aspect of this experiment worth noting is that when the network had insufficient capacity to learn an optimally robust, tightly certified boundary (e.g., in Figures 1c and 1d), the resulting model tended to compromise by making the corner less sharp (compared to the desired 90° angle). Geometrically, when the boundary has an inflection with a wider angle, the difference between the certifiable frontier and the frontier of robust points is less pronounced (consider, for example, what happens when the inflection approaches 180°). In effect, this means that while under-parameterization of piecewise-linear models may be a problem for robust model performance in practice, this limitation may be (at least in part) manifested as an under-fit model as opposed to one with many robust but non-certifiable points. This is reflected in the empirical results for certifiably trained models in the literature, which typically have lower “clean accuracies” than their standard-trained counterparts. However, we note that these models also exhibit a discrepancy between their certified accuracy and their vulnerability to actual attacks, leaving open the possibility that they may also fail to certify some truly robust points. ⁵More properly put, the Lipschitz constant of |f1 − f0|—which represents the margin by which the predicted class exceeds the non-predicted class—is 1. 3.2 POTENTIAL DRAWBACKS OF THE CAPACITY ESCAPE HATCH As we have seen, by adding capacity, we can help overcome the limitations of piecewise linearity by enabling the network to approximate smooth curves around corners in the decision boundary. For universal tight certification, this needs to be done in the neighborhood of all corners on the decision boundary. To the extent that each corner requires independent capacity, hopes for the scalability of such an approach seem slim; albeit, VRA only requires tight certification on the data manifold, meaning that extra capacity should only be needed in places where the decision boundary has sharp inflections near in-distribution points. However, this, too, presents an interesting problem. Namely, the network only has incentive to allocate capacity to round the level curves in the places that are necessary to certify its training set, i.e., where inflections in the decision boundary encroach on training points. Meanwhile, if similar inflections exist near test points not seen during training, the learned network may fail to certify them—even if the boundary is general, and even if it is also robust. In other words, we are faced with not only the challenge of learning a generally robust boundary, but additionally of learning a generally certifiable function.
Indeed, generalization of VRA is empirically observed to be worse than the corresponding “clean accuracy” would indicate—a principle that has been noted in prior work due to its privacy implications (Yeom et al., 2020). A Proposed Way Forward. Another possibility for addressing the fact that Lipschitz-based certification is PLL is to expand the hypothesis class to enable smooth curves in the decision surface. Ultimately, our analysis shows that Lipschitz-based certification is most effective when the level curves of the network function accurately reflect the ℓ2 distance to the boundary, which requires the possibility of smooth curves. This goal may be best achieved by purpose-built activations, as piecewise linearity stems from the choice of activation function. State-of-the-art Lipschitz-based certifiable training methods have enjoyed increased success in recent years through leveraging MinMax activations (Anil et al., 2019)—or a variant thereof proposed by Singla et al. (2022)—which are piecewise linear. MinMax has a distinct advantage over the more common ReLU activation, due to its gradient-norm-preserving (GNP) property, which Anil et al. demonstrate is key for tight, efficient Lipschitz bounds. While the need for gradient norm preservation remains clear, we posit that some form of smoothness is an additional desirable property, as it would free the hypothesis class from piecewise linearity. We believe the task of designing suitable smooth activation functions for PLL-certified networks is a promising avenue for future work. 4 RELATED WORK Power and Limitations of Lipschitz-based Certification. Several of the early efforts around robustness certification focused on post hoc certification of networks trained outside of the control of the certifier. This is a fundamentally hard problem, shown to be NP-complete by Katz et al. (2017) and Sinha et al. (2018). While this fundamentally limits the tractability of complete post hoc certification, the limitation is of lesser concern for modern approaches that incorporate certification into the training objective, thus encouraging learning models that better facilitate efficient certification. The specific limitations of Lipschitz-based certification have also been of great interest in the prior literature. Most of these results particularly consider the practical problem of bounding a neural network’s Lipschitz constant. For example, Huster et al. (2018) note that the common method of using the product of the layer-wise operator norms cannot tightly bound the Lipschitz constant of even basic functions in ReLU networks. Anil et al. (2019) study this point further, demonstrating a trade-off between expressive power and efficient Lipschitz bound computation in networks with non-gradient-norm-preserving activation functions. This limitation is handled by using network architectures with gradient-norm-preserving activation functions, such as MinMax, and orthonormal linear operators (though the latter need not necessarily be strictly enforced, as it is a learnable objective). Anil et al. conjecture that such networks are universal 1-Lipschitz function approximators, suggesting that learning any Lipschitz function in such a way that the Lipschitz constant can be bounded tightly and efficiently is possible. By contrast, our work points to previously unstudied limitations that are separate from the Lipschitz-constant bounding problem, and are indeed not mitigated through the use of MinMax activations, which are piecewise linear.
However, we propose that the limitations brought forth in our work may similarly be addressed via novel activation functions. On the flip side, previous work has also touched on the power of Lipschitz-based certification. Leino et al. (2021) showed that certification with the global Lipschitz constant can be as powerful as with the local Lipschitz constant when the model is under the learner’s control. We extend this result in a number of key ways. First, we prove a stronger result that can be stated for all points, rather than for a finite set of points certified via the local Lipschitz constant. Second, we explicitly consider the hypothesis class, demonstrating that smoothness is a necessary condition to achieve this result. Capacity Requirements for Robust Neural Networks. Understanding the role of capacity in deep neural networks has been a topic of interest in general, particularly due to the demonstrated effectiveness of highly over-parameterized models (Arora et al., 2018; Bubeck & Sellke, 2021; Du et al., 2019; Garg et al., 2022; Zhang et al., 2017). Recent work has also investigated this subject in the particular context of robust models. Bubeck & Sellke (2021) showed that under mild regularity assumptions, learning a highly accurate model with small Lipschitz constant requires significantly more parameters than would be required with no constraint on the Lipschitz constant—where the capacity overhead, in terms of the number of parameters, scales with the dimension. While a controlled Lipschitz constant is central to successful Lipschitz-based certification, our work (e.g., our example in Section 3.1) shows that a Lipschitz interpolation between points of opposite class is not sufficient for certification. As our analysis is focused on certification rather than Lipschitz interpolation, we complement the work of Bubeck & Sellke, showing that even further capacity may be required to appropriately bend the function’s level curves to facilitate Lipschitz-based certification. In addition to the information-theoretic capacity requirements, large numbers of parameters in deep networks may be necessary to facilitate efficient learning (Arora et al., 2018; Du et al., 2019). Recently, Garg et al. (2022) showed that robust learning in particular may require even greater over-parameterization than standard learning. Results such as these are complementary to work such as ours, which focuses on minimal parameterizations. Randomized Smoothing. Our work has focused on deterministic certification. By contrast, randomized smoothing (Cohen et al., 2019; Lecuyer et al., 2018) has become a popular method that instead provides a statistical guarantee of robustness. Randomized smoothing (RS) essentially modifies the original function by predicting the expected label under Gaussian⁶ noise. These predictions are empirically determined through sampling, with the statistical certificate depending on the unanimity of the sample labels. While RS provides a weaker robustness guarantee, it solidly outperforms deterministic methods in terms of certified accuracy. Interestingly, it seems clear that RS is not PLL, since it naturally smooths piecewise-linear networks, leading to a smooth boundary and certified frontier—this may be one of the keys to its success. This observation gives further support to the notion that state-of-the-art deterministic methods may be held back by piecewise linearity, and may benefit from smooth activation functions. ⁶Prior work has considered other distributions as well (Yang et al., 2020a).
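For concreteness, the smoothed prediction at the core of RS can be sketched in a few lines (ours; the certification step, which requires a statistical lower bound on the top class's probability rather than a raw majority vote, is elided):

```python
import numpy as np

def smoothed_predict(f, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority-vote estimate of the RS-smoothed classifier
    g(x) = argmax_c P[F(x + delta) = c], with delta ~ N(0, sigma^2 I).

    f is the base network mapping an input to a vector of logits.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + np.shape(x))
    labels = [int(np.argmax(f(x + d))) for d in noise]
    return int(np.bincount(labels).argmax())
```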
5 CONCLUSIONS AND FUTURE DIRECTIONS Incorporating Lipschitz-based certification into robust training procedures has proven to be the most effective way to achieve high deterministic ℓ2 verified-robust accuracy yet considered in the literature. Due to our Theorem 2, there is reason to believe Lipschitz-based certification has the power to remain as promising as current results suggest. However, we also showed that restricted to the hypothesis class of piecewise-linear networks, as has been the standard regime, Lipschitz-based certification becomes fundamentally limited. For piecewise-linear networks, this means that tight Lipschitz-based certification may require significantly more parameters, which, even if tractable, can complicate certifiably robust generalization (e.g., see Section 3.2). On the other hand, rather than viewing this as a fundamental drawback for Lipschitz-based certification, we propose that purpose-built activations—with the correct smoothness and gradient-norm-preserving properties—are a promising avenue for future work to free the most promising form of efficient deterministic certification from the limitations of piecewise linearity. A PROOFS A.1 PROOF OF THEOREM 1 Theorem Statement. Any piecewise-linear limited certification procedure is incomplete on the hypothesis class of piecewise-linear networks. Proof. It suffices to show that there exists a boundary achievable by a piecewise-linear network that no PLL certification method can tightly certify. We proceed by producing a piecewise-linear boundary that induces a smooth robust frontier. This is sufficient to prove our theorem, as ∆(Ccert(f, ε)) ≠ ∆(R(F, ε)) =⇒ Ccert(f, ε) ≠ R(F, ε). Consider the 2-D boundary given by max(x, y) = 0. Clearly, this boundary exists within the class of piecewise-linear functions, as the function f(x, y) = max(x, y) is piecewise linear. Now consider the points in the positive x-y quadrant. The points in this quadrant that are at distance ε from the boundary are given by √(x² + y²) = ε, which is not piecewise linear. By definition, any certification method that is PLL must have a certified frontier that is piecewise linear. Thus, the certified frontier of any such method cannot be equal to √(x² + y²) = ε in this quadrant. A.2 PROOF OF THEOREM 2 Theorem Statement. When the hypothesis class, F, is given as the set of Lipschitz functions, Lipschitz-based certification is complete on F. Proof. Let F be the set of Lipschitz functions. Consider the decision boundary of any function f ∈ F. Define f′ as follows: let d(x) be the minimum distance of x from the decision boundary and let f′(x) = d(x) · 1F(x), where 1F(x) is the one-hot encoding of F(x). First, observe that f′j − f′i is 1-Lipschitz for all i ≠ j. To see this, consider the following. The Lipschitz constant is given by sup_{x,x′} |(f′j(x) − f′i(x)) − (f′j(x′) − f′i(x′))| / ||x − x′|| = sup_{x,x′} |f′j(x) − f′j(x′) + f′i(x′) − f′i(x)| / ||x − x′|| (3). Consider points x and x′, and let us assume that ||x − x′|| = δ. We would like to bound the quantity given by (4), the numerator in (3), by δ: |f′j(x) − f′j(x′) + f′i(x′) − f′i(x)| (4). There are a few cases to consider. First, if F(x) and F(x′) are both different from i and j, then (4) is 0 ≤ δ. Since (4) is symmetric in both i and j, and x and x′, without loss of generality, we will assume F(x) = j.
This leaves two cases: when F(x′) = j, and when F(x′) ≠ j (in the latter case, we will not be concerned with whether or not F(x′) = i). In the first case we have
(4) = |f′j(x) − f′j(x′)| = |d(x) − d(x′)| (5)
= d(x) − d(x′) without loss of generality (6)
Let a be the nearest point on the boundary to x′, such that d(x′) = ||x′ − a||. Thus,
d(x) ≤ ||x − a|| as a is on the boundary (7)
≤ ||x − x′|| + ||x′ − a|| by the triangle inequality (8)
= δ + d(x′) (9)
=⇒ d(x) − d(x′) ≤ δ as desired (10)
In the second case, x and x′ are given different labels and we have
(4) = |f′j(x) + f′i(x′)| (11)
≤ d(x) + d(x′) as f′i(x′) is at most d(x′) (achieved when F(x′) = i) (12)
Since x and x′ are given different labels, there must be at least one part of the decision boundary that bisects the line segment connecting x and x′; let a be this intersection point. Additionally, since a is on the boundary, we must have that d(x) ≤ ||x − a|| and d(x′) ≤ ||x′ − a||. Thus, as desired,
d(x) + d(x′) ≤ ||x − a|| + ||x′ − a|| = δ (13)
This allows us to conclude that f′j − f′i is 1-Lipschitz for all i ≠ j, as claimed. The points that are certified by Lipschitz-based certification are those for which (14) holds, where j = F(x) and Kji is the Lipschitz constant of f′j − f′i:
min_{i≠j} { f′j(x) − f′i(x) − εKji } ≥ 0 (14)
Notice that when i ≠ F(x), f′i(x) = 0. Thus, noting also that Kji = 1 ∀i, j, (14) can be simplified to f′j(x) = d(x) ≥ ε. Therefore, the points that can be certified via Lipschitz-based certification are those for which d(x) ≥ ε, which are precisely the points that are ε-locally robust. A.3 PROOF OF PROPOSITION 3 Theorem Statement. Lipschitz-based certification is piecewise-linear limited. Proof. Assume the function, f, being certified is piecewise linear. Without loss of generality, consider inputs x for which the network predicts class j. The margin by which class j surpasses all other classes is given by m(x) = min_{i≠j} {fj(x) − fi(x)}. Note that m is piecewise linear as f is piecewise linear. Let K be the Lipschitz constant of m. The largest radius that can be certified at x is then m(x)/K. Thus, the certified frontier is given by m(x)/K = ε; this corresponds to the level curve of m at m = ε · K. Since m is piecewise linear, this level curve is piecewise linear. Thus, the certified frontier is piecewise linear, and Lipschitz-based certification is PLL. B LIMITATIONS OF OTHER CERTIFICATION METHODS B.1 LIMITATIONS OF LOCAL-LIPSCHITZ-BASED CERTIFICATION State-of-the-art deterministic ℓ2 certified performance is currently achieved using Lipschitz-based certification, which outperforms other types of certified training methods (Leino et al., 2021; Trockman & Kolter, 2021), such as those based on convex relaxations—e.g., (Wong et al., 2018)—or maximizing linear regions—e.g., (Croce et al., 2019; Xiao et al., 2019). Unsurprisingly, however, methods that use the local Lipschitz constant for certification can achieve similarly high VRA (Huang et al., 2021), though this comes at the cost of significantly slower certification. The local Lipschitz constant at a point x is given by Kε(x) in Definition 5, which essentially corresponds to the maximum slope of the function within an ε-neighborhood of x. Definition 5. The local Lipschitz constant is given by
Kε(x) = sup_{x1,x2 : ||x−x1||≤ε, ||x−x2||≤ε} { |f(x1) − f(x2)| / ||x1 − x2|| }. Local-Lipschitz-based certification, similar to Lipschitz-based certification (Section 2.2), certifies points, x, when the margin by which the top-predicted class, F(x), exceeds all other classes is greater than ε · Kε(x). While the local Lipschitz constant is always a lower bound for the global Lipschitz constant—and therefore local-Lipschitz-based certification can possibly be tighter—local-Lipschitz-based certification is nonetheless equally limited. We will consider a generous setting in which the bound used for certification is exact, i.e., where the certification procedure has oracle access to Kε(x). Because Kε(x) is not piecewise linear, local-Lipschitz-based certification is not strictly piecewise-linear limited (PLL) in this setting. It is worth noting, however, that methods for approximating the local Lipschitz constant may not leverage this smoothness in practice. Regardless, we show that local-Lipschitz-based certification is incomplete on piecewise-linear networks (Theorem 5). This result is related to the fact that when the learner is given control over the implementation of the boundary, (global) Lipschitz-based certification can match the power of local-Lipschitz-based certification; this result has been proven in a slightly weaker formulation by Leino et al. (2021). We provide an alternative theorem statement and proof here that better aligns with the insights in this work. Theorem 5. Local-Lipschitz-based certification is not complete on the hypothesis class of piecewise-linear networks. Proof. It suffices to show that there exists a boundary achievable by a piecewise-linear network for which no corresponding piecewise-linear implementation can be tightly certified by local-Lipschitz-based certification. Recall that by Corollary 4 there exists such a boundary for (global) Lipschitz-based certification. We will consider one of the same such boundaries. For a particular value of ε, consider the points ∆(R(F, ε)), which are at distance exactly ε from the boundary. There are two cases to consider: either (1) the local Lipschitz constant is always the same everywhere, i.e., ∀ε > 0, ∀x1, x2 ∈ ∆(R(F, ε)), Kε(x1) = Kε(x2), or (2) there is some variation in the local Lipschitz constant, such that ∃ε > 0, x1, x2 ∈ ∆(R(F, ε)) where Kε(x1) ≠ Kε(x2). In the first case, we see that Kε(x) = K (the global Lipschitz constant), meaning that local-Lipschitz-based certification will certify the exact same points as (global) Lipschitz-based certification. Thus, by Corollary 4, there must be a point which is robust at radius ε but not certifiable. In the second case, without loss of generality, assume Kε(x1) > Kε(x2). Because f is piecewise linear, it is comprised of a finite number of linear functions, which in turn have a finite number of distinct slopes (gradient norms). Thus, if Kε(x1) > Kε(x2), then Kε(x1) − Kε(x2) = δ, where δ belongs to some finite set of strictly positive values. Furthermore, without loss of generality, x1 and x2 can be chosen to be arbitrarily close together, i.e., they lie arbitrarily near a point where the local Lipschitz constant changes. We will therefore consider x1 and x2 chosen according to Equation 15:
||x1 − x2|| < ε · δ / K (15)
Let m2 be the margin by which the top-predicted class, F(x2), exceeds all other classes. The maximum radius that can be certified at x2 is thus m2/Kε(x2). Note that as certification is sound, we have
m2 / Kε(x2) ≤ ε (16)
Now consider the maximum radius that can be certified at x1.
Let m1 be the margin by which the top-predicted class, F(x1), exceeds all other classes. The maximum radius that can be certified at x1 is thus m1/Kε(x1):
m1 / Kε(x1) = m1 / (Kε(x2) + δ) by assumption (17)
≤ (m2 + K||x1 − x2||) / (Kε(x2) + δ) by definition of the Lipschitz constant (18)
< (m2 + ε · δ) / (Kε(x2) + δ) by our choice of ||x1 − x2|| in (15) (19)
≤ (ε · Kε(x2) + ε · δ) / (Kε(x2) + δ) by (16) (20)
= ε (21)
Thus, we see that x1 cannot be certified with radius ε, despite the fact that its distance from the boundary is exactly ε. B.2 OTHER PIECEWISE-LINEAR LIMITED METHODS Our work focuses primarily on Lipschitz-based certification, which we demonstrate is fundamentally limited on the hypothesis class of piecewise-linear networks. However, this limitation is not due specifically to the use of the Lipschitz constant per se; instead, we attribute it more generally to the fact that Lipschitz-based certification always produces a piecewise-linear certified frontier on piecewise-linear networks, a property we refer to as PLL (Definition 4). In this section we briefly discuss how this property may apply to other flavors of certification techniques that have been proposed in the literature. Convex Relaxations and Dual Networks. One classic approach for certification is through convex relaxation. A survey of such methods is given by Salman et al. (2019), who point out the limitations (regarding tight certification) of convex relaxations (though the authors do not consider our setting where the learner may control the implementation of the boundary, but rather focus on post hoc certification). Though many approaches in this family have been proposed, we will consider two baseline methods that capture a primal and dual formulation of convex relaxations: Fast-Lin (Weng et al., 2018), and an approach proposed by Wong & Kolter (2018), often referred to as “KW.” Fast-Lin directly derives upper and lower bounds on the output of a ReLU network in order to determine if an adversarial example might exist. This is done by iteratively computing upper and lower bounds for the neurons in each layer and using them to replace the ReLU activations with linear upper and lower bounds. This computation resembles a piecewise-linear network, suggesting that Fast-Lin is PLL. The KW approach formulates the adversary as an LP that optimizes over the convex outer approximation of the set of top-level activations reachable through a norm-bounded perturbation. Crucially, for the sake of tractability, the LP can be bounded by the feasible set of the dual, which Wong & Kolter show can be expressed as a dual network, which resembles a backwards pass in the network being certified. For ReLU networks, the activations in the dual network are replaced with their upper convex envelopes (a linear function) over the bounded set [ℓ, u], where ℓ and u represent lower and upper bounds on the pre-ReLU neural activations. The upper and lower bounds can be iteratively computed in a similar way as in Fast-Lin; thus, in its simplest form,⁷ the dual network inherits the piecewise linearity of the original ReLU network being certified, suggesting the resulting certified frontier is piecewise linear, and certification is PLL. ⁷This approach has been refined in subsequent work that we do not consider here (Wong et al., 2018). Hyperplane Projections. As exact certification is NP-complete, the literature has often turned to training procedures that help simple, approximate certification enjoy greater success.
In piecewise-linear networks, the input space can be partitioned into a polyhedral complex where each convex region corresponds to a single activation pattern, over which the network is linear (Croce et al., 2019; Fromherz et al., 2021; Jordan et al., 2019). Motivated by this view of ReLU networks, one family of robust training approaches attempts to expand the linear regions of the network to simplify the combinatorial analysis of the possible ReLU activation patterns (Croce et al., 2019; Xiao et al., 2019). Croce et al. proposed a simple certification technique for networks trained with their “Maximum Margin Regularization” (MMR), where a point, x, is certified only if (1) the entire ε-ball around x is contained in a single convex activation region, and (2) the linear function corresponding to the region does not have a boundary within ε of x. This approach is clearly PLL, as the certified regions can be obtained by shrinking each activation region (possibly split in two if a linear decision boundary crosses it) by ε. Since the original regions are convex polytopes, so too are the certified regions; thus the certified frontier is piecewise linear. In contrast to our findings for Lipschitz-based certification, it is worth noting that the limitations of this approach go beyond PLL, as completeness of the MMR approach is in direct conflict with non-linearity; and moreover, the approach is designed specifically for piecewise-linear networks. C DETAILS ON EXPERIMENTS The experiments presented in Figure 1 in Section 3 were performed using the gloro Python library, which implements the GloRo Net method of Leino et al. (2021) for training certifiably robust models by incorporating Lipschitz-based certification into training. All networks in the experiments consisted of a 1-hidden-layer dense network with MinMax (Anil et al., 2019) activations; three specific architectures were used, with 2, 20, and 200 hidden units, respectively. Models were trained for 64 epochs, with a batch size of 128. We chose hyperparameters inspired by those used by Leino et al. (see the original paper for details on the meaning of the various hyperparameters); namely, we used GloRo-TRADES loss with λ = 1.2, we scaled ε logarithmically to its ultimate value of 0.5 by the half-way point of training, and we linearly decreased the learning rate from 10⁻³ to 0 half-way through training. D AN ILLUSTRATIVE EXAMPLE OF THE CORNER PROBLEM For illustrative purposes, a diagram is provided in Figure 2 that serves as a visual explanation of the “corner problem” described in Section 2.3. The boundary of a neural network, shown by the bold black line, forms a sharp corner. The complement to the robust region, i.e., the set of points that are not robust, is shown in gray. A simple implementation of this boundary has level curves that make similar sharp corners; the level curve corresponding to the certified frontier is shown by the dotted line, and the certified region is colored in blue. The region opposite the corner in the boundary is highlighted. We see that in this region, there is a set of points, shown in orange, that are not certified, despite the fact that they are robust, being at distance greater than ε from the boundary.
In this two-dimensional example, these falsely flagged points make up a relatively small fraction of the uncertified points opposite the corner (represented as the union of the orange points and the highlighted gray points in the diagram); however, in high dimensions, virtually all uncertified points in this region would be falsely flagged, as indicated by Equation 1.
1. What is the focus of the paper regarding limitations in robustness certification? 2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical analysis? 3. Do you have any concerns about the contribution or significance of the paper's findings? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions regarding the paper's writing style, missing conditions, or poor presentation?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This is a theoretical work studying the limitations of piecewise-linear functions in robustness certification. Main findings include: 1) Any piecewise-linear certification is incomplete for piecewise-linear networks; 2) For Lipschitz networks, when the learner can control the network, there exists some implementation such that Lipschitz-based certification is tight, but Lipschitz-based certification is piecewise-linear limited when restricted to the hypothesis class of piecewise-linear networks. 3) Capacity can help tight certification, but using Lipschitz-based certification may need additional capacity. Strengths And Weaknesses Strengths: This work conducted some theoretical analysis on the fundamental limits of piecewise linearity. The study concluded the limitation of piecewise-linear certification and Lipschitz-based certification under certain conditions. Weaknesses: Theorem 1 talks about “piecewise-linear limited certification procedure”. But it is unclear what certification methods the paper is referring to by “piecewise-linear limited certification”. I didn’t see a concrete example in the paper for “piecewise-linear limited certification”. I know many works use convex-relaxation-based certification. But the limitation of convex-relaxation-based certification is already known as the convex relaxation barrier in Salman et al., 2019, which is missing in this paper. Thus I am concerned if the contribution of Theorem 1 is significant. Theorem 2 sounds kind of trivial to me. The existence of a Lipschitz function which can be tightly verified does not seem to be quite useful. In particular, are such Lipschitz functions nontrivial (i.e., do they correspond to any network with a good accuracy)? Section 3 on capacity is based on a case study only, without formal theories. The paper is poorly written. Many theorems are not clearly stated, especially with unclear conditions: In Section 2.2.1, it is mentioned that the learner needs to be given control over the network implementation, which seems to be a condition of Theorem 2 but is missing in the Theorem. In Section 2.2.2, it looks like Proposition 3 is only applicable when the networks are restricted to the hypothesis class of piecewise-linear networks. This condition is also missing in Proposition 3, which simply says “Lipschitz-based certification is piecewise-linear limited.” Section 2.3 is very poorly presented. The section describes some geometry elements such as “corner”, “point”, etc., without any figure. In the current form, it is difficult for readers to understand this section. At least the writing should be combined with a figure. In Section 2, “the main insights in this work stem from the simple, yet crucial observation that the points lying at a fixed Euclidean distance from a piecewise-linear decision boundary, in general, do not themselves comprise a piecewise-linear surface.” But doesn’t certification consider points within a distance in the input space rather than the output space? Why do the authors consider “fixed Euclidean distance from a piecewise-linear decision boundary” (output space rather than input space)? In Definition 2, why does there have to be f′? Why can’t it be contained in “cert” already (the certification procedure)? Salman, H., Yang, G., Zhang, H., Hsieh, C. J., & Zhang, P. (2019). A convex relaxation barrier to tight robustness verification of neural networks. Advances in Neural Information Processing Systems, 32.
Clarity, Quality, Novelty And Reproducibility Clarity and quality: Many parts of the paper are unclear, as detailed above. Novelty: Not novel. Reproducibility: N/A.
ICLR
Title Limitations of Piecewise Linearity for Efficient Robustness Certification Abstract Certified defenses against small-norm adversarial examples have received growing attention in recent years; though certified accuracies of state-of-the-art methods remain far below their non-robust counterparts, despite the fact that benchmark datasets have been shown to be well-separated at far larger radii than the literature generally attempts to certify. In this work, we offer insights that identify potential factors in this performance gap. Specifically, our analysis reveals that piecewise linearity imposes fundamental limitations on the tightness of leading certification techniques. These limitations are felt in practical terms as a greater need for capacity in models hoped to be certified efficiently. Moreover, this is in addition to the capacity necessary to learn a robust boundary, studied in prior work. However, we argue that addressing the limitations of piecewise linearity through scaling up model capacity may give rise to potential difficulties—particularly regarding robust generalization—therefore, we conclude by suggesting that developing smooth activation functions may be the way forward for advancing the performance of certified neural networks. 1 INTRODUCTION Since the discovery of adversarial examples (Szegedy et al., 2014), defenses against malicious input perturbations to deep learning systems have received notable attention. While many early-proposed defenses—such as adversarial training (Madry et al., 2018)—are heuristic in nature, a growing body of work seeking provable defenses has arisen (Cohen et al., 2019; Croce et al., 2019; Fromherz et al., 2021; Huang et al., 2021; Jordan et al., 2019; Lee et al., 2020; Leino & Fredrikson, 2021; Leino et al., 2021; Li et al., 2019; Singla et al., 2022; Trockman & Kolter, 2021; Wong et al., 2018; Zhang et al., 2018). Generally, such defenses attempt to provide a certificate of local robustness (given formally in Definition 1), which guarantees a network’s prediction on a given point is stable under small perturbations (typically in Euclidean or sometimes ℓ∞ space); this precludes the possibility of small-norm adversarial examples on certified points. The success of a certified defense is typically measured empirically using verified robust accuracy (VRA), which reflects the fraction of points that are both (i) classified correctly and (ii) certified as locally robust. Despite the fact that perfect robust classification (i.e., 100% VRA) is known to be possible on standard datasets at the adversarial perturbation budgets used in the literature (Yang et al., 2020b), this possibility is far from realized in the current state of the art. For example, on the benchmark dataset CIFAR-10, state-of-the-art methods offering deterministic guarantees of ℓ2 robustness¹ have remained at approximately 60% VRA (Huang et al., 2021; Leino et al., 2021; Singla et al., 2022; Trockman & Kolter, 2021), while non-robust models handily eclipse 95% accuracy. It is difficult to precisely account for this discrepancy; though among other reasons, state-of-the-art methods typically use loose bounds to perform certification—as exact certification is (for general ReLU networks) NP-complete (Katz et al., 2017; Sinha et al., 2018)—which conceivably leads to falsely flagging truly robust points or to over-regularization of the learned model.
While conservative approximations may be necessary to perform efficient certification (and to facilitate efficient robust training), it is certainly possible that they foil reasonable hopes for “optimality.” In this work, we offer further insight into the shortcomings of modern certification techniques by analyzing their limitations in the context of the architectural settings in which they are conventionally employed. ¹In this work we primarily consider certified defenses that provide a deterministic guarantee of local robustness, as opposed to a statistical guarantee. For further discussion of this point, see Section 4. In particular, we find that piecewise linearity—a practically ubiquitous property of neural networks considered in the certification literature (e.g., standard ReLU and the more recently popularized “MinMax” (Anil et al., 2019) activations are both piecewise linear)—fundamentally limits the power of Lipschitz-based ℓ2 local robustness certification. In effect, we argue, this means that extra capacity is needed simply for facilitating efficient certification—in addition to whatever capacity may be required for learning a robust boundary (e.g., as examined by Bubeck & Sellke (2021)). On the other hand, perhaps surprisingly, we prove that free from the constraint of piecewise linearity, Lipschitz-based certification is powerful enough to perform complete certification on any decision boundary, provided the implementation of the function giving rise to the boundary is under the learner’s control (indeed, this is consistent with the fact that the highest-performing certified defenses incorporate Lipschitz-based certification into training). These latter findings suggest that continued progress towards improving state-of-the-art VRA may be enabled through carefully chosen smooth activation functions,² which do not inherently limit the power of what are currently the most promising forms of efficient local robustness certification. In summary, the primary contributions of this work are as follows: (1) we show that piecewise linearity imposes inherent limitations on the tightness of efficient robustness certification—our primary focus is Lipschitz-based certification, but we discuss similar limitations of other methods in Appendix B; (2) we prove that Lipschitz-based certification is fundamentally powerful for tight robustness certification, provided (i) the robust learning procedure has power over the implementation of the classifier, and (ii) the hypothesis class is not limited to piecewise-linear networks; and (3) we demonstrate that tight Lipschitz-based certification may require significant capacity overhead in piecewise-linear networks. These findings offer a new perspective on the sticking points of modern certified training methods, and suggest possible paths forward. We begin in Section 2 by introducing the limitations piecewise linearity imposes on robustness certification, starting generally, and narrowing our focus specifically to Lipschitz-based certification. We then discuss the role that capacity plays in mitigating these limitations in Section 3, which concludes with a discussion of the implications of our findings, both retrospectively and prescriptively. Finally, we discuss related work in Section 4, and offer our concluding remarks in Section 5.
2 LIMITATIONS OF PIECEWISE LINEARITY The main insights in this work stem from the simple, yet crucial observation that the points lying at a fixed Euclidean distance from a piecewise-linear decision boundary, in general, do not themselves comprise a piecewise-linear surface. Therefore, in order for a certification procedure to precisely recover the set of robust points—those which lie a distance of at least ε from the decision boundary—it must be capable of producing a boundary between robust and non-robust points that is not piecewise-linear, even on networks that are. However, as we will see, Lipschitz-based certification, for example, is in fact constrained to produce a piecewise-linear “certified frontier” on piecewise-linear networks, as the set of just-certifiable points essentially corresponds to a level curve in the output of the network being certified. On the other hand, if the level curves of the function being certified correspond (up to some constant factor) to their distance from the decision boundary (and must therefore include smooth curves), Lipschitz-based certification identifies precisely the points that are truly ε-locally robust, provided a tight bound on the Lipschitz constant. As we will make clear, this has important implications regarding the power of Lipschitz-based certification in properly suited network architectures. In the remainder of this section, we formalize this intuition and discuss some of its implications. Section 2.1 introduces our main theorem regarding the limitations imposed by piecewise linearity, along with the necessary background and definitions. Section 2.2 narrows the focus to Lipschitz-based certification, showing that despite being powerful in general, it is fundamentally limited within the hypothesis class of piecewise-linear networks. Finally, Section 2.3 presents a thought experiment that provides basic intuition about the possible scale of the problems caused by these limitations. ²Or at least, activation functions which enable learning curved (as opposed to piecewise linear) functions. 2.1 FUNDAMENTAL LIMITATIONS TO CERTIFICATION COMPLETENESS For our purposes, we will consider a neural network to be a function f : Rn → Rm mapping n-dimensional inputs to logit values corresponding to m different classes. From the network function f, we derive a neural classifier, F : Rn → [m], by letting F(x) = argmax_{i∈[m]} fi(x). When it is clear from the context which we are referring to, we will use the term “neural network” for both the network function f and its corresponding classifier F. Note that two different neural network functions, f and f′, may lead to the same predictions everywhere, i.e., ∀x . F(x) = F′(x). When this happens, we say that f and f′ share the same decision boundary, where the decision boundary is simply the set of points where fi(x) = fj(x) for some i ≠ j ∈ [m]. In this work, we consider the problem of local robustness certification. As in prior work, we define local robustness as a property of a point x and classifier F, parameterized by a perturbation budget, or robustness radius, ε, as in Definition 1. Definition 1 (ε-Local Robustness). A classifier F : Rn → [m] is ε-locally robust at point x ∈ Rn, with respect to norm ||·||, if ∀x′ ∈ Rn . ||x − x′|| ≤ ε =⇒ F(x) = F(x′). A certification procedure, cert, is a function that takes a neural network, f, a point, x, and a perturbation budget, ε, and produces a label in {0, 1}, where an output of 1 means that f is certified as ε-locally robust at x.
A valid certification procedure must be sound, i.e., cert(f, x, ε) = 1 =⇒ F is ε-locally robust at x; however, it need not be complete, i.e., it may be the case that cert(f, x, ε) = 0 and yet F is in fact ε-locally robust at x. For a given certification procedure, let the certified regions of f, Ccert(f, ε) = {x : cert(f, x, ε)}, be the set of points that can be positively certified by cert. Similarly, let the robust regions of f be given by the set R(F, ε) = {x : F is ε-locally robust at x} of ε-locally robust points (note that, in contrast to Ccert, R does not depend on the implementation of f, only its classification outputs, given by F). Soundness entails that ∀f . Ccert(f, ε) ⊆ R(F, ε), but clearly it is desirable for Ccert(f, ε) to match R(F, ε) as tightly as possible; when this is achieved perfectly we can consider cert to be “complete.” However, as Ccert(f, ε) can depend on the underlying function, f, which has a surjective mapping to classifiers, F, derived from the same hypothesis class, we must be careful in defining completeness precisely. Let F be a hypothesis class—a family of functions of type Rn → Rm, e.g., that are captured by some neural network architecture. We will also use the slight abuse of notation, F ∈ F, to denote any F : Rn → [m] such that there exists a function f′ ∈ F which produces the same labels as F on all inputs, i.e., ∀x . F(x) = argmax_{i∈[m]} f′i(x). We say that a certification procedure, cert, is complete on F if all possible decision boundaries achievable by functions in the hypothesis class have at least one implementation in F for which cert perfectly recovers the true robust regions. This is stated formally in Definition 2. Definition 2. A certification procedure, cert, is complete on hypothesis class, F, if for ε > 0, ∀F ∈ F . ∃f′ ∈ F . (∀x . F(x) = argmax_{i∈[m]} f′i(x)) ∧ (Ccert(f′, ε) = R(F, ε)). Essentially, completeness over a hypothesis class entails a notion of compatibility between the certification procedure and the hypothesis class; specifically, it means that for any decision boundary expressible by the hypothesis class, it is possible for a learning procedure to produce a model that implements the decision boundary in a way that makes the certification procedure complete. Definition 2 provides a key relaxation from a stricter notion of completeness that would require Ccert(f, ε) = R(F, ε) for all f, as this would not be achievable by any polynomial certification procedure³ (Katz et al., 2017; Sinha et al., 2018). By requiring tight certification only modulo the decision boundary, we avoid this limitation, splitting the responsibility for completeness between the certification procedure, the learning algorithm, and the hypothesis class. ³Assuming P ≠ NP. Next, we will also find it useful to define the certified frontier of F under cert (Definition 3); essentially, the set of points that are just barely certified, which lie at the frontier of the certified regions. We will similarly define the robust frontier as the set of points that are just barely ε-locally robust, which lie at the frontier of the robust regions. Definition 3 (Certified Frontier). The certified frontier of a neural network, F : Rn → [m], under certifier, cert, at perturbation budget, ε, is the set of points ∆(Ccert(f, ε)) = { x : cert(f, x, ε) ∧ (∀δ > 0 . ¬cert(f, x, ε + δ)) }. We now turn to the specifics of one of our main results, namely, that piecewise linearity is a limiting factor for tight certification.
Of course, as alluded to earlier, some certification procedures do achieve complete certification on piecewise-linear networks—e.g., (Jordan et al., 2019; Tjeng et al., 2019)—however, such methods are invariably exponential. Thus, we characterize the set of piecewise-linear limited (PLL) methods in Definition 4. Intuitively, a certification procedure is PLL if it is constrained to produce piecewise-linear certified frontiers on piecewise-linear models.

Definition 4 (Piecewise-linear Limited Certification). A certification procedure, cert, is piecewise-linear limited (PLL) if
∀f . f is piecewise-linear =⇒ Δ(C_cert(f, ε)) is piecewise-linear.

Note that the robust frontier of a network F is, in general, not piecewise linear, even if F (and thus its decision boundary) is piecewise linear. Thus, if the certified frontier of cert is piecewise linear, cert cannot be complete, i.e., C_cert(f, ε) ≠ R(F, ε). Moreover, this means that any piecewise-linear limited certification procedure cannot be complete on the hypothesis class of piecewise-linear networks (Theorem 1). The proof of Theorem 1 is given formally in Appendix A.1.

Theorem 1. Any piecewise-linear limited certification procedure is incomplete on the hypothesis class of piecewise-linear networks.

The proof of Theorem 1 relies on the fact that a piecewise-linear function cannot be equal to a function exhibiting smooth curves. However, it is known that neural networks, provided with enough capacity, can approximate any function with arbitrary precision (Hornik, 1991). We address this point in Section 3, where we discuss the implications of Theorem 1 regarding the capacity requirements of tightly certifiable networks.

2.2 THE POWER AND LIMITATIONS OF LIPSCHITZ-BASED CERTIFICATION

We will now narrow our focus to consider the specific family of Lipschitz-based certification methods. Such methods perform certification by using an upper bound, K, on the network’s Lipschitz constant; essentially, a point is certified if the margin by which the top-predicted class exceeds all other classes is greater than εK. In our work, we will set aside the details around how the Lipschitz constant is obtained, though this is also a source of potential looseness in the general approach. That is, we will (optimistically) take for granted in our analysis that a tight bound is obtained.

Lipschitz-based certification has proven effective in the literature, achieving state-of-the-art performance—when paired with an appropriate training routine—despite its simplicity (Leino et al., 2021; Trockman & Kolter, 2021). Lipschitz-based certification is advantageous in many ways; in addition to being easy to incorporate into a robust learning objective, it enables zero-cost certification at run time, as the Lipschitz constant does not need to be recomputed after training. On the other hand, it would seem that Lipschitz-based certification is fundamentally underpowered—the “global” Lipschitz constant is a conservative estimate of the local Lipschitz constant, which in turn gives a conservative estimate of how much the network output can change within a given neighborhood. If a primary sticking point for advancing certified accuracy is loose certification, it is fair to ask how promising Lipschitz-based certification will continue to be. The philosophy behind incorporating Lipschitz-based certification into training is essentially that the potential shortcomings of Lipschitz-based certification can be addressed by learning an easily certifiable network function.
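Concretely, the certification check described at the start of this subsection amounts to a margin comparison. The following is a minimal sketch of that check (our own illustration, not code from the paper); the toy network f, and the claim that K = 1 tightly bounds its logit-difference Lipschitz constant, are specific to this example.

```python
import numpy as np

def lipschitz_certify(f, x, eps, K):
    """Certify x as eps-locally robust when the margin of the top class over
    the runner-up exceeds eps * K, where K upper-bounds the Lipschitz constant
    of the relevant logit differences. Sound given a valid K, but incomplete:
    False does not mean the point is non-robust."""
    logits = np.sort(np.asarray(f(x)))
    margin = logits[-1] - logits[-2]
    return bool(margin > eps * K)

# Toy piecewise-linear network: f(x) = [0, max(x1, x2)]; the logit difference
# f1 - f0 = max(x1, x2) is 1-Lipschitz, so K = 1 is tight here.
f = lambda x: np.array([0.0, max(x[0], x[1])])

print(lipschitz_certify(f, np.array([-1.0, -1.0]), eps=0.5, K=1.0))   # True
# The point below IS 0.5-locally robust (its distance to the boundary is
# about 0.64), yet the margin test fails -- an instance of looseness:
print(lipschitz_certify(f, np.array([0.45, 0.45]), eps=0.5, K=1.0))   # False
```

Note that the second point sits opposite the corner of the boundary max(x1, x2) = 0; Section 2.3 examines exactly this failure mode.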
We show that this intuition (that the shortcomings of Lipschitz-based certification can be addressed by learning an easily certifiable network function) is essentially correct. Perhaps surprisingly, we show that Lipschitz-based certification is sufficiently powerful to be complete on the hypothesis class of Lipschitz functions.⁴ However, we also show that Lipschitz-based certification is PLL, meaning this potential cannot be achieved with a hypothesis class constrained by piecewise linearity.

⁴I.e., with bounded Lipschitz constant. Note that this is not a meaningful constraint for neural networks, as any neural network with Lipschitz activation functions and finite weights is Lipschitz in this sense.

2.2.1 LIPSCHITZ-BASED CERTIFICATION IS POWERFUL

We begin by showing that for any boundary achievable by a Lipschitz network function, when the learner is given control over the precise network function implementing the boundary, it is always possible to find an implementation that can be tightly certified using Lipschitz-based certification. This is stated formally in Theorem 2. Theorem 2 further entails that there exists a network function for any 2ε-separated data that achieves perfect VRA under Lipschitz-based certification. The proof of Theorem 2 is given in Appendix A.2.

Theorem 2. When the hypothesis class, 𝓕, is given as the set of Lipschitz functions, Lipschitz-based certification is complete on 𝓕.

2.2.2 LIPSCHITZ-BASED CERTIFICATION IS LIMITED BY PIECEWISE-LINEARITY

Despite the power of Lipschitz-based certification for general functions, when restricted to the hypothesis class of piecewise-linear networks, it becomes fundamentally limited. That is, formally, Lipschitz-based certification is PLL (Proposition 3).

Proposition 3. Lipschitz-based certification is piecewise-linear limited.

Proposition 3 follows essentially because the certified frontier of Lipschitz-based certification corresponds to a particular level curve of the network function, which is piecewise linear whenever the function is. As a direct consequence of Proposition 3 and Theorem 1, we arrive at Corollary 4.

Corollary 4. Lipschitz-based certification is not complete on the hypothesis class of piecewise-linear networks.

Note that taken in the context of Theorem 2, Corollary 4 means that, in a sense, the fundamental limitation of Lipschitz-based certification is not intrinsic to its simplicity (e.g., because the local Lipschitz constant might be tighter than the global constant on some functions), but rather, it is related to the hypothesis class of networks being certified. Put differently, piecewise linearity imposes real limitations on Lipschitz-based certification that cannot be attributed to practical, but non-fundamental, issues, such as efficient computation of Lipschitz bounds, etc.

2.3 THE PROBLEM WITH CORNERS AND THE CURSE OF DIMENSIONALITY

The incongruence between the piecewise-linear certified frontier of Lipschitz-based methods and the robust frontier of a piecewise-linear boundary, which features smooth curves, becomes relevant when the boundary comes to a “corner,” or relatively sharp inflection point. At corners, the robust frontier curves at a fixed radius around the corner, while the certified frontier, absent aid from additional capacity (see Section 3), runs parallel to the facets forming the corner, offset by a fixed amount (see Figure 2 in Appendix D for an illustration). The sharper the corner, the larger the difference will be between the corresponding robust and certified regions.
Additionally, we will see that this effect grows with the dimension of the corner, i.e., with the number of independent half-spaces that meet to create it. As a thought experiment, we will model a d-dimensional corner as the intersection of d orthogonal half-spaces. Assuming the level curves near the corner run parallel to the half-spaces, h ∈ H, forming the corner, in the best case, the certified region is given by the union of half-spaces obtained by flipping each h ∈ H and shifting it by ε.

Consider the hypercube of width ε just opposite the corner. This hypercube lies entirely outside the certified region, meaning that no point within it can be certified using Lipschitz-based certification. However, only the points intersecting the hypersphere of radius ε centered at the corner are truly non-ε-robust. We can compute the ratio of the volume of the intersecting portion of the hypersphere to that of the hypercube, given by Equation 1:

π^(d/2) / (Γ(d/2 + 1) · 2^d)    (1)

As the dimension increases, this ratio tends to zero, meaning that in high dimensions, almost all points in this region opposite the corner are incorrectly uncertified. Furthermore, the maximum distance from an uncertified point within this region to the boundary is equal to the diagonal of the hypercube, which is given by √d · ε. This means that even points that are significantly more robust than required may yet be uncertified.

3 THE ROLE OF CAPACITY

The primary limitation of Lipschitz-based certification in piecewise-linear networks derives from the fact that we cannot have smoothly curved level curves in such networks (or, more generally, that PLL certification methods cannot have smoothly curved certified frontiers in such networks). However, while this is true in the strictest sense, a function with smooth curves can be approximated with arbitrary precision, given sufficient capacity. In other words, increased network capacity may be one possible option to mitigate the fundamental limitations discussed throughout Section 2. In this section, we investigate the capacity requirements necessary for tight PLL certification in piecewise-linear networks.

While the precise meaning of “capacity” in a quantifiable sense is a bit nebulous, for our purposes, we will consider capacity in a piecewise-linear network to correspond to the number of piecewise-linear regions. This grows with the number of internal neurons, though the relationship may vary depending on other aspects of the network architecture, e.g., the depth of the network. Previous work has studied the capacity implications of learning a robust decision boundary, finding that separating points while controlling Lipschitzness may require additional capacity beyond what would be necessary to simply separate them (Bubeck & Sellke, 2021). Besides the capacity required to represent the decision boundary in a robust network, our work asks instead about the capacity required to tightly certify a given boundary. We find that in a piecewise-linear network, even if the boundary is optimal—in that all points in the distribution are indeed a distance of ε or more from it—the network may require additional capacity to be able to prove this using the Lipschitz constant. Setting the data distribution aside, we consider the goal of certifying all points that are sufficiently far from the boundary.
As highlighted in Section 2.3, in places where the decision boundary forms high-dimensional “corners,” there may be relatively large volumes of points that are ε-far from the boundary but cannot be certified as long as the level curves simply run parallel to the boundary. In such cases, tight certification requires extra capacity specifically to round out the level curves around the corners in the decision boundary. We begin by demonstrating this concept via an illustrative example. We conclude by discussing the implications of our results and suggest avenues for future work.

3.1 AN ILLUSTRATIVE EXAMPLE OF HOW CAPACITY ENABLES TIGHT CERTIFICATION

As an example of how Lipschitz-based certification can require excess capacity beyond what is necessary to learn a robust boundary, we consider a synthetic 2-D dataset that can be robustly separated by a simple piecewise-linear boundary. An illustration is provided in Figure 1a. We begin with a decision boundary given by B = {(x1, x2) : max(x1, x2) = 0}; this boundary separates points with negative x- and y-coordinates from points in the other three quadrants, and forms a 90° corner at the origin. The data are then generated such that all the points with label 0 lie a distance of at least ε below and to the left of the boundary, and the points with label 1 lie a distance of at least ε above and to the right of the boundary. Specifically, the 1-labeled points curve around the boundary such that there is a tight margin of exactly 2ε about the boundary.

By construction, the function f(x) = [0, max(x1, x2)] produces logit values that yield the boundary B, with respect to which all points in the dataset are ε-locally robust. This function can be trivially implemented with minimal capacity by a simple MinMax network, f(x) = σ(xW^1)W^2, where σ is the MinMax activation function, and W^1 and W^2 are given by Equation 2:

W^1 = [1 0; 0 1],  W^2 = [0 0; 0 1]    (2)

Furthermore, the Lipschitz constant of f is 1;⁵ this can even be tightly obtained by taking the layer-wise product of the layer operator norms, as is typically done in practice. Hence, the points that can be certified will be those for which |f_1(x) − f_0(x)| ≥ ε; that is, the points outside the level curves max(x1, x2) = −ε and max(x1, x2) = ε. However, we see that this certified frontier fails to certify many points in the positive x-y quadrant, despite the fact that all the points are indeed robust with respect to the boundary of f. This is depicted in Figure 1b. In order to certify these points, we need the level curve corresponding to f_1(x) − f_0(x) = ε to bend smoothly around the boundary, rather than forming the same 90° angle. This requires more capacity.

To gain a sense of how this plays out in practice, we consider adding capacity by expanding the number of neurons in the hidden layer (which contained only two neurons in our minimal example). In Figures 1d and 1e, we show the boundaries of two additional learned networks, g and h, with 20 and 200 internal neurons, respectively. We see that increasing the number of internal neurons by an order of magnitude yields a better set of level curves, but the network g still must compromise, as the level curves are not smooth enough to tightly follow the contour of the data. Finally, when we increase the number of internal neurons by two orders of magnitude, we at last obtain a function h that achieves nearly 100% VRA on our sample data. This function, as desired, forms essentially smooth level curves that bend around the boundary corner with a radius of ε.
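The two-neuron construction above is small enough to check directly. Below is a short sketch (our own code, not the paper's) instantiating Equation 2 together with the layer-wise Lipschitz bound; the pairwise MinMax implementation is an assumption consistent with its usual definition (Anil et al., 2019).

```python
import numpy as np

# Weights from Equation 2.
W1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])
W2 = np.array([[0.0, 0.0],
               [0.0, 1.0]])

def minmax(z):
    """MinMax activation for a single pair of neurons: outputs (min, max)."""
    return np.array([z.min(), z.max()])

def f(x):
    """The minimal MinMax network f(x) = sigma(x W1) W2 from the example."""
    return minmax(x @ W1) @ W2

print(f(np.array([0.3, -0.2])))   # [0.  0.3], i.e., [0, max(x1, x2)]

# The layer-wise product of operator (spectral) norms bounds the Lipschitz
# constant, and is tight here (MinMax is itself 1-Lipschitz):
K = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)
print(K)   # 1.0
# Certification then reduces to |f1(x) - f0(x)| >= eps * K, exactly the
# level curves max(x1, x2) = +/- eps described above.
```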
Interestingly, h learns a boundary that is somewhat different from the boundary originally used to derive the data; however, both boundaries can be thought of as “equivalent” in the sense that they produce the same margin, reflecting that the optimal boundary for this dataset is not unique.

Discussion. In our example, we needed 100 times more neurons than were necessary to construct an optimal decision boundary in order to tightly certify the boundary with the Lipschitz constant. While it is difficult to extrapolate from this toy example to a “real world” scenario, our results suggest that smoothing the level curves may require significant overhead beyond the capacity necessary to produce a truly robust boundary.

Another aspect of this experiment worth noting is that when the network had insufficient capacity to learn an optimally robust, tightly certified boundary (e.g., in Figures 1c and 1d), the resulting model tended to compromise by making the corner less sharp (compared to the desired 90° angle). Geometrically, when the boundary has an inflection with a wider angle, the difference between the certifiable frontier and the frontier of robust points is less pronounced (consider, for example, what happens when the inflection approaches 180°). In effect, this means that while under-parameterization of piecewise-linear models may be a problem for robust model performance in practice, this limitation may be (at least in part) manifested as an under-fit model as opposed to one with many robust but non-certifiable points. This is reflected in the empirical results for certifiably trained models in the literature, which typically have lower “clean accuracies” than their standard-trained counterparts. However, we note that these models also exhibit a discrepancy between their certified accuracy and their vulnerability to actual attacks, leaving the possibility that they may also fail to certify some truly robust points.

⁵More properly put, the Lipschitz constant of |f_1 − f_0|—which represents the margin by which the predicted class exceeds the non-predicted class—is 1.

3.2 POTENTIAL DRAWBACKS OF THE CAPACITY ESCAPE HATCH

As we have seen, by adding capacity, we can help overcome the limitations of piecewise linearity by enabling the network to approximate smooth curves around corners in the decision boundary. For universal tight certification, this needs to be done in the neighborhood of all corners on the decision boundary. To the extent that each corner requires independent capacity, hopes for the scalability of such an approach seem slim; although VRA only requires tight certification on the data manifold, meaning that extra capacity should only be needed in places where the decision boundary has sharp inflections near in-distribution points. However, this, too, presents an interesting problem. Namely, the network only has incentive to allocate capacity to round the level curves in the places that are necessary to certify its training set, i.e., where inflections in the decision boundary encroach on training points. Meanwhile, if similar inflections exist near test points not seen during training, the learned network may fail to certify them—even if the boundary is general, and even if it is also robust. In other words, we are faced with not only the challenge of learning a generally robust boundary, but additionally of learning a generally certifiable function.
Indeed, generalization of VRA is empirically observed to be worse than the corresponding “clean accuracy” would indicate—a principle that has been noted in prior work due to its privacy implications (Yeom et al., 2020).

A Proposed Way Forward. Another possibility for addressing the fact that Lipschitz-based certification is PLL is to expand the hypothesis class to enable smooth curves in the decision surface. Ultimately, our analysis shows that Lipschitz-based certification is most effective when the level curves of the network function accurately reflect the ℓ2 distance to the boundary, which requires the possibility of smooth curves. This goal may be best achieved by purpose-built activations, as piecewise linearity stems from the choice of activation function. State-of-the-art Lipschitz-based certifiable training methods have enjoyed increased success in recent years through leveraging MinMax activations (Anil et al., 2019)—or a variant thereof proposed by Singla et al. (2022)—which are piecewise linear. MinMax has a distinct advantage over the more common ReLU activation, due to its gradient-norm-preserving (GNP) property, which Anil et al. demonstrate is key for tight, efficient Lipschitz bounds. While the need for gradient norm preservation remains clear, we posit that some form of smoothness is an additional desirable property, as it would free the hypothesis class from piecewise linearity. We believe the task of designing suitable smooth activation functions for PLL-certified networks is a promising avenue for future work.

4 RELATED WORK

Power and Limitations of Lipschitz-based Certification. Several of the early efforts around robustness certification focused on post hoc certification of networks trained outside of the control of the certifier. This is a fundamentally hard problem, shown to be NP-complete by Katz et al. (2017) and Sinha et al. (2018). While this fundamentally limits the tractability of complete post hoc certification, the limitation is of lesser concern for modern approaches that incorporate certification into the training objective, thus encouraging learning models that better facilitate efficient certification. The specific limitations of Lipschitz-based certification have also been of great interest in the prior literature. Most of these results particularly consider the practical problem of bounding a neural network’s Lipschitz constant. For example, Huster et al. (2018) note that the common method of using the product of the layer-wise operator norms cannot tightly bound the Lipschitz constant of even basic functions in ReLU networks. Anil et al. (2019) study this point further, demonstrating a trade-off between expressive power and efficient Lipschitz bound computation in networks with non-gradient-norm-preserving activation functions. This limitation is handled by using network architectures with gradient-norm-preserving activation functions such as MinMax, and orthonormal linear operators (though the latter need not necessarily be strictly enforced, as it is a learnable objective). Anil et al. conjecture that such networks are universal 1-Lipschitz function approximators, suggesting that learning any Lipschitz function in such a way that the Lipschitz constant can be bounded tightly and efficiently is possible. By contrast, our work points to previously unstudied limitations that are separate from the Lipschitz-constant-bounding problem, and are indeed not mitigated through the use of MinMax activations, which are piecewise linear.
However, we propose that the limitations brought forth in our work may similarly be addressed via novel activation functions. On the flip side, previous work has also touched on the power of Lipschitz-based certification. Leino et al. (2021) showed that certification with the global Lipschitz constant can be as powerful as with the local Lipschitz constant when the model is under the learner’s control. We extend this result in a number of key ways. First, we prove a stronger result that can be stated for all points, rather than for a finite set of points certified via the local Lipschitz constant. Second, we explicitly consider the hypothesis class, demonstrating that smoothness is a necessary condition to achieve this result.

Capacity Requirements for Robust Neural Networks. Understanding the role of capacity in deep neural networks has been a topic of interest in general, particularly due to the demonstrated effectiveness of highly over-parameterized models (Arora et al., 2018; Bubeck & Sellke, 2021; Du et al., 2019; Garg et al., 2022; Zhang et al., 2017). Recent work has also investigated this subject in the particular context of robust models. Bubeck & Sellke (2021) showed that under mild regularity assumptions, learning a highly accurate model with small Lipschitz constant requires significantly more parameters than would be required with no constraint on the Lipschitz constant—where the capacity overhead, in terms of the number of parameters, scales with the dimension. While a controlled Lipschitz constant is central to successful Lipschitz-based certification, our work (e.g., our example in Section 3.1) shows that a Lipschitz interpolation between points of opposite class is not sufficient for certification. As our analysis is focused on certification rather than Lipschitz interpolation, we complement the work of Bubeck & Sellke, showing that even further capacity may be required to appropriately bend the function’s level curves to facilitate Lipschitz-based certification. In addition to the information-theoretic capacity requirements, large numbers of parameters in deep networks may be necessary to facilitate efficient learning (Arora et al., 2018; Du et al., 2019). Recently, Garg et al. (2022) showed that robust learning in particular may require even greater over-parameterization than standard learning. Results such as these are complementary to work such as ours, which focuses on minimal parameterizations.

Randomized Smoothing. Our work has focused on deterministic certification. By contrast, randomized smoothing (Cohen et al., 2019; Lecuyer et al., 2018) has become a popular method that instead provides a statistical guarantee of robustness. Randomized smoothing (RS) essentially modifies the original function by predicting the expected label under Gaussian⁶ noise. These predictions are empirically determined through sampling, with the statistical certificate depending on the unanimity of the sample labels. While RS provides a weaker robustness guarantee, it solidly outperforms deterministic methods in terms of certified accuracy. Interestingly, it seems clear that RS is not PLL, since it naturally smooths piecewise-linear networks, leading to a smooth boundary and certified frontier—this may be one of the keys to its success. This observation gives further support to the notion that state-of-the-art deterministic methods may be held back by piecewise linearity, and may benefit from smooth activation functions.
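For contrast with the deterministic methods analyzed above, the core of randomized smoothing can be sketched in a few lines. This is a simplified illustration of our own, loosely following Cohen et al. (2019); a rigorous implementation would replace the raw Monte-Carlo estimate with a high-confidence lower bound before invoking the certified radius noted in the comment.

```python
import numpy as np

def smoothed_predict(F, x, sigma, n_samples=1000, seed=0):
    """Predict with the smoothed classifier g(x) = argmax_c P[F(x + noise) = c],
    noise ~ N(0, sigma^2 I), estimated by Monte-Carlo sampling."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n_samples,) + np.shape(x))
    labels = np.array([F(x + n) for n in noise])
    counts = np.bincount(labels)
    c = int(np.argmax(counts))
    p_hat = counts[c] / n_samples
    # Cohen et al. (2019): if p is a high-confidence lower bound on
    # P[F(x + noise) = c] with p > 1/2, then g is certifiably robust at x
    # with l2 radius sigma * Phi^{-1}(p). Note that the smoothed boundary is
    # not piecewise linear even when F is -- RS is not PLL.
    return c, p_hat

F = lambda x: int(max(x[0], x[1]) > 0)   # base classifier for the toy boundary
print(smoothed_predict(F, np.array([0.4, 0.4]), sigma=0.25))   # e.g., (1, ~0.97)
```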
5 CONCLUSIONS AND FUTURE DIRECTIONS

Incorporating Lipschitz-based certification into robust training procedures has proven to be the most effective way to achieve high deterministic ℓ2 verified-robust accuracy yet considered in the literature. Due to our Theorem 2, there is reason to believe Lipschitz-based certification has the power to remain as promising as current results suggest. However, we also showed that restricted to the hypothesis class of piecewise-linear networks, as has been the standard regime, Lipschitz-based certification becomes fundamentally limited. For piecewise-linear networks, this means that tight Lipschitz-based certification may require significantly more parameters, which, even if tractable, can complicate certifiably robust generalization (e.g., see Section 3.2). On the other hand, rather than viewing this as a fundamental drawback for Lipschitz-based certification, we propose that purpose-built activations—with the correct smoothness and gradient-norm-preserving properties—are a promising avenue for future work to free the most promising form of efficient deterministic certification from the limitations of piecewise linearity.

⁶Prior work has considered other distributions as well (Yang et al., 2020a).

A PROOFS

A.1 PROOF OF THEOREM 1

Theorem Statement. Any piecewise-linear limited certification procedure is incomplete on the hypothesis class of piecewise-linear networks.

Proof. It suffices to show that there exists a boundary, achievable by a piecewise-linear network, that no PLL certification method can tightly certify. We proceed by producing a piecewise-linear boundary that induces a smooth robust frontier. This is sufficient to prove our theorem, as Δ(C_cert(f, ε)) ≠ Δ(R(F, ε)) =⇒ C_cert(f, ε) ≠ R(F, ε).

Consider the 2-D boundary given by max(x, y) = 0. Clearly, this boundary exists within the class of piecewise-linear functions, as the function f(x, y) = max(x, y) is piecewise linear. Now consider the points in the positive x-y quadrant. The points in this quadrant that are at distance ε from the boundary are given by √(x² + y²) = ε, which is not piecewise linear. By definition, any certification method that is PLL must have a certified frontier that is piecewise linear. Thus, the certified frontier of any such method cannot be equal to √(x² + y²) = ε in this quadrant.

A.2 PROOF OF THEOREM 2

Theorem Statement. When the hypothesis class, 𝓕, is given as the set of Lipschitz functions, Lipschitz-based certification is complete on 𝓕.

Proof. Let 𝓕 be the set of Lipschitz functions. Consider the decision boundary of any function f ∈ 𝓕. Define f′ as follows: let d(x) be the minimum distance of x from the decision boundary, and let f′(x) = d(x) · 1_{F(x)}, where 1_{F(x)} is the one-hot encoding of F(x).

First, observe that f′_j − f′_i is 1-Lipschitz for all i ≠ j. To see this, consider the following. The Lipschitz constant is given by

sup_{x,x′} |(f′_j(x) − f′_i(x)) − (f′_j(x′) − f′_i(x′))| / ||x − x′|| = sup_{x,x′} |f′_j(x) − f′_j(x′) + f′_i(x′) − f′_i(x)| / ||x − x′||    (3)

Consider points x and x′, and let us assume that ||x − x′|| = δ. We would like to bound the quantity given by (4), the numerator in (3), by δ:

|f′_j(x) − f′_j(x′) + f′_i(x′) − f′_i(x)|    (4)

There are a few cases to consider. First, if F(x) and F(x′) are both different from i and j, then (4) is 0 ≤ δ. Since (4) is symmetric in both i and j, and x and x′, without loss of generality, we will assume F(x) = j.
This leaves two cases: when F(x′) = j, and when F(x′) ≠ j (in the latter case we will not be concerned with whether or not F(x′) = i). In the first case we have

(4) = |f′_j(x) − f′_j(x′)| = |d(x) − d(x′)|    (5)
    = d(x) − d(x′)  without loss of generality    (6)

Let a be the nearest point on the boundary to x′, such that d(x′) = ||x′ − a||. Thus,

d(x) ≤ ||x − a||  as a is on the boundary    (7)
     ≤ ||x − x′|| + ||x′ − a||  by the triangle inequality    (8)
     = δ + d(x′)    (9)
=⇒ d(x) − d(x′) ≤ δ  as desired    (10)

In the second case, x and x′ are given different labels and we have

(4) = |f′_j(x) + f′_i(x′)|    (11)
    ≤ d(x) + d(x′)  as f′_i(x′) is at most d(x′) (achieved when F(x′) = i)    (12)

Since x and x′ are given different labels, there must be at least one part of the decision boundary that bisects the line segment connecting x and x′; let a be this intersection point. Additionally, since a is on the boundary, we must have that d(x) ≤ ||x − a|| and d(x′) ≤ ||x′ − a||. Thus, as desired,

d(x) + d(x′) ≤ ||x − a|| + ||x′ − a|| = δ    (13)

This allows us to conclude that f′_j − f′_i is 1-Lipschitz for all i ≠ j, as claimed.

The points that are certified by Lipschitz-based certification are those for which (14) holds, where j = F(x) and K_{ji} is the Lipschitz constant of f′_j − f′_i:

min_{i≠j} { f′_j(x) − f′_i(x) − ε·K_{ji} } ≥ 0    (14)

Notice that when i ≠ F(x), f′_i(x) = 0. Thus, noting also that K_{ji} = 1 ∀i, j, (14) can be simplified to f′_j(x) = d(x) ≥ ε. Therefore, the points that can be certified via Lipschitz-based certification are those for which d(x) ≥ ε, which are precisely the points that are ε-locally robust.

A.3 PROOF OF PROPOSITION 3

Theorem Statement. Lipschitz-based certification is piecewise-linear limited.

Proof. Assume the function, f, being certified is piecewise linear. Without loss of generality, consider inputs x for which the network predicts class j. The margin by which class j surpasses all other classes is given by m(x) = min_{i≠j} {f_j(x) − f_i(x)}. Note that m is piecewise linear, as f is piecewise linear. Let K be the Lipschitz constant of m. The largest radius that can be certified at x is then m(x)/K. Thus, the certified frontier is given by m(x)/K = ε; this corresponds to the level curve of m at m(x) = ε·K. Since m is piecewise linear, this level curve is piecewise linear. Thus, the certified frontier is piecewise linear, and Lipschitz-based certification is PLL.

B LIMITATIONS OF OTHER CERTIFICATION METHODS

B.1 LIMITATIONS OF LOCAL-LIPSCHITZ-BASED CERTIFICATION

State-of-the-art deterministic ℓ2 certified performance is currently achieved using Lipschitz-based certification, which outperforms other types of certified training methods (Leino et al., 2021; Trockman & Kolter, 2021) such as those based on convex relaxations—e.g., (Wong et al., 2018)—or maximizing linear regions—e.g., (Croce et al., 2019; Xiao et al., 2019). Unsurprisingly, however, methods that use the local Lipschitz constant for certification can achieve similarly high VRA (Huang et al., 2021), though this comes at the cost of significantly slower certification. The local Lipschitz constant at a point x is given by K_ε(x) in Definition 5, which essentially corresponds to the maximum slope of the function within an ε-neighborhood of x.

Definition 5. The local Lipschitz constant is given by
K_ε(x) = sup_{x1,x2 : ||x−x1|| ≤ ε, ||x−x2|| ≤ ε} { |f(x1) − f(x2)| / ||x1 − x2|| }
Local-Lipschitz-based certification, similar to Lipschitz-based certification (Section 2.2), certifies points, x, when the margin by which the top-predicted class exceeds all other classes is greater than ε·K_ε(x). While the local Lipschitz constant is always a lower bound for the global Lipschitz constant—and therefore local-Lipschitz-based certification can possibly be tighter—local-Lipschitz-based certification is nonetheless equally limited.

We will consider a generous setting in which the bound used for certification is exact, i.e., where the certification procedure has oracle access to K_ε(x). Because K_ε(x) is not piecewise linear, local-Lipschitz-based certification is not strictly piecewise-linear limited (PLL) in this setting. It is worth noting, however, that methods for approximating the local Lipschitz constant may not leverage this smoothness in practice. Regardless, we show that local-Lipschitz-based certification is incomplete on piecewise-linear networks (Theorem 5). This result is related to the fact that when the learner is given control over the implementation of the boundary, (global) Lipschitz-based certification can match the power of local-Lipschitz-based certification; this result has been proven in a slightly weaker formulation by Leino et al. (2021). We provide an alternative theorem statement and proof here that better aligns with the insights in this work.

Theorem 5. Local-Lipschitz-based certification is not complete on the hypothesis class of piecewise-linear networks.

Proof. It suffices to show that there exists a boundary achievable by a piecewise-linear network for which no corresponding piecewise-linear implementation can be tightly certified by local-Lipschitz-based certification. Recall that by Corollary 4 there exists such a boundary for (global) Lipschitz-based certification. We will consider one of the same such boundaries.

For a particular value of ε, consider the points Δ(R(F, ε)), which are at distance exactly ε from the boundary. There are two cases to consider: either (1) the local Lipschitz constant is always the same everywhere, i.e., ∀ε > 0 . ∀x1, x2 ∈ Δ(R(F, ε)) . K_ε(x1) = K_ε(x2), or (2) there is some variation in the local Lipschitz constant, such that ∃ε > 0 and x1, x2 ∈ Δ(R(F, ε)) where K_ε(x1) ≠ K_ε(x2).

In the first case, we see that K_ε(x) = K (the global Lipschitz constant), meaning that local-Lipschitz-based certification will certify the exact same points as (global) Lipschitz-based certification. Thus, by Corollary 4, there must be a point which is robust at radius ε but not certifiable.

In the second case, without loss of generality, assume K_ε(x1) > K_ε(x2). Because f is piecewise linear, it is composed of a finite number of linear functions, which in turn have a finite number of distinct slopes (gradient norms). Thus, if K_ε(x1) > K_ε(x2), then K_ε(x1) − K_ε(x2) = δ, where δ belongs to some finite set of strictly positive values. Furthermore, without loss of generality, x1 and x2 can be chosen to be arbitrarily close together, i.e., they lie arbitrarily near a point where the local Lipschitz constant changes. We will therefore consider x1 and x2 chosen according to Equation 15:

||x1 − x2|| < ε·δ / K    (15)

Let m2 be the margin by which the top-predicted class, F(x2), exceeds all other classes. The maximum radius that can be certified at x2 is thus m2/K_ε(x2). Note that as certification is sound, we have

m2 / K_ε(x2) ≤ ε    (16)

Now consider the maximum radius that can be certified at x1.
Let m1 be the margin by which the top-predicted class, F(x1), exceeds all other classes. The maximum radius that can be certified at x1 is thus m1/K_ε(x1):

m1 / K_ε(x1) = m1 / (K_ε(x2) + δ)  by assumption    (17)
≤ (m2 + K·||x1 − x2||) / (K_ε(x2) + δ)  by definition of the Lipschitz constant    (18)
< (m2 + ε·δ) / (K_ε(x2) + δ)  by our choice of ||x1 − x2|| in (15)    (19)
≤ (ε·K_ε(x2) + ε·δ) / (K_ε(x2) + δ)  by (16)    (20)
= ε    (21)

Thus, we see that x1 cannot be certified with radius ε, despite the fact that its distance from the boundary is exactly ε.

B.2 OTHER PIECEWISE-LINEAR LIMITED METHODS

Our work focuses primarily on Lipschitz-based certification, which we demonstrate is fundamentally limited on the hypothesis class of piecewise-linear networks. However, this limitation is not due specifically to the use of the Lipschitz constant per se; instead, we attribute it more generally to the fact that Lipschitz-based certification always produces a piecewise-linear certified frontier on piecewise-linear networks, a property we refer to as PLL (Definition 4). In this section we briefly discuss how this property may apply to other flavors of certification techniques that have been proposed in the literature.

Convex Relaxations and Dual Networks. One classic approach for certification is through convex relaxation. A survey of such methods is given by Salman et al. (2019), who point out the limitations (regarding tight certification) of convex relaxations (though the authors do not consider our setting, where the learner may control the implementation of the boundary, but rather focus on post hoc certification). Though many approaches in this family have been proposed, we will consider two baseline methods that capture a primal and dual formulation of convex relaxations: Fast-Lin (Weng et al., 2018), and an approach proposed by Wong & Kolter (2018), often referred to as “KW.”

Fast-Lin directly derives upper and lower bounds on the output of a ReLU network in order to determine if an adversarial example might exist. This is done by iteratively computing upper and lower bounds for the neurons in each layer and using them to replace the ReLU activations with linear upper and lower bounds. This computation resembles a piecewise-linear network, suggesting that Fast-Lin is PLL.

The KW approach formulates the adversary as an LP that optimizes over the convex outer approximation of the set of top-level activations reachable through a norm-bounded perturbation. Crucially, for the sake of tractability, the LP can be bounded by the feasible set of the dual, which Wong & Kolter show can be expressed as a dual network, which resembles a backwards pass in the network being certified. For ReLU networks, the activations in the dual network are replaced with their upper convex envelopes (a linear function) over the bounded set [ℓ, u], where ℓ and u represent lower and upper bounds on the pre-ReLU neural activations. The upper and lower bounds can be iteratively computed in a similar way as in Fast-Lin; thus, in its simplest form,⁷ the dual network inherits the piecewise linearity of the original ReLU network being certified, suggesting the resulting certified frontier is piecewise linear, and certification is PLL.

Hyperplane Projections. As exact certification is NP-complete, the literature has often turned to training procedures that help simple, approximate certification enjoy greater success.
In piecewise-linear networks, the input space can be partitioned into a polyhedral complex where each convex region corresponds to a single activation pattern, over which the network is linear (Croce et al., 2019; Fromherz et al., 2021; Jordan et al., 2019). Motivated by this view of ReLU networks, one family of robust training approaches attempts to expand the linear regions of the network to simplify the combinatorial analysis of the possible ReLU activation patterns (Croce et al., 2019; Xiao et al., 2019). Croce et al. proposed a simple certification technique for networks trained with their “Maximum Margin Regularization” (MMR), where a point, x, is certified only if (1) the entire ε-ball around x is contained in a single convex activation region, and (2) the linear function corresponding to the region does not have a boundary within ε of x. This approach is clearly PLL, as the certified regions can be obtained by shrinking each activation region (possibly split in two if a linear decision boundary crosses it) by ε. Since the original regions are convex polytopes, so too are the certified regions; thus the certified frontier is piecewise linear. In contrast to our findings for Lipschitz-based certification, it is worth noting that the limitations of this approach go beyond PLL, as completeness of the MMR approach is in direct conflict with non-linearity; moreover, the approach is designed specifically for piecewise-linear networks.

⁷This approach has been refined in subsequent work that we do not consider here (Wong et al., 2018).

C DETAILS ON EXPERIMENTS

The experiments presented in Figure 1 in Section 3 were performed using the gloro Python library, which implements the GloRo Net method of Leino et al. (2021) for training certifiably robust models by incorporating Lipschitz-based certification into training. All networks in the experiments were one-hidden-layer dense networks with MinMax (Anil et al., 2019) activations; three specific architectures were used, with 2, 20, and 200 hidden units, respectively. Models were trained for 64 epochs, with a batch size of 128. We chose hyperparameters inspired by those used by Leino et al. (see the original paper for details on the meaning of the various hyperparameters); namely, we used GloRo-TRADES loss with λ = 1.2, we scaled ε logarithmically to its ultimate value of 0.5 by the half-way point of training, and we linearly decreased the learning rate from 10⁻³ to 0 half-way through training.

D AN ILLUSTRATIVE EXAMPLE OF THE CORNER PROBLEM

For illustrative purposes, a diagram is provided in Figure 2 that serves as a visual explanation of the “corner problem” described in Section 2.3. The boundary of a neural network, shown by the bold black line, forms a sharp corner. The complement of the robust region, i.e., the set of points that are not robust, is shown in gray. A simple implementation of this boundary has level curves that make similar sharp corners; the level curve corresponding to the certified frontier is shown by the dotted line, and the certified region is colored in blue. The region opposite the corner in the boundary is highlighted. We see that in this region, there is a set of points, shown in orange, that are not certified, despite the fact that they are robust, being at distance greater than ε from the boundary.
In this two-dimensional example, these falsely flagged points make up a relatively small fraction of the uncertified points opposite the corner (represented as the union of the orange points and the highlighted gray points in the diagram); however, in high dimensions, virtually all uncertified points in this region would be falsely flagged, as indicated by Equation 1.
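To put numbers to this, the following short computation (our own check, not from the paper) evaluates Equation 1, the fraction of the width-ε hypercube opposite a d-dimensional corner that is truly non-robust; everything else in the hypercube is falsely flagged, by up to a factor of √d in distance.

```python
import math

def truly_nonrobust_fraction(d):
    """Equation 1: volume of the eps-ball portion inside the hypercube opposite
    a d-dimensional corner, divided by the hypercube volume (eps cancels)."""
    return math.pi ** (d / 2) / (math.gamma(d / 2 + 1) * 2 ** d)

for d in [2, 5, 10, 50, 100]:
    print(d, truly_nonrobust_fraction(d), math.sqrt(d))
# The fraction vanishes rapidly (e.g., ~0.785 for d=2 but ~2.5e-3 for d=10),
# while the worst-case over-distance factor sqrt(d) grows.
```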
1. What is the main contribution of the paper regarding Lipschitz-based certification for piecewise-linear hypothesis classes?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to capture the true robust frontier of data?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the assumptions and limitations of the paper's analysis?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper attempts to understand the fundamental limitations of Lipschitz-based certification for piecewise-linear hypothesis classes. The authors show that piecewise-linear certification methods cannot sufficiently capture a true robust frontier of data that is curved, and hence might need additional parameters. I commend the authors for proposing a unique viewpoint for understanding the gap between existing robustness certificates and actual robust accuracy.

Strengths And Weaknesses
On a broad scope, my take-away from the results of the paper feels very different from the authors' inference, and I am eager to hear the authors' opinion on this and ready to change my mind on the following observations.

In Theorem 2, why is the hypothesis class \cal{F} restricted to be the set of Lipschitz functions? Even for arbitrary label classifiers f, one can construct f' based on d(x), the distance to the boundary. Additionally, if one has access to d(x), then there is no certification to speak of? The label classifier f is epsilon-locally robust at x for all epsilon <= d(x) by definition...? So this result does not meaningfully show that Lipschitz-based certification is a good approach. In this ideal case, there is no need for certification. Additionally, even when "the learner is allowed flexibility over the precise network function," shouldn't f' also be a neural network of the same architecture? How is the constructed f' a valid instance?

The fundamental issue appears to be a mismatch between level curves of a hypothesis and the shape of the data manifold. This issue is independent of Lipschitz-based certification, right? The authors show in Figure 1 that if the data is curved but the function class is piecewise linear, then one can find pockets that escape certification analysis. However, if one uses this information to switch the function class to be smooth, couldn't we have the reverse issue, i.e., when the data is piecewise linear opposite the boundary induced by the function, which is now curved? Further, at different parts of the image manifold the local shape can vary in regularity, and thus any structured hypothesis might necessarily need more parameters to learn a good boundary. If this thought is sound, then the take-away is that any certification is only as good as how well the regularity of the hypothesis matches the shape of the data manifold across the distribution.

Minor Comments/Feedback
The analysis implicitly assumes that the unit ball induced by the norm is a curved surface (which can reduce to piecewise linear when the norm is, e.g., \ell_1), so I request that the authors explicitly mention this upfront.
I would appreciate it if the authors removed overloaded notation and had a single use of F, f, and \cal{F}, and of the term "network function".
On page 3, "Note that two different neural networks f, f' may lead to the same predictions everywhere": are there examples of such networks f, f' where the weights aren't equivalent modulo layer-wise scaling? Also, in this statement, can f, f' be different architectures?
In Def 3, it should be \forall delta>0, \neg cert(f, x, \epsilon + delta).
Can the authors provide an example of when the robust frontier of a network is not piecewise linear? Is this only limited to situations where the data is supported on a curved (or non-piecewise-linear) surface?
In Appendix A.1, proof of Theorem 1, should it be \sqrt{x^2+y^2} = \epsilon?
In Appendix A.2, proof of Theorem 2, should it be "d(x) be the minimum distance of x from ..."?
I request that the authors add an explicit proof of Proposition 3, even if it is obvious.

Clarity, Quality, Novelty And Reproducibility
The writing is imprecise in parts, and I hope the authors make edits to address the minor comments. The ideas outlined are novel and interesting. The authors have provided code to reproduce their experiments.
ICLR
Title
Limitations of Piecewise Linearity for Efficient Robustness Certification

Abstract
Certified defenses against small-norm adversarial examples have received growing attention in recent years, though certified accuracies of state-of-the-art methods remain far below those of their non-robust counterparts, despite the fact that benchmark datasets have been shown to be well-separated at far larger radii than the literature generally attempts to certify. In this work, we offer insights that identify potential factors in this performance gap. Specifically, our analysis reveals that piecewise linearity imposes fundamental limitations on the tightness of leading certification techniques. These limitations are felt in practical terms as a greater need for capacity in models that one hopes to certify efficiently. Moreover, this is in addition to the capacity necessary to learn a robust boundary, studied in prior work. However, we argue that addressing the limitations of piecewise linearity through scaling up model capacity may give rise to potential difficulties—particularly regarding robust generalization—therefore, we conclude by suggesting that developing smooth activation functions may be the way forward for advancing the performance of certified neural networks.

1 INTRODUCTION

Since the discovery of adversarial examples (Szegedy et al., 2014), defenses against malicious input perturbations to deep learning systems have received notable attention. While many early-proposed defenses—such as adversarial training (Madry et al., 2018)—are heuristic in nature, a growing body of work seeking provable defenses has arisen (Cohen et al., 2019; Croce et al., 2019; Fromherz et al., 2021; Huang et al., 2021; Jordan et al., 2019; Lee et al., 2020; Leino & Fredrikson, 2021; Leino et al., 2021; Li et al., 2019; Singla et al., 2022; Trockman & Kolter, 2021; Wong et al., 2018; Zhang et al., 2018). Generally, such defenses attempt to provide a certificate of local robustness (given formally in Definition 1), which guarantees a network’s prediction on a given point is stable under small perturbations (typically in Euclidean or sometimes ℓ∞ space); this precludes the possibility of small-norm adversarial examples on certified points. The success of a certified defense is typically measured empirically using verified robust accuracy (VRA), which reflects the fraction of points that are both (i) classified correctly and (ii) certified as locally robust.

Despite the fact that perfect robust classification (i.e., 100% VRA) is known to be possible on standard datasets at the adversarial perturbation budgets used in the literature (Yang et al., 2020b), this possibility is far from realized in the current state of the art. For example, on the benchmark dataset CIFAR-10, state-of-the-art methods offering deterministic guarantees of ℓ2 robustness¹ have remained at approximately 60% VRA (Huang et al., 2021; Leino et al., 2021; Singla et al., 2022; Trockman & Kolter, 2021), while non-robust models handily eclipse 95% accuracy. It is difficult to precisely account for this discrepancy, though, among other reasons, state-of-the-art methods typically use loose bounds to perform certification—as exact certification is (for general ReLU networks) NP-complete (Katz et al., 2017; Sinha et al., 2018)—which conceivably leads to falsely flagging truly robust points or to over-regularization of the learned model.
While conservative approximations may be necessary to perform efficient certification (and to facilitate efficient robust training), it is certainly possible that they foil reasonable hopes for “optimality.” In this work, we offer further insight into the shortcomings of modern certification techniques by analyzing their limitations in the context of the architectural settings in which they are conventionally employed. In particular, we find that piecewise linearity—a practically ubiquitous property of neural networks considered in the certification literature (e.g., standard ReLU and the more recently popularized “MinMax” (Anil et al., 2019) activations are both piecewise linear)—fundamentally limits the power of Lipschitz-based ℓ2 local robustness certification. In effect, we argue, this means that extra capacity is needed simply for facilitating efficient certification—in addition to whatever capacity may be required for learning a robust boundary (e.g., as examined by Bubeck & Sellke (2021)). On the other hand, perhaps surprisingly, we prove that free from the constraint of piecewise linearity, Lipschitz-based certification is powerful enough to perform complete certification on any decision boundary, provided the implementation of the function giving rise to the boundary is under the learner’s control (indeed, this is consistent with the fact that the highest performing certified defenses incorporate Lipschitz-based certification into training). These latter findings suggest that continued progress towards improving state-of-the-art VRA may be enabled through carefully chosen smooth activation functions,² which do not inherently limit the power of what are currently the most promising forms of efficient local robustness certification.

¹In this work we primarily consider certified defenses that provide a deterministic guarantee of local robustness, as opposed to a statistical guarantee. For further discussion of this point, see Section 4.

In summary, the primary contributions of this work are as follows: (1) we show that piecewise linearity imposes inherent limitations on the tightness of efficient robustness certification—our primary focus is Lipschitz-based certification, but we discuss similar limitations of other methods in Appendix B; (2) we prove that Lipschitz-based certification is fundamentally powerful for tight robustness certification, provided (i) the robust learning procedure has power over the implementation of the classifier, and (ii) the hypothesis class is not limited to piecewise-linear networks; and (3) we demonstrate that tight Lipschitz-based certification may require significant capacity overhead in piecewise-linear networks. These findings offer a new perspective on the sticking points of modern certified training methods, and suggest possible paths forward.

We begin in Section 2 by introducing the limitations piecewise linearity imposes on robustness certification, starting generally, and narrowing our focus specifically to Lipschitz-based certification. We then discuss the role that capacity plays in mitigating these limitations in Section 3, which concludes with a discussion of the implications of our findings, both retrospectively and prescriptively. Finally, we discuss related work in Section 4, and offer our concluding remarks in Section 5.
2 LIMITATIONS OF PIECEWISE LINEARITY

The main insights in this work stem from the simple, yet crucial observation that the points lying at a fixed Euclidean distance from a piecewise-linear decision boundary, in general, do not themselves comprise a piecewise-linear surface. Therefore, in order for a certification procedure to precisely recover the set of robust points—those which lie a distance of at least ε from the decision boundary—it must be capable of producing a boundary between robust and non-robust points that is not piecewise-linear, even on networks that are. However, as we will see, Lipschitz-based certification, for example, is in fact constrained to produce a piecewise-linear “certified frontier” on piecewise-linear networks, as the set of just-certifiable points essentially corresponds to a level curve in the output of the network being certified. On the other hand, if the level curves of the function being certified correspond (up to some constant factor) to their distance from the decision boundary (and must therefore include smooth curves), Lipschitz-based certification identifies precisely the points that are truly ε-locally robust, provided a tight bound on the Lipschitz constant. As we will make clear, this has important implications regarding the power of Lipschitz-based certification in properly suited network architectures.

In the remainder of this section, we formalize this intuition and discuss some of its implications. Section 2.1 introduces our main theorem regarding the limitations imposed by piecewise linearity, along with the necessary background and definitions. Section 2.2 narrows the focus to Lipschitz-based certification, showing that despite being powerful in general, it is fundamentally limited within the hypothesis class of piecewise-linear networks. Finally, Section 2.3 presents a thought experiment that provides basic intuition about the possible scale of the problems caused by these limitations.

²Or at least, activation functions which enable learning curved (as opposed to piecewise linear) functions.

2.1 FUNDAMENTAL LIMITATIONS TO CERTIFICATION COMPLETENESS

For our purposes, we will consider a neural network to be a function f : R^n → R^m mapping n-dimensional inputs to logit values corresponding to m different classes. From the network function f, we derive a neural classifier, F : R^n → [m], by letting F(x) = argmax_{i∈[m]} f_i(x). When it is clear from the context which we are referring to, we will use the term “neural network” for both the network function f and its corresponding classifier F. Note that two different neural network functions, f and f′, may lead to the same predictions everywhere, i.e., ∀x . F(x) = F′(x). When this happens, we say that f and f′ share the same decision boundary, where the decision boundary is simply the set of points where f_i(x) = f_j(x) for some i ≠ j ∈ [m].

In this work, we consider the problem of local robustness certification. As in prior work, we define local robustness as a property of a point x and classifier F, parameterized by a perturbation budget, or robustness radius, ε, as in Definition 1.

Definition 1 (ε-Local Robustness). A classifier F : R^n → [m] is ε-locally robust at point x ∈ R^n, with respect to norm || · ||, if ∀x′ ∈ R^n . ||x − x′|| ≤ ε =⇒ F(x) = F(x′).

A certification procedure, cert, is a function that takes a neural network, f, a point, x, and a perturbation budget, ε, and produces a label in {0, 1}, where an output of 1 means that f is certified as ε-locally robust at x.
A valid certification procedure must be sound, i.e., cert(f, x, ε) = 1 =⇒ F is ε-locally robust at x; however, it need not be complete, i.e., it may be the case that cert(f, x, ε) = 0 and yet F is in fact ε-locally robust at x.

For a given certification procedure, let the certified regions of f, C_cert(f, ε) = {x : cert(f, x, ε)}, be the set of points that can be positively certified by cert. Similarly, let the robust regions of f be given by the set R(F, ε) = {x : F is ε-locally robust at x} of ε-locally robust points (note that, in contrast to C_cert, R does not depend on the implementation of f, only on its classification outputs, given by F). Soundness entails that ∀f . C_cert(f, ε) ⊆ R(F, ε), but clearly it is desirable for C_cert(f, ε) to match R(F, ε) as tightly as possible; when this is achieved perfectly we can consider cert to be “complete.” However, as C_cert(f, ε) can depend on the underlying function, f, which has a surjective mapping to classifiers, F, derived from the same hypothesis class, we must be careful in defining completeness precisely.

Let 𝓕 be a hypothesis class—a family of functions of type R^n → R^m, e.g., that are captured by some neural network architecture. We will also use the slight abuse of notation, F ∈ 𝓕, to denote any F : R^n → [m] such that there exists a function f′ ∈ 𝓕 which produces the same labels as F on all inputs, i.e., ∀x . F(x) = argmax_{i∈[m]} f′_i(x). We say that a certification procedure, cert, is complete on 𝓕 if all possible decision boundaries achievable by functions in the hypothesis class have at least one implementation in 𝓕 for which cert perfectly recovers the true robust regions. This is stated formally in Definition 2.

Definition 2. A certification procedure, cert, is complete on hypothesis class, 𝓕, if for ε > 0,
∀F ∈ 𝓕 . ∃f′ ∈ 𝓕 . (∀x . F(x) = argmax_{i∈[m]} f′_i(x)) ∧ (C_cert(f′, ε) = R(F, ε)).

Essentially, completeness over a hypothesis class entails a notion of compatibility between the certification procedure and the hypothesis class; specifically, it means that for any decision boundary expressible by the hypothesis class, it is possible for a learning procedure to produce a model that implements the decision boundary in a way that makes the certification procedure complete. Definition 2 provides a key relaxation from a stricter notion of completeness that would require C_cert(f, ε) = R(F, ε) for all f, as this would not be achievable by any polynomial certification procedure (assuming P ≠ NP) (Katz et al., 2017; Sinha et al., 2018). By requiring tight certification only modulo the decision boundary, we avoid this limitation, splitting the responsibility for completeness between the certification procedure, the learning algorithm, and the hypothesis class.

Next, we will also find it useful to define the certified frontier of F under cert (Definition 3); essentially, the set of points that are just barely certified, which lie at the frontier of the certified regions. We will similarly define the robust frontier as the set of points that are just barely ε-locally robust, which lie at the frontier of the robust regions.

Definition 3 (Certified Frontier). The certified frontier of a neural network, F : R^n → [m], under certifier, cert, at perturbation budget, ε, is the set of points
Δ(C_cert(f, ε)) = { x : cert(f, x, ε) ∧ (∀δ > 0 . ¬cert(f, x, ε + δ)) }.

We now turn to the specifics of one of our main results, namely, that piecewise linearity is a limiting factor for tight certification.
Of course, as alluded to earlier, some certification procedures do achieve complete certification on piecewise-linear networks—e.g., (Jordan et al., 2019; Tjeng et al., 2019)—however, such methods are invariably exponential. Thus, we characterize the set of piecewise-linear limited (PLL) methods in Definition 4. Intuitively, a certification procedure is PLL if it is constrained to produce piecewise-linear certified frontiers on piecewise-linear models. Definition 4 (Piecewise-linear Limited Certification). A certification procedure, cert, is piecewise-linear limited (PLL) if
∀f . f is piecewise-linear =⇒ ∆(Ccert(f, ε)) is piecewise-linear.
Note that the robust frontier of a network F is, in general, not piecewise linear, even if F (and thus its decision boundary) is piecewise linear. Thus, if the certified frontier of cert is piecewise linear, cert cannot be complete, i.e., Ccert(f, ε) ≠ R(F, ε). Moreover, this means that any piecewise-linear limited certification procedure cannot be complete on the hypothesis class of piecewise-linear networks (Theorem 1). The proof of Theorem 1 is given formally in Appendix A.1. Theorem 1. Any piecewise-linear limited certification procedure is incomplete on the hypothesis class of piecewise-linear networks. The proof of Theorem 1 relies on the fact that a piecewise-linear function cannot be equal to a function exhibiting smooth curves. However, it is known that neural networks, provided with enough capacity, can approximate any function with arbitrary precision (Hornik, 1991). We address this point in Section 3, where we discuss the implications of Theorem 1 regarding the capacity requirements of tightly certifiable networks. 2.2 THE POWER AND LIMITATIONS OF LIPSCHITZ-BASED CERTIFICATION We will now narrow our focus to consider the specific family of Lipschitz-based certification methods. Such methods perform certification by using an upper bound, K, on the network's Lipschitz constant; essentially, a point is certified if the margin by which the top-predicted class exceeds all other classes is greater than εK. In our work, we will set aside the details around how the Lipschitz bound is obtained, though this is also a source of potential looseness in the general approach. That is, we will (optimistically) take for granted that a tight bound is obtained in our analysis. Lipschitz-based certification has proven effective in the literature, achieving state-of-the-art performance—when paired with an appropriate training routine—despite its simplicity (Leino et al., 2021; Trockman & Kolter, 2021). Lipschitz-based certification is advantageous in many ways; in addition to being easy to incorporate into a robust learning objective, it enables zero-cost certification at run time, as the Lipschitz constant does not need to be recomputed after training. On the other hand, it would seem that Lipschitz-based certification is fundamentally underpowered—the "global" Lipschitz constant is a conservative estimate of the local Lipschitz constant, which in turn gives a conservative estimate of how much the network output can change within a given neighborhood. If a primary sticking point for advancing certified accuracy is loose certification, it is fair to ask how promising Lipschitz-based certification will continue to be. The philosophy behind incorporating Lipschitz-based certification into training is essentially that the potential shortcomings of Lipschitz-based certification can be addressed by learning an easily certifiable network function.
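To fix ideas, the certification rule just described can be stated in a few lines. The following is an illustrative sketch assuming a precomputed upper bound K (how K is bounded is set aside above); it is not the implementation of any particular library:

import numpy as np

def lipschitz_certify(logits, K, eps):
    # logits: length-m array f(x); K: upper bound on the Lipschitz
    # constant of every margin f_j - f_i (a per-pair bound K_ji may be
    # substituted for tightness, as in Equation 14 of Appendix A.2).
    # Certify iff the predicted class's margin exceeds eps * K.
    j = int(np.argmax(logits))
    margin = logits[j] - np.max(np.delete(logits, j))
    return margin > eps * K  # sound, but in general incomplete

Certification here costs a single forward pass plus the precomputed K; all of the difficulty is hidden in how tight K is and in the shape of f's level curves.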
We show that this intuition is essentially correct. Perhaps surprisingly, we show that Lipschitz-based certification is sufficiently powerful to be complete on the hypothesis class of Lipschitz functions.4 However, we also show that Lipschitz-based certification is PLL, meaning this potential cannot be achieved with a hypothesis class constrained by piecewise linearity. 4I.e., with bounded Lipschitz constant. Note that this is not a meaningful constraint for neural networks, as any neural network with Lipschitz activation functions and finite weights is Lipschitz in this sense. 2.2.1 LIPSCHITZ-BASED CERTIFICATION IS POWERFUL We begin by showing that for any boundary achievable by a Lipschitz network function, when the learner is given control over the precise network function implementing the boundary, it is always possible to find an implementation that can be tightly certified using Lipschitz-based certification. This is stated formally in Theorem 2. Theorem 2 further entails that there exists a network function for any 2ε-separated data that achieves perfect VRA under Lipschitz-based certification. The proof of Theorem 2 is given in Appendix A.2. Theorem 2. When the hypothesis class, F, is given as the set of Lipschitz functions, Lipschitz-based certification is complete on F. 2.2.2 LIPSCHITZ-BASED CERTIFICATION IS LIMITED BY PIECEWISE-LINEARITY Despite the power of Lipschitz-based certification for general functions, when restricted to the hypothesis class of piecewise-linear networks, it becomes fundamentally limited. That is, formally, Lipschitz-based certification is PLL (Proposition 3). Proposition 3. Lipschitz-based certification is piecewise-linear limited. Proposition 3 follows essentially because the certified frontier of Lipschitz-based certification corresponds to a particular level curve of the network function, which is piecewise linear whenever the function is. As a direct consequence of Proposition 3 and Theorem 1, we arrive at Corollary 4. Corollary 4. Lipschitz-based certification is not complete on the hypothesis class of piecewise-linear networks. Note that taken in the context of Theorem 2, Corollary 4 means that in a sense, the fundamental limitation of Lipschitz-based certification is not intrinsic to its simplicity (e.g., because the local Lipschitz constant might be tighter than the global constant on some functions), but rather, it is related to the hypothesis class of networks being certified. Put differently, piecewise linearity imposes real limitations on Lipschitz-based certification that cannot be attributed to practical, but non-fundamental, issues, such as efficient computation of Lipschitz bounds, etc. 2.3 THE PROBLEM WITH CORNERS AND THE CURSE OF DIMENSIONALITY The incongruence between the piecewise-linear certified frontier of Lipschitz-based methods and the robust frontier of a piecewise-linear boundary, which features smooth curves, becomes relevant when the boundary comes to a "corner," or relatively sharp inflection point. At corners, the robust frontier curves at a fixed radius of ε around the corner, while the certified frontier, absent aid from additional capacity (see Section 3), runs parallel to the facets forming the corner, offset by a fixed amount (see Figure 2 in Appendix D for an illustration). The sharper the corner, the larger the difference will be between the corresponding robust and certified regions.
Additionally, we will see that this effect is also more pronounced the higher the dimension of the corner, i.e., the more independent half-spaces meet to create the corner. As a thought experiment, we will model a d-dimensional corner as the intersection of d orthogonal half-spaces. Assuming the level curves near the corner run parallel to the half-spaces, h ∈ H, forming the corner, in the best case, the certified region is given by the union of half-spaces obtained by flipping each h ∈ H and shifting it by ε. Consider the hypercube of width ε just opposite the corner. This hypercube lies entirely outside the robust region, meaning all points within it cannot be certified using Lipschitz-based certification. However, only the points intersecting the hypersphere of radius ε centered at the corner are truly non-ε-robust. We can compute the ratio of the volume of the intersecting portion of the hypersphere to that of the hypercube, given by Equation 1:
π^(d/2) / (Γ(d/2 + 1) · 2^d)    (1)
As the dimension increases, this ratio tends to zero, meaning that in high dimensions, almost all points in this region opposite the corner are incorrectly uncertified. Furthermore, the maximum distance from an uncertified point within this region to the boundary is equal to the diagonal of the hypercube, which is given by √d · ε. This means that even points that are significantly more robust than required may yet be uncertified. 3 THE ROLE OF CAPACITY The primary limitation of Lipschitz-based certification in piecewise-linear networks derives from the fact that we cannot have smoothly curved level curves in such networks (or, more generally, that PLL certification methods cannot have smoothly curved certified frontiers in such networks). However, while this is true in the strictest sense, a function with smooth curves can be approximated with arbitrary precision, given sufficient capacity. In other words, increased network capacity may be one possible option to mitigate the fundamental limitations discussed throughout Section 2. In this section, we investigate the capacity requirements necessary for tight PLL certification in piecewise-linear networks. While the precise meaning of "capacity" in a quantifiable sense is a bit nebulous, for our purposes, we will consider capacity in a piecewise-linear network to correspond to the number of piecewise-linear regions. This grows with the number of internal neurons, though the relationship may vary depending on other aspects of the network architecture, e.g., the depth of the network. Previous work has studied the capacity implications for learning a robust decision boundary, finding that separating points while controlling Lipschitzness may require additional capacity beyond what would be necessary to simply separate them (Bubeck & Sellke, 2021). Beyond the capacity required to represent the decision boundary in a robust network, our work asks instead about the capacity required to tightly certify a given boundary. We find that in a piecewise-linear network, even if the boundary is optimal—in that all points in the distribution are indeed a distance of ε or more from it—the network may require additional capacity to be able to prove this using the Lipschitz constant. Setting the data distribution aside, we consider the goal of certifying all points that are sufficiently far from the boundary.
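Before turning to capacity, it is easy to get a quantitative feel for the corner problem of Section 2.3. The following sketch evaluates the ratio in Equation 1 as the corner dimension d grows (the ε factors cancel, so the ratio depends only on d):

import math

def certifiable_fraction_opposite_corner(d):
    # Equation 1: fraction of the width-eps hypercube opposite a
    # d-dimensional corner that is truly non-eps-robust; the remainder
    # is robust but uncertifiable when the level curves merely run
    # parallel to the facets forming the corner.
    return math.pi ** (d / 2) / (math.gamma(d / 2 + 1) * 2 ** d)

for d in [1, 2, 3, 5, 10, 20]:
    print(d, certifiable_fraction_opposite_corner(d))
# d = 2 gives pi/4 ~ 0.785; by d = 10 the ratio is ~0.0025, i.e.,
# almost the entire hypercube is falsely flagged in high dimensions.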
As highlighted in Section 2.3, in places where the decision boundary forms high-dimensional "corners," there may be relatively large volumes of points that are ε-far from the boundary but cannot be certified as long as the level curves simply run parallel to the boundary. In such cases, tight certification requires extra capacity specifically to round out the level curves around the corners in the decision boundary. We begin by demonstrating this concept via an illustrative example. We conclude by discussing the implications of our results and suggest avenues for future work. 3.1 AN ILLUSTRATIVE EXAMPLE OF HOW CAPACITY ENABLES TIGHT CERTIFICATION As an example of how Lipschitz-based certification can require excess capacity beyond what is necessary to learn a robust boundary, we consider a synthetic 2-D dataset that can be robustly separated by a simple piecewise-linear boundary. An illustration is provided in Figure 1a. We begin with a decision boundary given by B = {(x1, x2) : max(x1, x2) = 0}; this boundary separates points with negative x- and y-coordinates from points in the other three quadrants, and forms a 90° corner at the origin. The data are then generated such that all the points with label 0 lie a distance of at least ε below and to the left of the boundary, and the points with label 1 lie a distance of at least ε above and to the right of the boundary. Specifically, the 1-labeled points curve around the boundary such that there is a tight margin of exactly 2ε about the boundary. By construction, the function f(x) = [0, max(x1, x2)] produces logit values that yield the boundary B, with respect to which all points in the dataset are ε-locally robust. This function can be trivially implemented with minimal capacity by a simple MinMax network, f(x) = σ(xW1)W2, where σ is the MinMax activation function, and W1 and W2 are given by Equation 2.
W1 = [[1, 0], [0, 1]]    W2 = [[0, 0], [0, 1]]    (2)
Furthermore, the Lipschitz constant of f is 1;5 this can even be tightly obtained by taking the layer-wise product of the layer operator norms, as is typically done in practice. Hence, the points that can be certified will be those for which |f1(x) − f0(x)| ≥ ε; that is, the points outside the level curves max(x1, x2) = −ε and max(x1, x2) = ε. However, we see that this certified frontier fails to certify many points in the positive x-y quadrant, despite the fact that all the points are indeed robust with respect to the boundary of f. This is depicted in Figure 1b. In order to certify these points, we need the level curve corresponding to f1(x) − f0(x) = ε to bend smoothly around the boundary, rather than forming the same 90° angle. This requires more capacity. To gain a sense of how this plays out in practice, we consider adding capacity via expanding the number of neurons in the hidden layer (which contained only two neurons in our minimal example). In Figures 1d and 1e, we show the boundaries of two additional learned networks, g and h, with 20 and 200 internal neurons, respectively. We see that increasing the number of internal neurons by an order of magnitude yields a better set of level curves, but the network g still must compromise, as the level curves are not smooth enough to tightly follow the contour of the data. Finally, when we increase the number of internal neurons by two orders of magnitude, we at last obtain a function h that achieves nearly 100% VRA on our sample data. This function, as desired, forms essentially smooth level curves that bend around the boundary corner with a radius of ε.
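For concreteness, the minimal two-neuron network and its failure mode can be reproduced in a few lines. The sketch below (plain NumPy, with MinMax implemented from its definition; illustrative, not the gloro code used to train g and h) realizes f, the Lipschitz bound K = 1, and the resulting certified-margin test:

import numpy as np

W1 = np.array([[1.0, 0.0], [0.0, 1.0]])  # Equation 2
W2 = np.array([[0.0, 0.0], [0.0, 1.0]])

def minmax(z):
    # MinMax activation: each consecutive pair of units is replaced by
    # (min, max) of the pair; on every linear region it is a
    # permutation, hence gradient-norm preserving.
    z = z.reshape(-1, 2)
    return np.concatenate([z.min(axis=1, keepdims=True),
                           z.max(axis=1, keepdims=True)], axis=1).ravel()

def f(x):
    return minmax(x @ W1) @ W2  # = [0, max(x1, x2)]

# Layer-wise product of spectral norms bounds the Lipschitz constant;
# here K = ||W1||_2 * ||W2||_2 = 1, and the bound is tight.
K = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

def certified(x, eps):
    logits = f(x)
    return abs(logits[1] - logits[0]) >= eps * K

print(certified(np.array([2.0, 2.0]), eps=0.5))  # True: far from the corner
print(certified(np.array([0.4, 0.4]), eps=0.5))  # False: the corner problem

The second point is a concrete instance of the corner problem: it lies at distance 0.4·√2 ≈ 0.57 > ε from the boundary, yet its margin is only 0.4 < ε·K, so it is robust but uncertified.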
Interestingly, h learns a boundary that is somewhat different from the boundary originally used to derive the data; however, both boundaries can be thought of as "equivalent" in the sense that they produce the same margin, reflecting that the optimal boundary for this dataset is not unique. Discussion. In our example, we needed 100 times more neurons than were necessary to construct an optimal decision boundary in order to tightly certify the boundary with the Lipschitz constant. While it is difficult to extrapolate from this toy example to a "real world" scenario, our results suggest that smoothing the level curves may require significant overhead beyond the capacity necessary to produce a truly robust boundary. Another aspect of this experiment worth noting is that when the network had insufficient capacity to learn an optimally robust, tightly certified boundary (e.g., in Figures 1c and 1d), the resulting model tended to compromise by making the corner less sharp (compared to the desired 90° angle). Geometrically, when the boundary has an inflection with a wider angle, the difference between the certifiable frontier and the frontier of robust points is less pronounced (consider, for example, what happens when the inflection approaches 180°). In effect, this means that while under-parameterization of piecewise-linear models may be a problem for robust model performance in practice, this limitation may be (at least in part) manifested as an under-fit model as opposed to one with many robust but non-certifiable points. This is reflected in the empirical results for certifiably trained models in the literature, which typically have lower "clean accuracies" than their standard-trained counterparts. However, we note that these models also exhibit a discrepancy between their certified accuracy and their vulnerability to actual attacks, leaving the possibility that they may also fail to certify some truly robust points. 5More properly put, the Lipschitz constant of |f1 − f0|—which represents the margin by which the predicted class exceeds the non-predicted class—is 1. 3.2 POTENTIAL DRAWBACKS OF THE CAPACITY ESCAPE HATCH As we have seen, by adding capacity, we can help overcome the limitations of piecewise linearity by enabling the network to approximate smooth curves around corners in the decision boundary. For universal tight certification, this needs to be done in the neighborhood of all corners on the decision boundary. To the extent that each corner requires independent capacity, hopes for the scalability of such an approach seem slim; though VRA only requires tight certification on the data manifold, meaning that extra capacity should only be needed in places where the decision boundary has sharp inflections near in-distribution points. However, this, too, presents an interesting problem. Namely, the network only has incentive to allocate capacity to round the level curves in the places that are necessary to certify its training set; i.e., where inflections in the decision boundary encroach on training points. Meanwhile, if similar inflections exist near test points not seen during training, the learned network may fail to certify them—even if the boundary is general, and even if it is also robust. In other words, we are faced with not only the challenge of learning a generally robust boundary, but additionally of learning a generally certifiable function.
Indeed, generalization of VRA is empirically observed to be worse than the corresponding "clean accuracy" would indicate—a principle that has been noted in prior work due to its privacy implications (Yeom et al., 2020). A Proposed Way Forward. Another possibility for addressing the fact that Lipschitz-based certification is PLL is to expand the hypothesis class to enable smooth curves in the decision surface. Ultimately, our analysis shows that Lipschitz-based certification is most effective when the level curves of the network function accurately reflect the ℓ2 distance to the boundary, which requires the possibility of smooth curves. This goal may be best achieved by purpose-built activations, as piecewise linearity stems from the choice of activation function. State-of-the-art Lipschitz-based certifiable training methods have enjoyed increased success in recent years through leveraging MinMax activations (Anil et al., 2019)—or a variant thereof proposed by Singla et al. (2022)—which are piecewise linear. MinMax has a distinct advantage over the more common ReLU activation, due to its gradient-norm-preserving (GNP) property, which Anil et al. demonstrate is key for tight, efficient Lipschitz bounds. While the need for gradient norm preservation remains clear, we posit that some form of smoothness is an additional desirable property, as it would free the hypothesis class from piecewise linearity. We believe the task of designing suitable smooth activation functions for PLL-certified networks is a promising avenue for future work. 4 RELATED WORK Power and Limitations of Lipschitz-based Certification. Several of the early efforts around robustness certification focused on post hoc certification of networks trained outside of the control of the certifier. This is a fundamentally hard problem, shown to be NP-complete by Katz et al. (2017) and Sinha et al. (2018). While this fundamentally limits the tractability of complete post hoc certification, the limitation is of lesser concern for modern approaches that incorporate certification into the training objective, thus encouraging learning models that better facilitate efficient certification. The specific limitations of Lipschitz-based certification have also been of great interest in the prior literature. Most of these results particularly consider the practical problem of bounding a neural network's Lipschitz constant. For example, Huster et al. (2018) note that the common method of using the product of the layer-wise operator norms cannot tightly bound the Lipschitz constant of even basic functions in ReLU networks. Anil et al. (2019) study this point further, demonstrating a trade-off between expressive power and efficient Lipschitz bound computation in networks with non-gradient-norm-preserving activation functions. This limitation is handled by using network architectures with gradient-norm-preserving activation functions such as MinMax, and orthonormal linear operators (though the latter need not necessarily be strictly enforced, as it is a learnable objective). Anil et al. conjecture that such networks are universal 1-Lipschitz function approximators, suggesting that learning any Lipschitz function in such a way that the Lipschitz constant can be bounded tightly and efficiently is possible. By contrast, our work points to previously unstudied limitations that are separate from the Lipschitz-constant-bounding problem, and are indeed not mitigated through the use of MinMax activations, which are piecewise linear.
However, we propose that the limitations brought forth in our work may similarly be addressed via novel activation functions. On the flip side, previous work has also touched on the power of Lipschitz-based certification. Leino et al. (2021) showed that certification with the global Lipschitz constant can be as powerful as with the local Lipschitz constant when the model is under the learner's control. We extend this result in a number of key ways. First, we prove a stronger result that can be stated for all points, rather than for a finite set of points certified via the local Lipschitz constant. Second, we explicitly consider the hypothesis class, demonstrating that smoothness is a necessary condition to achieve this result. Capacity Requirements for Robust Neural Networks. Understanding the role of capacity in deep neural networks has been a topic of interest in general, particularly due to the demonstrated effectiveness of highly over-parameterized models (Arora et al., 2018; Bubeck & Sellke, 2021; Du et al., 2019; Garg et al., 2022; Zhang et al., 2017). Recent work has also investigated this subject in the particular context of robust models. Bubeck & Sellke (2021) showed that under mild regularity assumptions, learning a highly accurate model with small Lipschitz constant requires significantly more parameters than would be required with no constraint on the Lipschitz constant—where the capacity overhead, in terms of the number of parameters, scales with the dimension. While a controlled Lipschitz constant is central to successful Lipschitz-based certification, our work (e.g., our example in Section 3.1) shows that a Lipschitz interpolation between points of opposite class is not sufficient for certification. As our analysis is focused on certification rather than Lipschitz interpolation, we complement the work of Bubeck & Sellke, showing that even further capacity may be required to appropriately bend the function's level curves to facilitate Lipschitz-based certification. In addition to the information-theoretic capacity requirements, large numbers of parameters in deep networks may be necessary to facilitate efficient learning (Arora et al., 2018; Du et al., 2019). Recently, Garg et al. (2022) showed that robust learning in particular may require even greater over-parameterization than standard learning. Results such as these are complementary to work such as ours, which focuses on minimal parameterizations. Randomized Smoothing. Our work has focused on deterministic certification. By contrast, randomized smoothing (Cohen et al., 2019; Lecuyer et al., 2018) has become a popular method that instead provides a statistical guarantee of robustness. Randomized smoothing (RS) essentially modifies the original function by predicting the expected label under Gaussian6 noise. These predictions are empirically determined through sampling, with the statistical certificate depending on the unanimity of the sample labels. While RS provides a weaker robustness guarantee, it solidly outperforms deterministic methods in terms of certified accuracy. Interestingly, it seems clear that RS is not PLL, since it naturally smooths piecewise-linear networks, leading to a smooth boundary and certified frontier—this may be one of the keys to its success. This observation gives further support to the notion that state-of-the-art deterministic methods may be held back by piecewise linearity, and may benefit from smooth activation functions.
5 CONCLUSIONS AND FUTURE DIRECTIONS Incorporating Lipschitz-based certification into robust training procedures has proven to be the most effective way to achieve high deterministic ℓ2 verified-robust accuracy yet considered in the literature. Due to our Theorem 2, there is reason to believe Lipschitz-based certification has the power to remain as promising as current results suggest. However, we also showed that restricted to the hypothesis class of piecewise-linear networks, as has been the standard regime, Lipschitz-based certification becomes fundamentally limited. For piecewise-linear networks, this means that tight Lipschitz-based certification may require significantly more parameters, which, even if tractable, can complicate certifiably robust generalization (e.g., see Section 3.2). On the other hand, rather than viewing this as a fundamental drawback for Lipschitz-based certification, we propose that purpose-built activations—with the correct smoothness and gradient-norm-preserving properties—are a promising avenue for future work to free the most promising form of efficient deterministic certification from the limitations of piecewise linearity. 6Prior work has considered other distributions as well (Yang et al., 2020a). A PROOFS A.1 PROOF OF THEOREM 1 Theorem Statement. Any piecewise-linear limited certification procedure is incomplete on the hypothesis class of piecewise-linear networks. Proof. It suffices to show that there exists a boundary achievable by a piecewise-linear network that no PLL certification method can tightly certify. We proceed by producing a piecewise-linear boundary that induces a smooth robust frontier. This is sufficient to prove our theorem, as ∆(Ccert(f, ε)) ≠ ∆(R(F, ε)) =⇒ Ccert(f, ε) ≠ R(F, ε). Consider the 2-D boundary given by max(x, y) = 0. Clearly, this boundary exists within the class of piecewise-linear functions, as the function f(x, y) = max(x, y) is piecewise linear. Now consider the points in the positive x-y quadrant. The points in this quadrant that are at distance ε from the boundary are given by √(x² + y²) = ε, which is not piecewise linear. By definition, any certification method that is PLL must have a certified frontier that is piecewise linear. Thus, the certified frontier of any such method cannot be equal to √(x² + y²) = ε in this quadrant. A.2 PROOF OF THEOREM 2 Theorem Statement. When the hypothesis class, F, is given as the set of Lipschitz functions, Lipschitz-based certification is complete on F. Proof. Let F be the set of Lipschitz functions. Consider the decision boundary of any function f ∈ F. Define f′ as follows: let d(x) be the minimum distance of x from the decision boundary and let f′(x) = d(x) · 1F(x), where 1F(x) is the one-hot encoding of F(x). First, observe that f′_j − f′_i is 1-Lipschitz for all i ≠ j. To see this, consider the following. The Lipschitz constant is given by
sup_{x,x′} |(f′_j(x) − f′_i(x)) − (f′_j(x′) − f′_i(x′))| / ||x − x′|| = sup_{x,x′} |f′_j(x) − f′_j(x′) + f′_i(x′) − f′_i(x)| / ||x − x′||    (3)
Consider points x and x′, and let us assume that ||x − x′|| = δ. We would like to bound the quantity given by (4), the numerator in (3), by δ.
|f′_j(x) − f′_j(x′) + f′_i(x′) − f′_i(x)|    (4)
There are a few cases to consider. First, if F(x) and F(x′) are both different from i and j, then (4) is 0 ≤ δ. Since (4) is symmetric in both i and j, and x and x′, without loss of generality, we will assume F(x) = j.
This leaves two cases: when F(x′) = j, and when F(x′) ≠ j (in the latter case we will not be concerned with whether or not F(x′) = i). In the first case we have
(4) = |f′_j(x) − f′_j(x′)| = |d(x) − d(x′)|    (5)
    = d(x) − d(x′)  without loss of generality    (6)
Let a be the nearest point on the boundary to x′, such that d(x′) = ||x′ − a||. Thus,
d(x) ≤ ||x − a||  as a is on the boundary    (7)
     ≤ ||x − x′|| + ||x′ − a||  by the triangle inequality    (8)
     = δ + d(x′)    (9)
=⇒ d(x) − d(x′) ≤ δ  as desired    (10)
In the second case, x and x′ are given different labels and we have
(4) = |f′_j(x) + f′_i(x′)|    (11)
    ≤ d(x) + d(x′)  as f′_i(x′) is at most d(x′) (achieved when F(x′) = i)    (12)
Since x and x′ are given different labels, there must be at least one part of the decision boundary that bisects the line segment connecting x and x′; let a be this intersection point. Additionally, since a is on the boundary, we must have that d(x) ≤ ||x − a|| and d(x′) ≤ ||x′ − a||. Thus, as desired,
d(x) + d(x′) ≤ ||x − a|| + ||x′ − a|| = δ    (13)
This allows us to conclude that f′_j − f′_i is 1-Lipschitz for all i ≠ j, as claimed. The points that are certified by Lipschitz-based certification are those for which (14) holds, where j = F(x) and K_ji is the Lipschitz constant of f′_j − f′_i.
min_{i≠j} { f′_j(x) − f′_i(x) − εK_ji } ≥ 0    (14)
Notice that when i ≠ F(x), f′_i(x) = 0. Thus, (14) can be simplified to f′_j(x) = d(x) ≥ ε, noting also that K_ji = 1 ∀i, j. Therefore, the points that can be certified via Lipschitz-based certification are those for which d(x) ≥ ε, which are precisely the points that are ε-locally robust. A.3 PROOF OF PROPOSITION 3 Theorem Statement. Lipschitz-based certification is piecewise-linear limited. Proof. Assume the function, f, being certified is piecewise linear. Without loss of generality, consider inputs x for which the network predicts class j. The margin by which class j surpasses all other classes is given by m(x) = min_{i≠j} {f_j(x) − f_i(x)}. Note that m is piecewise linear as f is piecewise linear. Let K be the Lipschitz constant of m. The largest radius that can be certified at x is then m(x)/K. Thus, the certified frontier is given by m(x)/K = ε; this corresponds to the level curve of m at m = ε·K. Since m is piecewise linear, this level curve is piecewise linear. Thus, the certified frontier is piecewise linear, and Lipschitz-based certification is PLL. B LIMITATIONS OF OTHER CERTIFICATION METHODS B.1 LIMITATIONS OF LOCAL-LIPSCHITZ-BASED CERTIFICATION State-of-the-art deterministic ℓ2 certified performance is currently achieved using Lipschitz-based certification, which outperforms other types of certified training methods (Leino et al., 2021; Trockman & Kolter, 2021), such as those based on convex relaxations—e.g., (Wong et al., 2018)—or maximizing linear regions—e.g., (Croce et al., 2019; Xiao et al., 2019). Unsurprisingly, however, methods that use the local Lipschitz constant for certification can achieve similarly high VRA (Huang et al., 2021), though this comes at the cost of significantly slower certification. The local Lipschitz constant at a point x is given by K_ε(x) in Definition 5, which essentially corresponds to the maximum slope of the function within an ε-neighborhood of x. Definition 5. The local Lipschitz constant is given by
K_ε(x) = sup_{x1, x2 : ||x − x1|| ≤ ε, ||x − x2|| ≤ ε} { |f(x1) − f(x2)| / ||x1 − x2|| }
Local-Lipschitz-based certification, similar to Lipschitz-based certification (Section 2.2), certifies points, x, when the margin by which the top-predicted class, F(x), exceeds all other classes is greater than ε·K_ε(x). While the local Lipschitz constant is always a lower bound for the global Lipschitz constant—and therefore local-Lipschitz-based certification can possibly be tighter—local-Lipschitz-based certification is nonetheless equally limited. We will consider a generous setting in which the bound used for certification is exact, i.e., where the certification procedure has oracle access to K_ε(x). Because K_ε(x) is not piecewise linear, local-Lipschitz-based certification is not strictly piecewise-linear limited (PLL) in this setting. It is worth noting, however, that methods for approximating the local Lipschitz constant may not leverage this smoothness in practice. Regardless, we show that local-Lipschitz-based certification is incomplete on piecewise-linear networks (Theorem 5). This result is related to the fact that when the learner is given control over the implementation of the boundary, (global) Lipschitz-based certification can match the power of local-Lipschitz-based certification; this result has been proven in a slightly weaker formulation by Leino et al. (2021). We provide an alternative theorem statement and proof here that better aligns with the insights in this work. Theorem 5. Local-Lipschitz-based certification is not complete on the hypothesis class of piecewise-linear networks. Proof. It suffices to show that there exists a boundary achievable by a piecewise-linear network for which no corresponding piecewise-linear implementation can be tightly certified by local-Lipschitz-based certification. Recall that by Corollary 4 there exists such a boundary for (global) Lipschitz-based certification. We will consider one of the same such boundaries. For a particular value of ε, consider the points ∆(R(F, ε)), which are at distance exactly ε from the boundary. There are two cases to consider: either (1) the local Lipschitz constant is the same everywhere, i.e., ∀ε > 0, ∀x1, x2 ∈ ∆(R(F, ε)), K_ε(x1) = K_ε(x2); or (2) there is some variation in the local Lipschitz constant, such that ∃ε > 0 and x1, x2 ∈ ∆(R(F, ε)) where K_ε(x1) ≠ K_ε(x2). In the first case, we see that K_ε(x) = K (the global Lipschitz constant), meaning that local-Lipschitz-based certification will certify the exact same points as (global) Lipschitz-based certification. Thus, by Corollary 4, there must be a point which is robust at radius ε but not certifiable. In the second case, without loss of generality, assume K_ε(x1) > K_ε(x2). Because f is piecewise linear, it is comprised of a finite number of linear functions, which in turn have a finite number of distinct slopes (gradient norms). Thus, if K_ε(x1) > K_ε(x2), then K_ε(x1) − K_ε(x2) = δ, where δ belongs to some finite set of strictly positive values. Furthermore, without loss of generality, x1 and x2 can be chosen to be arbitrarily close together, i.e., they lie arbitrarily near a point where the local Lipschitz constant changes. We will therefore consider x1 and x2 chosen according to Equation 15.
||x1 − x2|| < ε·δ / K    (15)
Let m2 be the margin by which the top-predicted class, F(x2), exceeds all other classes. The maximum radius that can be certified at x2 is thus m2/K_ε(x2). Note that as certification is sound, we have
m2 / K_ε(x2) ≤ ε    (16)
Now consider the maximum radius that can be certified at x1.
Let m1 be the margin by which the top-predicted class, F(x1), exceeds all other classes. The maximum radius that can be certified at x1 is thus m1/K_ε(x1):
m1 / K_ε(x1) = m1 / (K_ε(x2) + δ)  by assumption    (17)
             ≤ (m2 + K·||x1 − x2||) / (K_ε(x2) + δ)  by definition of the Lipschitz constant    (18)
             < (m2 + ε·δ) / (K_ε(x2) + δ)  by our choice of ||x1 − x2|| in (15)    (19)
             ≤ (ε·K_ε(x2) + ε·δ) / (K_ε(x2) + δ)  by (16)    (20)
             = ε    (21)
Thus, we see that x1 cannot be certified with radius ε, despite the fact that its distance from the boundary is exactly ε. B.2 OTHER PIECEWISE-LINEAR LIMITED METHODS Our work focuses primarily on Lipschitz-based certification, which we demonstrate is fundamentally limited on the hypothesis class of piecewise-linear networks. However, this limitation is not due specifically to the use of the Lipschitz constant per se; instead, we attribute it more generally to the fact that Lipschitz-based certification always produces a piecewise-linear certified frontier on piecewise-linear networks, a property we refer to as PLL (Definition 4). In this section we briefly discuss how this property may apply to other flavors of certification techniques that have been proposed in the literature. Convex Relaxations and Dual Networks. One classic approach for certification is through convex relaxation. A survey of such methods is given by Salman et al. (2019), who point out the limitations (regarding tight certification) of convex relaxations (though the authors do not consider our setting where the learner may control the implementation of the boundary, but rather focus on post hoc certification). Though many approaches in this family have been proposed, we will consider two baseline methods that capture a primal and dual formulation of convex relaxations: Fast-Lin (Weng et al., 2018), and an approach proposed by Wong & Kolter (2018), often referred to as "KW." Fast-Lin directly derives upper and lower bounds on the output of a ReLU network in order to determine if an adversarial example might exist. This is done by iteratively computing upper and lower bounds for the neurons in each layer and using them to replace the ReLU activations with linear upper and lower bounds. This computation resembles a piecewise-linear network, suggesting that Fast-Lin is PLL. The KW approach formulates the adversary as an LP that optimizes over the convex outer approximation of the set of top-level activations reachable through a norm-bounded perturbation. Crucially, for the sake of tractability, the LP can be bounded using any feasible point of the dual, which Wong & Kolter show can be expressed as a dual network that resembles a backwards pass in the network being certified. For ReLU networks, the activations in the dual network are replaced with their upper convex envelopes (a linear function) over the bounded set [ℓ, u], where ℓ and u represent lower and upper bounds on the pre-ReLU neural activations. The upper and lower bounds can be iteratively computed in a similar way as in Fast-Lin; thus, in its simplest form,7 the dual network inherits the piecewise linearity of the original ReLU network being certified, suggesting the resulting certified frontier is piecewise linear, and certification is PLL. Hyperplane Projections. As exact certification is NP-complete, the literature has often turned to training procedures that help simple, approximate certification enjoy greater success.
In piecewise-linear networks, the input space can be partitioned into a polyhedral complex where each convex region corresponds to a single activation pattern, over which the network is linear (Croce et al., 2019; Fromherz et al., 2021; Jordan et al., 2019). Motivated by this view of ReLU networks, one family of robust training approaches attempts to expand the linear regions of the network to simplify the combinatorial analysis of the possible ReLU activation patterns (Croce et al., 2019; Xiao et al., 2019). Croce et al. proposed a simple certification technique for networks trained with their "Maximum Margin Regularization" (MMR), where a point, x, is certified only if (1) the entire ε-ball around x is contained in a single convex activation region, and (2) the linear function corresponding to the region does not have a boundary within ε of x. This approach is clearly PLL, as the certified regions can be obtained by shrinking each activation region (possibly split in two if a linear decision boundary crosses it) by ε. Since the original regions are convex polytopes, so too are the certified regions; thus the certified frontier is piecewise linear. In contrast to our findings for Lipschitz-based certification, it is worth noting that the limitations of this approach go beyond PLL, as completeness of the MMR approach is in direct conflict with non-linearity; moreover, the approach is designed specifically for piecewise-linear networks. 7This approach has been refined in subsequent work that we do not consider here (Wong et al., 2018). C DETAILS ON EXPERIMENTS The experiments presented in Figure 1 in Section 3 were performed using the gloro Python library, which implements the GloRo Net method of Leino et al. (2021) for training certifiably robust models by incorporating Lipschitz-based certification into training. All networks in the experiments consisted of a 1-hidden-layer dense network with MinMax (Anil et al., 2019) activations; three specific architectures were used, with 2, 20, and 200 hidden units, respectively. Models were trained for 64 epochs, with a batch size of 128. We chose hyperparameters inspired by those used by Leino et al. (see the original paper for details on the meaning of the various hyperparameters); namely, we used GloRo-TRADES loss with λ = 1.2, we scaled ε logarithmically to its ultimate value of 0.5 by the half-way point of training, and we linearly decreased the learning rate from 10−3 to 0 half-way through training. D AN ILLUSTRATIVE EXAMPLE OF THE CORNER PROBLEM For illustrative purposes, a diagram is provided in Figure 2 that serves as a visual explanation of the "corner problem" described in Section 2.3. The boundary of a neural network, shown by the bold black line, forms a sharp corner. The complement of the robust region, i.e., the set of points that are not robust, is shown in gray. A simple implementation of this boundary has level curves that make similar sharp corners; the level curve corresponding to the certified frontier is shown by the dotted line, and the certified region is colored in blue. The region opposite the corner in the boundary is highlighted. We see that in this region, there is a set of points, shown in orange, that are not certified, despite the fact that they are robust, being at distance greater than ε from the boundary.
In this two-dimensional example, these falsely flagged points make up a relatively small fraction of the uncertified points opposite the corner (represented as the union of the orange points and the highlighted gray points in the diagram); however, in high dimensions, virtually all uncertified points in this region would be falsely flagged, as indicated by Equation 1.
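As a numerical companion to the diagram, the following sketch (illustrative Python; it assumes the max(x, y) boundary of Appendix A.1) implements the smooth construction f′(x) = d(x) · 1F(x) from the proof of Theorem 2. With K = 1, Lipschitz-based certification of f′ recovers exactly the ε-robust points, including the falsely flagged ones in the orange region:

import numpy as np

def dist_to_boundary(p):
    # Exact L2 distance from p to the boundary {max(x, y) = 0}: the
    # nearest boundary point for a positively labeled p is found by
    # clipping its coordinates at 0 (the corner itself, in the
    # positive quadrant); for a negatively labeled p it is the
    # nearest coordinate axis.
    x, y = p
    if max(x, y) > 0:
        return np.hypot(max(x, 0.0), max(y, 0.0))
    return -max(x, y)

def f_prime(p):
    # Construction from the proof of Theorem 2: d(x) * one_hot(F(x)).
    # Each margin f'_j - f'_i is 1-Lipschitz, so certification with
    # K = 1 certifies exactly {x : d(x) >= eps}.
    d = dist_to_boundary(p)
    out = np.zeros(2)
    out[int(p.max() > 0)] = d
    return out

eps = 0.5
p = np.array([0.4, 0.4])        # robust (distance ~0.566) but uncertified
logits = f_prime(p)             # by the piecewise-linear f of Section 3.1
print(abs(logits[1] - logits[0]) >= eps)  # True: the smooth
                                          # implementation certifies it

The contrast with the piecewise-linear implementation of the same boundary is exactly the gap depicted in Figure 2: the level curves of f′ bend at radius ε around the corner, so the orange region disappears.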
1. What is the focus of the paper regarding certification techniques? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis? 3. What are the weaknesses of the paper, especially regarding its experimental evaluation and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper discusses the fundamental problem for certification using piece-wise linear activations like ReLU. Lipschitz-based certification is mainly investigated and advocated with smooth activations. Strengths And Weaknesses Strength The authors present an interesting theoretical discussion regarding Lipschitz-based certification with regard to piece-wise linear activations. Weakness Branch and bound (BaB) is widely adopted to break the barrier for piece-wise linear activations and enable complete certification. Representative works include the recent VNN-COMP winner alpha-beta-crown, as well as other participants like OVAL, VeriNet, Marabou, ERAN, etc (see VNN-COMP21 report [A] for details and VNN-COMP22 for updates). These SOTA complete verifiers significantly scale certification to larger models, speeding up certifications, and generalizing to different robustness properties and architectures. The paper does not mention any of them in related works or evaluation. Analysis and discussion on this are crucial to making claims solid and convincing. To support the power/limitations of Lipschitz-based certification approaches, it is necessary to use existing Lipschitz-based certification approaches to confirm the theoretical observations and insights on either toy examples or standard robustness benchmarks. Also, many existing works including some tools mentioned before support smooth activations. Experimental evaluation should be presented to support using smooth activations over piece-wise linear activations. Clarity, Quality, Novelty And Reproducibility The paper has moderate clarity. Claims are new but not well supported by evaluation results.
ICLR
Title Limitations of Piecewise Linearity for Efficient Robustness Certification Abstract Certified defenses against small-norm adversarial examples have received growing attention in recent years; though certified accuracies of state-of-the-art methods remain far below their non-robust counterparts, despite the fact that benchmark datasets have been shown to be well-separated at far larger radii than the literature generally attempts to certify. In this work, we offer insights that identify potential factors in this performance gap. Specifically, our analysis reveals that piecewise linearity imposes fundamental limitations on the tightness of leading certification techniques. These limitations are felt in practical terms as a greater need for capacity in models hoped to be certified efficiently. Moreover, this is in addition to the capacity necessary to learn a robust boundary, studied in prior work. However, we argue that addressing the limitations of piecewise linearity through scaling up model capacity may give rise to potential difficulties—particularly regarding robust generalization—therefore, we conclude by suggesting that developing smooth activation functions may be the way forward for advancing the performance of certified neural networks. 1 INTRODUCTION Since the discovery of adversarial examples (Szegedy et al., 2014), defenses against malicious input perturbations to deep learning systems have received notable attention. While many early-proposed defenses—such as adversarial training (Madry et al., 2018)—are heuristic in nature, a growing body of work seeking provable defenses has arisen (Cohen et al., 2019; Croce et al., 2019; Fromherz et al., 2021; Huang et al., 2021; Jordan et al., 2019; Lee et al., 2020; Leino & Fredrikson, 2021; Leino et al., 2021; Li et al., 2019; Singla et al., 2022; Trockman & Kolter, 2021; Wong et al., 2018; Zhang et al., 2018). Generally, such defenses attempt to provide a certificate of local robustness (given formally in Definition 1), which guarantees a network's prediction on a given point is stable under small perturbations (typically in Euclidean or sometimes ℓ∞ space); this precludes the possibility of small-norm adversarial examples on certified points. The success of a certified defense is typically measured empirically using verified robust accuracy (VRA), which reflects the fraction of points that are both (i) classified correctly and (ii) certified as locally robust. Despite the fact that perfect robust classification (i.e., 100% VRA) is known to be possible on standard datasets at the adversarial perturbation budgets used in the literature (Yang et al., 2020b), this possibility is far from realized in the current state of the art. For example, on the benchmark dataset CIFAR-10, state-of-the-art methods offering deterministic guarantees of ℓ2 robustness1 have remained at approximately 60% VRA (Huang et al., 2021; Leino et al., 2021; Singla et al., 2022; Trockman & Kolter, 2021), while non-robust models handily eclipse 95% accuracy. It is difficult to precisely account for this discrepancy; though among other reasons, state-of-the-art methods typically use loose bounds to perform certification—as exact certification is (for general ReLU networks) NP-complete (Katz et al., 2017; Sinha et al., 2018)—which conceivably leads to falsely flagging truly robust points or to over-regularization of the learned model.
While conservative approximations may be necessary to perform efficient certification (and to facilitate efficient robust training), it is certainly possible that they foil reasonable hopes for "optimality." In this work, we offer further insight into the shortcomings of modern certification techniques by analyzing their limitations in the context of the architectural settings in which they are conventionally employed. 1In this work we primarily consider certified defenses that provide a deterministic guarantee of local robustness, as opposed to a statistical guarantee. For further discussion of this point, see Section 4. In particular, we find that piecewise linearity—a practically ubiquitous property of neural networks considered in the certification literature (e.g., standard ReLU and the more recently popularized "MinMax" (Anil et al., 2019) activations are both piecewise linear)—fundamentally limits the power of Lipschitz-based ℓ2 local robustness certification. In effect, we argue, this means that extra capacity is needed simply for facilitating efficient certification—in addition to whatever capacity may be required for learning a robust boundary (e.g., as examined by Bubeck & Sellke (2021)). On the other hand, perhaps surprisingly, we prove that free from the constraint of piecewise linearity, Lipschitz-based certification is powerful enough to perform complete certification on any decision boundary, provided the implementation of the function giving rise to the boundary is under the learner's control (indeed, this is consistent with the fact that the highest-performing certified defenses incorporate Lipschitz-based certification into training). These latter findings suggest that continued progress towards improving state-of-the-art VRA may be enabled through carefully chosen smooth activation functions,2 which do not inherently limit the power of what are currently the most promising forms of efficient local robustness certification. In summary, the primary contributions of this work are as follows: (1) we show that piecewise linearity imposes inherent limitations on the tightness of efficient robustness certification—our primary focus is Lipschitz-based certification, but we discuss similar limitations of other methods in Appendix B; (2) we prove that Lipschitz-based certification is fundamentally powerful for tight robustness certification, provided (i) the robust learning procedure has power over the implementation of the classifier, and (ii) the hypothesis class is not limited to piecewise-linear networks; and (3) we demonstrate that tight Lipschitz-based certification may require significant capacity overhead in piecewise-linear networks. These findings offer a new perspective on the sticking points of modern certified training methods, and suggest possible paths forward. We begin in Section 2 by introducing the limitations piecewise linearity imposes on robustness certification, starting generally, and narrowing our focus specifically to Lipschitz-based certification. We then discuss the role that capacity plays in mitigating these limitations in Section 3, which concludes with a discussion of the implications of our findings, both retrospectively and prescriptively. Finally, we discuss related work in Section 4, and offer our concluding remarks in Section 5.
2 LIMITATIONS OF PIECEWISE LINEARITY The main insights in this work stem from the simple, yet crucial observation that the points lying at a fixed Euclidean distance from a piecewise-linear decision boundary, in general, do not themselves comprise a piecewise-linear surface. Therefore, in order for a certification procedure to precisely recover the set of robust points—those which lie a distance of at least ε from the decision boundary—it must be capable of producing a boundary between robust and non-robust points that is not piecewise-linear, even on networks that are. However, as we will see, Lipschitz-based certification, for example, is in fact constrained to produce a piecewise-linear "certified frontier" on piecewise-linear networks, as the set of just-certifiable points essentially corresponds to a level curve in the output of the network being certified. On the other hand, if the level curves of the function being certified correspond (up to some constant factor) to their distance from the decision boundary (and must therefore include smooth curves), Lipschitz-based certification identifies precisely the points that are truly ε-locally robust, provided a tight bound on the Lipschitz constant. As we will make clear, this has important implications regarding the power of Lipschitz-based certification in properly suited network architectures. In the remainder of this section, we formalize this intuition and discuss some of its implications. Section 2.1 introduces our main theorem regarding the limitations imposed by piecewise linearity, along with the necessary background and definitions. Section 2.2 narrows the focus to Lipschitz-based certification, showing that despite being powerful in general, it is fundamentally limited within the hypothesis class of piecewise-linear networks. Finally, Section 2.3 presents a thought experiment that provides basic intuition about the possible scale of the problems caused by these limitations. 2Or at least, activation functions which enable learning curved (as opposed to piecewise linear) functions. 2.1 FUNDAMENTAL LIMITATIONS TO CERTIFICATION COMPLETENESS For our purposes, we will consider a neural network to be a function f : Rn → Rm mapping n-dimensional inputs to logit values corresponding to m different classes. From the network function f, we derive a neural classifier, F : Rn → [m], by letting F(x) = argmax_{i∈[m]} f_i(x). When it is clear from the context which we are referring to, we will use the term "neural network" for both the network function f and its corresponding classifier F. Note that two different neural network functions, f and f′, may lead to the same predictions everywhere, i.e., ∀x . F(x) = F′(x). When this happens, we say that f and f′ share the same decision boundary, where the decision boundary is simply the set of points where f_i(x) = f_j(x) for some i ≠ j ∈ [m]. In this work, we consider the problem of local robustness certification. As in prior work, we define local robustness as a property of a point x and classifier F, parameterized by a perturbation budget, or robustness radius, ε, as in Definition 1. Definition 1 (ε-Local Robustness). A classifier F : Rn → [m] is ε-locally robust at point x ∈ Rn, with respect to norm || · ||, if ∀x′ ∈ Rn . ||x − x′|| ≤ ε =⇒ F(x) = F(x′). A certification procedure, cert, is a function that takes a neural network, f, a point, x, and a perturbation budget, ε, and produces a label in {0, 1}, where an output of 1 means that f is certified as ε-locally robust at x.
A valid certification procedure must be sound, i.e., cert(f, x, ε) = 1 =⇒ F is ε-locally robust at x; however, it need not be complete, i.e., it may be the case that cert(f, x, ε) = 0 and yet F is in fact ε-locally robust at x. For a given certification procedure, let the certified regions of f, Ccert(f, ε) = {x : cert(f, x, ε) = 1}, be the set of points that can be positively certified by cert. Similarly, let the robust regions of F be given by the set R(F, ε) = {x : F is ε-locally robust at x} of ε-locally robust points (note that, in contrast to Ccert, R does not depend on the implementation of f, only its classification outputs, given by F). Soundness entails that ∀f . Ccert(f, ε) ⊆ R(F, ε), but clearly it is desirable for Ccert(f, ε) to match R(F, ε) as tightly as possible; when this is achieved perfectly we can consider cert to be "complete." However, as Ccert(f, ε) can depend on the underlying function, f, which has a surjective mapping to classifiers, F, derived from the same hypothesis class, we must be careful in defining completeness precisely. Let F be a hypothesis class—a family of functions of type Rn → Rm, e.g., that are captured by some neural network architecture. We will also use the slight abuse of notation, F ∈ F, to denote any F : Rn → [m] such that there exists a function f′ ∈ F which produces the same labels as F on all inputs, i.e., ∀x . F(x) = argmax_{i∈[m]} f′_i(x). We say that a certification procedure, cert, is complete on F if all possible decision boundaries achievable by functions in the hypothesis class have at least one implementation in F for which cert perfectly recovers the true robust regions. This is stated formally in Definition 2. Definition 2. A certification procedure, cert, is complete on hypothesis class, F, if for ε > 0,
∀F ∈ F . ∃f′ ∈ F . (∀x . F(x) = argmax_{i∈[m]} f′_i(x)) ∧ (Ccert(f′, ε) = R(F, ε))
Essentially, completeness over a hypothesis class entails a notion of compatibility between the certification procedure and the hypothesis class; specifically, it means that for any decision boundary expressible by the hypothesis class, it is possible for a learning procedure to produce a model that implements the decision boundary in a way that makes the certification procedure complete. Definition 2 provides a key relaxation from a stricter notion of completeness that would require Ccert(f, ε) = R(F, ε) for all f, as this would not be achievable by any polynomial certification procedure (assuming P ≠ NP) (Katz et al., 2017; Sinha et al., 2018). By requiring tight certification only modulo the decision boundary, we avoid this limitation, splitting the responsibility for completeness between the certification procedure, the learning algorithm, and the hypothesis class. Next, we will also find it useful to define the certified frontier of F under cert (Definition 3); essentially, the set of points that are just barely certified, which lie at the frontier of the certified regions. We will similarly define the robust frontier as the set of points that are just barely ε-locally robust, which lie at the frontier of the robust regions. Definition 3 (Certified Frontier). The certified frontier of a neural network, F : Rn → [m], under certifier, cert, at perturbation budget, ε, is the set of points
∆(Ccert(f, ε)) = {x : cert(f, x, ε) ∧ (∀δ > 0 . ¬cert(f, x, ε + δ))}.
We now turn to the specifics of one of our main results, namely, that piecewise linearity is a limiting factor for tight certification.
Of course, as alluded to earlier, some certification procedures do achieve complete certification on piecewise-linear networks—e.g., (Jordan et al., 2019; Tjeng et al., 2019)—however, such methods are invariably exponential. Thus, we characterize the set of piecewise-linear limited (PLL) methods in Definition 4. Intuitively, a certification procedure is PLL if it is constrained to produce piecewise-linear certified frontiers on piecewise-linear models.

Definition 4 (Piecewise-linear Limited Certification). A certification procedure, cert, is piecewise-linear limited (PLL) if

∀f . f is piecewise-linear ⟹ ∆( C_cert(f, ε) ) is piecewise-linear.

Note that the robust frontier of a network F is, in general, not piecewise-linear, even if F (and thus its decision boundary) is piecewise-linear. Thus, if the certified frontier of cert is piecewise-linear, cert cannot be complete, i.e., C_cert(f, ε) ≠ R(F, ε). Moreover, this means that any piecewise-linear limited certification procedure cannot be complete on the hypothesis class of piecewise-linear networks (Theorem 1). The proof of Theorem 1 is given formally in Appendix A.1.

Theorem 1. Any piecewise-linear limited certification procedure is incomplete on the hypothesis class of piecewise-linear networks.

The proof of Theorem 1 relies on the fact that a piecewise-linear function cannot be equal to a function exhibiting smooth curves. However, it is known that neural networks, provided with enough capacity, can approximate any function with arbitrary precision (Hornik, 1991). We address this point in Section 3, where we discuss the implications of Theorem 1 regarding the capacity requirements of tightly certifiable networks.

2.2 THE POWER AND LIMITATIONS OF LIPSCHITZ-BASED CERTIFICATION

We will now narrow our focus to consider the specific family of Lipschitz-based certification methods. Such methods perform certification by using an upper bound, K, on the network's Lipschitz constant; essentially, a point is certified if the margin by which the top-predicted class exceeds all other classes is greater than εK. In our work, we will set aside the details around how the Lipschitz constant is obtained, though this is also a source of potential looseness in the general approach. That is, we will (optimistically) take for granted that a tight bound is obtained in our analysis.

Lipschitz-based certification has proven effective in the literature, achieving state-of-the-art performance—when paired with an appropriate training routine—despite its simplicity (Leino et al., 2021; Trockman & Kolter, 2021). Lipschitz-based certification is advantageous in many ways; in addition to being easy to incorporate into a robust learning objective, it enables zero-cost certification at run time, as the Lipschitz constant does not need to be recomputed after training. On the other hand, it would seem that Lipschitz-based certification is fundamentally underpowered—the "global" Lipschitz constant is a conservative estimate of the local Lipschitz constant, which in turn gives a conservative estimate of how much the network output can change within a given neighborhood. If a primary sticking point for advancing certified accuracy is loose certification, it is fair to ask how promising Lipschitz-based certification will continue to be. The philosophy behind incorporating Lipschitz-based certification into training is essentially that the potential shortcomings of Lipschitz-based certification can be addressed by learning an easily certifiable network function.
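Concretely, the margin rule just described amounts to a one-line check. A minimal sketch, assuming f is a hypothetical callable returning the logit vector and K a precomputed upper bound on the Lipschitz constant of each margin f_j − f_i (the tight-bound setting we grant throughout):

```python
import numpy as np

def lipschitz_certify(f, K, x, eps):
    """Certify eps-local robustness via the margin: if the top class exceeds
    every other class by more than eps * K, no point within distance eps can
    close the gap, so the prediction cannot change. Sound by construction;
    completeness is exactly what Sections 2.2.1-2.2.2 investigate."""
    logits = f(x)
    j = int(np.argmax(logits))
    margin = logits[j] - np.max(np.delete(logits, j))
    return bool(margin > eps * K)
```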
We show that this intuition is essentially correct. Perhaps surprisingly, we show that Lipschitz-based certification is sufficiently powerful to be complete on the hypothesis class of Lipschitz functions. [Footnote 4: I.e., with bounded Lipschitz constant. Note that this is not a meaningful constraint for neural networks, as any neural network with Lipschitz activation functions and finite weights is Lipschitz in this sense.] However, we also show that Lipschitz-based certification is PLL, meaning this potential cannot be achieved with a hypothesis class constrained by piecewise linearity.

2.2.1 LIPSCHITZ-BASED CERTIFICATION IS POWERFUL

We begin by showing that for any boundary achievable by a Lipschitz network function, when the learner is given control over the precise network function implementing the boundary, it is always possible to find an implementation that can be tightly certified using Lipschitz-based certification. This is stated formally in Theorem 2. Theorem 2 further entails that there exists a network function for any 2ε-separated data that achieves perfect VRA under Lipschitz-based certification. The proof of Theorem 2 is given in Appendix A.2.

Theorem 2. When the hypothesis class, F, is given as the set of Lipschitz functions, Lipschitz-based certification is complete on F.

2.2.2 LIPSCHITZ-BASED CERTIFICATION IS LIMITED BY PIECEWISE-LINEARITY

Despite the power of Lipschitz-based certification for general functions, when restricted to the hypothesis class of piecewise-linear networks, it becomes fundamentally limited. That is, formally, Lipschitz-based certification is PLL (Proposition 3).

Proposition 3. Lipschitz-based certification is piecewise-linear limited.

Proposition 3 follows essentially because the certified frontier of Lipschitz-based certification corresponds to a particular level curve of the network function, which is piecewise-linear whenever the function is. As a direct consequence of Proposition 3 and Theorem 1, we arrive at Corollary 4.

Corollary 4. Lipschitz-based certification is not complete on the hypothesis class of piecewise-linear networks.

Note that taken in the context of Theorem 2, Corollary 4 means that in a sense, the fundamental limitation of Lipschitz-based certification is not intrinsic to its simplicity (e.g., because the local Lipschitz constant might be tighter than the global constant on some functions), but rather, it is related to the hypothesis class of networks being certified. Put differently, piecewise linearity imposes real limitations on Lipschitz-based certification that cannot be attributed to practical, but non-fundamental, issues, such as efficient computation of Lipschitz bounds, etc.

2.3 THE PROBLEM WITH CORNERS AND THE CURSE OF DIMENSIONALITY

The incongruence between the piecewise-linear certified frontier of Lipschitz-based methods and the robust frontier of a piecewise-linear boundary, which features smooth curves, becomes relevant when the boundary comes to a "corner," or relatively sharp inflection point. At corners, the robust frontier curves at a fixed radius around the corner, while the certified frontier, absent aid from additional capacity (see Section 3), runs parallel to the facets forming the corner, offset by a fixed amount (see Figure 2 in Appendix D for an illustration). The sharper the corner, the larger the difference will be between the corresponding robust and certified regions.
Additionally, we will see that this discrepancy grows with the dimension of the corner, i.e., with the number of independent half-spaces that meet to create the corner. As a thought experiment, we will model a d-dimensional corner as the intersection of d orthogonal half-spaces. Assuming the level curves near the corner run parallel to the half-spaces, h ∈ H, forming the corner, in the best case, the certified region is given by the union of half-spaces obtained by flipping each h ∈ H and shifting it by ε. Consider the hypercube of width ε just opposite the corner. This hypercube lies entirely outside the certified region, meaning all points within it cannot be certified using Lipschitz-based certification. However, only the points intersecting the hypersphere of radius ε centered at the corner are truly non-ε-robust. We can compute the ratio of the volume of the intersecting portion of the hypersphere to that of the hypercube, given by Equation 1:

π^{d/2} / ( Γ(d/2 + 1) · 2^d )    (1)

As the dimension increases, this ratio tends to zero, meaning that in high dimensions, almost all points in this region opposite the corner are incorrectly uncertified. Furthermore, the maximum distance from an uncertified point within this region to the boundary is equal to the diagonal of the hypercube, which is given by √d · ε. This means that even points that are significantly more robust than required may yet be uncertified.

3 THE ROLE OF CAPACITY

The primary limitation of Lipschitz-based certification in piecewise-linear networks derives from the fact that we cannot have smoothly curved level curves in such networks (or, more generally, that PLL certification methods cannot have smoothly curved certified frontiers in such networks). However, while this is true in the strictest sense, a function with smooth curves can be approximated with arbitrary precision, given sufficient capacity. In other words, increased network capacity may be one possible option to mitigate the fundamental limitations discussed throughout Section 2. In this section, we investigate the capacity requirements necessary for tight PLL certification in piecewise-linear networks.

While the precise meaning of "capacity" in a quantifiable sense is a bit nebulous, for our purposes, we will consider capacity in a piecewise-linear network to correspond to the number of piecewise-linear regions. This grows with the number of internal neurons, though the relationship may vary depending on other aspects of the network architecture, e.g., the depth of the network.

Previous work has studied the capacity implications for learning a robust decision boundary, finding that separating points while controlling Lipschitzness may require additional capacity beyond what would be necessary to simply separate them (Bubeck & Sellke, 2021). Besides the capacity required to represent the decision boundary in a robust network, our work asks instead about the capacity required to tightly certify a given boundary. We find that in a piecewise-linear network, even if the boundary is optimal—in that all points in the distribution are indeed a distance of ε or more from it—the network may require additional capacity to be able to prove this using the Lipschitz constant. Setting the data distribution aside, we consider the goal of certifying all points that are sufficiently far from the boundary.
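The rate at which the ratio in Equation 1 vanishes is easy to verify numerically. The following short sketch (our own illustration) evaluates it in log-space for numerical stability; note that ε cancels, so the ratio depends only on the dimension d:

```python
import numpy as np
from scipy.special import gammaln

def corner_certified_miss_ratio(d):
    """Equation 1: fraction of the width-eps hypercube opposite a d-dimensional
    corner that is truly non-robust (i.e., covered by the radius-eps ball
    around the corner). The complement of this fraction is robust yet
    uncertifiable by a level curve running parallel to the corner's facets."""
    log_ratio = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1) - d * np.log(2)
    return np.exp(log_ratio)

for d in [2, 5, 10, 50, 100]:
    print(d, corner_certified_miss_ratio(d))
# e.g. d=2 -> ~0.785 (= pi/4), d=10 -> ~2.5e-3, d=50 -> ~1.5e-28:
# in high dimensions, essentially the whole box is robust but uncertified.
```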
As highlighted in Section 2.3, in places where the decision boundary forms high-dimensional "corners," there may be relatively large volumes of points that are ε-far from the boundary but cannot be certified as long as the level curves simply run parallel to the boundary. In such cases, tight certification requires extra capacity specifically to round out the level curves around the corners in the decision boundary. We begin by demonstrating this concept via an illustrative example. We conclude by discussing the implications of our results and suggest avenues for future work.

3.1 AN ILLUSTRATIVE EXAMPLE OF HOW CAPACITY ENABLES TIGHT CERTIFICATION

As an example of how Lipschitz-based certification can require excess capacity beyond what is necessary to learn a robust boundary, we consider a synthetic 2-D dataset that can be robustly separated by a simple piecewise-linear boundary. An illustration is provided in Figure 1a. We begin with a decision boundary given by B = {(x_1, x_2) : max(x_1, x_2) = 0}; this boundary separates points with negative x- and y-coordinates from points in the other three quadrants, and forms a 90° corner at the origin. The data are then generated such that all the points with label 0 lie a distance of at least ε below and to the left of the boundary, and the points with label 1 lie a distance of at least ε above and to the right of the boundary. Specifically, the 1-labeled points curve around the boundary such that there is a tight margin of exactly 2ε about the boundary.

By construction, the function f(x) = [0, max(x_1, x_2)] produces logit values that yield the boundary B, with respect to which all points in the dataset are ε-locally robust. This function can be trivially implemented with minimal capacity by a simple MinMax network, f(x) = σ(xW^1)W^2, where σ is the MinMax activation function, and W^1 and W^2 are given by Equation 2:

W^1 = [[1, 0], [0, 1]]    W^2 = [[0, 0], [0, 1]]    (2)

Furthermore, the Lipschitz constant of f is 1 [Footnote 5: More properly put, the Lipschitz constant of |f_1 − f_0|—which represents the margin by which the predicted class exceeds the non-predicted class—is 1.]; this can even be tightly obtained by taking the layer-wise product of the layer operator norms, as is typically done in practice. Hence, the points that can be certified will be those for which |f_1(x) − f_0(x)| ≥ ε; that is, the points outside the level curves max(x_1, x_2) = −ε and max(x_1, x_2) = ε. However, we see that this certified frontier fails to certify many points in the positive x-y quadrant, despite the fact that all the points are indeed robust with respect to the boundary of f. This is depicted in Figure 1b. In order to certify these points, we need the level curve corresponding to f_1(x) − f_0(x) = ε to bend smoothly around the boundary, rather than forming the same 90° angle. This requires more capacity.

To gain a sense of how this plays out in practice, we consider adding capacity via expanding the number of neurons in the hidden layer (which contained only two neurons in our minimal example). In Figures 1d and 1e, we show the boundaries of two additional learned networks, g and h, with 20 and 200 internal neurons, respectively. We see that increasing the number of internal neurons by an order of magnitude yields a better set of level curves, but the network g still must compromise, as the level curves are not smooth enough to tightly follow the contour of the data. Finally, when we increase the number of internal neurons by two orders of magnitude, we at last obtain a function h that achieves nearly 100% VRA on our sample data. This function, as desired, forms essentially smooth level curves that bend around the boundary corner with a radius of ε.
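For concreteness, the minimal network of Equation 2 can be written out explicitly. Below is a small numpy sketch (our own illustration), using one common convention for the MinMax activation in which each consecutive pair of pre-activations is sorted, with the minima placed before the maxima:

```python
import numpy as np

def minmax(z):
    """MinMax activation: sort each consecutive pair of pre-activations.
    A gradient-norm-preserving, piecewise-linear permutation of its input."""
    z = z.reshape(-1, 2)
    return np.concatenate([z.min(axis=1), z.max(axis=1)])

W1 = np.array([[1.0, 0.0], [0.0, 1.0]])   # identity: pass (x1, x2) through
W2 = np.array([[0.0, 0.0], [0.0, 1.0]])   # select the max into logit 1

def f(x):
    """Minimal 2-neuron network from Equation 2: f(x) = [0, max(x1, x2)]."""
    return minmax(x @ W1) @ W2

print(f(np.array([-1.0, 3.0])))    # [0. 3.]   -> class 1 (outside neg. quadrant)
print(f(np.array([-2.0, -0.5])))   # [0. -0.5] -> class 0 (negative quadrant)
```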
Interestingly, h learns a boundary that is somewhat different from the boundary originally used to derive the data; however, both boundaries can be thought of as "equivalent" in the sense that they produce the same margin, reflecting that the optimal boundary for this dataset is not unique.

Discussion. In our example, we needed 100 times more neurons than were necessary to construct an optimal decision boundary in order to tightly certify the boundary with the Lipschitz constant. While it is difficult to extrapolate from this toy example to a "real world" scenario, our results suggest that smoothing the level curves may require significant overhead beyond the capacity necessary to produce a truly robust boundary.

Another aspect of this experiment worth noting is that when the network had insufficient capacity to learn an optimally robust, tightly certified boundary (e.g., in Figures 1c and 1d), the resulting model tended to compromise by making the corner less sharp (compared to the desired 90° angle). Geometrically, when the boundary has an inflection with a wider angle, the difference between the certifiable frontier and the frontier of robust points is less pronounced (consider, for example, what happens when the inflection approaches 180°). In effect, this means that while under-parameterization of piecewise-linear models may be a problem for robust model performance in practice, this limitation may be (at least in part) manifested as an under-fit model as opposed to one with many robust but non-certifiable points. This is reflected in the empirical results for certifiably trained models in the literature, which typically have lower "clean accuracies" than their standard-trained counterparts. However, we note that these models also exhibit a discrepancy between their certified accuracy and their vulnerability to actual attacks, leaving the possibility that they may also fail to certify some truly robust points.

3.2 POTENTIAL DRAWBACKS OF THE CAPACITY ESCAPE HATCH

As we have seen, by adding capacity, we can help overcome the limitations of piecewise linearity by enabling the network to approximate smooth curves around corners in the decision boundary. For universal tight certification, this needs to be done in the neighborhood of all corners on the decision boundary. To the extent that each corner requires independent capacity, hopes for the scalability of such an approach seem slim; albeit, VRA only requires tight certification on the data manifold, meaning that extra capacity should only be needed in places where the decision boundary has sharp inflections near in-distribution points. However, this, too, presents an interesting problem. Namely, the network only has incentive to allocate capacity to round the level curves in the places that are necessary to certify its training set, i.e., where inflections in the decision boundary encroach on training points. Meanwhile, if similar inflections exist near test points not seen during training, the learned network may fail to certify them—even if the boundary is general, and even if it is also robust. In other words, we are faced with not only the challenge of learning a generally robust boundary, but additionally of learning a generally certifiable function.
Indeed, generalization of VRA is empirically observed to be worse than the corresponding "clean accuracy" would indicate—a principle that has been noted in prior work due to its privacy implications (Yeom et al., 2020).

A Proposed Way Forward. Another possibility for addressing the fact that Lipschitz-based certification is PLL is to expand the hypothesis class to enable smooth curves in the decision surface. Ultimately, our analysis shows that Lipschitz-based certification is most effective when the level curves of the network function accurately reflect the ℓ2 distance to the boundary, which requires the possibility of smooth curves. This goal may be best achieved by purpose-built activations, as piecewise linearity stems from the choice of activation function. State-of-the-art Lipschitz-based certifiable training methods have enjoyed increased success in recent years through leveraging MinMax activations (Anil et al., 2019)—or a variant thereof proposed by Singla et al. (2022)—which are piecewise-linear. MinMax has a distinct advantage over the more common ReLU activation, due to its gradient-norm-preserving (GNP) property, which Anil et al. demonstrate is key for tight, efficient Lipschitz bounds. While the need for gradient norm preservation remains clear, we posit that some form of smoothness is an additional desirable property, as it would free the hypothesis class from piecewise linearity. We believe the task of designing suitable smooth activation functions for PLL-certified networks is a promising avenue for future work.

4 RELATED WORK

Power and Limitations of Lipschitz-based Certification. Several of the early efforts around robustness certification focused on post hoc certification of networks trained outside of the control of the certifier. This is a fundamentally hard problem, shown to be NP-complete by Katz et al. (2017) and Sinha et al. (2018). While this fundamentally limits the tractability of complete post hoc certification, the limitation is of lesser concern for modern approaches that incorporate certification into the training objective, thus encouraging learning models that better facilitate efficient certification.

The specific limitations of Lipschitz-based certification have also been of great interest in the prior literature. Most of these results particularly consider the practical problem of bounding a neural network's Lipschitz constant. For example, Huster et al. (2018) note that the common method of using the product of the layer-wise operator norms cannot tightly bound the Lipschitz constant of even basic functions in ReLU networks. Anil et al. (2019) study this point further, demonstrating a trade-off between expressive power and efficient Lipschitz bound computation in networks with non-gradient-norm-preserving activation functions. This limitation is handled by using network architectures with gradient-norm-preserving activation functions such as MinMax, and orthonormal linear operators (though the latter need not necessarily be strictly enforced, as it is a learnable objective). Anil et al. conjecture that such networks are universal 1-Lipschitz function approximators, suggesting that learning any Lipschitz function in such a way that the Lipschitz constant can be bounded tightly and efficiently is possible. By contrast, our work points to previously unstudied limitations that are separate from the Lipschitz constant bounding problem, and are indeed not mitigated through the use of MinMax activations, which are piecewise-linear.
However, we propose that the limitations brought forth in our work may similarly be addressed via novel activation functions.

On the flip side, previous work has also touched on the power of Lipschitz-based certification. Leino et al. (2021) showed that certification with the global Lipschitz constant can be as powerful as with the local Lipschitz constant when the model is under the learner's control. We extend this result in a number of key ways. First, we prove a stronger result that can be stated for all points, rather than for a finite set of points certified via the local Lipschitz constant. Second, we explicitly consider the hypothesis class, demonstrating that smoothness is a necessary condition to achieve this result.

Capacity Requirements for Robust Neural Networks. Understanding the role of capacity in deep neural networks has been a topic of interest in general, particularly due to the demonstrated effectiveness of highly over-parameterized models (Arora et al., 2018; Bubeck & Sellke, 2021; Du et al., 2019; Garg et al., 2022; Zhang et al., 2017). Recent work has also investigated this subject in the particular context of robust models. Bubeck & Sellke (2021) showed that under mild regularity assumptions, learning a highly accurate model with small Lipschitz constant requires significantly more parameters than would be required with no constraint on the Lipschitz constant—where the capacity overhead, in terms of the number of parameters, scales with the dimension. While a controlled Lipschitz constant is central to successful Lipschitz-based certification, our work (e.g., our example in Section 3.1) shows that a Lipschitz interpolation between points of opposite class is not sufficient for certification. As our analysis is focused on certification rather than Lipschitz interpolation, we complement the work of Bubeck & Sellke, showing that even further capacity may be required to appropriately bend the function's level curves to facilitate Lipschitz-based certification.

In addition to the information-theoretic capacity requirements, large numbers of parameters in deep networks may be necessary to facilitate efficient learning (Arora et al., 2018; Du et al., 2019). Recently, Garg et al. (2022) showed that robust learning in particular may require even greater over-parameterization than standard learning. Results such as these are complementary to work such as ours, which focuses on minimal parameterizations.

Randomized Smoothing. Our work has focused on deterministic certification. By contrast, randomized smoothing (Cohen et al., 2019; Lecuyer et al., 2018) has become a popular method that instead provides a statistical guarantee of robustness. Randomized smoothing (RS) essentially modifies the original function by predicting the expected label under Gaussian noise. [Footnote 6: Prior work has considered other distributions as well (Yang et al., 2020a).] These predictions are empirically determined through sampling, with the statistical certificate depending on the unanimity of the sample labels. While RS provides a weaker robustness guarantee, it solidly outperforms deterministic methods in terms of certified accuracy. Interestingly, it seems clear that RS is not PLL, since it naturally smooths piecewise-linear networks, leading to a smooth boundary and certified frontier—this may be one of the keys to its success. This observation gives further support to the notion that state-of-the-art deterministic methods may be held back by piecewise linearity, and may benefit from smooth activation functions.
5 CONCLUSIONS AND FUTURE DIRECTIONS

Incorporating Lipschitz-based certification into robust training procedures has proven to be the most effective way to achieve high deterministic ℓ2 verified-robust accuracy yet considered in the literature. Due to our Theorem 2, there is reason to believe Lipschitz-based certification has the power to remain as promising as current results suggest. However, we also showed that restricted to the hypothesis class of piecewise-linear networks, as has been the standard regime, Lipschitz-based certification becomes fundamentally limited. For piecewise-linear networks, this means that tight Lipschitz-based certification may require significantly more parameters, which, even if tractable, can complicate certifiably robust generalization (e.g., see Section 3.2). On the other hand, rather than viewing this as a fundamental drawback for Lipschitz-based certification, we propose that purpose-built activations—with the correct smoothness and gradient-norm-preserving properties—are a promising avenue for future work to free the most promising form of efficient deterministic certification from the limitations of piecewise linearity.

A PROOFS

A.1 PROOF OF THEOREM 1

Theorem Statement. Any piecewise-linear limited certification procedure is incomplete on the hypothesis class of piecewise-linear networks.

Proof. It suffices to show that there exists a boundary achievable by a piecewise-linear network which no PLL certification method can tightly certify. We proceed by producing a piecewise-linear boundary that induces a smooth robust frontier. This is sufficient to prove our theorem, as

∆( C_cert(f, ε) ) ≠ ∆( R(F, ε) ) ⟹ C_cert(f, ε) ≠ R(F, ε).

Consider the 2-D boundary given by max(x, y) = 0. Clearly, this boundary exists within the class of piecewise-linear functions, as the function f(x, y) = max(x, y) is piecewise-linear. Now consider the points in the positive x-y quadrant. The points in this quadrant that are at distance ε from the boundary are given by √(x² + y²) = ε, which is not piecewise-linear. By definition, any certification method that is PLL must have a certified frontier that is piecewise-linear. Thus, the certified frontier of any such method cannot be equal to √(x² + y²) = ε in this quadrant.

A.2 PROOF OF THEOREM 2

Theorem Statement. When the hypothesis class, F, is given as the set of Lipschitz functions, Lipschitz-based certification is complete on F.

Proof. Let F be the set of Lipschitz functions. Consider the decision boundary of any function f ∈ F. Define f′ as follows: let d(x) be the minimum distance of x from the decision boundary and let f′(x) = d(x) · 1_{F(x)}, where 1_{F(x)} is the one-hot encoding of F(x).

First, observe that f′_j − f′_i is 1-Lipschitz for all i ≠ j. To see this, consider the following. The Lipschitz constant is given by

sup_{x,x′} |(f′_j(x) − f′_i(x)) − (f′_j(x′) − f′_i(x′))| / ||x − x′|| = sup_{x,x′} |f′_j(x) − f′_j(x′) + f′_i(x′) − f′_i(x)| / ||x − x′||    (3)

Consider points x and x′, and let us assume that ||x − x′|| = δ. We would like to bound the quantity given by (4), the numerator in (3), by δ:

|f′_j(x) − f′_j(x′) + f′_i(x′) − f′_i(x)|    (4)

There are a few cases to consider. First, if F(x) and F(x′) are both different from i and j, then (4) is 0 ≤ δ. Since (4) is symmetric in both i and j, and x and x′, without loss of generality, we will assume F(x) = j.
This leaves two cases: when F(x′) = j, and when F(x′) ≠ j (in the latter case we will not be concerned with whether or not F(x′) = i).

In the first case we have

(4) = |f′_j(x) − f′_j(x′)| = |d(x) − d(x′)|    (5)
    = d(x) − d(x′)   without loss of generality    (6)

Let a be the nearest point on the boundary to x′, such that d(x′) = ||x′ − a||. Thus,

d(x) ≤ ||x − a||   as a is on the boundary    (7)
     ≤ ||x − x′|| + ||x′ − a||   by the triangle inequality    (8)
     = δ + d(x′)    (9)
⟹ d(x) − d(x′) ≤ δ   as desired    (10)

In the second case, x and x′ are given different labels and we have

(4) = |f′_j(x) + f′_i(x′)|    (11)
    ≤ d(x) + d(x′)   as f′_i(x′) is at most d(x′) (achieved when F(x′) = i)    (12)

Since x and x′ are given different labels, there must be at least one part of the decision boundary that bisects the line segment connecting x and x′; let a be this intersection point. Additionally, since a is on the boundary, we must have that d(x) ≤ ||x − a|| and d(x′) ≤ ||x′ − a||. Thus, as desired,

d(x) + d(x′) ≤ ||x − a|| + ||x′ − a|| = δ    (13)

This allows us to conclude that f′_j − f′_i is 1-Lipschitz for all i ≠ j, as claimed.

The points that are certified by Lipschitz-based certification are those for which (14) holds, where j = F(x) and K_ji is the Lipschitz constant of f′_j − f′_i:

min_{i≠j} { f′_j(x) − f′_i(x) − εK_ji } ≥ 0    (14)

Notice that when i ≠ F(x), f′_i(x) = 0. Thus (14) can be simplified to f′_j(x) = d(x) ≥ ε, noting also that K_ji = 1 for all i, j. Therefore, the points that can be certified via Lipschitz-based certification are those for which d(x) ≥ ε, which are precisely the points that are ε-locally robust.

A.3 PROOF OF PROPOSITION 3

Theorem Statement. Lipschitz-based certification is piecewise-linear limited.

Proof. Assume the function, f, being certified is piecewise-linear. Without loss of generality, consider inputs x for which the network predicts class j. The margin by which class j surpasses all other classes is given by m(x) = min_{i≠j} {f_j(x) − f_i(x)}. Note that m is piecewise-linear as f is piecewise-linear. Let K be the Lipschitz constant of m. The largest radius that can be certified at x is then m(x)/K. Thus, the certified frontier is given by m(x)/K = ε; this corresponds to the level curve of m given by m = ε·K. Since m is piecewise-linear, this level curve is piecewise-linear. Thus, the certified frontier is piecewise-linear, and Lipschitz-based certification is PLL.

B LIMITATIONS OF OTHER CERTIFICATION METHODS

B.1 LIMITATIONS OF LOCAL-LIPSCHITZ-BASED CERTIFICATION

State-of-the-art deterministic ℓ2 certified performance is currently achieved using Lipschitz-based certification, which outperforms other types of certified training methods (Leino et al., 2021; Trockman & Kolter, 2021), such as those based on convex relaxations—e.g., (Wong et al., 2018)—or maximizing linear regions—e.g., (Croce et al., 2019; Xiao et al., 2019). Unsurprisingly, however, methods that use the local Lipschitz constant for certification can achieve similarly high VRA (Huang et al., 2021), though this comes at the cost of significantly slower certification. The local Lipschitz constant at a point x is given by K_ε(x) in Definition 5, which essentially corresponds to the maximum slope of the function within an ε-neighborhood of x.

Definition 5 (Local Lipschitz Constant). The local Lipschitz constant is given by
K_ε(x) = sup_{x_1, x_2 : ||x − x_1|| ≤ ε, ||x − x_2|| ≤ ε} { |f(x_1) − f(x_2)| / ||x_1 − x_2|| }

Local-Lipschitz-based certification, similar to Lipschitz-based certification (Section 2.2), certifies points, x, when the margin by which the top-predicted class, F(x), exceeds all other classes is greater than ε · K_ε(x). While the local Lipschitz constant is always a lower bound for the global Lipschitz constant—and therefore local-Lipschitz-based certification can possibly be tighter—local-Lipschitz-based certification is nonetheless equally limited.

We will consider a generous setting in which the bound used for certification is exact, i.e., where the certification procedure has oracle access to K_ε(x). Because K_ε(x) is not piecewise-linear, local-Lipschitz-based certification is not strictly piecewise-linear limited (PLL) in this setting. It is worth noting, however, that methods for approximating the local Lipschitz constant may not leverage this smoothness in practice. Regardless, we show that local-Lipschitz-based certification is incomplete on piecewise-linear networks (Theorem 5). This result is related to the fact that when the learner is given control over the implementation of the boundary, (global) Lipschitz-based certification can match the power of local-Lipschitz-based certification; this result has been proven in a slightly weaker formulation by Leino et al. (2021). We provide an alternative theorem statement and proof here that better aligns with the insights in this work.

Theorem 5. Local-Lipschitz-based certification is not complete on the hypothesis class of piecewise-linear networks.

Proof. It suffices to show that there exists a boundary achievable by a piecewise-linear network for which no corresponding piecewise-linear implementation can be tightly certified by local-Lipschitz-based certification. Recall that by Corollary 4 there exists such a boundary for (global) Lipschitz-based certification. We will consider one of the same such boundaries.

For a particular value of ε, consider the points ∆(R(F, ε)), which are at distance exactly ε from the boundary. There are two cases to consider: either (1) the local Lipschitz constant is the same everywhere, i.e., ∀ε > 0, ∀x_1, x_2 ∈ ∆(R(F, ε)), K_ε(x_1) = K_ε(x_2); or (2) there is some variation in the local Lipschitz constant, such that ∃ε > 0, x_1, x_2 ∈ ∆(R(F, ε)) where K_ε(x_1) ≠ K_ε(x_2).

In the first case, we see that K_ε(x) = K (the global Lipschitz constant), meaning that local-Lipschitz-based certification will certify the exact same points as (global) Lipschitz-based certification. Thus, by Corollary 4, there must be a point which is robust at radius ε but not certifiable.

In the second case, without loss of generality, assume K_ε(x_1) > K_ε(x_2). Because f is piecewise-linear, it is comprised of a finite number of linear functions, which in turn have a finite number of distinct slopes (gradient norms). Thus, if K_ε(x_1) > K_ε(x_2), then K_ε(x_1) − K_ε(x_2) = δ, where δ belongs to some finite set of strictly positive values. Furthermore, without loss of generality, x_1 and x_2 can be chosen to be arbitrarily close together, i.e., they lie arbitrarily near a point where the local Lipschitz constant changes. We will therefore consider x_1 and x_2 chosen according to Equation 15:

||x_1 − x_2|| < ε · δ / K    (15)

Let m_2 be the margin by which the top-predicted class, F(x_2), exceeds all other classes. The maximum radius that can be certified at x_2 is thus m_2/K_ε(x_2). Note that as certification is sound, we have

m_2 / K_ε(x_2) ≤ ε    (16)

Now consider the maximum radius that can be certified at x_1.
Let m_1 be the margin by which the top-predicted class, F(x_1), exceeds all other classes. The maximum radius that can be certified at x_1 is thus m_1/K_ε(x_1):

m_1 / K_ε(x_1) = m_1 / (K_ε(x_2) + δ)   by assumption    (17)
              ≤ (m_2 + K||x_1 − x_2||) / (K_ε(x_2) + δ)   by definition of the Lipschitz constant    (18)
              < (m_2 + ε·δ) / (K_ε(x_2) + δ)   by our choice of ||x_1 − x_2|| in (15)    (19)
              ≤ (ε·K_ε(x_2) + ε·δ) / (K_ε(x_2) + δ)   by (16)    (20)
              = ε    (21)

Thus, we see that x_1 cannot be certified with radius ε, despite that its distance from the boundary is exactly ε.

B.2 OTHER PIECEWISE-LINEAR LIMITED METHODS

Our work focuses primarily on Lipschitz-based certification, which we demonstrate is fundamentally limited on the hypothesis class of piecewise-linear networks. However, this limitation is not due specifically to the use of the Lipschitz constant per se; instead, we attribute it more generally to the fact that Lipschitz-based certification always produces a piecewise-linear certified frontier on piecewise-linear networks, a property we refer to as PLL (Definition 4). In this section we briefly discuss how this property may apply to other flavors of certification techniques that have been proposed in the literature.

Convex Relaxations and Dual Networks. One classic approach for certification is through convex relaxation. A survey of such methods is given by Salman et al. (2019), who point out the limitations (regarding tight certification) of convex relaxations (though the authors do not consider our setting where the learner may control the implementation of the boundary, but rather focus on post hoc certification). Though many approaches in this family have been proposed, we will consider two baseline methods that capture a primal and dual formulation of convex relaxations: Fast-Lin (Weng et al., 2018), and an approach proposed by Wong & Kolter (2018), often referred to as "KW."

Fast-Lin directly derives upper and lower bounds on the output of a ReLU network in order to determine if an adversarial example might exist. This is done by iteratively computing upper and lower bounds for the neurons in each layer and using them to replace the ReLU activations with linear upper and lower bounds. This computation resembles a piecewise-linear network, suggesting that Fast-Lin is PLL.

The KW approach formulates the adversary as an LP that optimizes over the convex outer approximation of the set of top-level activations reachable through a norm-bounded perturbation. Crucially, for the sake of tractability, the LP can be bounded by the feasible set of the dual, which Wong & Kolter show can be expressed as a dual network, which resembles a backwards pass in the network being certified. For ReLU networks, the activations in the dual network are replaced with their upper convex envelopes (a linear function) over the bounded set [ℓ, u], where ℓ and u represent lower and upper bounds on the pre-ReLU neural activations. The upper and lower bounds can be iteratively computed in a similar way as in Fast-Lin; thus, in its simplest form [Footnote 7: This approach has been refined in subsequent work that we do not consider here (Wong et al., 2018).], the dual network inherits the piecewise linearity of the original ReLU network being certified, suggesting the resulting certified frontier is piecewise-linear, and certification is PLL.

Hyperplane Projections. As exact certification is NP-complete, the literature has often turned to training procedures that help simple, approximate certification enjoy greater success.
In piecewise-linear networks, the input space can be partitioned into a polyhedral complex where each convex region corresponds to a single activation pattern, over which the network is linear (Croce et al., 2019; Fromherz et al., 2021; Jordan et al., 2019). Motivated by this view of ReLU networks, one family of robust training approaches attempts to expand the linear regions of the network to simplify the combinatorial analysis of the possible ReLU activation patterns (Croce et al., 2019; Xiao et al., 2019). Croce et al. proposed a simple certification technique for networks trained with their "Maximum Margin Regularization" (MMR), where a point, x, is certified only if (1) the entire ε-ball around x is contained in a single convex activation region, and (2) the linear function corresponding to the region does not have a boundary within ε of x. This approach is clearly PLL, as the certified regions can be obtained by shrinking each activation region (possibly split in two if a linear decision boundary crosses it) by ε. Since the original regions are convex polytopes, so too are the certified regions; thus the certified frontier is piecewise-linear. In contrast to our findings for Lipschitz-based certification, it is worth noting that the limitations of this approach go beyond PLL, as completeness of the MMR approach is in direct conflict with non-linearity; and moreover, the approach is designed specifically for piecewise-linear networks.

C DETAILS ON EXPERIMENTS

The experiments presented in Figure 1 in Section 3 were performed using the gloro Python library, which implements the GloRo Net method of Leino et al. (2021) for training certifiably robust models by incorporating Lipschitz-based certification into training. All networks in the experiments consisted of a 1-hidden-layer dense network with MinMax activations (Anil et al., 2019); three specific architectures were used, with 2, 20, and 200 hidden units, respectively. Models were trained for 64 epochs, with a batch size of 128. We chose hyperparameters inspired by those used by Leino et al. (see the original paper for details on the meaning of the various hyperparameters); namely, we used GloRo-TRADES loss with λ = 1.2, we scaled ε logarithmically to its ultimate value of 0.5 by the half-way point of training, and we linearly decreased the learning rate from 10⁻³ to 0 half-way through training.

D AN ILLUSTRATIVE EXAMPLE OF THE CORNER PROBLEM

For illustrative purposes, a diagram is provided in Figure 2 that serves as a visual explanation of the "corner problem" described in Section 2.3. The boundary of a neural network, shown by the bold black line, forms a sharp corner. The complement of the robust region, i.e., the set of points that are not robust, is shown in gray. A simple implementation of this boundary has level curves that make similar sharp corners; the level curve corresponding to the certified frontier is shown by the dotted line, and the certified region is colored in blue. The region opposite the corner in the boundary is highlighted. We see that in this region, there is a set of points, shown in orange, that are not certified, despite the fact that they are robust, being at distance greater than ε from the boundary.
In this two-dimensional example, these falsely flagged points make up a relatively small fraction of the uncertified points opposite the corner (represented as the union of the orange points and the highlighted gray points in the diagram); however, in high dimensions, virtually all uncertified points in this region would be falsely flagged, as indicated by Equation 1.
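The two-dimensional corner problem can also be checked by simple Monte Carlo. The sketch below (our own illustration) samples the width-ε box opposite the corner of the boundary max(x, y) = 0 and compares the truly robust points against what the minimal implementation's margin certifies:

```python
import numpy as np

# In the positive quadrant, the true distance to the boundary max(x, y) = 0
# is the l2 distance to the corner, sqrt(x^2 + y^2); the minimal piecewise-
# linear implementation certifies via its margin max(x, y) (Lipschitz const 1).
eps = 1.0
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, eps, size=(100_000, 2))  # width-eps box opposite the corner

robust = np.hypot(pts[:, 0], pts[:, 1]) >= eps        # truly eps-far from boundary
certified = np.maximum(pts[:, 0], pts[:, 1]) >= eps   # margin check (never true here)

falsely_flagged = robust & ~certified
print(falsely_flagged.mean())  # ~0.215, i.e., 1 - pi/4
```

In 2-D the falsely flagged fraction is 1 − π/4 ≈ 0.215; by Equation 1, the analogous fraction approaches 1 as the dimension of the corner grows.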
1. What are the key contributions and findings of the paper regarding piecewise-linear networks and PLL certification procedures?
2. What are the strengths and weaknesses of the paper, particularly in terms of its clarity, definitions, evidence, and proof?
3. Do you have any suggestions for improving the paper's title, abstract, or content?
4. How does the reviewer assess the paper's clarity, quality, novelty, and reproducibility?
5. Are there any minor issues or typos that the reviewer noticed while reading the paper?
Summary Of The Paper

The paper shows how piecewise-linear networks limit the tightness of "piecewise-linear limited" (PLL) certification procedures (such as Lipschitz-based certification). In particular, it demonstrates why piecewise-linear networks that require a small capacity to produce a robust boundary can require a much larger capacity to be verifiable via a PLL certification procedure. The contributions of the paper include:
- Clear definitions for new ideas (e.g. completeness of certification procedures on a hypothesis class of decision boundaries; the notion of a certified frontier; PLL certification)
- Evidence (example + formal proof) explaining why PLL certification procedures can perform poorly around sharp inflection points in the boundary found in piecewise-linear networks.

Strengths And Weaknesses

The paper is clear and presents a novel idea that advances our understanding of Lipschitz-based verifiability. The paper has two key weaknesses:
1. It is not clear what certification techniques are PLL; the paper could do a survey of leading techniques and characterize each of them. Even better: the paper could demonstrate that any non-PLL certification procedure must be exponential in the number of non-linear units.
2. The paper does not provide any hypothesis classes that are both gradient-norm-preserving and yet enable smooth curves in the decision surface. While showing how PLL certification procedures are limited is a strong result, the paper would have been made even stronger if paired with examples of hypothesis classes that reduce the excess capacity required. A negative result (i.e. no hypothesis class requires less capacity than piecewise-linear networks) would also be valuable.

Some suggestions:
- Suggestion: The title of the paper is very broad. What about specifying exactly when limitations are encountered (e.g. "Limitations of Efficient Lipschitz-based Robustness Certification for Piecewise Linear Networks")
- Suggestion: Similarly, the abstract is vague on what certification techniques are affected by piecewise-linearity (it refers to "leading certification techniques", while only showing the Lipschitz-based methods are PLL). If no other certification techniques are shown to be PLL, the paper should either 1) provide evidence that Lipschitz-based methods significantly outstrip all other techniques, or 2) be more specific that only Lipschitz-based methods are affected.

Clarity, Quality, Novelty And Reproducibility

Clarity: The paper does a good job explaining ideas, with clear definitions and no unnecessary jargon. I particularly liked the example in Section 3.1. Some typos I noticed while reading the paper:
- Page 4: "Lipschitz-based certification has proven effective in the Literature"
- Page 6: "Would be necessary to simply separate them Bubeck & Sellke (2021)"
- Page 9: "Due our Theorem 2"

Quality: The paper advances our understanding of the role of capacity in Lipschitz-based verifiability. This builds on existing work that demonstrates that constraining the Lipschitz constant of the network (without concerning ourselves with verifiability) means that additional capacity is required in the network. As outlined in Section 3.2, other researchers could build on this work to improve Lipschitz-based certifiable training methods by designing better hypothesis families.

Novelty: To the best of my knowledge, the results and ideas in this paper are novel.

Reproducibility: Reproducibility is not much of an issue as the paper is largely theoretical.
Minor issues: No details are provided on how the networks in Section 3.1 were trained. This makes it difficult to be certain that the order-of-magnitude increase in the number of neurons required to get close to perfect VRA was not due to improper training. (For example: were the networks trained adversarially? If so, what attack was used during adversarial training?)
ICLR
Title
Correcting the Sub-optimal Bit Allocation

Abstract
In this paper, we investigate the problem of bit allocation in Neural Video Compression (NVC). First, we reveal that a recent bit allocation approach claimed to be optimal is, in fact, sub-optimal due to its implementation. Specifically, we find that its sub-optimality lies in the improper application of semi-amortized variational inference (SAVI) on latents with non-factorized variational posteriors. Then, we show that the corrected version of SAVI on non-factorized latents requires recursively applying back-propagation through gradient ascent, based on which we derive the corrected optimal bit allocation algorithm. Due to the computational infeasibility of the corrected bit allocation, we design an efficient approximation to make it tractable. Empirical results show that our proposed correction significantly improves the incorrect bit allocation in terms of R-D performance and bitrate error, and outperforms all other bit allocation methods by a large margin. The source code is provided in the supplementary material.

1 INTRODUCTION

Recently, bit allocation for Neural Video Compression (NVC) has drawn growing attention thanks to its great potential in boosting compression performance. Due to the frame reference structure in video coding, it is sub-optimal to use the same R-D (Rate-Distortion) trade-off parameter λ for all frames. In the bit allocation task, bitrate is allocated to different frames/regions to minimize the R-D cost R + λD, where R is the total bitrate, D is the total distortion, and λ is the Lagrangian multiplier controlling the R-D trade-off. Li et al. (2022) are the pioneers of bit allocation for NVC, who improve the empirical R-D model from traditional video codecs (Li et al., 2014; 2016) and solve the per-frame Lagrangian multiplier λ. Other concurrent works adopt simple heuristics for coarse bit allocation (Cetin et al., 2022; Hu et al., 2022).

Most recently, BAO (Bit Allocation using Optimization) (Xu et al., 2022) proposes to formulate bit allocation as semi-amortized variational inference (SAVI) (Kim et al., 2018; Marino et al., 2018) and solves it by gradient-based optimization. Specifically, it directly optimizes the variational posterior parameters to be quantized and encoded by gradient ascent, aiming at maximizing the minus overall R-D cost, which is also the evidence lower bound (ELBO). BAO does not rely on any empirical R-D model and thus outperforms previous work. Further, BAO shows its optimality by proving its equivalence to bit allocation with a precise R-D model.

In this paper, we first show that BAO (Xu et al., 2022) is, in fact, sub-optimal due to its implementation. Specifically, we find that it abuses SAVI (Kim et al., 2018; Marino et al., 2018) on latents with non-factorized variational posteriors, which brings incorrect gradient signals during optimization. To solve this problem, we first extend SAVI to non-factorized latents by back-propagating through gradient ascent (Domke, 2012). Then, based on that, we correct the sub-optimal bit allocation in BAO to produce truly optimal bit allocation for NVC. Furthermore, we propose a computationally feasible approximation to this correct but intractable bit allocation method. And we show that our approximation outperforms the incorrect bit allocation (BAO) in terms of R-D performance and bitrate error, and performs better than all other bit allocation methods.
To summarize, our contributions are as follows:

• We demonstrate that a previously claimed optimal bit allocation method is actually sub-optimal. We find that its sub-optimality comes from the improper application of SAVI to non-factorized latents.
• We present the correct way to conduct SAVI on non-factorized latents by recursively applying back-propagation through gradient ascent. Based on this, we derive the corrected optimal bit allocation algorithm for NVC.
• Furthermore, we propose a computationally efficient approximation of the optimal bit allocation to make it feasible. Our proposed approach improves the R-D performance and bitrate error over the incorrect bit allocation (BAO), and outperforms all other bit allocation methods for NVC.

2 PRELIMINARIES

2.1 NEURAL VIDEO COMPRESSION

The input of NVC is a GoP (Group of Pictures) x_{1:T}, where x_i ∈ R^{H×W} is the i-th frame with H × W pixels, and T is the number of frames inside the GoP. Most of the works in NVC follow a latent variable model with temporal autoregressive relationships (Yang et al., 2020a). Specifically, to encode x_i, we first extract the motion latent w_i = f^w_ϕ(x_i, x′_{i−1}) from the current frame x_i and the previous reconstructed frame x′_{i−1}, where f^w_ϕ(·) is the motion encoder parameterized by ϕ. [Footnote 1: Following previous works in deep generative modeling (Kingma & Welling, 2013; Kim et al., 2018), we denote all parameters related to the encoder as ϕ, and all parameters related to the decoder and prior as θ.] Then, we encode the quantized latent w̃_i = ⌊w_i⌉ with the probability mass function (pmf) estimator P_θ(w̃_i|w̃_{<i}, ỹ_{<i}) parameterized by θ, where ⌊·⌉ is rounding. Then, we obtain the residual latent y_i = f^y_ϕ(x_i, x′_{i−1}, w̃_i), where f^y_ϕ(·) is the residual encoder. Then, similar to how we treat w_i, we encode the quantized latent ỹ_i = ⌊y_i⌉ with pmf P_θ(ỹ_i|w̃_{≤i}, ỹ_{<i}). Finally, we obtain the reconstructed frame x′_i = g^x_θ(x′_{i−1}, w̃_i, ỹ_i), where g^x_θ(·) is the decoder parameterized by θ.

As only the motion latent w̃_i and residual latent ỹ_i exist in the bitstream, the above process can be simplified as Eq. 1 and Eq. 2, where f_ϕ(·) is the generalized encoder and g_θ(·) is the generalized decoder. The target of NVC is to minimize the per-frame R-D cost R_i + λ_i D_i (Eq. 3), where R_i is the bitrate, D_i is the distortion and λ_i is the Lagrangian multiplier controlling the R-D trade-off. The bitrate R_i and distortion D_i are computed as Eq. 2, where d(·, ·) is the distortion metric. And λ_i D_i can be further interpreted as the data likelihood term −log p_θ(x_i|w̃_{≤i}, ỹ_{≤i}) so long as we treat λ_i D_i as the energy function of a Gibbs distribution (Minnen et al., 2018). Specifically, when d(·, ·) is MSE, we can interpret λ_i D_i = −log p_θ(x_i|w̃_{≤i}, ỹ_{≤i}) + const, where p_θ(x_i|w̃_{≤i}, ỹ_{≤i}) is a Gaussian distribution N(x̂_i, 1/(2λ_i) I).

w_i = f_ϕ(x_i, w̃_{<i}, ỹ_{<i}),  y_i = f_ϕ(x_i, w̃_{≤i}, ỹ_{<i}),  where w̃_i = ⌊w_i⌉, ỹ_i = ⌊y_i⌉    (1)

R_i = −log P_θ(w̃_i, ỹ_i|w̃_{<i}, ỹ_{<i}),  D_i = d(x_i, g_θ(w̃_{≤i}, ỹ_{≤i}))    (2)

max −(R_i + λ_i D_i)    (3)

On the other hand, NVC is also closely related to the Variational Autoencoder (VAE) (Kingma & Welling, 2013). As the rounding ⌊·⌉ is not differentiable, Ballé et al. (2016); Theis et al. (2017) propose to relax it by additive uniform noise (AUN), and replace w̃_i = ⌊w_i⌉, ỹ_i = ⌊y_i⌉ with w̃_i = w_i + U(−0.5, 0.5), ỹ_i = y_i + U(−0.5, 0.5). Under such formulation, the above encoding-decoding process becomes a VAE on the graphical model w̃_{≤i}, ỹ_{≤i} → x_i with variational posterior as Eq. 4, where w_i, y_i play the role of variational posterior parameters. Then, minimizing the overall R-D cost (Eq. 3) is equivalent to maximizing the evidence lower bound (ELBO) (Eq. 5).
q_ϕ(w̃_i|x_i, w̃_{<i}, ỹ_{<i}) = U(w_i − 0.5, w_i + 0.5),  q_ϕ(ỹ_i|x_i, w̃_{≤i}, ỹ_{<i}) = U(y_i − 0.5, y_i + 0.5)    (4)

−(R_i + λ_i D_i) = E_{q_ϕ}[ log P_θ(w̃_i, ỹ_i|w̃_{<i}, ỹ_{<i}) {= −R_i} + log p_θ(x_i|w̃_{≤i}, ỹ_{≤i}) {= −λ_i D_i} − log q_ϕ {bits-back bitrate: 0} ]    (5)

2.2 BIT ALLOCATION FOR NEURAL VIDEO COMPRESSION

It is well known to the video coding community that using the same R-D trade-off parameter λ_i to optimize the R-D cost in Eq. 3 for all T frames inside a GoP is sub-optimal (Li et al., 2014; 2016). This sub-optimality comes from the frame reference structure and is explained in detail by Li et al. (2022); Xu et al. (2022). The target of bit allocation is to maximize the minus of the overall R-D cost (ELBO), L, as in Eq. 6, given the overall R-D trade-off parameter λ_0, instead of maximizing the L_i of each frame i separately. The pioneer work of bit allocation in NVC (Li et al., 2022) follows bit allocation for traditional video codecs (Li et al., 2016). Specifically, it adopts empirical models to approximate the relationship of the rate dependency ∂R_{i+1}/∂R_i and distortion dependency ∂D_{i+1}/∂D_i between frames. Then it takes those models into Eq. 6 to solve λ*_{1:T} explicitly as Eq. 7.left. However, its performance heavily relies on the accuracy of the empirical models.

max L = Σ_{i=1}^{T} L_i,  where L_i = −(R_i + λ_0 D_i)    (6)

λ*_{1:T} ← argmax_{λ_{1:T}} L(λ_{1:T}),  versus  w*_{1:T}, y*_{1:T} ← argmax_{w_{1:T}, y_{1:T}} L(w_{1:T}, y_{1:T})    (7)

On the other hand, BAO (Xu et al., 2022) does not solve λ*_{1:T} explicitly. Instead, it adopts SAVI (Kim et al., 2018; Marino et al., 2018) to achieve implicit bit allocation. To be specific, it initializes the variational posterior parameters w^0_{1:T}, y^0_{1:T} from fully amortized variational inference (FAVI) as Eq. 1. Then, it optimizes w_{1:T}, y_{1:T} via gradient ascent to maximize L as Eq. 7.right. During this procedure, no empirical model is required. BAO further proves that optimizing Eq. 7.right is equivalent to optimizing Eq. 7.left with precise rate and distortion dependency models ∂R_{i+1}/∂R_i, ∂D_{i+1}/∂D_i (see Thm. 1, Thm. 2 in Xu et al. (2022)). Thus, BAO claims that it is optimal assuming gradient ascent achieves the global maximum. However, in the next section, we show that BAO (Xu et al., 2022) is in fact sub-optimal due to its implementation.

3 WHY BAO IS SUB-OPTIMAL

BAO (Xu et al., 2022) achieves the SAVI (Kim et al., 2018; Marino et al., 2018) target in Eq. 7.right by gradient-based optimization. More specifically, its update rule is described by Eq. 8 and Eq. 9, where K is the total number of gradient ascent steps, and w^k_i, y^k_i are the posterior parameters w_i, y_i after k steps of gradient ascent. In the original paper of BAO, the authors also find that directly optimizing w_i, y_i simultaneously by Eq. 8 and Eq. 9 performs worse than optimizing y_i alone using Eq. 9, but they have not offered any explanation. It is obvious that optimizing y_i alone is sub-optimal. However, it is not obvious why jointly optimizing w_i, y_i with Eq. 8 and Eq. 9 fails.

w^{k+1}_i ← w^k_i + α dL(w^k_{1:T}, y^k_{1:T})/dw^k_i,  where dL(w^k_{1:T}, y^k_{1:T})/dw^k_i = Σ_{j=i}^{T} ∂L_j(w^k_{1:j}, y^k_{1:j})/∂w^k_i    (8)

y^{k+1}_i ← y^k_i + α dL(w^k_{1:T}, y^k_{1:T})/dy^k_i,  where dL(w^k_{1:T}, y^k_{1:T})/dy^k_i = Σ_{j=i}^{T} ∂L_j(w^k_{1:j}, y^k_{1:j})/∂y^k_i    (9)

In fact, the update rule in Eq. 8 and Eq. 9 is exactly the SAVI (Kim et al., 2018; Marino et al., 2018) when w_i, y_i fully factorize (e.g., the full factorization used in mean-field (Blei et al., 2017)).
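For concreteness, the update rule of Eq. 8 and Eq. 9 is what a straightforward implementation computes when all latents are exposed to the optimizer as free leaves. A minimal PyTorch sketch (our own illustration; rd_cost is a hypothetical stand-in for the network's differentiable overall R-D cost Σ_i (R_i + λ_0 D_i)):

```python
import torch

def bao_joint_savi(rd_cost, w_init, y_init, steps=100, lr=1e-3):
    """Joint SAVI update of Eq. 8 / Eq. 9: all latents w_{1:T}, y_{1:T} are
    initialized from FAVI and then updated *simultaneously* by gradient steps
    on the overall R-D cost (descending R + lambda*D == ascending L)."""
    w = [wi.detach().clone().requires_grad_(True) for wi in w_init]
    y = [yi.detach().clone().requires_grad_(True) for yi in y_init]
    opt = torch.optim.SGD(w + y, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rd_cost(w, y).backward()  # per-frame partials w.r.t. the current latents
        opt.step()
    return [wi.detach() for wi in w], [yi.detach() for yi in y]
```

Because the latents are detached from the encoder that produced them, autograd returns only the sums of per-frame partial derivatives ∂L_j/∂w_i, ∂L_j/∂y_i, and all latents share the same step index; these are precisely the two issues analyzed next in Sec. 3.1 and Sec. 3.2.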
However, in NVC the w_i, y_i have complicated auto-regressive relationships (see Eq. 1 and Fig. 1.(a)). Abusing SAVI on non-factorized latents causes gradient error in two aspects: (1) the total derivatives dL/dw_i, dL/dy_i are incomplete; (2) the total derivatives dL/dw_i, dL/dy_i and partial derivatives ∂L_j/∂w_i, ∂L_j/∂y_i are evaluated at the wrong value. In the next two sections, we elaborate those two issues with the w_i-related equations in the main text and the y_i-related equations in Appendix A.2.

3.1 INCOMPLETE TOTAL DERIVATIVE EVALUATION

According to the latent generation procedure described by Eq. 1 and Eq. 2, we draw the computational graph to describe the latent dependency as Fig. 1.(a). Based on that, we expand the total derivatives dL/dw_i, dL/dy_i as Eq. 10 and Eq. 22:

dL(w_{1:T}, y_{1:T})/dw_i = Σ_{j=i}^{T} dL_j(w_{1:j}, y_{1:j})/dw_i

dL_j(w_{1:j}, y_{1:j})/dw_i = Σ_{l=i+1}^{j} (∂w_l/∂w_i) dL_j(w_{1:j}, y_{1:j})/dw_l + Σ_{l=i}^{j} (∂y_l/∂w_i) dL_j(w_{1:j}, y_{1:j})/dy_l  [ignored by BAO]  +  ∂L_j(w_{1:j}, y_{1:j})/∂w_i  [considered by BAO]    (10)

As shown in Eq. 8, Eq. 9 and Fig. 1.(b), BAO (Xu et al., 2022) treats the total derivatives dL/dw_i, dL/dy_i as the sum of the frame-level partial derivatives ∂L_j/∂w_i, ∂L_j/∂y_i, which is the direct contribution of the i-th frame's latents w_i, y_i to the j-th frame's R-D cost L_j (as marked in Eq. 10 and Eq. 22). This incomplete evaluation of the gradient signal brings sub-optimality. Further, it is not possible to correct BAO by simply including the other parts of the gradient into consideration. As BAO jointly updates all the latents w_{1:T}, y_{1:T}, the relationship of Eq. 1 only holds for the initial latent parameters w^0_{1:T}, y^0_{1:T} produced by FAVI. And this important relationship is broken for parameters w^k_{1:T}, y^k_{1:T} after k ≥ 1 steps of update.

3.2 INCORRECT VALUE TO EVALUATE GRADIENT

As shown in Eq. 8 and Eq. 9, BAO (Xu et al., 2022) simultaneously updates all the posterior parameters w_{1:T}, y_{1:T} with gradients evaluated at the same gradient ascent step w^k_{1:T}, y^k_{1:T}. However, as we show later in Sec. 4.1 and Fig. 1.(c), this is sub-optimal, as all the descendant latents w_{>i}, y_{≥i} of w_i should already complete all K steps of gradient ascent before the gradient of w_i is evaluated. Moreover, w_{>i}, y_{≥i} should be initialized by FAVI using the precedent latents. A similar rule applies to y_i. Specifically, the correct value at which to evaluate the gradient is as in Eq. 11 and Eq. 23, where w^{k_i}_i denotes the latent w_i after k_i steps of update, and y^{k'_i}_i denotes the latent y_i after k'_i steps of update:

w^{k_i+1}_i ← w^{k_i}_i + α dL(w^{k_1}_1, ..., w^{k_i}_i, w^K_{>i}, y^{k'_1}_1, ..., y^{k'_{i−1}}_{i−1}, y^K_{≥i}) / dw^{k_i}_i,
where w^0_{>i}, y^0_{≥i} = f(x, w^{k_1}_1, ..., w^{k_i}_i, y^{k'_1}_1, ..., y^{k'_{i−1}}_{i−1})    (11)

Similar to the incomplete total derivative evaluation, this problem does not have a simple solution. In the next section, we show how to correct both of the above-mentioned issues by recursively applying back-propagation through gradient ascent (Domke, 2012).

4 CORRECTING THE SUB-OPTIMAL BIT ALLOCATION

In this section, we first extend the generic SAVI (Kim et al., 2018; Marino et al., 2018) to 2-level non-factorized latents. Then we further extend this result to latents with any dependency that can be described by a DAG (Directed Acyclic Graph). And finally, we correct the sub-optimal bit allocation by applying the result on DAG latents to NVC.

4.1 SAVI ON 2-LEVEL NON-FACTORIZED LATENT

In this section, we extend the SAVI on 1-level latent (Kim et al., 2018) to 2-level non-factorized latent.
We denote x as the evidence, a as the variational posterior parameter of the first-level latent ã, b as the variational posterior parameter of the second-level latent b̃, and the ELBO to maximize as L(a, b). The posterior q(ã, b̃|x) factorizes as q(ã|x)q(b̃|ã, x), which means that b depends on a. Given a fixed a, we can directly follow Kim et al. (2018); Marino et al. (2018) to optimize b to maximize the ELBO by SAVI. However, optimizing a requires some tricks.

Algorithm 1: SAVI on 2-level Latent
1  procedure solve-2-level(x)
2    initialize a^0 ← f(x) from FAVI
3    for k = 0, ..., K-1 do
4      dL(a^k, b^K)/da^k ← grad-2-level(x, a^k)
5      a^{k+1} ← a^k + α dL(a^k, b^K)/da^k
6    return a^K, b^K
7  procedure grad-2-level(x, a^k)
8    b^0 ← f(x, a^k) from FAVI
9    for k' = 0, ..., K-1 do
10     b^{k'+1} ← b^{k'} + α dL(a^k, b^{k'})/db^{k'}
11   ←a ← ∂L(a^k, b^K)/∂a^k
12   ←b^K ← dL(a^k, b^K)/db^K
13   for k' = K-1, ..., 0 do
14     ←a ← ←a + α (∂²L(a^k, b^{k'})/∂a^k ∂b^{k'}) ←b^{k'+1}
15     ←b^{k'} ← ←b^{k'+1} + α (∂²L(a^k, b^{k'})/∂b^{k'} ∂b^{k'}) ←b^{k'+1}
16   ←a ← ←a + (∂b^0/∂a^k) ←b^0
17   return dL(a^k, b^K)/da^k = ←a

Algorithm 2: SAVI on DAG Latent
1  procedure solve-dag(x)
2    sort a_1, ..., a_N in topological order
3    for each a_j with parents P(a_j) = ∅ do
4      add a_j to the fake node a_0's children C(a_0)
5    grad-dag(x, a_0^0)
6    return a_1^K, ..., a_N^K
7  procedure grad-dag(x, a_0^{k_0}, ..., a_i^{k_i})
8    for a_j ∈ C(a_i) in topological order do
9      a_j^0 ← f(x, a_0^{k_0}, ..., a_{<j}^{k_{<j}}) from FAVI
10     for k_j = 0, ..., K-1 do
11       dL(a_0^{k_0}, ..., a_j^{k_j}, a_{>j}^K)/da_j^{k_j} ← grad-dag(x, a_0^{k_0}, ..., a_j^{k_j})
12       a_j^{k_j+1} ← a_j^{k_j} + α dL(a_0^{k_0}, ..., a_j^{k_j}, a_{>j}^K)/da_j^{k_j}
13   ←a_i ← ∂L(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/∂a_i^{k_i}
14   for a_j ∈ C(a_i) do
15     ←a_j ← 0, ←a_j^K ← dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/da_j^K
16     for k_j = K-1, ..., 0 do
17       ←a_j ← ←a_j + α (∂²L(a_0^{k_0}, ..., a_j^{k_j}, a_{>j}^K)/∂a_i^{k_i} ∂a_j^{k_j}) ←a_j^{k_j+1}
18       ←a_j^{k_j} ← ←a_j^{k_j+1} + α (∂²L(a_0^{k_0}, ..., a_j^{k_j}, a_{>j}^K)/∂a_j^{k_j} ∂a_j^{k_j}) ←a_j^{k_j+1}
19     ←a_i ← ←a_i + ←a_j + (∂a_j^0/∂a_i^{k_i}) ←a_j^0
20   return dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/da_i^{k_i} = ←a_i

The intuition is that we do not want to find an a that maximizes L(a, b) for a fixed b (or we run into the gradient issues described in Sec. 3). Instead, we want to find the a whose max_b L(a, b) is maximal. This translates to the optimization problem in Eq. 12. In fact, Eq. 12 is a variant of the setup of back-propagating through gradient ascent (Samuel & Tappen, 2009; Domke, 2012); the difference is that our a also contributes directly to the optimization target L(a, b). From this perspective, Eq. 12 is more closely connected to Kim et al. (2018), if we treat a as the model parameter and b as the latent.

a \leftarrow \arg\max_{a} \mathcal{L}(a, b^{*}(a)), \quad \text{where } b^{*}(a) \leftarrow \arg\max_{b} \mathcal{L}(a, b)   (12)

As with SAVI on 1-level latent (Kim et al., 2018; Marino et al., 2018), we need to solve Eq. 12 using gradient ascent. Specifically, denoting α as the step size (learning rate), K as the total number of gradient ascent steps, a^k as a after k steps of update, b^{k'} as b after k' steps of update, and f(·) as the FAVI procedure generating the initial posterior parameters a^0, b^0, the optimization problem in Eq. 12 translates into the update rule in Eq. 13. Eq. 13 is the guidance for designing the optimization algorithm, and it also explains why the gradient of BAO (Xu et al., 2022) is evaluated at the wrong value (see Sec. 3.2).

a^{k+1} \leftarrow a^{k} + \alpha \frac{d\mathcal{L}(a^{k}, b^{K})}{da^{k}}, \quad b^{k'+1} \leftarrow b^{k'} + \alpha \frac{d\mathcal{L}(a^{k}, b^{k'})}{db^{k'}}, \quad \text{where } b^{0} = f(x, a^{k})   (13)

To solve Eq. 13, we note that although dL(a^k, b^{k'})/db^{k'} is directly computed, dL(a^k, b^K)/da^k is not straightforward. Resorting to previous works in implicit differentiation (Samuel & Tappen, 2009; Domke, 2012) and extending the results of Kim et al. (2018) from model parameters to variational posterior parameters, we implement Eq. 13 as Alg. 1.
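A PyTorch-style sketch of the grad-2-level procedure is given below; it evaluates the Hessian-vector products of lines 14 and 15 with double backward. The helpers favi_b and elbo are hypothetical stand-ins for f(x, a) and L(a, b), and a is assumed to be a leaf tensor with requires_grad=True. This is an illustration of the mechanics, not the exact implementation.

```python
import torch

def grad_2_level(x, a, favi_b, elbo, K, lr):
    # Back-propagating through the K inner gradient-ascent steps on b
    # to obtain dL(a^k, b^K)/da^k, following Alg. 1.
    b0 = favi_b(x, a)                            # b^0, keeps the graph to a
    bs = [b0.detach().requires_grad_(True)]
    for _ in range(K):                           # inner ascent on b (line 10)
        g = torch.autograd.grad(elbo(a, bs[-1]), bs[-1])[0]
        bs.append((bs[-1] + lr * g).detach().requires_grad_(True))
    # lines 11-12: partial dL/da^k and dL/db^K at the converged b^K
    back_a, back_b = torch.autograd.grad(elbo(a, bs[K]), (a, bs[K]))
    for kp in reversed(range(K)):                # lines 13-15: reverse sweep
        g = torch.autograd.grad(elbo(a, bs[kp]), bs[kp], create_graph=True)[0]
        # Hessian-vector products via double backward:
        hv_a, hv_b = torch.autograd.grad(g, (a, bs[kp]), grad_outputs=back_b)
        back_a = back_a + lr * hv_a              # alpha * d2L/(da db^k') * <-b^{k'+1}
        back_b = back_b + lr * hv_b              # <-b^{k'} = (I + alpha * H_bb) <-b^{k'+1}
    # line 16: chain through the FAVI initialization b^0 = f(x, a)
    back_a = back_a + torch.autograd.grad(b0, a, grad_outputs=back_b)[0]
    return back_a
```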
Specifically, we first initialize a^0 from FAVI. Then we conduct gradient ascent on a with the gradient dL(a^k, b^K)/da^k computed by the procedure grad-2-level(x, a^k); inside grad-2-level(x, a^k), b is also updated by gradient ascent. This procedure corresponds to Eq. 13. The key of Alg. 1 is the evaluation of the gradient dL(a^k, b^K)/da^k. Formally, we have:

Theorem 1. After grad-2-level(x, a^k) of Alg. 1 executes, the return value is dL(a^k, b^K)/da^k = ←a. (See proof in Appendix A.1.)

4.2 SAVI ON DAG-DEFINED NON-FACTORIZED LATENT

In this section, we extend the result of the previous section to SAVI on general non-factorized latent with dependency described by any DAG. This DAG is the computational graph during network inference, and it is also the directed graphical model (DGM) (Koller & Friedman, 2009) defining the factorization of latent variables during inference. This is the general case covering all dependencies that can be described by a DGM. This extension is necessary to perform SAVI on latents with complicated dependencies (e.g., bit allocation of NVC).

Similar to the 2-level latent setup, we consider performing SAVI on N variational posterior parameters a_1, ..., a_N with their dependency defined by a computational graph G, i.e., the posterior distribution of their corresponding latent variables ã_1, ..., ã_N factorizes according to G. Specifically, we denote a_j ∈ C(a_i), a_i ∈ P(a_j) if an edge exists from a_i to a_j; this indicates that ã_j conditions on ã_i. Without loss of generality, we assume a_1, ..., a_N is sorted in topological order, i.e., if a_j ∈ C(a_i), a_i ∈ P(a_j), then i < j. Each latent is optimized by K-step gradient ascent, and a_i^{k_i} denotes the latent a_i after k_i steps of update. Then, similar to the 2-level latent, we have the update rule in Eq. 14:

a_i^{k_i+1} \leftarrow a_i^{k_i} + \alpha \frac{d\mathcal{L}(a_1^{k_1}, \ldots, a_i^{k_i}, a_{>i}^{K})}{da_i^{k_i}}, \quad \text{where } a_{>i}^{0} = f(x, a_1^{k_1}, \ldots, a_i^{k_i})   (14)

which can be translated into Alg. 2. Specifically, we first sort the latents in topological order. Then, we add a fake latent a_0 in front of all the others; its children are all the a's with in-degree 0. Then, we can solve SAVI on a_1, ..., a_N using gradient ascent by executing the procedure grad-dag(x, a_0^{k_0}, ..., a_i^{k_i}) in Alg. 2 recursively. Inside the procedure grad-dag(x, a_0^{k_0}, ..., a_i^{k_i}), the gradient to update a_i relies on the convergence of its children a_j ∈ C(a_i), which is implemented by the recursive depth-first search (DFS) in line 11. Upon the completion of the procedure grad-dag(x, a_0^0), all the latents converge to a_1^K, ..., a_N^K. Similar to the 2-level latent case, the key of Alg. 2 is the evaluation of the gradient dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/da_i^{k_i}. Formally, we have:

Theorem 2. After the procedure grad-dag(x, a_0^{k_0}, ..., a_i^{k_i}) in Alg. 2 executes, the return value is dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/da_i^{k_i} = ←a_i. (See proof in Appendix A.1.)

To better understand how Alg. 2 works, we provide a detailed example in Fig. 5 of Appendix A.3.

4.3 CORRECTING THE SUB-OPTIMAL BIT ALLOCATION USING SAVI ON DAG

With the result of the previous section, correcting BAO (Xu et al., 2022) seems trivial: we only need to sort the latents in topological order as w_1, y_1, ..., w_T, y_T, treat them as a_1, ..., a_{2T+1}, and run Alg. 2 to obtain the optimized latent parameters w_1^K, y_1^K, ..., w_T^K, y_T^K. The gradient dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/da_i^{k_i} computed in Alg. 2 resolves the issues of BAO described in Sec. 3.1 and Sec. 3.2.
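The topological sorting in the first step is standard; a minimal sketch using Kahn's algorithm, with a hypothetical children map describing the latent DAG of a 2-frame GoP:

```python
from collections import deque

def topological_order(children):
    # children: dict mapping each latent id to the ids that condition on it
    # (the DAG of the variational posterior). Kahn's algorithm.
    indeg = {u: 0 for u in children}
    for u in children:
        for v in children[u]:
            indeg[v] += 1
    queue = deque(u for u in children if indeg[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in children[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order

# Hypothetical dependency map for T = 2 frames:
# topological_order({'w1': ['y1', 'w2'], 'y1': ['w2'], 'w2': ['y2'], 'y2': []})
# -> ['w1', 'y1', 'w2', 'y2']
```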
However, an evident problem is the temporal complexity. Given the number of latents N and the number of gradient ascent steps K, Alg. 2 has temporal complexity Θ(K^N). An NVC with GoP size 10 has approximately N = 20 latents, and SAVI on NVC (Xu et al., 2022) takes around K = 2000 steps to converge. For bit allocation, the complexity of Alg. 2 is therefore ≈ 2000^20, which is intractable. On the other hand, BAO's complexity is reasonable (Θ(KN) ≈ 4 × 10^4). Thus, in the next section, we provide a feasible approximation to this intractable corrected bit allocation.

4.4 FEASIBLE APPROXIMATION TO THE CORRECTED BIT ALLOCATION

In order to solve problems of practical size such as bit allocation on NVC, we provide an approximation to the SAVI (Kim et al., 2018; Marino et al., 2018) on DAG described in Sec. 4.2. The general idea is that, when applied to bit allocation of NVC, the accurate SAVI on DAG (Alg. 2) satisfies both requirements on the gradient signal described in Sec. 3.1 and Sec. 3.2, and we cannot make it tractable without breaking one of them. Thus, we break one of them to achieve a reasonable complexity while maintaining superior performance compared with BAO (Xu et al., 2022). We consider the approximation in Eq. 15, which breaks the requirement on gradient evaluation from Sec. 3.2. Based on Eq. 15 and the requirement of Sec. 3.1, we design an approximation of the accurate SAVI as Alg. 4. When applied to bit allocation in NVC, it satisfies the gradient requirement of Sec. 3.1 while maintaining a temporal complexity of Θ(KN), the same as BAO.

\frac{d\mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^{K})}{da_i^{k_i}} \approx \frac{d\mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^{0})}{da_i^{k_i}}   (15)

Specifically, with the approximation in Eq. 15, the recurrent gradient computation in Alg. 2 becomes unnecessary, as the right-hand side of Eq. 15 does not require a_{>i}^K. However, to maintain the latent dependency described in Sec. 3.1, as in Alg. 2, we still need to ensure that the children nodes a_j ∈ C(a_i) are re-initialized by FAVI every time a_i is updated. Therefore, a reasonable approach is to traverse the graph in topological order: we keep a child node a_j untouched until every parent node a_i ∈ P(a_j) has completed its gradient ascent and a_i^K is known. The resulting approximate SAVI algorithm is Alg. 4. When applied to bit allocation, it satisfies the gradient requirement of Sec. 3.1, and, like BAO, its temporal complexity is Θ(KN).

Algorithm 3: BAO on DAG Latent
1  procedure solve-bao(x)
2    a_1^0, ..., a_N^0 ← f(x) from FAVI
3    for k = 0, ..., K-1 do
4      for i = 1, ..., N do
5        a_i^{k+1} ← a_i^k + α ∂L(a_1^k, ..., a_N^k)/∂a_i^k
6    return a_1^K, ..., a_N^K

Algorithm 4: Approximate SAVI on DAG Latent
1  procedure solve-approx-dag(x)
2    sort a_1, ..., a_N in topological order
3    for i = 1, ..., N do
4      a_i^0, ..., a_N^0 ← f(x, a_{<i}^K) from FAVI
5      for k = 0, ..., K-1 do
6        dL(a_{<i}^K, a_i^k, a_{>i}^K)/da_i^k ≈ dL(a_{<i}^K, a_i^k, a_{>i}^0)/da_i^k
7        a_i^{k+1} ← a_i^k + α dL(a_{<i}^K, a_i^k, a_{>i}^K)/da_i^k
8    return a_1^K, ..., a_N^K

To better understand BAO (Xu et al., 2022) in the SAVI context, we rewrite it in Alg. 3 with the general SAVI notation instead of the NVC notation. We highlight the differences between BAO (Alg. 3) (Xu et al., 2022), the accurate SAVI on DAG latent (Alg. 2), and the approximate SAVI on DAG latent (Alg. 4) from several aspects (a Python sketch of Alg. 4 follows the list):
• Graph Traversal Order: BAO performs gradient ascent on a_{1:N} all together. The accurate SAVI only updates a_i when the update of a_{>i} is complete and a_{>i}^K is known. The approximate SAVI only updates a_i when the update of a_{<i} is complete and a_{<i}^K is known.
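For concreteness, a minimal Python sketch of Alg. 4's sequential procedure, assuming hypothetical favi and elbo helpers for the amortized inference and the objective:

```python
import torch

def solve_approx_dag(x, favi, elbo, order, K=400, lr=1e-3):
    # Latents are finalized one by one in topological order. `favi(x, frozen)`
    # is a hypothetical helper that re-runs amortized inference for every
    # latent not yet finalized, conditioned on the frozen (optimized) ones;
    # `elbo(x, latents)` returns L for the given latent values.
    frozen = {}                                   # a_{<i}^K, already optimized
    for name in order:
        latents = favi(x, frozen)                 # re-initialize a_i^0, ..., a_N^0
        a = latents[name].detach().clone().requires_grad_(True)
        for _ in range(K):
            # Eq. 15: descendants a_{>i} stay at their FAVI values a_{>i}^0
            current = {**latents, **frozen, name: a}
            g = torch.autograd.grad(elbo(x, current), a)[0]
            with torch.no_grad():
                a += lr * g                       # gradient ascent on a_i
        frozen[name] = a.detach()                 # freeze a_i^K
    return frozen
```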
• Gradient Correctness: When applied to bit allocation in NVC, BAO violates the gradient rules of Sec. 3.1 and Sec. 3.2; the accurate SAVI satisfies both rules; the approximate SAVI satisfies Sec. 3.1 but violates Sec. 3.2.
• Temporal Complexity: With N latents and K steps of gradient ascent, the complexity of BAO is Θ(KN), the complexity of the accurate SAVI is Θ(K^N), and the complexity of the approximate SAVI is Θ(KN).

Then we can simply apply Alg. 4 to bit allocation in NVC to obtain a feasible approximation of the corrected optimal bit allocation. In Sec. 6.2, we empirically show that our approximation improves the R-D performance over BAO (Xu et al., 2022) with an even smaller number of updates.

5 RELATED WORK: BIT ALLOCATION & SAVI FOR NEURAL COMPRESSION

Li et al. (2022) are the pioneers of bit allocation for NVC, and their work is elaborated in Sec. 2.2. Other recent works that consider bit allocation for NVC only adopt simple heuristics, such as inserting one high-quality frame every 4 frames (Hu et al., 2022; Cetin et al., 2022). On the other hand, OEU (Lu et al., 2020) is also recognized as frame-level bit allocation, but its performance is inferior to BAO (Xu et al., 2022). BAO is the most recent work with the best R-D performance; it is elaborated in Sec. 2.2 and Sec. 3, and corrected in the previous section.

Semi-Amortized Variational Inference (SAVI) is proposed by Kim et al. (2018); Marino et al. (2018). The idea is that works following Kingma & Welling (2013) use a fully amortized inference parameter ϕ for all data, which leads to an amortization gap (Cremer et al., 2018). SAVI reduces this gap by optimizing the variational posterior parameter after initializing it with the inference network. It adopts back-propagation through gradient ascent (Domke, 2012) to evaluate the gradient of the model parameters. We adopt a similar method to extend SAVI to non-factorized latent. When applying SAVI to practical neural codecs, researchers abandon the nested model parameter update for efficiency. Prior works (Djelouah & Schroers, 2019; Yang et al., 2020b; Zhao et al., 2021; Gao et al., 2022) adopt SAVI to boost R-D performance and achieve variable bitrate in image compression. BAO (Xu et al., 2022) is the first to consider SAVI for bit allocation.

6 EXPERIMENTS

6.1 EXPERIMENTAL SETTINGS

We implement our approach in PyTorch 1.9 with CUDA 11.2 and run the experiments on an NVIDIA(R) A100 GPU. Most of the other settings are intentionally kept the same as in BAO (Xu et al., 2022). Specifically, we adopt the HEVC Common Test Conditions (CTC) (Bossen et al., 2013) and the UVG dataset (Mercat et al., 2020), and we measure the R-D performance in Bjontegaard Bitrate (BD-BR) and BD-PSNR (Bjontegaard, 2001). For the baseline NVC models (Lu et al., 2019; Li et al., 2021), we adopt the official pre-trained models, and we select the target λ0 = {256, 512, 1024, 2048}. For gradient ascent, we adopt the Adam (Kingma & Ba, 2014) optimizer with lr = 1 × 10^-3. We set the number of gradient ascent steps to K = 2000 for the first frame and K = 400 for the other frames. More details are presented in Appendix A.5.

6.2 QUANTITATIVE RESULTS

As shown in Tab. 1, our method consistently improves the R-D performance in terms of BD-BR over BAO (Xu et al., 2022) on both baseline methods and all datasets. Moreover, this improvement is especially significant (more than 10% in BD-BR) when the baseline is DCVC (Li et al., 2021). Both BAO and our proposed correction significantly outperform all other approaches.
It is also noteworthy that with our bit allocation, DVC (the SOTA method in 2019) already outperforms DCVC (the SOTA method in 2021) by a large margin (see the red solid line and the black dashed line in Fig. 2).

Table 1: The BD-BR of our approach compared with others. ¹ comes from Li et al. (2022). ² comes from Xu et al. (2022).

BD-BR (%) ↓
Method                      Class B   Class C   Class D   Class E   UVG
DVC (Lu et al., 2019) as Baseline
  Li et al. (2016)¹           20.21     17.13     13.71     10.32     16.69
  Li et al. (2022)¹           -6.80     -2.96      0.48     -6.85     -4.12
  OEU (Lu et al., 2020)²     -13.57    -11.29    -18.97    -12.43    -13.78
  BAO (Xu et al., 2022)²     -28.55    -26.82    -25.37    -32.54    -27.68
  Proposed                   -32.10    -31.71    -35.86    -32.93    -30.92
DCVC (Li et al., 2021) as Baseline
  OEU (Lu et al., 2020)²     -10.75    -14.34    -16.30     -7.15    -16.07
  BAO (Xu et al., 2022)²     -20.59    -19.69    -20.60    -23.33    -25.22
  Proposed                   -32.89    -33.10    -32.01    -36.88    -39.66

Figure 2: The R-D curve on HEVC Class D.

Other than R-D performance, the bitrate error of our approach is also significantly smaller than that of BAO (Xu et al., 2022) (see Tab. 2). The bitrate error is measured as the relative bitrate difference before and after bit allocation; the smaller it is, the easier it is to achieve the desired bitrate accurately. As for complexity, our approach only performs 920 steps of gradient ascent per frame on average, while BAO requires 2000 steps. See more quantitative results (BD-PSNR & R-D curves) in Appendix A.6.

6.3 ABLATION STUDY, ANALYSIS & QUALITATIVE RESULTS

Tab. 3 shows that for BAO (Xu et al., 2022), jointly optimizing w_{1:T}, y_{1:T} performs worse than optimizing y_{1:T} or w_{1:T} alone. This counter-intuitive phenomenon comes from its incorrect estimation of the gradient signal. For the proposed approach, which corrects this, jointly optimizing w_{1:T}, y_{1:T} performs better than optimizing y_{1:T} or w_{1:T} alone, which is aligned with our intuition.

Table 2: The bitrate error of our approach compared with BAO. ² comes from Xu et al. (2022).

Bitrate-Error (%) ↓
Method                      Class B   Class C   Class D   Class E   UVG
DVC (Lu et al., 2019) as Baseline
  BAO (Xu et al., 2022)²       8.41     12.86     21.39      5.94      3.73
  Proposed                     3.16      4.27      1.81      6.14      1.73
DCVC (Li et al., 2021) as Baseline
  BAO (Xu et al., 2022)²      25.67     23.90     23.74     24.88     21.86
  Proposed                     4.27      7.29      5.73      8.03      3.06

Table 3: Ablation study with HEVC Class D and DVC (Lu et al., 2019).

Method           BD-BR (%) ↓
BAO (y)            -25.37
BAO (w)            -22.24
BAO (y, w)         -14.76
Proposed (y)       -32.60
Proposed (w)       -31.56
Proposed (y, w)    -35.86

To better understand why our method works, we present the R-D cost, distortion, and rate versus frame/latent index for different methods in Fig. 3. The top-left plot shows that the R-D cost of our approach consistently decreases along the SAVI stages and that it outperforms BAO after the 4th frame. The top-right plot shows that, for each frame, the R-D cost of our method is lower than BAO's. The bottom-left plot shows that the distortion part of the R-D cost of our approach is approximately the same as BAO's, while the bottom-right plot shows that the advantage of our approach over BAO lies in the bitrate: BAO increases the bitrate of the y_i's after SAVI, while our correction decreases it. See more analysis in Appendix A.9 and qualitative results in Appendix A.10.

7 DISCUSSION & CONCLUSION

Although our correction is already more efficient than the original BAO (Xu et al., 2022), its encoding speed remains far from real-time. Thus, it is limited to scenarios where R-D performance matters much more than encoding time (e.g., video on demand). See more discussion in Appendix A.11.
To conclude, we show that a previous bit allocation method for NVC is sub-optimal, as it abuses SAVI on non-factorized latent. Then, we propose the correct SAVI on general non-factorized latent by back-propagating through gradient ascent, and we further propose a feasible approximation to make it tractable for bit allocation. Experimental results show that our correction significantly improves the R-D performance.

ETHICS STATEMENT

Improving the R-D performance of NVC has positive social value in terms of reducing carbon emissions by saving the resources required to transfer and store videos. Moreover, unlike traditional codecs such as H.266 (Bross et al., 2021), a neural video codec does not require dedicated hardware; instead, it can be deployed on general neural accelerators. Improving the R-D performance of NVC promotes the practical deployment of video codecs that are independent of dedicated hardware and lowers the hardware barrier for playing multimedia content.

REPRODUCIBILITY STATEMENT

For the theoretical results, both theorems are followed by proofs in Appendix A.1. For the relatively complicated novel algorithm (Alg. 2), we provide an illustration of its step-by-step execution in Appendix A.3. For the experiments, both datasets are publicly accessible. In Appendix A.5, we provide more implementation details, including all the hyper-parameters. Moreover, we provide our source code for reproducing the empirical results in the supplementary material.

A APPENDIX

A.1 PROOF OF THM 1 AND THM 2

Theorem 1. After the procedure grad-2-level(x, a^k) of Alg. 1 executes, the return value is dL(a^k, b^K)/da^k = ←a.

Proof. This proof extends the proof of Thm. 1 in Domke (2012), and it also serves as a formal justification of Alg. 1 in Kim et al. (2018). Note that our paper and Kim et al. (2018) are subtly different from Samuel & Tappen (2009); Domke (2012): our high-level parameter a not only generates the low-level parameter b, but also directly contributes to the optimization target (see Fig. 4). As the computational graph in Fig. 4 shows, we can expand dL(a^k, b^K)/da^k as Eq. 16, with each term solved by Eq. 18 and Eq. 19.

\frac{d\mathcal{L}(a^{k}, b^{K})}{da^{k}} = \underbrace{\frac{\partial \mathcal{L}(a^{k}, b^{K})}{\partial a^{k}}}_{\text{known}} + \sum_{k'=0}^{K} \underbrace{\frac{\partial b^{k'}}{\partial a^{k}}}_{\text{Eq. 18}} \underbrace{\frac{d\mathcal{L}(a^{k}, b^{K})}{db^{k'}}}_{\text{Eq. 19}}   (16)

To solve Eq. 16, we first note that ∂L(a^k, b^K)/∂a^k, dL(a^k, b^K)/db^K, and ∂b^0/∂a^k are naturally known. Then, by taking the partial derivative of the gradient ascent update rule b^{k'+1} ← b^{k'} + α dL(a^k, b^{k'})/db^{k'} with regard to a^k and b^{k'}, we obtain Eq. 17 and Eq. 18. Note that Eq. 18 is the partial derivative ∂b^{k'+1}/∂a^k, not the total derivative db^{k'+1}/da^k = (∂b^{k'+1}/∂b^{k'})(db^{k'}/da^k) + ∂b^{k'+1}/∂a^k.

\frac{\partial b^{k'+1}}{\partial b^{k'}} = I + \alpha \frac{\partial^{2} \mathcal{L}(a^{k}, b^{k'})}{\partial b^{k'} \partial b^{k'}}   (17)

\frac{\partial b^{k'+1}}{\partial a^{k}} = \alpha \frac{\partial^{2} \mathcal{L}(a^{k}, b^{k'})}{\partial a^{k} \partial b^{k'}}   (18)

Those second-order terms can either be evaluated directly or approximated via finite differences as Eq. 20. As Eq. 18 already solves the first factor inside the sum on the right-hand side of Eq. 16, the remaining issue is dL(a^k, b^K)/db^{k'}. To solve this term, we expand it recursively as Eq. 19 and substitute Eq. 17 into it.

\frac{d\mathcal{L}(a^{k}, b^{K})}{db^{k'}} = \frac{\partial b^{k'+1}}{\partial b^{k'}} \frac{d\mathcal{L}(a^{k}, b^{K})}{db^{k'+1}}   (19)

The above solving process is exactly the procedure grad-2-level(x, a^k) of Alg. 1. Specifically, the iterative update of ←b^{k'+1} in line 15 corresponds to recursively expanding Eq. 19 with Eq. 17, and the iterative update of ←a in line 14 corresponds to recursively expanding Eq. 16 with Eq. 18 and Eq. 19. Upon the return of grad-2-level(x, a^k) of Alg. 1, we have ←a = dL(a^k, b^K)/da^k.

The complexity of the Hessian-vector products in lines 14 and 15 of Alg. 1 may be reduced using finite differences, following Domke (2012), as Eq. 20.

\frac{\partial^{2} \mathcal{L}(a^{k}, b^{k'})}{\partial a^{k} \partial b^{k'}} v = \lim_{r \to 0} \frac{1}{r} \left( \frac{d\mathcal{L}(a^{k}, b^{k'} + rv)}{da^{k}} - \frac{d\mathcal{L}(a^{k}, b^{k'})}{da^{k}} \right), \quad \frac{\partial^{2} \mathcal{L}(a^{k}, b^{k'})}{\partial b^{k'} \partial b^{k'}} v = \lim_{r \to 0} \frac{1}{r} \left( \frac{d\mathcal{L}(a^{k}, b^{k'} + rv)}{db^{k'}} - \frac{d\mathcal{L}(a^{k}, b^{k'})}{db^{k'}} \right)   (20)
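A minimal sketch of such a finite-difference Hessian-vector product, assuming a hypothetical grad_fn that returns the first-order gradient at a given point:

```python
def hvp_finite_difference(grad_fn, b, v, r=1e-4):
    # Finite-difference Hessian-vector product as in Eq. 20:
    # (d^2 L / db db) v  ~=  (g(b + r v) - g(b)) / r  for small r,
    # where `grad_fn` returns the first-order gradient (dL/db for the
    # diagonal term, or dL/da for the mixed term) evaluated at its argument.
    return (grad_fn(b + r * v) - grad_fn(b)) / r
```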
Theorem 2. After the procedure grad-dag(x, a_0^{k_0}, ..., a_i^{k_i}) in Alg. 2 executes, the return value is dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/da_i^{k_i} = ←a_i.

Proof. Consider computing the target gradient on the DAG G. The gradient of a_i^{k_i} is composed of its own contribution to L plus the gradients from its children a_j ∈ C(a_i). Further, as we are considering the optimized children a_j^K, we expand each child node a_j as in Fig. 4. Then, we have:

\frac{d\mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^{K})}{da_i^{k_i}} = \underbrace{\frac{\partial \mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^{K})}{\partial a_i^{k_i}}}_{\text{known}} + \sum_{a_j \in C(a_i)} \left( \sum_{k_j=0}^{K} \underbrace{\frac{\partial a_j^{k_j}}{\partial a_i^{k_i}}}_{\text{Eq. 18}} \underbrace{\frac{d\mathcal{L}(a_0^{k_0}, \ldots, a_{j-1}^{k_{j-1}}, a_{\ge j}^{K})}{da_j^{k_j}}}_{\text{Eq. 19}} \right)   (21)

The first term on the right-hand side of Eq. 21 can be evaluated trivially. The ∂a_j^{k_j}/∂a_i^{k_i} can be evaluated as in Eq. 18, and the dL(a_0^{k_0}, ..., a_{j-1}^{k_{j-1}}, a_{≥j}^K)/da_j^{k_j} can be iteratively expanded as in Eq. 19. We highlight several key differences between Alg. 2 and Alg. 1 which are reflected in the implementation of Alg. 2:
• The gradient evaluation of the current node a_i requires the gradients of its multiple direct children a_j ∈ C(a_i), instead of the single child in the 2-level case. The children-traversal part of Eq. 21 corresponds to the two extra for loops in lines 8 and 14 of Alg. 2.
• The gradient ascent update of a child latent parameter a_j^{k_j+1} ← a_j^{k_j} + α dL(a_0^{k_0}, ..., a_j^{k_j}, a_{>j}^K)/da_j^{k_j} can be conducted trivially only if C(a_j) is empty; otherwise, the gradient has to be evaluated recursively using Eq. 21. This part corresponds to the recursive call in line 11 of Alg. 2.

The other parts of Alg. 2 are the same as Alg. 1, so the rest of the proof follows Thm. 1. Similarly, the Hessian-vector products in lines 17 and 18 of Alg. 2 may be approximated as in Eq. 20. However, this does not save Alg. 2 from an overall complexity of Θ(K^N).

A.2 THE COMPLETE FORMULA FOR SEC. 3.1 AND SEC. 3.2

In this section, we provide the complete formulas for the y_i-related gradients of Sec. 3.1 and Sec. 3.2. Specifically, Eq. 22 is paired with Eq. 10, and Eq. 23 is paired with Eq. 11.

\frac{d\mathcal{L}(w_{1:T}, y_{1:T})}{dy_i} = \sum_{j=i}^{T} \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dy_i}, \quad \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dy_i} = \underbrace{\sum_{l=i+1}^{j} \left( \frac{\partial y_l}{\partial y_i} \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dy_l} + \frac{\partial w_l}{\partial y_i} \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dw_l} \right)}_{\text{ignored by BAO}} + \underbrace{\frac{\partial \mathcal{L}_j(w_{1:j}, y_{1:j})}{\partial y_i}}_{\text{considered by BAO}}   (22)

y_i^{k'_i+1} \leftarrow y_i^{k'_i} + \alpha \frac{d\mathcal{L}(w_1^{k_1}, \ldots, w_i^{k_i}, w_{>i}^{K}, y_1^{k'_1}, \ldots, y_i^{k'_i}, y_{>i}^{K})}{dy_i^{k'_i}}, \quad \text{where } w_{>i}^{0}, y_{>i}^{0} = f(x, w_1^{k_1}, \ldots, w_i^{k_i}, y_1^{k'_1}, \ldots, y_i^{k'_i})   (23)

A.3 AN EXAMPLE OF EXECUTION OF ALG. 2

In this section, we provide an example of the full execution procedure of Alg. 2 in Fig. 5. The setup is as in Fig. 5.(0): we have N = 3 latents a_1, a_2, a_3 and K = 2 gradient ascent steps, connected by the DAG shown in the figure.

A.4 EXTENDING THE ANALYSIS OF SEC. 3 TO THE GENERAL DAG CASE

As Alg. 2 and Alg. 4 are applicable to general SAVI (Kim et al., 2018; Marino et al., 2018) beyond bit allocation, it is helpful for understanding their merits to extend the analysis of Sec. 3 from bit allocation to the general DAG scenario. In this section, we consider the same problem setup as Sec. 4.2. Similar to the bit allocation case, BAO has the incomplete-gradient problem and the incorrect-gradient-value problem. The incomplete-gradient issue is presented in Eq. 24, and the incorrect-gradient-value issue is presented in Eq. 25.
\frac{d\mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^{K})}{da_i^{k_i}} = \underbrace{\frac{\partial \mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^{K})}{\partial a_i^{k_i}}}_{\text{considered by BAO}} + \underbrace{\sum_{a_j \in C(a_i)} \left( \sum_{k_j=0}^{K} \frac{\partial a_j^{k_j}}{\partial a_i^{k_i}} \frac{d\mathcal{L}(a_0^{k_0}, \ldots, a_{j-1}^{k_{j-1}}, a_{\ge j}^{K})}{da_j^{k_j}} \right)}_{\text{ignored by BAO}}   (24)

\frac{\partial \mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^{K})}{\partial a_i^{k_i}} \approx \underbrace{\frac{\partial \mathcal{L}(a_0^{k_i}, \ldots, a_i^{k_i}, a_{>i}^{k_i})}{\partial a_i^{k_i}}}_{\text{approximation of BAO in gradient value}}   (25)

A.5 MORE IMPLEMENTATION DETAILS

In the main text, we use y_i to denote all the latent variables related to the residual. In practice, it is divided into y_i, z_i, Δ_i^y, which refer to the first-level latent of the residual, the second-level latent of the residual, and the quantization step size of the first-level latent of the residual, respectively. In practice, as in BAO (Xu et al., 2022), all 3 parts are involved in SAVI jointly. We note that this is not a problem, as they fully factorize. For DVC (Lu et al., 2019), w_i indeed represents the motion latent, as the motion in DVC has only one level of latent. However, for DCVC (Li et al., 2021), w_i is divided into w_i, v_i, Δ_i^w, which refer to the first-level latent of the motion, the second-level latent of the motion, and the quantization step size of the first-level latent of the motion, respectively. Similar to y_i, all 3 parts are involved in SAVI jointly, and this is not a problem as they fully factorize.

Following BAO (Xu et al., 2022), we set the target λ0 = {256, 512, 1024, 2048}, which also follows the baselines (Lu et al., 2019; Li et al., 2021). We adopt the official pre-trained models for both baseline methods (Lu et al., 2019; Li et al., 2021). We do not have a training dataset or implementation details for training the amortized encoder/decoder, as all the experiments are performed on the official pre-trained models. For gradient ascent, we set K = 2000 for the first I frame and K = 400 for all other P frames. On average, the number of gradient ascent steps per frame is 920, which is smaller than the 2000 of BAO.

A.6 MORE QUANTITATIVE RESULTS

In this section, we present more quantitative results. In Tab. 4, we show the BD-PSNR of our proposed method and other methods as a supplement to the BD-BR results (Tab. 1). Furthermore, in Fig. 6, we present the R-D curves on all classes of the HEVC CTC and the UVG dataset as a supplement to the HEVC Class D plot (Fig. 2).

[Fig. 6 consists of R-D panels (PSNR vs. Bpp) for HEVC Class B, Class C, Class D, Class E, and UVG, comparing DVC and DCVC with OEU, BAO, and the proposed approach on each baseline.]

Figure 6: The R-D performance of our approach compared with baselines (w/o bit allocation) and other bit allocation approaches.

A.7 COMPLEXITY & SCALABILITY

Figure 7: Spatio-temporal complexity analysis comparing BAO (Xu et al., 2022), the proposed approach, and a fast approximation of the proposed approach. The analysis is done on the DVC baseline and the HEVC Class D dataset.

We perform an additional evaluation to compare the proposed method with BAO (Xu et al., 2022) in terms of temporal complexity and memory cost; the results can be found in Fig. 7. The general result is that our approach is ≈ 2.8 times slower and costs ≈ 2.0 times the memory of BAO, even though it uses fewer optimization steps. This extra complexity comes from the cost of the sequential optimization of the latents.
In its naïve form, our current method is thus slower than BAO while performing better. Jointly considering R-D performance, time, and memory, our method does not dominate BAO. However, as our approach enables a sequential style of semi-amortized variational inference (SAVI) (Kim et al., 2018; Marino et al., 2018) on the latents, there exists a very simple trick to speed it up; moreover, this trick also resolves the scalability issue. Specifically, to optimize the ith frame's latents, we do not compute the R-D cost of all the frames after it as we do now. Instead, we limit the R-D cost computation to a small fixed window of frames. Formally, we approximate the gradients as:

\frac{d\mathcal{L}(w_{1:T}, y_{1:T})}{dw_i} \approx \sum_{j=i}^{i+C} \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dw_i}, \quad \frac{d\mathcal{L}(w_{1:T}, y_{1:T})}{dy_i} \approx \sum_{j=i}^{i+C} \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dy_i}   (26)

where C is a preset constant indicating the number of future frames taken into consideration. With this trick, our approach costs only ≈ 50% of the time and ≈ 60% of the memory of BAO, while retaining superior performance (≈ 5% better in BD-BR); see Ours (fast) in Fig. 7 (the results are based on DVC, Class D, C = 2). With this trick, jointly considering R-D performance, time, and memory, our approach clearly dominates BAO.

Furthermore, with this trick, the scalability issue of our approach is significantly alleviated. As shown in Fig. 8, the memory cost of our approach with this trick is constant in the GoP size, while that of BAO and of our approach without the trick grows linearly with the GoP size. This means that with this trick, our approach becomes scalable to any GoP size, which is superior to BAO.
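A minimal sketch of the truncated objective in Eq. 26, assuming a hypothetical list of per-frame R-D cost closures:

```python
def windowed_rd_cost(frame_rd_costs, i, C):
    # Truncated objective of Eq. 26: when optimizing the i-th frame's latents,
    # only frames i..i+C contribute to the R-D cost, so the backward pass and
    # its memory no longer grow with the GoP size. `frame_rd_costs` is a
    # hypothetical list of per-frame R-D cost closures L_j(w_{1:j}, y_{1:j}).
    return sum(L_j() for L_j in frame_rd_costs[i : i + C + 1])
```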
A.8 IMPACT ON OEU

Another interesting question is whether the sequential updating algorithm (Alg. 4) benefits OEU (Lu et al., 2020). Indeed, OEU (Lu et al., 2020) and BAO (Xu et al., 2022) are quite similar at first glance. However, it is important to note that the theoretical foundation of BAO and of this paper is SAVI (Kim et al., 2018; Marino et al., 2018), and OEU does not fit into the SAVI framework. More specifically, its encoder parameters to be updated do not factorize according to the DAG defined by the variational posterior; thus, applying Alg. 4 to it is incorrect. To verify this empirically, we change OEU from BAO's joint optimization to our sequential optimization (Alg. 4), and the results show that this change degrades the R-D performance (see the COEU line in Fig. 9).

A.9 MORE ANALYSIS

In this section, we extend the analysis of why the proposed approach works and of what distinguishes the proposed approach from BAO (Xu et al., 2022). In the approximate SAVI on DAG latent (Alg. 4), we solve SAVI approximately latent by latent in topological order. For bit allocation of NVC with 10 frames, this topological order is y_0, w_1, y_1, ..., w_9, y_9, where y_0 is the latent of the I frame, w_i is the motion latent of the ith P frame, and y_i is the residual latent of the ith P frame. In Fig. 10, we show the relationship between the R-D cost and the stage of approximate SAVI. We can see that the R-D cost decreases almost monotonically as the SAVI stage grows, which indicates that our approximate SAVI on DAG (Alg. 4) is successful. Specifically, although our approach is inferior to BAO upon the convergence of y_3, it attains a significant advantage over BAO after y_9 converges. In Fig. 11, we compare the distribution of R-D cost, PSNR, and Bpp across frames and latents for the baseline DVC (Lu et al., 2019), BAO (Xu et al., 2022), and the proposed approach.

For the R-D cost, it is obvious that our proposed approach's R-D cost is lower than that of BAO and of the baseline, which indicates a better R-D performance. For the bpp, it is interesting to observe that although all three methods have similar bpp for the motion-related latents w_{1:T}, the bpp of the residual-related latents y_{1:T} is quite different: BAO increases the bpp of y_{1:T} compared with the baseline, while our approach decreases it. This explains why our approach has a lower bitrate than BAO, and also why our approach has a significantly smaller bitrate error. For the PSNR metric, both our approach and BAO significantly improve over the baseline, and the difference between the proposed approach and BAO is not obvious. We can conclude that the benefit of the proposed approach over BAO comes from bitrate saving rather than quality enhancement.

A.10 QUALITATIVE RESULTS

In Fig. 12, Fig. 13, Fig. 14 and Fig. 15, we present qualitative results of our approach compared with the baseline approach. We note that, compared with the reconstructed frames of the baseline approach, the reconstructed frames of our proposed approach preserve significantly more details at a lower bitrate and look much more similar to the original frames. We intentionally omit the qualitative comparison with BAO (Xu et al., 2022), as it is not very informative. Specifically, from Fig. 2 we can observe that the PSNR difference between BAO and our approach is very small (within ±0.1 dB), and our main advantage over BAO comes from bitrate saving instead of quality improvement. Thus, the qualitative difference between the proposed method and BAO is likely to fall below the just noticeable difference (JND).

A.11 MORE DISCUSSION

Another weakness is scalability. Our method requires jointly considering all the frames inside the GoP, which is impossible when the GoP size is large or when the GoP size is unknown, as in live-streaming tasks. Furthermore, the number of gradient ascent steps is currently chosen merely as an empirical sweet spot between speed and performance; a thorough grid search is desired to better understand its effect on performance.
1. What is the focus and contribution of the paper regarding learned video compression?
2. What are the strengths and weaknesses of the proposed approach compared to prior works like BAO and OEU?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the practicality and scalability of the method, especially for large GOP sizes and low-delay applications?
5. Do you have any suggestions for extending the proposed idea to update the encoder rather than the latents?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper aims to address optimal bit allocation for learned video compression. Specifically, it points out the suboptimality of the arXiv publication "Bit Allocation using Optimization (BAO)" and proposes an improved version. The proposed method shows promising results as compared to BAO and Online Encoder Update (OEU).

Strengths And Weaknesses
Strengths:
(1) The idea of extending 1-level latent to 2-level non-factorized latent and using a fixed w to optimize y to maximize the ELBO by SAVI is interesting.
(2) The experiments showcase the merits of the proposed method.
Weaknesses:
(1) The authors admit that the proposed method can NOT scale up to large GOP sizes (and likely high-resolution videos) and is not applicable to low-delay applications. In the current manuscript, there is no discussion and comparison on the encoding runtimes of different optimization techniques. The practicality of the proposed method is questionable.
(2) More insights into the approximation in Algorithm 4 & Eq. (15) should be provided. The implications of the approximation should be highlighted too. It appears that the optimization of the current latent y_i has no impact on the RD cost of the future frames.
(3) It is unclear whether the proposed idea can be extended to update the encoder rather than the latents. According to the OEU paper, this may have the advantage of reducing the number of iterations needed.

Clarity, Quality, Novelty And Reproducibility
The readability is a bit low. There are also some typos. This work appears to be an extended version of the arXiv publication "Bit Allocation using Optimization (BAO)". The idea of using GOP-based optimization is intuitively agreeable, but how to achieve this efficiently is the major contribution of this paper; in this regard, the authors should be credited. There is no code available, but the paper has enough implementation details, so reproducibility should be relatively less of a problem.
ICLR
Title
Correcting the Sub-optimal Bit Allocation
Abstract
In this paper, we investigate the problem of bit allocation in Neural Video Compression (NVC). First, we reveal that a recent bit allocation approach claimed to be optimal is, in fact, sub-optimal due to its implementation. Specifically, we find that its sub-optimality lies in the improper application of semi-amortized variational inference (SAVI) on latent with non-factorized variational posterior. Then, we show that the corrected version of SAVI on non-factorized latent requires recursively applying back-propagation through gradient ascent, based on which we derive the corrected optimal bit allocation algorithm. Due to the computational infeasibility of the corrected bit allocation, we design an efficient approximation to make it tractable. Empirical results show that our proposed correction significantly improves the incorrect bit allocation in terms of R-D performance and bitrate error, and outperforms all other bit allocation methods by a large margin. The source code is provided in the supplementary material.

1 INTRODUCTION

Recently, bit allocation for Neural Video Compression (NVC) has drawn growing attention thanks to its great potential in boosting compression performance. Due to the frame reference structure in video coding, it is sub-optimal to use the same R-D (Rate-Distortion) trade-off parameter λ for all frames. In the bit allocation task, bitrate is allocated to different frames/regions to minimize the R-D cost R + λD, where R is the total bitrate, D is the total distortion, and λ is the Lagrangian multiplier controlling the R-D trade-off. Li et al. (2022) are the pioneers of bit allocation for NVC; they improve the empirical R-D (Rate-Distortion) model from traditional video codecs (Li et al., 2014; 2016) and solve for the per-frame Lagrangian multiplier λ. Other concurrent works adopt simple heuristics for coarse bit allocation (Cetin et al., 2022; Hu et al., 2022).

Most recently, BAO (Bit Allocation using Optimization) (Xu et al., 2022) proposes to formulate bit allocation as semi-amortized variational inference (SAVI) (Kim et al., 2018; Marino et al., 2018) and solves it by gradient-based optimization. Specifically, it directly optimizes the variational posterior parameters to be quantized and encoded by gradient ascent, aiming at maximizing the negative overall R-D cost, which is also the evidence lower bound (ELBO). BAO does not rely on any empirical R-D model and thus outperforms previous work. Further, BAO shows its optimality by proving its equivalence to bit allocation with a precise R-D model.

In this paper, we first show that BAO (Xu et al., 2022) is, in fact, sub-optimal due to its implementation. Specifically, we find that it abuses SAVI (Kim et al., 2018; Marino et al., 2018) on latent with non-factorized variational posterior, which brings incorrect gradient signals during optimization. To solve this problem, we first extend SAVI to non-factorized latent by back-propagating through gradient ascent (Domke, 2012). Then, based on that, we correct the sub-optimal bit allocation in BAO to produce the true optimal bit allocation for NVC. Furthermore, we propose a computationally feasible approximation to this correct but intractable bit allocation method. We show that our approximation outperforms the incorrect bit allocation (BAO) in terms of R-D performance and bitrate error, and performs better than all other bit allocation methods.
To summarize, our contributions are as follows:
• We demonstrate that a previously claimed optimal bit allocation method is actually sub-optimal. We find that its sub-optimality comes from the improper application of SAVI to non-factorized latent.
• We present the correct way to conduct SAVI on non-factorized latent by recursively applying back-propagation through gradient ascent. Based on this, we derive the corrected optimal bit allocation algorithm for NVC.
• Furthermore, we propose a computationally efficient approximation of the optimal bit allocation to make it feasible. Our proposed approach improves the R-D performance and bitrate error over the incorrect bit allocation (BAO), and outperforms all other bit allocation methods for NVC.

2 PRELIMINARIES

2.1 NEURAL VIDEO COMPRESSION

The input of NVC is a GoP (Group of Pictures) x_{1:T}, where x_i ∈ R^{H×W} is the ith frame with H × W pixels, and T is the number of frames inside the GoP. Most works in NVC follow a latent variable model with temporal autoregressive relationships (Yang et al., 2020a). Specifically, to encode x_i, we first extract the motion latent w_i = f^w_ϕ(x_i, x′_{i−1}) from the current frame x_i and the previous reconstructed frame x′_{i−1}, where f^w_ϕ(·) is the motion encoder parameterized by ϕ¹. Then, we encode the quantized latent w̃_i = ⌊w_i⌉ with the probability mass function (pmf) estimator P_θ(w̃_i|w̃_{<i}, ỹ_{<i}) parameterized by θ, where ⌊·⌉ is rounding. Then, we obtain the residual latent y_i = f^y_ϕ(x, x′, w̃), where f^y_ϕ(·) is the residual encoder. Then, similar to how we treat w_i, we encode the quantized latent ỹ_i = ⌊y_i⌉ with the pmf P_θ(ỹ_i|w̃_{≤i}, ỹ_{<i}). Finally, we obtain the reconstructed frame x′_i = g^x_θ(x′_{i−1}, w̃_i, ỹ_i), where g^x_θ(·) is the decoder parameterized by θ.

As only the motion latent w̃_i and the residual latent ỹ_i exist in the bitstream, the above process can be simplified as Eq. 1 and Eq. 2, where f_ϕ(·) is the generalized encoder and g_θ(·) is the generalized decoder. The target of NVC is to minimize the per-frame R-D cost R_i + λ_i D_i (Eq. 3), where R_i is the bitrate, D_i is the distortion, and λ_i is the Lagrangian multiplier controlling the R-D trade-off. The bitrate R_i and the distortion D_i are computed as in Eq. 2, where d(·, ·) is the distortion metric. And λ_i D_i can further be interpreted as the data likelihood term −log p_θ(x_i|w̃_{≤i}, ỹ_{≤i}), so long as we treat λ_i D_i as the energy function of a Gibbs distribution (Minnen et al., 2018). Specifically, when d(·, ·) is MSE, we can interpret λ_i D_i = −log p_θ(x_i|w̃_{≤i}, ỹ_{≤i}) + const, where p_θ(x_i|w̃_{≤i}, ỹ_{≤i}) is a Gaussian distribution N(x̂_i, 1/2λ_i I).

w_i = f_\phi(x_i, \tilde{w}_{<i}, \tilde{y}_{<i}), \quad y_i = f_\phi(x_i, \tilde{w}_{\le i}, \tilde{y}_{<i}), \quad \text{where } \tilde{w}_i = \lfloor w_i \rceil, \tilde{y}_i = \lfloor y_i \rceil   (1)

R_i = -\log P_\theta(\tilde{w}_i, \tilde{y}_i \mid \tilde{w}_{<i}, \tilde{y}_{<i}), \quad D_i = d(x_i, g_\theta(\tilde{w}_{\le i}, \tilde{y}_{\le i}))   (2)

\max -(R_i + \lambda_i D_i)   (3)

On the other hand, NVC is also closely related to the Variational Autoencoder (VAE) (Kingma & Welling, 2013). As the rounding ⌊·⌉ is not differentiable, Ballé et al. (2016); Theis et al. (2017) propose to relax it by additive uniform noise (AUN) and replace w̃_i = ⌊w_i⌉, ỹ_i = ⌊y_i⌉ with w̃_i = w_i + U(−0.5, 0.5), ỹ_i = y_i + U(−0.5, 0.5). Under such a formulation, the above encoding-decoding process becomes a VAE on the graphical model w̃_{≤i}, ỹ_{≤i} → x_i with the variational posterior as in Eq. 4, where w_i, y_i play the role of variational posterior parameters. Then, minimizing the overall R-D cost (Eq. 3) is equivalent to maximizing the evidence lower bound (ELBO) (Eq. 5).
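For illustration, the AUN relaxation amounts to one line of PyTorch; this is a generic sketch, not code from any specific codec:

```python
import torch

def aun_relax(latent):
    # Additive-uniform-noise (AUN) relaxation of rounding (Balle et al., 2016):
    # replaces round(latent) with latent + U(-0.5, 0.5) during training, which
    # makes quantization differentiable and yields the uniform posterior of Eq. 4.
    return latent + torch.empty_like(latent).uniform_(-0.5, 0.5)
```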
qϕ(w̃i|xi, w̃<i, ỹ<i) = U(wi − 0.5,wi + 0.5), qϕ(ỹi|xi, w̃≤i, ỹ<i) = U(yi − 0.5,yi + 0.5) (4) −(Ri + λiDi) = Eqϕ [logPθ(w̃i, ỹi|w̃<i, ỹ<i)︸ ︷︷ ︸ −Ri + log pθ(xi|w̃≤i, ỹ≤i)︸ ︷︷ ︸ −λiDi − log qϕ︸ ︷︷ ︸ bits-back bitrate: 0 ] (5) 2.2 BIT ALLOCATION FOR NEURAL VIDEO COMPRESSION It is well known to video coding community that using the same R-D trade-off parameter λi to optimize R-D cost in Eq. 3 for all T frames inside a GoP is suboptimal (Li et al., 2014; 2016). This sub-optimality comes from the frame reference structure and is explained in detail by Li et al. (2022); Xu et al. (2022). The target of bit allocation is to maximize the minus of overall R-D cost 1Following previous works in deep generative modeling (Kingma & Welling, 2013; Kim et al., 2018), we denote all parameters related to encoder as ϕ, and all parameters related to decoder and prior as θ. (ELBO) L as Eq. 6 given the overall R-D trade-off parameter λ0, instead of maximizing Li of each frame i separately. The pioneer work of bit allocation in NVC (Li et al., 2022) follows bit allocation for traditional video codec (Li et al., 2016). Specifically, it adopts empirical models to approximate the relationship of the rate dependency ∂Ri+1/∂Ri and distortion dependency ∂Di+1/∂Di between frames. Then it takes those models into Eq. 6 to solve λ∗1:T explicitly as Eq. 7.left. However, its performance heavily relies on the accuracy of empirical models. maxL = T∑ i=1 Li, where Li = −(Ri + λ0Di) (6) λ∗1:T ← argmax λ1:T L(λ1:T ), versus w∗1:T ,y∗1:T ← arg max w1:T ,y1:T L(w1:T ,y1:T ) (7) On the other hand, BAO (Xu et al., 2022) does not solve λ∗1:T explicitly. Instead, it adopts SAVI (Kim et al., 2018; Marino et al., 2018) to achieve implicit bit allocation. To be specific, it initializes the variational posterior parameter w01:T ,y 0 1:T from fully amortized variational inference (FAVI) as Eq. 1. Then, it optimizes w1:T ,y1:T via gradient ascent to maximize L as Eq. 7.right. During this procedure, no empirical model is required. BAO further proofs that optimizing Eq. 7.right is equivalent to optimizing Eq. 7.left with precise rate and distortion dependency model ∂Ri+1/∂Ri, ∂Di+1/∂Di (See Thm. 1, Thm. 2 in Xu et al. (2022)). Thus, BAO claims that it is optimal assuming gradient ascent achieves global maximum. However, in next section, we show that BAO (Xu et al., 2022) is in fact suboptimal due to its implementation. 3 WHY BAO IS SUP-OPTIMAL BAO (Xu et al., 2022) achieves the SAVI (Kim et al., 2018; Marino et al., 2018) target in Eq. 7.right by gradient-based optimization. More specifically, its update rule is described as Eq. 8 and Eq. 9, where K is the total number of gradient ascent steps, and wki ,y k i is the posterior parameter wi,yi after k steps of gradient ascent. In the original paper of BAO, the authors also find that directly optimizing wi,yi simultaneously by Eq. 8 and Eq. 9 performs worse than optimizing yi alone using Eq. 9, but they have not offered any explanation. It is obvious that optimizing yi alone is sub-optimal. However, it is not obvious why jointly optimizing wi,yi with Eq. 8 and Eq. 9 fails. wk+1i ← w k i + α dL(wk1:T ,yk1:T ) dwki , where dL(wk1:T ,yk1:T ) dwki = T∑ j=i ∂Lj(wk1:j ,yk1:j) ∂wki (8) yk+1i ← y k i + α dL(wk1:T ,yk1:T ) dyki , where dL(wk1:T ,yk1:T ) dyki = T∑ j=i ∂Lj(wk1:j ,yk1:j) ∂yki (9) In fact, the update rule in Eq. 8 and Eq. 9 is exactly the SAVI (Kim et al., 2018; Marino et al., 2018) when wi,yi fully factorizes (e.g. the full factorization used in mean-field (Blei et al., 2017)). 
However, in NVC the wi,yi has complicated auto-regressive relationships (See Eq. 1 and Fig. 1.(a)). Abusing SAVI on non-factorized latent causes gradient error in two aspects: (1). The total derivative dL/dwi, dL/dyi is incomplete. (2). The total derivative dL/dwi, dL/dyi and partial derivative ∂Lj/∂wi, ∂Lj/∂yi is evaluated at wrong value. In next two sections, we elaborate those two issues with wi related equations in main text and yi related equations in Appendix. A.2. 3.1 INCOMPLETE TOTAL DERIVATIVE EVALUATION According to the latent generation procedure described by Eq. 1 and Eq. 2, we draw the computational graph to describe the latent dependency as Fig. 1.(a). Based on that, we expand the total derivative dL/dwi, dL/dyi as Eq. 10 and Eq. 22. dL(w1:T ,y1:T ) dwi = T∑ j=i dLj(w1:j ,y1:j) dwi dLj(w1:j ,y1:j) dwi = j∑ l=i+1 ∂wl ∂wi dLj(w1:j ,y1:j) dwl + j∑ l=i ∂yl ∂wi dLj(w1:j ,y1:j) dyl︸ ︷︷ ︸ ignored by BAO + ∂Lj(w1:j ,y1:j) ∂wi︸ ︷︷ ︸ considered by BAO (10) As shown in Eq. 8, Eq. 9 and Fig. 1.(b), BAO (Xu et al., 2022) treats the total derivative dL/dwi, dL/dyi as the sum of the frame level partial derivative ∂Lj/∂wi, ∂Lj/∂yi, which is the direct contribution of frame ith latent wi,yi to jth frame’s R-D cost Lj (as marked in Eq. 10 and Eq. 22). This incomplete evaluation of gradient signal brings sub-optimality. Further, it is not possible to correct BAO by simply including other parts of gradient into consideration. As BAO jointly updates all the latent w1:T ,y1:T , the relationship of Eq. 2 only holds for the initial latent parameters w01:T ,y 0 1:T produced by FAVI. And this important relationship is broken for parameters w k 1:T ,y k 1:T after k ≥ 1 steps of update. 3.2 INCORRECT VALUE TO EVALUATE GRADIENT As shown in Eq. 8 and Eq. 9, BAO (Xu et al., 2022) simultaneously updates all the posterior parameter w1:T ,y1:T with gradient evaluated at the same gradient ascent step wk1:T ,y k 1:T . However, as we show later in Sec. 4.1 and Fig. 1.(c), this is sub-optimal as all the descendant latent w>i,y≥i of wi should already complete all K steps of gradient ascent before the gradient of wi is evaluated. Moreover, w>i,y≥i should be initialized by FAVI using precedents latent. Similar rule applies to yi. Specifically, the correct value to evaluate the gradient is as Eq. 11 and Eq. 23, where wkii denotes the latent wi after ki steps of update, and y k′j i denotes the latent yi after k ′ i steps of update. wki+1i ← w ki i + α dL(wk11 , ...,w ki i ,w K >i,y k′1 1 , ...,y k′i−1 i−1 ,y K ≥i) dwkii , where w0>i,y 0 ≥i = f(x,w k1 1 , ...,w ki i ,y k′1 1 , ...,y k′i−1 i−1 ) (11) Similar to the incomplete total derivative evaluation, this problem does not have a simple solution. In next section, we show how to correct both of the above-mentioned issues by recursively applying back-propagating through gradient ascent (Domke, 2012). 4 CORRECTING THE SUB-OPTIMAL BIT ALLOCATION In this section, we first extend the generic SAVI Kim et al. (2018); Marino et al. (2018) to 2-level non-factorized latent. Then we further extend this result to latent with any dependency that can be described by a DAG (Directed Acyclic Graph). And finally, we correct the sub-optimal bit allocation by applying the result in DAG latent to NVC. 4.1 SAVI ON 2-LEVEL NON-FACTORIZED LATENT In this section, we extend the SAVI on 1-level latent (Kim et al., 2018) to 2-level non-factorized latent. 
We denote x as evidence, a as the variational posterior parameter of the first level latent ã, b as the variational posterior parameter of the second level latent b̃, and the ELBO to maximize as L(a, b). The posterior q(ã, b̃|x) factorizes as q(ã|x)q(b̃|ã,x), which means that b depends on a. Given a is fixed, we can directly follow Kim et al. (2018); Marino et al. (2018) to optimize b to maximize ELBO by SAVI. However, it requires some tricks to optimize a. Algorithm 1: SAVI on 2-level Latent 1 procedure solve-2-level(x,ak) 2 initialize a0 ← f(x) from FAVI 3 for k = 0, ...,K − 1 do 4 dL(ak,bK) dak = grad-2-level(x,ak) 5 ak+1 ← ak + αdL(a k,bK) dak 6 return aK , bK 7 procedure grad-2-level(x,ak) 8 b0 ← f(x,ak) from FAVI 9 for k′ = 0, ...,K − 1 do 10 bk ′+1 ← bk′ + αdL(a k,bk ′ ) dbk′ 11 ←−a ← ∂L(a k,bK) ∂ak 12 ←− bK ← dL(a k,bK) dbK 13 for k′ = K − 1, ..., 0 do 14 ←−a ←←−a + α∂ 2L(ak,bk ′ ) ∂ak∂bk′ ←−−− bk ′+1 15 ←− bk ′ ← ←− bk ′ + α∂ 2L(ak,bk ′ ) ∂bk′∂bk′ ←−−− bk ′+1 16 ←−a =←−a + ∂b 0 ∂ak ←− b0 17 return dL(a k,bK) dak =←−a Algorithm 2: SAVI on DAG Latent 1 procedure solve-dag(x) 2 sort a1, ...,aN in topological order 3 for aj with parent P(aj) = ∅ 4 add aj to fake node a0’s children C(a0) 5 grad-dag(x,a00) 6 return aK1 , ...,aKN 7 procedure grad-dag(x,ak00 , ...,a ki i ) 8 for aj ∈ C(ai) in topological order do 9 a0j ← f(x,a k0 0 , ...,a k<j <j ) from FAVI 10 for kj = 0, ...,K − 1 do 11 dL(ak00 ,...,a kj j ,a K >j) da kj j ← grad-dag(x,ak00 , ...,a kj j ) 12 a kj+1 j ← a kj j + α dL(ak00 ,...,a kj j ,a K >j) da kj j 13 ←−ai ← ∂L(ak00 ,...,a ki i ,a K >i) ∂a ki i 14 for aj ∈ C(ai) do 15 ←−aj ← 0, ←− aKj ← dL(ak00 ,...,a ki i ,a K >i) daKj 16 for kj = K − 1, ..., 0 do 17 ←−aj ←←−aj + α ∂2L(ak00 ,...,a kj j ,a K >j) ∂a ki i ∂a kj j ←−−− a kj+1 j 18 ←− a kj j ← ←−−− a kj+1 j + α ∂2L(ak00 ,...,a kj j ,a K >j) ∂a kj j ∂a kj j ←−−− a kj+1 j 19 ←−ai ←←−ai +←−aj + ∂a0j ∂a ki i ←− a0j 20 returndL(a k0 0 ,...,a ki i ,a K >i) da ki i =←−ai The intuition is, we do not want to find a a that maximizes L(a, b) given a fixed b (or we have the gradient issue described in Sec. 3). Instead, we want to find a a, whose maxb L(a, b) is maximum. This translates to the optimization problem as Eq. 12. In fact, Eq. 12 is a variant of setup in backpropagating through gradient ascent (Samuel & Tappen, 2009; Domke, 2012). The difference is, our a also contributes directly to optimization target L(a, b). From this perspective, Eq. 12 is more closely connected to Kim et al. (2018), if we treat a as the model parameter and b as latent. a← argmax a L(a, b∗(a)), where b∗(a)← argmax b L(a, b) (12) And as SAVI on 1-level latent (Kim et al., 2018; Marino et al., 2018), we need to solve Eq. 12 using gradient ascent. Specifically, denote α as step size (learning rate), K as the total gradient ascent steps, ak as the a after k step update, bk ′ as the b after k′ step update, and f(.) as FAVI procedure generating initial posterior parameters a0, b0, the optimization problem as Eq. 12 translates into the update rule as Eq. 13. Eq. 13 is the guidance for designing optimization algorithm, and it also explains why the gradient of BAO (Xu et al., 2022) is evaluated at wrong value (See Sec. 3.2). ak+1 ← ak + αdL(a k, bK) dak , bk ′+1 ← bk ′ + α dL(ak, bk′) dbk′ , where b0 = f(x,ak) (13) To solve Eq. 13, we note that although dL(ak, bk′)/dbk′ is directly computed, dL(ak, bK)/dak is not straightforward. Resorting to previous works (Samuel & Tappen, 2009; Domke, 2012) in implicit differentiation and extending the results in Kim et al. 
(2018) from model parameters to variational posterior parameters, we implement Eq. 13 as Alg. 1. Specifically, we first initialize a0 from FAVI. Then we conduct gradient ascent on a with gradient dL(ak, bK)/dak computed from the procedure grad-2-level(x,ak). And inside grad-2-level(x,ak), b is also updated by gradient ascent, the above procedure corresponds to Eq. 13. The key of Alg. 1 is the evaluation of gradient dL(ak, bK)/dak. Formally, we have: Theorem 1. After grad-2-level(x,ak) of Alg. 1 executes, we have the return value dL(ak, bK)/dak =←−a . (See proof in Appendix. A.1.) 4.2 SAVI ON DAG-DEFINED NON-FACTORIZED LATENT In this section, we extend the result from previous section to SAVI on general non-factorized latent with dependency described by any DAG. This DAG is the computational graph during network inference, and it is also the directed graphical model (DGM) (Koller & Friedman, 2009) defining the factorization of latent variables during inference. This is the general case covering all dependency that can be described by DGM. This extension is necessary to perform SAVI on latent with complicated dependency (e.g. bit allocation of NVC). Similar to the 2-level latent setup, we consider performing SAVI on N variational posterior parameter a1, ...,aN with their dependency defined by a computational graph G, i.e., their corresponding latent variable ã1, ..., ãN ’s posterior distribution factorizes as G. Specifically, we denote aj ∈ C(ai),ai ∈ P(aj) if an edge exists from ai to aj . This indicates that ãj conditions on ãi. Without loss of generality, we assume a1, ...,aN is sorted in topological order. This means that if aj ∈ C(ai),ai ∈ P(aj), then i < j. Each latent is optimized by K-step gradient ascent, and akii denotes the latent ai after ki steps of update. Then, similar to 2-level latent, we have the update rule as Eq. 14: aki+1i ← a ki i + α dL(ak11 , ...,a ki i ,a K >i) daki , where a0>i = f(x,a k1 1 , ...,a ki i ) (14) , which can be translated into Alg. 2. Specifically, we first sort the latent in topological order. Then, we add a fake latent a0 to the front of all as. Its children are all the as with 0 in-degree. Then, we can solve the SAVI on a1, ...,aN using gradient ascent by executing the procedure graddag(x,ak00 , ...,a ki i ) in Alg. 2 recursively. Inside procedure grad-dag(x,a k0 0 , ...,a ki i ), the gradient to update ai relies on the convergence of its children aj ∈ C(ai), which is implemented by the recursive depth-first search (DFS) in line 11. And upon the completion of procedure grad-dag(x,a00), all the latent converges to aK1 , ...,a K N . Similar to the 2-level latent case, the key of Alg. 2 is the evaluation of gradient dL(ak00 , ...,a ki i ,a K >i)/da ki i . Formally, we have: Theorem 2. After the procedure grad-dag(x,ak00 , ...,a ki i ) in Alg. 2 executes, we have the return value dL(ak00 , ...,a ki i ,a K >i)/da ki i = ←−ai. (See proof in Appendix. A.1.) To better understand how Alg. 2 works, we provide a detailed example in Fig. 5 of Appendix. A.3. 4.3 CORRECTING THE SUB-OPTIMAL BIT ALLOCATION USING SAVI ON DAG With the result in previous section, correcting BAO (Xu et al., 2022) seems to be trivial. We only need to sort the latent in topological order as w1,y1, ...,wT ,yT , treat them as a1, ...,a2T+1 and run Alg. 2 to obtain the optimized latent parameters wK1 ,y K 1 , ...,w K T ,y K T . And the gradient dL(ak00 , ...,a ki i ,a K >i)/da ki i computed in Alg. 2 resolves the issue of BAO described in Sec. 3.1 and Sec. 3.2. 
However, an evident problem is the temporal complexity. Given the number of latents N and the number of gradient ascent steps K, Alg. 2 has temporal complexity Θ(K^N). NVC with GoP size 10 has approximately N = 20 latents, and SAVI on NVC (Xu et al., 2022) takes around K = 2000 steps to converge. For bit allocation, the complexity of Alg. 2 is therefore ≈ 2000^20, which is intractable. On the other hand, BAO's complexity is reasonable (Θ(KN) ≈ 4 × 10^4). Thus, in the next section, we provide a feasible approximation to this intractable corrected bit allocation.

4.4 FEASIBLE APPROXIMATION TO THE CORRECTED BIT ALLOCATION

In order to solve problems of practical size such as bit allocation on NVC, we provide an approximation to the SAVI (Kim et al., 2018; Marino et al., 2018) on DAG described in Sec. 4.2. The general idea is that, when applied to bit allocation of NVC, the accurate SAVI on DAG (Alg. 2) satisfies both requirements on the gradient signal described in Sec. 3.1 and Sec. 3.2, and we cannot make it tractable without breaking one of them. Thus, we break one of them to achieve a reasonable complexity while maintaining superior performance compared with BAO (Xu et al., 2022). We consider the approximation in Eq. 15, which breaks the requirement for gradient evaluation in Sec. 3.2. Based on Eq. 15 and the requirement in Sec. 3.1, we design an approximation of the accurate SAVI as Alg. 4. When applied to bit allocation in NVC, it satisfies the gradient requirement in Sec. 3.1 while maintaining a temporal complexity of Θ(KN), the same as BAO.

$$\frac{d\mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^K)}{da_i^{k_i}} \approx \frac{d\mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^0)}{da_i^{k_i}} \qquad (15)$$

Specifically, with the approximation in Eq. 15, the recurrent gradient computation in Alg. 2 becomes unnecessary, as the right-hand side of Eq. 15 does not require a_{>i}^K. However, to maintain the dependency of the latents described in Sec. 3.1, as in Alg. 2 we still need to ensure that the children nodes a_j ∈ C(a_i) are re-initialized by FAVI every time a_i is updated. Therefore, a reasonable approach is to traverse the graph in topological order: we keep a child node a_j untouched until the gradient ascent of all of its parent nodes a_i ∈ P(a_j) is complete and a_i^K is known. The resulting approximate SAVI algorithm is Alg. 4. When applied to bit allocation, it satisfies the gradient requirement in Sec. 3.1, and, like BAO, its temporal complexity is Θ(KN).

Algorithm 3: BAO on DAG Latent
1   procedure solve-bao(x)
2       a_1^0, ..., a_N^0 ← f(x) from FAVI
3       for k = 0, ..., K-1 do
4           for i = 1, ..., N do
5               a_i^{k+1} ← a_i^k + α ∂L(a_1^k, ..., a_N^k)/∂a_i^k
6       return a_1^K, ..., a_N^K

Algorithm 4: Approximate SAVI on DAG Latent
1   procedure solve-approx-dag(x)
2       sort a_1, ..., a_N in topological order
3       for i = 1, ..., N do
4           a_i^0, ..., a_N^0 ← f(x, a_{<i}^K) from FAVI
5           for k = 0, ..., K-1 do
6               dL(a_{<i}^K, a_i^k, a_{>i}^K)/da_i^k ≈ dL(a_{<i}^K, a_i^k, a_{>i}^0)/da_i^k
7               a_i^{k+1} ← a_i^k + α dL(a_{<i}^K, a_i^k, a_{>i}^K)/da_i^k
8       return a_1^K, ..., a_N^K

To better understand BAO (Xu et al., 2022) in the SAVI context, we rewrite it in general SAVI notation instead of NVC notation in Alg. 3. We highlight the differences between BAO (Alg. 3) (Xu et al., 2022), the accurate SAVI on DAG latent (Alg. 2), and the approximate SAVI on DAG latent (Alg. 4) in several aspects (a code sketch of the approximate traversal follows the list):

• Graph Traversal Order: BAO performs gradient ascent on a_{1:T} all together. The accurate SAVI only updates a_i when a_{>i}'s update is complete and a_{>i}^K is known. The approximate SAVI only updates a_i when a_{<i}'s update is complete and a_{<i}^K is known.
• Gradient Correctness: When applied to bit allocation in NVC, BAO violates the gradient rules of Sec. 3.1 and Sec. 3.2; accurate SAVI satisfies both rules; approximate SAVI satisfies Sec. 3.1 and violates Sec. 3.2.
• Temporal Complexity: With latent number N and gradient ascent steps K, the complexity of BAO is Θ(KN), the complexity of accurate SAVI is Θ(K^N), and the complexity of approximate SAVI is Θ(KN).
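The following is a minimal PyTorch sketch of one straightforward reading of Alg. 4. Everything here is a hypothetical stand-in: favi(x, fixed) plays the role of f, elbo plays the role of L, and latents are plain tensors keyed by name; descendants are re-initialized by FAVI at every update, per the Sec. 3.1 requirement.

```python
import torch

def approx_savi_dag(x, names, favi, elbo, K=400, alpha=1e-3):
    """Sketch of Alg. 4: visit latents in topological order; freeze a_<i at
    their optimized values, re-initialize descendants a_>i by FAVI, and run
    K ascent steps on a_i alone (overall cost Theta(KN), as for BAO)."""
    done = {}                                  # name -> optimized a_i^K
    for i, name in enumerate(names):           # names already topologically sorted
        init = favi(x, done)                   # a_i^0, ..., a_N^0 = f(x, a_<i^K)
        a = init[name].detach().clone().requires_grad_(True)
        for _ in range(K):
            cur = dict(done)
            cur[name] = a
            desc = favi(x, cur)                # re-init a_>i by FAVI (Sec. 3.1)
            cur.update({n: desc[n] for n in names[i + 1:]})
            (g,) = torch.autograd.grad(elbo(cur), a)
            a = (a + alpha * g).detach().requires_grad_(True)   # ascent step
        done[name] = a.detach()
    return done

# Toy usage with two chained latents: b's FAVI init depends on a.
def favi(x, fixed):
    a = fixed.get("a", x.mean() * torch.ones(1))
    return {"a": a, "b": 0.5 * a}

def elbo(lat):
    return -((lat["a"] - 1.0) ** 2 + (lat["b"] - lat["a"]) ** 2).sum()

out = approx_savi_dag(torch.randn(8), ["a", "b"], favi, elbo, K=100, alpha=1e-2)
print(out)
```

Because favi is kept differentiable in the current latent, the gradient also flows through the FAVI-initialized descendants, matching the total derivative on the right-hand side of Eq. 15.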
Then we can simply apply Alg. 4 to bit allocation in NVC to obtain a feasible approximation of the corrected optimal bit allocation. In Sec. 6.2, we empirically show that our approximation improves the R-D performance over BAO (Xu et al., 2022) with an even smaller number of updates.

5 RELATED WORK: BIT ALLOCATION & SAVI FOR NEURAL COMPRESSION

Li et al. (2022) are the pioneers of bit allocation for NVC, and their work is elaborated in Sec. 2.2. Other recent works that consider bit allocation for NVC only adopt simple heuristics, such as inserting one high-quality frame every 4 frames (Hu et al., 2022; Cetin et al., 2022). On the other hand, OEU (Lu et al., 2020) is also recognised as frame-level bit allocation, although its performance is inferior to BAO (Xu et al., 2022). BAO is the most recent work with the best R-D performance; it is elaborated in Sec. 2.2 and Sec. 3, and corrected in the previous section.

Semi-Amortized Variational Inference (SAVI) is proposed by Kim et al. (2018); Marino et al. (2018). The idea is that works following Kingma & Welling (2013) use a fully amortized inference parameter ϕ for all data, which leads to the amortization gap (Cremer et al., 2018). SAVI reduces this gap by optimizing the variational posterior parameters after initializing them with the inference network. It adopts back-propagating through gradient ascent (Domke, 2012) to evaluate the gradient of the model parameters. We adopt a similar method to extend SAVI to non-factorized latent. When applying SAVI to practical neural codecs, researchers abandon the nested model parameter update for efficiency. Prior works (Djelouah & Schroers, 2019; Yang et al., 2020b; Zhao et al., 2021; Gao et al., 2022) adopt SAVI to boost R-D performance and achieve variable bitrate in image compression. BAO (Xu et al., 2022) is the first to consider SAVI for bit allocation.

6 EXPERIMENTS

6.1 EXPERIMENTAL SETTINGS

We implement our approach in PyTorch 1.9 with CUDA 11.2 and run the experiments on an NVIDIA A100 GPU. Most other settings are intentionally kept the same as in BAO (Xu et al., 2022). Specifically, we adopt the HEVC Common Test Conditions (CTC) (Bossen et al., 2013) and the UVG dataset (Mercat et al., 2020), and we measure R-D performance in Bjontegaard Bitrate (BD-BR) and BD-PSNR (Bjontegaard, 2001). For the baseline NVCs (Lu et al., 2019; Li et al., 2021), we adopt the official pre-trained models, and we select target λ_0 = {256, 512, 1024, 2048}. For gradient ascent, we adopt the Adam (Kingma & Ba, 2014) optimizer with lr = 1 × 10^-3. We set the number of gradient ascent steps to K = 2000 for the first frame and K = 400 for the other frames. More details are presented in Appendix A.5.
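Since BD-BR is the headline metric of the next subsection, here is a small numpy sketch of the standard Bjontegaard delta-rate computation (Bjontegaard, 2001). This is our own illustration of the metric, not the evaluation script used for the paper's numbers, and the sample rate/PSNR points are made up.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """BD-BR in percent: average bitrate change of the test R-D curve
    relative to the anchor at equal PSNR; negative means bitrate saving."""
    # Fit log-rate as a cubic polynomial of PSNR for each curve.
    p_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log(rate_test), 3)
    # Integrate both fits over the overlapping PSNR range.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_log_diff) - 1) * 100

# Example with 4 rate points per curve (one per target lambda):
print(bd_rate([0.10, 0.15, 0.22, 0.33], [30.1, 31.4, 32.6, 33.8],
              [0.08, 0.12, 0.18, 0.28], [30.2, 31.5, 32.7, 33.9]))
```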
6.2 QUANTITATIVE RESULTS

As shown in Tab. 1, our method consistently improves the R-D performance in terms of BD-BR over BAO (Xu et al., 2022) on both baseline methods and all datasets. Moreover, this improvement is especially significant (more than 10% in BD-BR) when the baseline is DCVC (Li et al., 2021). Both BAO and our proposed correction significantly outperform the other approaches.

It is also noteworthy that with our bit allocation, DVC (the SOTA method of 2019) already outperforms DCVC (the SOTA method of 2021) by a large margin (see the red solid line and the black dashed line in Fig. 2).

Table 1: The BD-BR (%) ↓ of our approach compared with others. ¹ comes from Li et al. (2022). ² comes from Xu et al. (2022).

DVC (Lu et al., 2019) as baseline:
| Method | Class B | Class C | Class D | Class E | UVG |
| Li et al. (2016)¹ | 20.21 | 17.13 | 13.71 | 10.32 | 16.69 |
| Li et al. (2022)¹ | -6.80 | -2.96 | 0.48 | -6.85 | -4.12 |
| OEU (Lu et al., 2020)² | -13.57 | -11.29 | -18.97 | -12.43 | -13.78 |
| BAO (Xu et al., 2022)² | -28.55 | -26.82 | -25.37 | -32.54 | -27.68 |
| Proposed | -32.10 | -31.71 | -35.86 | -32.93 | -30.92 |

DCVC (Li et al., 2021) as baseline:
| Method | Class B | Class C | Class D | Class E | UVG |
| OEU (Lu et al., 2020)² | -10.75 | -14.34 | -16.30 | -7.15 | -16.07 |
| BAO (Xu et al., 2022)² | -20.59 | -19.69 | -20.60 | -23.33 | -25.22 |
| Proposed | -32.89 | -33.10 | -32.01 | -36.88 | -39.66 |

Figure 2: The R-D curve on HEVC Class D.

Beyond R-D performance, the bitrate error of our approach is also significantly smaller than that of BAO (Xu et al., 2022) (see Tab. 2). The bitrate error is measured as the relative bitrate difference before and after bit allocation; the smaller it is, the easier it is to hit the desired bitrate accurately. As for complexity, our approach only performs 920 steps of gradient ascent per frame on average, while BAO requires 2000 steps. See more quantitative results (BD-PSNR & R-D curves) in Appendix A.6.

6.3 ABLATION STUDY, ANALYSIS & QUALITATIVE RESULTS

Tab. 3 shows that for BAO (Xu et al., 2022), jointly optimizing w_{1:T}, y_{1:T} performs worse than optimizing y_{1:T} or w_{1:T} alone. This counter-intuitive phenomenon comes from its incorrect estimation of the gradient signal. For the proposed approach, which corrects this, jointly optimizing w_{1:T}, y_{1:T} performs better than optimizing y_{1:T} or w_{1:T} alone, which is aligned with our intuition.

Table 2: The bitrate error (%) ↓ of our approach compared with BAO. ² comes from Xu et al. (2022).

DVC (Lu et al., 2019) as baseline:
| Method | Class B | Class C | Class D | Class E | UVG |
| BAO (Xu et al., 2022)² | 8.41 | 12.86 | 21.39 | 5.94 | 3.73 |
| Proposed | 3.16 | 4.27 | 1.81 | 6.14 | 1.73 |

DCVC (Li et al., 2021) as baseline:
| Method | Class B | Class C | Class D | Class E | UVG |
| BAO (Xu et al., 2022)² | 25.67 | 23.90 | 23.74 | 24.88 | 21.86 |
| Proposed | 4.27 | 7.29 | 5.73 | 8.03 | 3.06 |

Table 3: Ablation study with HEVC Class D and DVC (Lu et al., 2019).
| Method | BD-BR (%) ↓ |
| BAO (y) | -25.37 |
| BAO (w) | -22.24 |
| BAO (y, w) | -14.76 |
| Proposed (y) | -32.60 |
| Proposed (w) | -31.56 |
| Proposed (y, w) | -35.86 |

To better understand why our method works, we present the R-D cost, distortion, and rate versus frame/latent index for different methods in Fig. 3. The top-left plot shows that the R-D cost of our approach decreases consistently with the SAVI stage, and that it outperforms BAO after the 4th frame. The top-right plot shows that for each frame the R-D cost of our method is lower than BAO's. The bottom-left plot shows that the distortion part of the R-D cost of our approach is approximately the same as BAO's, while the bottom-right plot shows that the advantage of our approach over BAO lies in the bitrate: BAO increases the bitrate of the y_i's after SAVI, while our correction decreases it. See more analysis in Appendix A.9 and qualitative results in Appendix A.10.

7 DISCUSSION & CONCLUSION

Although our correction is already more efficient than the original BAO (Xu et al., 2022), its encoding speed remains far from real-time. Thus, it is limited to scenarios where R-D performance matters much more than encoding time (e.g., video on demand). See more discussion in Appendix A.11.
To conclude, we show that a previous bit allocation method for NVC is sub-optimal, as it abuses SAVI on non-factorized latent. We then propose the correct SAVI on general non-factorized latent by back-propagating through gradient ascent, and we further propose a feasible approximation to make it tractable for bit allocation. Experimental results show that our correction significantly improves the R-D performance.

ETHICS STATEMENT

Improving the R-D performance of NVC has positive social value, in terms of reducing carbon emissions by saving the resources required to transfer and store videos. Moreover, unlike traditional codecs such as H.266 (Bross et al., 2021), a neural video codec does not require dedicated hardware; it can instead be deployed on general neural accelerators. Improving the R-D performance of NVC promotes the practical deployment of video codecs that are independent of dedicated hardware, and lowers the hardware barrier to playing multimedia content.

REPRODUCIBILITY STATEMENT

For the theoretical results, both theorems are followed by proofs in Appendix A.1. For a relatively complicated novel algorithm (Alg. 2), we provide an illustration of its step-by-step execution in Appendix A.3. For the experiments, both datasets are publicly accessible, and in Appendix A.5 we provide more implementation details, including all hyper-parameters. Moreover, we provide our source code for reproducing the empirical results in the supplementary material.

A APPENDIX

A.1 PROOF OF THM. 1 AND THM. 2

Theorem 1. After the procedure grad-2-level(x, a^k) of Alg. 1 executes, the return value is dL(a^k, b^K)/da^k = ←a.

Proof. This proof extends the proof of Thm. 1 in Domke (2012), and it also serves as a formal justification of Alg. 1 in Kim et al. (2018). Note that our paper and Kim et al. (2018) differ subtly from Samuel & Tappen (2009); Domke (2012), as our high-level parameter a not only generates the low-level parameter b but also contributes directly to the optimization target (see Fig. 4). As the computational graph in Fig. 4 shows, we can expand dL(a^k, b^K)/da^k as Eq. 16, with the two factors of the sum solved by Eq. 18 and Eq. 19.

$$\frac{d\mathcal{L}(a^k, b^K)}{da^k} = \underbrace{\frac{\partial \mathcal{L}(a^k, b^K)}{\partial a^k}}_{\text{known}} + \sum_{k'=0}^{K} \underbrace{\frac{\partial b^{k'}}{\partial a^k}}_{\text{Eq. 18}} \underbrace{\frac{d\mathcal{L}(a^k, b^K)}{db^{k'}}}_{\text{Eq. 19}} \qquad (16)$$

To solve Eq. 16, we first note that ∂L(a^k, b^K)/∂a^k, dL(a^k, b^K)/db^K, and ∂b^0/∂a^k are directly available. Then, by taking the partial derivative of the gradient ascent update rule b^{k'+1} ← b^{k'} + α dL(a^k, b^{k'})/db^{k'} with respect to a^k and b^{k'}, we obtain Eq. 17 and Eq. 18. Note that Eq. 18 is the partial derivative ∂b^{k'+1}/∂a^k, not the total derivative db^{k'+1}/da^k = (∂b^{k'+1}/∂b^{k'})(db^{k'}/da^k) + ∂b^{k'+1}/∂a^k.

$$\frac{\partial b^{k'+1}}{\partial b^{k'}} = I + \alpha \frac{\partial^2 \mathcal{L}(a^k, b^{k'})}{\partial b^{k'} \partial b^{k'}} \qquad (17)$$

$$\frac{\partial b^{k'+1}}{\partial a^k} = \alpha \frac{\partial^2 \mathcal{L}(a^k, b^{k'})}{\partial a^k \partial b^{k'}} \qquad (18)$$

These second-order terms can either be evaluated directly or approximated via finite differences as in Eq. 20. As Eq. 18 provides the ∂b^{k'}/∂a^k factor on the right-hand side of Eq. 16, the remaining issue is dL(a^k, b^K)/db^{k'}. To solve this term, we expand it recursively as Eq. 19 and substitute Eq. 17 into it.

$$\frac{d\mathcal{L}(a^k, b^K)}{db^{k'}} = \frac{\partial b^{k'+1}}{\partial b^{k'}} \frac{d\mathcal{L}(a^k, b^K)}{db^{k'+1}} \qquad (19)$$

The above solving process is exactly the procedure grad-2-level(x, a^k) of Alg. 1. Specifically, the iterative update of ←b^{k'} in line 15 corresponds to recursively expanding Eq. 19 with Eq. 17, and the iterative update of ←a in line 14 corresponds to recursively expanding Eq. 16 with Eq. 18 and Eq. 19. Upon the return of grad-2-level(x, a^k) in Alg. 1, we have ←a = dL(a^k, b^K)/da^k.
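The Hessian-vector products of lines 14-15 can also be formed exactly with double backward, alongside the finite-difference approximation that Eq. 20 below describes. Here is a small PyTorch sketch of both, with elbo a hypothetical scalar stand-in for L over flat vectors; the mixed product (∂²L/∂a∂b)v is obtained analogously by differentiating g·v with respect to a.

```python
import torch

def elbo(a, b):
    # Hypothetical scalar stand-in for L(a, b).
    return -(a @ a) + (a @ b) - 1.5 * (b @ b)

def hvp_bb(a, b, v):
    # Exact (d^2 L / db db) v via double backward.
    b = b.detach().clone().requires_grad_(True)
    (g,) = torch.autograd.grad(elbo(a, b), b, create_graph=True)
    (hv,) = torch.autograd.grad(g @ v, b)   # differentiate g.v w.r.t. b
    return hv

def hvp_bb_fd(a, b, v, r=1e-4):
    # Finite-difference approximation of the same product (cf. Eq. 20).
    b1 = (b + r * v).detach().requires_grad_(True)
    b0 = b.detach().clone().requires_grad_(True)
    (g1,) = torch.autograd.grad(elbo(a, b1), b1)
    (g0,) = torch.autograd.grad(elbo(a, b0), b0)
    return (g1 - g0) / r

a, b, v = torch.randn(5), torch.randn(5), torch.randn(5)
print(torch.allclose(hvp_bb(a, b, v), hvp_bb_fd(a, b, v), atol=1e-2))
```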
The complexity of the Hessian-vector products in lines 14 and 15 of Alg. 1 may be reduced using finite differences, following Domke (2012), as in Eq. 20.

$$\frac{\partial^2 \mathcal{L}(a^k, b^{k'})}{\partial a^k \partial b^{k'}} v = \lim_{r \to 0} \frac{1}{r}\left(\frac{d\mathcal{L}(a^k, b^{k'} + rv)}{da^k} - \frac{d\mathcal{L}(a^k, b^{k'})}{da^k}\right), \quad \frac{\partial^2 \mathcal{L}(a^k, b^{k'})}{\partial b^{k'} \partial b^{k'}} v = \lim_{r \to 0} \frac{1}{r}\left(\frac{d\mathcal{L}(a^k, b^{k'} + rv)}{db^{k'}} - \frac{d\mathcal{L}(a^k, b^{k'})}{db^{k'}}\right) \qquad (20)$$

Theorem 2. After the procedure grad-dag(x, a_0^{k_0}, ..., a_i^{k_i}) in Alg. 2 executes, the return value is dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/da_i^{k_i} = ←a_i.

Proof. Consider computing the target gradient on the DAG G. The gradient of a_i^{k_i} is composed of its own contribution to L in addition to the gradients from its children a_j ∈ C(a_i). Further, as we consider the optimized children a_j^K, we expand each child node a_j as in Fig. 4. Then we have:

$$\frac{d\mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^K)}{da_i^{k_i}} = \underbrace{\frac{\partial \mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^K)}{\partial a_i^{k_i}}}_{\text{known}} + \sum_{a_j \in C(a_i)} \left( \sum_{k_j=0}^{K} \underbrace{\frac{\partial a_j^{k_j}}{\partial a_i^{k_i}}}_{\text{Eq. 18}} \underbrace{\frac{d\mathcal{L}(a_0^{k_0}, \ldots, a_{j-1}^{k_{j-1}}, a_{\geq j}^K)}{da_j^{k_j}}}_{\text{Eq. 19}} \right) \qquad (21)$$

The first term on the right-hand side of Eq. 21 can be evaluated trivially. The ∂a_j^{k_j}/∂a_i^{k_i} can be evaluated as in Eq. 18, and the dL(a_0^{k_0}, ..., a_{j-1}^{k_{j-1}}, a_{≥j}^K)/da_j^{k_j} can be expanded iteratively as in Eq. 19. We highlight several key differences between Alg. 2 and Alg. 1 that are reflected in the implementation of Alg. 2:

• The gradient evaluation of the current node a_i requires the gradients of its multiple direct children a_j ∈ C(a_i), instead of the single child in the 2-level case. The children-traversal part of Eq. 19 corresponds to the two extra for-loops in lines 8 and 14 of Alg. 2.
• The gradient ascent update of a child latent parameter a_j^{k_j+1} ← a_j^{k_j} + α dL(a_0^{k_0}, ..., a_j^{k_j}, a_{>j}^K)/da_j^{k_j} can be conducted trivially only if C(a_j) is empty; otherwise the gradient has to be evaluated recursively using Eq. 21. This part corresponds to the recursive call in line 11 of Alg. 2.

The other parts of Alg. 2 are the same as in Alg. 1, so the rest of the proof follows Thm. 1. Similarly, the Hessian-vector products in lines 17 and 18 of Alg. 2 may be approximated as in Eq. 20; however, this does not save Alg. 2 from an overall complexity of Θ(K^N).

A.2 THE COMPLETE FORMULAS FOR SEC. 3.1 AND SEC. 3.2

In this section, we provide the complete y_i-related gradient formulas for Sec. 3.1 and Sec. 3.2. Specifically, Eq. 22 is paired with Eq. 10, and Eq. 23 is paired with Eq. 11.

$$\frac{d\mathcal{L}(w_{1:T}, y_{1:T})}{dy_i} = \sum_{j=i}^{T} \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dy_i}, \quad \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dy_i} = \underbrace{\sum_{l=i+1}^{j}\left(\frac{\partial y_l}{\partial y_i}\frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dy_l} + \frac{\partial w_l}{\partial y_i}\frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dw_l}\right)}_{\text{ignored by BAO}} + \underbrace{\frac{\partial \mathcal{L}_j(w_{1:j}, y_{1:j})}{\partial y_i}}_{\text{considered by BAO}} \qquad (22)$$

$$y_i^{k'_i+1} \leftarrow y_i^{k'_i} + \alpha \frac{d\mathcal{L}(w_1^{k_1}, \ldots, w_i^{k_i}, w_{>i}^K, y_1^{k'_1}, \ldots, y_i^{k'_i}, y_{>i}^K)}{dy_i^{k'_i}}, \quad \text{where } w_{>i}^0, y_{>i}^0 = f(x, w_1^{k_1}, \ldots, w_i^{k_i}, y_1^{k'_1}, \ldots, y_i^{k'_i}) \qquad (23)$$

A.3 AN EXAMPLE OF THE EXECUTION OF ALG. 2

In this section, we provide an example of the full execution procedure of Alg. 2 in Fig. 5. The setup is shown in Fig. 5.(0): we have N = 3 latents a_1, a_2, a_3 and gradient ascent step count K = 2, connected by the DAG shown in the figure.

A.4 EXTENDING THE ANALYSIS OF SEC. 3 TO THE GENERAL DAG CASE

As Alg. 2 and Alg. 4 are applicable to general SAVI (Kim et al., 2018; Marino et al., 2018) beyond bit allocation, it is helpful for understanding their merits to extend the analysis of Sec. 3 from bit allocation to the general DAG scenario. In this section, we consider the same problem setup as Sec. 4.2. As in the bit allocation case, BAO has both the incomplete-gradient problem and the incorrect-gradient-value problem. The gradient-incompleteness issue is presented in Eq. 24, and the gradient-value issue in Eq. 25.

$$\frac{d\mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^K)}{da_i^{k_i}} = \underbrace{\frac{\partial \mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^K)}{\partial a_i^{k_i}}}_{\text{considered by BAO}} + \underbrace{\sum_{a_j \in C(a_i)}\left(\sum_{k_j=0}^{K} \frac{\partial a_j^{k_j}}{\partial a_i^{k_i}} \frac{d\mathcal{L}(a_0^{k_0}, \ldots, a_{j-1}^{k_{j-1}}, a_{\geq j}^K)}{da_j^{k_j}}\right)}_{\text{ignored by BAO}} \qquad (24)$$

$$\frac{\partial \mathcal{L}(a_0^{k_0}, \ldots, a_i^{k_i}, a_{>i}^K)}{\partial a_i^{k_i}} \approx \underbrace{\frac{\partial \mathcal{L}(a_0^{k_i}, \ldots, a_i^{k_i}, a_{>i}^{k_i})}{\partial a_i^{k_i}}}_{\text{approximation of BAO in gradient value}} \qquad (25)$$
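To see the two failure modes of Eq. 24 and Eq. 25 numerically, the toy script below (our own illustration, not an experiment from the paper) contrasts a BAO-style joint ascent with the corrected nested ascent of Alg. 1 on a 2-level objective whose FAVI initializer depends on a, and prints the objective reached by each. On this concave toy the two agree as the step count grows; the gap appears at finite K, which is the regime bit allocation operates in.

```python
import torch

def L(a, b):                      # toy concave objective with coupled latents
    return -(a - 3.0) ** 2 - 5.0 * (b - a) ** 2

def favi(a):                      # toy FAVI: the init of b depends on a
    return -a

def joint(K=6, alpha=0.05):       # BAO-style simultaneous partial-gradient steps
    a = torch.tensor(0.0, requires_grad=True)
    b = favi(a).detach().requires_grad_(True)
    for _ in range(K):
        ga, gb = torch.autograd.grad(L(a, b), (a, b))
        a = (a + alpha * ga).detach().requires_grad_(True)
        b = (b + alpha * gb).detach().requires_grad_(True)
    return L(a, b).item()

def inner(a, K=6, alpha=0.05):    # K inner ascent steps on b, kept in the graph
    b = favi(a)
    for _ in range(K):
        (gb,) = torch.autograd.grad(L(a, b), b, create_graph=True)
        b = b + alpha * gb
    return b

def nested(K=6, alpha=0.05):      # corrected nested ascent (Alg. 1 / Eq. 13)
    a = torch.tensor(0.0, requires_grad=True)
    for _ in range(K):
        (ga,) = torch.autograd.grad(L(a, inner(a, K, alpha)), a)  # full total derivative
        a = (a + alpha * ga).detach().requires_grad_(True)
    return L(a, inner(a, K, alpha)).item()

print("joint (BAO-style):", joint())   # with few steps the two reach different values
print("nested (corrected):", nested())
```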
A.5 MORE IMPLEMENTATION DETAILS

In the main text, we use y_i to denote all latent variables related to the residual. In practice, it is divided into y_i, z_i, Δ_i^y, which refer to the first-level latent of the residual, the second-level latent of the residual, and the quantization step size of the first-level latent of the residual, respectively. In practice, as in BAO (Xu et al., 2022), all three parts are involved in SAVI jointly. We note that this is not a problem, as they fully factorize. For DVC (Lu et al., 2019), w_i indeed represents the latent of motion, as DVC has only one level of motion latent. For DCVC (Li et al., 2021), however, w_i is divided into w_i, v_i, Δ_i^w, which refer to the first-level latent of motion, the second-level latent of motion, and the quantization step size of the first-level latent of motion, respectively. As with y_i, all three parts are involved in SAVI jointly, and this is not a problem as they fully factorize.

Following BAO (Xu et al., 2022), we set the target λ_0 = {256, 512, 1024, 2048}, which also follows the baselines (Lu et al., 2019; Li et al., 2021). We adopt the official pre-trained models for both baseline methods (Lu et al., 2019; Li et al., 2021). We do not have a training dataset or implementation details for training the amortized encoder/decoder, as all experiments are performed on the official pre-trained models. For gradient ascent, we set K = 2000 for the first I frame and K = 400 for all other P frames. On average, the number of gradient ascent steps per frame is 920, which is smaller than the 2000 of BAO.

A.6 MORE QUANTITATIVE RESULTS

In this section we present more quantitative results. In Tab. 4 we show the BD-PSNR of our proposed method and other methods as a supplement to the BD-BR results (Tab. 1). Furthermore, in Fig. 6, we present the R-D curves on all classes of the HEVC CTC and the UVG dataset as a supplement to the HEVC Class D plot (Fig. 2).

[Figure 6 shows PSNR-versus-Bpp R-D curves for HEVC Class B, C, D, E and UVG, comparing the DVC and DCVC baselines with OEU, BAO, and the proposed approach on each.]
Figure 6: The R-D performance of our approach compared with the baselines (w/o bit allocation) and other bit allocation approaches.

A.7 COMPLEXITY & SCALABILITY

Figure 7: Spatial and temporal complexity analysis comparing BAO (Xu et al., 2022), the proposed approach, and a fast approximation of the proposed approach. The analysis is done on the DVC baseline and the HEVC Class D dataset.

We perform an additional evaluation to compare the proposed method with BAO (Xu et al., 2022) in terms of temporal complexity and memory cost. The evaluation results can be found in Fig. 7. The general result is that our approach is ≈2.8 times slower and costs ≈2.0 times more memory than BAO, despite taking fewer optimization steps. This extra complexity comes from the cost of the sequential optimization of the latents.
Our current method in its naïve form is thus slower than BAO while performing better. Jointly considering R-D performance, time, and memory, our method does not dominate BAO. However, as our approach enables a sequential style of semi-amortized variational inference (SAVI) (Kim et al., 2018; Marino et al., 2018) on the latents, there is a very simple trick to speed it up, and this trick also resolves the scalability issue. Specifically, to optimize the i-th frame's latent, we do not compute the R-D cost of all subsequent frames as we do now. Instead, we limit the R-D cost computation to a small fixed window of frames. Formally, we approximate the gradient as:

$$\frac{d\mathcal{L}(w_{1:T}, y_{1:T})}{dw_i} \approx \sum_{j=i}^{i+C} \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dw_i}, \quad \frac{d\mathcal{L}(w_{1:T}, y_{1:T})}{dy_i} \approx \sum_{j=i}^{i+C} \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dy_i}, \qquad (26)$$

where C is a preset constant indicating the number of future frames taken into consideration. With this trick, our approach costs only ≈50% of the time and ≈60% of the memory of BAO, while retaining superior performance (≈5% better in BD-BR) ("Ours (fast)" in Fig. 7; the results are based on DVC, Class D, C = 2). With this trick, jointly considering R-D performance, time, and memory, our approach clearly dominates BAO. Furthermore, the scalability issue of our approach is significantly alleviated. As shown in Fig. 8, the memory cost of our approach with this trick is constant in GoP size, while that of BAO and of our approach without the trick grows linearly with GoP size. This means that with this trick, our approach becomes scalable to any GoP size, which is superior to BAO.
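A minimal sketch of the windowed gradient in Eq. 26, assuming per-frame loss callables frame_losses[j] that stand in for L_j(w_{1:j}, y_{1:j}) and a list of latent tensors with requires_grad set:

```python
import torch

def truncated_grad(latents, frame_losses, i, C=2):
    """Gradient of the GoP objective w.r.t. latents[i], truncated to the
    next C frames as in Eq. 26 (a sketch of the speed-up trick)."""
    window = range(i, min(i + C + 1, len(frame_losses)))
    loss = sum(frame_losses[j](latents) for j in window)
    (g,) = torch.autograd.grad(loss, latents[i])
    return g
```

Because the computational graph now spans at most C + 1 frames, the memory footprint of a step is constant in the GoP size, which is exactly the scalability behavior shown in Fig. 8.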
A.8 IMPACT ON OEU

Another interesting question is whether the sequential updating algorithm (Alg. 4) benefits OEU (Lu et al., 2020). Indeed, OEU (Lu et al., 2020) and BAO (Xu et al., 2022) look quite similar at first glance. However, it is important to note that the theoretical foundation of BAO and of this paper is SAVI (Kim et al., 2018; Marino et al., 2018), whereas OEU does not fit into the SAVI framework: the encoder parameters it updates do not factorize as the DAG defined by the variational posterior. Thus, applying Alg. 4 to it is incorrect. To verify this empirically, we change OEU from BAO's joint optimization to our sequential optimization (Alg. 4), and the results show that this change degrades R-D performance (see the COEU line in Fig. 9).

A.9 MORE ANALYSIS

In this section, we extend the analysis of why the proposed approach works and how it differs from BAO (Xu et al., 2022). In the approximate SAVI on DAG latent (Alg. 4), we solve SAVI approximately latent-by-latent in topological order. For bit allocation of NVC with 10 frames, this topological order is y_0, w_1, y_1, ..., w_9, y_9, where y_0 is the latent of the I frame, and w_i and y_i are the motion and residual latents of the i-th P frame. In Fig. 10, we show the relationship between the R-D cost and the stage of approximate SAVI. The R-D cost decreases almost monotonically as the SAVI stage grows, which indicates that our approximate SAVI on DAG (Alg. 4) is effective. In particular, although our approach is inferior to BAO (Xu et al., 2022) upon the convergence of y_3, it attains a significant advantage over BAO after y_9 converges. In Fig. 11, we compare the distribution of R-D cost, PSNR, and Bpp across frames and latents for the baseline DVC (Lu et al., 2019), BAO (Xu et al., 2022), and the proposed approach.

For the R-D cost, our approach's cost is clearly lower than BAO's and the baseline's, indicating better R-D performance. For Bpp, it is interesting to observe that although all three methods have similar bitrates for the motion-related latents w_{1:T}, the bitrates of the residual-related latents y_{1:T} differ markedly: BAO increases the Bpp of y_{1:T} relative to the baseline, while our approach decreases it. This explains why our approach has a lower bitrate than BAO, and also why it has a significantly smaller bitrate error. For the PSNR metric, both our approach and BAO significantly improve over the baseline, and the difference between the two is not obvious. We conclude that the benefit of the proposed approach over BAO comes from bitrate saving rather than quality enhancement.

A.10 QUALITATIVE RESULTS

In Fig. 12, Fig. 13, Fig. 14 and Fig. 15, we present qualitative results of our approach compared with the baseline approach. Compared with the baseline's reconstructed frames, the reconstructed frames of our approach preserve significantly more detail at a lower bitrate and look much more similar to the original frames. We intentionally omit a qualitative comparison with BAO (Xu et al., 2022), as it is not very informative: from Fig. 2 we can observe that the PSNR difference between BAO and our approach is very small (within ±0.1 dB), and our main advantage over BAO comes from bitrate saving rather than quality improvement. The qualitative difference between the proposed method and BAO is therefore likely to fall below the just noticeable difference (JND).

A.11 MORE DISCUSSION

Other weaknesses include scalability. Our method requires jointly considering all frames inside the GoP, which is impossible when the GoP size is large, or unknown as in live-streaming tasks. Furthermore, the number of gradient ascent steps is currently chosen as an empirical sweet spot between speed and performance; a thorough grid search is desired to better understand its effect on performance.
1. What is the focus of the paper regarding neural video compression?
2. What are the strengths of the proposed approach, especially in terms of its organization, analysis, and experimental support?
3. Do you have any concerns or weaknesses regarding the paper's complexity and implementation comparisons?
4. How would you assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The bit allocation issue in neural video compression is the main topic of this research. In order to arrive at the ideal bit allocation, the problem is defined as semi-amortized variational inference, and gradient-based optimization is used as a solution. The authors point out the source of the sub-optimal performance in another paper's implementation. Based on a concrete analysis, they present the corrected version of the optimal algorithm. Moreover, a computationally efficient approximation is designed to balance performance and time complexity. The newly developed algorithm demonstrates a clear performance improvement.

Strengths And Weaknesses
Strengths:
1) The paper is well organized, with detailed analysis and theoretical derivation. The reasons behind the sub-optimality are well explored. The consideration for preserving and fully exploiting inter-frame dependencies during optimization is convincing. The reference to and extension of the SAVI optimization method also carries a certain theoretical innovation.
2) The experiments perform well and successfully support the theory. This reveals the significant impact of this optimization method on video compression performance.
3) Clear writing.

Weaknesses:
1) Even though the complexity issue is taken into consideration in this paper, it is still hard to get a clear impression of the time complexity in an actual codec implementation. The runtime of the method, and a comparison with that of the baseline, could be briefly listed. A more in-depth analysis of the codec complexity is necessary.

Clarity, Quality, Novelty And Reproducibility
This paper is well written and well organized. Although the idea is not particularly original, the analysis work is thorough.
ICLR
Title
Correcting the Sub-optimal Bit Allocation

Abstract
In this paper, we investigate the problem of bit allocation in Neural Video Compression (NVC). First, we reveal that a recent bit allocation approach claimed to be optimal is, in fact, sub-optimal due to its implementation. Specifically, we find that its sub-optimality lies in the improper application of semi-amortized variational inference (SAVI) on latent with non-factorized variational posterior. Then, we show that the corrected version of SAVI on non-factorized latent requires recursively applying back-propagating through gradient ascent, based on which we derive the corrected optimal bit allocation algorithm. Due to the computational infeasibility of the corrected bit allocation, we design an efficient approximation to make it tractable. Empirical results show that our proposed correction significantly improves the incorrect bit allocation in terms of R-D performance and bitrate error, and outperforms all other bit allocation methods by a large margin. The source code is provided in the supplementary material.

1 INTRODUCTION

Recently, bit allocation for Neural Video Compression (NVC) has drawn growing attention thanks to its great potential for boosting compression performance. Due to the frame reference structure in video coding, it is sub-optimal to use the same R-D (rate-distortion) trade-off parameter λ for all frames. In the bit allocation task, bitrate is allocated to different frames/regions to minimize the R-D cost R + λD, where R is the total bitrate, D is the total distortion, and λ is the Lagrangian multiplier controlling the R-D trade-off. Li et al. (2022) are the pioneers of bit allocation for NVC; they improve the empirical R-D model from traditional video codecs (Li et al., 2014; 2016) and solve for the per-frame Lagrangian multiplier λ. Other concurrent works adopt simple heuristics for coarse bit allocation (Cetin et al., 2022; Hu et al., 2022).

Most recently, BAO (Bit Allocation using Optimization) (Xu et al., 2022) proposes to formulate bit allocation as semi-amortized variational inference (SAVI) (Kim et al., 2018; Marino et al., 2018) and solves it by gradient-based optimization. Specifically, it directly optimizes the variational posterior parameters to be quantized and encoded by gradient ascent, aiming at maximizing the negative overall R-D cost, which is also the evidence lower bound (ELBO). BAO does not rely on any empirical R-D model and thus outperforms previous work. Further, BAO argues for its optimality by proving its equivalence to bit allocation with a precise R-D model.

In this paper, we first show that BAO (Xu et al., 2022) is, in fact, sub-optimal due to its implementation. Specifically, we find that it abuses SAVI (Kim et al., 2018; Marino et al., 2018) on latent with non-factorized variational posterior, which brings an incorrect gradient signal during optimization. To solve this problem, we first extend SAVI to non-factorized latent by back-propagating through gradient ascent (Domke, 2012). Then, based on that, we correct the sub-optimal bit allocation in BAO to produce the true optimal bit allocation for NVC. Furthermore, we propose a computationally feasible approximation to this correct but intractable bit allocation method, and we show that our approximation outperforms the incorrect bit allocation (BAO) in terms of R-D performance and bitrate error, and performs better than all other bit allocation methods.
To summarize, our contributions are as follows:

• We demonstrate that a previously claimed optimal bit allocation method is actually sub-optimal. We find that its sub-optimality comes from the improper application of SAVI to non-factorized latent.
• We present the correct way to conduct SAVI on non-factorized latent by recursively applying back-propagation through gradient ascent. Based on this, we derive the corrected optimal bit allocation algorithm for NVC.
• Furthermore, we propose a computationally efficient approximation of the optimal bit allocation to make it feasible. Our proposed approach improves the R-D performance and bitrate error over the incorrect bit allocation (BAO), and outperforms all other bit allocation methods for NVC.

2 PRELIMINARIES

2.1 NEURAL VIDEO COMPRESSION

The input of NVC is a GoP (Group of Pictures) x_{1:T}, where x_i ∈ R^{H×W} is the i-th frame with H × W pixels and T is the number of frames inside the GoP. Most works in NVC follow a latent variable model with a temporal autoregressive relationship (Yang et al., 2020a). Specifically, to encode x_i, we first extract the motion latent w_i = f_ϕ^w(x_i, x'_{i-1}) from the current frame x_i and the previous reconstructed frame x'_{i-1}, where f_ϕ^w(·) is the motion encoder parameterized by ϕ.¹ Then, we encode the quantized latent w̃_i = ⌊w_i⌉ with the probability mass function (pmf) estimator P_θ(w̃_i|w̃_{<i}, ỹ_{<i}) parameterized by θ, where ⌊·⌉ denotes rounding. Next, we obtain the residual latent y_i = f_ϕ^y(x, x', w̃), where f_ϕ^y(·) is the residual encoder. Then, similar to how we treat w_i, we encode the quantized latent ỹ_i = ⌊y_i⌉ with pmf P_θ(ỹ_i|w̃_{≤i}, ỹ_{<i}). Finally, we obtain the reconstructed frame x'_i = g_θ^x(x'_{i-1}, w̃_i, ỹ_i), where g_θ^x(·) is the decoder parameterized by θ.

As only the motion latent w̃_i and the residual latent ỹ_i exist in the bitstream, the above process can be simplified as Eq. 1 and Eq. 2, where f_ϕ(·) is the generalized encoder and g_θ(·) is the generalized decoder. The target of NVC is to minimize the per-frame R-D cost R_i + λ_iD_i (Eq. 3), where R_i is the bitrate, D_i is the distortion, and λ_i is the Lagrangian multiplier controlling the R-D trade-off. The bitrate R_i and distortion D_i are computed as in Eq. 2, where d(·, ·) is the distortion metric. Moreover, λ_iD_i can be interpreted as the data likelihood term -log p_θ(x_i|w̃_{≤i}, ỹ_{≤i}) so long as we treat λ_iD_i as the energy function of a Gibbs distribution (Minnen et al., 2018). Specifically, when d(·, ·) is MSE, we can interpret λ_iD_i = -log p_θ(x_i|w̃_{≤i}, ỹ_{≤i}) + const, where p_θ(x_i|w̃_{≤i}, ỹ_{≤i}) is the Gaussian distribution N(x̂_i, 1/2λ_i I).

$$w_i = f_\phi(x_i, \tilde{w}_{<i}, \tilde{y}_{<i}), \quad y_i = f_\phi(x_i, \tilde{w}_{\leq i}, \tilde{y}_{<i}), \quad \text{where } \tilde{w}_i = \lfloor w_i \rceil,\ \tilde{y}_i = \lfloor y_i \rceil \qquad (1)$$

$$R_i = -\log P_\theta(\tilde{w}_i, \tilde{y}_i|\tilde{w}_{<i}, \tilde{y}_{<i}), \quad D_i = d(x_i, g_\theta(\tilde{w}_{\leq i}, \tilde{y}_{\leq i})) \qquad (2)$$

$$\max\ -(R_i + \lambda_i D_i) \qquad (3)$$

On the other hand, NVC is also closely related to the Variational Autoencoder (VAE) (Kingma & Welling, 2013). As the rounding ⌊·⌉ is not differentiable, Ballé et al. (2016); Theis et al. (2017) propose to relax it by additive uniform noise (AUN), replacing w̃_i = ⌊w_i⌉, ỹ_i = ⌊y_i⌉ with w̃_i = w_i + U(-0.5, 0.5), ỹ_i = y_i + U(-0.5, 0.5). Under this formulation, the above encoding-decoding process becomes a VAE on the graphical model w̃_{≤i}, ỹ_{≤i} → x_i with the variational posterior in Eq. 4, where w_i, y_i play the role of variational posterior parameters. Then, minimizing the overall R-D cost (Eq. 3) is equivalent to maximizing the evidence lower bound (ELBO) (Eq. 5).

$$q_\phi(\tilde{w}_i|x_i, \tilde{w}_{<i}, \tilde{y}_{<i}) = \mathcal{U}(w_i - 0.5, w_i + 0.5), \quad q_\phi(\tilde{y}_i|x_i, \tilde{w}_{\leq i}, \tilde{y}_{<i}) = \mathcal{U}(y_i - 0.5, y_i + 0.5) \qquad (4)$$

$$-(R_i + \lambda_i D_i) = \mathbb{E}_{q_\phi}\big[\underbrace{\log P_\theta(\tilde{w}_i, \tilde{y}_i|\tilde{w}_{<i}, \tilde{y}_{<i})}_{-R_i} + \underbrace{\log p_\theta(x_i|\tilde{w}_{\leq i}, \tilde{y}_{\leq i})}_{-\lambda_i D_i} \underbrace{-\log q_\phi}_{\text{bits-back bitrate: } 0}\big] \qquad (5)$$

¹ Following previous works in deep generative modeling (Kingma & Welling, 2013; Kim et al., 2018), we denote all parameters related to the encoder as ϕ, and all parameters related to the decoder and prior as θ.
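As a concrete illustration of the AUN-relaxed objective in Eq. 4 and Eq. 5, here is a small PyTorch sketch of the per-frame surrogate R-D cost (its negative is the per-frame ELBO). pmf_logp and decode are hypothetical stand-ins for the learned prior P_θ and the decoder g_θ, not the actual networks of the baselines.

```python
import torch

def relaxed_rd_cost(x, w, y, pmf_logp, decode, lam):
    """-ELBO for one frame: R_i + lambda_i * D_i with rounding relaxed to
    additive uniform noise, so the cost is differentiable in (w, y)."""
    w_t = w + torch.rand_like(w) - 0.5        # w~ = w + U(-0.5, 0.5)
    y_t = y + torch.rand_like(y) - 0.5        # y~ = y + U(-0.5, 0.5)
    rate = -pmf_logp(w_t, y_t)                # R_i = -log P_theta(w~, y~ | past)
    x_hat = decode(w_t, y_t)
    dist = torch.mean((x - x_hat) ** 2)       # MSE <-> Gaussian likelihood
    return rate + lam * dist
```

Because this cost is differentiable in w and y, these posterior parameters are exactly the quantities that SAVI-style bit allocation optimizes by gradient ascent.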
2.2 BIT ALLOCATION FOR NEURAL VIDEO COMPRESSION

It is well known to the video coding community that using the same R-D trade-off parameter λ_i to optimize the R-D cost in Eq. 3 for all T frames inside a GoP is sub-optimal (Li et al., 2014; 2016). This sub-optimality comes from the frame reference structure and is explained in detail by Li et al. (2022); Xu et al. (2022). The target of bit allocation is to maximize the negative overall R-D cost (ELBO) L as in Eq. 6, given the overall R-D trade-off parameter λ_0, instead of maximizing the L_i of each frame i separately.

$$\max \mathcal{L} = \sum_{i=1}^{T} \mathcal{L}_i, \quad \text{where } \mathcal{L}_i = -(R_i + \lambda_0 D_i) \qquad (6)$$

$$\lambda_{1:T}^* \leftarrow \arg\max_{\lambda_{1:T}} \mathcal{L}(\lambda_{1:T}), \quad \text{versus} \quad w_{1:T}^*, y_{1:T}^* \leftarrow \arg\max_{w_{1:T},\, y_{1:T}} \mathcal{L}(w_{1:T}, y_{1:T}) \qquad (7)$$

The pioneering work on bit allocation in NVC (Li et al., 2022) follows bit allocation for traditional video codecs (Li et al., 2016). Specifically, it adopts empirical models to approximate the rate dependency ∂R_{i+1}/∂R_i and the distortion dependency ∂D_{i+1}/∂D_i between frames, plugs those models into Eq. 6, and solves λ*_{1:T} explicitly as in Eq. 7 (left). However, its performance heavily relies on the accuracy of the empirical models.

On the other hand, BAO (Xu et al., 2022) does not solve λ*_{1:T} explicitly. Instead, it adopts SAVI (Kim et al., 2018; Marino et al., 2018) to achieve implicit bit allocation. To be specific, it initializes the variational posterior parameters w^0_{1:T}, y^0_{1:T} from fully amortized variational inference (FAVI) as in Eq. 1, and then optimizes w_{1:T}, y_{1:T} via gradient ascent to maximize L as in Eq. 7 (right). No empirical model is required during this procedure. BAO further proves that optimizing Eq. 7 (right) is equivalent to optimizing Eq. 7 (left) with precise rate and distortion dependency models ∂R_{i+1}/∂R_i, ∂D_{i+1}/∂D_i (see Thm. 1 and Thm. 2 in Xu et al. (2022)). Thus, BAO claims that it is optimal, assuming gradient ascent achieves the global maximum. However, in the next section, we show that BAO (Xu et al., 2022) is in fact sub-optimal due to its implementation.

3 WHY BAO IS SUB-OPTIMAL

BAO (Xu et al., 2022) pursues the SAVI (Kim et al., 2018; Marino et al., 2018) target in Eq. 7 (right) by gradient-based optimization. More specifically, its update rule is described by Eq. 8 and Eq. 9, where K is the total number of gradient ascent steps and w_i^k, y_i^k are the posterior parameters w_i, y_i after k steps of gradient ascent. In the original BAO paper, the authors also find that directly optimizing w_i, y_i simultaneously by Eq. 8 and Eq. 9 performs worse than optimizing y_i alone using Eq. 9, but they do not offer an explanation. It is obvious that optimizing y_i alone is sub-optimal; it is less obvious why jointly optimizing w_i, y_i with Eq. 8 and Eq. 9 fails.

$$w_i^{k+1} \leftarrow w_i^k + \alpha \frac{d\mathcal{L}(w_{1:T}^k, y_{1:T}^k)}{dw_i^k}, \quad \text{where } \frac{d\mathcal{L}(w_{1:T}^k, y_{1:T}^k)}{dw_i^k} = \sum_{j=i}^{T} \frac{\partial \mathcal{L}_j(w_{1:j}^k, y_{1:j}^k)}{\partial w_i^k} \qquad (8)$$

$$y_i^{k+1} \leftarrow y_i^k + \alpha \frac{d\mathcal{L}(w_{1:T}^k, y_{1:T}^k)}{dy_i^k}, \quad \text{where } \frac{d\mathcal{L}(w_{1:T}^k, y_{1:T}^k)}{dy_i^k} = \sum_{j=i}^{T} \frac{\partial \mathcal{L}_j(w_{1:j}^k, y_{1:j}^k)}{\partial y_i^k} \qquad (9)$$

In fact, the update rule in Eq. 8 and Eq. 9 is exactly SAVI (Kim et al., 2018; Marino et al., 2018) when w_i, y_i fully factorize (e.g., the full factorization used in mean-field inference (Blei et al., 2017)).
However, in NVC the w_i, y_i have complicated auto-regressive relationships (see Eq. 1 and Fig. 1.(a)). Abusing SAVI on non-factorized latent causes gradient errors in two aspects: (1) the total derivatives dL/dw_i, dL/dy_i are incomplete; (2) the total derivatives dL/dw_i, dL/dy_i and the partial derivatives ∂L_j/∂w_i, ∂L_j/∂y_i are evaluated at the wrong values. In the next two sections, we elaborate on these two issues, with the w_i-related equations in the main text and the y_i-related equations in Appendix A.2.

3.1 INCOMPLETE TOTAL DERIVATIVE EVALUATION

According to the latent generation procedure described by Eq. 1 and Eq. 2, we draw the computational graph describing the latent dependencies in Fig. 1.(a). Based on it, we expand the total derivatives dL/dw_i, dL/dy_i as Eq. 10 and Eq. 22.

$$\frac{d\mathcal{L}(w_{1:T}, y_{1:T})}{dw_i} = \sum_{j=i}^{T} \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dw_i}, \quad \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dw_i} = \underbrace{\sum_{l=i+1}^{j} \frac{\partial w_l}{\partial w_i} \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dw_l} + \sum_{l=i}^{j} \frac{\partial y_l}{\partial w_i} \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dy_l}}_{\text{ignored by BAO}} + \underbrace{\frac{\partial \mathcal{L}_j(w_{1:j}, y_{1:j})}{\partial w_i}}_{\text{considered by BAO}} \qquad (10)$$

As shown in Eq. 8, Eq. 9, and Fig. 1.(b), BAO (Xu et al., 2022) treats the total derivatives dL/dw_i, dL/dy_i as the sums of the frame-level partial derivatives ∂L_j/∂w_i, ∂L_j/∂y_i, i.e., the direct contribution of the i-th frame's latents w_i, y_i to the j-th frame's R-D cost L_j (as marked in Eq. 10 and Eq. 22). This incomplete evaluation of the gradient signal brings sub-optimality. Further, it is not possible to correct BAO by simply including the other gradient terms. As BAO jointly updates all the latents w_{1:T}, y_{1:T}, the relationship of Eq. 1 only holds for the initial latent parameters w^0_{1:T}, y^0_{1:T} produced by FAVI; this important relationship is broken for the parameters w^k_{1:T}, y^k_{1:T} after k ≥ 1 update steps.

3.2 INCORRECT VALUE TO EVALUATE GRADIENT

As shown in Eq. 8 and Eq. 9, BAO (Xu et al., 2022) simultaneously updates all the posterior parameters w_{1:T}, y_{1:T} with gradients evaluated at the same gradient ascent step w^k_{1:T}, y^k_{1:T}. However, as we show later in Sec. 4.1 and Fig. 1.(c), this is sub-optimal: all the descendant latents w_{>i}, y_{≥i} of w_i should already have completed all K steps of gradient ascent before the gradient of w_i is evaluated. Moreover, w_{>i}, y_{≥i} should be initialized by FAVI using the precedent latents. A similar rule applies to y_i. Specifically, the correct value at which to evaluate the gradient is given by Eq. 11 and Eq. 23, where w_i^{k_i} denotes the latent w_i after k_i update steps, and y_i^{k'_i} denotes the latent y_i after k'_i update steps.

$$w_i^{k_i+1} \leftarrow w_i^{k_i} + \alpha \frac{d\mathcal{L}(w_1^{k_1}, \ldots, w_i^{k_i}, w_{>i}^K, y_1^{k'_1}, \ldots, y_{i-1}^{k'_{i-1}}, y_{\geq i}^K)}{dw_i^{k_i}}, \quad \text{where } w_{>i}^0, y_{\geq i}^0 = f(x, w_1^{k_1}, \ldots, w_i^{k_i}, y_1^{k'_1}, \ldots, y_{i-1}^{k'_{i-1}}) \qquad (11)$$

Similar to the incomplete total derivative evaluation, this problem does not have a simple solution. In the next section, we show how to correct both of the above-mentioned issues by recursively applying back-propagating through gradient ascent (Domke, 2012).

4 CORRECTING THE SUB-OPTIMAL BIT ALLOCATION

In this section, we first extend the generic SAVI (Kim et al., 2018; Marino et al., 2018) to 2-level non-factorized latent. Then we further extend this result to latent with any dependency that can be described by a DAG (Directed Acyclic Graph). Finally, we correct the sub-optimal bit allocation by applying the DAG result to NVC.

4.1 SAVI ON 2-LEVEL NON-FACTORIZED LATENT

In this section, we extend the SAVI on 1-level latent (Kim et al., 2018) to 2-level non-factorized latent.
We denote x as evidence, a as the variational posterior parameter of the first level latent ã, b as the variational posterior parameter of the second level latent b̃, and the ELBO to maximize as L(a, b). The posterior q(ã, b̃|x) factorizes as q(ã|x)q(b̃|ã,x), which means that b depends on a. Given a is fixed, we can directly follow Kim et al. (2018); Marino et al. (2018) to optimize b to maximize ELBO by SAVI. However, it requires some tricks to optimize a. Algorithm 1: SAVI on 2-level Latent 1 procedure solve-2-level(x,ak) 2 initialize a0 ← f(x) from FAVI 3 for k = 0, ...,K − 1 do 4 dL(ak,bK) dak = grad-2-level(x,ak) 5 ak+1 ← ak + αdL(a k,bK) dak 6 return aK , bK 7 procedure grad-2-level(x,ak) 8 b0 ← f(x,ak) from FAVI 9 for k′ = 0, ...,K − 1 do 10 bk ′+1 ← bk′ + αdL(a k,bk ′ ) dbk′ 11 ←−a ← ∂L(a k,bK) ∂ak 12 ←− bK ← dL(a k,bK) dbK 13 for k′ = K − 1, ..., 0 do 14 ←−a ←←−a + α∂ 2L(ak,bk ′ ) ∂ak∂bk′ ←−−− bk ′+1 15 ←− bk ′ ← ←− bk ′ + α∂ 2L(ak,bk ′ ) ∂bk′∂bk′ ←−−− bk ′+1 16 ←−a =←−a + ∂b 0 ∂ak ←− b0 17 return dL(a k,bK) dak =←−a Algorithm 2: SAVI on DAG Latent 1 procedure solve-dag(x) 2 sort a1, ...,aN in topological order 3 for aj with parent P(aj) = ∅ 4 add aj to fake node a0’s children C(a0) 5 grad-dag(x,a00) 6 return aK1 , ...,aKN 7 procedure grad-dag(x,ak00 , ...,a ki i ) 8 for aj ∈ C(ai) in topological order do 9 a0j ← f(x,a k0 0 , ...,a k<j <j ) from FAVI 10 for kj = 0, ...,K − 1 do 11 dL(ak00 ,...,a kj j ,a K >j) da kj j ← grad-dag(x,ak00 , ...,a kj j ) 12 a kj+1 j ← a kj j + α dL(ak00 ,...,a kj j ,a K >j) da kj j 13 ←−ai ← ∂L(ak00 ,...,a ki i ,a K >i) ∂a ki i 14 for aj ∈ C(ai) do 15 ←−aj ← 0, ←− aKj ← dL(ak00 ,...,a ki i ,a K >i) daKj 16 for kj = K − 1, ..., 0 do 17 ←−aj ←←−aj + α ∂2L(ak00 ,...,a kj j ,a K >j) ∂a ki i ∂a kj j ←−−− a kj+1 j 18 ←− a kj j ← ←−−− a kj+1 j + α ∂2L(ak00 ,...,a kj j ,a K >j) ∂a kj j ∂a kj j ←−−− a kj+1 j 19 ←−ai ←←−ai +←−aj + ∂a0j ∂a ki i ←− a0j 20 returndL(a k0 0 ,...,a ki i ,a K >i) da ki i =←−ai The intuition is, we do not want to find a a that maximizes L(a, b) given a fixed b (or we have the gradient issue described in Sec. 3). Instead, we want to find a a, whose maxb L(a, b) is maximum. This translates to the optimization problem as Eq. 12. In fact, Eq. 12 is a variant of setup in backpropagating through gradient ascent (Samuel & Tappen, 2009; Domke, 2012). The difference is, our a also contributes directly to optimization target L(a, b). From this perspective, Eq. 12 is more closely connected to Kim et al. (2018), if we treat a as the model parameter and b as latent. a← argmax a L(a, b∗(a)), where b∗(a)← argmax b L(a, b) (12) And as SAVI on 1-level latent (Kim et al., 2018; Marino et al., 2018), we need to solve Eq. 12 using gradient ascent. Specifically, denote α as step size (learning rate), K as the total gradient ascent steps, ak as the a after k step update, bk ′ as the b after k′ step update, and f(.) as FAVI procedure generating initial posterior parameters a0, b0, the optimization problem as Eq. 12 translates into the update rule as Eq. 13. Eq. 13 is the guidance for designing optimization algorithm, and it also explains why the gradient of BAO (Xu et al., 2022) is evaluated at wrong value (See Sec. 3.2). ak+1 ← ak + αdL(a k, bK) dak , bk ′+1 ← bk ′ + α dL(ak, bk′) dbk′ , where b0 = f(x,ak) (13) To solve Eq. 13, we note that although dL(ak, bk′)/dbk′ is directly computed, dL(ak, bK)/dak is not straightforward. Resorting to previous works (Samuel & Tappen, 2009; Domke, 2012) in implicit differentiation and extending the results in Kim et al. 
(2018) from model parameters to variational posterior parameters, we implement Eq. 13 as Alg. 1. Specifically, we first initialize a0 from FAVI. Then we conduct gradient ascent on a with gradient dL(ak, bK)/dak computed from the procedure grad-2-level(x,ak). And inside grad-2-level(x,ak), b is also updated by gradient ascent, the above procedure corresponds to Eq. 13. The key of Alg. 1 is the evaluation of gradient dL(ak, bK)/dak. Formally, we have: Theorem 1. After grad-2-level(x,ak) of Alg. 1 executes, we have the return value dL(ak, bK)/dak =←−a . (See proof in Appendix. A.1.) 4.2 SAVI ON DAG-DEFINED NON-FACTORIZED LATENT In this section, we extend the result from previous section to SAVI on general non-factorized latent with dependency described by any DAG. This DAG is the computational graph during network inference, and it is also the directed graphical model (DGM) (Koller & Friedman, 2009) defining the factorization of latent variables during inference. This is the general case covering all dependency that can be described by DGM. This extension is necessary to perform SAVI on latent with complicated dependency (e.g. bit allocation of NVC). Similar to the 2-level latent setup, we consider performing SAVI on N variational posterior parameter a1, ...,aN with their dependency defined by a computational graph G, i.e., their corresponding latent variable ã1, ..., ãN ’s posterior distribution factorizes as G. Specifically, we denote aj ∈ C(ai),ai ∈ P(aj) if an edge exists from ai to aj . This indicates that ãj conditions on ãi. Without loss of generality, we assume a1, ...,aN is sorted in topological order. This means that if aj ∈ C(ai),ai ∈ P(aj), then i < j. Each latent is optimized by K-step gradient ascent, and akii denotes the latent ai after ki steps of update. Then, similar to 2-level latent, we have the update rule as Eq. 14: aki+1i ← a ki i + α dL(ak11 , ...,a ki i ,a K >i) daki , where a0>i = f(x,a k1 1 , ...,a ki i ) (14) , which can be translated into Alg. 2. Specifically, we first sort the latent in topological order. Then, we add a fake latent a0 to the front of all as. Its children are all the as with 0 in-degree. Then, we can solve the SAVI on a1, ...,aN using gradient ascent by executing the procedure graddag(x,ak00 , ...,a ki i ) in Alg. 2 recursively. Inside procedure grad-dag(x,a k0 0 , ...,a ki i ), the gradient to update ai relies on the convergence of its children aj ∈ C(ai), which is implemented by the recursive depth-first search (DFS) in line 11. And upon the completion of procedure grad-dag(x,a00), all the latent converges to aK1 , ...,a K N . Similar to the 2-level latent case, the key of Alg. 2 is the evaluation of gradient dL(ak00 , ...,a ki i ,a K >i)/da ki i . Formally, we have: Theorem 2. After the procedure grad-dag(x,ak00 , ...,a ki i ) in Alg. 2 executes, we have the return value dL(ak00 , ...,a ki i ,a K >i)/da ki i = ←−ai. (See proof in Appendix. A.1.) To better understand how Alg. 2 works, we provide a detailed example in Fig. 5 of Appendix. A.3. 4.3 CORRECTING THE SUB-OPTIMAL BIT ALLOCATION USING SAVI ON DAG With the result in previous section, correcting BAO (Xu et al., 2022) seems to be trivial. We only need to sort the latent in topological order as w1,y1, ...,wT ,yT , treat them as a1, ...,a2T+1 and run Alg. 2 to obtain the optimized latent parameters wK1 ,y K 1 , ...,w K T ,y K T . And the gradient dL(ak00 , ...,a ki i ,a K >i)/da ki i computed in Alg. 2 resolves the issue of BAO described in Sec. 3.1 and Sec. 3.2. 
However, an evident problem is the temporal complexity. Given the latent number N and gradient ascent step number K, Alg. 2 has temporal complexity of Θ(KN ). NVC with GoP size 10 has approximately N = 20 latent, and the SAVI on NVC (Xu et al., 2022) takes around K = 2000 step to converge. For bit allocation, the complexity of Alg. 2 is ≈ 200020, which is intractable. On the other hand, BAO’s complexity is reasonable (Θ(KN) ≈ 4 × 104). Thus, in next section, we provide a feasible approximation to such intractable corrected bit allocation. 4.4 FEASIBLE APPROXIMATION TO THE CORRECTED BIT ALLOCATION In order to solve problem with practical size such as bit allocation on NVC, we provide an approximation to the SAVI (Kim et al., 2018; Marino et al., 2018) on DAG described in Sec. 4.2. The general idea is that, when being applied to bit allocation of NVC, the accurate SAVI on DAG (Alg. 2) satisfies both requirement on gradient signal described in Sec. 3.1 and Sec. 3.2. We can not make it tractable without breaking them. Thus, we break one of them and achieve a reasonable complexity, while maintain a superior performance compared with BAO (Xu et al., 2022). We consider the approximation in Eq. 15 which breaks the requirement for gradient evaluation in Sec. 3.2. Based on Eq. 15 and the requirement in Sec. 3.1, we design an approximation of accurate SAVI as Alg. 4. When being applied to bit allocation in NVC, it satisfies the gradient requirement in Sec. 3.1 while maintaining a temporal complexity of Θ(KN) as BAO. dL(ak00 , ...,a ki i ,a K >i) dakii ≈ dL(ak00 , ...,a ki i ,a 0 >i) dakii (15) Specifically, with the approximation in Eq. 15, the recurrent gradient computation in Alg. 2 becomes unnecessary as the right hand side of Eq. 15 does not require aK>i. However, to maintain the dependency of latent described in Sec. 3.1, as Alg. 2, we still need to ensure that the children node aj ∈ C(ai) are re-initialized by FAVI every-time when ai is updated. Therefore, a reasonable approach is to traverse the graph in topological order. We keep the children node aj untouched until all its parent node ai ∈ P(aj)’s gradient ascent is completed and aKi is known. And the resulting approximate SAVI algorithm is as Alg. 4. When applied to bit allocation, it satisfies the gradient requirement in Sec. 3.1, and as BAO, its temporal complexity is Θ(KN). Algorithm 3: BAO on DAG Latent 1 procedure solve-bao(x) 2 a01, ...,a 0 N ← f(x) from FAVI 3 for k = 0, ...,K − 1 do 4 for i = 1, ..., N do 5 ak+1i ← aki + α ∂L(ak1 ,...,a k N ) ∂aki 6 return aK1 , ..., aKN Algorithm 4: Approximate SAVI on DAG latent 1 procedure solve-approx-dag(x) 2 sort a1, ...,aN in topological order 3 for i = 1, ..., N do 4 a0i , ...,a 0 N ← f(x,aK<i) from FAVI 5 for k = 0, ...,K − 1 do 6 dL(aK<i,a k i ,a K >i) daki ≈ dL(a K <i,a k i ,a 0 >i) daki 7 ak+1i ← aki + α dL(aK<i,a k i ,a K >i) daki 8 return aK1 , ..., aKN To better understand BAO (Xu et al., 2022) in SAVI context, we rewrite it by general SAVI notation instead of NVC notation in Alg. 3. We highlight the difference between BAO (Alg. 3) (Xu et al., 2022), the accurate SAVI on DAG latent (Alg. 2) and the approximate SAVI on DAG latent (Alg. 4) from several aspects: • Graph Traversal Order: BAO performs gradient ascent on a1:T all together. The accurate SAVI only updates ai when a>i’s update is complete and aK>i is known. The approximate SAVI only updates ai when a<i’s update is complete and aK<i is known. 
• Gradient Correctness: When being applied to bit allocation in NVC, BAO violates the gradient rule in Sec. 3.1 and Sec. 3.2, accurate SAVI satisfies both rules, approximate SAVI satisfies Sec. 3.1 and violates Sec. 3.2. • Temporal Complexity: With the latent number N and steps of gradient ascent K, the complexity of BAO is Θ(KN), the complexity of accurate SAVI is Θ(KN ) and the complexity of approximate SAVI is Θ(KN). Then we can simply apply Alg. 4 to bit allocation in NVC to obtain a feasible approximation of the corrected optimal bit allocation. And in Sec. 6.2, we empirically show that our approximation improves the R-D performance over BAO (Xu et al., 2022) with even smaller number of updates. 5 RELATED WORK: BIT ALLOCATION & SAVI FOR NEURAL COMPRESSION Li et al. (2022) are the pioneer of bit allocation for NVC and their work is elaborated in Sec. 2.2. Other recent works that consider bit allocation for NVC only adopt simple heuristic such as inserting 1 high quality frame per 4 frames (Hu et al., 2022; Cetin et al., 2022). On the other hand, OEU (Lu et al., 2020) is also recognised as frame-level bit allocation while its performance is inferior than BAO (Xu et al., 2022). BAO is the most recent work with best R-D performance. It is elaborated in Sec. 2.2 and Sec. 3, and corrected in the previous section. Semi-Amortized Variational Inference (SAVI) is proposed by Kim et al. (2018); Marino et al. (2018). The idea is that works following Kingma & Welling (2013) use fully amortized inference parameter ϕ for all data, which leads to the amortization gap (Cremer et al., 2018). SAVI reduces this gap by optimizing the variational posterior parameter after initializing it with inference network. It adopts back-propagating through gradient ascent (Domke, 2012) to evaluate the gradient of model parameters. We adopt a similar method to extend SAVI to non-factorized latent. When applying SAVI to practical neural codec, researchers abandon the nested model parameter update for efficiency. Prior works (Djelouah & Schroers, 2019; Yang et al., 2020b; Zhao et al., 2021; Gao et al., 2022) adopt SAVI to boost R-D performance and achieve variable bitrate in image compression. And BAO (Xu et al., 2022) is the first to consider SAVI for bit allocation. 6 EXPERIMENTS 6.1 EXPERIMENTAL SETTINGS We implement our approach in PyTorch 1.9 with CUDA 11.2, and run the experiments on NVIDIA(R) A100 GPU. Most of the other settings are intentionally kept the same as BAO (Xu et al., 2022). Specifically, we adopt HEVC Common Testing Condition (CTC) (Bossen et al., 2013) and UVG dataset (Mercat et al., 2020). And we measure the R-D performance in BjontegaardBitrate (BD-BR) and BD-PSNR (Bjontegaard, 2001). For baseline NVC (Lu et al., 2019; Li et al., 2021), we adopt the official pre-trained models. And we select target λ0 = {256, 512, 1024, 2048}. For gradient ascent, we adopt Adam (Kingma & Ba, 2014) optimizer with lr = 1 × 10−3. We set the gradient ascent step K = 2000 for the first frame and K = 400 for other frames. More details are presented in Appendix. A.5. 6.2 QUANTITATIVE RESULTS As shown in Tab. 1, our method consistently improves the R-D performance in terms of BD-BR over BAO (Xu et al., 2022) on both baseline methods and all datasets. Moreover, this improvement is especially significant (more than 10% in BD-BR) when the baseline is DCVC (Li et al., 2021). And both BAO and our proposed correction significantly outperform other approaches. 
It is also noteworthy that with our bit allocation, DVC (the SOTA method in 2019) already outperforms DCVC (the SOTA method in 2021) by large margin (See the red solid line and black dash line in Fig. 2). BD-BR (%) ↓ Method Class B Class C Class D Class E UVG DVC (Lu et al., 2019) as Baseline Li et al. (2016)1 20.21 17.13 13.71 10.32 16.69 Li et al. (2022)1 -6.80 -2.96 0.48 -6.85 -4.12 OEU (Lu et al., 2020)2 -13.57 -11.29 -18.97 -12.43 -13.78 BAO (Xu et al., 2022)2 -28.55 -26.82 -25.37 -32.54 -27.68 Proposed -32.10 -31.71 -35.86 -32.93 -30.92 DCVC (Li et al., 2021) as Baseline OEU (Lu et al., 2020)2 -10.75 -14.34 -16.30 -7.15 -16.07 BAO (Xu et al., 2022)2 -20.59 -19.69 -20.60 -23.33 -25.22 Proposed -32.89 -33.10 -32.01 -36.88 -39.66 Table 1: The BD-BR of our approach compared with others. 1 comes from Li et al. (2022). 2 comes from Xu et al. (2022). Figure 2: The R-D curve on HEVC Class D. Other than R-D performance, the bitrate error of our approach is also significantly smaller than BAO (Xu et al., 2022) (See Tab. 2). The bitrate error is measured as the relative bitrate difference before and after bit allocation. The smaller it is, the easier it is to achieve the desired bitrate accurately. For complexity, our approach only performs 920 steps of gradient ascent per-frame, while BAO requires 2000 steps. See more quantitative results (BD-PSNR & R-D curves) in Appendix. A.6. 6.3 ABLATION STUDY, ANALYSIS & QUALITATIVE RESULTS Tab. 3 shows that for BAO (Xu et al., 2022), jointly optimizing w1:T ,y1:T performs worse than optimizing y1:T or w1:T alone. This counter-intuitive phenomena comes from its incorrect estimation of gradient signal. For the proposed approach that corrects this, jointly optimizing w1:T ,y1:T performs better than optimizing y1:T or w1:T alone, which is aligned with our intuition. Bitrate-Error (%) ↓ Method Class B Class C Class D Class E UVG DVC (Lu et al., 2019) as Baseline BAO (Xu et al., 2022)2 8.41 12.86 21.39 5.94 3.73 Proposed 3.16 4.27 1.81 6.14 1.73 DCVC (Li et al., 2021) as Baseline BAO (Xu et al., 2022)2 25.67 23.90 23.74 24.88 21.86 Proposed 4.27 7.29 5.73 8.03 3.06 Table 2: The bitrate error of our approach compared with BAO. Method BD-BR (%) ↓ BAO (y) -25.37 BAO (w) -22.24 BAO (y,w) -14.76 Proposed (y) -32.60 Proposed (w) -31.56 Proposed (y,w) -35.86 Table 3: Ablation study with HEVC Class D and DVC (Lu et al., 2019). To better understand why our method works, we present the R-D cost, distortion and rate versus frame/latent index for different methods in Fig. 3: top-left shows that the R-D cost of our approach consistently decreases according to SAVI stage. Moreover, it outperforms BAO after 4th frame; top-right shows that for each frame the R-D cost of our method is lower than BAO; bottom-left shows that the distortion part of R-D cost of our approach is approximately the same as BAO. While bottom-right shows that the advantage of our approach over BAO lies in the bitrate. More specifically, BAO increases the bitrate of yis after SAVI, while our correction decreases it. See more analysis in Appendix. A.9 and qualitative results in Appendix. A.10. 7 DISCUSSION & CONCLUSION Despite our correction is already more efficient than original BAO (Xu et al., 2022), its encoding speed remains far from real-time. Thus, it is limited to scenarios where R-D performance matters much more than encoding time (e.g. video on demand). See more discussion in Appendix. A.11. 
To conclude, we show that a previous bit allocation method for NVC is sub-optimal as it abuses SAVI on non-factorized latent. Then, we propose the correct SAVI on general non-factorized latent by back-propagating through gradient ascent, and we further propose a feasible approximation to make it tractable for bit allocation. Experimental results show that our correction significantly improves the R-D performance.

ETHICS STATEMENT
Improving the R-D performance of NVC has positive social value, in terms of reducing carbon emission by saving the resources required to transfer and store videos. Moreover, unlike traditional codecs such as H.266 (Bross et al., 2021), a neural video codec does not require dedicated hardware. Instead, it can be deployed with general neural accelerators. Improving the R-D performance of NVC promotes the practical deployment of video codecs that are independent of dedicated hardware, and lowers the hardware barrier of playing multimedia content.

REPRODUCIBILITY STATEMENT
For the theoretical results, both of the two theorems are followed by proofs in Appendix A.1. For a relatively complicated novel algorithm (Alg. 2), we provide an illustration of the step-by-step execution procedure in Appendix A.3. For the experiments, both of the two datasets are publicly accessible. In Appendix A.5, we provide more implementation details including all the hyper-parameters. Moreover, we provide our source code for reproducing the empirical results in the supplementary material.

A APPENDIX
A.1 PROOF OF THM 1 AND THM 2
Theorem 1. After the procedure grad-2-level(x, a^k) of Alg. 1 executes, we have the return value dL(a^k, b^K)/da^k = ←a.
Proof. This proof extends the proof of Thm. 1 in Domke (2012), and it also serves as a formal justification of Alg. 1 in Kim et al. (2018). Note that our paper and Kim et al. (2018) are subtly different from Samuel & Tappen (2009); Domke (2012), as our high-level parameter w not only generates the low-level parameter y, but also directly contributes to the optimization target (see Fig. 4). As the computational graph in Fig. 4 shows, we can expand dL(a^k, b^K)/da^k as Eq. 16, with each term solved in Eq. 18 and Eq. 19.

$$\frac{dL(a^k, b^K)}{da^k} = \underbrace{\frac{\partial L(a^k, b^K)}{\partial a^k}}_{\text{known}} + \sum_{k'=0}^{K} \underbrace{\frac{\partial b^{k'}}{\partial a^k}}_{\text{Eq. 18}} \underbrace{\frac{dL(a^k, b^K)}{db^{k'}}}_{\text{Eq. 19}} \quad (16)$$

To solve Eq. 16, we first note that ∂L(a^k, b^K)/∂a^k, dL(a^k, b^K)/db^K, and ∂b^0/∂a^k are naturally known. Then, by taking the partial derivative of the gradient ascent update rule b^{k'+1} ← b^{k'} + α dL(a^k, b^{k'})/db^{k'} with regard to a^k and b^{k'}, we obtain Eq. 17 and Eq. 18. Note that Eq. 18 is the partial derivative ∂b^{k'+1}/∂a^k instead of the total derivative db^{k'+1}/da^k = (∂b^{k'+1}/∂b^{k'})(db^{k'}/da^k) + ∂b^{k'+1}/∂a^k.

$$\frac{\partial b^{k'+1}}{\partial b^{k'}} = I + \alpha \frac{\partial^2 L(a^k, b^{k'})}{\partial b^{k'} \partial b^{k'}} \quad (17) \qquad \frac{\partial b^{k'+1}}{\partial a^k} = \alpha \frac{\partial^2 L(a^k, b^{k'})}{\partial a^k \partial b^{k'}} \quad (18)$$

And those second-order terms can either be directly evaluated or approximated via finite differences as Eq. 20. As Eq. 18 already solves the first factor inside the sum on the right-hand side of Eq. 16, the remaining issue is dL(a^k, b^K)/db^{k'}. To solve this term, we expand it recursively as Eq. 19 and take Eq. 17 into it.

$$\frac{dL(a^k, b^K)}{db^{k'}} = \frac{\partial b^{k'+1}}{\partial b^{k'}} \frac{dL(a^k, b^K)}{db^{k'+1}} \quad (19)$$

And the above solving process is described by the procedure grad-2-level(x, a^k) of Alg. 1. Specifically, the iterative update of ←b^{k'} in line 15 corresponds to recursively expanding Eq. 19 with Eq. 17, and the iterative update of ←a in line 14 corresponds to recursively expanding Eq. 16 with Eq. 18 and Eq. 19. Upon the return of grad-2-level(x, a^k) of Alg. 1, we have ←a = dL(a^k, b^K)/da^k.
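For concreteness, the following is a minimal PyTorch sketch of the grad-2-level procedure, not the paper's released implementation: `L(a, b)` (a differentiable ELBO) and `favi(x, a)` (the amortized initializer) are placeholder callables, and the Hessian-vector products of lines 14-16 are evaluated exactly via double backward.

```python
import torch

def grad_2_level(x, a, L, favi, K=10, alpha=1e-3):
    """Sketch of Alg. 1's grad-2-level: returns dL(a^k, b^K)/da^k."""
    b0 = favi(x, a)                                   # line 8, graph to `a` kept for line 16
    bs = [b0.detach().requires_grad_(True)]
    for _ in range(K):                                # lines 9-10: inner ascent on b
        g = torch.autograd.grad(L(a, bs[-1]), bs[-1])[0]
        bs.append((bs[-1] + alpha * g).detach().requires_grad_(True))
    loss = L(a, bs[-1])
    back_a = torch.autograd.grad(loss, a, retain_graph=True)[0]      # line 11
    back_b = torch.autograd.grad(loss, bs[-1])[0]                    # line 12
    for k in reversed(range(K)):                      # lines 13-15: reverse sweep
        g = torch.autograd.grad(L(a, bs[k]), bs[k], create_graph=True)[0]
        # exact HVPs (d^2L/da db)v and (d^2L/db db)v via double backward
        hvp_a, hvp_b = torch.autograd.grad(g, (a, bs[k]), grad_outputs=back_b)
        back_a = back_a + alpha * hvp_a               # line 14
        back_b = back_b + alpha * hvp_b               # line 15
    # line 16: chain through the FAVI initialization b^0 = f(x, a)
    back_a = back_a + torch.autograd.grad(b0, a, grad_outputs=back_b)[0]
    return back_a
```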
The complexity of the Hessian-vector products in lines 14 and 15 of Alg. 1 may be reduced using finite differences, following Domke (2012), as Eq. 20.

$$\frac{\partial^2 L(a^k, b^{k'})}{\partial a^k \partial b^{k'}} v = \lim_{r \to 0} \frac{1}{r} \left( \frac{dL(a^k, b^{k'} + rv)}{da^k} - \frac{dL(a^k, b^{k'})}{da^k} \right), \qquad \frac{\partial^2 L(a^k, b^{k'})}{\partial b^{k'} \partial b^{k'}} v = \lim_{r \to 0} \frac{1}{r} \left( \frac{dL(a^k, b^{k'} + rv)}{db^{k'}} - \frac{dL(a^k, b^{k'})}{db^{k'}} \right) \quad (20)$$

Theorem 2. After the procedure grad-dag(x, a_0^{k_0}, ..., a_i^{k_i}) in Alg. 2 executes, we have the return value dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/da_i^{k_i} = ←a_i.
Proof. Consider computing the target gradient with the DAG G. The gradient of a_i^{k_i} is composed of its own contribution to L in addition to the gradients from its children a_j ∈ C(a_i). Further, as we are considering the optimized children a_j^K, we expand the children nodes a_j as in Fig. 4. Then, we have:

$$\frac{dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)}{da_i^{k_i}} = \underbrace{\frac{\partial L(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)}{\partial a_i^{k_i}}}_{\text{known}} + \sum_{a_j \in C(a_i)} \left( \sum_{k_j=0}^{K} \underbrace{\frac{\partial a_j^{k_j}}{\partial a_i^{k_i}}}_{\text{Eq. 18}} \underbrace{\frac{dL(a_0^{k_0}, ..., a_{j-1}^{k_{j-1}}, a_{\geq j}^K)}{da_j^{k_j}}}_{\text{Eq. 19}} \right) \quad (21)$$

The first term on the right-hand side of Eq. 21 can be trivially evaluated. The ∂a_j^{k_j}/∂a_i^{k_i} can be evaluated as Eq. 18. And the dL(a_0^{k_0}, ..., a_{j-1}^{k_{j-1}}, a_{\geq j}^K)/da_j^{k_j} can be iteratively expanded as Eq. 19. We highlight several key differences between Alg. 2 and Alg. 1 which are reflected in the implementation of Alg. 2:
• The gradient evaluation of the current node a_i requires the gradients of its plural direct children a_j ∈ C(a_i), instead of the single child in the 2-level case. The children-traversal part of Eq. 19 corresponds to the two extra for-loops in lines 8 and 14 of Alg. 2.
• The gradient ascent update of the child latent parameter a_j^{k_j+1} ← a_j^{k_j} + α dL(a_0^{k_0}, ..., a_j^{k_j}, a_{>j}^K)/da_j^{k_j} can be conducted trivially only if C(a_j) is empty; otherwise the gradient has to be evaluated recursively using Eq. 21. This part corresponds to the recursive call in line 11 of Alg. 2.
And the other parts of Alg. 2 are the same as Alg. 1, so the rest of the proof follows Thm. 1. Similarly, the Hessian-vector products in lines 17 and 18 of Alg. 2 may be approximated as Eq. 20. However, this does not save Alg. 2 from an overall complexity of Θ(K^N).

A.2 THE COMPLETE FORMULA FOR SEC. 3.1 AND SEC. 3.2
In this section, we provide the complete formulas for the y_i-related gradients of Sec. 3.1 and Sec. 3.2. Specifically, Eq. 22 is paired with Eq. 10, and Eq. 23 is paired with Eq. 11.

$$\frac{dL(w_{1:T}, y_{1:T})}{dy_i} = \sum_{j=i}^{T} \frac{dL_j(w_{1:j}, y_{1:j})}{dy_i}, \qquad \frac{dL_j(w_{1:j}, y_{1:j})}{dy_i} = \underbrace{\sum_{l=i+1}^{j} \left( \frac{\partial y_l}{\partial y_i} \frac{dL_j(w_{1:j}, y_{1:j})}{dy_l} + \frac{\partial w_l}{\partial y_i} \frac{dL_j(w_{1:j}, y_{1:j})}{dw_l} \right)}_{\text{ignored by BAO}} + \underbrace{\frac{\partial L_j(w_{1:j}, y_{1:j})}{\partial y_i}}_{\text{considered by BAO}} \quad (22)$$

$$y_i^{k_i'+1} \leftarrow y_i^{k_i'} + \alpha \frac{dL(w_1^{k_1}, ..., w_i^{k_i}, w_{>i}^K, y_1^{k_1'}, ..., y_i^{k_i'}, y_{>i}^K)}{dy_i^{k_i'}}, \quad \text{where } w_{>i}^0, y_{>i}^0 = f(x, w_1^{k_1}, ..., w_i^{k_i}, y_1^{k_1'}, ..., y_i^{k_i'}) \quad (23)$$

A.3 AN EXAMPLE OF EXECUTION OF ALG. 2
In this section, we provide an example of the full execution procedure of Alg. 2 in Fig. 5. The setup is as in Fig. 5.(0): we have N = 3 latents a_1, a_2, a_3 and gradient ascent step K = 2, connected by a DAG shown in the figure.

A.4 EXTENDING THE ANALYSIS OF SEC. 3 TO THE GENERAL DAG CASE
As Alg. 2 and Alg. 4 are applicable to general SAVI (Kim et al., 2018; Marino et al., 2018) beyond bit allocation, it is helpful for understanding their merit to extend the analysis in Sec. 3 from bit allocation to the general DAG scenario. In this section, we consider the same problem setup as Sec. 4.2. Similar to the bit allocation case, BAO has the gradient-incomplete and gradient-value-incorrect problems. The gradient-incomplete issue is presented as Eq. 24, and the gradient-value-incorrect issue is presented as Eq. 25.
$$\frac{dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)}{da_i^{k_i}} = \underbrace{\frac{\partial L(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)}{\partial a_i^{k_i}}}_{\text{considered by BAO}} + \underbrace{\sum_{a_j \in C(a_i)} \left( \sum_{k_j=0}^{K} \frac{\partial a_j^{k_j}}{\partial a_i^{k_i}} \frac{dL(a_0^{k_0}, ..., a_{j-1}^{k_{j-1}}, a_{\geq j}^K)}{da_j^{k_j}} \right)}_{\text{ignored by BAO}} \quad (24)$$

$$\frac{\partial L(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)}{\partial a_i^{k_i}} \approx \underbrace{\frac{\partial L(a_0^{k_i}, ..., a_i^{k_i}, a_{>i}^{k_i})}{\partial a_i^{k_i}}}_{\text{approximation of BAO in gradient value}} \quad (25)$$

A.5 MORE IMPLEMENTATION DETAILS
In the main text, we use y_i for all the latent variables related to the residual. In practice, it is divided into y_i, z_i, Δ_i^y, which refer to the first-level latent of the residual, the second-level latent of the residual, and the quantization step size of the first-level latent of the residual, respectively. In practice, as in BAO (Xu et al., 2022), all 3 parts are involved in SAVI jointly. We note that this is not a problem as they fully factorize. And for DVC (Lu et al., 2019), w_i indeed represents the latent of the motion, as for DVC the motion has only one level of latent. However, for DCVC (Li et al., 2021), w_i is divided into w_i, v_i, Δ_i^w, which refer to the first-level latent of the motion, the second-level latent of the motion, and the quantization step size of the first-level latent of the motion, respectively. Similar to y_i, all 3 parts are involved in SAVI jointly, and this is not a problem as they fully factorize.
Following BAO (Xu et al., 2022), we set the target λ0 = {256, 512, 1024, 2048}, which also follows the baselines (Lu et al., 2019; Li et al., 2021). We adopt the official pre-trained models for both of the baseline methods (Lu et al., 2019; Li et al., 2021). We do not have a training dataset or implementation details for training the amortized encoder/decoder, as all the experiments are performed on the official pre-trained models. For gradient ascent, we set K = 2000 for the first I frame and K = 400 for all other P frames. On average, the number of gradient ascent steps per frame is 920, which is smaller than the 2000 in BAO.

A.6 MORE QUANTITATIVE RESULTS
In this section we present more quantitative results. In Tab. 4 we show the BD-PSNR of our proposed method and other methods as a supplement to the BD-BR results (Tab. 1). Furthermore, in Fig. 6, we present the R-D curves on all classes of HEVC CTC and the UVG dataset as a supplement to the HEVC Class D plot (Fig. 2).

Figure 6: The R-D performance (Bpp vs. PSNR, on HEVC Classes B/C/D/E and UVG) of our approach compared with the baselines (w/o bit allocation, DVC and DCVC) and the other bit allocation approaches (OEU and BAO on each baseline).

A.7 COMPLEXITY & SCALABILITY
Figure 7: Spatial-temporal complexity analysis comparing BAO (Xu et al., 2022), the proposed approach, and a fast approximation of the proposed approach. The analysis is done on the DVC baseline and the HEVC Class D dataset.
We perform an additional evaluation to compare the proposed method with BAO (Xu et al., 2022) in terms of temporal complexity and memory cost. The evaluation results can be found in Fig. 7. The general result is that our approach is ≈ 2.8 times slower and costs ≈ 2.0 times the memory of BAO, even though its number of optimization steps is smaller. This extra complexity comes from the cost of the sequential optimization of the latents.
And our current method in its naïve form is slower than BAO while performing better. Jointly considering R-D performance, time, and memory, our method does not dominate BAO. However, as our approach enables a sequential style of semi-amortized variational inference (SAVI) (Kim et al., 2018; Marino et al., 2018) on the latents, there exists a very simple trick to speed it up. Moreover, this trick also resolves the scalability issue. Specifically, to optimize the ith frame's latent, we do not compute the R-D cost of all the frames after it as we do now. Instead, we limit the R-D cost computation to a small fixed number of frames. Formally, we approximate the gradient as:

$$\frac{dL(w_{1:T}, y_{1:T})}{dw_i} \approx \sum_{j=i}^{i+C} \frac{dL_j(w_{1:j}, y_{1:j})}{dw_i}, \qquad \frac{dL(w_{1:T}, y_{1:T})}{dy_i} \approx \sum_{j=i}^{i+C} \frac{dL_j(w_{1:j}, y_{1:j})}{dy_i} \quad (26)$$

where C is a preset constant indicating the number of future frames we include for consideration. With this trick, our approach costs only ≈ 50% of the time and ≈ 60% of the memory compared with BAO, while it retains a superior performance (≈ 5% better in BD-BR) (Ours (fast) in Fig. 7; the results are based on DVC, HEVC Class D, and C = 2). With this trick, jointly considering R-D performance, time, and memory, our approach clearly dominates BAO. Furthermore, with this trick, the scalability issue of our approach is significantly alleviated. As shown in Fig. 8, the memory cost of our approach with this trick is constant in the GoP size, while that of BAO and of our approach without this trick grows linearly with the GoP size. This means that with this trick, our approach becomes scalable to any GoP size, which is superior to BAO.

A.8 IMPACT ON OEU
Another interesting question to ask is whether the sequential updating algorithm (Alg. 4) benefits OEU (Lu et al., 2020). Indeed, OEU (Lu et al., 2020) and BAO (Xu et al., 2022) are quite similar at first glance. However, it is important to note that the theoretical foundation of BAO and of this paper is SAVI (Kim et al., 2018; Marino et al., 2018), while OEU does not fit into the SAVI framework. More specifically, its encoder parameters to be updated do not factorize as the DAG defined by the variational posterior. Thus, applying Alg. 4 to it is incorrect. To verify this empirically, we change OEU from BAO's joint optimization to our sequential optimization (Alg. 4), and the results show that this change degrades the R-D performance (see the COEU line in Fig. 9).

A.9 MORE ANALYSIS
In this section, we extend the analysis on why the proposed approach works and what the difference is between the proposed approach and BAO (Xu et al., 2022). In the approximate SAVI on DAG latent (Alg. 4), we solve SAVI approximately latent by latent in topological order. For bit allocation of NVC with 10 frames, this topological order is y_0, w_1, y_1, ..., w_9, y_9, where y_0 is the latent of the I frame, w_i is the motion latent of the ith P frame, and y_i is the residual latent of the ith P frame. In Fig. 10, we show the relationship between the R-D cost and the stage of approximate SAVI. We can see that the R-D cost reduces almost consistently as the SAVI stage grows, which indicates that our approximate SAVI on DAG (Alg. 4) is successful. Specifically, although our approach is inferior to BAO (Xu et al., 2022) upon the convergence of y_3, it attains a significant advantage over BAO after y_9 converges. In Fig. 11, we compare the distribution of R-D cost, PSNR, and Bpp across frames and latents for the baseline DVC (Lu et al., 2019), BAO (Xu et al., 2022), and the proposed approach.
For the R-D cost, it is obvious that our proposed approach's R-D cost is lower than BAO's and the baseline's, which indicates a better R-D performance. For the bpp, it is interesting to observe that although all three methods have similar bpp on the motion-related latents w_{1:T}, the bpp of the residual-related latents y_{1:T} is quite different. Specifically, BAO increases the bpp of y_{1:T} compared with the baseline, while our approach decreases it. This explains why our approach has a lower bitrate compared with BAO, and also explains why our approach has a significantly smaller bitrate error. For the PSNR metric, both our approach and BAO significantly improve over the baseline, and the difference between the proposed approach and BAO is not obvious. We can conclude that the benefit of the proposed approach over BAO comes from the bitrate saving instead of quality enhancement.

A.10 QUALITATIVE RESULTS
In Fig. 12, Fig. 13, Fig. 14, and Fig. 15, we present the qualitative results of our approach compared with the baseline approach. We note that compared with the reconstructed frames of the baseline approach, the reconstructed frames of our proposed approach preserve significantly more details at a lower bitrate, and look much more similar to the original frames. We intentionally omit the qualitative comparison with BAO (Xu et al., 2022) as it is not quite informative. Specifically, from Fig. 2 we can observe that the PSNR difference between BAO and our approach is very small (within ±0.1 dB), and our main advantage over BAO comes from bitrate saving instead of quality improvement. Thus, the qualitative difference between the proposed method and BAO is likely to fall below the just-noticeable difference (JND).

A.11 MORE DISCUSSION
Other weaknesses include scalability. Our method requires jointly considering all the frames inside the GoP, which is impossible when the GoP size is large or when the GoP size is unknown, as in live-streaming tasks. Furthermore, currently the gradient ascent step number is merely chosen as an empirical sweet spot between speed and performance. A thorough grid search is desired to better understand its effect on performance.
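Regarding the GoP-size concern above, the windowed objective of Eq. 26 (Appendix A.7) is the simple mitigation we rely on. The following is a minimal sketch of that objective, not the released implementation: `frame_rd_cost(x, w, y, j)` is a placeholder for the per-frame R-D cost L_j given the latents of frames up to j.

```python
def windowed_rd_cost(x, w, y, i, C, T, frame_rd_cost):
    # Only frames i..i+C (clipped to the GoP of T frames) contribute to the
    # gradient of frame i's latents, so time and memory stay constant in T.
    total = 0.0
    for j in range(i, min(i + C, T - 1) + 1):
        total = total + frame_rd_cost(x, w[: j + 1], y[: j + 1], j)
    return total
```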
1. What is the focus and contribution of the paper regarding neural video compression?
2. What are the strengths and weaknesses of the proposed approach compared to prior works, particularly BAO?
3. What are the concerns regarding the timing of the paper's submission and its relation to the previous work?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper provides a fix to a recent frame/region-level bit allocation method for neural video compression, called BAO. In particular, the paper finds that BAO does not properly perform semi-amortized variational inference for non-factorized latents, and provides a compute-efficient approximation to the proper SAVI. Empirically, the method outperforms the existing bit allocation methods for NVC.

Strengths And Weaknesses
There is a significant issue with the paper, which I believe should be addressed before delving deep into its technical quality. The first (as far as I know) publicly available version of BAO (Xu et al.) appears on the arXiv on September 20th of 2022 (https://arxiv.org/abs/2209.09422), which is only a day before the ICLR 2023 abstract submission deadline. From this fact, I could only guess that either (1) the authors of this paper have been so extremely productive and GPU-rich that they could identify the error of the previous work in a day and perform all experiments in a week, or (2) this submission has been written by the BAO authors (or close collaborators), who caught and fixed their own errors. If (2) is the case, I believe that the authors should retract the previous BAO submission and fix that manuscript directly. This is especially true because this paper is using (the unpublished) BAO as a main baseline, and the academic value of fixing an error of one's own, unpublished work is quite unclear.

Clarity, Quality, Novelty And Reproducibility
Please refer to the "Strengths And Weaknesses" above.
ICLR
Title
Correcting the Sub-optimal Bit Allocation
Abstract
In this paper, we investigate the problem of bit allocation in Neural Video Compression (NVC). First, we reveal that a recent bit allocation approach claimed to be optimal is, in fact, sub-optimal due to its implementation. Specifically, we find that its sub-optimality lies in the improper application of semi-amortized variational inference (SAVI) on latent with non-factorized variational posterior. Then, we show that the corrected version of SAVI on non-factorized latent requires recursively applying back-propagating through gradient ascent, based on which we derive the corrected optimal bit allocation algorithm. Due to the computational infeasibility of the corrected bit allocation, we design an efficient approximation to make it tractable. Empirical results show that our proposed correction significantly improves the incorrect bit allocation in terms of R-D performance and bitrate error, and outperforms all other bit allocation methods by a large margin. The source code is provided in the supplementary material.

1 INTRODUCTION
Recently, bit allocation for Neural Video Compression (NVC) has drawn growing attention thanks to its great potential in boosting compression performance. Due to the frame reference structure in video coding, it is sub-optimal to use the same R-D (Rate-Distortion) trade-off parameter λ for all frames. In the bit allocation task, bitrate is allocated to different frames/regions to minimize the R-D cost R + λD, where R is the total bitrate, D is the total distortion, and λ is the Lagrangian multiplier controlling the R-D trade-off. Li et al. (2022) are the pioneers of bit allocation for NVC, who improve the empirical R-D model from traditional video codecs (Li et al., 2014; 2016) and solve for the per-frame Lagrangian multiplier λ. Other concurrent works adopt simple heuristics for coarse bit allocation (Cetin et al., 2022; Hu et al., 2022).
Most recently, BAO (Bit Allocation using Optimization) (Xu et al., 2022) proposes to formulate bit allocation as semi-amortized variational inference (SAVI) (Kim et al., 2018; Marino et al., 2018) and solves it by gradient-based optimization. Specifically, it directly optimizes the variational posterior parameters to be quantized and encoded by gradient ascent, aiming at maximizing the negative overall R-D cost, which is also the evidence lower bound (ELBO). BAO does not rely on any empirical R-D model and thus outperforms previous work. Further, BAO shows its optimality by proving its equivalence to bit allocation with a precise R-D model.
In this paper, we first show that BAO (Xu et al., 2022) is, in fact, sub-optimal due to its implementation. Specifically, we find that it abuses SAVI (Kim et al., 2018; Marino et al., 2018) on latent with non-factorized variational posterior, which brings an incorrect gradient signal during optimization. To solve this problem, we first extend SAVI to non-factorized latent by back-propagating through gradient ascent (Domke, 2012). Then, based on that, we correct the sub-optimal bit allocation in BAO to produce the true optimal bit allocation for NVC. Furthermore, we propose a computationally feasible approximation to such a correct but intractable bit allocation method. And we show that our approximation outperforms the incorrect bit allocation (BAO) in terms of R-D performance and bitrate error, and performs better than all other bit allocation methods.
To summarize, our contributions are as follows:
• We demonstrate that a previously claimed optimal bit allocation method is actually sub-optimal. We find that its sub-optimality comes from the improper application of SAVI to non-factorized latent.
• We present the correct way to conduct SAVI on non-factorized latent by recursively applying back-propagation through gradient ascent. Based on this, we derive the corrected optimal bit allocation algorithm for NVC.
• Furthermore, we propose a computationally efficient approximation of the optimal bit allocation to make it feasible. Our proposed approach improves the R-D performance and bitrate error over the incorrect bit allocation (BAO), and outperforms all other bit allocation methods for NVC.

2 PRELIMINARIES
2.1 NEURAL VIDEO COMPRESSION
The input of NVC is a GoP (Group of Pictures) x_{1:T}, where x_i ∈ R^{H×W} is the ith frame with H × W pixels, and T is the number of frames inside the GoP. Most of the works in NVC follow a latent variable model with a temporal autoregressive relationship (Yang et al., 2020a). Specifically, to encode x_i, we first extract the motion latent w_i = f_φ^w(x_i, x'_{i−1}) from the current frame x_i and the previous reconstructed frame x'_{i−1}, where f_φ^w(·) is the motion encoder parameterized by φ¹. Then, we encode the quantized latent w̃_i = ⌊w_i⌉ with the probability mass function (pmf) estimator P_θ(w̃_i | w̃_{<i}, ỹ_{<i}) parameterized by θ, where ⌊·⌉ is rounding. Then, we obtain the residual latent y_i = f_φ^y(x, x', w̃), where f_φ^y(·) is the residual encoder. Then, similar to how we treat w_i, we encode the quantized latent ỹ_i = ⌊y_i⌉ with the pmf P_θ(ỹ_i | w̃_{≤i}, ỹ_{<i}). Finally, we obtain the reconstructed frame x'_i = g_θ^x(x'_{i−1}, w̃_i, ỹ_i), where g_θ^x(·) is the decoder parameterized by θ.
As only the motion latent w̃_i and the residual latent ỹ_i exist in the bitstream, the above process can be simplified as Eq. 1 and Eq. 2, where f_φ(·) is the generalized encoder and g_θ(·) is the generalized decoder. The target of NVC is to minimize the per-frame R-D cost R_i + λ_i D_i (Eq. 3), where R_i is the bitrate, D_i is the distortion, and λ_i is the Lagrangian multiplier controlling the R-D trade-off. The bitrate R_i and the distortion D_i are computed as Eq. 2, where d(·, ·) is the distortion metric. And λ_i D_i can be further interpreted as the data likelihood term −log p_θ(x_i | w̃_{≤i}, ỹ_{≤i}) so long as we treat λ_i D_i as the energy function of a Gibbs distribution (Minnen et al., 2018). Specifically, when d(·, ·) is MSE, we can interpret λ_i D_i = −log p_θ(x_i | w̃_{≤i}, ỹ_{≤i}) + const, where p_θ(x_i | w̃_{≤i}, ỹ_{≤i}) is the Gaussian distribution N(x̂_i, 1/(2λ_i) I).

$$w_i = f_\phi(x_i, \tilde{w}_{<i}, \tilde{y}_{<i}), \quad y_i = f_\phi(x_i, \tilde{w}_{\leq i}, \tilde{y}_{<i}), \quad \text{where } \tilde{w}_i = \lfloor w_i \rceil,\ \tilde{y}_i = \lfloor y_i \rceil \quad (1)$$

$$R_i = -\log P_\theta(\tilde{w}_i, \tilde{y}_i \mid \tilde{w}_{<i}, \tilde{y}_{<i}), \quad D_i = d(x_i, g_\theta(\tilde{w}_{\leq i}, \tilde{y}_{\leq i})) \quad (2)$$

$$\max\ -(R_i + \lambda_i D_i) \quad (3)$$

On the other hand, NVC is also closely related to the Variational Autoencoder (VAE) (Kingma & Welling, 2013). As the rounding ⌊·⌉ is not differentiable, Ballé et al. (2016); Theis et al. (2017) propose to relax it by additive uniform noise (AUN), and replace w̃_i = ⌊w_i⌉, ỹ_i = ⌊y_i⌉ with w̃_i = w_i + U(−0.5, 0.5), ỹ_i = y_i + U(−0.5, 0.5). Under such a formulation, the above encoding-decoding process becomes a VAE on the graphical model w̃_{≤i}, ỹ_{≤i} → x_i with the variational posterior as Eq. 4, where w_i, y_i play the role of variational posterior parameters. Then, minimizing the overall R-D cost (Eq. 3) is equivalent to maximizing the evidence lower bound (ELBO) (Eq. 5).
$$q_\phi(\tilde{w}_i \mid x_i, \tilde{w}_{<i}, \tilde{y}_{<i}) = \mathcal{U}(w_i - 0.5, w_i + 0.5), \quad q_\phi(\tilde{y}_i \mid x_i, \tilde{w}_{\leq i}, \tilde{y}_{<i}) = \mathcal{U}(y_i - 0.5, y_i + 0.5) \quad (4)$$

$$-(R_i + \lambda_i D_i) = \mathbb{E}_{q_\phi}[\underbrace{\log P_\theta(\tilde{w}_i, \tilde{y}_i \mid \tilde{w}_{<i}, \tilde{y}_{<i})}_{-R_i} + \underbrace{\log p_\theta(x_i \mid \tilde{w}_{\leq i}, \tilde{y}_{\leq i})}_{-\lambda_i D_i} \underbrace{-\log q_\phi}_{\text{bits-back bitrate: } 0}] \quad (5)$$

2.2 BIT ALLOCATION FOR NEURAL VIDEO COMPRESSION
It is well known to the video coding community that using the same R-D trade-off parameter λ_i to optimize the R-D cost in Eq. 3 for all T frames inside a GoP is sub-optimal (Li et al., 2014; 2016). This sub-optimality comes from the frame reference structure and is explained in detail by Li et al. (2022); Xu et al. (2022). The target of bit allocation is to maximize the minus of the overall R-D cost (ELBO) L as Eq. 6, given the overall R-D trade-off parameter λ_0, instead of maximizing the L_i of each frame i separately.
¹ Following previous works in deep generative modeling (Kingma & Welling, 2013; Kim et al., 2018), we denote all parameters related to the encoder as φ, and all parameters related to the decoder and prior as θ.
The pioneering work of bit allocation in NVC (Li et al., 2022) follows the bit allocation for traditional video codecs (Li et al., 2016). Specifically, it adopts empirical models to approximate the relationships of the rate dependency ∂R_{i+1}/∂R_i and the distortion dependency ∂D_{i+1}/∂D_i between frames. Then it takes those models into Eq. 6 to solve λ*_{1:T} explicitly as Eq. 7.left. However, its performance heavily relies on the accuracy of the empirical models.

$$\max L = \sum_{i=1}^{T} L_i, \quad \text{where } L_i = -(R_i + \lambda_0 D_i) \quad (6)$$

$$\lambda^*_{1:T} \leftarrow \arg\max_{\lambda_{1:T}} L(\lambda_{1:T}), \quad \text{versus} \quad w^*_{1:T}, y^*_{1:T} \leftarrow \arg\max_{w_{1:T}, y_{1:T}} L(w_{1:T}, y_{1:T}) \quad (7)$$

On the other hand, BAO (Xu et al., 2022) does not solve λ*_{1:T} explicitly. Instead, it adopts SAVI (Kim et al., 2018; Marino et al., 2018) to achieve implicit bit allocation. To be specific, it initializes the variational posterior parameters w^0_{1:T}, y^0_{1:T} from fully amortized variational inference (FAVI) as Eq. 1. Then, it optimizes w_{1:T}, y_{1:T} via gradient ascent to maximize L as Eq. 7.right. During this procedure, no empirical model is required. BAO further proves that optimizing Eq. 7.right is equivalent to optimizing Eq. 7.left with precise rate and distortion dependency models ∂R_{i+1}/∂R_i, ∂D_{i+1}/∂D_i (see Thm. 1, Thm. 2 in Xu et al. (2022)). Thus, BAO claims that it is optimal, assuming gradient ascent achieves the global maximum. However, in the next section, we show that BAO (Xu et al., 2022) is in fact sub-optimal due to its implementation.

3 WHY BAO IS SUB-OPTIMAL
BAO (Xu et al., 2022) achieves the SAVI (Kim et al., 2018; Marino et al., 2018) target in Eq. 7.right by gradient-based optimization. More specifically, its update rule is described by Eq. 8 and Eq. 9, where K is the total number of gradient ascent steps, and w^k_i, y^k_i are the posterior parameters w_i, y_i after k steps of gradient ascent. In the original paper of BAO, the authors also find that directly optimizing w_i, y_i simultaneously by Eq. 8 and Eq. 9 performs worse than optimizing y_i alone using Eq. 9, but they have not offered any explanation. It is obvious that optimizing y_i alone is sub-optimal. However, it is not obvious why jointly optimizing w_i, y_i with Eq. 8 and Eq. 9 fails.

$$w_i^{k+1} \leftarrow w_i^k + \alpha \frac{dL(w_{1:T}^k, y_{1:T}^k)}{dw_i^k}, \quad \text{where } \frac{dL(w_{1:T}^k, y_{1:T}^k)}{dw_i^k} = \sum_{j=i}^{T} \frac{\partial L_j(w_{1:j}^k, y_{1:j}^k)}{\partial w_i^k} \quad (8)$$

$$y_i^{k+1} \leftarrow y_i^k + \alpha \frac{dL(w_{1:T}^k, y_{1:T}^k)}{dy_i^k}, \quad \text{where } \frac{dL(w_{1:T}^k, y_{1:T}^k)}{dy_i^k} = \sum_{j=i}^{T} \frac{\partial L_j(w_{1:j}^k, y_{1:j}^k)}{\partial y_i^k} \quad (9)$$

In fact, the update rule in Eq. 8 and Eq. 9 is exactly SAVI (Kim et al., 2018; Marino et al., 2018) when w_i, y_i fully factorize (e.g., the full factorization used in mean-field (Blei et al., 2017)).
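For reference, the joint update of Eq. 8 and Eq. 9 amounts to the following minimal sketch. This is an illustration rather than BAO's released code: `favi` (the amortized encoder) and `elbo` (the overall R-D objective L) are placeholder callables, and the Adam optimizer mirrors the gradient ascent setting of Sec. 6.1.

```python
import torch

def bao_joint_savi(x, favi, elbo, K=2000, alpha=1e-3):
    w, y = favi(x)                          # w_{1:T}, y_{1:T} initialized as Eq. 1
    w = w.detach().requires_grad_(True)
    y = y.detach().requires_grad_(True)
    opt = torch.optim.Adam([w, y], lr=alpha)
    for _ in range(K):                      # all latents updated simultaneously,
        opt.zero_grad()                     # with every gradient taken at step k
        (-elbo(x, w, y)).backward()         # ascend L by descending -L
        opt.step()
    return w.detach(), y.detach()
```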
However, in NVC the w_i, y_i have complicated auto-regressive relationships (see Eq. 1 and Fig. 1.(a)). Abusing SAVI on non-factorized latent causes gradient errors in two aspects: (1) the total derivatives dL/dw_i, dL/dy_i are incomplete; (2) the total derivatives dL/dw_i, dL/dy_i and the partial derivatives ∂L_j/∂w_i, ∂L_j/∂y_i are evaluated at the wrong values. In the next two sections, we elaborate those two issues with the w_i-related equations in the main text and the y_i-related equations in Appendix A.2.

3.1 INCOMPLETE TOTAL DERIVATIVE EVALUATION
According to the latent generation procedure described by Eq. 1 and Eq. 2, we draw the computational graph describing the latent dependency as Fig. 1.(a). Based on that, we expand the total derivatives dL/dw_i, dL/dy_i as Eq. 10 and Eq. 22.

$$\frac{dL(w_{1:T}, y_{1:T})}{dw_i} = \sum_{j=i}^{T} \frac{dL_j(w_{1:j}, y_{1:j})}{dw_i}, \qquad \frac{dL_j(w_{1:j}, y_{1:j})}{dw_i} = \underbrace{\sum_{l=i+1}^{j} \frac{\partial w_l}{\partial w_i} \frac{dL_j(w_{1:j}, y_{1:j})}{dw_l} + \sum_{l=i}^{j} \frac{\partial y_l}{\partial w_i} \frac{dL_j(w_{1:j}, y_{1:j})}{dy_l}}_{\text{ignored by BAO}} + \underbrace{\frac{\partial L_j(w_{1:j}, y_{1:j})}{\partial w_i}}_{\text{considered by BAO}} \quad (10)$$

As shown in Eq. 8, Eq. 9, and Fig. 1.(b), BAO (Xu et al., 2022) treats the total derivatives dL/dw_i, dL/dy_i as the sum of the frame-level partial derivatives ∂L_j/∂w_i, ∂L_j/∂y_i, which is the direct contribution of the ith frame's latents w_i, y_i to the jth frame's R-D cost L_j (as marked in Eq. 10 and Eq. 22). This incomplete evaluation of the gradient signal brings sub-optimality. Further, it is not possible to correct BAO by simply including the other parts of the gradient into consideration. As BAO jointly updates all the latents w_{1:T}, y_{1:T}, the relationship of Eq. 1 only holds for the initial latent parameters w^0_{1:T}, y^0_{1:T} produced by FAVI. And this important relationship is broken for the parameters w^k_{1:T}, y^k_{1:T} after k ≥ 1 steps of update.

3.2 INCORRECT VALUE TO EVALUATE GRADIENT
As shown in Eq. 8 and Eq. 9, BAO (Xu et al., 2022) simultaneously updates all the posterior parameters w_{1:T}, y_{1:T} with gradients evaluated at the same gradient ascent step w^k_{1:T}, y^k_{1:T}. However, as we show later in Sec. 4.1 and Fig. 1.(c), this is sub-optimal, as all the descendant latents w_{>i}, y_{≥i} of w_i should already complete all K steps of gradient ascent before the gradient of w_i is evaluated. Moreover, w_{>i}, y_{≥i} should be initialized by FAVI using the precedent latents. A similar rule applies to y_i. Specifically, the correct values at which to evaluate the gradient are as Eq. 11 and Eq. 23, where w_i^{k_i} denotes the latent w_i after k_i steps of update, and y_i^{k_i'} denotes the latent y_i after k_i' steps of update.

$$w_i^{k_i+1} \leftarrow w_i^{k_i} + \alpha \frac{dL(w_1^{k_1}, ..., w_i^{k_i}, w_{>i}^K, y_1^{k_1'}, ..., y_{i-1}^{k_{i-1}'}, y_{\geq i}^K)}{dw_i^{k_i}}, \quad \text{where } w_{>i}^0, y_{\geq i}^0 = f(x, w_1^{k_1}, ..., w_i^{k_i}, y_1^{k_1'}, ..., y_{i-1}^{k_{i-1}'}) \quad (11)$$

Similar to the incomplete total derivative evaluation, this problem does not have a simple solution. In the next section, we show how to correct both of the above-mentioned issues by recursively applying back-propagating through gradient ascent (Domke, 2012).

4 CORRECTING THE SUB-OPTIMAL BIT ALLOCATION
In this section, we first extend the generic SAVI (Kim et al., 2018; Marino et al., 2018) to 2-level non-factorized latent. Then we further extend this result to latent with any dependency that can be described by a DAG (Directed Acyclic Graph). And finally, we correct the sub-optimal bit allocation by applying the result on DAG latent to NVC.

4.1 SAVI ON 2-LEVEL NON-FACTORIZED LATENT
In this section, we extend the SAVI on 1-level latent (Kim et al., 2018) to 2-level non-factorized latent.
We denote x as the evidence, a as the variational posterior parameter of the first-level latent ã, b as the variational posterior parameter of the second-level latent b̃, and the ELBO to maximize as L(a, b). The posterior q(ã, b̃ | x) factorizes as q(ã | x) q(b̃ | ã, x), which means that b depends on a. Given that a is fixed, we can directly follow Kim et al. (2018); Marino et al. (2018) to optimize b to maximize the ELBO by SAVI. However, it requires some tricks to optimize a.

Algorithm 1: SAVI on 2-level Latent
1  procedure solve-2-level(x)
2    initialize a^0 ← f(x) from FAVI
3    for k = 0, ..., K − 1 do
4      dL(a^k, b^K)/da^k = grad-2-level(x, a^k)
5      a^{k+1} ← a^k + α dL(a^k, b^K)/da^k
6    return a^K, b^K
7  procedure grad-2-level(x, a^k)
8    b^0 ← f(x, a^k) from FAVI
9    for k' = 0, ..., K − 1 do
10     b^{k'+1} ← b^{k'} + α dL(a^k, b^{k'})/db^{k'}
11   ←a ← ∂L(a^k, b^K)/∂a^k
12   ←b^K ← dL(a^k, b^K)/db^K
13   for k' = K − 1, ..., 0 do
14     ←a ← ←a + α (∂²L(a^k, b^{k'})/∂a^k ∂b^{k'}) ←b^{k'+1}
15     ←b^{k'} ← ←b^{k'+1} + α (∂²L(a^k, b^{k'})/∂b^{k'} ∂b^{k'}) ←b^{k'+1}
16   ←a ← ←a + (∂b^0/∂a^k) ←b^0
17   return dL(a^k, b^K)/da^k = ←a

Algorithm 2: SAVI on DAG Latent
1  procedure solve-dag(x)
2    sort a_1, ..., a_N in topological order
3    for each a_j with parents P(a_j) = ∅ do
4      add a_j to the fake node a_0's children C(a_0)
5    grad-dag(x, a_0^0)
6    return a_1^K, ..., a_N^K
7  procedure grad-dag(x, a_0^{k_0}, ..., a_i^{k_i})
8    for a_j ∈ C(a_i) in topological order do
9      a_j^0 ← f(x, a_0^{k_0}, ..., a_{<j}^{k_{<j}}) from FAVI
10     for k_j = 0, ..., K − 1 do
11       dL(a_0^{k_0}, ..., a_j^{k_j}, a_{>j}^K)/da_j^{k_j} ← grad-dag(x, a_0^{k_0}, ..., a_j^{k_j})
12       a_j^{k_j+1} ← a_j^{k_j} + α dL(a_0^{k_0}, ..., a_j^{k_j}, a_{>j}^K)/da_j^{k_j}
13   ←a_i ← ∂L(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/∂a_i^{k_i}
14   for a_j ∈ C(a_i) do
15     ←a_j ← 0, ←a_j^K ← dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/da_j^K
16     for k_j = K − 1, ..., 0 do
17       ←a_j ← ←a_j + α (∂²L(a_0^{k_0}, ..., a_j^{k_j}, a_{>j}^K)/∂a_i^{k_i} ∂a_j^{k_j}) ←a_j^{k_j+1}
18       ←a_j^{k_j} ← ←a_j^{k_j+1} + α (∂²L(a_0^{k_0}, ..., a_j^{k_j}, a_{>j}^K)/∂a_j^{k_j} ∂a_j^{k_j}) ←a_j^{k_j+1}
19     ←a_i ← ←a_i + ←a_j + (∂a_j^0/∂a_i^{k_i}) ←a_j^0
20   return dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/da_i^{k_i} = ←a_i

The intuition is that we do not want to find an a that maximizes L(a, b) given a fixed b (or we have the gradient issues described in Sec. 3). Instead, we want to find the a whose max_b L(a, b) is maximal. This translates to the optimization problem in Eq. 12. In fact, Eq. 12 is a variant of the setup of back-propagating through gradient ascent (Samuel & Tappen, 2009; Domke, 2012). The difference is that our a also contributes directly to the optimization target L(a, b). From this perspective, Eq. 12 is more closely connected to Kim et al. (2018), if we treat a as the model parameter and b as the latent.

$$a \leftarrow \arg\max_a L(a, b^*(a)), \quad \text{where } b^*(a) \leftarrow \arg\max_b L(a, b) \quad (12)$$

And as in SAVI on 1-level latent (Kim et al., 2018; Marino et al., 2018), we need to solve Eq. 12 using gradient ascent. Specifically, denote α as the step size (learning rate), K as the total number of gradient ascent steps, a^k as a after k steps of update, b^{k'} as b after k' steps of update, and f(·) as the FAVI procedure generating the initial posterior parameters a^0, b^0; then the optimization problem of Eq. 12 translates into the update rule of Eq. 13. Eq. 13 is the guidance for designing the optimization algorithm, and it also explains why the gradient of BAO (Xu et al., 2022) is evaluated at the wrong value (see Sec. 3.2).

$$a^{k+1} \leftarrow a^k + \alpha \frac{dL(a^k, b^K)}{da^k}, \quad b^{k'+1} \leftarrow b^{k'} + \alpha \frac{dL(a^k, b^{k'})}{db^{k'}}, \quad \text{where } b^0 = f(x, a^k) \quad (13)$$

To solve Eq. 13, we note that although dL(a^k, b^{k'})/db^{k'} can be computed directly, dL(a^k, b^K)/da^k is not straightforward.
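A tiny self-contained PyTorch demonstration of why it is not straightforward: when b is produced by gradient steps that themselves depend on a, the total derivative dL/da differs from the partial derivative ∂L/∂a taken at a fixed b. The toy objective below is an arbitrary example chosen for illustration, not part of the paper's codec.

```python
import torch

a = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(0.0, requires_grad=True)
L = lambda a, b: -(b - a) ** 2 + a         # toy objective; argmax_b L(a, b) = a

for _ in range(3):                          # 3 ascent steps on b, graph kept
    b = b + 0.1 * torch.autograd.grad(L(a, b), b, create_graph=True)[0]

total = torch.autograd.grad(L(a, b), a)[0]        # flows through b's ascent steps
partial = torch.autograd.grad(L(a, b.detach()), a)[0]  # b held fixed
print(total.item(), partial.item())         # ~0.476 vs ~-0.024: the two differ
```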
Resorting to previous works in implicit differentiation (Samuel & Tappen, 2009; Domke, 2012) and extending the results in Kim et al. (2018) from model parameters to variational posterior parameters, we implement Eq. 13 as Alg. 1. Specifically, we first initialize a^0 from FAVI. Then we conduct gradient ascent on a with the gradient dL(a^k, b^K)/da^k computed from the procedure grad-2-level(x, a^k). And inside grad-2-level(x, a^k), b is also updated by gradient ascent; the above procedure corresponds to Eq. 13. The key of Alg. 1 is the evaluation of the gradient dL(a^k, b^K)/da^k. Formally, we have:
Theorem 1. After grad-2-level(x, a^k) of Alg. 1 executes, we have the return value dL(a^k, b^K)/da^k = ←a. (See proof in Appendix A.1.)

4.2 SAVI ON DAG-DEFINED NON-FACTORIZED LATENT
In this section, we extend the result of the previous section to SAVI on general non-factorized latent with dependencies described by any DAG. This DAG is the computational graph during network inference, and it is also the directed graphical model (DGM) (Koller & Friedman, 2009) defining the factorization of the latent variables during inference. This is the general case covering all dependencies that can be described by a DGM. This extension is necessary to perform SAVI on latent with complicated dependencies (e.g., bit allocation of NVC).
Similar to the 2-level latent setup, we consider performing SAVI on N variational posterior parameters a_1, ..., a_N with their dependencies defined by a computational graph G, i.e., their corresponding latent variables ã_1, ..., ã_N have a posterior distribution that factorizes as G. Specifically, we denote a_j ∈ C(a_i), a_i ∈ P(a_j) if an edge exists from a_i to a_j. This indicates that ã_j conditions on ã_i. Without loss of generality, we assume a_1, ..., a_N are sorted in topological order. This means that if a_j ∈ C(a_i), a_i ∈ P(a_j), then i < j. Each latent is optimized by K-step gradient ascent, and a_i^{k_i} denotes the latent a_i after k_i steps of update. Then, similar to the 2-level latent, we have the update rule as Eq. 14:

$$a_i^{k_i+1} \leftarrow a_i^{k_i} + \alpha \frac{dL(a_1^{k_1}, ..., a_i^{k_i}, a_{>i}^K)}{da_i^{k_i}}, \quad \text{where } a_{>i}^0 = f(x, a_1^{k_1}, ..., a_i^{k_i}) \quad (14)$$

which can be translated into Alg. 2. Specifically, we first sort the latents in topological order. Then, we add a fake latent a_0 to the front of all a's; its children are all the a's with in-degree 0. Then, we can solve the SAVI on a_1, ..., a_N using gradient ascent by executing the procedure grad-dag(x, a_0^{k_0}, ..., a_i^{k_i}) in Alg. 2 recursively. Inside the procedure grad-dag(x, a_0^{k_0}, ..., a_i^{k_i}), the gradient to update a_i relies on the convergence of its children a_j ∈ C(a_i), which is implemented by the recursive depth-first search (DFS) in line 11. And upon the completion of the procedure grad-dag(x, a_0^0), all the latents converge to a_1^K, ..., a_N^K. Similar to the 2-level latent case, the key of Alg. 2 is the evaluation of the gradient dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/da_i^{k_i}. Formally, we have:
Theorem 2. After the procedure grad-dag(x, a_0^{k_0}, ..., a_i^{k_i}) in Alg. 2 executes, we have the return value dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/da_i^{k_i} = ←a_i. (See proof in Appendix A.1.)
To better understand how Alg. 2 works, we provide a detailed example in Fig. 5 of Appendix A.3.

4.3 CORRECTING THE SUB-OPTIMAL BIT ALLOCATION USING SAVI ON DAG
With the result of the previous section, correcting BAO (Xu et al., 2022) seems to be trivial. We only need to sort the latents in topological order as w_1, y_1, ..., w_T, y_T, treat them as a_1, ..., a_{2T}, and run Alg. 2 to obtain the optimized latent parameters w_1^K, y_1^K, ..., w_T^K, y_T^K. And the gradient dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)/da_i^{k_i} computed in Alg. 2 resolves the issues of BAO described in Sec. 3.1 and Sec. 3.2.
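To make the recursive call pattern of Alg. 2 concrete, the small self-contained toy below counts how many gradient evaluations the procedure grad-dag triggers on a chain-shaped DAG; it sketches only the call structure, not the gradient arithmetic (and constant factors from the backward sweep are omitted).

```python
def evals_per_gradient(i, N, K):
    """Gradient evaluations triggered by one call of grad-dag at node i
    on the chain a_1 -> a_2 -> ... -> a_N."""
    if i == N:                      # leaf: a single direct evaluation
        return 1
    # each of the K ascent steps of node i+1 needs its own recursive gradient
    return 1 + K * evals_per_gradient(i + 1, N, K)

K = 10
for N in range(1, 6):
    # the full solve runs K top-level steps, hence roughly K**N overall
    print(N, K * evals_per_gradient(1, N, K))
```

The printed counts grow geometrically in N, which leads directly to the complexity problem discussed next.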
However, an evident problem is the temporal complexity. Given the latent number N and the gradient ascent step number K, Alg. 2 has a temporal complexity of Θ(K^N). An NVC with GoP size 10 has approximately N = 20 latents, and the SAVI on NVC (Xu et al., 2022) takes around K = 2000 steps to converge. For bit allocation, the complexity of Alg. 2 is thus ≈ 2000^20, which is intractable. On the other hand, BAO's complexity is reasonable (Θ(KN) ≈ 4 × 10^4). Thus, in the next section, we provide a feasible approximation to such intractable corrected bit allocation.

4.4 FEASIBLE APPROXIMATION TO THE CORRECTED BIT ALLOCATION
In order to solve problems of practical size such as bit allocation on NVC, we provide an approximation to the SAVI (Kim et al., 2018; Marino et al., 2018) on DAG described in Sec. 4.2. The general idea is that, when being applied to bit allocation of NVC, the accurate SAVI on DAG (Alg. 2) satisfies both requirements on the gradient signal described in Sec. 3.1 and Sec. 3.2. We cannot make it tractable without breaking one of them. Thus, we break one of them to achieve a reasonable complexity, while maintaining a performance superior to BAO (Xu et al., 2022).
We consider the approximation in Eq. 15, which breaks the requirement for gradient evaluation in Sec. 3.2. Based on Eq. 15 and the requirement in Sec. 3.1, we design an approximation of the accurate SAVI as Alg. 4. When being applied to bit allocation in NVC, it satisfies the gradient requirement in Sec. 3.1 while maintaining a temporal complexity of Θ(KN), as BAO.

$$\frac{dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^K)}{da_i^{k_i}} \approx \frac{dL(a_0^{k_0}, ..., a_i^{k_i}, a_{>i}^0)}{da_i^{k_i}} \quad (15)$$

Specifically, with the approximation in Eq. 15, the recurrent gradient computation in Alg. 2 becomes unnecessary, as the right-hand side of Eq. 15 does not require a_{>i}^K. However, to maintain the dependency of the latents described in Sec. 3.1, as in Alg. 2, we still need to ensure that the children nodes a_j ∈ C(a_i) are re-initialized by FAVI every time a_i is updated. Therefore, a reasonable approach is to traverse the graph in topological order. We keep a child node a_j untouched until all its parent nodes a_i ∈ P(a_j) have completed gradient ascent and a_i^K is known. The resulting approximate SAVI algorithm is Alg. 4. When applied to bit allocation, it satisfies the gradient requirement in Sec. 3.1, and, as BAO, its temporal complexity is Θ(KN).

Algorithm 3: BAO on DAG Latent
1  procedure solve-bao(x)
2    a_1^0, ..., a_N^0 ← f(x) from FAVI
3    for k = 0, ..., K − 1 do
4      for i = 1, ..., N do
5        a_i^{k+1} ← a_i^k + α ∂L(a_1^k, ..., a_N^k)/∂a_i^k
6    return a_1^K, ..., a_N^K

Algorithm 4: Approximate SAVI on DAG Latent
1  procedure solve-approx-dag(x)
2    sort a_1, ..., a_N in topological order
3    for i = 1, ..., N do
4      a_i^0, ..., a_N^0 ← f(x, a_{<i}^K) from FAVI
5      for k = 0, ..., K − 1 do
6        dL(a_{<i}^K, a_i^k, a_{>i}^K)/da_i^k ≈ dL(a_{<i}^K, a_i^k, a_{>i}^0)/da_i^k
7        a_i^{k+1} ← a_i^k + α dL(a_{<i}^K, a_i^k, a_{>i}^K)/da_i^k
8    return a_1^K, ..., a_N^K

To better understand BAO (Xu et al., 2022) in the SAVI context, we rewrite it with the general SAVI notation instead of the NVC notation in Alg. 3. We highlight the differences between BAO (Alg. 3) (Xu et al., 2022), the accurate SAVI on DAG latent (Alg. 2), and the approximate SAVI on DAG latent (Alg. 4) from several aspects; a code sketch of Alg. 4 follows this list.
• Graph Traversal Order: BAO performs gradient ascent on a_{1:N} all together. The accurate SAVI only updates a_i when a_{>i}'s update is complete and a_{>i}^K is known. The approximate SAVI only updates a_i when a_{<i}'s update is complete and a_{<i}^K is known.
• Gradient Correctness: When being applied to bit allocation in NVC, BAO violates the gradient rules of both Sec. 3.1 and Sec. 3.2, the accurate SAVI satisfies both rules, and the approximate SAVI satisfies the rule of Sec. 3.1 but violates that of Sec. 3.2.
• Temporal Complexity: With the latent number N and K steps of gradient ascent, the complexity of BAO is Θ(KN), the complexity of accurate SAVI is Θ(K^N), and the complexity of approximate SAVI is Θ(KN).
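As a concrete companion to Alg. 4, the following minimal PyTorch sketch performs the sequential sweep. It is an illustration under stated assumptions, not the released implementation: `favi(x, prefix)` is a placeholder that differentiably re-initializes all remaining latents given the already-optimized prefix, and `elbo` is a placeholder evaluating L over the full latent list.

```python
import torch

def solve_approx_dag(x, n_latents, favi, elbo, K=400, alpha=1e-3):
    done = []                                       # a_{<i}^K, already optimized
    for i in range(n_latents):                      # topological order (line 3)
        a_i = favi(x, done)[0].detach().requires_grad_(True)   # a_i^0 (line 4)
        for _ in range(K):                          # lines 5-7
            # Descendants are re-generated by FAVI from the current a_i^k, so
            # the total derivative flows through them (Sec. 3.1); they stay at
            # their 0-th ascent step, which is the Eq. 15 approximation.
            rest = favi(x, done + [a_i])
            g = torch.autograd.grad(elbo(x, done + [a_i] + rest), a_i)[0]
            a_i = (a_i + alpha * g).detach().requires_grad_(True)
        done.append(a_i.detach())
    return done                                     # approximately a_1^K, ..., a_N^K
```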
• Gradient Correctness: When being applied to bit allocation in NVC, BAO violates the gradient rule in Sec. 3.1 and Sec. 3.2, accurate SAVI satisfies both rules, approximate SAVI satisfies Sec. 3.1 and violates Sec. 3.2. • Temporal Complexity: With the latent number N and steps of gradient ascent K, the complexity of BAO is Θ(KN), the complexity of accurate SAVI is Θ(KN ) and the complexity of approximate SAVI is Θ(KN). Then we can simply apply Alg. 4 to bit allocation in NVC to obtain a feasible approximation of the corrected optimal bit allocation. And in Sec. 6.2, we empirically show that our approximation improves the R-D performance over BAO (Xu et al., 2022) with even smaller number of updates. 5 RELATED WORK: BIT ALLOCATION & SAVI FOR NEURAL COMPRESSION Li et al. (2022) are the pioneer of bit allocation for NVC and their work is elaborated in Sec. 2.2. Other recent works that consider bit allocation for NVC only adopt simple heuristic such as inserting 1 high quality frame per 4 frames (Hu et al., 2022; Cetin et al., 2022). On the other hand, OEU (Lu et al., 2020) is also recognised as frame-level bit allocation while its performance is inferior than BAO (Xu et al., 2022). BAO is the most recent work with best R-D performance. It is elaborated in Sec. 2.2 and Sec. 3, and corrected in the previous section. Semi-Amortized Variational Inference (SAVI) is proposed by Kim et al. (2018); Marino et al. (2018). The idea is that works following Kingma & Welling (2013) use fully amortized inference parameter ϕ for all data, which leads to the amortization gap (Cremer et al., 2018). SAVI reduces this gap by optimizing the variational posterior parameter after initializing it with inference network. It adopts back-propagating through gradient ascent (Domke, 2012) to evaluate the gradient of model parameters. We adopt a similar method to extend SAVI to non-factorized latent. When applying SAVI to practical neural codec, researchers abandon the nested model parameter update for efficiency. Prior works (Djelouah & Schroers, 2019; Yang et al., 2020b; Zhao et al., 2021; Gao et al., 2022) adopt SAVI to boost R-D performance and achieve variable bitrate in image compression. And BAO (Xu et al., 2022) is the first to consider SAVI for bit allocation. 6 EXPERIMENTS 6.1 EXPERIMENTAL SETTINGS We implement our approach in PyTorch 1.9 with CUDA 11.2, and run the experiments on NVIDIA(R) A100 GPU. Most of the other settings are intentionally kept the same as BAO (Xu et al., 2022). Specifically, we adopt HEVC Common Testing Condition (CTC) (Bossen et al., 2013) and UVG dataset (Mercat et al., 2020). And we measure the R-D performance in BjontegaardBitrate (BD-BR) and BD-PSNR (Bjontegaard, 2001). For baseline NVC (Lu et al., 2019; Li et al., 2021), we adopt the official pre-trained models. And we select target λ0 = {256, 512, 1024, 2048}. For gradient ascent, we adopt Adam (Kingma & Ba, 2014) optimizer with lr = 1 × 10−3. We set the gradient ascent step K = 2000 for the first frame and K = 400 for other frames. More details are presented in Appendix. A.5. 6.2 QUANTITATIVE RESULTS As shown in Tab. 1, our method consistently improves the R-D performance in terms of BD-BR over BAO (Xu et al., 2022) on both baseline methods and all datasets. Moreover, this improvement is especially significant (more than 10% in BD-BR) when the baseline is DCVC (Li et al., 2021). And both BAO and our proposed correction significantly outperform other approaches. 
It is also noteworthy that with our bit allocation, DVC (the SOTA method in 2019) already outperforms DCVC (the SOTA method in 2021) by large margin (See the red solid line and black dash line in Fig. 2). BD-BR (%) ↓ Method Class B Class C Class D Class E UVG DVC (Lu et al., 2019) as Baseline Li et al. (2016)1 20.21 17.13 13.71 10.32 16.69 Li et al. (2022)1 -6.80 -2.96 0.48 -6.85 -4.12 OEU (Lu et al., 2020)2 -13.57 -11.29 -18.97 -12.43 -13.78 BAO (Xu et al., 2022)2 -28.55 -26.82 -25.37 -32.54 -27.68 Proposed -32.10 -31.71 -35.86 -32.93 -30.92 DCVC (Li et al., 2021) as Baseline OEU (Lu et al., 2020)2 -10.75 -14.34 -16.30 -7.15 -16.07 BAO (Xu et al., 2022)2 -20.59 -19.69 -20.60 -23.33 -25.22 Proposed -32.89 -33.10 -32.01 -36.88 -39.66 Table 1: The BD-BR of our approach compared with others. 1 comes from Li et al. (2022). 2 comes from Xu et al. (2022). Figure 2: The R-D curve on HEVC Class D. Other than R-D performance, the bitrate error of our approach is also significantly smaller than BAO (Xu et al., 2022) (See Tab. 2). The bitrate error is measured as the relative bitrate difference before and after bit allocation. The smaller it is, the easier it is to achieve the desired bitrate accurately. For complexity, our approach only performs 920 steps of gradient ascent per-frame, while BAO requires 2000 steps. See more quantitative results (BD-PSNR & R-D curves) in Appendix. A.6. 6.3 ABLATION STUDY, ANALYSIS & QUALITATIVE RESULTS Tab. 3 shows that for BAO (Xu et al., 2022), jointly optimizing w1:T ,y1:T performs worse than optimizing y1:T or w1:T alone. This counter-intuitive phenomena comes from its incorrect estimation of gradient signal. For the proposed approach that corrects this, jointly optimizing w1:T ,y1:T performs better than optimizing y1:T or w1:T alone, which is aligned with our intuition. Bitrate-Error (%) ↓ Method Class B Class C Class D Class E UVG DVC (Lu et al., 2019) as Baseline BAO (Xu et al., 2022)2 8.41 12.86 21.39 5.94 3.73 Proposed 3.16 4.27 1.81 6.14 1.73 DCVC (Li et al., 2021) as Baseline BAO (Xu et al., 2022)2 25.67 23.90 23.74 24.88 21.86 Proposed 4.27 7.29 5.73 8.03 3.06 Table 2: The bitrate error of our approach compared with BAO. Method BD-BR (%) ↓ BAO (y) -25.37 BAO (w) -22.24 BAO (y,w) -14.76 Proposed (y) -32.60 Proposed (w) -31.56 Proposed (y,w) -35.86 Table 3: Ablation study with HEVC Class D and DVC (Lu et al., 2019). To better understand why our method works, we present the R-D cost, distortion and rate versus frame/latent index for different methods in Fig. 3: top-left shows that the R-D cost of our approach consistently decreases according to SAVI stage. Moreover, it outperforms BAO after 4th frame; top-right shows that for each frame the R-D cost of our method is lower than BAO; bottom-left shows that the distortion part of R-D cost of our approach is approximately the same as BAO. While bottom-right shows that the advantage of our approach over BAO lies in the bitrate. More specifically, BAO increases the bitrate of yis after SAVI, while our correction decreases it. See more analysis in Appendix. A.9 and qualitative results in Appendix. A.10. 7 DISCUSSION & CONCLUSION Despite our correction is already more efficient than original BAO (Xu et al., 2022), its encoding speed remains far from real-time. Thus, it is limited to scenarios where R-D performance matters much more than encoding time (e.g. video on demand). See more discussion in Appendix. A.11. 
To conclude, we show that a previous bit allocation method for NVC is sub-optimal as it abuses SAVI on non-factorized latent. Then, we propose the correct SAVI on general non-factorized latent by back-propagating through gradient ascent, and we further propose a feasible approximation to make it tractable for bit allocation. Experimental results show that our correction significantly improves the R-D performance. ETHICS STATEMENT Improving the R-D performance of NVC has positive social value, in terms of reducing carbon emission by saving the resources required to transfer and store videos. Moreover, unlike traditional codecs such as H.266 (Bross et al., 2021), neural video codec does not require dedicated hardware. Instead, it can be deployed with general neural accelerators. Improving the R-D performance of NVC prompts the practical deployment of video codecs that are independent of dedicated hardware, and lowers the hardware-barrier of playing multi-media contents. REPRODUCIBILITY STATEMENT For theoretical results, both of the two theorems are followed by proof in Appendix. A.1. For a relatively complicated novel algorithm (Alg. 2), we provide an illustration of the step by step execution procedure in Appendix. A.3. For experiment, both of the two datasets are publicly accessible. In Appendix. A.5, we provide more implementation details including all the hyper-parameters. Moreover, we provide our source code for reproducing the empirical results in supplementary material. A APPENDIX A.1 PROOF OF THM 1 AND THM 2 Theorem 1. After the procedure grad-2-level(x,ak) of Alg. 1 executes, we have the return value dL(ak, bK)/dak =←−a . Proof. This proof extends the proof of Thm. 1 in Domke (2012), and it also serves as a formal justification of Alg. 1 in Kim et al. (2018). Note that our paper and Kim et al. (2018) are subtly different from Samuel & Tappen (2009); Domke (2012) as our high level parameter w not only generate low level parameter y, but also directly contributes to optimization target (See Fig. 4). As the computational graph in Fig. 4 shows, we can expand dL(ak, bK)/dak as Eq. 16, with each term solved in Eq. 18 and Eq. 19. dL(ak, bK) dak = ∂L(ak, bK) ∂ak︸ ︷︷ ︸ known + K∑ k′=0 ∂bk ′ ∂ak︸ ︷︷ ︸ Eq. 18 dL(ak, bK) dbk′︸ ︷︷ ︸ Eq. 19 (16) To solve Eq. 16, we first note that ∂L(ak, bK)/∂ak, dL(ak, bK)/dbK , ∂b0/∂ak is naturally known. Then, by taking partial derivative of the update rule of gradient ascent bk ′+1 ← bk′ + αdL(ak, bk′)/dbk′ with regard to ak, bk′ , we have Eq. 17 and Eq. 18. Note that Eq. 18 is the partial derivative ∂bk ′+1/∂ak instead of total derivative dbk ′+1/dak = (∂bk ′+1/∂bk ′ )(dbk ′ /dak) + ∂bk ′+1/∂ak. ∂bk ′+1 ∂bk′ = I + α ∂2L(ak, bk′) ∂bk′∂bk′ (17) ∂bk ′+1 ∂ak = α ∂2L(ak, bk′) ∂ak∂bk′ (18) And those second order terms can either be directly evaluated or approximated via finite difference as Eq. 20. As Eq. 18 already solves the first term on the right hand side of Eq. 16, the remaining issue is dL(ak, bK)/dbk′ . To solve this term, we expand it recursively as Eq. 19 and take Eq. 17 into it. dL(ak, bK) dbk′ = ∂bk ′+1 ∂bk′ dL(ak, bK) dbk′+1 (19) And the above solving process can be described by the procedure grad-2-level(x,ak) of Alg. 1. Specifically, the iterative update of ←−−− bk ′+1 in line 15 corresponds to recursively expanding Eq. 19 with Eq. 17, and the iterative update of ←−a in line 14 corresponds to recursively expanding Eq. 16 with Eq. 18 and Eq. 19. Upon the return of grad-2-level(x,ak) of Alg. 1, we have←−a = dL(ak, bK)/dbk. 
The complexity of the Hessian-vector product in line 14 and 15 of Alg. 1 may be reduced using finite difference following (Domke, 2012) as Eq. 20. ∂2L(ak, bk′) ∂ak∂bk′ v= lim r→0 1 r ( dL(ak, bk′ + rv) dak − dL(a k, bk ′ ) dak ) ∂2L(ak, bk′) ∂bk′∂bk′ v = lim r→0 1 r ( dL(ak, bk′ + rv) dbk′ − dL(a k, bk ′ ) dbk′ ) (20) Theorem 2. After the procedure grad-dag(x,ak00 , ...,a ki i ) in Alg. 2 executes, we have the return value dL(ak00 , ...,a ki i ,a K >i)/da ki i = ←−ai. Proof. Consider computing the target gradient with DAG G. The aki ’s gradient is composed of its own contribution to L in addition to the gradient from its children aj ∈ C(ai). Further, as we are considering the optimized children aKj , we expand the children node aj as Fig. 4. Then, we have: dL(ak00 , ...,a ki i ,a K >i) dakii = ∂L(ak00 , ...,a ki i ,a K >i) ∂akii︸ ︷︷ ︸ known + ∑ aj∈C(ai) ( K∑ kj=0 ∂a kj j ∂akii︸ ︷︷ ︸ Eq. 18 dL(ak00 , ...,a kj−1 j−1 ,a K ≥j) da kj j︸ ︷︷ ︸ Eq. 19 ) (21) The first term on the right-hand side of Eq. 21 can be trivially evaluated. The ∂akjj /∂a ki i can be evaluated as Eq. 18. And the dL(ak00 , ...,a kj−1 j−1 ,a K ≥j)/da kj j can be iteratively expanded as Eq. 19. We highlight several key differences between Alg. 2 and Alg. 1 which are reflected in the implementation of Alg. 2: • The gradient evaluation of current node yi requires gradient of its plural direct children aj ∈ C(ai), instead of the single child in 2-level case. The children traversal part of Eq. 19 corresponds to the two extra for loop in line 8 and 14 of Alg. 2. • The gradient ascent update of child latent parameter akj+1j ← a kj j + αdL(ak00 , ...,a kj j ,a K >j)/da kj j can be conducted trivially only if C(aj) is empty, otherwise the gradient has to be evaluated recursively using Eq. 21. And this part corresponds to the recursive call in line 11 of Alg. 2. And the other part of Alg. 2 is the same as Alg. 1. So the rest of the proof follows Thm. 1. Similarly, the Hessian-vector product in line 17 and 18 of Alg. 2 may be approximated as Eq. 20. However, this does not save Alg. 2 from an overall complexity of Θ(KN ). A.2 THE COMPLETE FORMULA FOR SEC. 3.1 AND SEC. 3.2 In this section, we provide the complete formula on yi related gradient for Sec. 3.1 and Sec. 3.2. Specifically, Eq. 22 is paired with Eq. 10, and Eq. 23 is paired with Eq. 11. dL(w1:T ,y1:T ) dyi = T∑ j=i dLj(w1:j ,y1:j) dyi dLj(w1:j ,y1:j) dyi = j∑ l=i+1 ( ∂yl ∂yi dLj(w1:j ,y1:j) dyl + ∂wl ∂yi dLj(w1:j ,y1:j) dwl )︸ ︷︷ ︸ ignored by BAO + ∂Lj(w1:j ,y1:j) ∂yi︸ ︷︷ ︸ considered by BAO (22) y k′i+1 i ← y k′i + α dL(wk11 , ...,w ki i ,w K >i,y k′1 1 , ...,y k′i i ,y K >i) dy k′i i , where w0>i,y 0 >i = f(x,w k1 1 , ...,w ki i ,y k′1 1 , ...,y k′i i ) (23) A.3 AN EXAMPLE OF EXECUTION OF ALG. 2 In this section, we provide an example of the full procedure of execution of Alg. 2 in Fig. 5. The setup is as Fig. 5.(0): we have N = 3 latent a1,a2,a3 and gradient ascent step K = 2, connected by a DAG shown in the figure. A.4 EXTENDING THE ANALYSIS SEC. 3 TO GENERAL DAG CASE As the Alg. 2 and Alg. 4 are applicable to general SAVI (Kim et al., 2018; Marino et al., 2018) beyond bit allocation, it is helpful to understand their merit to extend the analysis in Sec. 3 from bit allocation to general DAG scenario. In this section, we consider the same problem setup as Sec. 4.2. Similar to bit allocation case, BAO has the gradient incomplete and gradient value incorrect problem. The gradient incomplete issue is presented as Eq. 24, and gradient value incorrect issue is presented as Eq. 25. 
dL(ak00 , ...,a ki i ,a K >i) dakii = ∂L(ak00 , ...,a ki i ,a K >i) ∂akii︸ ︷︷ ︸ considered by BAO + ∑ aj∈C(ai) ( K∑ kj=0 ∂a kj j ∂akii dL(ak00 , ...,a kj−1 j−1 ,a K ≥j) da kj j ) ︸ ︷︷ ︸ ignored by BAO (24) ∂L(ak00 , ...,a ki i ,a K >i) ∂akii ≈ ∂L(aki0 , ...,a ki i ,a ki >i) ∂akii︸ ︷︷ ︸ approximation of BAO in gradient value (25) A.5 MORE IMPLEMENTATION DETAILS In the main text, we use yi as all the latent variable related to residual. In practice, it is divided into yi, zi,∆ y i , which refer to the first level latent of residual, second level latent of residual and quantization step size of first level latent of residual respectively. In practice, as BAO (Xu et al., 2022), all of the 3 parts are involved in SAVI jointly. We note that this is not a problem as they fully factorize. And for DVC (Lu et al., 2019), wi indeed represent the latent of motion. As for DVC, the motion has only one level of latent. However for DCVC (Li et al., 2021), wi is divided into wi,vi,∆wi , which refer to the first level latent of motion, second level latent of motion and quantization step size of first level latent of motion respectively. Similar to yi, all of the 3 parts are involved in SAVI jointly, and this is not a problem as they fully factorize. Following BAO (Xu et al., 2022), we set the target λ0 = {256, 512, 1024, 2048}, which also follows the baselines (Lu et al., 2019; Li et al., 2021). We adopt the official pre-train models for both of the baseline methods (Lu et al., 2019; Li et al., 2021). We do not have a training dataset or implementation details for training amortized encoder / decoder as all the experiments are performed on official pre-trained models. For gradient ascent, we set K = 2000 for the first I frame and K = 400 for all other P frames. On average, the gradient ascent steps for each frame is 920, which is smaller than 2000 in BAO. A.6 MORE QUANTITATIVE RESULTS In this section we present more quantitative results. In Tab. 4 we show the BD-PSNR of our proposed method and other methods as a supplementary to the BD-BR results (Tab. 1). Furthermore, in Fig. 6, we present R-D curve on all classes of HEVC CTC and UVG dataset as a supplementary to the HEVC Class D plot (Fig. 2). 0.050 0.075 0.100 0.125 0.150 0.175 Bpp 32 33 34 35 PS N R HEVC Class B 0.10 0.15 0.20 0.25 0.30 Bpp 29 30 31 32 33 34 PS N R HEVC Class C 0.10 0.15 0.20 0.25 0.30 0.35 Bpp 28 29 30 31 32 33 34 PS N R HEVC Class D 0.02 0.03 0.04 0.05 0.06 0.07 0.08 Bpp 35 36 37 38 39 40 PS N R HEVC Class E 0.04 0.06 0.08 0.10 0.12 Bpp 34 35 36 37 38 PS N R UVG −0.04 −0.02 0.00 0.02 0.04 Bpp −0.04 −0.02 0.00 0.02 0.04 PS N R DCVC OEU (on DCVC) BAO (on DCVC) Proposed (on DCVC) DVC OEU (on DVC) BAO (on DVC) Proposed (on DVC) Figure 6: The R-D performance of our approach compared with baselines (w/o bit allocation) and other bit allocation approaches. A.7 COMPLEXITY & SCALABILITY Figure 7: Spatial temporal complexity analysis comparing BAO (Xu et al., 2022), the proposed approach and a fast approximation of the proposed approach. The analysis is done on DVC baseline and HEVC Class D dataset. We perform additional evaluation to compare the proposed method with BAO (Xu et al., 2022) in terms of temporal complexity and memory cost. The evaluation result can be found in Fig. 7. The general result is that our approach is ≈ 2.8 times slower and cost ≈ 2.0 times memory than BAO, despite the optimization stepsize is smaller. This extra complexity comes from the cost of sequential optimization of latent. 
A.7 COMPLEXITY & SCALABILITY

Figure 7: Spatial-temporal complexity analysis comparing BAO (Xu et al., 2022), the proposed approach, and a fast approximation of the proposed approach. The analysis is done on the DVC baseline and the HEVC Class D dataset.

We perform an additional evaluation to compare the proposed method with BAO (Xu et al., 2022) in terms of temporal complexity and memory cost. The results can be found in Fig. 7. The general result is that our approach is ≈ 2.8 times slower and costs ≈ 2.0 times the memory of BAO, even though the optimization step size is smaller. This extra complexity comes from the cost of the sequential optimization of the latents. Thus, in its naïve form, our current method is slower than BAO while performing better; jointly considering R-D performance, time, and memory, our method does not dominate BAO. However, as our approach enables a sequential style of semi-amortized variational inference (SAVI) (Kim et al., 2018; Marino et al., 2018) on the latents, there exists a very simple trick to speed it up. Moreover, this trick also resolves the scalability issue. Specifically, to optimize the i-th frame's latent, we do not compute the R-D cost of all the frames after it as we do now. Instead, we limit the R-D cost computation to a small, fixed number of frames. Formally, we approximate the gradients as

$$\frac{d\mathcal{L}(w_{1:T}, y_{1:T})}{dw_i} \approx \sum_{j=i}^{i+C} \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dw_i}, \qquad \frac{d\mathcal{L}(w_{1:T}, y_{1:T})}{dy_i} \approx \sum_{j=i}^{i+C} \frac{d\mathcal{L}_j(w_{1:j}, y_{1:j})}{dy_i}, \quad (26)$$

where C is a preset constant indicating the number of future frames we include for consideration. With this trick, our approach costs only ≈ 50% of the time and ≈ 60% of the memory of BAO, while retaining superior performance (≈ 5% better in BD-BR); see Ours (fast) in Fig. 7, with results based on DVC, Class D, and C = 2. With this trick, jointly considering R-D performance, time, and memory, our approach clearly dominates BAO. Furthermore, with this trick, the scalability issue of our approach is significantly alleviated. As shown in Fig. 8, the memory cost of our approach with this trick is constant in GoP size, while that of BAO and of our approach without the trick grows linearly with GoP size. This means that with this trick, our approach becomes scalable to any GoP size, which is superior to BAO.

A.8 IMPACT ON OEU

Another interesting question is whether the sequential updating algorithm (Alg. 4) benefits OEU (Lu et al., 2020). Indeed, OEU (Lu et al., 2020) and BAO (Xu et al., 2022) look quite similar at first glance. However, it is important to note that the theoretical foundation of BAO and of this paper is SAVI (Kim et al., 2018; Marino et al., 2018), whereas OEU does not fit into the SAVI framework. More specifically, its encoder parameters to be updated do not factorize according to the DAG defined by the variational posterior. Thus, applying Alg. 4 to it is incorrect. To verify this empirically, we change OEU from BAO's joint optimization to our sequential optimization (Alg. 4), and the results show that this change degrades R-D performance (see the COEU line in Fig. 9).
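Before the further analysis below, we note how the fast approximation of Eq. 26 (Sec. A.7) looks in code: it simply truncates the sum of per-frame R-D losses to C future frames. This is a minimal autograd sketch with a made-up per-frame loss, not the paper's implementation.

import torch

T, C = 10, 2                      # frames in the GoP, look-ahead window (Eq. 26)
y = [torch.randn(8, requires_grad=True) for _ in range(T)]  # toy latents

def frame_loss(j):
    # Illustrative stand-in for L_j(w_{1:j}, y_{1:j}): depends on y_0..y_j.
    return sum((y[l] ** 2).sum() * 0.1 for l in range(j + 1))

i = 3
# Full gradient: sum over all future frames j = i..T-1 (memory grows with T).
full = torch.autograd.grad(sum(frame_loss(j) for j in range(i, T)), y[i])[0]
# Truncated gradient of Eq. 26: only frames j = i..i+C.
fast = torch.autograd.grad(
    sum(frame_loss(j) for j in range(i, min(i + C + 1, T))), y[i])[0]
print(torch.norm(full - fast))    # approximation error introduced by truncation

Because only the windowed losses are ever materialized, the computation graph, and hence the memory, no longer depends on the GoP length, which is the constant-memory behavior reported in Fig. 8.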
A.9 MORE ANALYSIS

In this section, we extend the analysis of why the proposed approach works and of how it differs from BAO (Xu et al., 2022). In the approximate SAVI on DAG latents (Alg. 4), we solve SAVI approximately, latent by latent, in topological order. For bit allocation of an NVC with 10 frames, this topological order is y_0, w_1, y_1, ..., w_9, y_9, where y_0 is the latent of the I frame, w_i is the motion latent of the i-th P frame, and y_i is the residual latent of the i-th P frame. In Fig. 10, we show the relationship between the R-D cost and the stage of approximate SAVI. We can see that the R-D cost decreases almost monotonically as the SAVI stage advances, which indicates that our approximate SAVI on the DAG (Alg. 4) is successful. Specifically, although our approach is inferior to BAO (Xu et al., 2022) upon the convergence of y_3, it attains a significant advantage over BAO after y_9 converges. In Fig. 11, we compare the distributions of R-D cost, PSNR, and bpp across frames and latents for the baseline DVC (Lu et al., 2019), BAO (Xu et al., 2022), and the proposed approach.

For the R-D cost, it is clear that our proposed approach's R-D cost is lower than BAO's and the baseline's, which indicates better R-D performance. For bpp, it is interesting to observe that although all three methods have a similar bpp for the motion-related latents w_{1:T}, the bpp of the residual-related latents y_{1:T} is quite different. Specifically, BAO increases the bpp of y_{1:T} compared with the baseline, while our approach decreases it. This explains why our approach has a lower bitrate than BAO, and also why our approach has a significantly smaller bitrate error. For the PSNR metric, both our approach and BAO significantly improve on the baseline, and the difference between the proposed approach and BAO is not obvious. We conclude that the benefit of the proposed approach over BAO comes from bitrate saving rather than quality enhancement.

A.10 QUALITATIVE RESULTS

In Fig. 12, Fig. 13, Fig. 14, and Fig. 15, we present qualitative results of our approach compared with the baseline approach. We note that, compared with the reconstructed frames of the baseline, the reconstructed frames of our proposed approach preserve significantly more detail at a lower bitrate and look much more similar to the original frames. We intentionally omit a qualitative comparison with BAO (Xu et al., 2022) as it is not very informative. Specifically, from Fig. 2 we can observe that the PSNR difference between BAO and our approach is very small (within ±0.1 dB), and our main advantage over BAO comes from bitrate saving instead of quality improvement. Thus the qualitative difference between the proposed method and BAO is likely to fall below the just-noticeable difference (JND).

A.11 MORE DISCUSSION

Another weakness is scalability. Our method requires jointly considering all the frames inside the GoP, which is impossible when the GoP size is large, or unknown as in live-streaming tasks. Furthermore, the number of gradient ascent steps is currently chosen merely as an empirical sweet spot between speed and performance; a thorough grid search is desirable to better understand its effect on performance.
1. What is the focus of the paper regarding neural video compression? 2. What are the strengths of the proposed approach, particularly in its contribution to the bit allocation problem? 3. Do you have any concerns or questions about the notation used in the paper? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any suggestions for improving the approach or expanding its applications?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper studies optimal bit allocation in a group of pictures for neural video compression. It begins by pointing out an algorithmic error in a previous work (BAO) that applied semi-amortized variational inference (SAVI) to this problem. The root of this problem is that the dependencies between variables were not considered during optimization, leading to an incorrect derivative formula and incorrect evaluation points. The paper then derives the correct formula that exploits the factorization of the variables and, noting it is intractable for bit allocation, proposes to approximate the evaluation point while keeping the correct formula. Both the proposed relaxation and BAO are then evaluated and compared to other approaches using the HEVC Common Testing Conditions. The proposed relaxation is shown to be better than BAO in terms of BD-BR and bitrate error. BAO and the relaxation are then further analyzed by comparing the performance on successive frames of a Group of Pictures. Strengths And Weaknesses Strengths This approach is an interesting contribution to the bit allocation problem in neural video compression. The SAVI algorithm on a DAG is interesting beyond this specific problem for the machine learning community. The experiments are well conducted, with multiple baselines, and clearly show the strength of the approach. Furthermore, the in-depth comparison with BAO is helpful to highlight how the proposed approach fixes the problem. The source code is provided. Weaknesses Using the notations y and w for bit allocation but y for all variables in general SAVI confused me a little; it was fine until both are combined (Section 4.4). For example, it is not clear to me whether equation 15 indicates that the approximation applies only to the y of the bit allocation problem or to both y and w. I assumed it was the latter. For a more general use of the approach, maybe it would be helpful to have a general idea of the DAG structures for which the advantage over the BAO approach is significant. Similarly, additional experiments on generic data could be interesting to better show the behavior of the approach. Clarity, Quality, Novelty And Reproducibility I think the authors did a great job explaining their approach. It remains hard to understand for me, but the presentation helped a lot. Overall, the paper is well written and mostly clear, except for the minor point raised above. As far as I know, this approach is novel. Note that I am not familiar with recent literature on variational inference. The code is provided, so reproducibility is great. I did not try to run it. The idea presented is not trivial to me, application-motivated, and of interest to the machine learning community.
ICLR
Title Truthful Self-Play Abstract We present a general framework for evolutionary learning of emergent, unbiased state representations without any supervision. Evolutionary frameworks such as self-play converge to bad local optima in the case of multi-agent reinforcement learning in non-cooperative partially observable environments with communication, due to information asymmetry. Our proposed framework is a simple modification of self-play inspired by mechanism design, also known as reverse game theory, to elicit truthful signals and make the agents cooperative. The key idea is to add imaginary rewards using the peer prediction method, i.e., a mechanism for evaluating the validity of information exchanged between agents in a decentralized environment. Numerical experiments with predator prey, traffic junction and StarCraft tasks demonstrate the state-of-the-art performance of our framework. 1 INTRODUCTION Evolving culture prevents deep neural networks from falling into bad local optima (Bengio, 2012). Self-play (Samuel, 1967; Tesauro, 1995) has not only demonstrated the ability to abstract high-dimensional state spaces, as typified by AlphaGo (Silver et al., 2017), but has also improved exploration coverage in partially observable environments. Communication (Sukhbaatar et al., 2016; Singh et al., 2019) lets agents exchange internal representations such as explored observations and the hidden states of RNNs. Evolutionary learning is expected to be a general framework for creating superhuman AIs, as such learning can generate high-level abstract representations without any bias from supervision. However, when applying evolutionary learning to a partially observable environment with non-cooperative agents, an improper bias is injected into the state representation. This bias originates from the environment. A partially observable environment with non-cooperative agents induces actions that prevent an agent from honestly sharing its correct internal state; at equilibrium, agents take actions such as concealing information and deceiving other agents (Singh et al., 2019). The problem arises because an agent cannot fully observe the state of the environment and thus does not have sufficient knowledge to verify the information provided by other agents. Furthermore, neural networks are vulnerable to adversarial examples (Szegedy et al., 2014) and are likely to exhibit erroneous behavior under small perturbations. Many discriminative models for information accuracy are available; these include GANs (Goodfellow et al., 2014; Radford et al., 2016) and curriculum learning (Lowe et al., 2020).
However, these models assume that accurate samples can be obtained through supervision. Because of this assumption, it is impossible to apply them to a partially observable environment, where the distribution is not stable. We generalize self-play to non-cooperative partially observable environments via mechanism design (Myerson, 1983; Miller et al., 2005), also known as reverse game theory. The key idea is to add imaginary rewards by using the peer prediction method (Miller et al., 2005), that is, a mechanism for evaluating the validity of information exchanged between agents in a decentralized environment, which is calculated based on the social influence of the signals. We formulate the non-cooperative partially observable environment as an extension of partially observable stochastic games (POSG) (Hansen et al., 2004) and introduce truthfulness (Vickrey, 1961), an indicator of the validity of the state representation. We show that the imaginary reward enables us to reflect the bias of the state representation in the gradient without oracles. As the first contribution, we propose truthful self-play (TSP) and analytically demonstrate convergence to the global optimum (Section 4). We propose the imaginary reward on the basis of the peer prediction method (Miller et al., 2005) and apply it to self-play. The mechanism affects the gradient at the local optima, but not at the global optima. The trick is to use the actions taken by the agents as feedback to verify the signal received from every other agent, instead of the true state, input, and intent, which the agents cannot fully observe. TSP only requires a modification of the baseline function of self-play; it drastically improves the convergence to the global optimum in Comm-POSG. As the second contribution, based on the results of numerical experiments, we report that TSP achieves state-of-the-art performance on various multi-agent tasks comprising up to 20 agents (Section 5). Using predator prey (Barrett et al., 2011), traffic junction (Sukhbaatar et al., 2016; Singh et al., 2019), and StarCraft (Synnaeve et al., 2016) environments, which are typically used in Comm-POSG research, we compared the performance of TSP with current neural nets, including the state-of-the-art method, using LSTM, CommNet (Sukhbaatar et al., 2016), and IC3Net (Singh et al., 2019). We report that the model with IC3Net optimized by TSP has the best performance. This work is the first attempt to apply mechanism design to evolutionary learning. TSP is a general optimization algorithm whose convergence is theoretically guaranteed for arbitrary policies and environments. Since no supervision is required, TSP has a wide range of applications, not only to game AIs (Silver et al., 2017), but also to robots (Jaderberg et al., 2018), chatbots (Gupta et al., 2019; Chevalier et al., 2019), and autonomous cars (Tang, 2019) employed in multi-agent tasks. Notation: Vectors are columns. Let ⟦n⟧ := {1, . . . , n}. R is the set of real numbers. i is the imaginary unit. Re u and Im u are the real and imaginary parts of a complex number u, respectively. n-tuples are written as the boldface of the original variable, a := ⟨a1, . . . , an⟩, and a−i is the (n − 1)-tuple obtained by removing the i-th entry from a. Let 1 := (1, . . . , 1)ᵀ. Matrices are shown in uppercase letters, L := (ℓij). E is the unit matrix. The set of probability distributions with support X is denoted P(X).
2 RELATED WORK Neural communication has gained attention in the field of multi-agent reinforcement learning (MARL) for both discrete (Foerster et al., 2016) and continuous (Sukhbaatar et al., 2016; Singh et al., 2019) signals. Such networks are trained via self-play to exchange the internal state of the environment, stored in the working memory of recurrent neural networks (RNNs), in order to learn the right policy in partially observable environments. The term self-play was coined by the game AI community in the latter half of the last century. Samuel (1967) introduced self-play as a framework for sharing a state-action value between two opposing agents to efficiently search the state space in Checkers. TD-Gammon (Tesauro, 1995) introduced self-play as a framework to learn TD(λ) (Sutton & Barto, 1998) and achieved a professional-grade level in backgammon. AlphaGo (Silver et al., 2017) defeated the Go champion by combining supervised learning on professional game records with self-play. AlphaZero (Silver et al., 2018) successfully learnt beyond its predecessors' performance entirely through self-play. All these studies point to eliminating the bias of human knowledge in supervision as the advantage of self-play. Self-play is also known as evolutionary learning (Bengio, 2012) in the deep learning community, mainly as an approach to emerging representations without supervision (Bansal et al., 2018; Balduzzi et al., 2019). Bansal et al. (2018) show that competitive environments contribute to emerging diversity and complexity. Rich generative models such as GANs (Goodfellow et al., 2014; Radford et al., 2016) are frameworks for acquiring an environmental model by employing competitive settings. RNNs such as world models (Ha & Schmidhuber, 2018; Eslami et al., 2018) are capable of wider ranges of exploration in partially observable environments and of the generation of symbols and languages (Bengio, 2017; Gupta et al., 2019; Chevalier et al., 2019). The difference between evolutionary learning and supervised learning is the absence of human knowledge and oracles. Several works have formalized settings in which the agents exchange environmental information as formal classes of games, such as Dec-POMDP-Com (Goldman & Zilberstein, 2003) and COM-MTDP (Pynadath & Tambe, 2002), and several frameworks have been proposed to solve these problems. However, the limitation of these frameworks is that they assume a common reward. As there is as yet no formal definition of non-cooperative communication games, we formalize such games as Comm-POSG, a superset of POSGs (Hansen et al., 2004), a more general class of multi-agent games that includes non-cooperative cases. To the best of our knowledge, no studies have introduced truthful mechanisms into the field of MARL, but it may be possible to do so by using agents that can learn flexibly, such as neural networks. A typical truthful mechanism is the VCG mechanism (Vickrey, 1961), a generalization of the pivot method used in auction theory, in which the reports that must satisfy truthfulness are valuations (or value functions, if interpreted from an RL perspective). In this study, the scope of application is different because the belief states of the environment are the subject of reporting.
Therefore, we instead introduce the peer prediction method (Miller et al., 2005), which guarantees truthfulness with respect to reported beliefs about arbitrary probability distributions by using proper scoring rules (Gneiting & Raftery, 2007).

3 PROBLEM DEFINITION

3.1 COMM-POSG A communicative partially observable stochastic game (Comm-POSG) is a class of non-cooperative Bayesian games in which no agent fully observes the environment, but the agents interact with each other. We define Comm-POSG as an extension of POSG (Hansen et al., 2004) with a message protocol.

Definition 3.1 (Hansen et al., 2004). A POSG ⟨n, T, S, A, X, T, P, R⟩ is a class for multi-agent decision making under uncertainty in which the state evolves over time 1 ≤ t ≤ T, where
• n is the number of agents,
• T is the horizon, i.e., the episode length,
• S is a set of discrete/continuous states s_t ∈ S with an initial probability distribution p(s_0),
• A is a set of discrete/continuous actions a_{ti} ∈ A,
• X is a set of discrete/continuous observations x_{ti} ∈ X,
• T ∈ P(S × Aⁿ × S) is the state transition probability,
• P ∈ P(S × Xⁿ) is the observation probability, and
• R : S × Aⁿ → Rⁿ is a reward function that outputs an n-dimensional vector.

In Comm-POSGs, every agent further follows a message protocol Zⁿˣⁿ, where Z is the discrete/continuous signal space. The complete information exchanged among the agents at time t is Z_t := (z_{tij})_{i,j∈⟦n⟧} ∈ Zⁿˣⁿ, a signal matrix whose (i, j)-th entry z_{tij} represents the signal from Agent i to Agent j at time t. The i-th diagonal entry of Z_t, h_{ti} := z_{tii}, represents the pre-state, the internal state of the i-th agent before receiving the signals from the others. A game in Comm-POSG is denoted G := ⟨n, T, S, A, X, T, P, R, Z⟩. The objective of Comm-POSG is the social welfare (Arrow, 1963), defined by

$$J := \sum_{i=1}^{n} V^{\pi_i}; \qquad V^{\pi_i} := \mathbb{E}_{\pi_i}\left[\sum_{t=1}^{T} \gamma^{t-1} r_{ti}\right], \quad (1)$$

where γ ∈ [0, 1] is the discount rate, r_{ti} is the reward, π_i is a stochastic policy, and V^{π_i} is the value function. In extensive-form games, including Comm-POSG, the information of other agents cannot be observed in addition to the information in the environment. In the optimization problem under these assumptions, a policy converges to a solution called the Bayesian Nash equilibrium (BNE) (Fudenberg, 1993). We denote the social welfare at the BNE by J* and the global maximum by Ĵ. In general, J* ≠ Ĵ holds, which is closely related to information asymmetry.

3.2 COMMUNICATIVE RECURRENT AGENTS For the purpose of proposing an optimization algorithm in this paper, we do not propose a concrete network structure; instead, we define an abstract structure that can cover existing neural communication models (Sukhbaatar et al., 2016; Singh et al., 2019), namely communicative recurrent agents (CRAs) ⟨f_φ, σ_φ, q_φ, π_θ⟩, where
• f_φ(ĥ_{t−1,i}, x_{ti}) ↦ Z is a deep RNN for the high-dimensional input x_{ti} ∈ X and the previous post-state ĥ_{t−1,i} ∈ Z, with parameter φ and initial state ĥ_0 ∈ Z,
• σ_φ(z_{ti}|h_{ti}) is a stochastic messaging policy for the pre-state h_{ti} := f_φ(ĥ_{t−1,i}, x_{ti}),
• q_φ(ĥ_{ti}|ẑ_{ti}) is a stochastic model for the post-state ĥ_{ti} ∈ Z given the received messages ẑ_{ti} := Z_{t,:i}ᵀ = (z_{t1i}, . . . , z_{t,i−1,i}, h_{ti}, z_{t,i+1,i}, . . . , z_{tni})ᵀ, and
• π_θ(a_{ti}|ĥ_{ti}) is the stochastic action policy with parameter θ.

These agents are trained through self-play using on-policy learning such as REINFORCE (Williams, 1992). All n agents share the same weights within an episode, and the weights are updated based on the cumulative reward after the episode. In addition to a recurrent agent's output of actions given the observation series as input, a CRA has communication signals as input and output. CRAs estimate the current state of the environment and the agent's own current value based on the post-state model, with the pre-state h_{ti} in the hidden layer of the RNN and the signals ẑ_{ti,−i} received from the other agents. Hence, the veracity of the signals z_{ti} is the point of contention.
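To make the CRA interface ⟨f_φ, σ_φ, q_φ, π_θ⟩ concrete, the following is a minimal single-step sketch in PyTorch. The module choices (a GRU cell, deterministic tanh messages, a mean-pooled post-state) and all sizes are illustrative assumptions of ours, not the architecture used in the experiments.

import torch
import torch.nn as nn

class CRA(nn.Module):
    """Minimal communicative recurrent agent <f_phi, sigma_phi, q_phi, pi_theta>."""
    def __init__(self, obs_dim, msg_dim, n_actions):
        super().__init__()
        self.f = nn.GRUCell(obs_dim, msg_dim)     # f_phi: RNN producing pre-state h
        self.sigma = nn.Linear(msg_dim, msg_dim)  # sigma_phi: messaging head
        self.q = nn.Linear(msg_dim, msg_dim)      # q_phi: post-state model
        self.pi = nn.Linear(msg_dim, n_actions)   # pi_theta: action policy

    def pre_state(self, x, h_post_prev):
        return self.f(x, h_post_prev)             # h_ti

    def message(self, h):
        return torch.tanh(self.sigma(h))          # z_ti (deterministic stand-in)

    def act(self, h, received):                   # received: (n, msg_dim), incl. own h
        h_post = torch.tanh(self.q(received.mean(0, keepdim=True) + h))  # hat h_ti
        dist = torch.distributions.Categorical(logits=self.pi(h_post))
        a = dist.sample()
        return a, dist.log_prob(a), h_post

# One step for n weight-sharing agents, as in self-play.
n, obs_dim, msg_dim = 3, 8, 16
agent = CRA(obs_dim, msg_dim, n_actions=5)        # one shared network
x = torch.randn(n, obs_dim)
h_prev = torch.zeros(n, msg_dim)
h = agent.pre_state(x, h_prev)                    # pre-states of all agents
Z = agent.message(h)                              # row i = signal broadcast by agent i
for i in range(n):
    recv = torch.cat([Z[:i], h[i:i+1], Z[i+1:]])  # others' signals + own pre-state
    a, logp, h_post = agent.act(h[i:i+1], recv)

Whether Z truthfully equals h is exactly the question that Sec. 3.3 formalizes: nothing in this training loop stops σ_φ from learning to emit misleading messages.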
3.3 TRUTHFULNESS In mechanism design, a truthful game (Vickrey, 1961) is a game in which all agents report honestly at the Bayesian Nash equilibrium. In Comm-POSGs, the truthfulness of the game is achieved if every sent signal equals the pre-state, z_{tij} = h_{ti}, i.e., all the agents share complete information. In that case, every agent has the same information ẑ_{ti} = h_t := (h_{t1}, . . . , h_{tn})ᵀ for all i and the same post-state model distribution, and hence the mean of the cross entropy between the distributions below is minimized:

$$D_\varphi(Z_t) := \frac{1}{n}\sum_{i=1}^{n} H\!\left[q_\varphi(\hat h_{ti} \mid \hat z_{ti})\right] + \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} D_{\mathrm{KL}}\!\left(q_\varphi(\hat h_{ti} \mid \hat z_{ti}) \,\|\, q_\varphi(\hat h_{tj} \mid \hat z_{tj})\right). \quad (2)$$

The first term represents the entropy of the knowledge each agent has about the environment, and the second the information asymmetry between the agents. D_φ is a lower bound on the amount of true information the environment has, H[p(s_t)]. Since achieving truthfulness is essentially the same problem as minimizing D_φ, it also maximizes J* simultaneously.

Proposition 3.1 (global optimality). For any game G in Comm-POSG, if D_φ(Z_t) = H[p(s_t)] for 0 ≤ t ≤ T and R_i is symmetric under any permutation of i ∈ ⟦n⟧, then J*(G) = Ĵ(G) holds.

Proof. Let w := ⟨θ, φ⟩ on a parameter space W(G), and let W*(G) denote the BNE. Since J is obviously maximized if σ_φ is truthful, we prove that σ_φ must be truthful under the given condition. To this end, we show the following Pareto optimality.

Lemma 3.1. For any G in Comm-POSG and given w, if J(w) ≥ J(w′) holds for any w′ ∈ W(G), then either U(w) ≥ U(w′) or D_KL(p ∥ q_φ) ≤ D_KL(p ∥ q_{φ′}) holds, where

$$U(w) := \mathbb{E}_{q_\varphi}[V^{\pi_\theta}] = \int_{s_t\in S,\, \hat z_t\in Z^n} V^{\pi_\theta}(s_t)\, dq_\varphi(s_t \mid h_{ti}, z_{t,-i})\, d\sigma_\varphi(\hat z_t). \quad (3)$$

Proof. The first inequality, U(w) ≥ U(w′), indicates that w is on the BNE W*(G) given φ; the second indicates that the belief state q_φ is as close to the true state p as possible. In a fully observable environment, by the value iteration theorem, there exists a solution π⋆(s_t) w.r.t. V^{π⋆}(s_t) for any s_t ∈ S. We name π⋆ and V⋆ := V^{π⋆} the unbiased policy and value, respectively. Since the unbiased policy solves the objective as J(w) = n E_{p(s_t)}[V^{π_θ}(s_t)] from Eq. (1), the goal intrinsically is to find a policy π* as close to π⋆ as possible, with ⟨π*, φ*⟩ ∈ W(G), that maximizes U(w). The policy π* can further be represented as a mixed policy composed of the unbiased policy π⋆ and a biased policy π′ ≠ π⋆ as follows:

$$\pi^*(a_{ti} \mid x_{ti}, \varphi) = \mathbb{E}_{q_\varphi(s_{ti}\mid x_{ti})}[\pi^\star(a_{ti} \mid s_{ti})] = q_\varphi(s_{t0}\mid x_{ti})\, \pi^\star(a_{ti}\mid s_{t0}) + (1 - q_\varphi(s_{t0}\mid x_{ti}))\, \pi'(a_{ti}\mid \varphi, x_{ti}), \quad (4)$$

for observations x_{ti} ∈ Xⁿ, where s_{t0} ∈ S is the true state. Hence

$$V^{\pi^*}(s_{t0}\mid\varphi) = \mathbb{E}_{P(x_{ti}\mid s_{t0})}\big[q_\varphi(s_{t0}\mid x_{ti}) V^\star(s_{t0}) + (1 - q_\varphi(s_{t0}\mid x_{ti})) V'(x_{ti}\mid\varphi)\big] = q_\varphi(s_{t0}) V^\star(s_{t0}) + \mathbb{E}_{P(x_{ti}\mid s_{t0})}\big[(1 - q_\varphi(s_{t0}\mid x_{ti})) V'(x_{ti}\mid\varphi)\big] = q_\varphi(s_{t0}) V^\star(s_{t0}) + (1 - q_\varphi(s_{t0})) \bar V'(s_{t0}\mid\varphi), \quad (5)$$

where

$$V'(x_{ti}\mid\varphi) := \int_{s_t\in S^{n+1},\, a_t\in A^n} R_i(s_{t0}, a_t) \prod_{i=1}^{n} d\pi'(a_{ti}\mid s_{ti})\, q_\varphi(s_{ti}\mid x_{ti}), \quad (6)$$

and

$$\bar V'(s_{t0}\mid\varphi) := \mathbb{E}_{P(x_{ti}\mid s_{t0})}\left[V'(x_{ti}\mid\varphi)\, \frac{1 - q_\varphi(s_{t0}\mid x_{ti})}{1 - q_\varphi(s_{t0})}\right]. \quad (7)$$

Thus, the error from the unbiased value function can be written as V⋆(s_{t0}) − V^{π*}(s_{t0}|φ) = (1 − q_φ(s_{t0}))(V⋆(s_{t0}) − V̄′(s_{t0}|φ)), which is minimized if q_φ(s_{t0}) = 1, as V⋆(s_{t0}) > V̄′(s_{t0}|φ) by definition.
From Jensen's inequality,

$$\log \mathbb{E}_{p(s_{t0})}[q_\varphi(s_{t0})] \geq \mathbb{E}_{p(s_{t0})}[\log q_\varphi(s_{t0})] = -D_{\mathrm{KL}}(p \,\|\, q_\varphi) - H[p]. \quad (8)$$

The right-hand side of the inequality corresponds to the negative cross entropy to be maximized. Therefore, as the second term H[p] does not depend on φ, the optimization is achieved by minimizing D_KL(p ∥ q_φ). Suppose that Ĵ(G) = J(w) > J*(G) for a non-truthful reporting policy, i.e., σ_φ(h|h) < 1. From Lemma 3.1, q_φ(s_t|h_{ti}), for an internal state h_{ti} = f(x_{ti}) of Agent i with an encoder f, minimizes D_KL(p ∥ q_φ(s_t|h_{ti})). As q_φ(s_t|z_{ti}) ≠ q_φ(s_t|h_{ti}) and D_KL(p ∥ q_φ(s_t|z_{ti})) > D_KL(p ∥ q_φ(s_t|h_{ti})) contradicts the Pareto optimality, σ_φ must be truthful.

4 PROPOSED FRAMEWORK

An obvious way to achieve truthful learning is to add D_φ as a penalty term to the objective, but there are two obstacles to this approach. One is that the new regularization term also adds a bias to the social welfare J; the other is that D_φ contains the agents' internal states, the post-states ĥ_{ti}, so the exact quantity cannot be measured by the designer of the learner. If the post-states were reported correctly, then the pre-states would also be reported honestly, and truthfulness would be achieved; therefore, it must be assumed that the post-states cannot be observed during optimization. Our framework, truthful self-play (TSP), consists of two elements: one is the introduction of imaginary rewards, a general framework for unbiased regularization in Comm-POSG; the other is the introduction of the peer prediction method (Miller et al., 2005), a truthful mechanism that encourages honest reporting based solely on observable variables. In the following, we describe each of them and clarify that the proposed framework converges to the global optimum in Comm-POSG. We show the whole procedure in Algorithm 1.

4.1 IMAGINARY REWARD Imaginary rewards are virtual rewards passed from agent to agent; they have a different basis, i, than the rewards passed from the environment, with the characteristic that they sum to zero. Since most RL environments, including Comm-POSG, involve no entities other than the agents and the environment, a two-dimensional structure is sufficient to describe them comprehensively if we wish to distinguish the sender of the reward. For the social welfare of the system to remain real, the system must be designed so that the sum of the imaginary rewards, i.e., the imaginary part of the social welfare, is zero. In other words, it is not observed macroscopically and affects only the relative expected rewards of the agents. The real and imaginary parts of the complex rewards are weighted against each other by the mass parameter β during training, which allows the weights of the network to maintain a real structure. The whole imaginary reward is denoted iY = (iy_{ij})_{i,j∈⟦n⟧} (note the use of i for the imaginary unit and i for indices), where iy_{ij} is the imaginary reward passed from Agent i to Agent j, and the complex reward for the whole game is R⁺ := R + iY, where R is a diagonal matrix with the environmental reward r_i as its (i, i)-th entry. We write G[iY] for the game in which this structure is introduced. In this case, the following proposition holds.

Proposition 4.1. For any G in Comm-POSG, if G[iY] is truthful and R⁺ is a Hermitian matrix, then J*(G[iY]) = Ĵ(G) holds.

Proof. Since G[iY] is truthful, J*(G[iY]) = Ĵ(G[iY]) holds from Proposition 3.1. Further, since R⁺ is Hermitian, iy_{ij} = −iy_{ji}, and hence Im Ĵ(G[iY]) = 0 holds; thus Ĵ(G[iY]) = Ĵ(G) holds.

This indicates that the BNE can be improved by introducing imaginary rewards: J*(G[iY]) ≥ J*(G).
Moreover, since Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ iy_{ij} = 0 by the condition that R⁺ is Hermitian, the imaginary rewards affect not the social welfare of the system, which is a macroscopic objective, but only the expected rewards of the individual agents. The baseline in the policy gradient (Williams, 1992) is an example of a function that does not affect the objective when its mean is zero. However, the baseline function is a quantity determined by the value function of a single agent, whereas the imaginary reward differs in that (1) it affects the value function of each agent, and (2) it is a meaningful quantity only when n ≥ 2 and is not observed when n = 1.

4.2 PEER PREDICTION MECHANISM The peer prediction mechanism (Miller et al., 2005) derives from mechanism design based on proper scoring rules (Gneiting & Raftery, 2007), which aim to encourage verifiers to honestly report their beliefs by assigning a reward-like score to their responses when predicting probabilistic events. These mechanisms assume at least two agents: a reporter and a verifier. A general scoring rule can be expressed as F(p_s ∥ s), where p_s is the probability of occurrence reported by the verifier for event s, and F(p_s ∥ s) is the score obtained if the reported event s actually occurs. A scoring rule is proper if an honest declaration consistent with the verifier's beliefs maximizes the expected score, and strictly proper if it is the only report that maximizes the expected score. A representative strictly proper rule is the logarithmic scoring rule F(p_s ∥ s) = log p_s, for which the expected value for a 1-bit signal is the cross entropy p*_s log p_s + (1 − p*_s) log(1 − p_s) for belief p*_s; one can verify that p_s = p*_s is the only report that maximizes it. Since proper scoring rules assume that events s are observable, they are not directly applicable to problems such as partially observable environments, where the true value is hidden. Miller et al. (2005), who first presented the peer prediction mechanism, assign scores to the posteriors of the verifiers, as updated by the received signals, rather than to the event itself. This concept is formulated by a model that assumes an event s emits a signal z stochastically and infers the type of s from the signals of the reporters who receive it. The peer prediction mechanism is denoted F(p(s|z) ∥ s) under the assumptions that (1) the type of event s and the signal z emitted by each type follow a given prior, (2) the priors are common knowledge among the verifiers, and (3) the posteriors are updated according to the reports. We apply the mechanism to RL, i.e., to the problem of predicting the agents' optimal behavior a_{ti} ∼ π_θ | s_t for the true state s_t ∈ S. In self-play, conditions 1 and 2 are satisfied because the prior π_θ is shared among the agents; furthermore, the post-state in Comm-POSG corresponds to condition 3, so the peer prediction mechanism can be applied to the problem of predicting agent behavior. Summarizing the above discussion, we can allocate a score matrix L_t as follows:

$$L_t := (\ell(a_{ti} \mid z_{tji}))_{i,j\in[\![n]\!]}; \qquad \ell(a_{ti} \mid z_{tji}) := F(\pi_\theta(a_{ti} \mid z_{tji}) \,\|\, a_{ti}) = \log \pi_\theta(a_{ti} \mid z_{tji}), \quad (9)$$

which is an n-th order square matrix representing the score from Agent i to Agent j.
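As an illustration of the score matrix of Eq. 9 and of the zero-averaging described in Sec. 4.3 below, here is a small numpy sketch; the random "policies" stand in for π_θ and are purely illustrative assumptions.

import numpy as np

n, n_actions = 4, 5
rng = np.random.default_rng(0)

# pi[i, j] = a distribution over agent j's actions given the signal z_i from
# agent i, standing in for pi_theta(. | z_i); illustrative random values.
pi = rng.dirichlet(np.ones(n_actions), size=(n, n))
a = rng.integers(n_actions, size=n)            # actions actually taken

# Score matrix: l_ij = log pi_theta(a_j | z_i) for i != j, and 0 on the diagonal.
L = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            L[i, j] = np.log(pi[i, j, a[j]])

# Zero-averaging with the graph Laplacian Delta = E - 11^T / n (Eq. 10 below):
Delta = np.eye(n) - np.ones((n, n)) / n
Y = Delta @ L                                  # imaginary rewards
print(Y.sum())                                 # ~0: the rewards are unbiased

Each column of Y sums to zero by construction (1ᵀΔ = 0), which is what makes the imaginary reward vanish from the macroscopic social welfare while still shifting individual agents' incentives.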
4.3 THE TRUTHFUL SELF-PLAY In TSP, a truthful system is constructed by introducing a proper scoring rule into the imaginary rewards. However, since the score matrix obtained from the proper scoring rule does not satisfy Hermitianity, we perform zero-averaging by subtracting the mean of the scores from each element of the matrix, thereby making the sum zero. This can be expressed as follows using the graph Laplacian Δ := E − 11ᵀ/n:

$$Y = \Delta L_\psi = \frac{1}{n}\begin{pmatrix} n-1 & -1 & \cdots & -1 \\ -1 & n-1 & \cdots & -1 \\ \vdots & \vdots & \ddots & \vdots \\ -1 & -1 & \cdots & n-1 \end{pmatrix}\begin{pmatrix} 0 & \ell_\psi(a_{t2}\mid z_{t1}) & \cdots & \ell_\psi(a_{tn}\mid z_{t1}) \\ \ell_\psi(a_{t1}\mid z_{t2}) & 0 & \cdots & \ell_\psi(a_{tn}\mid z_{t2}) \\ \vdots & \vdots & \ddots & \vdots \\ \ell_\psi(a_{t1}\mid z_{tn}) & \ell_\psi(a_{t2}\mid z_{tn}) & \cdots & 0 \end{pmatrix}, \quad (10)$$

to get

$$R^+ = R + i\Delta L_\psi, \quad (11)$$

which is the formula that connects reinforcement learning and mechanism design. We show the truthful self-play (TSP) in Algorithm 1; the only modification required from self-play is the imaginary reward.

Algorithm 1 The truthful self-play (TSP).
Require: Comm-POSG G = ⟨n, T, S, A⁺, X, T, P, R⁺⟩, recurrent neural network ⟨σ_φ, q_φ, π_θ⟩ with initial weights w_0 = ⟨θ_0, φ_0⟩ and initial state h_0, learning rate α > 0, and mass parameter β ≥ 0.
Initialize w ← w_0.
for each episode do
  Genesis: s_1 ∼ p(s), ĥ_{0i} ← h_0, ∀i ∈ ⟦n⟧.
  for t = 1 to T do
    1. Self-play:
       Observe x_t ∼ Pⁿ(·|s_t); update pre-states h_{ti} ← f_φ(ĥ_{t−1,i}, x_{ti}), ∀i ∈ ⟦n⟧.
       Generate messages z_{ti} ∼ σ_φ(·|h_{ti}), ∀i ∈ ⟦n⟧; send messages Z_t ← (z_{t1}, . . . , z_{tn}); receive messages ẑ_{ti} ← Z_{t,:i}ᵀ, ∀i ∈ ⟦n⟧.
       Update post-states ĥ_{ti} ∼ q_φ(·|ẑ_{ti}), ∀i ∈ ⟦n⟧; act a_{ti} ∼ π_θ(·|ĥ_{ti}), ∀i ∈ ⟦n⟧; get the real reward r_t ← R(s_t, a_t).
    2. Compute the score matrix with the peer prediction mechanism (Miller et al., 2005):
       ℓ_{ij} ← log π_θ(a_j | z_i) if i ≠ j, else 0, ∀i, j ∈ ⟦n⟧. (12)
    3. Combine the real and imaginary rewards into a complex reward:
       R⁺_t ← R_t + iΔL. (13)
    4. Update the weights by the policy gradient (Williams, 1992):
       g_t ← Σᵢ₌₁ⁿ r⁺_{ti} ∇_w [ log π_θ(a_{ti}|ĥ_{ti}) + log q_φ(ĥ_{ti}|ẑ_{ti}) + log σ_φ(z_{ti}|ĥ_{t−1,i}, x_{ti}) ] (14)
       w ← w + α Re g_t + αβ Im g_t.
    5. Proceed to the next state s_{t+1} ∼ T(·|s_t, a_t).
  end for
end for
return w

Theorem 4.1 (global optimality). For any G in Comm-POSG, TSP converges to the global optimum Ĵ(G) if the following convergence condition is met:

$$\sup_\varphi \left|\frac{\partial \operatorname{Re} V^{\pi_\theta}}{\partial \operatorname{Im} V^{\pi_\theta}}\right| < \beta, \quad (15)$$

where β < ∞ is a bounded mass parameter.

Proof (in summary; see Appendix A for the full proofs). From Proposition A.2, [βΔL_ψ] is unbiased truthful. Therefore, from Proposition A.1, convergence to the global optimum is achieved.
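To illustrate steps 2 to 4 of Alg. 1, the sketch below performs one REINFORCE-style update with a complex reward split into its real and imaginary parts. The toy policy is a bare logits table, and reading the per-agent imaginary reward off the diagonal of ΔL is our simplifying assumption about how Eq. 13 is indexed; none of this is the authors' code.

import torch

n, n_actions = 3, 5
alpha, beta = 1e-2, 0.5

logits = torch.zeros(n, n_actions, requires_grad=True)  # toy shared policy head
dist = torch.distributions.Categorical(logits=logits)
a = dist.sample()
logp = dist.log_prob(a)                      # log pi_theta(a_ti | post-state)

r_env = torch.randn(n)                       # diagonal of R_t (real rewards)
L = torch.randn(n, n)                        # stand-in for the score matrix (Eq. 12)
L.fill_diagonal_(0.0)
Delta = torch.eye(n) - torch.ones(n, n) / n
r_imag = torch.diagonal(Delta @ L)           # per-agent imaginary reward (assumption:
                                             # the diagonal of R+ = R + i*Delta*L)

# Eq. 14, with the gradient split into real and imaginary parts.
g_real = torch.autograd.grad((r_env * logp).sum(), logits, retain_graph=True)[0]
g_imag = torch.autograd.grad((r_imag * logp).sum(), logits)[0]
with torch.no_grad():
    logits += alpha * g_real + alpha * beta * g_imag  # w <- w + a*Re g + a*b*Im g

In a full implementation the log-probability terms of q_φ and σ_φ from Eq. 14 would be added to logp, but the reward-weighting structure is the same.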
5 NUMERICAL EXPERIMENT

In this section, we establish the convergence of TSP through numerical experiments with deep neural nets. We consider three environments for our analysis and experiments (Fig. 1): (a) a predator-prey environment (PP), in which predators with limited vision look for a prey on a square grid; (b) a traffic junction environment (TJ), similar to Sukhbaatar et al. (2016), in which agents with limited vision learn to signal in order to avoid collisions; and (c) StarCraft: Brood War (SC) explore and combat tasks, which test control of multiple agents in various scenarios where an agent needs to understand and decouple observations for multiple opposing units. We compare the performance of TSP with self-play (SP) and SP with curiosity (Houthooft et al., 2016) using three tasks belonging to Comm-POSG, comprising up to 20 agents. The hyperparameters are listed in the appendix. With regard to CRAs, three models, namely LSTM, CommNet (Sukhbaatar et al., 2016), and IC3Net (Singh et al., 2019), were compared; IC3Net is an improvement over CommNet, which is a continuous communication method based on LSTM. The empirical mean of the social welfare function J was used as the measure for comparison. Actor-critic and value functions were added to the baselines in all the frameworks. We performed 2,000 epochs of experiments with 500 steps each, using 120 CPUs; the experiment was conducted over a period of three days.

PP and TJ: Table 1 lists the experimental results for each task. We can see that IC3Net with TSP outperforms the one with SP on all tasks. Fig. 2(a) shows that TSP elicits truthful information, (b) confirms that the social welfare of TSP exceeds that of the SPs, and (c) confirms that the average of the imaginary part is zero. From these experimental results, we conclude that TSP successfully realizes truthful learning and the state of the art in tasks comprising 3 to 20 agents.

StarCraft: Table 2 (columns: StarCraft task, CommNet, IC3Net, IC3Net w/ TSP) shows a comparison of social welfare in the exploration and combat tasks in StarCraft. (i) In the search task, 10 Medics find one enemy medic on a 50×50-cell grid; as in PP, it is a competitive task where the reward is divided by the number of medics that find the enemy. (ii) In the combat task, 10 Marines fight 3 Zealots on a 50×50-cell grid. The maximum episode length is set to 60 steps. We find that IC3Net, with its information-hiding gate, performs less well than CommNet, but performs better when trained with TSP, due to the truthful mechanism.

6 CONCLUDING REMARKS

Our objective was to construct a general framework for emergent unbiased state representations without any supervision. First, we proposed TSP and theoretically clarified its convergence to the global optimum in the general case. Second, we performed experiments involving up to 20 agents and achieved state-of-the-art performance on all the tasks. Herein, we summarize the advantages of our framework. 1. Strong convergence: TSP guarantees convergence to the global optimum theoretically and experimentally; self-play cannot provide such a guarantee. Besides, the imaginary reward iΔL satisfies the baseline condition. 2. Simple solution: The only modification required for TSP is that iΔL be added to the baseline, making it easy to implement in deep learning software libraries such as TensorFlow and PyTorch. 3. Broad coverage: TSP is a general framework, like self-play. Since TSP is independent of both agents and environments and supports both discrete and continuous control, it can be applied to a wide range of domains. No supervision is required. To the best of our knowledge, introducing mechanism design to MARL is a new direction for the deep-learning community. In future work, we will consider fairness (Sen, 1984) as the social choice function. We expect that many other frameworks will be developed by using the methodology employed in this study.

A THEORY

“. . . a human brain can learn such high-level abstractions if guided by the messages produced by other humans, which act as hints or indirect supervision for these high-level abstractions; and, language and the recombination and optimization of mental concepts provide an efficient evolutionary recombination operator, and this gives rise to rapid search in the space of communicable ideas that help humans build up better high-level internal representations of their world.” (Bengio, 2012)

Proposition A.1. If [C] is an unbiased truthful mechanism of G, then self-play with G[C] converges to Ĵ(G).

Proof. Since [C] is unbiased, E_{π_θ}[C_i] = 0 holds. Hence, for an arbitrary baseline b, b + C_i also satisfies the baseline condition.
Therefore, from the policy gradient theorem (Sutton & Barto, 1998), self-play converges to J*(G[C]). Further, since [C] is an unbiased truthful mechanism, J*(G[C]) = Ĵ(G[C]) = Ĵ(G) holds from Proposition 3.1.

A general loss function ℓ_ψ : A × Z → R_∞, for any strictly concave nonnegative function ψ : P(A) → R_∞, is defined as follows:

$$\ell_\psi := D_\psi\big(\pi_\theta(a_j \mid z_i) \,\|\, \delta(a_j \mid \cdot)\big), \quad (16)$$

where δ(a_j | ·) is a point-wise probability that satisfies $\lim_{\epsilon\to 0} \int_{B(\epsilon;\tilde a_j)} d\delta(a_j \mid \tilde a_j) = 1$ for an open ball B(ε; ã_j), and D_ψ is the Bregman divergence (Bregman, 1967), defined by

$$D_\psi(p \,\|\, q) := \psi(p) - \psi(q) + \int \nabla\psi(q)\, d(p - q). \quad (17)$$

Sending a truthful signal is the best response for minimizing the expectation of the general loss function. For example, the KL divergence is a special case of the Bregman divergence with ψ = −H[·], and the following equation holds:

$$\mathbb{E}_{\pi_\theta}[\ell_\psi] = \int D_\psi\big(\pi_\theta(a_j \mid z_i) \,\|\, \delta(a_j \mid \cdot)\big)\, d\pi_\theta(a_j \mid h_i) = D_{\mathrm{KL}}\big(\pi_\theta(a_i \mid z_i) \,\|\, \pi_\theta(a_i \mid h_i)\big) \geq 0. \quad (18)$$

The equality holds if and only if z_i = h_i. Notice that π_θ(a_i|h_i) = π_θ(a_j|h_i). Now, we generalize the zero-one mechanism to arbitrary signaling games.

Proposition A.2 (Bregman mechanism). For any signaling game, if sup_φ ‖dV^{π_θ}/dβI_ψ‖ < 1, then [iI_ψ] is an unbiased truthful mechanism of G ∈ G for the general cost function

$$I_\psi(a \mid z) := \Delta L_\psi(a \mid z)\,\mathbf{1} = \begin{pmatrix} n-1 & -1 & \cdots & -1 \\ -1 & n-1 & \cdots & -1 \\ \vdots & \vdots & \ddots & \vdots \\ -1 & -1 & \cdots & n-1 \end{pmatrix} \begin{pmatrix} 0 & \ell_\psi(a_2\mid z_1) & \cdots & \ell_\psi(a_n\mid z_1) \\ \ell_\psi(a_1\mid z_2) & 0 & \cdots & \ell_\psi(a_n\mid z_2) \\ \vdots & \vdots & \ddots & \vdots \\ \ell_\psi(a_1\mid z_n) & \ell_\psi(a_2\mid z_n) & \cdots & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}, \quad (19)$$

where Δ := nE − 11ᵀ is a graph Laplacian.

Proof. The problem we deal with is designing a scoring rule for information that satisfies two properties: 1) regularity: the score should be finite if the information is sufficiently correct; and 2) properness: the score should be maximized if and only if the information is true. A well-known example of a scoring rule is mutual information (MI), which compares a pair of probability distributions p and q in the logarithmic domain. However, MI cannot be applied to continuous actions. Instead, we introduce a more general tool, the regular proper scoring rule, as defined below.

Definition A.1 (Gneiting & Raftery, 2007). For a set Ω, F(· ∥ ·) : P(Ω) × Ω → R_∞ is a regular proper scoring rule iff there exists a strictly concave, real-valued function f on P(Ω) such that

$$F(p \,\|\, x) = f(p) - \int_\Omega f^*(p, \omega)\, dp(\omega) + f^*(p, x) \quad (20)$$

for p ∈ P(Ω) and x ∈ Ω, where f* is a subgradient of f that satisfies

$$f(q) \geq f(p) + \int_\Omega f^*(p, \omega)\, d(q - p)(\omega) \quad (21)$$

for q ∈ P(Ω). We also define F for q ∈ P(Ω) as F(p ∥ q) := ∫_Ω F(p ∥ x) dq(x), and denote the set of regular proper scoring rules by B. For F, F_1, F_2 ∈ B, the following properties hold (Gneiting & Raftery, 2007): 1. Strict concavity: if q ≠ p, then F(q ∥ q) > F(p ∥ q). 2. F_1(p ∥ q) + aF_2(p ∥ q) + f(q) ∈ B, where a > 0 and f do not depend on p. 3. −D_ψ ∈ B, where D_ψ is the Bregman divergence.

Lemma A.1. For any F ∈ B and the function ℓ_F defined below, if sup_φ ‖dV^{π_θ}/dβℓ_ψ‖ < 1, then [iℓ_F] is a truthful mechanism of G:

$$\ell_F(a_j \mid z_i) := \begin{cases} -F(\pi_\theta(a_j \mid z_i) \,\|\, a_j) & (i \neq j) \\ 0 & (i = j) \end{cases}. \quad (22)$$

Proof. We prove that the surrogate objective of G[iℓ_F], V^{π_θ}_β := V^{π_θ} − βℓ_F, is strictly concave, and that if ∇_φ V^{π_θ}_β = 0, then z_i = h_i with probability 1. We denote by φ̂ the truthful parameter for which σ_{φ̂}(h_i|h_i) = 1. The policy gradient with respect to φ is

$$\nabla_\varphi V^{\pi_\theta}_\beta\, dq_\varphi = \nabla_\varphi V^{\pi_\theta}\, dq_\varphi + \beta \nabla_\varphi \int F(\pi_\theta(a_j \mid z_i) \,\|\, a_j)\, d\pi_\theta\, dq_\varphi = \nabla_\varphi V^{\pi_\theta}\, dq_\varphi + \beta \nabla_\varphi F(\pi_\theta(a_j \mid z_i) \,\|\, \pi_\theta(a_j \mid h_i))\, dq_\varphi. \quad (23)$$

First, we consider the local optima, i.e., ∇_φ V^{π_θ} dq_φ = 0 and φ ≠ φ̂.
For the Gâteaux differential with respect to $\vec\varphi := (\hat\varphi - \varphi)^{\mathsf T}/\|\hat\varphi - \varphi\|$, we have $\vec\varphi\,\nabla V^{\pi_\theta}_\beta = \beta\,\vec\varphi\,\nabla \ell_\psi > 0$ from the strict concavity. At the global optimum, i.e., ∇_φ V^{π_θ} dq_φ = 0 and φ = φ̂, we have ∇_φ V^{π_θ}_β = β∇ℓ_ψ = 0. Next, if $\vec\varphi\,\nabla V^{\pi_\theta} < 0$, then since sup_φ ‖dV^{π_θ}/dℓ_F‖ < β, the following holds for φ ≠ φ̂:

$$\vec\varphi\,\nabla V^{\pi_\theta}_\beta\, dq_\varphi = \vec\varphi\,(\nabla V^{\pi_\theta} + \beta\nabla\ell_F)\, dq_\varphi > \vec\varphi\,\nabla V^{\pi_\theta}\, dq_\varphi + \sup_\varphi\left\|\frac{dV}{d\ell_F}\right\| \vec\varphi\,\nabla\ell_F\, dq_\varphi \geq \vec\varphi\,\nabla V^{\pi_\theta}\, dq_\varphi - \inf_\varphi(\vec\varphi\,\nabla V^{\pi_\theta})\, dq_\varphi \geq 0. \quad (24)$$

Hence $\vec\varphi\,\nabla V^{\pi_\theta}_\beta \geq 0$ holds, with equality if and only if φ = φ̂. Therefore, V^{π_θ}_β is strictly concave, and the following holds for α_k ∈ o(1/k):

$$\lim_{K\to\infty} \sum_{k=1}^{K} \frac{\nabla_\varphi V^{\pi_\theta}_\beta(\varphi_k)}{\|\nabla_\varphi V^{\pi_\theta}_\beta(\varphi_k)\|}\, \alpha_k = \hat\varphi. \quad \text{(a.s.)} \quad (25)$$

I_ψ is defined for both discrete and continuous actions. Table 3 lists examples of scoring rules ℓ_ψ for arbitrary actions. In particular, minimizing ℓ_ψ for continuous actions is known as probability density estimation (Gneiting & Raftery, 2007). −I_ψ is a proper scoring rule (Gneiting & Raftery, 2007) since it is a linear combination of Bregman divergences. Hence, from Lemma A.1, [iI_ψ] is truthful. Besides, since 1ᵀI_ψ = 1ᵀΔL_φ1 = 0, [iI_ψ] is unbiased.

Theorem A.1 (global optimality). For any G in Comm-POSG, TSP converges to the global optimum Ĵ(G) if the following convergence condition is met:

$$\sup_\varphi \left|\frac{\partial \operatorname{Re} V^{\pi_\theta}}{\partial \operatorname{Im} V^{\pi_\theta}}\right| < \beta, \quad (26)$$

where β < ∞ is a bounded mass parameter.

Proof. From Proposition A.2, [iI_ψ] is unbiased truthful. Therefore, from Proposition A.1, convergence to the global optimum is achieved.

A.1 SELF-PLAY CONVERGES TO LOCAL OPTIMA

Theorem A.2. If G ∈ G is non-truthful, self-play does not converge to the global optimum Ĵ(G).

Proof. Example A.1 (One-bit two-way communication game). Fig. 4 shows an example of a non-cooperative partially observable environment with a 1-bit state. The reward structure is presented in Table 4. The sum of rewards is maximized when both agents report the correct state to the environment:

$$\sum_{i=1}^{n} R^n_i(s, a) = \begin{cases} 2c & (a_1 = a_2 = s) \\ 0 & \text{(otherwise)} \end{cases}.$$

Hence, the objective varies in the range 0 ≤ J(G_{2com}) ≤ 2c.

Proposition A.3. If c < 1, then J*(G_{2com}) < Ĵ(G_{2com}) holds.

Proof. Since p(s) = 1/2, we can assume s = 1 without loss of generality. Besides, we discuss only Agent 1 because of symmetry. From Z = {0, 1}, Agent 1's messaging policy σ_1 sends the correct information x or the false information 1 − x when it knows x. Hence, we can represent the policy using a parameter φ ∈ [0, 1] as follows:

$$\sigma_\varphi(z \mid x) = \begin{cases} \varphi^z (1-\varphi)^{1-z} & (x = 1) \\ 1/2 & (x = \circ) \end{cases}. \quad (27)$$

Differentiating σ_φ with respect to φ gives:

$$\frac{d\sigma_\varphi}{d\varphi} = \begin{cases} 2z - 1 & (x = 1) \\ 0 & (x = \circ) \end{cases}. \quad (28)$$

Therefore, from Eq. (??), if ⟨π*, ·⟩ ∈ W*(G_{2com}), then the policy gradient for φ is as follows:

$$\frac{d}{d\varphi} U(\pi^*, \varphi) = \frac{d}{d\varphi} \int V^*_1\, dq_1\, d\sigma_1\, d\mathcal{P}\, dp = \int V^*_1\, dq_1\, \frac{d\sigma_1}{d\varphi}\, d\mathcal{P}\, dp = \lambda \int V^*_1 (2z_1 - 1)\, dz_1 \Big|_{s=x_1=1} = \lambda \int (2z_1 - 1) R_1\, d\pi^*_1\, d\pi^*_2\, dq_2\, dz_1\, d\mathcal{P} \Big|_{s=x_1=1} = \lambda(1-\lambda) \int (2z_1 - 1) R_1\, d\pi^*_1\, d\pi^*_2\, dq_2\, dz_1 \Big|_{s=x_1=1,\, x_2=\circ} = \lambda(1-\lambda) \sum_{z_1=0}^{1} (2z_1 - 1) R_1(s, \langle x_1, z_1\rangle) \Big|_{s=x_1=1} = \lambda(1-\lambda)\big[R_1(1, \langle 1, 1\rangle) - R_1(1, \langle 1, 0\rangle)\big] = \lambda(1-\lambda)(c - 1) < 0. \quad (29)$$

As the policy gradient is negative under the assumption c ∈ (0, 1), φ* = 0 attains the Nash equilibrium given φ ≥ 0, resulting in always sending false information to the opponent:

$$\sigma_{\varphi^*}(z \mid x) = \begin{cases} 1 - z & (x = 1) \\ 1/2 & (x = \circ) \end{cases}. \quad (30)$$

Let J⟨x_1, x_2⟩ := J|_{x=⟨x_1,x_2⟩}. We can compute J* and Ĵ as follows.
$$J^* = \int J^*\langle x_1, x_2\rangle\, d\mathcal{P}^2\, dp = J^*\langle 1,1\rangle \lambda^2 + J^*\langle 1,\circ\rangle \cdot 2\lambda(1-\lambda) + J^*\langle \circ,\circ\rangle (1-\lambda)^2 = 2c\lambda^2 + 0 + \frac{2c}{4}(1-\lambda)^2 = 2c\left[\lambda^2 + \frac{1}{4}(1-\lambda)^2\right], \quad (31)$$

and

$$\hat J = \int \hat J\langle x_1, x_2\rangle\, d\mathcal{P}^2\, dp = \hat J\langle 1,1\rangle \lambda^2 + \hat J\langle 1,\circ\rangle \cdot 2\lambda(1-\lambda) + \hat J\langle \circ,\circ\rangle (1-\lambda)^2 = 2c\lambda^2 + 2c \cdot 2\lambda(1-\lambda) + \frac{2c}{4}(1-\lambda)^2 = 2c\left[\lambda^2 + 2\lambda(1-\lambda) + \frac{1}{4}(1-\lambda)^2\right], \quad (32)$$

respectively. Therefore, Ĵ − J* = 4cλ(1−λ) > 0, and J* < Ĵ holds. Thus G = G_{2com} is a counterexample for which global optimality does not hold, completing the proof via Proposition A.1.

A.2 ZERO-ONE MECHANISM SOLVES G_{2com}

Proposition A.4 (zero-one mechanism). Let ℓ : A × Z → {0, 1} be the zero-one loss between an action and a message, ℓ(a_i|z_j) := a_j(1 − z_i) + (1 − a_i)z_i, and

$$I(a \mid z) := \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} 0 & \ell(a_2 \mid z_1) \\ \ell(a_1 \mid z_2) & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix}. \quad (33)$$

If β > (1 − c)(1 − λ)/λ, then [iI] is an unbiased truthful mechanism of G_{2com}, and self-play with G_{2com}[iI] converges to the global optimum Ĵ(G_{2com}) = 2c[1 − (3/4)(1 − λ)²].

Proof. The following equation holds:

$$\frac{d}{d\varphi} V^*_{\beta,1}\, dq_1 = \frac{d}{d\varphi} \int (V^*_1 - \beta I_1\, d\pi^*_2\, dq_2)\, dq_1 = -\lambda(1-\lambda)(1-c) - \beta \int I_1\, d\pi^*_2\, dq_1\, dq_2\, \frac{d\sigma_1}{d\varphi}\, d\sigma_2\, d\mathcal{P}^2\, dp = -\lambda(1-\lambda)(1-c) - \beta\lambda \int I_1\, d\pi^*_2\, dq_2 (2z_1-1)\, dz_1\, d\sigma_2\, d\mathcal{P} \Big|_{s=x_1=1} = -\lambda(1-\lambda)(1-c) - \beta\lambda^2 \int (2z_1-1)\, \ell(a_2 \mid z_1)\, d\pi^*_2\, dq_2\, dz_1 \Big|_{s=x_1=x_2=1} = -\lambda(1-\lambda)(1-c) - \beta\lambda^2 \sum_{z_1=0}^{1} (2z_1-1)\, \ell(1 \mid z_1) = -\lambda(1-\lambda)(1-c) - \beta\lambda^2 (\ell(1\mid 1) - \ell(1\mid 0)) = -\lambda(1-\lambda)(1-c) + \beta\lambda^2 = \lambda^2\left[\beta - (1-c)\frac{1-\lambda}{\lambda}\right]. \quad (34)$$

Therefore, if β > (1 − c)(1 − λ)/λ, then φ* = 1 holds, and J* = Ĵ holds. The value of Ĵ is clear from the proof of Lemma A.1. [iI] is also known as the peer prediction method (Miller et al., 2005), which is inspired by peer review. This process is illustrated in Fig. 4 (left), and the state-action value functions are listed in Fig. 4 (right).

B COMPLEXITY ANALYSIS

Although the computational complexity of βI_ψ per iteration is O(n³) if one forms the product of the n-order square matrices, it can be reduced to O(n²) by evaluating I_ψ = nL_ψ1 − (1ᵀL_ψ1)1 directly from row sums and the total sum. The spatial complexity is O(n²), and the sample size is O(n).
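As a sanity check of the complexity reduction in Appendix B, the following sketch verifies that the O(n²) row-sum form matches the naive matrix-product evaluation of I_ψ = ΔL_ψ1; the random scores are illustrative.

import numpy as np

n = 5
rng = np.random.default_rng(0)
L = rng.normal(size=(n, n))                   # stand-in for the score matrix L_psi
np.fill_diagonal(L, 0.0)
one = np.ones(n)

Delta = n * np.eye(n) - np.outer(one, one)    # graph Laplacian of Eq. 19
naive = Delta @ L @ one                       # O(n^3) if Delta @ L is formed first
fast = n * (L @ one) - (one @ L @ one) * one  # O(n^2): row sums and the total sum
print(np.allclose(naive, fast))               # True

The identity follows from (nE − 11ᵀ)L1 = nL1 − 1(1ᵀL1), so only L's row sums and its grand total are ever needed.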
C EXPERIMENTAL ENVIRONMENTS

In the experiments, we used partially observable environments whose settings are the same as those adopted in existing studies (Sukhbaatar et al., 2016; Singh et al., 2019). Fig. 1 shows the environments.

C.1 PREDATOR PREY (PP) Predator-Prey (PP) is a widely used benchmark environment in MARL (Barrett et al., 2011; Sukhbaatar et al., 2016; Singh et al., 2019). Multiple predators search for a prey placed at a randomly initialized location in a grid world, with a limited field of view. The field of view of a predator is limited so that only a few cells can be seen. Therefore, for a predator to reach the prey faster, it is necessary to inform the other predators of the prey's location and of the locations already searched. Thus, the prey's location is conveyed through communication among the predators, but predators can also send false messages to keep other predators away from the prey. In this experiment, we use PP-3 and PP-5, which have two difficulty levels. In PP-3, the range of the visual field is set to 0 in a 5×5 environment. In PP-5, the field of view is set to 1 in a 10×10 environment. The numbers represent the number of agents.

C.2 TRAFFIC JUNCTION (TJ) Traffic Junction (TJ) is a simplified road-intersection task. n agents with a limited field of view inform the other agents of their locations to avoid collisions. In this experiment, three difficulty levels of TJ are used. TJ-5 solves the task of crossing two one-way streets. In TJ-10, there are two lanes, and each vehicle can not only go straight but also turn left or right. In TJ-20, the two-lane road comprises two parallel roads, for a total of four intersections. Each number corresponds to n. In the initial state, each vehicle is given a starting point and a destination and is trained to follow the determined path as fast as possible while avoiding collisions. An agent controls each vehicle and chooses between two actions, i.e., accelerate and brake, at each time step. It is crucial to keep vehicles from approaching each other to prevent collisions while making good use of multi-agent communication, which plays a role similar to blinkers and brake lights.

C.3 STARCRAFT: BROOD WAR (SC) Explore: To complete the exploration task, an agent must come within a specific range (field of view) of the enemy unit. Once the agent is within range of the enemy unit, it does not take any further action. The reward structure is the same as in the PP task, the only difference being that instead of having to occupy the same cell, the agent must reach the enemy unit's range of vision to get a non-negative reward. Medic units, which do not attack enemy units, are used to prevent combat from interfering with the mission objective. The observation of each agent consists of the agent's (absolute x, absolute y) and the enemy's (relative x, relative y, visible), where visible is a flag indicating whether the enemy is within visual range. If the enemy is not in exploration range, relative x and relative y are zero. The agent chooses from nine actions: the eight basic directions and one stay action.

Combat: Agents observe their own state (absolute x, absolute y, health point + shield, weapon cooldown, previous action) and, for each enemy, (relative x, relative y, visible, health point + shield, weapon cooldown). Relative x and y are only observed when the enemy is visible, corresponding to the visible flag. All observations are normalized to lie in (0, 1). The agent must choose from 9 + M actions: the 9 basic actions and one attack action for each of the M enemy agents. An attack action is only effective if the enemy is within the agent's view; otherwise it is a no-op. Our combat setup is much more difficult, restrictive, new, and different than in previous StarCraft work, and therefore not directly comparable. In the combat task, we give a negative reward r_time = −0.01 at each time step to discourage delaying the detection of the enemy team. When an agent is not participating in a battle, at each time step it is rewarded based on (i) its normalized health at the current and previous time steps, and (ii) the normalized health, at the previous and current time steps, of the enemies it has attacked so far. The terminal reward for each agent consists of (i) the total remaining enemy health × 3 as a negative reward, and (ii) 5 × m plus the total remaining team health × 3 as a positive reward if the agent's team wins; when the team loses, the health × 3 of all living enemies is given as a negative reward. In this task, the group of enemies is randomly initialized in one half of the map, with the agents in the other half, making this communication-demanding task even more difficult.

D HYPERPARAMETERS
1. What is the focus and contribution of the paper on truthful self-play? 2. What are the strengths of the proposed framework, particularly in terms of its generality and convergence guarantees? 3. What are the weaknesses of the paper regarding its explanations, innovation, experiments, and clarity? 4. Do you have any concerns or suggestions regarding the improvements needed for the experiment section? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper presents a general framework named truthful self-play (TSP) which is suitable for communicative partially observable stochastic games, and analytically demonstrates convergence to the global optimum. Strengths And Weaknesses Strengths The proposed framework TSP is general and guarantees convergence to the global optimum theoretically and experimentally. Weaknesses This paper improves the self-play problem by introducing the 'truthful' reward. However, the paper lacks a full explanation of the meaning of 'truthful'; in other words, we do not find sufficient information about 'truthful' in the method section, and using this reward as a reverse-game-theoretic mechanism is not convincingly justified. The contributions of this paper are too few and its innovation is limited. The experiments need improvement. In Table 1, the effect on PP-5 and TJ-5 is not significantly improved; compared with IC3Net+SP, the values including standard deviations overlap (more experiment iterations are needed). In TJ-20, there is no reasonable explanation for the reduction of the variance of the effect. In Figure 2, the description of (a) is unclear, and the existence of (c) is unnecessary. In Table 2, whether CommNet+TSP's performance can be further improved should be verified. Spelling problems and unclear statements: Introduction para. 3, pertially -> partially. The input of the function q_ϕ is inconsistent: $\hat{h}_{ti}$ in Section 3.2 and $\hat{h}_t$ in Section 3.3. Clarity, Quality, Novelty And Reproducibility This paper is original and reproducible, but the quality and clarity need to be improved.
ICLR
Title Truthful Self-Play Abstract We present a general framework for evolutionary learning to emergent unbiased state representation without any supervision. Evolutionary frameworks such as self-play converge to bad local optima in case of multi-agent reinforcement learning in non-cooperative partially observable environments with communication due to information asymmetry. Our proposed framework is a simple modification of selfplay inspired by mechanism design, also known as reverse game theory, to elicit truthful signals and make the agents cooperative. The key idea is to add imaginary rewards using the peer prediction method, i.e., a mechanism for evaluating the validity of information exchanged between agents in a decentralized environment. Numerical experiments with predator prey, traffic junction and StarCraft tasks demonstrate that the state-of-the-art performance of our framework. N/A We present a general framework for evolutionary learning to emergent unbiased state representation without any supervision. Evolutionary frameworks such as self-play converge to bad local optima in case of multi-agent reinforcement learning in non-cooperative partially observable environments with communication due to information asymmetry. Our proposed framework is a simple modification of selfplay inspired by mechanism design, also known as reverse game theory, to elicit truthful signals and make the agents cooperative. The key idea is to add imaginary rewards using the peer prediction method, i.e., a mechanism for evaluating the validity of information exchanged between agents in a decentralized environment. Numerical experiments with predator prey, traffic junction and StarCraft tasks demonstrate that the state-of-the-art performance of our framework. 1 INTRODUCTION Evolving culture prevents deep neural networks from falling into bad local optima (Bengio, 2012). Self-play (Samuel, 1967; Tesauro, 1995) has not only demonstrated the ability to abstract highdimensional state spaces as typified by AlphaGo (Silver et al., 2017), but also improved exploration coverage in partially observable environments. Communication (Sukhbaatar et al., 2016; Singh et al., 2019) exchanges their internal representations such as explored observation and hidden state in RNNs. Evolutionary learning is expected to be a general framework for creating superhuman AIs as such learning can generate a high-level abstract representation without any bias in supervision. However, when applying evolutionary learning to a partially observable environment with noncooperative agents, improper bias is injected into the state representation. This bias originates from the environment. A partially observable environment with non-cooperative agents induces actions that disable an agent from honestly sharing the correct internal state resulting in the agent taking actions such as concealing information and deceiving other agents at equilibrium (Singh et al., 2019). The problem arises because the agent cannot fully observe the state of the environment, and thus, it does not have sufficient knowledge to verify the information provided by other agents. Furthermore, neural networks are vulnerable to adversarial examples (Szegedy et al., 2014) and are likely to induce erroneous behavior with small perturbations. Many discriminative models for information accuracy are available; these include GANs (Goodfellow et al., 2014; Radford et al., 2016) and curriculum learning (Lowe et al., 2020). 
However, these models assume that accurate samples can be obtained by supervision. Because of this assumption, it is impossible to apply these models to a partially observable environment, where the distribution is not stable. We generalize self-play to non-cooperative partially observable environments via mechanism design (Myerson, 1983; Miller et al., 2005), also known as reverse game theory. The key idea is to add imaginary rewards by using the peer prediction method (Miller et al., 2005), that is, a mechanism for evaluating the validity of information exchanged between agents in a decentralized environment, calculated based on the social influence of the signals. We formulate the non-cooperative partially observable environment as an extension of partially observable stochastic games (POSG) (Hansen et al., 2004) and introduce truthfulness (Vickrey, 1961), an indicator of the validity of the state representation. We show that the imaginary reward enables us to reflect the bias of the state representation in the gradient without oracles. As the first contribution, we propose truthful self-play (TSP) and analytically demonstrate convergence to the global optimum (Section 4). We derive the imaginary reward from the peer prediction method (Miller et al., 2005) and apply it to self-play. The mechanism affects the gradient at local optima, but not at the global optimum. The trick is to use the actions taken by the agents as feedback to verify the signal received from every other agent, instead of the true state, input, and intent, which the agents cannot fully observe. TSP only requires a modification of the baseline function of self-play; it drastically improves convergence to the global optimum in Comm-POSG. As the second contribution, based on the results of numerical experiments, we report that TSP achieves state-of-the-art performance on various multi-agent tasks comprising up to 20 agents (Section 5). Using predator prey (Barrett et al., 2011), traffic junction (Sukhbaatar et al., 2016; Singh et al., 2019), and StarCraft (Synnaeve et al., 2016) environments, which are typically used in Comm-POSG research, we compared the performance of TSP with current neural nets, including the state-of-the-art method, using LSTM, CommNet (Sukhbaatar et al., 2016), and IC3Net (Singh et al., 2019). We report that the model with IC3Net optimized by TSP performs best. This work is the first attempt to apply mechanism design to evolutionary learning. TSP is a general optimization algorithm whose convergence is theoretically guaranteed for arbitrary policies and environments. Since no supervision is required, TSP has a wide range of applications, not only to game AIs (Silver et al., 2017) but also to robots (Jaderberg et al., 2018), chatbots (Gupta et al., 2019; Chevalier et al., 2019), and autonomous cars (Tang, 2019) employed in multi-agent tasks. Notation: Vectors are columns. Let $\llbracket n \rrbracket := \{1, \ldots, n\}$. $\mathbb{R}$ is the set of real numbers. $\imath$ is the imaginary unit. $\mathrm{Re}\,u$ and $\mathrm{Im}\,u$ are the real and imaginary parts of a complex number $u$, respectively. $n$-tuples are written as boldface of the original variables, $\mathbf{a} := \langle a_1, \ldots, a_n\rangle$, and $\mathbf{a}_{-i}$ is the $(n-1)$-tuple obtained by removing the $i$-th entry from $\mathbf{a}$. Let $\mathbf{1} := (1, \ldots, 1)^\mathsf{T}$. Matrices are shown in uppercase letters, $L := (\ell_{ij})$. $E$ is the unit matrix. The set of probability distributions with support $\mathcal{X}$ is written $\mathcal{P}(\mathcal{X})$.
2 RELATED WORK Neural communication has gained attention in the field of multi-agent reinforcement learning (MARL) for both discrete (Foerster et al., 2016) and continuous (Sukhbaatar et al., 2016; Singh et al., 2019) signals. Such networks are trained via self-play to exchange the internal state of the environment stored in the working memory of recurrent neural networks (RNNs), in order to learn the right policy in partially observable environments. The term self-play was coined by the game AI community in the latter half of the 20th century. Samuel (1967) introduced self-play as a framework for sharing a state-action value between two opposing agents to efficiently search the state space in Checkers. TD-Gammon (Tesauro, 1995) introduced self-play as a framework to learn TD(λ) (Sutton & Barto, 1998) and achieved professional-grade levels in backgammon. AlphaGo (Silver et al., 2017) defeated the Go champion by combining supervised learning on professional game records with self-play. AlphaZero (Silver et al., 2018) successfully learnt beyond its own performance entirely based on self-play. All these studies explain that eliminating the bias of human knowledge in supervision is the advantage of self-play. Self-play is also known as evolutionary learning (Bengio, 2012) in the deep learning community, mainly as an approach to emergent representations without supervision (Bansal et al., 2018; Balduzzi et al., 2019). Bansal et al. (2018) show that competitive environments contribute to emergent diversity and complexity. Rich generative models such as GANs (Goodfellow et al., 2014; Radford et al., 2016) are frameworks for acquiring an environmental model by employing competitive settings. RNNs such as world models (Ha & Schmidhuber, 2018; Eslami et al., 2018) are capable of broader exploration in partially observable environments and of generating symbols and languages (Bengio, 2017; Gupta et al., 2019; Chevalier et al., 2019). The difference between evolutionary learning and supervised learning is the absence of human knowledge and oracles. Several works have formalized settings in which agents exchange environmental information as formal classes of games, such as Dec-POMDP-Com (Goldman & Zilberstein, 2003) and COM-MTDP (Pynadath & Tambe, 2002), and several frameworks have been proposed to solve these problems. However, the limitation of these frameworks is that they assume a common reward. As there is as yet no formal definition of non-cooperative communication games, we formalize such games as Comm-POSG, a superset of POSGs (Hansen et al., 2004), a more general class of multi-agent games that includes non-cooperative cases. To the best of our knowledge, no studies have introduced truthful mechanisms into the field of MARL, but it may be possible to do so by using agents that can learn flexibly, such as neural networks. A typical truthful mechanism is the VCG mechanism (Vickrey, 1961), a generalization of the pivot method used in auction theory, in which the report that must satisfy truthfulness is a valuation (or a value function, if interpreted from an RL perspective). In this study, the scope of application is different because the belief states of the environment are subject to reporting.
Therefore, we instead introduce a peer prediction method (Miller et al., 2005) that guarantees truthfulness with respect to reporting beliefs about arbitrary probability distributions, using proper scoring rules (Gneiting & Raftery, 2007). 3 PROBLEM DEFINITION 3.1 COMM-POSG A communicative partially-observable stochastic game (Comm-POSG) is a class of non-cooperative Bayesian games in which no agent fully observes the environment but the agents interact with each other. We define Comm-POSG as an extension of POSG (Hansen et al., 2004) with a message protocol. Definition 3.1 (Hansen et al., 2004). A POSG $\langle n, T, \mathcal{S}, \mathcal{A}, \mathcal{X}, \mathcal{T}, \mathcal{P}, \mathcal{R}\rangle$ is a class for multi-agent decision making under uncertainty in which the state evolves over time $1 \le t \le T$, where
• $n$ is the number of agents,
• $T$ is the horizon, i.e., the episode length,
• $\mathcal{S}$ is a set of discrete/continuous states $s_t \in \mathcal{S}$ with an initial probability distribution $p(s_0)$,
• $\mathcal{A}$ is a set of discrete/continuous actions $a_{ti} \in \mathcal{A}$,
• $\mathcal{X}$ is a set of discrete/continuous observations $x_{ti} \in \mathcal{X}$,
• $\mathcal{T} \in \mathcal{P}(\mathcal{S} \times \mathcal{A} \times \mathcal{S})$ is the state transition probability,
• $\mathcal{P} \in \mathcal{P}(\mathcal{S} \times \mathcal{X}^n)$ is the observation probability, and
• $\mathcal{R} : \mathcal{S} \times \mathcal{A}^n \to \mathbb{R}^n$ is a reward function that outputs an $n$-dimensional vector.
In a Comm-POSG, every agent further follows a message protocol $\mathcal{Z}^{n\times n}$, where $\mathcal{Z}$ is the discrete/continuous signal space. The complete information exchanged among the agents at time $t$ is $Z_t := (z_{tij})_{i,j \in \llbracket n \rrbracket} \in \mathcal{Z}^{n\times n}$, a signal matrix whose $(i,j)$-th entry $z_{tij}$ represents the signal from Agent $i$ to Agent $j$ at time $t$. The $i$-th diagonal entry of $Z_t$, $h_{ti} := z_{tii}$, represents the pre-state, the internal state of the $i$-th agent before receiving the signals from the others. A game in Comm-POSG is denoted $G := \langle n, T, \mathcal{S}, \mathcal{A}, \mathcal{X}, \mathcal{T}, \mathcal{P}, \mathcal{R}, \mathcal{Z}\rangle$. The objective of Comm-POSG is social welfare (Arrow, 1963), defined by
$$J := \sum_{i=1}^{n} V^{\pi_i}; \qquad V^{\pi_i} := \mathbb{E}_{\pi_i}\!\left[\sum_{t=1}^{T} \gamma^{t-1} r_{ti}\right], \qquad (1)$$
where $\gamma \in [0,1]$ is the discount rate, $r_{ti}$ is the reward, $\pi_i$ is a stochastic policy, and $V^{\pi_i}$ is the value function. In extensive-form games, including Comm-POSG, the information of the other agents cannot be observed, in addition to part of the information in the environment. In the optimization problem under these assumptions, a policy converges to a solution called the Bayesian Nash equilibrium (BNE) (Fudenberg, 1993). We denote the social welfare at the BNE by $J^*$, and the global maximum by $\hat J$. In general, $J^* \neq \hat J$ holds, which is closely related to information asymmetry. 3.2 COMMUNICATIVE RECURRENT AGENTS Since this paper proposes an optimization algorithm, we do not fix a concrete network architecture; instead, we propose an abstract structure that covers existing neural communication models (Sukhbaatar et al., 2016; Singh et al., 2019), namely communicative recurrent agents (CRAs) $\langle f_\phi, \sigma_\phi, q_\phi, \pi_\theta\rangle$, where
• $f_\phi(\hat h_{t-1,i}, x_{ti}) \mapsto \mathcal{Z}$ is a deep RNN for the high-dimensional input $x_{ti} \in \mathcal{X}$ and the previous post-state $\hat h_{t-1,i} \in \mathcal{Z}$, with parameter $\phi$ and initial state $\hat h_0 \in \mathcal{Z}$,
• $\sigma_\phi(z_{ti}|h_{ti})$ is a stochastic messaging policy for the pre-state $h_{ti} := f_\phi(\hat h_{t-1,i}, x_{ti})$,
• $q_\phi(\hat h_{ti}|\hat{\mathbf{z}}_{ti})$ is a stochastic model for the post-state $\hat h_{ti} \in \mathcal{Z}$ given the received messages $\hat{\mathbf{z}}_{ti} := Z_{t:i}^\mathsf{T} = (z_{t1i}, \ldots, z_{t,i-1,i}, h_{ti}, z_{t,i+1,i}, \ldots, z_{tni})^\mathsf{T}$, and
• $\pi_\theta(a_{ti}|\hat h_{ti})$ is the stochastic action policy with parameter $\theta$.
These agents are trained through self-play using on-policy learning such as REINFORCE (Williams, 1992). All $n$ agents share the same weights per episode, and the weights are updated based on the cumulative reward after the episode. A minimal sketch of this interface follows.
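For concreteness, here is a minimal sketch of the CRA interface $\langle f_\phi, \sigma_\phi, q_\phi, \pi_\theta\rangle$ in PyTorch. The class and method names (CRA, step, read), the Gaussian message head, and the GRU-based components are our own illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class CRA(nn.Module):
    """Communicative recurrent agent <f_phi, sigma_phi, q_phi, pi_theta> (Sec. 3.2).

    A hypothetical minimal instantiation: the signal space Z is R^d, and all
    four components are small networks with the roles stated in the text.
    """
    def __init__(self, obs_dim: int, d: int, n_actions: int):
        super().__init__()
        self.f = nn.GRUCell(obs_dim, d)      # f_phi: (post-state, observation) -> pre-state
        self.sigma = nn.Linear(d, 2 * d)     # sigma_phi: pre-state -> message distribution
        self.q = nn.GRUCell(d, d)            # q_phi: folds received messages into a post-state
        self.pi = nn.Linear(d, n_actions)    # pi_theta: post-state -> action logits

    def step(self, post_prev, x):
        h = self.f(x, post_prev)             # pre-state h_ti
        mu, log_std = self.sigma(h).chunk(2, dim=-1)
        z = mu + log_std.exp() * torch.randn_like(mu)  # stochastic message z_ti
        return h, z

    def read(self, h, received):
        post = h                             # start from own pre-state (diagonal of Z_t)
        for z in received:                   # messages z_tji received from the other agents
            post = self.q(z, post)
        return post, torch.distributions.Categorical(logits=self.pi(post))
```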
In addition to a recurrent agent's usual output of actions given a sequence of observations as input, a CRA has communication signals as input and output. CRAs estimate the current state of the environment and the agent's own current value using the post-state model, from the pre-state $h_{ti}$ in the hidden layer of the RNN and the signals $\hat{\mathbf{z}}_{ti,-i}$ received from the other agents. Hence, the veracity of the signals $z_{ti}$ is the point of contention. 3.3 TRUTHFULNESS In mechanism design, a truthful game (Vickrey, 1961) is a game in which all agents report honestly at the Bayesian Nash equilibrium. In Comm-POSGs, the truthfulness of the game is achieved if every sent signal equals the pre-state, $z_{tij} = h_{ti}$, i.e., all the agents share complete information. In that case, every agent has the same information $\hat{\mathbf{z}}_{ti} = \mathbf{h}_t := (h_{t1}, \ldots, h_{tn})^\mathsf{T}$ for all $i$ and the same post-state model probability distribution, and hence the mean cross entropy between the distributions below is minimized:
$$D_\phi(Z_t) := \frac{1}{n}\sum_{i=1}^{n} H\!\left[q_\phi(\hat h_{ti}|\hat{\mathbf{z}}_{ti})\right] + \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} D_{\mathrm{KL}}\!\left(q_\phi(\hat h_{ti}|\hat{\mathbf{z}}_{ti})\,\|\,q_\phi(\hat h_{tj}|\hat{\mathbf{z}}_{tj})\right). \qquad (2)$$
The first term represents the entropy of the knowledge each agent has about the environment, and the second the information asymmetry between the agents. $D_\phi$ is bounded below by the amount of true information the environment has, $H[p(s_t)]$. Since achieving truthfulness is essentially the same problem as minimizing $D_\phi$, it also maximizes $J^*$ simultaneously. Proposition 3.1 (global optimality). For any game $G$ in Comm-POSG, if $D_\phi(Z_t) = H[p(s_t)]$ for $0 \le t \le T$ and $\mathcal{R}_i$ is symmetric under any permutation of $i \in \llbracket n \rrbracket$, then $J^*(G) = \hat J(G)$ holds. Proof. Let $w := \langle\theta, \phi\rangle$ on a parameter space $\mathcal{W}(G)$, and let $\mathcal{W}^*(G)$ be the BNE. Since $J$ is obviously maximized if $\sigma_\phi$ is truthful, we prove that $\sigma_\phi$ must be truthful under the given condition. To this end, we show the following Pareto-optimality. Lemma 3.1. For any $G$ in Comm-POSG and given $w$, if $J(w) \ge J(w')$ holds for any $w' \in \mathcal{W}(G)$, then either $U(w) \ge U(w')$ or $D_{\mathrm{KL}}(p\,\|\,q_\phi) \le D_{\mathrm{KL}}(p\,\|\,q_{\phi'})$ holds, where
$$U(w) := \mathbb{E}_{q_\phi}[V^{\pi_\theta}] = \int_{s_t \in \mathcal{S},\, \hat{\mathbf{z}}_t \in \mathcal{Z}^n} V^{\pi_\theta}(s_t)\, \mathrm{d}q_\phi(s_t|h_{ti}, \mathbf{z}_{t,-i})\, \mathrm{d}\sigma_\phi(\hat{\mathbf{z}}_t). \qquad (3)$$
Proof. The first inequality, $U(w) \ge U(w')$, indicates that $w$ is on the BNE $\mathcal{W}^*(G)$ given $\phi$, and the second that the belief state $q_\phi$ is as close to the true state $p$ as possible. In a fully observable environment, by the value iteration theorem, there exists a solution $\pi^\star(s_t)$ w.r.t. $V^{\pi^\star}(s_t)$ for any $s_t \in \mathcal{S}$. We call $\pi^\star$ and $V^\star := V^{\pi^\star}$ the unbiased policy and value, respectively. Since the unbiased policy solves the objective as $J(w) = n\,\mathbb{E}_{p(s_t)}[V^{\pi_\theta}(s_t)]$ from Eq. (1), the goal intrinsically is to find a policy $\pi^*$, with $\langle\pi^*, \phi^*\rangle \in \mathcal{W}(G)$, as close as possible to $\pi^\star$, that maximizes $U(w)$. The policy $\pi^*$ can further be represented as a mixed policy composed of the unbiased policy $\pi^\star$ and a biased policy $\pi' \neq \pi^\star$ as follows:
$$\pi^*(a_{ti}|\mathbf{x}_{ti}, \phi) = \mathbb{E}_{q_\phi(s_{ti}|\mathbf{x}_{ti})}[\pi^\star(a_{ti}|s_{ti})] = q_\phi(s_{t0}|\mathbf{x}_{ti})\,\pi^\star(a_{ti}|s_{t0}) + (1 - q_\phi(s_{t0}|\mathbf{x}_{ti}))\,\pi'(a_{ti}|\phi, \mathbf{x}_{ti}), \qquad (4)$$
for observations $\mathbf{x}_{ti} \in \mathcal{X}^n$, where $s_{t0} \in \mathcal{S}$ is the true state. Hence
$$V^{\pi^*}(s_{t0}|\phi) = \mathbb{E}_{\mathcal{P}(\mathbf{x}_{ti}|s_{t0})}\!\left[q_\phi(s_{t0}|\mathbf{x}_{ti})V^\star(s_{t0}) + (1 - q_\phi(s_{t0}|\mathbf{x}_{ti}))V'(\mathbf{x}_{ti}|\phi)\right] = q_\phi(s_{t0})V^\star(s_{t0}) + (1 - q_\phi(s_{t0}))\bar V'(s_{t0}|\phi), \qquad (5)$$
where
$$V'(\mathbf{x}_{ti}|\phi) := \int_{\mathbf{s}_t \in \mathcal{S}^{n+1},\, \mathbf{a}_t \in \mathcal{A}^n} \mathcal{R}_i(s_{t0}, \mathbf{a}_t) \prod_{i=1}^{n} \mathrm{d}\pi'(a_{ti}|s_{ti})\, q_\phi(s_{ti}|\mathbf{x}_{ti}), \qquad (6)$$
and
$$\bar V'(s_{t0}|\phi) := \mathbb{E}_{\mathcal{P}(\mathbf{x}_{ti}|s_{t0})}\!\left[V'(\mathbf{x}_{ti}|\phi)\, \frac{1 - q_\phi(s_{t0}|\mathbf{x}_{ti})}{1 - q_\phi(s_{t0})}\right]. \qquad (7)$$
Thus, the error from the unbiased value function can be written as $V^\star(s_{t0}) - V^{\pi^*}(s_{t0}|\phi) = (1 - q_\phi(s_{t0}))(V^\star(s_{t0}) - \bar V'(s_{t0}|\phi))$, which is minimized if $q_\phi(s_{t0}) = 1$, as $V^\star(s_{t0}) > \bar V'(s_{t0}|\phi)$ by definition.
From Jensen's inequality,
$$\log \mathbb{E}_{p(s_{t0})}[q_\phi(s_{t0})] \ge \mathbb{E}_{p(s_{t0})}[\log q_\phi(s_{t0})] = -D_{\mathrm{KL}}(p\,\|\,q_\phi) - H[p]. \qquad (8)$$
The right-hand side of the inequality corresponds to the negative cross-entropy to be maximized. Therefore, as the second term $H[p]$ does not depend on $\phi$, the optimization is achieved by minimizing $D_{\mathrm{KL}}(p\,\|\,q_\phi)$. Suppose that $\hat J(G) = J(w) > J^*(G)$ for a non-truthful reporting policy s.t. $\sigma_\phi(h|h) < 1$. From Lemma 3.1, $q_\phi(s_t|h_{ti})$ for an internal state $h_{ti} = f(x_{ti})$ of Agent $i$ with an encoder $f$ minimizes $D_{\mathrm{KL}}(p\,\|\,q_\phi(s_t|h_{ti}))$. As $q_\phi(s_t|z_{ti}) \neq q_\phi(s_t|h_{ti})$ with $D_{\mathrm{KL}}(p\,\|\,q_\phi(s_t|z_{ti})) > D_{\mathrm{KL}}(p\,\|\,q_\phi(s_t|h_{ti}))$ would contradict the Pareto-optimality, $\sigma_\phi$ must be truthful. 4 PROPOSED FRAMEWORK An obvious way to achieve truthful learning is to add $D_\phi$ as a penalty term to the objective, but there are two obstacles to this approach. One is that the new regularization term also adds a bias to the social welfare $J$; the other is that $D_\phi$ contains the agent's internal state, the post-state $\hat h_{ti}$, so the exact quantity cannot be measured by the designer of the learner. If post-states are reported correctly, then pre-states should also be reported honestly, and thus truthfulness is achieved. Therefore, it must be assumed that the post-states cannot be observed during optimization. Our framework, truthful self-play (TSP), consists of two elements: the introduction of imaginary rewards, a general framework for unbiased regularization in Comm-POSG, and the introduction of the peer prediction method (Miller et al., 2005), a truthful mechanism to encourage honest reporting based solely on observable variables. In the following, each is described separately, and we clarify that the proposed framework converges to the global optimum in Comm-POSG. The whole procedure is shown in Algorithm 1. 4.1 IMAGINARY REWARD Imaginary rewards are virtual rewards passed from agent to agent; they have a different basis, $\imath$, than the rewards passed from the environment, with the characteristic that they sum to zero. Since most RL environments, including Comm-POSG, contain no entities other than the agents and the environment, a two-dimensional structure is sufficient to describe them comprehensively if we wish to distinguish the sender of a reward. For the social welfare of the system to remain real, the system must be designed so that the sum of the imaginary rewards, i.e., the imaginary part of the social welfare, is zero. In other words, the imaginary part is not observed macroscopically and affects only the relative expected rewards of the agents. The real and imaginary parts of the complex rewards are weighted by the mass parameter $\beta$ during training, which allows the weights of the network to maintain a real structure. The full imaginary reward is denoted $\imath Y = (\imath y_{ij})_{i,j \in \llbracket n \rrbracket}$ (note the use of $\imath$ for the imaginary unit and $i$ for indices), where $\imath y_{ij}$ is the imaginary reward passed from Agent $i$ to Agent $j$, and the complex reward for the whole game is $R^+ := R + \imath Y$, where $R$ is a diagonal matrix with the environmental reward $r_i$ as its $(i,i)$-th entry. We write $G[\imath Y]$ for the game into which this structure is introduced. In this case, the following proposition holds. Proposition 4.1. For any $G$ in Comm-POSG, if $G[\imath Y]$ is truthful and $R^+$ is a Hermitian matrix, then $J^*(G[\imath Y]) = \hat J(G)$ holds. Proof. Since $G[\imath Y]$ is truthful, $J^*(G[\imath Y]) = \hat J(G[\imath Y])$ holds from Proposition 3.1. Further, since $R^+$ is Hermitian, $\imath y_{ij} = -\imath y_{ji}$, and hence $\mathrm{Im}\,\hat J(G[\imath Y]) = 0$ holds; thus $\hat J(G[\imath Y]) = \hat J(G)$. This indicates that the BNE can be improved by introducing imaginary rewards: $J^*(G[\imath Y]) \ge J^*(G)$.
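To illustrate the zero-sum property used in this proof, the following numpy sketch (with arbitrary toy numbers of our own, not the paper's code) checks that an antisymmetric imaginary part makes $R^+$ Hermitian, so the imaginary welfare vanishes while per-agent incentives still differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
r = rng.uniform(size=n)             # toy environmental rewards
S = rng.normal(size=(n, n))
Y = S - S.T                         # antisymmetric: y_ij = -y_ji
R_plus = np.diag(r) + 1j * Y        # complex reward R+ = R + iY

assert np.allclose(R_plus, R_plus.conj().T)  # R+ is Hermitian
print(Y.sum())                      # 0.0: Im J = 0, social welfare stays real
print(Y.sum(axis=1))                # nonzero per agent: only relative incentives change
```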
Also, since $\sum_{i=1}^{n}\sum_{j=1}^{n} \imath y_{ij} = 0$ by the condition that $R^+$ is Hermitian, the imaginary rewards do not affect the social welfare of the system, which is a macroscopic objective, but only the expected rewards of each agent. The baseline in policy gradient (Williams, 1992) is an example of a function that does not affect the objective when its mean is zero. However, the baseline function is a quantity determined from the value function of a single agent, whereas the imaginary reward differs in that (1) it affects the value function of each agent and (2) it is a meaningful quantity only when $n \ge 2$ and is not observed when $n = 1$. 4.2 PEER PREDICTION MECHANISM The peer prediction mechanism (Miller et al., 2005) is derived from mechanism design using proper scoring rules (Gneiting & Raftery, 2007), which aim to encourage verifiers to report their beliefs honestly by assigning a score, serving as a reward, to their responses when predicting probabilistic events. These mechanisms assume at least two agents: a reporter and a verifier. A general scoring rule can be expressed as $\mathcal{F}(p_s\|s)$, where $p_s$ is the probability of occurrence reported by the verifier for the event $s$, and $\mathcal{F}(p_s\|s)$ is the score obtained if the reported event $s$ actually occurs. The scoring rule is proper if an honest declaration consistent with the verifier's beliefs maximizes the expected score, and strictly proper if it is the only report that maximizes the expected score. A representative strictly proper rule is the logarithmic scoring rule $\mathcal{F}(p_s\|s) = \log p_s$, for which the expected value for a 1-bit signal is the cross-entropy $p_s^* \log p_s + (1 - p_s^*)\log(1 - p_s)$ for belief $p_s^*$. One finds that $p_s = p_s^*$ is the only report that maximizes the score. Since the proper scoring rule assumes that events $s$ are observable, it is not applicable to problems such as partially observable environments where the true value is hidden. Miller et al. (2005), who first presented a peer prediction mechanism, assigned scores to the posteriors of the verifiers updated by the signal, rather than to the event itself. This concept is formulated with a model that assumes an event $s$ stochastically emits a signal $z$, and infers the type of $s$ from the signals of the reporters who receive it. The peer prediction mechanism (Miller et al., 2005) is denoted $\mathcal{F}(p(s|z)\|s)$, under the assumptions that (1) the type of event $s$ and the signal $z$ emitted by each type follow a given prior, (2) the priors are common knowledge among the verifiers, and (3) the posterior is updated according to the reports. We apply the mechanism to RL, i.e., to the problem of predicting the agent's optimal behavior $a_{ti} \sim \pi_\theta\,|\,s_t$ for the true state $s_t \in \mathcal{S}$. In self-play, conditions (1) and (2) are satisfied because the prior $\pi_\theta$ is shared among the agents; furthermore, the post-state in Comm-POSG corresponds to (3), so the peer prediction mechanism can be applied to the problem of predicting agent behavior. Summarizing the above discussion, we can allocate a score matrix $L_t$ as follows:
$$L_t := (\ell(a_{ti}|z_{tji}))_{i,j \in \llbracket n \rrbracket}; \qquad \ell(a_{ti}|z_{tji}) := \mathcal{F}(\pi_\theta(a_{ti}|z_{tji})\,\|\,a_{ti}) = \log \pi_\theta(a_{ti}|z_{tji}), \qquad (9)$$
an $n$-th order square matrix representing the score from Agent $i$ to Agent $j$. 4.3 THE TRUTHFUL SELF-PLAY In TSP, a truthful system is constructed by introducing a proper scoring rule into the imaginary rewards (a small numerical sketch of the scoring rule follows).
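To see why the logarithmic rule elicits honest reports, and how the score matrix of Eq. (9) can be assembled, here is a minimal numpy sketch with made-up numbers (ours, not the authors' code):

```python
import numpy as np

# Properness of the logarithmic scoring rule F(p||s) = log p for a binary event:
p_true = 0.7                               # the verifier's actual belief
reports = np.linspace(0.01, 0.99, 99)      # candidate reported probabilities
expected = p_true * np.log(reports) + (1 - p_true) * np.log(1 - reports)
print(reports[expected.argmax()])          # ~0.70: truthful reporting maximizes the score

def score_matrix(log_pi: np.ndarray) -> np.ndarray:
    """Score matrix of Eq. (9): log_pi[i, j] = log pi_theta(a_j | z_i);
    the diagonal is zeroed as in Eq. (12)."""
    L = log_pi.copy()
    np.fill_diagonal(L, 0.0)
    return L
```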
However, since the score matrix obtained from the proper scoring rule does not satisfy Hermitianity, we perform zero-averaging by subtracting the mean of the scores from each element of the matrix, thereby making the sum zero. This can be expressed using the graph Laplacian $\Delta := E - \mathbf{1}\mathbf{1}^\mathsf{T}/n$ as
$$Y = \Delta L_\psi = \frac{1}{n}\begin{pmatrix} n-1 & -1 & \cdots & -1 \\ -1 & n-1 & \cdots & -1 \\ \vdots & \vdots & \ddots & \vdots \\ -1 & -1 & \cdots & n-1 \end{pmatrix}\begin{pmatrix} 0 & \ell_\psi(a_{t2}|z_{t1}) & \cdots & \ell_\psi(a_{tn}|z_{t1}) \\ \ell_\psi(a_{t1}|z_{t2}) & 0 & \cdots & \ell_\psi(a_{tn}|z_{t2}) \\ \vdots & \vdots & \ddots & \vdots \\ \ell_\psi(a_{t1}|z_{tn}) & \ell_\psi(a_{t2}|z_{tn}) & \cdots & 0 \end{pmatrix}, \qquad (10)$$
to get
$$R^+ = R + \imath\Delta L_\psi, \qquad (11)$$
which is the formula that connects reinforcement learning and mechanism design. We show the truthful self-play (TSP) in Algorithm 1. The only modification required over self-play is the imaginary reward.
Algorithm 1: The truthful self-play (TSP).
Require: Comm-POSG $G = \langle n, T, \mathcal{S}, \mathcal{A}^+, \mathcal{X}, \mathcal{T}, \mathcal{P}, \mathcal{R}^+\rangle$, recurrent neural network $\langle\sigma_\phi, q_\phi, \pi_\theta\rangle$ with initial weights $w_0 = \langle\theta_0, \phi_0\rangle$ and initial state $h_0$, learning rate $\alpha > 0$, and mass parameter $\beta \ge 0$.
Initialize $w \leftarrow w_0$.
for each episode do
  Genesis: $s_1 \sim p(s)$; $\hat h_{0i} \leftarrow h_0\ \forall i \in \llbracket n \rrbracket$.
  for $t = 1$ to $T$ do
    1. Self-play: observe $\mathbf{x}_t \sim \mathcal{P}^n(\cdot|s_t)$; update pre-states $h_{ti} \leftarrow f_\phi(\hat h_{t-1,i}, x_{ti})$; generate messages $z_{ti} \sim \sigma_\phi(\cdot|h_{ti})$; send $Z_t \leftarrow (z_{t1}, \ldots, z_{tn})$; receive $\hat{\mathbf{z}}_{ti} \leftarrow Z_{t:i}^\mathsf{T}$; update post-states $\hat h_{ti} \sim q_\phi(\cdot|\hat{\mathbf{z}}_{ti})$; act $a_{ti} \sim \pi_\theta(\cdot|\hat h_{ti})$; get the real reward $\mathbf{r}_t \leftarrow \mathcal{R}(s_t, \mathbf{a}_t)$, all for $i \in \llbracket n \rrbracket$.
    2. Compute the score matrix with the peer prediction mechanism (Miller et al., 2005):
    $$\ell_{ij} \leftarrow \begin{cases} \log \pi_\theta(a_j|z_i) & (i \neq j) \\ 0 & (i = j) \end{cases} \quad \forall i, j \in \llbracket n \rrbracket. \qquad (12)$$
    3. Combine real and imaginary rewards into a complex reward:
    $$R_t^+ \leftarrow R_t + \imath\Delta L. \qquad (13)$$
    4. Update the weights by policy gradient (Williams, 1992):
    $$g_t \leftarrow \sum_{i=1}^{n} r_{ti}^+ \nabla_w\!\left[\log \pi_\theta(a_{ti}|\hat h_{ti}) + \log q_\phi(\hat h_{ti}|\hat{\mathbf{z}}_{ti}) + \log \sigma_\phi(z_{ti}|\hat h_{t-1,i}, x_{ti})\right]; \qquad (14)$$
    $$w \leftarrow w + \alpha\,\mathrm{Re}\,g_t + \alpha\beta\,\mathrm{Im}\,g_t.$$
    5. Proceed to the next state $s_{t+1} \sim \mathcal{T}(\cdot|s_t, \mathbf{a}_t)$.
  end for
end for
return $w$
Theorem 4.1 (global optimality). For any $G$ in Comm-POSG, TSP converges to the global optimum $\hat J(G)$ if the following convergence condition is met:
$$\sup_\phi \left|\frac{\partial\,\mathrm{Re}\,V^{\pi_\theta}}{\partial\,\mathrm{Im}\,V^{\pi_\theta}}\right| < \beta, \qquad (15)$$
where $\beta < \infty$ is a bounded mass parameter. Proof (in summary; see Section A for the full proofs). From Proposition A.2, $[\beta\Delta L_\psi]$ is unbiased truthful. Therefore, from Proposition A.1, convergence to the global optimum is achieved. 5 NUMERICAL EXPERIMENT In this section, we establish the convergence of TSP through the results of numerical experiments with deep neural nets. We consider three environments for our analysis and experiments (Fig. 1): (a) a predator prey environment (PP), in which predators with limited vision look for a prey on a square grid; (b) a traffic junction environment (TJ), similar to Sukhbaatar et al. (2016), in which agents with limited vision learn to signal in order to avoid collisions; and (c) StarCraft: Brood War (SC) explore and combat tasks, which test control of multiple agents in various scenarios where an agent needs to understand and decouple observations of multiple opposing units. We compare the performance of TSP with self-play (SP) and SP with curiosity (Houthooft et al., 2016) on three tasks belonging to Comm-POSG, comprising up to 20 agents. The hyperparameters are listed in the appendix. Regarding CRAs, three models, namely LSTM, CommNet (Sukhbaatar et al., 2016), and IC3Net (Singh et al., 2019), were compared. The empirical mean of the social welfare function $J$ was used as the measure for comparison. IC3Net is an improvement over CommNet, which is a continuous communication method based on LSTM. A minimal sketch of the TSP update step appears below.
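To make Algorithm 1 concrete, the following numpy sketch implements steps 2-4 of one update. Reading the per-agent imaginary reward as the row sums of $\Delta L$ (i.e., $I_\psi = \Delta L\mathbf{1}$, as in Eq. (19) of the appendix) is our interpretation, and the function is an illustration rather than the authors' code:

```python
import numpy as np

def tsp_update_direction(log_pi_cross, r_env, grad_logp, beta=1.0):
    """One TSP gradient direction (Algorithm 1, steps 2-4).

    log_pi_cross[i, j] = log pi_theta(a_j | z_i): peer-prediction scores (Eq. 12).
    r_env:     length-n vector of real environmental rewards.
    grad_logp: (n, d) array; row i is grad_w of agent i's log-likelihood terms
               (action, post-state, and message policies) from Eq. (14).
    """
    n = len(r_env)
    L = log_pi_cross.copy()
    np.fill_diagonal(L, 0.0)                  # l_ii = 0 (Eq. 12)
    Delta = np.eye(n) - np.ones((n, n)) / n   # graph Laplacian (Eq. 10)
    y = (Delta @ L).sum(axis=1)               # per-agent imaginary reward
    assert abs(y.sum()) < 1e-9                # zero-sum: social welfare stays real
    g = ((r_env + 1j * y)[:, None] * grad_logp).sum(axis=0)  # complex gradient (Eq. 14)
    return g.real + beta * g.imag             # step: w <- w + alpha * direction
```

Appendix B notes that, up to the normalization of $\Delta$, the same quantity can be computed in $O(n^2)$ as $nL\mathbf{1} - (\mathbf{1}^\mathsf{T}L\mathbf{1})\mathbf{1}$, without forming the Laplacian.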
Actor-critic and value functions were added to the baselines in all the frameworks. We performed 2,000 epochs of experiments with 500 steps each, using 120 CPUs; the experiment was conducted over a period of three days. PP and TJ: Table 1 lists the experimental results for each task. We can see that IC3Net with TSP outperforms IC3Net with SP on all tasks. Fig. 2(a) shows that TSP elicits truthful information, (b) confirms that the social welfare of TSP exceeds that of the SPs, and (c) confirms that the average of the imaginary part is zero. [Table 2: StarCraft tasks — CommNet, IC3Net, IC3Net w/ TSP.] From these experimental results, we conclude that TSP successfully realizes truthful learning and achieves the state of the art in tasks comprising 3 to 20 agents. StarCraft: Table 2 compares social welfare on the exploration and combat tasks in StarCraft. (i) In the exploration task, 10 Medics find one enemy medic on a 50×50-cell grid; similar to PP, it is a competitive task in which the reward is divided by the number of medics that have found the enemy. (ii) In the combat task, 10 Marines fight 3 Zealots on a 50×50-cell grid. The maximum episode length is set to 60. We find that IC3Net, with its information-hiding gate, performs worse than CommNet but performs better when trained with TSP, owing to the truthful mechanism. 6 CONCLUDING REMARKS Our objective was to construct a general framework for emergent unbiased state representation without any supervision. First, we proposed TSP and theoretically clarified its convergence to the global optimum in the general case. Second, we performed experiments involving up to 20 agents and achieved state-of-the-art performance on all tasks. Herein, we summarize the advantages of our framework. 1. Strong convergence: TSP guarantees convergence to the global optimum theoretically and experimentally; self-play cannot provide such a guarantee. Besides, the imaginary reward $\imath\Delta L$ satisfies the baseline condition. 2. Simple solution: the only modification required for TSP is that $\imath\Delta L$ be added to the baseline, so it is easy to implement in deep learning software libraries such as TensorFlow and PyTorch. 3. Broad coverage: TSP is a general framework, like self-play. Since TSP is independent of both agents and environments and supports both discrete and continuous control, it can be applied to a wide range of domains. No supervision is required. To the best of our knowledge, introducing mechanism design to MARL is a new direction for the deep-learning community. In future work, we will consider fairness (Sen, 1984) as the social choice function. We expect that many other frameworks will be developed using the methodology employed in this study. A THEORY “. . . a human brain can learn such high-level abstractions if guided by the messages produced by other humans, which act as hints or indirect supervision for these high-level abstractions; and, language and the recombination and optimization of mental concepts provide an efficient evolutionary recombination operator, and this gives rise to rapid search in the space of communicable ideas that help humans build up better high-level internal representations of their world.” (Bengio, 2012) Proposition A.1. If $[C]$ is an unbiased truthful mechanism of $G$, self-play with $G[C]$ converges to $\hat J(G)$. Proof. Since $[C]$ is unbiased, $\mathbb{E}_{\pi_\theta}[C_i] = 0$ holds. Hence, for an arbitrary baseline $b$, $b + C_i$ also satisfies the baseline condition.
Therefore, from the policy gradient theorem (Sutton & Barto, 1998), self-play converges to $J^*(G[C])$. Further, since $[C]$ is an unbiased truthful mechanism, $J^*(G[C]) = \hat J(G[C]) = \hat J(G)$ holds from Proposition 3.1. A general loss function $\ell_\psi : \mathcal{A} \times \mathcal{Z} \to \mathbb{R}_\infty$ for any strictly concave nonnegative function $\psi : \mathcal{P}(\mathcal{A}) \to \mathbb{R}_\infty$ is defined as follows:
$$\ell_\psi := D_\psi\!\left(\pi_\theta(a_j|z_i)\,\|\,\delta(a_j|\cdot)\right), \qquad (16)$$
where $\delta(a_j|\cdot)$ is a point-wise probability that satisfies $\lim_{\epsilon\to 0} \int_{B(\epsilon;\tilde a_j)} \mathrm{d}\delta(a_j|\tilde a_j) = 1$ for an open ball $B(\epsilon; \tilde a_j)$, and $D_\psi$ is the Bregman divergence (Bregman, 1967) defined by
$$D_\psi(p\|q) := \psi(p) - \psi(q) + \int \nabla\psi(q)\, \mathrm{d}(p - q). \qquad (17)$$
Sending a truthful signal is the best response for minimizing the expectation of the general loss function. For example, the KL divergence is the special case of the Bregman divergence with $\psi = -H[\cdot]$, and the following equation holds:
$$\mathbb{E}_{\pi_\theta}[\ell_\psi] = \int D_\psi\!\left(\pi_\theta(a_j|z_i)\,\|\,\delta(a_j|\cdot)\right) \mathrm{d}\pi_\theta(a_j|h_i) = D_{\mathrm{KL}}\!\left(\pi_\theta(a_i|z_i)\,\|\,\pi_\theta(a_i|h_i)\right) \ge 0. \qquad (18)$$
The equality holds if and only if $z_i = h_i$. Notice that $\pi_\theta(a_i|h_i) = \pi_\theta(a_j|h_i)$. Now, we generalize the zero-one mechanism to arbitrary signaling games. Proposition A.2 (Bregman mechanism). For any signaling game, if $\sup_\phi \|\mathrm{d}V^{\pi_\theta}/\mathrm{d}\beta I_\psi\| < 1$, then $[\imath I_\psi]$ is an unbiased truthful mechanism of $G \in \mathbb{G}$ for a general cost function:
$$I_\psi(\mathbf{a}|\mathbf{z}) := \Delta L_\psi(\mathbf{a}|\mathbf{z})\mathbf{1} = \begin{pmatrix} n-1 & -1 & \cdots & -1 \\ -1 & n-1 & \cdots & -1 \\ \vdots & \vdots & \ddots & \vdots \\ -1 & -1 & \cdots & n-1 \end{pmatrix} \begin{pmatrix} 0 & \ell_\psi(a_2|z_1) & \cdots & \ell_\psi(a_n|z_1) \\ \ell_\psi(a_1|z_2) & 0 & \cdots & \ell_\psi(a_n|z_2) \\ \vdots & \vdots & \ddots & \vdots \\ \ell_\psi(a_1|z_n) & \ell_\psi(a_2|z_n) & \cdots & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}, \qquad (19)$$
where $\Delta := nE - \mathbf{1}\mathbf{1}^\mathsf{T}$ is a graph Laplacian. Proof. The problem we deal with is designing a scoring rule for information that satisfies two properties: (1) regularity, the score should be finite if the information is sufficiently correct; and (2) properness, the score should be maximized if and only if the information is true. A well-known example of a scoring rule is mutual information (MI), which compares a pair of probability distributions $p$ and $q$ in the logarithmic domain. However, MI cannot be applied to continuous actions. Instead, we introduce a more general tool, the regular proper scoring rule, defined below. Definition A.1 (Gneiting & Raftery, 2007). For a set $\Omega$, $\mathcal{F}(\cdot\|\cdot) : \mathcal{P}(\Omega) \times \Omega \to \mathbb{R}_\infty$ is a regular proper scoring rule iff there exists a strictly concave, real-valued function $f$ on $\mathcal{P}(\Omega)$ such that
$$\mathcal{F}(p\|x) = f(p) - \int_\Omega f^*(p, \omega)\, \mathrm{d}p(\omega) + f^*(p, x) \qquad (20)$$
for $p \in \mathcal{P}(\Omega)$ and $x \in \Omega$, where $f^*$ is a subgradient of $f$ that satisfies
$$f(q) \ge f(p) + \int_\Omega f^*(p, \omega)\, \mathrm{d}(q - p)(\omega) \qquad (21)$$
for $q \in \mathcal{P}(\Omega)$. We also define $\mathcal{F}$ for $q \in \mathcal{P}(\Omega)$ as $\mathcal{F}(p\|q) := \int_\Omega \mathcal{F}(p\|x)\, \mathrm{d}q(x)$, and write $\mathcal{B}$ for the set of regular proper scoring rules. For $\mathcal{F}, \mathcal{F}_1, \mathcal{F}_2 \in \mathcal{B}$, the following properties hold (Gneiting & Raftery, 2007): 1. Strict concavity: if $q \neq p$, then $\mathcal{F}(q\|q) > \mathcal{F}(p\|q)$. 2. $\mathcal{F}_1(p\|q) + a\mathcal{F}_2(p\|q) + f(q) \in \mathcal{B}$, where $a > 0$ and $f$ do not depend on $p$. 3. $-D_\psi \in \mathcal{B}$, where $D_\psi$ is the Bregman divergence. Lemma A.1. For any $\mathcal{F} \in \mathcal{B}$ and the function $\ell_\mathcal{F}$ defined below, if $\sup_\phi \|\mathrm{d}V^{\pi_\theta}/\mathrm{d}\beta\ell_\psi\| < 1$, then $[\imath\ell_\mathcal{F}]$ is a truthful mechanism of $G$:
$$\ell_\mathcal{F}(a_j|z_i) := \begin{cases} -\mathcal{F}(\pi_\theta(a_j|z_i)\,\|\,a_j) & (i \neq j) \\ 0 & (i = j) \end{cases}. \qquad (22)$$
Proof. We prove that the surrogate objective of $G[\imath\ell_\mathcal{F}]$, $V^{\pi_\theta}_\beta := V^{\pi_\theta} - \beta\ell_\mathcal{F}$, is strictly concave, and that if $\nabla_\phi V^{\pi_\theta}_\beta = 0$, then $z_i = h_i$ with probability 1. We denote by $\hat\phi$ the truthful parameter, for which $\sigma_{\hat\phi}(h_i|h_i) = 1$. The policy gradient for $\phi$ is
$$\nabla_\phi V^{\pi_\theta}_\beta\, \mathrm{d}q_\phi = \nabla_\phi V^{\pi_\theta}\, \mathrm{d}q_\phi + \beta\nabla_\phi \int \mathcal{F}(\pi_\theta(a_j|z_i)\,\|\,a_j)\, \mathrm{d}\pi_\theta\, \mathrm{d}q_\phi = \nabla_\phi V^{\pi_\theta}\, \mathrm{d}q_\phi + \beta\nabla_\phi \mathcal{F}(\pi_\theta(a_j|z_i)\,\|\,\pi_\theta(a_j|h_i))\, \mathrm{d}q_\phi. \qquad (23)$$
First, we consider the local optima, i.e., $\nabla_\phi V^{\pi_\theta}\, \mathrm{d}q_\phi = 0$ and $\phi \neq \hat\phi$.
For the Gâteaux differential with respect to $\vec\phi := (\hat\phi - \phi)^\mathsf{T}/\|\hat\phi - \phi\|$, $\vec\phi\,\nabla V^{\pi_\theta}_\beta = \beta\vec\phi\,\nabla\ell_\psi > 0$ holds from the strict concavity. At the global optimum, i.e., $\nabla_\phi V^{\pi_\theta}\, \mathrm{d}q_\phi = 0$ and $\phi = \hat\phi$, $\nabla_\phi V^{\pi_\theta}_\beta = \beta\nabla\ell_\psi = 0$ holds. Next, if $\vec\phi\,\nabla V^{\pi_\theta} < 0$, then since $\sup_\phi \|\mathrm{d}V^{\pi_\theta}/\mathrm{d}\ell_\mathcal{F}\| < \beta$, the following holds for $\phi \neq \hat\phi$:
$$\vec\phi\,\nabla V^{\pi_\theta}_\beta\, \mathrm{d}q_\phi = \vec\phi\,(\nabla V^{\pi_\theta} + \beta\nabla\ell_\mathcal{F})\, \mathrm{d}q_\phi > \vec\phi\,\nabla V^{\pi_\theta}\, \mathrm{d}q_\phi + \sup_\phi\left\|\frac{\mathrm{d}V}{\mathrm{d}\ell_\mathcal{F}}\right\| \vec\phi\,\nabla\ell_\mathcal{F}\, \mathrm{d}q_\phi \ge \vec\phi\,\nabla V^{\pi_\theta}\, \mathrm{d}q_\phi - \inf_\phi(\vec\phi\,\nabla V^{\pi_\theta})\, \mathrm{d}q_\phi \ge 0. \qquad (24)$$
Hence, $\vec\phi\,\nabla V^{\pi_\theta}_\beta \ge 0$ holds, with equality if and only if $\phi = \hat\phi$. Therefore, $V^{\pi_\theta}_\beta$ is strictly concave, and the following holds for $\alpha_k \in o(1/k)$:
$$\lim_{K\to\infty} \sum_{k=1}^{K} \frac{\nabla_\phi V^{\pi_\theta}_\beta(\phi_k)}{\|\nabla_\phi V^{\pi_\theta}_\beta(\phi_k)\|}\, \alpha_k = \hat\phi. \quad \text{(a.s.)} \qquad (25)$$
$I_\psi$ is defined for both discrete and continuous actions. Table 3 lists examples of scoring rules $\ell_\psi$ for arbitrary actions. In particular, minimizing $\ell_\psi$ for continuous actions is known as probability density estimation (Gneiting & Raftery, 2007). $-I_\psi$ is a proper scoring rule (Gneiting & Raftery, 2007) since it is a linear combination of Bregman divergences. Hence, from Lemma A.1, $[\imath I_\psi]$ is truthful. Besides, since $\mathbf{1}^\mathsf{T} I_\psi = \mathbf{1}^\mathsf{T}\Delta L_\psi\mathbf{1} = 0$, $[\imath I_\psi]$ is unbiased. Theorem A.1 (global optimality). For any $G$ in Comm-POSG, TSP converges to the global optimum $\hat J(G)$ if the following convergence condition is met:
$$\sup_\phi \left|\frac{\partial\,\mathrm{Re}\,V^{\pi_\theta}}{\partial\,\mathrm{Im}\,V^{\pi_\theta}}\right| < \beta, \qquad (26)$$
where $\beta < \infty$ is a bounded mass parameter. Proof. From Proposition A.2, $[\imath I_\psi]$ is unbiased truthful. Therefore, from Proposition A.1, convergence to the global optimum is achieved. A.1 SELF-PLAY CONVERGES TO LOCAL OPTIMA Theorem A.2. If $G \in \mathbb{G}$ is non-truthful, self-play does not converge to the global optimum $\hat J(G)$. Proof. Example A.1 (one-bit two-way communication game). Fig. 4 shows an example of a non-cooperative partially observable environment with a 1-bit state. The reward structure is presented in Table 4. The sum of rewards is maximized when both agents report the correct state to the environment:
$$\sum_{i=1}^{n} \mathcal{R}^n_i(s, \mathbf{a}) = \begin{cases} 2c & (a_1 = a_2 = s) \\ 0 & (\text{otherwise}) \end{cases}.$$
Hence, the objective varies in the range $0 \le J(G_{\mathrm{2com}}) \le 2c$. Proposition A.3. If $c < 1$, then $J^*(G_{\mathrm{2com}}) < \hat J(G_{\mathrm{2com}})$ holds. Proof. Since $p(s) = 1/2$, we can assume $s = 1$ without loss of generality. Besides, we discuss only Agent 1 because of symmetry. From $\mathcal{Z} = \{0, 1\}$, Agent 1's messaging policy $\sigma_1$ sends the correct information $x$ or the false information $1 - x$ when it knows $x$. Hence, we can represent the policy using a parameter $\phi \in [0, 1]$ as follows:
$$\sigma_\phi(z|x) = \begin{cases} \phi^z(1-\phi)^{1-z} & (x = 1) \\ 1/2 & (x = \circ) \end{cases}. \qquad (27)$$
Differentiating $\sigma_\phi$ with respect to $\phi$ gives
$$\frac{\mathrm{d}\sigma_\phi}{\mathrm{d}\phi} = \begin{cases} 2z - 1 & (x = 1) \\ 0 & (x = \circ) \end{cases}. \qquad (28)$$
Therefore, from Eq. (??), if $\langle\pi^*, \cdot\rangle \in \mathcal{W}^*(G_{\mathrm{2com}})$, then the policy gradient for $\phi$ is
$$\frac{\mathrm{d}}{\mathrm{d}\phi} U(\pi^*, \phi) = \frac{\mathrm{d}}{\mathrm{d}\phi} \int V^*_1\, \mathrm{d}q_1\, \mathrm{d}\sigma_1\, \mathrm{d}\mathcal{P}\, \mathrm{d}p = \int V^*_1\, \mathrm{d}q_1\, \frac{\mathrm{d}\sigma_1}{\mathrm{d}\phi}\, \mathrm{d}\mathcal{P}\, \mathrm{d}p = \lambda \int V^*_1 (2z_1 - 1)\, \mathrm{d}z_1 \Big|_{s=x_1=1} = \lambda \int (2z_1 - 1)\, \mathcal{R}_1\, \mathrm{d}\pi^*_1\, \mathrm{d}\pi^*_2\, \mathrm{d}q_2\, \mathrm{d}z_1\, \mathrm{d}\mathcal{P} \Big|_{s=x_1=1} = \lambda(1-\lambda) \int (2z_1 - 1)\, \mathcal{R}_1\, \mathrm{d}\pi^*_1\, \mathrm{d}\pi^*_2\, \mathrm{d}q_2\, \mathrm{d}z_1 \Big|_{s=x_1=1,\, x_2=\circ} = \lambda(1-\lambda) \sum_{z_1=0}^{1} (2z_1 - 1)\, \mathcal{R}_1(s, \langle x_1, z_1\rangle) \Big|_{s=x_1=1} = \lambda(1-\lambda)\left[\mathcal{R}_1(1, \langle 1, 1\rangle) - \mathcal{R}_1(1, \langle 1, 0\rangle)\right] = \lambda(1-\lambda)(c - 1) < 0. \qquad (29)$$
As the policy gradient is negative by the assumption $c \in (0, 1)$, $\phi^* = 0$ gives the Nash equilibrium given $\phi \ge 0$, thereby resulting in always sending false information to the opponent:
$$\sigma_{\phi^*}(z|x) = \begin{cases} 1 - z & (x = 1) \\ 1/2 & (x = \circ) \end{cases}. \qquad (30)$$
Let $J\langle x_1, x_2\rangle := J|_{\mathbf{x}=\langle x_1, x_2\rangle}$. We can compute $J^*$ and $\hat J$ as follows.
$$J^* = \int J^*\langle x_1, x_2\rangle\, \mathrm{d}\mathcal{P}^2\, \mathrm{d}p = J^*\langle 1, 1\rangle\lambda^2 + J^*\langle 1, \circ\rangle \cdot 2\lambda(1-\lambda) + J^*\langle\circ, \circ\rangle(1-\lambda)^2 = 2c\lambda^2 + 0 + \frac{2c}{4}(1-\lambda)^2 = 2c\left[\lambda^2 + \frac{1}{4}(1-\lambda)^2\right], \qquad (31)$$
and
$$\hat J = \int \hat J\langle x_1, x_2\rangle\, \mathrm{d}\mathcal{P}^2\, \mathrm{d}p = \hat J\langle 1, 1\rangle\lambda^2 + \hat J\langle 1, \circ\rangle \cdot 2\lambda(1-\lambda) + \hat J\langle\circ, \circ\rangle(1-\lambda)^2 = 2c\lambda^2 + 2c \cdot 2\lambda(1-\lambda) + \frac{2c}{4}(1-\lambda)^2 = 2c\left[\lambda^2 + 2\lambda(1-\lambda) + \frac{1}{4}(1-\lambda)^2\right], \qquad (32)$$
respectively. Therefore, $\hat J - J^* = 4c\lambda(1-\lambda) > 0$, so $J^* < \hat J$ holds. From Proposition A.1, $G = G_{\mathrm{2com}}$ is a counterexample showing that global optimality does not hold in general. A.2 THE ZERO-ONE MECHANISM SOLVES $G_{\mathrm{2com}}$ Proposition A.4 (zero-one mechanism). Let $\ell : \mathcal{A} \times \mathcal{Z} \to \{0, 1\}$ be the zero-one loss between an action and a message, $\ell(a_i|z_j) := a_j(1 - z_i) + (1 - a_i)z_i$, and
$$I(\mathbf{a}|\mathbf{z}) := \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} 0 & \ell(a_2|z_1) \\ \ell(a_1|z_2) & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix}. \qquad (33)$$
If $\beta > (1-c)(1-\lambda)/\lambda$, then $[\imath I]$ is an unbiased truthful mechanism of $G_{\mathrm{2com}}$, and self-play with $G_{\mathrm{2com}}[\imath I]$ converges to the global optimum $\hat J(G_{\mathrm{2com}}) = 2c\left[1 - \frac{3}{4}(1-\lambda)^2\right]$. Proof. The following holds:
$$\frac{\mathrm{d}}{\mathrm{d}\phi} V^*_{\beta,1}\, \mathrm{d}q_1 = \frac{\mathrm{d}}{\mathrm{d}\phi} \int (V^*_1 - \beta I_1\, \mathrm{d}\pi^*_2\, \mathrm{d}q_2)\, \mathrm{d}q_1 = -\lambda(1-\lambda)(1-c) - \beta \int I_1\, \mathrm{d}\pi^*_2\, \mathrm{d}q_1\, \mathrm{d}q_2\, \frac{\mathrm{d}\sigma_1}{\mathrm{d}\phi}\, \mathrm{d}\sigma_2\, \mathrm{d}\mathcal{P}^2\, \mathrm{d}p = -\lambda(1-\lambda)(1-c) - \beta\lambda \int I_1\, \mathrm{d}\pi^*_2\, \mathrm{d}q_2\, (2z_1 - 1)\, \mathrm{d}z_1\, \mathrm{d}\sigma_2\, \mathrm{d}\mathcal{P} \Big|_{s=x_1=1} = -\lambda(1-\lambda)(1-c) - \beta\lambda^2 \int (2z_1 - 1)\,\ell(a_2|z_1)\, \mathrm{d}\pi^*_2\, \mathrm{d}q_2\, \mathrm{d}z_1 \Big|_{s=x_1=x_2=1} = -\lambda(1-\lambda)(1-c) - \beta\lambda^2 \sum_{z_1=0}^{1} (2z_1 - 1)\,\ell(1|z_1) = -\lambda(1-\lambda)(1-c) - \beta\lambda^2(\ell(1|1) - \ell(1|0)) = -\lambda(1-\lambda)(1-c) + \beta\lambda^2 = \lambda^2\left[\beta - (1-c)\frac{1-\lambda}{\lambda}\right]. \qquad (34)$$
Therefore, if $\beta > (1-c)(1-\lambda)/\lambda$, then $\phi^* = 1$, and $J^* = \hat J$ holds. The value of $\hat J$ is clear from the proof of Lemma A.1. $[\imath I]$ is also known as the peer prediction method (Miller et al., 2005), which is inspired by peer review. This process is illustrated in Fig. 4 (left), and the state-action value functions are listed in Fig. 4 (right). B COMPLEXITY ANALYSIS Although the computational complexity of $\beta I_\psi$ per iteration is $O(n^3)$, as it involves the multiplication of $n$-th order square matrices, we can reduce it to $O(n^2)$ by computing $I_\psi = nL_\psi\mathbf{1} - (\mathbf{1}^\mathsf{T}L_\psi\mathbf{1})\mathbf{1}$. The spatial complexity is $O(n^2)$, and the sample size is $O(n)$. C EXPERIMENTAL ENVIRONMENTS In the experiments, we used partially observable environments whose settings are the same as those adopted in existing studies (Sukhbaatar et al., 2016; Singh et al., 2019). Fig. 1 shows the environments. C.1 PREDATOR PREY (PP) Predator-prey (PP) is a widely used benchmark environment in MARL (Barrett et al., 2011; Sukhbaatar et al., 2016; Singh et al., 2019). Multiple predators search for a prey at a randomly initialized location in a grid world, with a limited field of view; only a few blocks around each predator can be seen. Therefore, for the predators to reach the prey faster, it is necessary to inform the other predators about the prey's location and the locations already searched. Thus, the prey's location is conveyed through communication among the predators, but predators can also send false messages to keep other predators away from the prey. In this experiment, we use two difficulty levels, PP-3 and PP-5, where the numbers denote the number of agents. In PP-3, the range of the visual field is set to 0 in a 5×5 environment. In PP-5, the field of view is set to 1 in a 10×10 environment. C.2 TRAFFIC JUNCTION (TJ) Traffic junction (TJ) is a simplified road intersection task. $n$ agents with limited fields of view inform the other agents of their locations to avoid collisions. In this experiment, TJ has three difficulty levels. TJ-5 solves the task of crossing two straight one-way roads. In TJ-10, there are two lanes, and each vehicle can not only go straight but also turn left or right.
For TJ-20, the two-lane roads comprise two parallel routes, for a total of four intersections; each number corresponds to $n$. In the initial state, each vehicle is given a starting point and a destination and is trained to follow a determined path as fast as possible while avoiding collisions. An agent controls each vehicle and takes one of two actions, accelerate or brake, in each time step. It is crucial to keep vehicles from approaching each other to prevent collisions while making good use of multi-agent communication, which plays a role similar to blinkers and brake lights. C.3 STARCRAFT: BROOD WAR (SC) Explore: To complete the exploration task, an agent must come within a specific range (field of view) of the enemy unit. Once the agent is within the enemy unit's field of view, it takes no further action. The reward structure is the same as in the PP task, the only difference being that, instead of having to be at the same location, the agents must reach the enemy unit's range of vision, for which they receive a non-negative reward. Medic units, which do not attack enemy units, are used to prevent combat from interfering with the mission objective. The observation for each agent consists of the agent's (absolute x, absolute y) and the enemy's (relative x, relative y, visible), where visible indicates whether the enemy is within visual range. If the enemy is not in exploration range, relative x and relative y are zero. The agent has nine actions to choose from: eight basic directions and one stay action. Combat: Agents observe themselves as (absolute x, absolute y, health points + shield, weapon cooldown, previous action) and enemies as (relative x, relative y, visible, health points + shield, weapon cooldown). Relative x and y are only observed when the enemy is visible, in correspondence with the visible flag. All observations are normalized to lie in (0, 1). The agent must choose from 9 + M actions: 9 basic actions and one attack action for each of the M enemy agents. An attack action is only effective if the enemy is within the agent's view; otherwise it is a no-op. Our combat setup is more difficult and restrictive than, and differs from, previous StarCraft setups, and is therefore not directly comparable to them. In the combat task, we give a negative reward r_time = −0.01 at each time step to discourage delaying the detection of the enemy team. While an agent is not participating in a battle, at each time step it is rewarded based on (i) the normalized difference between its health at the current and previous time steps, and (ii) the normalized difference between the health of the enemies it has attacked so far at the previous and current time steps. The final reward for each agent consists of (i) the total remaining health of all enemies × 3 as a negative reward and, if the agents win, (ii) 5 × m plus the total remaining health of the team × 3 as a positive reward; upon losing, all living enemies' health × 3 is given as a negative reward. In this task, a group of enemies is randomly initialized in one half of the map, with the agents in the other half, making this communication-demanding task even more difficult. D HYPERPARAMETERS
1. What is the main contribution of the paper regarding improving SP in partially observable environments? 2. What are the strengths and weaknesses of the proposed method, particularly in its theoretical grounding and experimental results? 3. Do you have any concerns regarding the clarity and novelty of the paper's content, especially in its use of terminology and separation of the communication channel from the state space? 4. How does the reviewer assess the quality and reproducibility of the paper's experiments and results? 5. Are there any suggestions for additional baselines or related work that could enhance the paper's comparisons and distinctions?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper Truthful Self-Play (TSP) is a method for improving SP in partially observable environments which involve communication. The motivation is well grounded: an additional reward term is added to evaluate the truthfulness of the communication sent. This is implemented by the additional reward p(a_j | z_ij), which is to say, it forces agent j to listen to the communication sent from agent i. In a reverse way, as the listener has to care about the message, the speaker (as this is self-play) starts providing useful information. Strengths And Weaknesses Strengths Strong theoretical grounding Experiments on lots of environments This is useful Weaknesses The math is kinda OOT in my opinion. Section 3 could be way shorter. The use of imaginary numbers is unnecessary - simply use R^2 and then apply a normalization over a single dimension. (Figure a) Only showing the real part of the reward signal? Can we see how the second reward signal changes over time? Experimental results are lacking. I appreciate the variety of environments applied, but simply reporting numbers isn't convincing. I'd like to understand how messages look with some visualisation and how they change over time. I'd also like to understand what optimal or desired behaviour looks like in these environments - can you get to welfare 0 in PP or TJ? Please inform the reader more about your baselines (CommNet and IC3Net) Why didn't you run CommNet w/ TSP? Why don't you have MAPPO as a baseline? The proper scoring rule seems to be no different from agent modelling of other agents. Could you add some of this to the related work section just to help differentiate (or explain in rebuttal if I'm being slow)? Clarity, Quality, Novelty And Reproducibility Clarity: Found the paper really unclear! Lots of confusing terms - is evolutionary learning any different from automatic curriculum learning? Just trying to understand the jargon here. Unclear what internal-state refers to in the introduction -> is this the internal beliefs of an agent? Why is this framed as faithful state representations - the question is about the truthfulness of the communication channel, which is explicitly separate from the state space! Novelty: I can't tell if this is novel because it just seems like opponent modelling and they haven't explained why it's different.
ICLR
Title Truthful Self-Play Abstract We present a general framework for evolutionary learning to emergent unbiased state representation without any supervision. Evolutionary frameworks such as self-play converge to bad local optima in case of multi-agent reinforcement learning in non-cooperative partially observable environments with communication due to information asymmetry. Our proposed framework is a simple modification of selfplay inspired by mechanism design, also known as reverse game theory, to elicit truthful signals and make the agents cooperative. The key idea is to add imaginary rewards using the peer prediction method, i.e., a mechanism for evaluating the validity of information exchanged between agents in a decentralized environment. Numerical experiments with predator prey, traffic junction and StarCraft tasks demonstrate that the state-of-the-art performance of our framework. N/A We present a general framework for evolutionary learning to emergent unbiased state representation without any supervision. Evolutionary frameworks such as self-play converge to bad local optima in case of multi-agent reinforcement learning in non-cooperative partially observable environments with communication due to information asymmetry. Our proposed framework is a simple modification of selfplay inspired by mechanism design, also known as reverse game theory, to elicit truthful signals and make the agents cooperative. The key idea is to add imaginary rewards using the peer prediction method, i.e., a mechanism for evaluating the validity of information exchanged between agents in a decentralized environment. Numerical experiments with predator prey, traffic junction and StarCraft tasks demonstrate that the state-of-the-art performance of our framework. 1 INTRODUCTION Evolving culture prevents deep neural networks from falling into bad local optima (Bengio, 2012). Self-play (Samuel, 1967; Tesauro, 1995) has not only demonstrated the ability to abstract highdimensional state spaces as typified by AlphaGo (Silver et al., 2017), but also improved exploration coverage in partially observable environments. Communication (Sukhbaatar et al., 2016; Singh et al., 2019) exchanges their internal representations such as explored observation and hidden state in RNNs. Evolutionary learning is expected to be a general framework for creating superhuman AIs as such learning can generate a high-level abstract representation without any bias in supervision. However, when applying evolutionary learning to a partially observable environment with noncooperative agents, improper bias is injected into the state representation. This bias originates from the environment. A partially observable environment with non-cooperative agents induces actions that disable an agent from honestly sharing the correct internal state resulting in the agent taking actions such as concealing information and deceiving other agents at equilibrium (Singh et al., 2019). The problem arises because the agent cannot fully observe the state of the environment, and thus, it does not have sufficient knowledge to verify the information provided by other agents. Furthermore, neural networks are vulnerable to adversarial examples (Szegedy et al., 2014) and are likely to induce erroneous behavior with small perturbations. Many discriminative models for information accuracy are available; these include GANs (Goodfellow et al., 2014; Radford et al., 2016) and curriculum learning (Lowe et al., 2020). 
However, these models assume that accurate samples can be obtained by supervision. Because of this assumption, is it impossible to apply these models to a partially observable environment, where the distribution is not stable. We generalize self-play to non-cooperative partially observable environments via mechanism design (Myerson, 1983; Miller et al., 2005), which is also known as reverse game theory. The key idea is to add imaginary rewards by using the peer prediction method (Miller et al., 2005), that is, a mechanism for evaluating the validity of information exchanged between agents in a decentralized environment, which is calculated based on social influence on the signals. We formulate the non-cooperative partially observable environment as an extention of the partially observable stochastic games (POSG) (Hansen et al., 2004); introduce truthfulness (Vickrey, 1961), which is an indicator of the validity of state representation. We show that the imaginary reward enables us to reflect the bias of state representation on the gradient without oracles. As the first contribution, we propose truthful self-play (TSP) and analytically demonstrate convergence to the global optimum (Section 4). We propose the imaginary reward on the basis of the peer prediction method (Miller et al., 2005) and apply it to self-play. The mechanism affects the gradient of the local optima, but not the global optima. The trick is to use the actions taken by the agents as feedback to verify the received signal from the every other agent, instead of the true state, input, and intent, which the agents cannot fully observe. TSP only requires a modification of the baseline function for self-play; it drastically improves the convergence to the global optimum in Comm-POSG. As the second contribution, based on the results of numerical experiments, we report that the TSP achieved state-of-the-art performance for various multi-agent tasks made of up to 20 agents (Section 5). Using predator prey (Barrett et al., 2011), traffic junction (Sukhbaatar et al., 2016; Singh et al., 2019), and StarCraft (Synnaeve et al., 2016) environments, which are typically used in Comm-POSG research, we compared the performances of TSP with the current neural nets, including the state-ofthe-art method, with LSTM, CommNet (Sukhbaatar et al., 2016), and IC3Net (Singh et al., 2019). We report that the model with IC3Net optimized by TSP has the best performance. This work is the first attempt to apply mechanism design to evolutionary learning. TSP is a general optimization algorithm whose convergence is theoretically guaranteed for arbitrary policies and environments. Since no supervision is required, TSP has a wide range of applications to not only game AIs (Silver et al., 2017), but also the robots (Jaderberg et al., 2018), chatbots (Gupta et al., 2019; Chevalier et al., 2019), and autonomous cars (Tang, 2019) employed in multiagent tasks. Notation: Vectors are columns. Let JnK := {1, . . . , n}. R is a set of real numbers. i is the imaginary unit. Reu and Imu are a real and an imaginary part of complex number u, respectively. n-tuple are written as boldface of the original variables a := 〈a1, . . . , an〉 , and a−i is a (n− 1)-tuple obtained by removing the i-th entry from a. Let 1 := (1, . . . , 1)T. Matrices are shown in uppercase letters L := (`ij). E is the unit matrix. The set of probability distributions based on the support X is described as P(X ). 
2 RELATED WORK Neural communication has gained attention in the field of multiagent reinforcement learning (MARL) for both discrete (Foerster et al., 2016) and continuous (Sukhbaatar et al., 2016; Singh et al., 2019) signals. Those networks are trained via self-play to exchange the internal state of the environment stored in the working memory of recurrent neural networks (RNNs) to learn the right policy in partially observable environments. The term self-play was coined by the game AI community in the latter half of the century. Samuel (Samuel, 1967) introduced self-play as a framework for sharing a state-action value among two opposing agents to efficiently search the state space at Checkers. TD-Gammon (Tesauro, 1995) introduced self-play as a framework to learn TD(λ) (Sutton & Barto, 1998) and achieve professionalgrade levels in backgammon. AlphaGo (Silver et al., 2017) defeated the Go champion by combining supervised learning with professional game records and self-play. AlphaZero (Silver et al., 2018) successfully learnt beyond its own performance entirely based on self-play. All these studies explain that eliminating the bias of human knowledge in supervision is the advantage of self-play. Self-play is also known as evolutionary learning (Bengio, 2012) in the deep learning community mainly as an approach to emerging representations without supervision (Bansal et al., 2018; Balduzzi et al., 2019). Bansal et al. (2018) show that competitive environments contribute to emerging diversity and complexity. Rich generative models such as GANs (Goodfellow et al., 2014; Radford et al., 2016) are frameworks for acquiring an environmental model by employing competitive settings. RNNs such as world models (Ha & Schmidhuber, 2018; Eslami et al., 2018) are capable of more comprehensive ranges of exploration in partially observable environments and generation of symbols and languages (Bengio, 2017; Gupta et al., 2019; Chevalier et al., 2019). The difference between evolutionary learning and supervised learning is the absence of human knowledge and oracles. Several works have formalized those in which the agents exchange environmental information as a formal class of the games such as Dec-POMDP-Com (Goldman & Zilberstein, 2003) and COMMTDP (Pynadath & Tambe, 2002), and several frameworks are proposed to aim to solve the problems. However, the limitation of the frameworks is that they assume a common reward. As there are yet no formal definition of non-cooperative communication game, we formalize such a game to Comm-POSG as a superset of POSGs (Hansen et al., 2004), a more general class of multi-agent games including the cases of non-cooperativity (Hansen et al., 2004). To the best of our knowledge, there are no studies that have introduced truthful mechanisms into the field of MARL, but it may be possible to introduce it by using agents that can learn flexibly, such as neural networks. A typical truthful mechanism is the VCG mechanism (Vickrey, 1961), which is a generalization of the pivot method used in auction theory, but whereas the subject of the report that must satisfy truthfulness must be a valuation (or a value function if interpreted from a RL perspective). In this study, the scope of application is different because the belief states of the environment are subject to reporting. 
Therefore, we introduce instead a peer prediction method (Miller et al., 2005) that guarantees truthfulness with respect to reporting beliefs about arbitrary probability distributions using proper scoring rules (Gneiting & Raftery, 2007). 3 PROBLEM DEFINITION 3.1 COMM-POSG A communicative partially-observable stochastic game (Comm-POSG) is a class of non-cooperative Bayesian games in which every agent does not fully observe the environment but interacts each other. We define Comm-POSG as an extension of POSG (Hansen et al., 2004) with a message protocol. Definition 3.1 (Hansen et al., 2004) POSG 〈n, T,S,A,X , T ,P,R〉 is a class for multi-agent decision making under uncertainty in which the state evolves over time 1 ≤ t ≤ T , where • n is the number of agents, • T is a horizon i.e., the episode length, • S is a set of discrete/continuous state st ∈ S with an initial probabilistic distribution p(s0), • A is a set of discrete/continuous action ati ∈ A, • X is a set of discrete/continuous observation xti ∈ X , • T ∈ P (S ×A× S) is state transition probability, • P ∈ P (S × Xn) is an observation probability, and • R : S ×An → Rn is a reward function that outputs an n-dimensional vector. In Comm-POSGs, every agent further follows a message protocol Zn×n, where Z is the discrete/continuous signal space. The complete information exchanged among the agent in time is Zt, where Zt := (ztij)i,j∈JnK ∈ Zn×n is a signal matrix in which (i, j)-th entry ztij represents a signal from Agent i to Agent j at t. The i-th diagonal entry of Zt, hti := ztii represents the pre-state, an internal state of i-th agent before receiving the singals from the others. A game in Comm-POSG is denoted as G := 〈n, T,S,A,X , T ,P,R,Z〉. The objective of Comm-POSG is social welfare (Arrow, 1963) defined by the following, J := n∑ i=1 V πi ; V πi := Eπi [ T∑ t=1 γt−1rti ] , (1) where γ ∈ [0, 1] is discount rate, rti is reward πi is a stochastic policy, and V πi is the value function. In extensive-form games including Comm-POSG, in addition to the information in the environment, the information of other agents cannot be observed. In the optimization problem under these assumptions, a policy converges to a solution called the Bayesian Nash equilibrium (BNE) (Fudenberg, 1993). We denote the social welfare at the BNE is J∗, and the global maximum Ĵ . In general, J∗ 6= Ĵ holds, which is closely related to the information asymmetry. 3.2 COMMUNICATIVE RECURRENT AGENTS In order to propose an optimization algorithm in this paper, we do not propose a concrete structure of the network, but we propose an abstract structure that can cover existing neural communication models (Sukhbaatar et al., 2016; Singh et al., 2019), namely communicative recurrent agents (CRAs) 〈fφ, σφ, qφ, πθ〉, where • fφ(ĥt−1,i, xti) 7→ Z is a deep RNN for the high-dimensional input xti ∈ X and the previous post-state ĥt−1,i ∈ Z , with a parameter φ and an initial state ĥ0 ∈ Z , • σφ(zti|hti) is a stochastic messaging policy for a pre-state hti := fφ(ĥt−1,i, xti), • qφ(ĥti|ẑti) is a stochastic model for a post-state ĥti ∈ Z and the received messages ẑti := Z T t:i = (zt1i, . . . , zt,i−1,i, hti, zt,i+1,i, . . . , ztni) T, and • πθ(ati|ĥti) is the stochastic action policy with a parameter θ. These agents are trained through self-play using on-policy learning such as REINFORCE (Williams, 1992). All n-agents share the same weight per episode, and the weights are updated based on the cumulative reward after the episode. 
In addition to a recurrent agent's behaviour of taking an observation series as input and outputting actions, a CRA takes and emits communication signals. A CRA estimates the current state of the environment, and its own current value, through the post-state model, using the pre-state $h_{ti}$ in the hidden layer of the RNN together with the signals $\hat{z}_{ti,-i}$ received from the other agents. Hence, the veracity of the signals $z_{ti}$ is the point of contention.

3.3 TRUTHFULNESS

In mechanism design, a truthful game (Vickrey, 1961) is a game in which all agents report honestly in the Bayesian Nash equilibrium. In Comm-POSG, the game is truthful if every sent signal equals the sender's pre-state, $z_{tij} = h_{ti}$, i.e., all agents share complete information. In that case every agent has the same information $\hat{z}_{ti} = \mathbf{h}_t := (h_{t1}, \ldots, h_{tn})^{\mathsf{T}}$ for all $i$ and the same post-state distribution, and hence the following mean cross-entropy between the distributions is minimized:

$$D_\phi(Z_t) := \frac{1}{n}\sum_{i=1}^{n} H\!\left[q_\phi(\hat{h}_{ti}|\hat{z}_{ti})\right] + \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} D_{\mathrm{KL}}\!\left(q_\phi(\hat{h}_{ti}|\hat{z}_{ti})\,\|\,q_\phi(\hat{h}_{tj}|\hat{z}_{tj})\right). \tag{2}$$

The first term represents the entropy of the knowledge each agent has about the environment, and the second the information asymmetry between the agents. $D_\phi$ is lower-bounded by the amount of true information in the environment, $H[p(s_t)]$. Since achieving truthfulness is essentially the same problem as minimizing $D_\phi$, it simultaneously maximizes $J^*$.

Proposition 3.1 (global optimality). For any game $G$ in Comm-POSG, if $D_\phi(Z_t) = H[p(s_t)]$ for $0 \le t \le T$ and $\mathcal{R}_i$ is symmetric under any permutation of $i \in [n]$, then $J^*(G) = \hat{J}(G)$ holds.

Proof. Let $w := \langle \theta, \phi \rangle$ be a point in the parameter space $\mathcal{W}(G)$, and let $\mathcal{W}^*(G)$ be the BNE. Since $J$ is clearly maximized if $\sigma_\phi$ is truthful, we prove that $\sigma_\phi$ must be truthful under the given condition. To this end, we show the following Pareto optimality.

Lemma 3.1. For any $G$ in Comm-POSG and a given $w$, if $J(w) \ge J(w')$ holds for all $w' \in \mathcal{W}(G)$, then either $U(w) \ge U(w')$ or $D_{\mathrm{KL}}(p \,\|\, q_\phi) \le D_{\mathrm{KL}}(p \,\|\, q_{\phi'})$ holds, where

$$U(w) := \mathbb{E}_{q_\phi}[V^{\pi_\theta}] = \int_{s_t \in \mathcal{S},\, \hat{z}_t \in \mathcal{Z}^n} V^{\pi_\theta}(s_t) \,\mathrm{d}q_\phi(s_t|h_{ti}, z_{t,-i}) \,\mathrm{d}\sigma_\phi(\hat{z}_t). \tag{3}$$

Proof. The first inequality, $U(w) \ge U(w')$, indicates that $w$ is on the BNE $\mathcal{W}^*(G)$ given $\phi$; the second, that the belief state $q_\phi$ is as close to the true state $p$ as possible. In a fully observable environment, by the value iteration theorem, there exists a solution $\pi^\star(s_t)$ with respect to $V^{\pi^\star}(s_t)$ for every $s_t \in \mathcal{S}$. We call $\pi^\star$ and $V^\star := V^{\pi^\star}$ the unbiased policy and value, respectively. Since the unbiased policy attains the objective $J(w) = n\,\mathbb{E}_{p(s_t)}[V^{\pi_\theta}(s_t)]$ from Eq. (1), the goal is intrinsically to find the policy $\pi^*$ as close to $\pi^\star$ as possible for $\langle \pi^*, \phi^* \rangle \in \mathcal{W}(G)$, i.e., the one that maximizes $U(w)$. The policy $\pi^*$ can further be represented as a mixed policy composed of the unbiased policy $\pi^\star$ and a biased policy $\pi' \ne \pi^\star$ as follows,

$$\pi^*(a_{ti}|x_{ti}, \phi) = \mathbb{E}_{q_\phi(s_{ti}|x_{ti})}[\pi^\star(a_{ti}|s_{ti})] = q_\phi(s_{t0}|x_{ti})\,\pi^\star(a_{ti}|s_{t0}) + \left(1 - q_\phi(s_{t0}|x_{ti})\right)\pi'(a_{ti}|\phi, x_{ti}), \tag{4}$$

for observations $x_{ti} \in \mathcal{X}^n$, where $s_{t0} \in \mathcal{S}$ is the true state. Hence

$$\begin{aligned} V^{\pi^*}(s_{t0}|\phi) &= \mathbb{E}_{\mathcal{P}(x_{ti}|s_{t0})}\!\left[q_\phi(s_{t0}|x_{ti})V^\star(s_{t0}) + (1 - q_\phi(s_{t0}|x_{ti}))V'(x_{ti}|\phi)\right] \\ &= q_\phi(s_{t0})V^\star(s_{t0}) + \mathbb{E}_{\mathcal{P}(x_{ti}|s_{t0})}\!\left[(1 - q_\phi(s_{t0}|x_{ti}))V'(x_{ti}|\phi)\right] \\ &= q_\phi(s_{t0})V^\star(s_{t0}) + (1 - q_\phi(s_{t0}))\bar{V}'(s_{t0}|\phi), \end{aligned} \tag{5}$$

where

$$V'(x_{ti}|\phi) := \int_{s_t \in \mathcal{S}^{n+1},\, a_t \in \mathcal{A}^n} \mathcal{R}_i(s_{t0}, a_t) \prod_{i=1}^{n} \mathrm{d}\pi'(a_{ti}|s_{ti})\, q_\phi(s_{ti}|x_{ti}), \tag{6}$$

and

$$\bar{V}'(s_{t0}|\phi) := \mathbb{E}_{\mathcal{P}(x_{ti}|s_{t0})}\!\left[V'(x_{ti}|\phi)\,\frac{1 - q_\phi(s_{t0}|x_{ti})}{1 - q_\phi(s_{t0})}\right]. \tag{7}$$

Thus, the error from the unbiased value function can be written as $V^\star(s_{t0}) - V^{\pi^*}(s_{t0}|\phi) = (1 - q_\phi(s_{t0}))(V^\star(s_{t0}) - \bar{V}'(s_{t0}|\phi))$, which is minimized when $q_\phi(s_{t0}) = 1$, since $V^\star(s_{t0}) > \bar{V}'(s_{t0}|\phi)$ by definition.
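To make Eq. (2) concrete, here is a small numpy sketch (ours, not from the paper's codebase) that evaluates $D_\phi$ for categorical post-state beliefs, one distribution per agent:

```python
import numpy as np

def truthfulness_gap(beliefs, eps=1e-12):
    """D_phi of Eq. (2) for categorical post-state beliefs.

    beliefs: (n, k) array; row i is agent i's distribution q_phi(.|z_hat_ti).
    Returns mean entropy + mean pairwise KL divergence.
    """
    q = np.clip(beliefs, eps, 1.0)
    q = q / q.sum(axis=1, keepdims=True)
    entropy = -(q * np.log(q)).sum(axis=1).mean()      # first term of Eq. (2)
    # Pairwise KL(q_i || q_j), averaged over all (i, j): information asymmetry.
    log_q = np.log(q)
    kl = (q[:, None, :] * (log_q[:, None, :] - log_q[None, :, :])).sum(-1)
    return entropy + kl.mean()

# Identical (truthful) beliefs: the asymmetry term vanishes.
shared = np.tile([0.7, 0.2, 0.1], (3, 1))
print(truthfulness_gap(shared))   # equals the entropy of the shared belief
```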
By Jensen's inequality,

$$\log \mathbb{E}_{p(s_{t0})}[q_\phi(s_{t0})] \ge \mathbb{E}_{p(s_{t0})}[\log q_\phi(s_{t0})] = -D_{\mathrm{KL}}(p \,\|\, q_\phi) - H[p]. \tag{8}$$

The right-hand side of this inequality corresponds to the negative cross-entropy to be maximized. Since the second term $H[p]$ does not depend on $\phi$, the optimization is achieved by minimizing $D_{\mathrm{KL}}(p \,\|\, q_\phi)$. □

Suppose now that $\hat{J}(G) = J(w) > J^*(G)$ for a non-truthful reporting policy, i.e., one with $\sigma_\phi(h|h) < 1$. By Lemma 3.1, $q_\phi(s_t|h_{ti})$, for the internal state $h_{ti} = f(x_{ti})$ of Agent $i$ with an encoder $f$, minimizes $D_{\mathrm{KL}}(p \,\|\, q_\phi(s_t|h_{ti}))$. Since $q_\phi(s_t|z_{ti}) \ne q_\phi(s_t|h_{ti})$ together with $D_{\mathrm{KL}}(p \,\|\, q_\phi(s_t|z_{ti})) > D_{\mathrm{KL}}(p \,\|\, q_\phi(s_t|h_{ti}))$ contradicts the Pareto optimality, $\sigma_\phi$ must be truthful. □

4 PROPOSED FRAMEWORK

An obvious way to achieve truthful learning is to add $D_\phi$ as a penalty term to the objective, but there are two obstacles to this approach. One is that the new regularization term would also bias the social welfare $J$; the other is that $D_\phi$ contains the agents' internal states, the post-states $\hat{h}_{ti}$, so the exact quantity cannot be measured by the designer of the learner. If post-states were reported correctly, then pre-states would also be reported honestly and truthfulness would be achieved; we must therefore assume that the post-states cannot be observed during optimization.

Our framework, truthful self-play (TSP), consists of two elements: the introduction of imaginary rewards, a general framework for unbiased regularization in Comm-POSG, and the introduction of the peer prediction method (Miller et al., 2005), a truthful mechanism that encourages honest reporting based solely on observable variables. Below we describe each element and show that the proposed framework converges to the global optimum in Comm-POSG. The whole procedure is given in Algorithm 1.

4.1 IMAGINARY REWARD

Imaginary rewards are virtual rewards passed between agents; they have a basis $\imath$ different from that of the rewards passed from the environment, with the characteristic that they sum to zero. Since most RL environments, including Comm-POSG, involve no entities other than agents and the environment, a two-dimensional structure is sufficient to describe rewards comprehensively if we wish to distinguish their sender. For the social welfare of the system to remain real, the system must be designed so that the sum of the imaginary rewards, i.e., the imaginary part of the social welfare, is zero. In other words, the imaginary part is not observed macroscopically and affects only the relative expected rewards of the agents. The real and imaginary parts of the complex rewards are weighted against each other by the mass parameter $\beta$ during training, which allows the weights of the network to maintain a real structure.

The whole imaginary reward is denoted $\imath Y = (\imath y_{ij})_{i,j \in [n]}$,¹ where $\imath y_{ij}$ is the imaginary reward passed from Agent $i$ to Agent $j$, and the complex reward for the whole game is $R^+ := R + \imath Y$, where $R$ is a diagonal matrix with the environmental reward $r_i$ as its $(i,i)$-th entry. We write $G[\imath Y]$ for the game into which this structure is introduced. In this case, the following proposition holds.

Proposition 4.1. For any $G$ in Comm-POSG, if $G[\imath Y]$ is truthful and $R^+$ is a Hermitian matrix, then $J^*(G[\imath Y]) = \hat{J}(G)$ holds.

Proof. Since $G[\imath Y]$ is truthful, $J^*(G[\imath Y]) = \hat{J}(G[\imath Y])$ holds by Proposition 3.1. Further, since $R^+$ is Hermitian, $\imath y_{ij} = -\imath y_{ji}$, and hence $\operatorname{Im} \hat{J}(G[\imath Y]) = 0$ holds; therefore $\hat{J}(G[\imath Y]) = \hat{J}(G)$. □

This indicates that the BNE can be improved by introducing imaginary rewards: $J^*(G[\imath Y]) \ge J^*(G)$.
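A small numpy illustration (ours, with arbitrary random values) of the complex reward $R^+ = R + \imath Y$ and the zero-sum property required of the imaginary part:

```python
import numpy as np

n = 4
r_env = np.random.rand(n)                  # environmental rewards r_i
R = np.diag(r_env).astype(complex)         # real part: diagonal reward matrix

# Any antisymmetric Y (y_ij = -y_ji) makes R+ = R + iY Hermitian.
A = np.random.randn(n, n)
Y = A - A.T
R_plus = R + 1j * Y

assert np.allclose(R_plus, R_plus.conj().T)   # R+ is Hermitian
assert np.isclose(Y.sum(), 0.0)               # imaginary rewards sum to zero
print(R_plus.diagonal().real.sum())           # social welfare stays real
```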
Also, since $\sum_{i=1}^{n}\sum_{j=1}^{n} \imath y_{ij} = 0$ follows from the condition that $R^+$ is Hermitian, the imaginary rewards do not affect the social welfare of the system, which is a macroscopic objective, but only the expected reward of each agent. The baseline in the policy gradient (Williams, 1992) is another example of a quantity that leaves the objective unaffected when its mean is zero. However, the baseline is determined from the value function of a single agent, whereas the imaginary reward differs in that (1) it affects the value function of each agent and (2) it is a meaningful quantity only when $n \ge 2$ and is not observed when $n = 1$.

4.2 PEER PREDICTION MECHANISM

The peer prediction mechanism (Miller et al., 2005) derives from mechanism design based on proper scoring rules (Gneiting & Raftery, 2007), which aim to encourage verifiers to report their beliefs honestly by assigning a score, serving as a reward, to their predictions of probabilistic events. These mechanisms assume at least two agents, a reporter and a verifier. A general scoring rule can be expressed as $\mathcal{F}(p_s \| s)$, where $p_s$ is the probability of occurrence reported by the verifier for the event $s$, and $\mathcal{F}(p_s \| s)$ is the score obtained if the reported event $s$ actually occurs. A scoring rule is proper if an honest declaration, consistent with the verifier's beliefs, maximizes the expected score, and strictly proper if it is the unique maximizer. A representative strictly proper rule is the logarithmic scoring rule $\mathcal{F}(p_s \| s) = \log p_s$, whose expected value for a 1-bit signal is the cross-entropy $p_s^* \log p_s + (1 - p_s^*) \log(1 - p_s)$ for belief $p_s^*$; one finds that $p_s = p_s^*$ is the only report that maximizes the score.

Since a proper scoring rule assumes that the events $s$ are observable, it is not applicable to problems such as partially observable environments where the true value is hidden. Miller et al. (2005), who first presented the peer prediction mechanism, instead scored the posteriors of the verifiers as updated by the reported signals rather than by the event itself. This is formulated by a model in which an event $s$ stochastically emits a signal $z$, and the type of $s$ is inferred from the signals of the reporters who receive it. The peer prediction mechanism is denoted $\mathcal{F}(p(s|z) \| s)$ under the assumptions that (1) the type of the event $s$ and the signal $z$ emitted by each type follow a given prior, (2) the priors are common knowledge among the verifiers, and (3) the posterior is updated according to the reports.

We apply the mechanism to RL, i.e., to the problem of predicting the agent's optimal behavior $a_{ti} \sim \pi_\theta|s_t$ for the true state $s_t \in \mathcal{S}$. In self-play, conditions (1) and (2) are satisfied because the prior $\pi_\theta$ is shared among the agents; furthermore, the post-state in Comm-POSG corresponds to (3), so the peer prediction mechanism can be applied to the problem of predicting agent behavior. Summarizing the above discussion, we can allocate a score matrix $L_t$ as follows,

$$L_t := \left(\ell(a_{ti}|z_{tji})\right)_{i,j\in[n]}; \qquad \ell(a_{ti}|z_{tji}) := \mathcal{F}(\pi_\theta(a_{ti}|z_{tji}) \| a_{ti}) = \log \pi_\theta(a_{ti}|z_{tji}), \tag{9}$$

an $n$-th order square matrix whose entries represent the scores exchanged between Agents $i$ and $j$.

4.3 THE TRUTHFUL SELF-PLAY

In TSP, a truthful system is constructed by introducing a proper scoring rule into the imaginary rewards.
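A numpy sketch (ours; it assumes a discrete action space and follows the indexing convention of Algorithm 1, step 2) of the log-scoring matrix: entry $(i,j)$ scores how well Agent $i$'s message explains Agent $j$'s sampled action under the shared policy:

```python
import numpy as np

def score_matrix(policy, messages, actions):
    """Peer-prediction score matrix of Eq. (9) / Algorithm 1, step 2.

    policy(msg) -> probability vector over actions (shared prior pi_theta).
    messages:  list of n signals z_i, one per agent.
    actions:   length-n array of sampled actions a_j.
    L[i, j] = log pi_theta(a_j | z_i) for i != j, and 0 on the diagonal.
    """
    n = len(messages)
    L = np.zeros((n, n))
    for i in range(n):
        probs = policy(messages[i])
        for j in range(n):
            if i != j:
                L[i, j] = np.log(probs[actions[j]] + 1e-12)
    return L
```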
However, since the score matrix obtained from the proper scoring rule does not satisfy Hermitianity, we zero-average it by subtracting the mean of the scores from each element of the matrix, thereby making the sum zero. Using the graph Laplacian $\Delta := E - \mathbf{1}\mathbf{1}^{\mathsf{T}}/n$, this can be expressed as

$$Y = \Delta L_\psi = \frac{1}{n} \begin{pmatrix} n-1 & -1 & \cdots & -1 \\ -1 & n-1 & \cdots & -1 \\ \vdots & \vdots & \ddots & \vdots \\ -1 & -1 & \cdots & n-1 \end{pmatrix} \begin{pmatrix} 0 & \ell_\psi(a_{t2}|z_{t1}) & \cdots & \ell_\psi(a_{tn}|z_{t1}) \\ \ell_\psi(a_{t1}|z_{t2}) & 0 & \cdots & \ell_\psi(a_{tn}|z_{t2}) \\ \vdots & \vdots & \ddots & \vdots \\ \ell_\psi(a_{t1}|z_{tn}) & \ell_\psi(a_{t2}|z_{tn}) & \cdots & 0 \end{pmatrix}, \tag{10}$$

to obtain

$$R^+ = R + \imath\,\Delta L_\psi, \tag{11}$$

which is the formula that connects reinforcement learning and mechanism design.

We show the truthful self-play (TSP) in Algorithm 1. The only modification required relative to self-play is the imaginary reward.

¹Note the use of $\imath$ for the imaginary unit and $i$ for indices.

Algorithm 1 The truthful self-play (TSP).
Require: Comm-POSG $G = \langle n, T, \mathcal{S}, \mathcal{A}^+, \mathcal{X}, \mathcal{T}, \mathcal{P}, \mathcal{R}^+ \rangle$, recurrent neural network $\langle \sigma_\phi, q_\phi, \pi_\theta \rangle$ with initial weights $w_0 = \langle \theta_0, \phi_0 \rangle$ and initial state $h_0$, learning rate $\alpha > 0$, and mass parameter $\beta \ge 0$.
Initialize $w \leftarrow w_0$.
for each episode do
  Genesis: $s_1 \sim p(s)$, $\hat{h}_{0i} \leftarrow h_0$ $\forall i \in [n]$.
  for $t = 1$ to $T$ do
    1. Self-play:
      Observe $x_t \sim \mathcal{P}^n(\cdot|s_t)$.
      Update pre-states $h_{ti} \leftarrow f_\phi(\hat{h}_{t-1,i}, x_{ti})$, $\forall i \in [n]$.
      Generate messages $z_{ti} \sim \sigma_\phi(\cdot|h_{ti})$, $\forall i \in [n]$.
      Send messages $Z_t \leftarrow (z_{t1}, \ldots, z_{tn})$.
      Receive messages $\hat{z}_{ti} \leftarrow Z_{t:i}^{\mathsf{T}}$, $\forall i \in [n]$.
      Update post-states $\hat{h}_{ti} \sim q_\phi(\cdot|\hat{z}_{ti})$, $\forall i \in [n]$.
      Act $a_{ti} \sim \pi_\theta(\cdot|\hat{h}_{ti})$, $\forall i \in [n]$.
      Get the real reward $r_t \leftarrow \mathcal{R}(s_t, a_t)$.
    2. Compute the score matrix with the peer prediction mechanism (Miller et al., 2005):
      $$\ell_{ij} \leftarrow \begin{cases} \log \pi_\theta(a_j|z_i) & (i \ne j) \\ 0 & (i = j) \end{cases} \quad \forall i,j \in [n]. \tag{12}$$
    3. Combine real and imaginary rewards into a complex reward:
      $$R_t^+ \leftarrow R_t + \imath\,\Delta L. \tag{13}$$
    4. Update the weights by policy gradient (Williams, 1992):
      $$g_t \leftarrow \sum_{i=1}^{n} r_{ti}^+ \nabla_w \left[\log \pi_\theta(a_{ti}|\hat{h}_{ti}) + \log q_\phi(\hat{h}_{ti}|\hat{z}_{ti}) + \log \sigma_\phi(z_{ti}|\hat{h}_{t-1,i}, x_{ti})\right] \tag{14}$$
      $$w \leftarrow w + \alpha \operatorname{Re} g_t + \alpha\beta \operatorname{Im} g_t$$
    5. Proceed to the next state $s_{t+1} \sim \mathcal{T}(\cdot|s_t, a_t)$.
  end for
end for
return $w$

Theorem 4.1 (global optimality). For any $G$ in Comm-POSG, TSP converges to the global optimum $\hat{J}(G)$ if the following convergence condition is met,

$$\sup_\phi \left| \frac{\partial \operatorname{Re} V^{\pi_\theta}}{\partial \operatorname{Im} V^{\pi_\theta}} \right| < \beta, \tag{15}$$

where $\beta < \infty$ is a bounded mass parameter.

Proof (summary).² By Proposition A.2, $[\beta\Delta L_\psi]$ is unbiased truthful. Therefore, by Proposition A.1, convergence to the global optimum is achieved. □

²See Section A for the full proofs.

5 NUMERICAL EXPERIMENT

In this section, we establish the convergence of TSP through numerical experiments with deep neural nets. We consider three environments for our analysis and experiments (Fig. 1): (a) a predator-prey environment (PP), in which predators with limited vision look for a prey on a square grid; (b) a traffic junction environment (TJ), similar to Sukhbaatar et al. (2016), in which agents with limited vision learn to signal in order to avoid collisions; and (c) StarCraft: Brood War (SC) explore and combat tasks, which test control of multiple agents in various scenarios where an agent needs to understand and decouple observations of multiple opposing units.

We compare the performance of TSP with self-play (SP) and SP with curiosity (Houthooft et al., 2016) on three tasks belonging to Comm-POSG, comprising up to 20 agents. The hyperparameters are listed in the appendix. As CRAs, three models, namely LSTM, CommNet (Sukhbaatar et al., 2016), and IC3Net (Singh et al., 2019), were compared; IC3Net is an improvement over CommNet, a continuous communication method based on LSTM. The empirical mean of the social welfare $J$ was used as the comparison metric.
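Steps 2–4 of Algorithm 1 can be summarized in a few lines of numpy (ours, simplified to scalar per-agent reward weights): zero-average the score matrix with the graph Laplacian and weight the imaginary part by the mass parameter $\beta$:

```python
import numpy as np

def tsp_reward(r_env, L, beta):
    """Effective per-agent reward weight for the TSP update.

    r_env: (n,) environmental rewards r_ti.
    L:     (n, n) peer-prediction score matrix (zero diagonal), Eq. (12).
    beta:  mass parameter weighting the imaginary part.
    Implements w <- w + alpha*Re(g) + alpha*beta*Im(g) at the reward level.
    """
    n = len(r_env)
    Delta = np.eye(n) - np.ones((n, n)) / n   # graph Laplacian of Eq. (10)
    Y = Delta @ L                             # zero-averaged scores
    imag = Y.sum(axis=1)                      # imaginary reward per agent
    assert np.isclose(imag.sum(), 0.0)        # unbiasedness: sums to zero
    return r_env + beta * imag                # Re + beta * Im, as in step 4
```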
Actor-critic and value-function baselines were added in all frameworks. We ran 2,000 epochs of 500 steps each, using 120 CPUs; the experiments were conducted over a period of three days.

PP and TJ: Table 1 lists the experimental results for each task. We can see that IC3Net with TSP outperforms IC3Net with SP for all tasks. Fig. 2(a) shows that TSP elicits truthful information, (b) confirms that the social welfare of TSP exceeds that of the SP variants, and (c) confirms that the average of the imaginary part is zero. From these experimental results, we conclude that TSP successfully realizes truthful learning and achieves the state of the art in tasks comprising 3 to 20 agents.

StarCraft: Table 2 compares the social welfare achieved by CommNet, IC3Net, and IC3Net with TSP on the exploration and combat tasks in StarCraft. (i) In the exploration task, 10 Medics search for one enemy medic on a 50×50-cell grid; as in PP, it is a competitive task in which the reward is divided by the number of medics that have found the enemy. (ii) In the combat task, 10 Marines fight 3 Zealots on a 50×50-cell grid. The maximum episode length is set to 60 steps. We find that IC3Net, with its information-hiding gate, performs worse than CommNet, but performs better when trained with TSP owing to the truthful mechanism.

6 CONCLUDING REMARK

Our objective was to construct a general framework for emergent unbiased state representations without any supervision. First, we proposed the TSP and theoretically clarified its convergence to the global optimum in the general case. Second, we performed experiments involving up to 20 agents and achieved state-of-the-art performance on all the tasks. We summarize the advantages of our framework as follows.

1. Strong convergence: TSP guarantees convergence to the global optimum both theoretically and experimentally; self-play cannot provide such a guarantee. Besides, the imaginary reward $\imath\Delta L$ satisfies the baseline condition.
2. Simple solution: The only modification required for TSP is adding $\imath\Delta L$ to the baseline, which makes it easy to implement in deep learning libraries such as TensorFlow and PyTorch.
3. Broad coverage: Like self-play, TSP is a general framework. Since it is independent of both agents and environments and supports both discrete and continuous control, it can be applied to a wide range of domains. No supervision is required.

To the best of our knowledge, introducing mechanism design to MARL is a new direction for the deep learning community. In future work, we will consider fairness (Sen, 1984) as the social choice function. We expect that many other frameworks can be developed using the methodology employed in this study.

A THEORY

". . . a human brain can learn such high-level abstractions if guided by the messages produced by other humans, which act as hints or indirect supervision for these high-level abstractions; and, language and the recombination and optimization of mental concepts provide an efficient evolutionary recombination operator, and this gives rise to rapid search in the space of communicable ideas that help humans build up better high-level internal representations of their world." (Bengio, 2012)

Proposition A.1. If $[C]$ is an unbiased truthful mechanism of $G$, self-play with $G[C]$ converges to $\hat{J}(G)$.

Proof. Since $[C]$ is unbiased, $\mathbb{E}_{\pi_\theta}[C_i] = 0$ holds. Hence, for an arbitrary baseline $b$, $b + C_i$ also satisfies the baseline condition.
Therefore, by the policy gradient theorem (Sutton & Barto, 1998), self-play converges to $J^*(G[C])$. Further, since $[C]$ is an unbiased truthful mechanism, $J^*(G[C]) = \hat{J}(G[C]) = \hat{J}(G)$ holds by Proposition 3.1. □

A general loss function $\ell_\psi: \mathcal{A} \times \mathcal{Z} \to \mathbb{R}_\infty$ for any strictly concave nonnegative function $\psi: P(\mathcal{A}) \to \mathbb{R}_\infty$ is defined as follows:

$$\ell_\psi := D_\psi\left(\pi_\theta(a_j|z_i) \,\|\, \delta(a_j|\cdot)\right), \tag{16}$$

where $\delta(a_j|\cdot)$ is a point-wise probability satisfying $\lim_{\epsilon\to 0} \int_{B(\epsilon;\tilde{a}_j)} \mathrm{d}\delta(a_j|\tilde{a}_j) = 1$ for an open ball $B(\epsilon; \tilde{a}_j)$, and $D_\psi$ is the Bregman divergence (Bregman, 1967) defined by

$$D_\psi(p \,\|\, q) := \psi(p) - \psi(q) + \int \nabla\psi(q) \,\mathrm{d}(p - q). \tag{17}$$

Sending a truthful signal is the best response for minimizing the expectation of the general loss function. For example, the KL divergence is the special case of the Bregman divergence with $\psi = -H[\cdot]$, and the following holds:

$$\mathbb{E}_{\pi_\theta}[\ell_\psi] = \int D_\psi\left(\pi_\theta(a_j|z_i) \,\|\, \delta(a_j|\cdot)\right) \mathrm{d}\pi_\theta(a_j|h_i) = D_{\mathrm{KL}}\left(\pi_\theta(a_i|z_i) \,\|\, \pi_\theta(a_i|h_i)\right) \ge 0. \tag{18}$$

The equality holds if and only if $z_i = h_i$. Notice that $\pi_\theta(a_i|h_i) = \pi_\theta(a_j|h_i)$.

Now, we generalize the zero-one mechanism to arbitrary signaling games.

Proposition A.2 (Bregman mechanism). For any signaling game, if $\sup_\phi \|\mathrm{d}V^{\pi_\theta}/\mathrm{d}\beta I_\psi\| < 1$, then $[\imath I_\psi]$ is an unbiased truthful mechanism of $G \in \mathcal{G}$ for a general cost function:

$$I_\psi(a|z) := \Delta L_\psi(a|z)\mathbf{1} = \begin{pmatrix} n-1 & -1 & \cdots & -1 \\ -1 & n-1 & \cdots & -1 \\ \vdots & \vdots & \ddots & \vdots \\ -1 & -1 & \cdots & n-1 \end{pmatrix} \begin{pmatrix} 0 & \ell_\psi(a_2|z_1) & \cdots & \ell_\psi(a_n|z_1) \\ \ell_\psi(a_1|z_2) & 0 & \cdots & \ell_\psi(a_n|z_2) \\ \vdots & \vdots & \ddots & \vdots \\ \ell_\psi(a_1|z_n) & \ell_\psi(a_2|z_n) & \cdots & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}, \tag{19}$$

where $\Delta := nE - \mathbf{1}\mathbf{1}^{\mathsf{T}}$ is a graph Laplacian.

Proof. The problem we deal with is designing a scoring rule for information that satisfies two properties: 1) regularity, i.e., the score should be finite if the information is sufficiently correct, and 2) properness, i.e., the score should be maximized if and only if the information is true. A well-known scoring rule is mutual information (MI), which compares a pair of probability distributions $p$ and $q$ in the logarithmic domain. However, MI cannot handle continuous actions. Instead, we introduce a more general tool, the regular proper scoring rule, defined below.

Definition A.1 (Gneiting & Raftery, 2007). For a set $\Omega$, $\mathcal{F}(\cdot\|\cdot): P(\Omega) \times \Omega \to \mathbb{R}_\infty$ is a regular proper scoring rule iff there exists a strictly concave, real-valued function $f$ on $P(\Omega)$ such that

$$\mathcal{F}(p\|x) = f(p) - \int_\Omega f^*(p(\omega), x) \,\mathrm{d}p(\omega) + f^*(p, x) \tag{20}$$

for $p \in P(\Omega)$ and $x \in \Omega$, where $f^*$ is a subgradient of $f$ satisfying

$$f(q) \ge f(p) + \int_\Omega f^*(p, \omega) \,\mathrm{d}(q - p)(\omega) \tag{21}$$

for $q \in P(\Omega)$.

We also define $\mathcal{F}$ for $q \in P(\Omega)$ as $\mathcal{F}(p\|q) := \int_\Omega \mathcal{F}(p\|x) \,\mathrm{d}q(x)$, and write $\mathcal{B}$ for the set of regular proper scoring rules. For $\mathcal{F}, \mathcal{F}_1, \mathcal{F}_2 \in \mathcal{B}$, the following properties hold (Gneiting & Raftery, 2007):

1. Strict concavity: if $q \ne p$, then $\mathcal{F}(q\|q) > \mathcal{F}(p\|q)$.
2. $\mathcal{F}_1(p\|q) + a\mathcal{F}_2(p\|q) + f(q) \in \mathcal{B}$, where $a > 0$ and $f$ do not depend on $p$.
3. $-D_\psi \in \mathcal{B}$, where $D_\psi$ is the Bregman divergence.

Lemma A.1. For any $\mathcal{F} \in \mathcal{B}$ and the function $\ell_\mathcal{F}$ defined below, if $\sup_\phi \|\mathrm{d}V^{\pi_\theta}/\mathrm{d}\beta\ell_\psi\| < 1$, then $[\imath\ell_\mathcal{F}]$ is a truthful mechanism of $G$:

$$\ell_\mathcal{F}(a_j|z_i) := \begin{cases} -\mathcal{F}(\pi_\theta(a_j|z_i)\|a_j) & (i \ne j) \\ 0 & (i = j) \end{cases}. \tag{22}$$

Proof. We prove that the surrogate objective of $G[\imath\ell_\mathcal{F}]$, $V_\beta^{\pi_\theta} := V^{\pi_\theta} - \beta\ell_\mathcal{F}$, is strictly concave, and that if $\nabla_\phi V_\beta^{\pi_\theta} = 0$, then $z_i = h_i$ with probability 1. We denote by $\hat\phi$ the truthful parameter, for which $\sigma_{\hat\phi}(h_i|h_i) = 1$. The policy gradient for $\phi$ is

$$\begin{aligned} \nabla_\phi V_\beta^{\pi_\theta} \,\mathrm{d}q_\phi &= \nabla_\phi V^{\pi_\theta} \,\mathrm{d}q_\phi + \beta\nabla_\phi \int \mathcal{F}(\pi_\theta(a_j|z_i)\|a_j) \,\mathrm{d}\pi_\theta \,\mathrm{d}q_\phi \\ &= \nabla_\phi V^{\pi_\theta} \,\mathrm{d}q_\phi + \beta\nabla_\phi \mathcal{F}(\pi_\theta(a_j|z_i)\|\pi_\theta(a_j|h_i)) \,\mathrm{d}q_\phi. \end{aligned} \tag{23}$$

First, we consider the local optima, i.e., $\nabla_\phi V^{\pi_\theta} \,\mathrm{d}q_\phi = 0$ and $\phi \ne \hat\phi$.
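A quick numerical check (ours) that the logarithmic score is proper, i.e., that reporting the true belief maximizes the expected score, consistent with the strict concavity property of Definition A.1:

```python
import numpy as np

def expected_log_score(report, belief):
    """E_{x ~ belief}[log report(x)]: expected logarithmic score."""
    return float(np.sum(belief * np.log(report)))

belief = np.array([0.6, 0.3, 0.1])          # verifier's true belief p*
honest = expected_log_score(belief, belief)
for _ in range(1000):                       # random alternative reports
    r = np.random.dirichlet(np.ones(3))
    assert expected_log_score(r, belief) <= honest + 1e-9
```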
For the Gâteaux differential with respect to $\vec\phi := (\hat\phi - \phi)^{\mathsf{T}}/\|\hat\phi - \phi\|$, $\vec\phi\nabla V_\beta^{\pi_\theta} = \beta\vec\phi\nabla\ell_\psi > 0$ holds by strict concavity. At the global optimum, i.e., $\nabla_\phi V^{\pi_\theta} \,\mathrm{d}q_\phi = 0$ and $\phi = \hat\phi$, $\nabla_\phi V_\beta^{\pi_\theta} = \beta\nabla\ell_\psi = 0$ holds. Next, if $\vec\phi\nabla V^{\pi_\theta} < 0$, then since $\sup_\phi \|\mathrm{d}V^{\pi_\theta}/\mathrm{d}\ell_\mathcal{F}\| < \beta$, the following holds for $\phi \ne \hat\phi$:

$$\begin{aligned} \vec\phi\nabla V_\beta^{\pi_\theta} \,\mathrm{d}q_\phi &= \vec\phi\left(\nabla V^{\pi_\theta} + \beta\nabla\ell_\mathcal{F}\right)\mathrm{d}q_\phi \\ &> \vec\phi\nabla V^{\pi_\theta} \,\mathrm{d}q_\phi + \sup_\phi\left\|\frac{\mathrm{d}V}{\mathrm{d}\ell_\mathcal{F}}\right\| \vec\phi\nabla\ell_\mathcal{F} \,\mathrm{d}q_\phi \\ &\ge \vec\phi\nabla V^{\pi_\theta} \,\mathrm{d}q_\phi - \inf_\phi(\vec\phi\nabla V^{\pi_\theta}) \,\mathrm{d}q_\phi \ge 0. \end{aligned} \tag{24}$$

Hence $\vec\phi\nabla V_\beta^{\pi_\theta} \ge 0$ holds, with equality if and only if $\phi = \hat\phi$. Therefore $V_\beta^{\pi_\theta}$ is strictly concave, and the following holds for $\alpha_k \in o(1/k)$:

$$\lim_{K\to\infty} \sum_{k=1}^{K} \frac{\nabla_\phi V_\beta^{\pi_\theta}(\phi_k)}{\|\nabla_\phi V_\beta^{\pi_\theta}(\phi_k)\|}\,\alpha_k = \hat\phi. \quad \text{(a.s.)} \tag{25}$$

□

$I_\psi$ is defined for both discrete and continuous actions. Table 3 lists examples of scoring rules $\ell_\psi$ for arbitrary actions. In particular, minimizing $\ell_\psi$ for continuous actions is known as probability density estimation (Gneiting & Raftery, 2007). $-I_\psi$ is a proper scoring rule (Gneiting & Raftery, 2007) since it is a linear combination of Bregman divergences. Hence, by Lemma A.1, $[\imath I_\psi]$ is truthful. Besides, since $\mathbf{1}^{\mathsf{T}}I_\psi = \mathbf{1}^{\mathsf{T}}\Delta L_\phi\mathbf{1} = 0$, $[\imath I_\psi]$ is unbiased. □

Theorem A.1 (global optimality). For any $G$ in Comm-POSG, TSP converges to the global optimum $\hat{J}(G)$ if the following convergence condition is met,

$$\sup_\phi \left|\frac{\partial \operatorname{Re} V^{\pi_\theta}}{\partial \operatorname{Im} V^{\pi_\theta}}\right| < \beta, \tag{26}$$

where $\beta < \infty$ is a bounded mass parameter.

Proof. By Proposition A.2, $[\imath I_\psi]$ is unbiased truthful. Therefore, by Proposition A.1, convergence to the global optimum is achieved. □

A.1 SELF-PLAY CONVERGES TO LOCAL OPTIMA

Theorem A.2. If $G \in \mathcal{G}$ is non-truthful, self-play does not converge to the global optimum $\hat{J}(G)$.

Proof. Example A.1 (One-bit two-way communication game). Fig. 4 shows an example of a non-cooperative partially observable environment with a 1-bit state. The reward structure is presented in Table 4. The sum of rewards is maximized when both agents report the correct state to the environment:

$$\sum_{i=1}^{n} \mathcal{R}_i^n(s, a) = \begin{cases} 2c & (a_1 = a_2 = s) \\ 0 & \text{(otherwise)} \end{cases}.$$

Hence, the objective varies in the range $0 \le J(G_{\mathrm{com}}^2) \le 2c$.

Proposition A.3. If $c < 1$, then $J^*(G_{\mathrm{com}}^2) < \hat{J}(G_{\mathrm{com}}^2)$ holds.

Proof. Since $p(s) = 1/2$, we may assume $s = 1$ without loss of generality. By symmetry, we discuss only Agent 1. Since $\mathcal{Z} = \{0, 1\}$, Agent 1's messaging policy $\sigma_1$ sends either the correct information $x$ or the false information $1 - x$ when it knows $x$. Hence, we can represent the policy using a parameter $\phi \in [0, 1]$ as follows:

$$\sigma_\phi(z|x) = \begin{cases} \phi^z(1-\phi)^{1-z} & (x = 1) \\ 1/2 & (x = \circ) \end{cases}. \tag{27}$$

Differentiating $\sigma_\phi$ with respect to $\phi$ gives

$$\frac{\mathrm{d}\sigma_\phi}{\mathrm{d}\phi} = \begin{cases} 2z - 1 & (x = 1) \\ 0 & (x = \circ) \end{cases}. \tag{28}$$

Therefore, from Eq. (??), if $\langle\pi^*, \cdot\rangle \in \mathcal{W}^*(G_{\mathrm{com}}^2)$, the policy gradient for $\phi$ is

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}\phi} U(\pi^*, \phi) &= \frac{\mathrm{d}}{\mathrm{d}\phi} \int V_1^* \,\mathrm{d}q_1 \,\mathrm{d}\sigma_1 \,\mathrm{d}\mathcal{P} \,\mathrm{d}p = \int V_1^* \,\mathrm{d}q_1 \frac{\mathrm{d}\sigma_1}{\mathrm{d}\phi} \,\mathrm{d}\mathcal{P} \,\mathrm{d}p \\ &= \lambda \int V_1^* (2z_1 - 1) \,\mathrm{d}z_1 \Big|_{s=x_1=1} \\ &= \lambda \int (2z_1 - 1)\,\mathcal{R}_1 \,\mathrm{d}\pi_1^* \,\mathrm{d}\pi_2^* \,\mathrm{d}q_2 \,\mathrm{d}z_1 \,\mathrm{d}\mathcal{P} \Big|_{s=x_1=1} \\ &= \lambda(1-\lambda) \int (2z_1 - 1)\,\mathcal{R}_1 \,\mathrm{d}\pi_1^* \,\mathrm{d}\pi_2^* \,\mathrm{d}q_2 \,\mathrm{d}z_1 \Big|_{s=x_1=1,\, x_2=\circ} \\ &= \lambda(1-\lambda) \sum_{z_1=0}^{1} (2z_1 - 1)\,\mathcal{R}_1(s, \langle x_1, z_1\rangle) \Big|_{s=x_1=1} \\ &= \lambda(1-\lambda)\left[\mathcal{R}_1(1, \langle 1, 1\rangle) - \mathcal{R}_1(1, \langle 1, 0\rangle)\right] \\ &= \lambda(1-\lambda)(c - 1) < 0. \end{aligned} \tag{29}$$

As the policy gradient is negative by the assumption $c \in (0, 1)$, $\phi^* = 0$ attains the Nash equilibrium from $\phi \ge 0$, resulting in always sending false information to the opponent:

$$\sigma_{\phi^*}(z|x) = \begin{cases} 1 - z & (x = 1) \\ 1/2 & (x = \circ) \end{cases}. \tag{30}$$

Let $J\langle x_1, x_2\rangle := J|_{x=\langle x_1, x_2\rangle}$. We can obtain $J^*$ and $\hat{J}$ as follows.
$$\begin{aligned} J^* &= \int J^*\langle x_1, x_2\rangle \,\mathrm{d}\mathcal{P}^2 \,\mathrm{d}p = J^*\langle 1, 1\rangle \lambda^2 + J^*\langle 1, \circ\rangle \cdot 2\lambda(1-\lambda) + J^*\langle \circ, \circ\rangle (1-\lambda)^2 \\ &= 2c\lambda^2 + 0 + \frac{2c}{4}(1-\lambda)^2 = 2c\left[\lambda^2 + \frac{1}{4}(1-\lambda)^2\right], \end{aligned} \tag{31}$$

and

$$\begin{aligned} \hat{J} &= \int \hat{J}\langle x_1, x_2\rangle \,\mathrm{d}\mathcal{P}^2 \,\mathrm{d}p = \hat{J}\langle 1, 1\rangle \lambda^2 + \hat{J}\langle 1, \circ\rangle \cdot 2\lambda(1-\lambda) + \hat{J}\langle \circ, \circ\rangle (1-\lambda)^2 \\ &= 2c\lambda^2 + 2c \cdot 2\lambda(1-\lambda) + \frac{2c}{4}(1-\lambda)^2 = 2c\left[\lambda^2 + 2\lambda(1-\lambda) + \frac{1}{4}(1-\lambda)^2\right], \end{aligned} \tag{32}$$

respectively. Therefore, $\hat{J} - J^* = 4c\lambda(1-\lambda) > 0$, and $J^* < \hat{J}$ holds. By Proposition A.1, $G = G_{\mathrm{com}}^2$ is a counterexample in which global optimality does not hold. □

A.2 THE ZERO-ONE MECHANISM SOLVES $G_{\mathrm{com}}^2$

Proposition A.4 (zero-one mechanism). Let $\ell: \mathcal{A}\times\mathcal{Z} \to \{0,1\}$ be the zero-one loss between an action and a message, $\ell(a_i|z_j) := a_j(1 - z_i) + (1 - a_i)z_i$, and

$$I(a|z) := \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} 0 & \ell(a_2|z_1) \\ \ell(a_1|z_2) & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix}. \tag{33}$$

If $\beta > (1-c)(1-\lambda)/\lambda$, then $[\imath I]$ is an unbiased truthful mechanism of $G_{\mathrm{com}}^2$, and self-play with $G_{\mathrm{com}}^2[\imath I]$ converges to the global optimum $\hat{J}(G_{\mathrm{com}}^2) = 2c[1 - \tfrac{3}{4}(1-\lambda)^2]$.

Proof. The following holds:

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}\phi} V_{\beta,1}^* \,\mathrm{d}q_1 &= \frac{\mathrm{d}}{\mathrm{d}\phi} \int (V_1^* - \beta I_1 \,\mathrm{d}\pi_2^* \,\mathrm{d}q_2) \,\mathrm{d}q_1 \\ &= -\lambda(1-\lambda)(1-c) - \beta\int I_1 \,\mathrm{d}\pi_2^* \,\mathrm{d}q_1 \,\mathrm{d}q_2 \frac{\mathrm{d}\sigma_1}{\mathrm{d}\phi} \,\mathrm{d}\sigma_2 \,\mathrm{d}\mathcal{P}^2 \,\mathrm{d}p \\ &= -\lambda(1-\lambda)(1-c) - \beta\lambda\int I_1 \,\mathrm{d}\pi_2^* \,\mathrm{d}q_2 (2z_1 - 1) \,\mathrm{d}z_1 \,\mathrm{d}\sigma_2 \,\mathrm{d}\mathcal{P} \Big|_{s=x_1=1} \\ &= -\lambda(1-\lambda)(1-c) - \beta\lambda^2\int (2z_1 - 1)\,\ell(a_2|z_1) \,\mathrm{d}\pi_2^* \,\mathrm{d}q_2 \,\mathrm{d}z_1 \Big|_{s=x_1=x_2=1} \\ &= -\lambda(1-\lambda)(1-c) - \beta\lambda^2 \sum_{z_1=0}^{1}(2z_1 - 1)\,\ell(1|z_1) \\ &= -\lambda(1-\lambda)(1-c) - \beta\lambda^2(\ell(1|1) - \ell(1|0)) \\ &= -\lambda(1-\lambda)(1-c) + \beta\lambda^2 = \lambda^2\left[\beta - (1-c)\frac{1-\lambda}{\lambda}\right]. \end{aligned} \tag{34}$$

Therefore, if $\beta > (1-c)(1-\lambda)/\lambda$, then $\phi^* = 1$ holds, and $J^* = \hat{J}$. The value of $\hat{J}$ is clear from the proof of Lemma A.1. □

$[\imath I]$ is also known as the peer prediction method (Miller et al., 2005), which is inspired by peer review. This process is illustrated in Fig. 4 (left), and the state-action value functions are listed in Fig. 4 (right).

B COMPLEXITY ANALYSIS

Although the computational complexity of $\beta I_\psi$ per iteration is $O(n^3)$ as written, since it involves the multiplication of $n$-th order square matrices, it can be reduced to $O(n^2)$ by computing $I_\psi = nL_\psi\mathbf{1} - (\mathbf{1}^{\mathsf{T}}L_\psi\mathbf{1})\mathbf{1}$. The spatial complexity is $O(n^2)$, and the sample size is $O(n)$.

C EXPERIMENTAL ENVIRONMENTS

In the experiments, we used two partially observable environments with the same settings as in existing studies (Sukhbaatar et al., 2016; Singh et al., 2019). Fig. 1 shows the environments.

C.1 PREDATOR PREY (PP)

Predator-Prey (PP) is a widely used benchmark environment in MARL (Barrett et al., 2011; Sukhbaatar et al., 2016; Singh et al., 2019). Multiple predators search for a prey placed at a random location in a grid world. The field of view of a predator is limited, so that only a few surrounding cells can be seen. Therefore, for a predator to reach the prey faster, it is necessary to inform the other predators about the prey's location and the locations already searched. The prey's location is thus conveyed through communication among the predators, but predators can also send false messages to keep other predators away from the prey. We experiment with two difficulty levels, PP-3 and PP-5, where the number denotes the number of agents. In PP-3, the field of view is set to 0 in a 5×5 environment; in PP-5, it is set to 1 in a 10×10 environment.

C.2 TRAFFIC JUNCTION (TJ)

Traffic Junction (TJ) is a simplified road-intersection task. $n$ bodies with limited fields of view inform the other bodies of their locations to avoid collisions. We use three difficulty levels. TJ-5 is the task of crossing two straight one-way roads. In TJ-10, there are two lanes, and each body can not only go straight but also turn left or right.
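A quick numpy check (ours) of the $O(n^2)$ identity used in Appendix B: with $\Delta = nE - \mathbf{1}\mathbf{1}^{\mathsf{T}}$, the product $\Delta L_\psi \mathbf{1}$ equals $nL_\psi\mathbf{1} - (\mathbf{1}^{\mathsf{T}}L_\psi\mathbf{1})\mathbf{1}$, which avoids materializing $\Delta$:

```python
import numpy as np

n = 100
L = np.random.randn(n, n)
np.fill_diagonal(L, 0.0)                       # score matrix has zero diagonal
ones = np.ones(n)

Delta = n * np.eye(n) - np.outer(ones, ones)   # graph Laplacian of Eq. (19)
I_cubic = Delta @ L @ ones                     # O(n^3) if Delta is materialized
I_quad = n * (L @ ones) - (ones @ L @ ones) * ones   # O(n^2), Appendix B

assert np.allclose(I_cubic, I_quad)
```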
In TJ-20, the two-lane roads comprise two parallel roads, for a total of four intersections. Each number corresponds to $n$. In the initial state, each vehicle is given a starting point and a destination and is trained to follow a determined path as fast as possible while avoiding collisions. An agent resides in each body and chooses one of two actions, accelerate or brake, at each time step. The key is to keep vehicles from approaching each other, preventing collisions while making good use of multi-agent communication; the signals play a role similar to turn signals and brake lights.

C.3 STARCRAFT: BROOD WAR (SC)

Explore: To complete the exploration task, an agent must come within a specific range (field of view) of the enemy unit; once the agent is within the enemy unit's field of view, it takes no further action. The reward structure is the same as in the PP task, the only difference being that, instead of having to occupy the same cell, an agent only needs to reach the enemy unit's range of vision to obtain a non-negative reward. Medic units, which do not attack enemy units, are used to prevent combat from interfering with the mission objective. The observation of each agent consists of its own (absolute x, absolute y) and the enemy's (relative x, relative y, visible), where visible is a flag indicating whether the enemy is within visual range. If the enemy is not within exploration range, relative x and relative y are zero. The agent chooses from nine actions: the eight basic directions and one stay action.

Combat: Each agent observes its own state (absolute x, absolute y, health points + shield, weapon cooldown, previous action) and, for each enemy, (relative x, relative y, visible, health points + shield, weapon cooldown). Relative x and y are observed only when the enemy is visible, in accordance with the visible flag. All observations are normalized to lie in (0, 1). The agent must choose from 9 + M actions: the 9 basic actions and one attack action for each of the M enemy agents. An attack action is effective only if the enemy is within the agent's view; otherwise it is a no-op. Our combat setup is more difficult and more restrictive than, and different from, those of previous StarCraft studies, and is therefore not directly comparable to them.

In the combat task, we give a negative reward $r_{\mathrm{time}} = -0.01$ at each time step to discourage delaying the detection of the enemy team. While an agent participates in a battle, at each time step it is rewarded based on (i) its normalized health at the previous and current time steps, and (ii) the normalized health, at the previous and current time steps, of the enemies it has attacked so far. The final reward for each agent consists of (i) 3 × the total remaining health of all living enemies as a negative reward if the team loses, and (ii) 5m + 3 × the total remaining health as a positive reward if the agent's team wins. In this task, a group of enemies is randomly initialized in one half of the map, leaving the other half empty, which makes this communication-demanding task even more difficult.

D HYPERPARAMETERS
Summary Of The Paper
The paper proposes a method for multi-agent reinforcement learning in non-cooperative partially observable environments with communication. The proposed method, TSP, adds imaginary rewards using the peer prediction method by evaluating the validity of the information exchanged between agents. TSP has guaranteed convergence to the global optimum and good empirical performance.

Strengths And Weaknesses
Strengths:
- This work is novel in the sense that it is the first attempt to apply mechanism design to multi-agent evolutionary learning.
- TSP's convergence is theoretically guaranteed for arbitrary policies and environments.
- The method seems relatively easy to implement, and numerical experiments show the effectiveness of TSP.

Weaknesses:
- The authors could add a discussion of what causes the poor performance of self-play + curiosity.
- The authors should at least discuss scalability, since the experiments are on simpler and smaller environments.
- The learning curves show that TSP is not as stable as the baselines. How many random seeds are used in the evaluation, and what causes the instability?

Clarity, Quality, Novelty And Reproducibility
Quality: A novel idea that improves self-play through a simple modification. Empirical results show its effectiveness, and the experimental evaluations are extensive.
Clarity: Mostly clear. Claims are well supported. Some details that could be helpful for reproducibility are missing.