| column | type |
|---|---|
| paper_id | string (lengths 9–12) |
| venue | string (139 classes) |
| year | string (7 classes) |
| paper_title | string (lengths 0–181) |
| paper_authors | string (lengths 4–925) |
| paper_abstract | string (lengths 1–5k) |
| paper_keywords | string (lengths 2–436) |
| paper_content | string (lengths 0–100k) |
| review_id | string (lengths 9–12) |
| review_title | string (lengths 0–500) |
| review_rating | string (61 classes) |
| review_text | string (lengths 2–28.3k) |
| review_confidence | string (13 classes) |
| text | string (lengths 402–130k) |
faE-D_0d4M | ICLR.cc/2021/Conference | 2021 | Exploring representation learning for flexible few-shot tasks | ["Mengye Ren", "Eleni Triantafillou", "Kuan-Chieh Wang", "James Lucas", "Jake Snell", "Xaq Pitkow", "Andreas S. Tolias", "Richard Zemel"] | Existing approaches to few-shot learning deal with tasks that have persistent, rigid notions of classes. Typically, the learner observes data only from a fixed number of classes at training time and is asked to generalize to a new set of classes at test time. Two examples from the same class would always be assigned the same labels in any episode. In this work, we consider a realistic setting where the relationship between examples can change from episode to episode depending on the task context, which is not given to the learner. We define two new benchmark datasets for this flexible few-shot scenario, where the tasks are based on images of faces (Celeb-A) and shoes (Zappos50K). While classification baselines learn representations that work well for standard few-shot learning, they suffer in our flexible tasks since the classification criteria shift from training to testing. On the other hand, unsupervised contrastive representation learning with instance-based invariance objectives preserves such flexibility. A combination of instance and class invariance learning objectives is found to perform best on our new flexible few-shot learning benchmarks, and a novel variant of Prototypical Networks is proposed for selecting useful feature dimensions. | ["Few-shot learning", "representation learning"] | ABSTRACTExisting approaches to few-shot learning deal with tasks that have persistent, rigidnotions of classes. Typically, the learner observes data only from a fixed numberof classes at training time and is asked to generalize to a new set of classes attest time. Two examples from the same class would always be assigned the samelabels in any episode. In this work, we consider a realistic setting where the re-lationship between examples can change from episode to episode depending onthe task context, which is not given to the learner. We define two new benchmarkdatasets for this flexible few-shot scenario, where the tasks are based on imagesof faces (Celeb-A) and shoes (Zappos50K). While classification baselines learnrepresentations that work well for standard few-shot learning, they suffer in ourflexible tasks since the classification criteria shift from training to testing. On theother hand, unsupervised contrastive representation learning with instance-basedinvariance objectives preserves such flexibility. A combination of instance andclass invariance learning objectives is found to perform best on our new flexiblefew-shot learning benchmarks, and a novel variant of Prototypical Networks isproposed for selecting useful feature dimensions.1 I NTRODUCTIONFollowing the success of machine learning applied to fully-supervised settings, there has been asurge of interest in machine learning within more realistic, natural learning scenarios. Among these,meta-learning and few-shot learning (Lake et al., 2011) (FSL) have emerged as exciting alternatives.In the few-shot learning setting, the learner is presented with episodes of new learning tasks, wherethe learner must identify patterns in a labeled support set and apply them to make predictions foran unlabeled query set. 
Since its inception, there has been significant progress on FSL benchmarks.However, standard supervised baselines are often shown to perform as well as carefully designedsolutions (Chen et al., 2019; Tian et al., 2020). In this work, we argue that this observation is due inpart to the rigidity in which FSL episodes are designed.In a typical few-shot classification setting, each episode consists of a few examples belonging to oneofNclasses. Across different training episodes, different images are sampled from the classes inthe training set but they will always be given the same class label: an elephant is always an elephant.Most current approaches to FSL attempt to remove context. Existing tasks focus on classificationjudgements, where the query image should be deemed similar to the support image belonging to thesame class, factoring out the role of context such as the setting, pose, and presence of other objects.But many judgements are contextual—they depend on the task at hand and frame-of-reference. Arock is similar to a chair when the aim is to sit, but similar to a club if the aim is to hit. Meta-learning is especially appropriate in contextual judgements, as people are able to adapt readily tonew contexts and make appropriate judgements. So an important question is how to get context intofew-shot classification?In this work, we define a new flexible few-shot learning (FFSL) paradigm. Instead of buildingepisodes from classes, each episode is a binary classification problem that is constructed with somecontext that is hidden from the learner. In this way, the same data point may be given different labelsacross multiple episodes. For example, elephants and tables may belong to the same class if thecontext is “has legs”, but not when the context is “has ears”. Importantly, the learner is not givendirect access to the context and must infer it from the examples present in the episode.1Under review as a conference paper at ICLR 2021Episode 1 (context: a living thing)Episode 2 (context: has handle)Episode 1 (context: color)TestTrainingEpisode 2 (context: has legs)Classes are defined flexiblydepending on the episode context.New images / classes / attributes are introduced.vs.vs.vs.vs.Figure 1: Illustration of the flexible few-shot learning tasks. Instead of having a fixed seman-tic class, each example may belong to different classes flexibly depending on the context of eachepisode. New classes and attributes are introduced in testing to establish new classification criteria.Our FFSL problem is significantly more challenging than the standard setup. In each episode, alearner must infer the correct context and adapt their predictions accordingly. In Section 5.1 westudy generalization issues that occur under supervised representation learning for the flexible few-shot tasks. We show that these approaches easily overfit to the training attributes, even when givendirect access to the attributes that determine the context. We provide additional analysis of a toyproblem to illustrate one possible cause of this failure.In this work, we contribute two new benchmark datasets for this flexible few-shot scenario. The tasksare based on images of faces (Celeb-A) (Liu et al., 2015) and shoes (Zappos50K) (Yu & Grauman,2014). We provide a thorough empirical evaluation of existing methods on these tasks. We find thatsuccessful approaches in the standard FSL setting fall short on the flexible few-shot tasks. 
Further,while supervised classification baselines can learn good representation in the standard FSL setting,they suffer in FFSL. On the other hand, we found a combination of instance and class invarianceobjectives is able to provide improved performance on the flexible few-shot tasks. Moreover, wepresent Mask-ProtoNet which combines prototype classification with feature selection capability,and it performs better compared to standard prototype averaging and linear readout.2 B ACKGROUND : STANDARD FEW-SHOT CLASSIFICATIONThe vast majority of standard few-shot classification datasets are constructed as follows. First, astandard supervised classification dataset is obtained (e.g. MNIST). Some number of the classesare designated as training classes (e.g. digits 0-4), and the dataset is partitioned so that all imagesbelonging to the training classes are placed into the training set. The remaining classes are used forvalidation/testing.At training time, the learner is given episodes ( E) to learn from. The episode is divided into alabelled support set (ES) and an unlabelled query set (EQ). An episode is said to be N-way whenit contains data points from only Nclasses. Additionally, the episode is k-shot when there are klabelled data points from each of the Nclasses in the support set. Given an episode, the learnermust successfully predict the class identity of data points in the query set, given the small amountof labelled information in the support set. Throughout, we use xto denote input data and ythecorresponding class labels for this input.Prototypical networks: A standard prototypical network (Snell et al., 2017) consists of an em-bedding network, g, and a choice of distance function. In each episode, the labelled support dataare used to construct class prototypes, c, by averaging the data points assigned to each class. Thelikelihood of the query predictions is then given by p(y=ijx) = softmax(d(g(x);ci)). Typicallydis the squared Euclidean distance or the cosine dissimilarity function.3 R ELATED WORKMeta-learning and few-shot learning: As one of the earlier studies of FSL, Lake et al. (2011)showed that probabilistic programming can learn about unseen hand-written characters in the Om-niglot dataset using few examples. Koch et al. (2015) showed that a deep Siamese network canachieve similar performance. Vinyals et al. (2016) introduced the more challenging miniImageNetdataset. This lead to the development of many meta-learning methods with deep networks including2Under review as a conference paper at ICLR 2021Celeb-AZappos-50KPositive examplesNegative examplesContext: Male & SmilingContext: Cheekbones & EarringsPositive examplesNegative examplesContext: Women & Lace UpPositive examplesNegative examplesContext: Slippers & Slip-OnPositive examplesNegative examplesFigure 2: Sample FFSL episodes using Celeb-A (left) and Zappos-50K (right) datasets. Positive andnegative examples are sampled according to the context attributes, but the context information is notrevealed to the model at test time.MAML (Finn et al., 2017), Matching Network (Vinyals et al., 2016), and the Prototypical Net-work (Snell et al., 2017). One hypothesis is that to solve the FSL task, a model needs to be flexibleenough to adapt its feature extractor to the unseen test task. Though MAML is very flexible, it is notempirically better than simpler methods such as Prototypical Networks. 
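To make the prototypical-network readout described in Section 2 concrete, here is a minimal sketch assuming a generic embedding network has already produced support and query feature vectors; the function and variable names are illustrative, not the authors' code. Note that the class probability is a softmax over negative distances, p(y = k | x) = softmax_k(-d(g(x), c_k)).

```python
import numpy as np

def class_prototypes(support_emb, support_labels, num_classes):
    # c_k = mean of the embedded support examples assigned to class k
    return np.stack([support_emb[support_labels == k].mean(axis=0)
                     for k in range(num_classes)])

def protonet_logits(query_emb, prototypes):
    # p(y = k | x) = softmax_k(-d(g(x), c_k)) with squared Euclidean distance d;
    # returning negative distances as logits, a softmax over the class axis gives probabilities
    d = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return -d
```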
To strike a balance betweenflexibility and simplicity, TADAM (Oreshkin et al., 2018) proposed adapting the network using theFiLM layer (Perez et al., 2018), a generalization of conditional normalization.In our work, we explore some generalization challenges introduced by the FFSL benchmarks. Ingeneral, there is limited theoretical support for the success of meta-learning. Most existing workfocuses on defining notions of task similarity (Ben-David et al., 2010; Ben-David & Borbely, 2008),building explicit models for meta-learning (Baxter, 2000; Pentina & Lampert, 2014) or on learninggood data representations for generalization across tasks (Maurer, 2009; Bullins et al., 2019; Duet al., 2020). Yet another line of work investigates the theoretical limitations of few-shot learning(Hanneke & Kpotufe, 2020; Lucas et al., 2020). Here we study the generalization failure modes ofsupervised representation learning approaches to the FFSL tasks.The standard few-shot classification task has been extended in various ways. In few-shot semi-supervised learning, the support set is augmented with unlabelled examples to provide access to extrainformation (Ren et al., 2018). This has inspired novel algorithms such as a meta-learning versionof learning from pseudo-labels (Sun et al., 2019). To capture the possibility that a model needsto deal with varying support set size, and task difficulty, Triantafillou et al. (2019) introduced theMeta-Dataset. They found that a hybrid of Prototypical Networks and MAML performed best. Tocapture another aspect of learning in the real world, Finn et al. (2018) investigated the possibility ofhaving ambiguous tasks. In the same spirit, we extend the study of few-shot learning by introducingour FFSL benchmarks, and show that this task calls for novel algorithms.Zero-shot learning: In zero-shot learning (ZSL), a model is asked to recognize classes not presentin the training set, supervised only by some auxiliary description or attribute values (see Wang et al.(2019a) for a survey). Lampert et al. (2014) studied the direct attribute prediction method. Insubsequent sections we also look at pretraining a predictor of attribute values. One motivatingfactor for ZSL is the situation where no training example is available for the new classes, but onlydescriptions of them. The motivation behind our FFSL task can be seen as complementary in thatsometimes a new concept cannot easily be described, but coming up with a small set of representativeexamples is easier, e.g. “shoes that I like”. This suggests a comparison to recommendation systems.Cold start in recommendation systems: Our FFSL tasks share overlap with the cold start prob-lem in recommendation systems (Lam et al., 2008; Gope & Jain, 2017), in which a new user or itemis added to the system with little or no information. As data is collected on the new instance, thesystem must quickly learn to generate good recommendations. The similarity of meta-learning andcold-start recommendation has been explored before (Vartak et al., 2017). However, as new userscan be considered as having their own context to classify items, arguably our flexible few-shot tasksshare greater similarity with cold-start recommendation than standard FSL settings.Compositional learning: Compositional features can be used to construct novel concepts. Thishas been used to improve ZSL where a model not only predicts the class, but also attribute values ofunseen objects (Purushwalkam et al., 2019; Wang et al., 2019b; Yang et al., 2020). 
Another aspectof our FFSL task is the need to reason about the underlying decision criteria. This theme is alsoimportant in the Visual IQ test proposed in Barrett et al. (2018). There a model is asked to infer andextrapolate attribute values to solve Raven’s Progressive Matrices.3Under review as a conference paper at ICLR 20214 FFSL: F LEXIBLE FEW-SHOT LEARNINGIn this section, we define our FFSL paradigm and introduce our two new benchmark datasets. Asin the standard few-shot classification setting (Section 2), our learner is presented with episodes ofdata. However, the episodes are not constrained to contain data points from only Nclasses. Instead,each data point is given either a positive or negative label depending on some criteria that is notknown to the learner.Figure 1 shows some examples of different episodes in our FFSL setting. Each episode contains animage of a pot, but the class identity of the pot varies according to the hidden context. In Episode 1,the pot and the chair are given the same labels whereas in Episode 2 they belong to different classes.Moreover, at test time brand new concepts (e.g. tables) or criteria (e.g. color) may be introduced.Conceptually, each data point x2X represents some combination of hidden attributes z2Z. Andeach context is an injective function, f:Z!f 0;1g, that labels each of the data points dependingon their hidden attributes. In this work, we consider contexts that compute conjuctions of binaryattributes. The set of training contexts and test contexts need not be the same.In order to solve the FFSL task, the learner must correctly find a mapping from the data domainXto the correct labels. One natural way to solve this problem would be to first find a mappingh:X ! Z , that is persistent across episodes, and then estimate the context in each episode.However, we do not limit our exploration to methods that use this approach, since FFSL allowsdifferent partitions of the Zspace for training and testing, and as we will explain in Section 5.1,directly learning to predict Zcan lead to generalization issues.Next we describe how we generate the FFSL datasets using existing image datasets with attributes,Celeb-A faces (Liu et al., 2015) and Zappos-50K shoes (Yu & Grauman, 2014). Sample episodesfrom each dataset are shown in Figure 2.Celeb-A: The Celeb-A dataset contains around 200K images, where we split half to training, anda quarter to validation and testing each. Each image is annotated with 40 binary attributes, detailinghair colour, facial expressions, and other descriptors. We picked 27 salient attributes and split 14for training and 13 for both val and test. There is no overlap between training or test attributes butthey may sometimes belong to a common category, e.g. blonde hair is in training and brown hair isin test. Split details are included in the Appendix B. For each episode, we randomly select one ortwo attributes and look for positive example belonging to these attributes simultaneously. And wealso sample an equal number of negative examples that don’t belong to one or both of the selectedattributes. This will construct a support set of positive and negative samples, and then we repeat thesame process for the corresponding query set as well.Zappos-50K: The Zappos-50K dataset contains just under 50K images of shoes annotated withattribute values, out of which we kept a total of 76 that we considered salient. We construct an image-level split that assigns 80% of the images to the training set, 10% to the validation and 10% to thetest set. 
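Before the attribute split for Zappos-50K is described in the next paragraph, the Celeb-A episode construction above can be summarized with a short sketch. This is an illustrative sampler under stated assumptions (a precomputed binary attribute matrix, a NumPy random generator, and enough positives and negatives for each context); the Zappos sampler described next differs in that negatives satisfy neither clause of the conjunction.

```python
import numpy as np

def sample_ffsl_episode(attrs, attr_pool, n_shot, n_query, rng):
    """attrs: [num_images, num_attrs] binary matrix; attr_pool: attribute indices
    available in the current split. Returns the hidden context and (indices, labels)
    for the support and query sets of one binary FFSL episode (Celeb-A style)."""
    context = rng.choice(attr_pool, size=rng.integers(1, 3), replace=False)  # one or two attributes
    positive = attrs[:, context].all(axis=1)            # positives satisfy the full conjunction
    pos = rng.choice(np.flatnonzero(positive), n_shot + n_query, replace=False)
    neg = rng.choice(np.flatnonzero(~positive), n_shot + n_query, replace=False)  # miss at least one
    support = (np.r_[pos[:n_shot], neg[:n_shot]],
               np.r_[np.ones(n_shot, int), np.zeros(n_shot, int)])
    query = (np.r_[pos[n_shot:], neg[n_shot:]],
             np.r_[np.ones(n_query, int), np.zeros(n_query, int)])
    return context, support, query
```

Test episodes are sampled the same way from the held-out attribute pool, so the classification criterion used at test time is never seen during training.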
We additionally split the set of attribute values into two disjoint sets that are used to form thetraining and held-out FFSL tasks, respectively. Sampling an episode from a particular split involvessampling a conjunction of attributes from that split (e.g. ‘gender = boy’ and ‘material = leather’),and then sampling positive and negative examples from the relevant example split. The positiveexamples obey both clauses of the conjunction and, as a design choice, the negative examples do notobey either clause. The sampled positive and negative examples are then divided into a support andquery set for the episode.5 E XPLORING MODELS FOR FLEXIBLE FEW-SHOT LEARNINGIn this section, we explore different learning models to solve FFSL tasks. Overall, we separate learn-ing into two stages: representation learning andfew-shot learning . In the representation learningstage, a network backbone learns task relevant features over many examples. And in the FSL stage,an episode with a few examples is presented, and the learner utilizes the base backbone network andperforms additional learning on top.For typical meta-learning based methods, these two stages are essentially the same—training per-forms episodic learning just like testing. Aside from meta-learning, simple supervised pretraining4Under review as a conference paper at ICLR 2021can also learn good representation for standard few-shot classification by using a linear classifierreadout at test time (Chen et al., 2019; Tian et al., 2020).5.1 G ENERALIZATION ISSUES WITH SUPERVISED REPRESENTATION LEARNING707580859095TrainTestAcc. (%)Celeb-A 20-shot FlexibleProtoNetSASA*Figure 3: FFSL 20-shot classification. Bothsupervised attribute classification and stan-dard FSL do not generalize well.In the FFSL task, any single example can have sev-eral positive attributes and the context used to clas-sify them varies across training and test. This sug-gests that useful representations must be more gen-eral than those needed for standard FSL. To investi-gate this, we first conducted an initial experiment onthe Celeb-A benchmark. We adopted a standard pro-totypical network ( ProtoNet ) with features learnedthrough the episodic query loss as our meta-learningapproach. We also explored pretraining-based ap-proaches. We trained a classifier to predict the 14binary training attributes from the input images tolearn a representation. At test time we simply used alinear classifier to solve each episode. This approachis denoted as SA(Supervised Attributes ), analogousto the setting in Chen et al. (2019). We also trained an oracle classifier ( SA*) on all 40 attributes inthe dataset, including both training and testing attributes. Since the tasks are constructed using at-tribute information, the performance of SA* should be considered an upper bound for this problem.Results are shown in Figure 3. Both ProtoNet and SA perform well on the training tasks sincethey are exposed to the label information from the training attributes; however, the test performanceshows a significant generalization gap. In order to succeed in the training objective, both ProtoNetand SA essentially learn to ignore other features that are potentially useful for testing as classificationcriteria. By contrast, SA* is able to perform similarly on both training and testing, since the learningdoes not depend on a particular split of the attributes. 
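For reference, a minimal sketch of the SA pretraining objective just described: a per-attribute sigmoid cross-entropy over the binary training attributes (14 for Celeb-A). The function below is an illustration under that assumption, not the authors' implementation; SA* uses the same loss over all 40 attributes.

```python
import numpy as np

def attribute_bce_loss(logits, targets):
    # logits: [B, A] raw scores for A binary attributes; targets: [B, A] in {0, 1}
    # multi-label binary cross-entropy, averaged over the batch and the attributes
    log_p = -np.logaddexp(0.0, -logits)      # log sigmoid(logits)
    log_not_p = -np.logaddexp(0.0, logits)   # log(1 - sigmoid(logits))
    return -(targets * log_p + (1 - targets) * log_not_p).mean()
```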
Initial experiments therefore suggest thatsupervised learning alone will likely not be sufficient for our FFSL task.In Appendix A we study a toy FFSL problem which further illustrates these generalization issues.We explore training a prototypical network on data from a linear generative model, where eachepisode presents significant ambiguity in resolving the correct context. We show that in this setting,unlike in standarad FSC tasks, the prototypical network is forced to discard information on the testattributes in order to solve the training tasks effectively, and thus fails to generalize.5.2 U NSUPERVISED CONSTRASTIVE REPRESENTATION LEARNINGLearning good representation for downstream applications has always been a sought-after purposeof deep learning. Hinton & Salakhutdinov (2006) proposed to pretrain subsequent layers of autoen-coders for representation learning, and showed good performance for dimensionality reduction, anddownstream classification. Following the development of variational autoencoders (V AEs) (Kingma& Welling, 2013), many extensions have been proposed to encourage “disentangled” representationlearning by reweighing terms in the evidence lower bound (Higgins et al., 2017; Kim & Mnih, 2018).In contrast to traditional generative modeling where the objective is grounded on uncovering the datadistribution, self-supervised learning recently emerged as a promising approach for representationlearning. These include learning to predict rotations (Kolesnikov et al., 2019), maximize mutualinformation between the input and representation (Belghazi et al., 2018; van den Oord et al., 2018),and contrastive learning approaches (Chen et al., 2020; van den Oord et al., 2018; Tian et al., 2019;He et al., 2019; Xiong et al., 2020). They have shown promise in learning semantic aware represen-tations, almost closing the gap with supervised representation training on the challenging ImageNetbenchmark. We follow SIMCLR (Chen et al., 2020) as a representative framework for unsupervisedcontrastive learning, shown in Figure 4-A. We chose SIMCLR because of its empirical success.Concretely, it sends a pair of augmented versions of the same image to the input and obtains ahidden representation. The hidden representation is further passed into a decoder, producing unit-norm vectors. The network is trained end-to-end to minimize the InfoNCE loss (van den Oord et al.,5Under review as a conference paper at ICLR 2021BackboneContrastive LearningA. PretrainC. TestBackboneMask-ProtoNetFeaturesFeature maskxPrototypeFeature mask updates for M iterations to minimize support loss(Unsupervised)PrototypeClassificationM steps BackboneB. Finetune(Supervised)AttributeLearningFigure 4: Our proposed method for FFSL. A: we first pretrain the network with unsupervisedcontrastive objective to learn general features. B:Then we finetune the network to classify the set oftraining attributes. Both stages employ a different decoder header so that the representation remainsgeneral. C:Finally at test time we use Mask-ProtoNet, a variant of ProtoNet that infers featureselection iteratively.2018), which distinguishes the positive sample from the same pair from the rest by encouragingfeature dot product between the positive pair to gain a higher value than negative pairs.Finetuning with supervised attribute classification We can combine the merits of unsupervisedrepresentation learning and supervised attribute classification (SA). 
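Before the finetuning details continue, the InfoNCE (NT-Xent) objective used in the SIMCLR pretraining stage described above can be sketched as follows. This is a minimal illustration assuming two projected views of the same batch; the temperature value and shapes are assumptions, not the authors' code.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """z1, z2: [B, D] projections of two augmented views of the same images.
    Each sample's positive is its counterpart view; all other samples act as negatives."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # unit-norm projections
    sim = z @ z.T / temperature                            # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                         # exclude self-similarity
    n = z1.shape[0]
    pos = np.r_[np.arange(n, 2 * n), np.arange(n)]         # index of each sample's positive
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```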
To prevent SA from overridingthe unsupervised features, we add another classifier decoder MLP before the sigmoid classificationlayer (see Figure 4-B). Empirically, finetuning on SA is found to be beneficial, but early stoppingis needed to prevent optimizing too much towards training attributes, which would cause significantgeneralization issues (Section 5.1).During test time, we directly use the representation before both decoders to perform FSL. In the nextsection, we introduce Mask-ProtoNet, a novel method for FFSL.5.3 F EW-SHOT LEARNING WITH MASK-PROTO NETAlgorithm 1 Mask-ProtoNetRequire: Net,fxSi;ySigNi=1,fxQgMj=1// An embedding network, Nsupport,MqueryEnsure:f^yQjgMj=1// Network representation h2RD1:hSi Net(xSi)8i;hQj Net(xQj)8j;2:w 02RD;3:fort= 1...M+ 1do4: ~w (w)5: p[k] Pi(hSi~w) 1[ySi=k]Pi1[ySi=k]6: ^ySi;k softmax(d(hSi~w;p[k]))8i;7:l 1NPiCE(^ySi;ySi) +k~wk18: w wrwl9:end for10:^yQj;k softmax(d(hQj~w;p[k]))8j;11:return ^yQjOnce the representation is learned, a common ap-proach for FSL is to directly learn a linear clas-sifier on top of the representation, or average theprototypes from the support set. Prototype aver-aging, however, will consider all feature dimen-sions, including the ones that are not relevant tothe current episode. A linear classifier, on theother hand, learns a weight coefficient for eachfeature dimension, thus performing some levelof feature selection. Still, the weights need tobe properly regularized to encourage high-fidelityselection. A popular way is to apply an L1 regu-larizer on the weights to encourage sparsity. Thelearning of a classifier is essentially done at thesame time as the selection of feature dimensions.In this paper, we propose Mask-ProtoNet as an al-ternative for few-shot learning that separates theprocedure of classifier learning and feature selec-tion: we use prototypes for classification and ad-ditionally learn a soft binary mask for feature selection.Just like a linear classifier, the Mask-ProtoNet learns a weight coefficient for each dimension. Thisweight is then passed through a sigmoid function to act as a soft binary mask, which is learnedfor a small number of iterations before termination. Finally classification is performed based onthe masked prototypes. Conceptually, the mask will disable unused features and instead focus ondimensions that are activated in the current episode. The mask is updated to minimize the innerloop loss, which is a combination of support set cross entropy and an L1 sparse regularizer. The fullalgorithm is described in Algorithm 1 and Figure 4-C.6Under review as a conference paper at ICLR 20217075808590FFSESAIDUU-SASA*Acc. (%)Celeb-ALRLR +L1ProtoMaskProto80859095FFSESAUU-SASA*Acc. (%)Zappos-50KLRLR +L1ProtoMaskProtoFigure 5: 20-shot FFSL results comparing different representation learning and FSL stagecombinations. FFSE : Meta-learning directly using the flexible few-shot episodes. SA: Supervisedattribute classification. ID: Auxiliary representation learning (for Celeb-A this is face ID classifi-cation). U: Unsupervised contrastive learning. U-SA : Our proposed U pretraining followed by SAfinetuning. SA*: Supervised attribute binary classification on allattributes, which serves as an ora-cle (striped bars). A set of few-shot learners are evaluated: 1) logistic regression ( LR), 2) LR with L1regularization ( LR +L1 ), 3) ProtoNet ( Proto ), and 4) the proposed Mask-ProtoNet ( MaskProto ).U-SA with Mask-ProtoNet achieves the best performance in both benchmarks. 
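A minimal sketch of the Mask-ProtoNet inner loop of Algorithm 1, written with PyTorch autograd for the mask update. The step size, number of inner steps, and L1 weight below are illustrative assumptions rather than the paper's exact settings; as in the paper, classification uses cosine similarity to the masked prototypes.

```python
import torch
import torch.nn.functional as F

def mask_protonet(h_support, y_support, h_query, n_steps=20, lr=1.0, l1_weight=1e-3):
    """h_support: [N, D] support embeddings, y_support: [N] long labels in {0, 1},
    h_query: [M, D] query embeddings. Learns a soft feature-selection mask on the
    support set and returns query logits (cosine similarity to masked prototypes)."""
    w = torch.zeros(h_support.shape[1], requires_grad=True)   # mask logits, initialized to 0
    for _ in range(n_steps):
        m = torch.sigmoid(w)                                   # soft binary feature mask
        hs = h_support * m                                     # masked support features
        protos = torch.stack([hs[y_support == k].mean(0) for k in (0, 1)])
        logits = F.normalize(hs, dim=1) @ F.normalize(protos, dim=1).T
        loss = F.cross_entropy(logits, y_support) + l1_weight * m.abs().sum()
        grad, = torch.autograd.grad(loss, w)
        with torch.no_grad():
            w -= lr * grad                                     # gradient step on the mask logits
    m = torch.sigmoid(w).detach()
    protos = torch.stack([(h_support * m)[y_support == k].mean(0) for k in (0, 1)])
    return F.normalize(h_query * m, dim=1) @ F.normalize(protos, dim=1).T
```

At test time the backbone is frozen; only the D-dimensional mask logits are updated on the support set before the query examples are classified.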
Chance is 50%.6 E XPERIMENTSIn this section we present our experimental evaluations with various representation learning andfew-shot learning methods for our FFSL benchmarks. Representation learning methods include:1)FFSE : Meta-learning through Flexible Few-ShotEpisodes; 2) SA:Supervised Attribute clas-sification on training attributes only; 3) ID: Auxiliary representation learning task, for Celeb-Athis is the face IDentity classification; 4) U:Unsupervised representation learning (SIMCLR); 5)U-SA :Unsupervised representation learning followed by Supervised Attribute classification fine-tuning. This approach is described in Figure 4-A and B; 6) SA*:Supervised Attribute classificationon all attributes, which serves as an oracle.We also compared the following methods for few-shot learning: 1) LR: Plain logistic regressionon the hidden representation; 2) LR +L1 : LR with L1 regularization on the weights; 3) Proto :Classification with prototypes (Snell et al., 2017); 4) MaskProto : Prototypes with additional maskthat is learned in an inner loop (as proposed in this paper, described in Algorithm 1).Implementation details: Images were resized to 84 843. We used ResNet-12 (He et al.,2016; Oreshkin et al., 2018) with 64, 128, 256, 512 channels in each residual module. The decodernetwork for contrastive learning has two 512-d layers and outputs 128-d vectors. The classifierfinetuning decoder network has two 512-d layers and outputs a 512-d vector. We trained SIMCLRusing random crop areas of 0.08 – 1.0, color augmentation 0.5, and InfoNCE temperature 0.5, for1000 epochs using LARS (You et al., 2017) and cosine schedule with batch size 512 and peaklearning rate 2.0. SA finetuning lasts for another 2k steps with batch size 128 and learning rate0.1 for the decoder and 0.01 for the backbone and momentum 0.9. ID, SA and SA* use batch size256 with a learning rate 0.1 for 30k steps, with 0.1x learning rate decay at 20k and 25k steps, andmomentum 0.9. Features are normalized before sending to LR classifiers. We use cosine similarityfor ProtoNet and Mask-ProtoNet.6.1 R ESULTS AND DISCUSSIONMain results: Figure 5 shows our main results on Celeb-A and Zappos-50K with 20-shot FFSLepisodes. On both benchmarks, training on flexible few-shot episodes based on training attributes(FFSE) performed worst. This aligns with our observation of the generalization issue explained inSection 5.1. Similarly, supervised attribute (SA) learning faced the same challenge. An auxiliarytask of class identification (ID) was not helpful for representation learning either. Interestingly,unsupervised representation learning (U) attained relatively better test performance, suggesting thatthe training objective in contrastive learning preserves more general features—not just shown forsemantic classification tasks in prior literature, but also for the flexible class definitions present here.Surprisingly, finetuning slightly on SIMCLR pretrained networks (U-SA) contributed further gainsin performance. We also tried to finetune directly on FFSL episodes using meta-learning approachesbut this did not perform well — one possible explanation is given in our toy example (Appendix A).We conclude that meta-learning may not help learn higher-level features about the FFSL task itself.Lastly, we confirmed that U-SA closes the generalization gap between SA and SA*, and obtained7Under review as a conference paper at ICLR 202165758595251020Acc. (%)# shotsA. Number of shotsIDSAUU-SASA*GT-LR70758085251020Acc. (%)# shotsB. 
FSL methodLRLR +L1ProtoMaskProto80818283848501234Acc. (%)# decoder hidden layersC. Decoder depth808182838485012345678910Acc. (%)Finetune steps (K)D. Finetune stepsFigure 6: Additional results on the Celeb-A dataset. A: How many examples are needed forFFSL? We provide an oracle performance where the feature representation is directly the binaryground-truth attribute vector ( GT-LR ) and we train a logistic regression classifier on top. It suggeststhat there is natural ambiguity in the task and more examples than standard FSL are needed. B:Comparison of few-shot learning methods on different number of shots. Mask-ProtoNet worksbetter with an increasing number of shots. C: Effect of the number of decoder layers duringfinetuning. Adding a decoder keeps the representation general and not overfitting to the trainingattributes. D: Effect of the number of finetuning steps. Small amount of finetuning on the trainingattribute is beneficial, but eventually the accuracy goes down.matching performance on Zappos-50K. Lastly, we confirmed that U-SA closes the generalizationgap between SA and SA*. These results were consistent across our benchmarks. Therefore, U-SAwas the most effective representation learning algorithm we explored for FFSL. Note that this resultcontrasts with standard FSL literature, where unsupervised representation learning still lags behindsupervised pretraining (Medina et al., 2020). Moreover, MaskProto is often the best across differentFSL approaches, consistently higher than Proto, which does not reason about feature selection.Number of shots: Since we have a flexible definition of classes in each episode, it could be thecase that the support examples are ambiguous. For example, by presenting both an elephant and acat in the support set, it is unclear whether the positive set is about animals or mammals. Figure 6-Ashows several approaches evaluated using Mask-ProtoNet with varying number of support examplesper class in Celeb-A FFSL episodes. In addition to the SA* oracle, we provided another oracle GT-LR, where the representations are the binary attribute values, and readout is done by solving a linearclassifier. GT-LR gradually approached 100% accuracy as the number of shots approached 20. Thisdemonstrates that FFSL tasks potentially require more support examples to resolve ambiguity. Againhere, U-SA consistently outperformed U, SA, and ID baselines across different number of shots.Figure 6-B plots the performance of different FSL methods, using a common U-SA representation.Mask-ProtoNet performs better with more support examples, but worse with fewer (e.g. 2), sinceminimizing the support loss of only two examples can lead to over-confidence.Effect of decoder depth: Figure 6-C studies the effect of a decoder for attribute classificationfinetuning. Adding an MLP decoder was found to be beneficial for unsupervised representationlearning in prior literature (Chen et al., 2020). Here we found that adding a decoder is also importantfor SA finetuning, contributing to over 2% improvement.Effect of SA finetuning: Figure 6-D plots the validation accuracy on FFSL tasks during finetuningfor a total of 10k steps. It is found that the accuracy grows from 80% and peaks at 2k steps with over84%, and then drops. 
This suggests that a little finetuning on supervised attributes is beneficial, but prolonged finetuning eventually makes the representation less generalizable.

7 CONCLUSION

The notion of a class often changes depending on the context, yet existing few-shot classification relies on a fixed semantic class definition. In this paper, we propose a flexible few-shot learning paradigm where the classification criteria change based on the episode context. We proposed benchmarks using the Celeb-A and Zappos-50K datasets to create flexible class definitions from existing attribute labels. We explored various ways to perform representation learning for this new task. Unlike in standard FSL, we found that supervised representation learning generalizes poorly on the test set, due to the partitioning of training and test attributes. Unsupervised contrastive learning, on the other hand, preserved more generalizable features, and further finetuning on supervised attribute classification yielded the best results. Finally, Mask-ProtoNet, a variant of ProtoNet, is proposed and delivers better readout performance. The development of the FFSL benchmarks will hopefully encourage more future research investigating the generalization ability of meta-learning methods.

| hsXSg5k7xyX | Interesting idea with a few significant omissions | 5: Marginally below acceptance threshold | The authors propose a new view on few-shot classification. Instead of having a fixed set of classes split into base and novel subsets, they propose to use image attributes to construct classes on the fly during training and testing. That is, in every episode, a class is constructed by randomly sampling a pair of attributes (such as living and has_legs) and taking the images which have these attributes (i.e., person and horse) as positives, and the ones that don't have at least one of them (such as chair and fish) as negatives. This ensures that the learned representation can't overfit to a particular category definition and has to be truly generalizable. They argue that this setting corresponds better to the real world, where the category of an object can strongly depend on the context.
In an experimental evaluation on CelebA and Zappos they demonstrate that pretraining a representation on the attribute classification task and finetuning on the proposed attribute-based few-shot benchmark provides a strong baseline, compared to directly training for few-shot classification. They also demonstrate that training with a contrastive loss objective first leads to further improvements, presumably because the contrastive loss helps to learn generalizable features. Finally, they propose an extension of Prototypical Networks with a learnable feature selection module which outperforms a simple linear classifier baseline and vanilla Prototypical Networks in most settings.
The paper is very well written and is easy to follow. The idea of using attributes to define a more challenging setting for evaluating few-shot learning methods is interesting and novel to the best of my knowledge. Using attributes to learn more generalizable features has been explored before, however (see Tokmakov et al., ICCV'19). The authors seem to be unaware of that work, which also proposed a very similar approach of using an auxiliary attribute classification loss to learn a more generalizable representation for few-shot learning. Moreover, that paper provided attribute annotations for a subset of the ImageNet dataset. The authors should discuss their relationship to Tokmakov et al., and report an evaluation on ImageNet using their attributes, which would be a lot more convincing compared to the two toy datasets currently used in the paper.
I also have a few other concerns regarding the evaluation:
1. Why are the episodes sampled differently for the 2 datasets? Either a strong argument has to be provided, or the settings should be unified.
2. Why are you using a cosine classifier for the prototypical networks, but not for the logistic regression baseline? Chen et al. report significantly stronger performance of the cosine classifier compared to the vanilla one. It has to be added to all the experiments (a minimal sketch of this classifier variant follows the list).
3. Another observation in Chen et al. is that the depth of the network has a major effect on the performance of few-shot learning methods. The current ResNet-12 backbone used in all the experiments is not deep enough to make any strong conclusions about the relative performance of the methods. At the very least, results for ResNet-18 and -34 need to be added, and, ideally, also for ResNet-50.
4. In Section 6.1 you are claiming that U-SA closes the generalization gap between SA and SA* on both datasets which is not true. Please correct this statement.
5. Some details of the evaluation protocol seem to be missing. For instance, how many episodes are sampled during evaluation?
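For concreteness, a minimal sketch of the cosine classifier referred to in point 2: in the style of Chen et al., the readout scores each class by a scaled cosine similarity between the L2-normalized feature and a class weight vector. This is an illustration of the reviewer's suggestion, not code from the paper; the scale value is an assumption.

```python
import numpy as np

def cosine_classifier_logits(features, weights, scale=10.0):
    # features: [B, D] embeddings; weights: [C, D] class weight vectors
    # logits are scaled cosine similarities rather than a plain dot product
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return scale * f @ w.T
```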
Overall, this paper proposes an interesting idea for a new few-shot learning setting but falls short both in acknowledging prior work and in providing a convincing experimental evaluation. If the authors address the concerns about the evaluation protocol listed above and additionally report results on ImageNet using the attributes from Tokmakov et al., showing that their conclusions still hold, I will consider increasing my score. | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|>
<|im_start|>user
### Paper Title
Exploring representation learning for flexible few-shot tasks
### Paper Abstract
Existing approaches to few-shot learning deal with tasks that have persistent, rigid notions of classes. Typically, the learner observes data only from a fixed number of classes at training time and is asked to generalize to a new set of classes at test time. Two examples from the same class would always be assigned the same labels in any episode. In this work, we consider a realistic setting where the relationship between examples can change from episode to episode depending on the task context, which is not given to the learner. We define two new benchmark datasets for this flexible few-shot scenario, where the tasks are based on images of faces (Celeb-A) and shoes (Zappos50K). While classification baselines learn representations that work well for standard few-shot learning, they suffer in our flexible tasks since the classification criteria shift from training to testing. On the other hand, unsupervised contrastive representation learning with instance-based invariance objectives preserves such flexibility. A combination of instance and class invariance learning objectives is found to perform best on our new flexible few-shot learning benchmarks, and a novel variant of Prototypical Networks is proposed for selecting useful feature dimensions.
### Paper Keywords
["Few-shot learning", "representation learning"]
### Paper Content
ABSTRACTExisting approaches to few-shot learning deal with tasks that have persistent, rigidnotions of classes. Typically, the learner observes data only from a fixed numberof classes at training time and is asked to generalize to a new set of classes attest time. Two examples from the same class would always be assigned the samelabels in any episode. In this work, we consider a realistic setting where the re-lationship between examples can change from episode to episode depending onthe task context, which is not given to the learner. We define two new benchmarkdatasets for this flexible few-shot scenario, where the tasks are based on imagesof faces (Celeb-A) and shoes (Zappos50K). While classification baselines learnrepresentations that work well for standard few-shot learning, they suffer in ourflexible tasks since the classification criteria shift from training to testing. On theother hand, unsupervised contrastive representation learning with instance-basedinvariance objectives preserves such flexibility. A combination of instance andclass invariance learning objectives is found to perform best on our new flexiblefew-shot learning benchmarks, and a novel variant of Prototypical Networks isproposed for selecting useful feature dimensions.1 I NTRODUCTIONFollowing the success of machine learning applied to fully-supervised settings, there has been asurge of interest in machine learning within more realistic, natural learning scenarios. Among these,meta-learning and few-shot learning (Lake et al., 2011) (FSL) have emerged as exciting alternatives.In the few-shot learning setting, the learner is presented with episodes of new learning tasks, wherethe learner must identify patterns in a labeled support set and apply them to make predictions foran unlabeled query set. Since its inception, there has been significant progress on FSL benchmarks.However, standard supervised baselines are often shown to perform as well as carefully designedsolutions (Chen et al., 2019; Tian et al., 2020). In this work, we argue that this observation is due inpart to the rigidity in which FSL episodes are designed.In a typical few-shot classification setting, each episode consists of a few examples belonging to oneofNclasses. Across different training episodes, different images are sampled from the classes inthe training set but they will always be given the same class label: an elephant is always an elephant.Most current approaches to FSL attempt to remove context. Existing tasks focus on classificationjudgements, where the query image should be deemed similar to the support image belonging to thesame class, factoring out the role of context such as the setting, pose, and presence of other objects.But many judgements are contextual—they depend on the task at hand and frame-of-reference. Arock is similar to a chair when the aim is to sit, but similar to a club if the aim is to hit. Meta-learning is especially appropriate in contextual judgements, as people are able to adapt readily tonew contexts and make appropriate judgements. So an important question is how to get context intofew-shot classification?In this work, we define a new flexible few-shot learning (FFSL) paradigm. Instead of buildingepisodes from classes, each episode is a binary classification problem that is constructed with somecontext that is hidden from the learner. In this way, the same data point may be given different labelsacross multiple episodes. 
For example, elephants and tables may belong to the same class if thecontext is “has legs”, but not when the context is “has ears”. Importantly, the learner is not givendirect access to the context and must infer it from the examples present in the episode.1Under review as a conference paper at ICLR 2021Episode 1 (context: a living thing)Episode 2 (context: has handle)Episode 1 (context: color)TestTrainingEpisode 2 (context: has legs)Classes are defined flexiblydepending on the episode context.New images / classes / attributes are introduced.vs.vs.vs.vs.Figure 1: Illustration of the flexible few-shot learning tasks. Instead of having a fixed seman-tic class, each example may belong to different classes flexibly depending on the context of eachepisode. New classes and attributes are introduced in testing to establish new classification criteria.Our FFSL problem is significantly more challenging than the standard setup. In each episode, alearner must infer the correct context and adapt their predictions accordingly. In Section 5.1 westudy generalization issues that occur under supervised representation learning for the flexible few-shot tasks. We show that these approaches easily overfit to the training attributes, even when givendirect access to the attributes that determine the context. We provide additional analysis of a toyproblem to illustrate one possible cause of this failure.In this work, we contribute two new benchmark datasets for this flexible few-shot scenario. The tasksare based on images of faces (Celeb-A) (Liu et al., 2015) and shoes (Zappos50K) (Yu & Grauman,2014). We provide a thorough empirical evaluation of existing methods on these tasks. We find thatsuccessful approaches in the standard FSL setting fall short on the flexible few-shot tasks. Further,while supervised classification baselines can learn good representation in the standard FSL setting,they suffer in FFSL. On the other hand, we found a combination of instance and class invarianceobjectives is able to provide improved performance on the flexible few-shot tasks. Moreover, wepresent Mask-ProtoNet which combines prototype classification with feature selection capability,and it performs better compared to standard prototype averaging and linear readout.2 B ACKGROUND : STANDARD FEW-SHOT CLASSIFICATIONThe vast majority of standard few-shot classification datasets are constructed as follows. First, astandard supervised classification dataset is obtained (e.g. MNIST). Some number of the classesare designated as training classes (e.g. digits 0-4), and the dataset is partitioned so that all imagesbelonging to the training classes are placed into the training set. The remaining classes are used forvalidation/testing.At training time, the learner is given episodes ( E) to learn from. The episode is divided into alabelled support set (ES) and an unlabelled query set (EQ). An episode is said to be N-way whenit contains data points from only Nclasses. Additionally, the episode is k-shot when there are klabelled data points from each of the Nclasses in the support set. Given an episode, the learnermust successfully predict the class identity of data points in the query set, given the small amountof labelled information in the support set. Throughout, we use xto denote input data and ythecorresponding class labels for this input.Prototypical networks: A standard prototypical network (Snell et al., 2017) consists of an em-bedding network, g, and a choice of distance function. 
In each episode, the labelled support dataare used to construct class prototypes, c, by averaging the data points assigned to each class. Thelikelihood of the query predictions is then given by p(y=ijx) = softmax(d(g(x);ci)). Typicallydis the squared Euclidean distance or the cosine dissimilarity function.3 R ELATED WORKMeta-learning and few-shot learning: As one of the earlier studies of FSL, Lake et al. (2011)showed that probabilistic programming can learn about unseen hand-written characters in the Om-niglot dataset using few examples. Koch et al. (2015) showed that a deep Siamese network canachieve similar performance. Vinyals et al. (2016) introduced the more challenging miniImageNetdataset. This lead to the development of many meta-learning methods with deep networks including2Under review as a conference paper at ICLR 2021Celeb-AZappos-50KPositive examplesNegative examplesContext: Male & SmilingContext: Cheekbones & EarringsPositive examplesNegative examplesContext: Women & Lace UpPositive examplesNegative examplesContext: Slippers & Slip-OnPositive examplesNegative examplesFigure 2: Sample FFSL episodes using Celeb-A (left) and Zappos-50K (right) datasets. Positive andnegative examples are sampled according to the context attributes, but the context information is notrevealed to the model at test time.MAML (Finn et al., 2017), Matching Network (Vinyals et al., 2016), and the Prototypical Net-work (Snell et al., 2017). One hypothesis is that to solve the FSL task, a model needs to be flexibleenough to adapt its feature extractor to the unseen test task. Though MAML is very flexible, it is notempirically better than simpler methods such as Prototypical Networks. To strike a balance betweenflexibility and simplicity, TADAM (Oreshkin et al., 2018) proposed adapting the network using theFiLM layer (Perez et al., 2018), a generalization of conditional normalization.In our work, we explore some generalization challenges introduced by the FFSL benchmarks. Ingeneral, there is limited theoretical support for the success of meta-learning. Most existing workfocuses on defining notions of task similarity (Ben-David et al., 2010; Ben-David & Borbely, 2008),building explicit models for meta-learning (Baxter, 2000; Pentina & Lampert, 2014) or on learninggood data representations for generalization across tasks (Maurer, 2009; Bullins et al., 2019; Duet al., 2020). Yet another line of work investigates the theoretical limitations of few-shot learning(Hanneke & Kpotufe, 2020; Lucas et al., 2020). Here we study the generalization failure modes ofsupervised representation learning approaches to the FFSL tasks.The standard few-shot classification task has been extended in various ways. In few-shot semi-supervised learning, the support set is augmented with unlabelled examples to provide access to extrainformation (Ren et al., 2018). This has inspired novel algorithms such as a meta-learning versionof learning from pseudo-labels (Sun et al., 2019). To capture the possibility that a model needsto deal with varying support set size, and task difficulty, Triantafillou et al. (2019) introduced theMeta-Dataset. They found that a hybrid of Prototypical Networks and MAML performed best. Tocapture another aspect of learning in the real world, Finn et al. (2018) investigated the possibility ofhaving ambiguous tasks. 
In the same spirit, we extend the study of few-shot learning by introducingour FFSL benchmarks, and show that this task calls for novel algorithms.Zero-shot learning: In zero-shot learning (ZSL), a model is asked to recognize classes not presentin the training set, supervised only by some auxiliary description or attribute values (see Wang et al.(2019a) for a survey). Lampert et al. (2014) studied the direct attribute prediction method. Insubsequent sections we also look at pretraining a predictor of attribute values. One motivatingfactor for ZSL is the situation where no training example is available for the new classes, but onlydescriptions of them. The motivation behind our FFSL task can be seen as complementary in thatsometimes a new concept cannot easily be described, but coming up with a small set of representativeexamples is easier, e.g. “shoes that I like”. This suggests a comparison to recommendation systems.Cold start in recommendation systems: Our FFSL tasks share overlap with the cold start prob-lem in recommendation systems (Lam et al., 2008; Gope & Jain, 2017), in which a new user or itemis added to the system with little or no information. As data is collected on the new instance, thesystem must quickly learn to generate good recommendations. The similarity of meta-learning andcold-start recommendation has been explored before (Vartak et al., 2017). However, as new userscan be considered as having their own context to classify items, arguably our flexible few-shot tasksshare greater similarity with cold-start recommendation than standard FSL settings.Compositional learning: Compositional features can be used to construct novel concepts. Thishas been used to improve ZSL where a model not only predicts the class, but also attribute values ofunseen objects (Purushwalkam et al., 2019; Wang et al., 2019b; Yang et al., 2020). Another aspectof our FFSL task is the need to reason about the underlying decision criteria. This theme is alsoimportant in the Visual IQ test proposed in Barrett et al. (2018). There a model is asked to infer andextrapolate attribute values to solve Raven’s Progressive Matrices.3Under review as a conference paper at ICLR 20214 FFSL: F LEXIBLE FEW-SHOT LEARNINGIn this section, we define our FFSL paradigm and introduce our two new benchmark datasets. Asin the standard few-shot classification setting (Section 2), our learner is presented with episodes ofdata. However, the episodes are not constrained to contain data points from only Nclasses. Instead,each data point is given either a positive or negative label depending on some criteria that is notknown to the learner.Figure 1 shows some examples of different episodes in our FFSL setting. Each episode contains animage of a pot, but the class identity of the pot varies according to the hidden context. In Episode 1,the pot and the chair are given the same labels whereas in Episode 2 they belong to different classes.Moreover, at test time brand new concepts (e.g. tables) or criteria (e.g. color) may be introduced.Conceptually, each data point x2X represents some combination of hidden attributes z2Z. Andeach context is an injective function, f:Z!f 0;1g, that labels each of the data points dependingon their hidden attributes. In this work, we consider contexts that compute conjuctions of binaryattributes. The set of training contexts and test contexts need not be the same.In order to solve the FFSL task, the learner must correctly find a mapping from the data domainXto the correct labels. 
One natural way to solve this problem would be to first find a mappingh:X ! Z , that is persistent across episodes, and then estimate the context in each episode.However, we do not limit our exploration to methods that use this approach, since FFSL allowsdifferent partitions of the Zspace for training and testing, and as we will explain in Section 5.1,directly learning to predict Zcan lead to generalization issues.Next we describe how we generate the FFSL datasets using existing image datasets with attributes,Celeb-A faces (Liu et al., 2015) and Zappos-50K shoes (Yu & Grauman, 2014). Sample episodesfrom each dataset are shown in Figure 2.Celeb-A: The Celeb-A dataset contains around 200K images, where we split half to training, anda quarter to validation and testing each. Each image is annotated with 40 binary attributes, detailinghair colour, facial expressions, and other descriptors. We picked 27 salient attributes and split 14for training and 13 for both val and test. There is no overlap between training or test attributes butthey may sometimes belong to a common category, e.g. blonde hair is in training and brown hair isin test. Split details are included in the Appendix B. For each episode, we randomly select one ortwo attributes and look for positive example belonging to these attributes simultaneously. And wealso sample an equal number of negative examples that don’t belong to one or both of the selectedattributes. This will construct a support set of positive and negative samples, and then we repeat thesame process for the corresponding query set as well.Zappos-50K: The Zappos-50K dataset contains just under 50K images of shoes annotated withattribute values, out of which we kept a total of 76 that we considered salient. We construct an image-level split that assigns 80% of the images to the training set, 10% to the validation and 10% to thetest set. We additionally split the set of attribute values into two disjoint sets that are used to form thetraining and held-out FFSL tasks, respectively. Sampling an episode from a particular split involvessampling a conjunction of attributes from that split (e.g. ‘gender = boy’ and ‘material = leather’),and then sampling positive and negative examples from the relevant example split. The positiveexamples obey both clauses of the conjunction and, as a design choice, the negative examples do notobey either clause. The sampled positive and negative examples are then divided into a support andquery set for the episode.5 E XPLORING MODELS FOR FLEXIBLE FEW-SHOT LEARNINGIn this section, we explore different learning models to solve FFSL tasks. Overall, we separate learn-ing into two stages: representation learning andfew-shot learning . In the representation learningstage, a network backbone learns task relevant features over many examples. And in the FSL stage,an episode with a few examples is presented, and the learner utilizes the base backbone network andperforms additional learning on top.For typical meta-learning based methods, these two stages are essentially the same—training per-forms episodic learning just like testing. Aside from meta-learning, simple supervised pretraining4Under review as a conference paper at ICLR 2021can also learn good representation for standard few-shot classification by using a linear classifierreadout at test time (Chen et al., 2019; Tian et al., 2020).5.1 G ENERALIZATION ISSUES WITH SUPERVISED REPRESENTATION LEARNING707580859095TrainTestAcc. 
Figure 3: FFSL 20-shot classification. Both supervised attribute classification and standard FSL do not generalize well. (The figure plots train vs. test accuracy (%) on Celeb-A 20-shot flexible tasks for ProtoNet, SA, and SA*.)

In the FFSL task, any single example can have several positive attributes and the context used to classify them varies across training and test. This suggests that useful representations must be more general than those needed for standard FSL. To investigate this, we first conducted an initial experiment on the Celeb-A benchmark. We adopted a standard prototypical network (ProtoNet) with features learned through the episodic query loss as our meta-learning approach. We also explored pretraining-based approaches. We trained a classifier to predict the 14 binary training attributes from the input images to learn a representation. At test time we simply used a linear classifier to solve each episode. This approach is denoted as SA (Supervised Attributes), analogous to the setting in Chen et al. (2019). We also trained an oracle classifier (SA*) on all 40 attributes in the dataset, including both training and testing attributes. Since the tasks are constructed using attribute information, the performance of SA* should be considered an upper bound for this problem.

Results are shown in Figure 3. Both ProtoNet and SA perform well on the training tasks since they are exposed to the label information from the training attributes; however, the test performance shows a significant generalization gap. In order to succeed in the training objective, both ProtoNet and SA essentially learn to ignore other features that are potentially useful for testing as classification criteria. By contrast, SA* is able to perform similarly on both training and testing, since the learning does not depend on a particular split of the attributes. Initial experiments therefore suggest that supervised learning alone will likely not be sufficient for our FFSL task.

In Appendix A we study a toy FFSL problem which further illustrates these generalization issues. We explore training a prototypical network on data from a linear generative model, where each episode presents significant ambiguity in resolving the correct context. We show that in this setting, unlike in standard FSC tasks, the prototypical network is forced to discard information on the test attributes in order to solve the training tasks effectively, and thus fails to generalize.

5.2 UNSUPERVISED CONTRASTIVE REPRESENTATION LEARNING

Learning a good representation for downstream applications has always been a sought-after purpose of deep learning. Hinton & Salakhutdinov (2006) proposed to pretrain subsequent layers of autoencoders for representation learning, and showed good performance for dimensionality reduction and downstream classification. Following the development of variational autoencoders (VAEs) (Kingma & Welling, 2013), many extensions have been proposed to encourage "disentangled" representation learning by reweighing terms in the evidence lower bound (Higgins et al., 2017; Kim & Mnih, 2018). In contrast to traditional generative modeling where the objective is grounded on uncovering the data distribution, self-supervised learning recently emerged as a promising approach for representation learning.
These include learning to predict rotations (Kolesnikov et al., 2019), maximizing mutual information between the input and representation (Belghazi et al., 2018; van den Oord et al., 2018), and contrastive learning approaches (Chen et al., 2020; van den Oord et al., 2018; Tian et al., 2019; He et al., 2019; Xiong et al., 2020). They have shown promise in learning semantic-aware representations, almost closing the gap with supervised representation training on the challenging ImageNet benchmark. We follow SIMCLR (Chen et al., 2020) as a representative framework for unsupervised contrastive learning, shown in Figure 4-A. We chose SIMCLR because of its empirical success. Concretely, it sends a pair of augmented versions of the same image to the input and obtains a hidden representation. The hidden representation is further passed into a decoder, producing unit-norm vectors. The network is trained end-to-end to minimize the InfoNCE loss (van den Oord et al., 2018), which distinguishes the positive sample from the same pair from the rest by encouraging the feature dot product between the positive pair to gain a higher value than negative pairs.

Figure 4: Our proposed method for FFSL. A: we first pretrain the network with an unsupervised contrastive objective to learn general features. B: Then we finetune the network to classify the set of training attributes. Both stages employ a different decoder header so that the representation remains general. C: Finally at test time we use Mask-ProtoNet, a variant of ProtoNet that infers feature selection iteratively.

Finetuning with supervised attribute classification. We can combine the merits of unsupervised representation learning and supervised attribute classification (SA). To prevent SA from overriding the unsupervised features, we add another classifier decoder MLP before the sigmoid classification layer (see Figure 4-B). Empirically, finetuning on SA is found to be beneficial, but early stopping is needed to prevent optimizing too much towards training attributes, which would cause significant generalization issues (Section 5.1).

During test time, we directly use the representation before both decoders to perform FSL. In the next section, we introduce Mask-ProtoNet, a novel method for FFSL.

5.3 FEW-SHOT LEARNING WITH MASK-PROTONET

Algorithm 1 Mask-ProtoNet
Require: Net, $\{x^S_i, y^S_i\}_{i=1}^{N}$, $\{x^Q_j\}_{j=1}^{M}$  (an embedding network, N support and M query examples)
Ensure: $\{\hat{y}^Q_j\}_{j=1}^{M}$  (network representation $h \in \mathbb{R}^D$)
1:  $h^S_i \leftarrow \mathrm{Net}(x^S_i)\ \forall i$;  $h^Q_j \leftarrow \mathrm{Net}(x^Q_j)\ \forall j$
2:  $w \leftarrow 0 \in \mathbb{R}^D$
3:  for $t = 1 \dots M{+}1$ do
4:      $\tilde{w} \leftarrow \sigma(w)$
5:      $p[k] \leftarrow \frac{\sum_i (h^S_i \odot \tilde{w})\,\mathbb{1}[y^S_i = k]}{\sum_i \mathbb{1}[y^S_i = k]}$
6:      $\hat{y}^S_{i,k} \leftarrow \mathrm{softmax}(d(h^S_i \odot \tilde{w},\, p[k]))\ \forall i$
7:      $l \leftarrow \frac{1}{N}\sum_i \mathrm{CE}(\hat{y}^S_i, y^S_i) + \lambda\,\lVert \tilde{w} \rVert_1$
8:      $w \leftarrow w - \eta\,\nabla_w l$
9:  end for
10: $\hat{y}^Q_{j,k} \leftarrow \mathrm{softmax}(d(h^Q_j \odot \tilde{w},\, p[k]))\ \forall j$
11: return $\hat{y}^Q_j$

Once the representation is learned, a common approach for FSL is to directly learn a linear classifier on top of the representation, or average the prototypes from the support set. Prototype averaging, however, will consider all feature dimensions, including the ones that are not relevant to the current episode. A linear classifier, on the other hand, learns a weight coefficient for each feature dimension, thus performing some level of feature selection. Still, the weights need to be properly regularized to encourage high-fidelity selection.
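For concreteness, the masked-prototype inner loop of Algorithm 1 can be rendered in PyTorch-style code as follows; the number of inner steps, step size, and L1 weight here are illustrative placeholders rather than the exact values used in our experiments:

```python
import torch
import torch.nn.functional as F

def mask_protonet(h_support, y_support, h_query, num_classes,
                  num_steps=5, lr=1.0, l1_weight=1e-3):
    """Sketch of the Mask-ProtoNet inner loop.

    h_support: [N, D] support embeddings, y_support: [N] integer labels,
    h_query: [M, D] query embeddings.  Cosine similarity is used for the
    logits; hyperparameter values are illustrative.
    """
    w = torch.zeros(h_support.shape[1], requires_grad=True)
    for _ in range(num_steps):
        mask = torch.sigmoid(w)                               # soft binary feature mask
        masked = h_support * mask                             # masked support embeddings
        protos = torch.stack([masked[y_support == k].mean(0)
                              for k in range(num_classes)])   # [K, D] prototypes
        logits = F.normalize(masked, dim=-1) @ F.normalize(protos, dim=-1).t()
        loss = F.cross_entropy(logits, y_support) + l1_weight * mask.sum()
        grad, = torch.autograd.grad(loss, w)
        w = (w - lr * grad).detach().requires_grad_(True)     # inner-loop mask update
    mask = torch.sigmoid(w)
    protos = torch.stack([(h_support * mask)[y_support == k].mean(0)
                          for k in range(num_classes)])
    q_logits = F.normalize(h_query * mask, dim=-1) @ F.normalize(protos, dim=-1).t()
    return q_logits.argmax(dim=-1)                            # query predictions
```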
A popular way is to apply an L1 regularizer on the weights to encourage sparsity. The learning of a classifier is essentially done at the same time as the selection of feature dimensions. In this paper, we propose Mask-ProtoNet as an alternative for few-shot learning that separates the procedure of classifier learning and feature selection: we use prototypes for classification and additionally learn a soft binary mask for feature selection.

Just like a linear classifier, the Mask-ProtoNet learns a weight coefficient for each dimension. This weight is then passed through a sigmoid function to act as a soft binary mask, which is learned for a small number of iterations before termination. Finally classification is performed based on the masked prototypes. Conceptually, the mask will disable unused features and instead focus on dimensions that are activated in the current episode. The mask is updated to minimize the inner loop loss, which is a combination of support set cross entropy and an L1 sparse regularizer. The full algorithm is described in Algorithm 1 and Figure 4-C.

Figure 5: 20-shot FFSL results comparing different representation learning and FSL stage combinations. FFSE: Meta-learning directly using the flexible few-shot episodes. SA: Supervised attribute classification. ID: Auxiliary representation learning (for Celeb-A this is face ID classification). U: Unsupervised contrastive learning. U-SA: Our proposed U pretraining followed by SA finetuning. SA*: Supervised attribute binary classification on all attributes, which serves as an oracle (striped bars). A set of few-shot learners are evaluated: 1) logistic regression (LR), 2) LR with L1 regularization (LR +L1), 3) ProtoNet (Proto), and 4) the proposed Mask-ProtoNet (MaskProto). U-SA with Mask-ProtoNet achieves the best performance in both benchmarks. Chance is 50%. (The figure shows accuracy (%) bars on Celeb-A and Zappos-50K.)

6 EXPERIMENTS

In this section we present our experimental evaluations with various representation learning and few-shot learning methods for our FFSL benchmarks. Representation learning methods include: 1) FFSE: Meta-learning through Flexible Few-Shot Episodes; 2) SA: Supervised Attribute classification on training attributes only; 3) ID: Auxiliary representation learning task, for Celeb-A this is the face IDentity classification; 4) U: Unsupervised representation learning (SIMCLR); 5) U-SA: Unsupervised representation learning followed by Supervised Attribute classification finetuning. This approach is described in Figure 4-A and B; 6) SA*: Supervised Attribute classification on all attributes, which serves as an oracle.

We also compared the following methods for few-shot learning: 1) LR: Plain logistic regression on the hidden representation; 2) LR +L1: LR with L1 regularization on the weights; 3) Proto: Classification with prototypes (Snell et al., 2017); 4) MaskProto: Prototypes with an additional mask that is learned in an inner loop (as proposed in this paper, described in Algorithm 1).

Implementation details: Images were resized to 84x84x3. We used ResNet-12 (He et al., 2016; Oreshkin et al., 2018) with 64, 128, 256, 512 channels in each residual module. The decoder network for contrastive learning has two 512-d layers and outputs 128-d vectors. The classifier finetuning decoder network has two 512-d layers and outputs a 512-d vector.
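For reference, the InfoNCE objective used in the contrastive pretraining stage (Figure 4-A) has roughly the following form; this is a simplified sketch rather than the exact SIMCLR implementation:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Simplified InfoNCE / NT-Xent loss over a batch of positive pairs.

    z1, z2: [B, D] projections of two augmented views of the same B images.
    Every other element of the concatenated 2B batch acts as a negative.
    """
    b = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)        # [2B, D], unit norm
    sim = (z @ z.t()) / temperature                            # pairwise similarities
    self_mask = torch.eye(2 * b, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))            # exclude self-pairs
    targets = torch.cat([torch.arange(b, device=z.device) + b,
                         torch.arange(b, device=z.device)])    # index of each positive
    return F.cross_entropy(sim, targets)
```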
We trained SIMCLRusing random crop areas of 0.08 – 1.0, color augmentation 0.5, and InfoNCE temperature 0.5, for1000 epochs using LARS (You et al., 2017) and cosine schedule with batch size 512 and peaklearning rate 2.0. SA finetuning lasts for another 2k steps with batch size 128 and learning rate0.1 for the decoder and 0.01 for the backbone and momentum 0.9. ID, SA and SA* use batch size256 with a learning rate 0.1 for 30k steps, with 0.1x learning rate decay at 20k and 25k steps, andmomentum 0.9. Features are normalized before sending to LR classifiers. We use cosine similarityfor ProtoNet and Mask-ProtoNet.6.1 R ESULTS AND DISCUSSIONMain results: Figure 5 shows our main results on Celeb-A and Zappos-50K with 20-shot FFSLepisodes. On both benchmarks, training on flexible few-shot episodes based on training attributes(FFSE) performed worst. This aligns with our observation of the generalization issue explained inSection 5.1. Similarly, supervised attribute (SA) learning faced the same challenge. An auxiliarytask of class identification (ID) was not helpful for representation learning either. Interestingly,unsupervised representation learning (U) attained relatively better test performance, suggesting thatthe training objective in contrastive learning preserves more general features—not just shown forsemantic classification tasks in prior literature, but also for the flexible class definitions present here.Surprisingly, finetuning slightly on SIMCLR pretrained networks (U-SA) contributed further gainsin performance. We also tried to finetune directly on FFSL episodes using meta-learning approachesbut this did not perform well — one possible explanation is given in our toy example (Appendix A).We conclude that meta-learning may not help learn higher-level features about the FFSL task itself.Lastly, we confirmed that U-SA closes the generalization gap between SA and SA*, and obtained7Under review as a conference paper at ICLR 202165758595251020Acc. (%)# shotsA. Number of shotsIDSAUU-SASA*GT-LR70758085251020Acc. (%)# shotsB. FSL methodLRLR +L1ProtoMaskProto80818283848501234Acc. (%)# decoder hidden layersC. Decoder depth808182838485012345678910Acc. (%)Finetune steps (K)D. Finetune stepsFigure 6: Additional results on the Celeb-A dataset. A: How many examples are needed forFFSL? We provide an oracle performance where the feature representation is directly the binaryground-truth attribute vector ( GT-LR ) and we train a logistic regression classifier on top. It suggeststhat there is natural ambiguity in the task and more examples than standard FSL are needed. B:Comparison of few-shot learning methods on different number of shots. Mask-ProtoNet worksbetter with an increasing number of shots. C: Effect of the number of decoder layers duringfinetuning. Adding a decoder keeps the representation general and not overfitting to the trainingattributes. D: Effect of the number of finetuning steps. Small amount of finetuning on the trainingattribute is beneficial, but eventually the accuracy goes down.matching performance on Zappos-50K. Lastly, we confirmed that U-SA closes the generalizationgap between SA and SA*. These results were consistent across our benchmarks. Therefore, U-SAwas the most effective representation learning algorithm we explored for FFSL. Note that this resultcontrasts with standard FSL literature, where unsupervised representation learning still lags behindsupervised pretraining (Medina et al., 2020). 
Moreover, MaskProto is often the best across differentFSL approaches, consistently higher than Proto, which does not reason about feature selection.Number of shots: Since we have a flexible definition of classes in each episode, it could be thecase that the support examples are ambiguous. For example, by presenting both an elephant and acat in the support set, it is unclear whether the positive set is about animals or mammals. Figure 6-Ashows several approaches evaluated using Mask-ProtoNet with varying number of support examplesper class in Celeb-A FFSL episodes. In addition to the SA* oracle, we provided another oracle GT-LR, where the representations are the binary attribute values, and readout is done by solving a linearclassifier. GT-LR gradually approached 100% accuracy as the number of shots approached 20. Thisdemonstrates that FFSL tasks potentially require more support examples to resolve ambiguity. Againhere, U-SA consistently outperformed U, SA, and ID baselines across different number of shots.Figure 6-B plots the performance of different FSL methods, using a common U-SA representation.Mask-ProtoNet performs better with more support examples, but worse with fewer (e.g. 2), sinceminimizing the support loss of only two examples can lead to over-confidence.Effect of decoder depth: Figure 6-C studies the effect of a decoder for attribute classificationfinetuning. Adding an MLP decoder was found to be beneficial for unsupervised representationlearning in prior literature (Chen et al., 2020). Here we found that adding a decoder is also importantfor SA finetuning, contributing to over 2% improvement.Effect of SA finetuning: Figure 6-D plots the validation accuracy on FFSL tasks during finetuningfor a total of 10k steps. It is found that the accuracy grows from 80% and peaks at 2k steps with over84%, and then drops. This suggests that a little finetuning on supervised attributes is beneficial, butprolonged finetuning eventually makes the representation less generalizable.7 C ONCLUSIONThe notion of a class often changes depending on the context, yet existing few-shot classificationrelies on a fixed semantic class definition. In this paper, we propose a flexible few-shot learn-ing paradigm where the classification criteria change based on the episode context. We proposedbenchmarks using the Celeb-A and Zappos-50K datasets to create flexible definitions with existingattribute labels. We explored various ways to perform representation learning for this new task.Unlike in standard FSL, we found that supervised representation learning generalizes poorly on thetest set, due to the partitioning of training & test attributes. Unsupervised contrastive learning onthe other hand preserved more generalizable features, and further finetuning on supervised attributeclassification yielded the best results. Finally, a variant of ProtoNet, Mask-ProtoNet is proposed anddelivers better readout performance. The development of FFSL benchmarks will hopefully encour-age more future research investigating the generalization ability of meta-learning methods.8Under review as a conference paper at ICLR 2021<|im_end|>
<|im_start|>assistant
### Review Title
Interesting idea with a few significant omissions
### Review Text
The authors propose a new view on few-shot classification. Instead of having a fixed set of classes split into base and novel subsets, they propose to use image attributes to construct classes on the fly during training and testing. That is, in every episode, a class is constructed by randomly sampling a pair of attributes (such as living and has_legs) and taking the images which have these attributes (i.e., person and horse) as positives, and the ones that don't have at least one of them (such as chair and fish) as negatives. This ensures that the learned representation can't overfit to a particular category definition and has to be truly generalizable. They argue that this setting corresponds better to the real world, where the category of an object can strongly depend on the context. In an experimental evaluation on CelebA and Zappos they demonstrate that pretraining a representation on the attribute classification task and finetuning on the proposed attribute-based few-shot benchmark provides a strong baseline, compared to directly training for few-shot classification. They also demonstrate that training with a contrastive loss objective first leads to further improvements, presumably because the contrastive loss helps to learn generalizable features. Finally, they propose an extension of Prototypical Networks with a learnable feature selection module which outperforms a simple linear classifier baseline and vanilla Prototypical Networks in most settings.

The paper is very well written and is easy to follow. The idea of using attributes to define a more challenging setting for evaluating few-shot learning methods is interesting and novel to the best of my knowledge. Using attributes to learn more generalizable features has been explored before, however (see Tokmakov et al., ICCV'19). The authors seem to be unaware of that work, which also proposed a very similar approach of using an auxiliary attribute classification loss to learn a more generalizable representation for few-shot learning. Moreover, that paper provided attribute annotations for a subset of the ImageNet dataset. The authors should discuss their relationship to Tokmakov et al., and report an evaluation on ImageNet using their attributes, which would be a lot more convincing compared to the 2 toy datasets currently used in the paper.

I also have a few other concerns regarding the evaluation:

1. Why are the episodes sampled differently for the 2 datasets? Either a strong argument has to be provided, or the settings should be unified.

2. Why are you using a cosine classifier for the prototypical networks, but not for the logistic regression baseline? Chen et al. report significantly stronger performance of the cosine classifier compared to the vanilla one. It has to be added to all the experiments.

3. Another observation in Chen et al. is that the depth of the network has a major effect on the performance of few-shot learning methods. The current ResNet-12 backbone used in all the experiments is not deep enough to make any strong conclusions about the relative performance of the methods. At the very least, results for ResNet-18 and -34 need to be added, and, ideally, also for ResNet-50.

4. In Section 6.1 you are claiming that U-SA closes the generalization gap between SA and SA* on both datasets, which is not true. Please correct this statement.

5. Some details of the evaluation protocol seem to be missing. For instance, how many episodes are sampled during evaluation?
Overall, this paper proposes an interesting idea for a new few-shot learning setting but falls short both in acknowledging prior work and in providing a convincing experimental evaluation. If the authors address the concerns about the evaluation protocol listed above and additionally report results on ImageNet using the attributes from Tokmakov et al., showing that their conclusions still hold, I will consider increasing my score.
### Review Rating
5: Marginally below acceptance threshold
### Review Confidence
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|>
<|im_end|> |
ykCRDlfxmk | ICLR.cc/2021/Conference | 2021 | AutoHAS: Efficient Hyperparameter and Architecture Search | ["Xuanyi Dong", "Mingxing Tan", "Adams Wei Yu", "Daiyi Peng", "Bogdan Gabrys", "Quoc V Le"] | Deep learning models often require extensive efforts in optimizing hyperparameters and architectures. Standard hyperparameter optimization methods are expensive because of their multi-trial nature: different configurations are tried separately to find the best. In this paper, we propose AutoHAS, an efficient framework for both hyperparameter and architecture search. AutoHAS generalizes the concept of efficient architecture search, ENAS and DARTS, to hyperparameter search and hence can jointly optimize both in a single training. A key challenge in such generalization is that ENAS and DARTS are designed to optimize discrete architecture choices, whereas hyperparameter choices are often continuous. To tackle this challenge, we discretize the continuous space into a linear combination of multiple categorical basis. Furthermore, we extend the idea of weight sharing and augment it with REINFORCE to reduce its memory cost. In order to decouple the shared network weights and controller optimization, we also propose to create temporary weights for evaluating the sampled hyperparameters and updating the controller. Experimental results show AutoHAS can improve the ImageNet accuracy by up to 0.8% for highly-optimized state-of-the-art ResNet/EfficientNet models, and up to 11% for less-optimized models. Compared to random search and Bayesian search, AutoHAS consistently achieves better accuracy with 10x less computation cost. | ["HPO", "NAS", "AutoML"] | ABSTRACTDeep learning models often require extensive efforts in optimizing hyperparametersand architectures. Standard hyperparameter optimization methods are expensivebecause of their multi-trial nature: different configurations are tried separatelyto find the best. In this paper, we propose AutoHAS, an efficient framework forboth hyperparameter and architecture search. AutoHAS generalizes the conceptof efficient architecture search, ENAS and DARTS, to hyperparameter search andhence can jointly optimize both in a single training. A key challenge in such gener-alization is that ENAS and DARTS are designed to optimize discrete architecturechoices, whereas hyperparameter choices are often continuous. To tackle thischallenge, we discretize the continuous space into a linear combination of multiplecategorical basis. Furthermore, we extend the idea of weight sharing and augmentit with REINFORCE to reduce its memory cost. In order to decouple the sharednetwork weights and controller optimization, we also propose to create temporaryweights for evaluating the sampled hyperparameters and updating the controller.Experimental results show AutoHAS can improve the ImageNet accuracy by up to0.8% for highly-optimized state-of-the-art ResNet/EfficientNet models, and up to11% for less-optimized models. Compared to random search and Bayesian search,AutoHAS consistently achieves better accuracy with 10x less computation cost.1 I NTRODUCTIONDeep learning models require intensive efforts in optimizing architectures and hyperparameters.Standard hyperparameter optimization methods, such as grid search, random search (e.g., Bergstra &Bengio (2012)) or Bayesian optimization (e.g., Snoek et al. (2012)), are inefficient because they aremulti-trial: different configurations are tried in parallel to find the best configuration. 
As these methodsare expensive, there is a trend towards more efficient, single-trial methods for specific hyperparameters.For example, the learning rate can be optimized with the hypergradient method (Baydin et al., 2018).Similarly, many architecture search methods started out multi-trial (Zoph & Le, 2017; Baker et al.,2017; Real et al., 2019), but more recent proposals are single-trial (Pham et al., 2018; Liu et al., 2019).These efficient methods, however, sacrifice generality: each method only works for one aspect or asubset of the hyperparameters or architectures.In this paper, we generalize those efficient, single-trial methods to include both hyperparameters andarchitectures1. One important benefit of the generalization is that we can have a general, efficientmethod for hyperparameter optimization as a special case. Another benefit is that we can now jointlysearch for both hyperparameters and architectures in a single model. Practically, this means that ourmethod is an improvement over neural architecture search (NAS) because each model can potentiallybe coupled with its own best hyperparameters, thus achieving comparable or even better performancethan existing NAS with fixed hyperparameters.To this end, we propose AutoHAS, an efficient hyperparameter and architecture search framework.It is, to the best of our knowledge, the first method that can efficiently handle architecture space,hyperparameter space, or the joint search space. A challenge here is that architecture choices (e.g.kernel size) are often categorical values whereas hyperparameter choices (e.g. learning rate) are1In this paper, hyperparameters refer all design choices that will affect the training procedure of a model,such as learning rate, weight decay, optimizer, dropout, augmentation policy, etc.1Under review as a conference paper at ICLR 2021Compute W* using W and HPValidationAccuracyUpdate the AutoHAS controllerCandidate HP(RMSProp, LR=0.1)Candidate ArchitectureWTrainingLossSampleCandidate ArchitectureUpdate W using the sampled HPSampleLayer-0Layer-1Layer-2SuperModelLayer-0Layer-1Layer-2Candidate ArchitectureAutoHASController*WFigure 1: The overview of AutoHAS method. LEFT: Each candidate architecture’s weights areshared with a super model, where each candidate is a sub model within this super model. RIGHT:During the search, AutoHAS alternates between optimizing the shared weights of super model Wandupdating the controller. It also creates temporary weights Wby optimizing the sampled candidatearchitecture using the sampled candidate hyperparameter (HP). This Wwill be used to compute thevalidation accuracy as a reward so as to update the AutoHAS controller to select better candidates.Finally,Wis discarded after updating the controller so as not to affect the original W.often continuous values. To address the mixture of categorical and continuous search spaces, we firstdiscretize the continuous hyperparameters into a linear combination of multiple categorical basis. Thediscretization allows us to unify architecture and hyperparameter choices during search. As explainedbelow, we will use a reinforcement learning (RL) method to search over these discretized choices inFig. 1. The probability distribution over all candidates is naturally learnt by the RL controller, and itis used as the coefficient in the linear combination to find the best architecture and hyperparameters.AutoHAS uses the weight sharing technique proposed by (Pham et al., 2018; Liu et al., 2019). 
The main idea is to train a super model, where each candidate in the architecture space is its sub-model. Using a super model can avoid training millions of candidates from scratch (Liu et al., 2019; Dong & Yang, 2019a; Cai et al., 2019; Pham et al., 2018). AutoHAS extends its scope from architecture search to both architecture and hyperparameter search. We not only share the weights of the super model with each architecture but also share this super model across hyperparameters. At each search step, AutoHAS optimizes the sampled sub-model by a combination of the sampled hyperparameter choices, and the shared weights of the super model serve as a good initialization for all hyperparameters at the next step of search (see Fig. 1 and Sec. 2). In order to decouple the shared network weights (W in Fig. 1) and controller optimization, we also propose to create temporary weights (W* in Fig. 1) for evaluating the sampled hyperparameters and updating the controller. With weight sharing, AutoHAS reduces the search cost by an order of magnitude compared to random search and Bayesian search. In experiments, AutoHAS shows non-trivial improvements on seven datasets, such as a 0.8% accuracy gain on highly-optimized EfficientNet and an 11% accuracy gain on less-optimized models.

2 AUTOHAS

In this section, we elaborate the design philosophy of AutoHAS. We introduce the background of AutoHAS in Sec. 2.1, how to represent architectures and hyperparameters in a unified way in Sec. 2.2, how to search in Sec. 2.3, and how to derive the final architectures and hyperparameters in Sec. 2.4.

2.1 PRELIMINARIES

AutoHAS should be able to handle the general case of NAS and HPO – jointly find the architecture $\alpha$ and hyperparameters $h$ that achieve high performance on the validation set. This objective can be formulated as a bi-level optimization problem:

$$\min_{\alpha, h}\ \mathcal{L}(\alpha, h, \omega^*, \mathcal{D}_{val}) \quad \text{s.t.} \quad \omega^* = f_h(\alpha, \omega_0, \mathcal{D}_{train}), \qquad (1)$$

where $\mathcal{L}$ is the objective function (e.g., cross-entropy loss) and $\omega_0$ is the initial weights of the architecture $\alpha$. $\mathcal{D}_{train}$ and $\mathcal{D}_{val}$ denote the training data and the validation data, respectively. $f_h$ represents the algorithm with hyperparameters $h$ to obtain the optimal weights $\omega^*$, such as using SGD to minimize the training loss. In that case, $\omega^* = f_h(\alpha, \omega_0, \mathcal{D}_{train}) = \arg\min_{\omega} \mathcal{L}(\alpha, h, \omega_0, \mathcal{D}_{train})$. We can also use a HyperNetwork (Ha et al., 2017) to generate the weights $\omega^*$.

AutoHAS generalizes both NAS and HPO by introducing a broader search space. On one hand, NAS is a special case of HAS, where the inner optimization $f_h(\alpha, \omega_0, \mathcal{D}_{train})$ uses fixed $\omega_0$ and $h$ to optimize $\min_{\omega} \mathcal{L}(\alpha, h, \omega, \mathcal{D}_{train})$. On the other, HPO is a special case of HAS, where $\alpha$ is fixed in Eq. (1).

2.2 UNIFIED REPRESENTATION OF THE SEARCH SPACE IN AUTOHAS

The search space in AutoHAS is a Cartesian product of the architecture and hyperparameter candidates. To search over this mixed search space, we need a unified representation of different searchable components, i.e., architectures, learning rates, optimizers, etc.

Architectures Search Space. We use the simplest case as an example. First of all, let the set of predefined candidate operations (e.g., 3x3 convolution, pooling, etc.) be $\mathcal{O} = \{O_1, O_2, \dots, O_n\}$, where the cardinality of $\mathcal{O}$ is $n$ for each layer in the architecture. Suppose an architecture is constructed by stacking multiple layers; each layer takes a tensor $F$ as input and outputs $\alpha(F)$, which serves as the next layer's input. $\alpha \in \mathcal{O}$ denotes the operation at a layer and might be different at different layers.
Then a candidate architecture is essentially the sequence $\{\alpha\}$ over all layers. Further, a layer can be represented as a linear combination of the operations in $\mathcal{O}$ as follows:

$$\alpha(F) = \sum_{i=1}^{n} C^{\alpha}_{i}\, O_i(F) \quad \text{s.t.} \quad \sum_{i=1}^{n} C^{\alpha}_{i} = 1,\ \ C^{\alpha}_{i} \in \{0, 1\}, \qquad (2)$$

where $C^{\alpha}_{i}$ (the $i$-th element of the vector $C^{\alpha}$) is the coefficient of operation $O_i$ for a layer.

Hyperparameter Search Space. Now we can define the hyperparameter search space in a similar way. The major difference is that we have to consider both categorical and continuous cases:

$$h = \sum_{i=1}^{m} C^{h}_{i}\, B_i \quad \text{s.t.} \quad \sum_{i=1}^{m} C^{h}_{i} = 1, \quad C^{h}_{i} \in \begin{cases}[0, 1], & \text{if continuous} \\ \{0, 1\}, & \text{if categorical} \end{cases} \qquad (3)$$

where $\mathcal{B}$ is a predefined set of hyperparameter basis with the cardinality of $m$ and $B_i$ is the $i$-th basis in $\mathcal{B}$. $C^{h}_{i}$ (the $i$-th element of the vector $C^{h}$) is the coefficient of hyperparameter basis $B_i$. If we have a continuous hyperparameter, we have to discretize it into a linear combination of basis and unify both categorical and continuous. For example, for weight decay, $\mathcal{B}$ could be {1e-1, 1e-2, 1e-3}, and therefore, all possible weight decay values can be represented as a linear combination over $\mathcal{B}$. For categorical hyperparameters, taking the optimizer as an example, $\mathcal{B}$ could be {Adam, SGD, RMSProp}. In this case, a constraint on $C^{h}_{i}$ is applied: $C^{h}_{i} \in \{0, 1\}$ as in Eq. (3).

2.3 AUTOHAS: EFFICIENT HYPERPARAMETER AND ARCHITECTURE SEARCH

Algorithm 1 AutoHAS Training
Input: Randomly initialize W and P
Input: Split the available data into two disjoint sets: $\mathcal{D}_{train}$ and $\mathcal{D}_{val}$
1: while not converged do
2:   Sample ($\alpha$, $h \in \mathcal{B}$) from the controller
3:   Estimate the quality $Q(\alpha, h)$ as the reward to update the controller by REINFORCE
4:   $W \leftarrow f_h(\alpha, W, \mathcal{D}_{train})$
5: end while
6: Derive the final architecture $\alpha^*$ and hyperparameters $h^*$ by P (Sec. 2.4)

Given the discretizing strategy in Sec. 2.2, each candidate in the search space can be represented by the value of $C = \{C^{\alpha}\ \text{for all layers},\ C^{h}\ \text{for all types of hyperparameter}\}$, which represents the coefficients for all architecture and hyperparameter choices. As a result, AutoHAS converts the searching problem to obtaining the coefficients $C$.

AutoHAS applies reinforcement learning together with weight sharing to search over the discretized space. During search, we learn a controller to sample the candidate architecture and hyperparameters from the discretized space. In AutoHAS, this controller is parameterized by a collection of independent multinomial variables $P = \{P^{\alpha}\ \text{for all layers},\ P^{h}\ \text{for all types of hyperparameter}\}$ ($P^{\alpha}$ and $P^{h}$ are $n$- and $m$-dimensional vectors, respectively; each vector sums up to 1), which draws the probability distribution of the discretized space. AutoHAS also leverages a super model to share weights W among all candidate architectures, where each candidate is a sub-model in this super model (Pham et al., 2018; Liu et al., 2019). Furthermore, AutoHAS extends the scope of weight sharing from architecture to hyperparameters, where W also serves as the initialization for the algorithm $f_h$.

We describe AutoHAS in Algorithm 1. It alternates between learning the shared weights W and learning the controller using REINFORCE (Williams, 1992). Specifically, at each iteration, the controller samples a candidate — an architecture $\alpha$ and basis hyperparameter $h \in \mathcal{B}$. We estimate its quality $Q(\alpha, h)$ by utilizing the temporary weights to maintain the value of $f_h(\alpha, W, \mathcal{D}_{train})$. Using temporary weights, we can measure the validation accuracy of $\alpha$ and $h$ as $Q(\alpha, h)$, and at the same time, avoid the side effect of $f_h(\alpha, W, \mathcal{D}_{train})$ w.r.t. W. In our experiment, $f_h(\alpha, W, \mathcal{D}_{train})$ is approximately calculated as one-step gradient descent using the algorithm determined by $h$.
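A single search iteration described above can be sketched as follows; the controller, super model, and data interfaces are hypothetical placeholders, and only the temporary-weight logic mirrors Algorithm 1:

```python
import copy

def autohas_step(controller, super_model, train_batch, val_batch):
    """One (simplified) AutoHAS iteration: sample, reward, REINFORCE, weight update.

    `controller`, `super_model`, and the batch objects are assumed interfaces;
    the point is that the reward is computed on temporary weights W* so that
    the shared weights W are not modified by the evaluation.
    """
    arch, hparams, log_prob = controller.sample()             # candidate (alpha, h)

    # Temporary weights W*: one gradient step with h on a copy of the super model.
    tmp_model = copy.deepcopy(super_model)
    tmp_model.train_one_step(arch, hparams, train_batch)
    reward = tmp_model.validation_accuracy(arch, val_batch)   # quality Q(alpha, h)

    controller.reinforce_update(log_prob, reward)             # update controller P

    # Finally, update the shared weights W of the sampled sub-model using h.
    super_model.train_one_step(arch, hparams, train_batch)
```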
This estimated quality is used as a reward to update the controller's parameters P via REINFORCE. Then, we optimize the shared weights W, where the weights corresponding to the sampled architecture are updated as $f_h(\alpha, W, \mathcal{D}_{train})$.

2.4 DERIVING HYPERPARAMETERS AND ARCHITECTURE

After AutoHAS optimizes $P = \{P^{\alpha}, P^{h}\}$ via Algorithm 1, we can derive the coefficient $C$ as follows:

$$C^{\alpha} = \mathrm{onehot}(\arg\max_i P^{\alpha}_i), \qquad (4)$$

$$C^{h} = \begin{cases} P^{h}, & \text{if continuous} \\ \mathrm{onehot}(\arg\max_i P^{h}_i), & \text{if categorical} \end{cases} \qquad (5)$$

Together with Eq. (2) and Eq. (3), we can derive the final architecture $\alpha^*$ and hyperparameters $h^*$. Intuitively speaking, the selected operation in the final architecture has the highest probability over other candidates, and so does the categorical hyperparameter. For the continuous hyperparameter, the final one is the weighted sum of the learnt probability $P^{h}$ with its basis $\mathcal{B}$.

To evaluate whether the AutoHAS-discovered $\alpha^*$ and $h^*$ are good or not, we will use $h^*$ to re-train $\alpha^*$ on the whole training set and report its performance on the test sets.

2.5 DISCUSSION

Generalizability. AutoHAS can be applied to searching for architecture only, hyperparameter only, or both. Moreover, unlike previous HPO methods that require the hyperparameter optimization formulation $f_h$ to be differentiable for computing gradients w.r.t. the hyperparameters, AutoHAS treats the inner optimization $f_h$ as a black-box, and thus is applicable for both differentiable and non-differentiable hyperparameters.

Phase-wise AutoHAS. It is challenging to search over the large joint HAS space. Since the sampled architecture and hyperparameters change at every iteration, the gradients w.r.t. the shared weights in the super model might dramatically change. Consequently, the shared weights cannot be trained well and are insufficiently indicative of the RL reward. To alleviate this problem, we propose an alternative, i.e., Phase-wise AutoHAS, which splits the whole search procedure into two (or multiple) phases. In the first phase, it will use Algorithm 1 to search for the choices of some components and keep other components fixed as the default value. In the second phase, it will re-use the discovered components in the first phase and search for others. We found this Phase-wise AutoHAS works better than (single-phase) AutoHAS in most cases, at the cost of doubling computational resources. More empirical analysis can be found in Sec. 3.3.

Why do we need temporary weights? There is an interaction between architecture optimization and hyperparameter optimization in AutoHAS. If we implement $f_h$ in a straightforward solution, it will overwrite the original weights W when we compute $f_h$. Consequently, the updating of W in the red branch in Fig. 1 becomes unsafe. Here, we utilize the temporary weights W* to maintain the value of $f_h$. This strategy allows us to decouple the training of shared weights and the update of the AutoHAS controller, and thus effectively optimize over the hyperparameter space.

3 EXPERIMENTS

We evaluate AutoHAS on seven datasets, including two large-scale datasets, ImageNet (Deng et al., 2009) and Places365 (Zhou et al., 2017). We will briefly introduce the experimental settings in Sec. 3.1. We compare AutoHAS with other SOTA methods/models in Sec. 3.2. Lastly, we ablatively study AutoHAS in Sec. 3.3.

3.1 EXPERIMENTAL SETTINGS

Datasets. We leverage seven datasets to comprehensively evaluate our AutoHAS.
Their details (Denget al., 2009; Zhou et al., 2017; Xiao et al., 2016; Krizhevsky & Hinton, 2009; Krause et al., 2013;Nilsback & Zisserman, 2010) are described in Table 1.Table 1: Benchmark datasets – ImageNet and Places365 are two commonly used large-scale datasetsfor image classification, while the other five are small-scaled datasets.Name #Classes #Train Data #Eval Data Hold-out Dtrain Hold-out DvalImageNet 1000 1.28M 50K 1.23M 50KPlaces365 365 1.8M 50K 1.69M 112KCIFAR-10 10 50K 10K 45K 5KCIFAR-100 100 50K 10K 45K 5KStanford Cars 196 8144 8041 6494 1650Oxford Flower 102 2040 6149 1020 1020SUN-397 397 19850 19850 15880 3970Searching settings. We call the hyperparameters that control the behavior of AutoHAS as metahyperparameters – the optimizer and learning rate for RL controller, the momentum ratio for RLbaseline, and the warm-up ratio. Warm-upping the REINFORCE algorithm indicates that we do notupdate the parameters of the controller at the beginning. In addition, when the search space includesarchitecture choices, we also uses the warm-up technique described in Bender et al. (2020). Forthese meta hyperparameters, we use Adam, momentum as 0.95, warm-up ratio as 0.3. The metalearning rate is selected from f0.01, 0.02, 0.05, 0.1 gaccording to the validation performance. Whenthe architecture choices are in the search space, we will use the absolute reward function (Benderet al., 2020) to constrain the FLOPs of the searched model to be the same as the baseline model.For experiments on ImageNet and Places365, we use the batch size of 4096, search for 100 epochs,and use 44 Cloud TPU V3 chips. For experiments on other datasets, we use the batch size of 512,search for 15K steps, and use 2 2 Cloud TPU V3 chips.Training settings . Once we complete the searching procedure, we re-train the model using theAutoHAS-discovered hyperparameter and architecture. For the components that are not searched for,we keep it the same as the baseline models. For each experiment, we run three times and report themean (and variance) of the accuracy.3.2 C OMPARISON WITHHPO AND NAS1357911131517192123Search Time Cost (Hours)697071727374ImageNet Accuracy (%)AutoHASMobileNetV2IFTHGDRandom SearchBayesian OptimizationFigure 2: Comparison between AutoHAS andprevious HPO methods on ImageNet. AutoHASachieves better accuracy than HGD, and uses muchless search time cost than others.AutoHAS shows better performance thanother HPO methods. We choose MobileNet-V2 as the baseline model. We search for themixup ratio from [0, 0.2] and drop-path ratiofrom [0, 0.5] for each MBConv layer. We usethe training schedule in (Bender et al., 2020).Results compared with four representative HPOmethods are shown in Fig. 2. Multi-trial searchmethods, Random Search (Bergstra & Bengio,2012) or Bayesian optimization (Golovin et al.,2017), must train and evaluate many candidates,and thus are inefficient. Even using 10 moretime, they still cannot match the accuracy ofAutoHAS. HGD (Baydin et al., 2018) can onlysearch for the learning rate and the searchedlearning rate is much worse than the baseline.IFT (Lorraine et al., 2020) is an efficientgradient-based HPO method. With the samesearch space, AutoHAS gets higher accuracy than IFT.AutoHAS is feasible for jointly searching hyperparameter and architecture. 
As a proof ofconcept for the joint search, we follow MNasNet (Tan et al., 2019) and ProxylessNAS (Cai et al.,5Under review as a conference paper at ICLR 20212019) to design a architecture search space (i.e., kernel size f3x3, 5x5gand expansion ratio f3, 6gontop of MobileNetV2), and a joint search space with additional hyperparmater search options (i.e.,mixup and dropout ratio). We then compare AutoHAS performance on these two search spaces. Witharchitecture-only search, AutoHAS achieves comparable results (e.g., 74% accuracy @ 300M flops)as MnasNet/ProxylessNAS, but with the joint search, AutoHAS can further improve accuracy by0.2% with the same FLOPs, suggesting the potential benefit of jointly optimizing architectures andhyperparameters. Notebly, NAS methods are infeasible to optimze the hyperparameters.Table 2: AutoHAS improves ResNet-50 and EfficientNet-B0 on ImageNet – For each training, werepeat the training three times and the variance is less than 0:16.Model Method #Params #FLOPs Top-1 AccuracyResNet-50 (He et al., 2016)Human 25.6 M 4110 M 77.20AutoHAS 25.6 M 4110 M 77.83 (+0.63)EfficientNet-B0 (Tan & Le, 2019)NAS 5.3 M 398 M 77.15AutoHAS 5.2 M 418 M 77.92 (+0.77)AutoHAS improves SoTA ImageNet models. To investigate the effect of AutoHAS over the state-of-the-art models. We apply AutoHAS to two strong baselines. Firstly, we choose ResNet-50.The baseline strategy is to train it by 200 epochs, start the learning rate at 1.6 and decay it by 0.1for every13of the whole training procedure, use EMA with the decay rate of 0.9999, and applySGD with the momentum of 0.9. This can provide higher accuracy than the original paper. Forreference, the reported top-1 accuracy is 76.15% for ResNet-50 in TorchVision, whereas our baselineis 77.2% accuracy. Since previous methods usually do not tune the architecture of ResNet-50, weonly use AutoHAS to search for its hyperparameters including learning rate and mixup ratio for dataaugmentation. From Table 2, AutoHAS improves this strong baseline by 0.63%.Secondly, we choose a NAS-searched model, EfficientNet-B0. The baseline strategy is to train itby 600 epochs and use the same learning rate schedule as in the original paper. As EfficientNet-B0already tunes the kernel size and expansion ratio, we choose a different architecture space. Specifically,in each MBConv layer, we search for the number of groups for all the 1-by-1 convolution layer, thenumber of depth-wise convolution layer, whether to use a residual branch or not. In terms of thehyperparameter space, we search for the per-layer drop-connect ratio, mixup ratio, and the learningrate. We use phase-wise AutoHAS to first search for the architecture and then for the hyperparameters.From Table 2, we improves the strong EfficientNet-B0 baseline by 0.77% ImageNet top-1 accuracy.100 200 300 400 500 600Parameters (MB)535455565758Places365 Accuracy (%)AlexNetGoogleLeNetVGG-16ResNet-152ResNeXt-101CRU-Net-116DPN-92 (32x3d)B0B0 + AutoHASFigure 3: AutoHAS improves accuracy by 1%for EfficientNet-B0 on Places365.AutoHAS improves SoTA Places36 models. Be-side ImageNet, we have also evaluated Auto-HAS on another popular dataset: Places365 (Zhouet al., 2017). Similarly, we apply AutoHASto EfficientNet-B0 to search for better architec-tures and hyperparameters on this dataset. Fig. 
3shows the results: Although EfficientNet-B0 is astrong baseline with significantly better parameter-accuracy trade-offs than other models, AutoHAScan still further improve its accuracy 1% and ob-tain a new state-of-the-art accuracy on Places365.Note that B0andB0 + AutoHAS only uses sin-gle crop evaluation, while other models use 10crops.3.3 A BLATION STUDIESWhy choose RL instead of a differentiable strategy? Differentiable search methods have beenextensively studied for its simplicity in many previous literature (Liu et al., 2019; Dong & Yang, 2019a;Wan et al., 2020; Xie et al., 2019), but these methods usually require much higher memory cost inorder to train the entire super model. In our AutoHAS framework, we employ a simple reinforcementlearning algorithm – REINFORCE Williams (1992) – to optimize the controller: instead of training6Under review as a conference paper at ICLR 2021the whole super model, we only train a subset of the super model and therefore significantly reduce thetraining memory cost. Notably, the REINFORCE could also be simply replaced by a differentiable-based algorithm with the supervision of validation loss. We investigate the difference betweendifferentiable and REINFORCE search in Table 3. We use a small variant of MobileNetV2 with depthmultiplier 0.3 as our baseline model (in order to fit our device memory constraint for the differentiableapproach), and then apply them to the same search space. Not surprisingly, differentiable searchrequires much higher memory cost (6.1x more than baseline) as it needs to maintain the feature orgradient tensors for all the super model, whereas our REINFORMCE-based AutoHAS is much morememory efficient: reducing the memory cost by 70% than the differentiable approach. Empirically,we observe they achieve similar accuracy gains in this case, but AutoHAS enables us to search formuch larger models such as EfficientNet-B0 and ResNet-50 as shown in Table 2.Table 3: Differentiable Search vs. AutoHAS REINFORCE Search – Both are applied to the samebaseline model with the same hyperparamter and architecture search space. Baseline model has nosearch cost, but we list its standalone training cost as a reference. Compared to the differentiablesearch, our AutoHAS achieves slightly better accuracy with much less search memory cost.#Params #FLOPs Accuracy Search Cost(M) (M) (%) Memory(GB) Time(Hour)Baseline model 1.5 35.9 50.96 (1.0) (1.4)Differentiable 1.5 36.1 52.17 6.1 2.9AutoHAS(REINFORCE) 1.5 36.3 53.01 1.8 1.7AutoHAS on different search spaces and datasets . To evaluate the generalization ability, we haveevaluated AutoHAS in different hyperparameter and architecture spaces for five more datasets. Forsimplicity, we choose the standard MobileNetV2 as our baseline model. Table 4 shows the results. Weobserve: (1) The accuracy gains for many of these datasets are much larger than ImageNet/Places365,possible because the hyperparameter and architecture of the baseline are not heavily optimized onthese scenarios, leaving us a larger headroom for performance optimization. In particular, AutoHASachieves up to 11% accuracy gain on Flower dataset, suggesting that AutoHAS could be more usefulfor less optimized or new model/dataset scenarios. (2) Joint search and phase-wise search havesimilar performance, possibly due to the difficulty of navigating through a large and complex searchspace and the interactions between different hyperparamters. 
Suppose phase-wise search has twophases with search space size O(m) and O(n), then its total search space size is O(m + n), but itscorresponding joint search space size would be much larger O(m * n), making the joint searchproblem much more difficult. While this paper mainly focuses on unifying the architecture andhyperparameter search, it is still an open challenge how to navigate through the very large joint searchspace while still obtaining the optimal solution, which would be our future work.Table 4: AutoHAS Accuracy for Different Search Space on five Datasets – Weight decay and MixUpare for hyperparameters, and Arch is for architectures. joint indicates the joint search; phaseindicates the phase-wise search. Each experiment is repeated three times and the average accuracy isreported (standard deviation is about 0.2%).Image Classification Top-1 Accuracy (%)CIFAR-10 CIFAR-100 Stanford Cars Oxford Flower SUN-397Baseline 94.1 76.3 83.8 74.0 46.3WeightDecay 95.0 77.8 89.0 84.4 49.1MixUp 94.1 77.0 85.2 79.6 47.4Arch 94.5 76.8 84.1 76.4 46.3MixUp + Arch (joint) 94.4 77.4 84.8 78.2 47.3MixUp + Arch (phase) 94.4 77.6 85.5 79.6 48.3WeightDecay + MixUp (joint) 95.0 (+0.9) 78.4 (+2.1) 89.9 84.4 50.5WeightDecay + MixUp (phase) 94.9 78.2 90.5 (+6.8) 85.4 (+11.4) 50.8 (+4.5)7Under review as a conference paper at ICLR 20214 R ELATED WORKSNeural Architecture Search (NAS). Since the seminal works (Baker et al., 2017; Zoph & Le, 2017)show promising improvements over manually designed architectures, more efforts have been devotedto NAS. The accuracy of NAS models has been improved by carefully designed search space (Zophet al., 2018), better search method (Real et al., 2019), or compound scaling (Tan & Le, 2019). Themodel size and latency have been reduced by Pareto optimization (Tan et al., 2019; Wu et al., 2019;Cai et al., 2019; 2020) and enlarged search space of neural size (Cai et al., 2020; Dong & Yang,2019b). The efficiency of NAS algorithms has been improved by weight sharing (Pham et al., 2018),differentiable optimization (Liu et al., 2019), or stochastic sampling (Dong & Yang, 2019a; Xie et al.,2019). As these NAS methods use fixed hyperparamters during search, we have empirically observedthat they often lead to sub-optimal results, because different architectures tend to favor their ownhyperparameters. In addition, even if the manual optimization of architecture design is avoided byNAS, they still need to tune the hyperparameters after a good architecture is discovered.Hyperparameter optimization (HPO). Black-box and multi-fidelity HPO methods have a longstanding history (Bergstra & Bengio, 2012; Hutter, 2009; Hutter et al., 2011; 2019; Kohavi & John,1995; Hutter et al., 2019). Black-box methods, e.g., grid search and random search (Bergstra &Bengio, 2012), regard the evaluation function as a black-box. They sample some hyperparametersand evaluate them one by one to find the best. Bayesian methods can make the sampling procedurein random search more efficient (Jones et al., 1998; Shahriari et al., 2015; Snoek et al., 2015).They employ a surrogate model and an acquisition function to decide which candidate to evaluatenext (Thornton et al., 2013). Multi-fidelity optimization methods accelerate the above methodsby evaluating on a proxy task, e.g., using less training epochs or a subset of data (Domhan et al.,2015; Jaderberg et al., 2017; Kohavi & John, 1995; Li et al., 2017). 
These HPO methods arecomputationally expensive to search for deep learning models (Krizhevsky et al., 2012).Recently, gradient-based HPO methods have shown better efficiency (Baydin et al., 2018; Lorraineet al., 2020), by computing the gradient with respect to the hyperparameters. For example, Maclaurinet al. (2015) calculate the extract gradients w.r.t. hyperparameters. Pedregosa (2016) leveragesthe implicit function theorem to calculate approximate hypergradient. Following that, differentapproximation methods have been proposed (Lorraine et al., 2020; Pedregosa, 2016; Shaban et al.,2019). Despite of their efficiency, they can only be applied to differentiable hyperparameters suchas weight decay, but not non-differentiable hyperparameters, such as learning rate (Lorraine et al.,2020) or optimizer (Shaban et al., 2019). Our AutoHAS is not only as efficient as gradient-basedHPO methods but also applicable to both differentiable and non-differentiable hyperparameters.Moreover, we show significant improvements on state-of-the-art models with large-scale datasets,which supplements the lack of strong empirical evidence in previous HPO methods.Hyperparameter and Architecture Search. Few approaches have been developed for the jointsearching of hyperparameter and architecture (Klein & Hutter, 2019; Zela et al., 2018). However, theyfocus on small datasets and small search spaces. These methods are more computationally expensivethan AutoHAS. Concurrent to our AutoHAS, FBNet-V3 (Dai et al., 2020) learns an acquisitionfunction to predict the performance for the pair of hyperparameter and architecture. They requireto evaluate thousands of pairs to optimize this function and thus costs much more computationalresources than ours.5 C ONCLUSIONIn this paper, we proposed an automated and unified framework AutoHAS, which can efficientlysearch for both hyperparameters and architectures. AutoHAS provides a novel perspective of AutoMLalgorithms by generalizing the weight sharing technique from architectures to hyperparameters.Specifically, AutoHAS first unifies the representation of both continuous and categorical choices bythe discretizing strategy. Then AutoHAS leverages the weight sharing technique to train a single supermodel for different hyperparameter and architecture candidates. In parallel, AutoHAS introducesREINFORCE to learn a controller that can sample good hyperparameter and architecture candidates.Experimentally, AutoHAS significantly improves the baseline models on seven datasets. For thehighly-optimized ResNet/EfficientNet, AutoHAS improves ImageNet top-1 accuracy by 0.8%; forother less-optimized scenarios (e.g., Oxford Flower), it improves the accuracy by 11.4%.8Under review as a conference paper at ICLR 2021 | q0Z_u5fJyS | Another tricky paper | 4: Ok but not good enough - rejection |
This paper proposes a search framework that is very similar to "Neural Architecture Search with Reinforcement Learning", except that the authors claim their method can search discrete training hyper-parameters. The authors evaluate their method on several datasets and claim to achieve SoTA results.
1. REINFORCE vs. Bayesian Optimization and other derivative-free optimizations
I strongly encourage the authors to take a comprehensive review of the literature on policy gradients (REINFORCE) and derivative-free optimization. There is a weird trend in the NAS community of reinventing the wheel in the search. I believe several claims made by the authors are questionable:
a. Sample-efficiency is a well-known issue in RL, and RL usually requires millions of trajectories before working well. The authors now claim that a well-known policy gradient method is actually the most efficient one. In fact, [1] also uses REINFORCE to update the controller, and it required a lot of samples to work.
b. The comparison to HPO methods is unfair and several claims are wrong. First, I'd like to clarify one point: in derivative-free optimization, we maximize f(x) s.t. some constraints, and x can be anything, including the configuration of the training pipeline, architecture hyper-parameters, etc. So, adding training hyper-parameters into x and optimizing f(x) is not a well-justified research problem. Besides, as you argue in many places in the paper, existing derivative-free solvers support the search over a mixture of [continuous, discrete] variables. Check this package for example: https://github.com/facebookresearch/nevergrad.
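For instance, a mixed continuous/categorical space can be declared and optimized directly, e.g. something along these lines (a rough sketch; the exact nevergrad API may differ across versions, and the objective is a stand-in for "train and return the validation error"):

```python
import nevergrad as ng

def objective(lr, optimizer_name, kernel_size):
    # placeholder for "train a model with this configuration and return 1 - val accuracy"
    return abs(lr - 0.01) + (optimizer_name != "sgd") + (kernel_size - 3) ** 2

space = ng.p.Instrumentation(
    lr=ng.p.Log(lower=1e-4, upper=1.0),                       # continuous hyper-parameter
    optimizer_name=ng.p.Choice(["sgd", "adam", "rmsprop"]),   # categorical hyper-parameter
    kernel_size=ng.p.Choice([3, 5, 7]),                       # discrete architecture choice
)
opt = ng.optimizers.OnePlusOne(parametrization=space, budget=100)
recommendation = opt.minimize(objective)
print(recommendation.kwargs)
```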
You can see some papers claiming they can do NAS really fast, e.g. [2], simply because they are using a supernet or the bi-level optimization in DARTS. They terminate the training of the supernet early, then use some tricks to fine-tune the final architecture to a reasonable result (simply because the search space is well defined). That does not necessarily mean HPO methods cannot be applied with these tricks; in fact, using HPO together with a supernet has achieved far better results.
FYI, you can get gradients in the discrete space using finite differences. Therefore, calculating gradients over discrete variables cannot be counted as a contribution.
c. Questionable experimental results: I strongly suspect the Bayesian Optimization baseline in Fig. 2 is not set up correctly. Please also plot the figure as a function of the number of samples. If the authors use any tricks to reduce the search time for their agent, please also apply them to Bayesian Optimization to ensure a fair apples-to-apples comparison.
2. The ResNet-50 baseline used in Sec. 3.2 is questionable; please follow the setup below:
https://github.com/rwightman/pytorch-image-models/blob/master/results/results-imagenet.csv, where they achieve 79.039 top-1 accuracy with ResNet-50.
3. The ImageNet results are far from SoTA: [3] shows a 300 MFLOPs model achieving 79.6 top-1 accuracy. I understand you may use different tricks, but given the current situation, it is really hard for a reviewer to judge whether two papers use the same tricks.
4. I'm not sure it is still meaningful to claim reducing NAS search time from 5 hours to 1 hour. Training a CIFAR-10 model from scratch to a SoTA result takes 3 days. Has NAS really become a task even easier than training a single model? Do you really believe that, or has NAS perhaps over-exploited our prior knowledge from the development of CNNs? If I draw the first sample from a well-defined search space and apply lots of hacks to boost the network's performance to a reasonable level, does it make sense to claim NAS in 1 second?
In summary, this paper reads more like an engineering study than rigorous scientific research. My main concern is that it does not provide any good insights.
[1] Neural Architecture Search with Reinforcement Learning
[2] Searching for A Robust Neural Architecture in Four GPU Hours
[3] Neural Architecture Transfer
| 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | <|im_start|>system
You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|>
<|im_start|>user
### Paper Title
AutoHAS: Efficient Hyperparameter and Architecture Search
### Paper Abstract
Deep learning models often require extensive efforts in optimizing hyperparameters and architectures. Standard hyperparameter optimization methods are expensive because of their multi-trial nature: different configurations are tried separately to find the best. In this paper, we propose AutoHAS, an efficient framework for both hyperparameter and architecture search. AutoHAS generalizes the concept of efficient architecture search, ENAS and DARTS, to hyperparameter search and hence can jointly optimize both in a single training. A key challenge in such generalization is that ENAS and DARTS are designed to optimize discrete architecture choices, whereas hyperparameter choices are often continuous. To tackle this challenge, we discretize the continuous space into a linear combination of multiple categorical basis. Furthermore, we extend the idea of weight sharing and augment it with REINFORCE to reduce its memory cost. In order to decouple the shared network weights and controller optimization, we also propose to create temporary weights for evaluating the sampled hyperparameters and updating the controller. Experimental results show AutoHAS can improve the ImageNet accuracy by up to 0.8% for highly-optimized state-of-the-art ResNet/EfficientNet models, and up to 11% for less-optimized models. Compared to random search and Bayesian search, AutoHAS consistently achieves better accuracy with 10x less computation cost.
### Paper Keywords
["HPO", "NAS", "AutoML"]
### Paper Content
ABSTRACTDeep learning models often require extensive efforts in optimizing hyperparametersand architectures. Standard hyperparameter optimization methods are expensivebecause of their multi-trial nature: different configurations are tried separatelyto find the best. In this paper, we propose AutoHAS, an efficient framework forboth hyperparameter and architecture search. AutoHAS generalizes the conceptof efficient architecture search, ENAS and DARTS, to hyperparameter search andhence can jointly optimize both in a single training. A key challenge in such gener-alization is that ENAS and DARTS are designed to optimize discrete architecturechoices, whereas hyperparameter choices are often continuous. To tackle thischallenge, we discretize the continuous space into a linear combination of multiplecategorical basis. Furthermore, we extend the idea of weight sharing and augmentit with REINFORCE to reduce its memory cost. In order to decouple the sharednetwork weights and controller optimization, we also propose to create temporaryweights for evaluating the sampled hyperparameters and updating the controller.Experimental results show AutoHAS can improve the ImageNet accuracy by up to0.8% for highly-optimized state-of-the-art ResNet/EfficientNet models, and up to11% for less-optimized models. Compared to random search and Bayesian search,AutoHAS consistently achieves better accuracy with 10x less computation cost.1 I NTRODUCTIONDeep learning models require intensive efforts in optimizing architectures and hyperparameters.Standard hyperparameter optimization methods, such as grid search, random search (e.g., Bergstra &Bengio (2012)) or Bayesian optimization (e.g., Snoek et al. (2012)), are inefficient because they aremulti-trial: different configurations are tried in parallel to find the best configuration. As these methodsare expensive, there is a trend towards more efficient, single-trial methods for specific hyperparameters.For example, the learning rate can be optimized with the hypergradient method (Baydin et al., 2018).Similarly, many architecture search methods started out multi-trial (Zoph & Le, 2017; Baker et al.,2017; Real et al., 2019), but more recent proposals are single-trial (Pham et al., 2018; Liu et al., 2019).These efficient methods, however, sacrifice generality: each method only works for one aspect or asubset of the hyperparameters or architectures.In this paper, we generalize those efficient, single-trial methods to include both hyperparameters andarchitectures1. One important benefit of the generalization is that we can have a general, efficientmethod for hyperparameter optimization as a special case. Another benefit is that we can now jointlysearch for both hyperparameters and architectures in a single model. Practically, this means that ourmethod is an improvement over neural architecture search (NAS) because each model can potentiallybe coupled with its own best hyperparameters, thus achieving comparable or even better performancethan existing NAS with fixed hyperparameters.To this end, we propose AutoHAS, an efficient hyperparameter and architecture search framework.It is, to the best of our knowledge, the first method that can efficiently handle architecture space,hyperparameter space, or the joint search space. A challenge here is that architecture choices (e.g.kernel size) are often categorical values whereas hyperparameter choices (e.g. 
learning rate) are1In this paper, hyperparameters refer all design choices that will affect the training procedure of a model,such as learning rate, weight decay, optimizer, dropout, augmentation policy, etc.1Under review as a conference paper at ICLR 2021Compute W* using W and HPValidationAccuracyUpdate the AutoHAS controllerCandidate HP(RMSProp, LR=0.1)Candidate ArchitectureWTrainingLossSampleCandidate ArchitectureUpdate W using the sampled HPSampleLayer-0Layer-1Layer-2SuperModelLayer-0Layer-1Layer-2Candidate ArchitectureAutoHASController*WFigure 1: The overview of AutoHAS method. LEFT: Each candidate architecture’s weights areshared with a super model, where each candidate is a sub model within this super model. RIGHT:During the search, AutoHAS alternates between optimizing the shared weights of super model Wandupdating the controller. It also creates temporary weights Wby optimizing the sampled candidatearchitecture using the sampled candidate hyperparameter (HP). This Wwill be used to compute thevalidation accuracy as a reward so as to update the AutoHAS controller to select better candidates.Finally,Wis discarded after updating the controller so as not to affect the original W.often continuous values. To address the mixture of categorical and continuous search spaces, we firstdiscretize the continuous hyperparameters into a linear combination of multiple categorical basis. Thediscretization allows us to unify architecture and hyperparameter choices during search. As explainedbelow, we will use a reinforcement learning (RL) method to search over these discretized choices inFig. 1. The probability distribution over all candidates is naturally learnt by the RL controller, and itis used as the coefficient in the linear combination to find the best architecture and hyperparameters.AutoHAS uses the weight sharing technique proposed by (Pham et al., 2018; Liu et al., 2019). Themain idea is to train a super model, where each candidate in the architecture space is its sub-model.Using a super model can avoid training millions of candidates from scratch (Liu et al., 2019; Dong& Yang, 2019a; Cai et al., 2019; Pham et al., 2018). AutoHAS extends its scope from architecturesearch to both architecture and hyperparameter search. We not only share the weights of super modelwith each architecture but also share this super model across hyperparameters. At each search step,AutoHAS optimizes the sampled sub-model by a combination of the sampled hyperparameter choices,and the shared weights of super model serves as a good initialization for all hyperparameters at thenext step of search (see Fig. 1 and Sec. 2). In order to decouple the shared network weights ( Win Fig. 1) and controller optimization, we also propose to create temporary weights ( Win Fig.1) for evaluating the sampled hyperparameters and updating the controller. With weight sharing,AutoHAS reduces the search cost by an order of magnitude than random search and Bayesian search.In experiments, AutoHAS shows non-trivial improvements on seven datasets, such as 0.8% accuracygain on highly-optimized EfficientNet and 11% accuracy gain on less-optimized models.2 A UTOHASIn this section, we elaborate the design philosophy of AutoHAS. We introduce the background ofAutoHAS in Sec. 2.1, how to represent architectures and hyperparameters in a unified way in Sec. 2.2,how to search in Sec. 2.3, and how to derive the final architectures and hyperparameters in Sec. 
2.4.2.1 P RELIMINARIESAutoHAS should be able to handle the general case of NAS and HPO – jointly find architecture and hyperparameters hthat achieve high performance on the validation set. This objective can beformulated as a bi-level optimization problem:min;hL(;h;!;Dval) s:t: !=fh(;!0;Dtrain); (1)whereLis the objective function (e.g., cross-entropy loss) and !0is the initial weights of thearchitecture .Dtrain andDvaldenote the training data and the validation data, respectively. fhrepresents the algorithm with hyperparameters hto obtain the optimal weights !, such as using SGDto minimize the training loss. In that case, !=fh(;!0;Dtrain) = arg min!L(;h;!0;Dtrain).We can also use HyperNetwork (Ha et al., 2017) to generate weights !.2Under review as a conference paper at ICLR 2021AutoHAS generalizes both NAS and HPO by introducing a broader search space. On one-hand, NASis a special case of HAS, where the inner optimization fh(;!0;Dtrain)uses fixedandhto opti-mize min!L(;h;!; Dtrain). On the other, HPO is a special case of HAS, where is fixed in Eq. (1).2.2 U NIFIED REPRESENTATION OF THE SEARCH SPACE IN AUTOHASThe search space in AutoHAS is a Cartesian product of the architecture and hyperparameter candidates.To search over this mixed search space, we need a unified representation of different searchablecomponents, i.e., architectures, learning rates, optimizers, etc.Architectures Search Space. We use the simplest case as an example. First of all, let the set ofpredefined candidate operations (e.g., 3x3 convolution, pooling, etc.) be O=fO1;O2;:::;Ong,where the cardinality of Oisnfor each layer in the architecture. Suppose an architecture isconstructed by stacking multiple layers, each layer takes a tensor Fas input and output (F),which serves as the next layer’s input. 2O denotes the operation at a layer and might be differentat different layers. Then a candidate architecture is essentially the sequence for all layers fg.Further, a layer can be represented as a linear combination of the operations in Oas follows:(F) =Xni=1CiOi(F) s:t:Xni=1Ci= 1;Ci2f0;1g (2)whereCi(thei-th element of the vector C) is the coefficient of operation Oifor a layer.Hyperparameter Search Space. Now we can define the hyperparameter search space in a similarway. The major difference is that we have to consider both categorical and continuous cases:h=Xmi=1ChiBi s:t:Xmi=1Chi= 1; Chi2([0;1];if continuousf0;1g;if categorical(3)whereBis a predefined set of hyperparameter basis with the cardinality of mandBiis thei-thbasis inB.Chi(thei-th element of the vector Ch) is the coefficient of hyperparameter basis Bi. Ifwe have a continuous hyperparameter, we have to discretize it into a linear combination of basisand unify both categorical and continuous. For example, for weight decay, Bcould bef1e-1, 1e-2,1e-3g, and therefore, all possible weight decay values can be represented as a linear combination overB. For categorical hyperparameters, taking the optimizer as an example, Bcould befAdam, SGD,RMSPropg. In this case, a constraint on Chiis applied:Chi2f0;1gas in Eq. (3).2.3 A UTOHAS: E FFICIENT HYPERPARAMETER AND ARCHITECTURE SEARCHAlgorithm 1 AutoHAS TrainingInput: Randomly initialize WandPInput: Split the available data into two dis-joint sets: Dtrain andDval1:while not converged do2: Sample (;h2 B ) from the con-troller3: Estimate the quality Q(;h)as thereward to update controller by REIN-FORCE4:W fh(;W;Dtrain)5:end while6:Derive the final architecture and hy-perparameters hbyP(Sec. 
2.4)Given the discretizing strategy in Sec. 2.2, each candidatein the search space can be represented by the value of C=fCfor all layers ;Chfor all types of hyperparameter g,which represents the coefficients for all architecture andhyperparameter choices. As a result, AutoHAS convertsthe searching problem to obtaining the coefficients C.AutoHAS applies reinforcement learning together withweight sharing to search over the discretized space. Dur-ing search, we learn a controller to sample the candidatearchitecture and hyperparameters from the discretizedspace. In AutoHAS, this controller is parameterized bya collection of independent multinomial variables P=fPfor all layers ;Phfor all types of hyperparameter g2,which draws the probability distribution of the discretizedspace. AutoHAS also leverages a super model to shareweightsWamong all candidate architectures, where each candidate is a sub-model in this supermodel (Pham et al., 2018; Liu et al., 2019). Furthermore, AutoHAS extends the scope of weightsharing from architecture to hyperparameters, where Walso serves as the initialization for thealgorithmfh.We describe AutoHAS in Algorithm 1. It alternates between learning the shared weights Wandlearning the controller using REINFORCE (Williams, 1992). Specifically, at each iteration, the2PandPharen- andm-dimensional vectors, respectively. Each vector sums up to 1.3Under review as a conference paper at ICLR 2021controller samples a candidate — an architecture and basis hyperparameter h2B. We estimate itsqualityQ(;h)by utilizing the temporary weights to maintain the value of fh(;W;Dtrain). Usingtemporary weights, we can measure the validation accuracy of andhasQ(;h), and in the sametime, avoid the side effect of fh(;W;Dtrain)w.r.t.W. In our experiment, fh(;W;Dtrain)isapproximately calculated as one-step gradient descent using the algorithm determined by h. Thisestimated quality is used as a reward to update the controller’s parameters Pvia REINFORCE. Then,we optimize the shared weights W, where the weights corresponding to the sampled architecture Wis updated as fh(;W;Dtrain).2.4 D ERIVING HYPERPARAMETERS AND ARCHITECTUREAfter AutoHAS optimizes P=fP;Phgvia Algorithm 1, we can derive the coefficient Cas follows:C=onehot(arg maxiP); (4)Ch=(Phif continuousonehot(arg maxiPh)if categorical; (5)Together with Eq. (2)and Eq. (3), we can derive the final architecture and hyperparameters h.Intuitively speaking, the selected operation in the final architecture has the highest probability overother candidates, and so does the categorical hyperparameter. For the continuous hyperparameter, thefinal one is the weighted sum of the learnt probability Phwith its basisB.To evaluate whether the AutoHAS-discovered andhis good or not, we will use hto re-trainonthe whole training set and report its performance on the test sets.2.5 D ISCUSSIONGeneralizability . AutoHAS can be applied to searching for architecture only, hyperparameter only,or both. Moreover, unlike previous HPO methods that require the hyperparamter optimizationformulation fhto be differentiable for computing gradient w.r.t. the hyperparameters, AutoHAStreats the inner optimization fhas a block-box, and thus is applicable for both differentiable andnon-differentiable hyperparmaters.Phase-wise AutoHAS . It is challenging to search over the large joint HAS space. Since the sampledarchitecture and hyperparameters change at every iteration, the gradients w.r.t. the shared weights insuper model might dramatically change. 
Consequently, the shared weights can not be trained welland insufficiently indicative of the RL reward. To alleviate this problem, we propose an alternative,i.e., Phase-wise AutoHAS, which split the whole search procedure into two (or multiple) phases.In the first phase, it will use Algorithm 1 to search for the choices of some components and keepother components fixed as the default value. In the second phase, it will re-use the discoveredcomponents in the first phase and search for others. We found this Phase-wise AutoHAS works betterthan (single-phase) AutoHAS in most cases, at the cost of doubling computational resources. Moreempirical analysis can be found in Sec. 3.3.Why do we need temporary weights? There is an interaction between architecture optimizationand hyperparameter optimization in AutoHAS. If we implement fhin a straightforward solution, itwill overwrite the original weights Wwhen we compute fh. Consequently, the updating of Winthe red branch in Fig. 1 becomes unsafe. Here, we utilize the temporary weights Wto maintain thevalue offh. This strategy allows us to decouple the training of shared weights and the update of theAutoHAS controller, and thus effectively optimize over the hyperparameter space.3 E XPERIMENTSWe evaluate AutoHAS on seven datasets, including two large-scale datasets, ImageNet (Deng et al.,2009) and Places365 (Zhou et al., 2017). We will briefly introduce the experimental settings inSec. 3.1. We compare AutoHAS with other SOTA methods/models in Sec. 3.2. Lastly, we ablativelystudy AutoHAS in Sec. 3.3.4Under review as a conference paper at ICLR 20213.1 E XPERIMENTAL SETTINGSDatasets . We leverage seven datasets to comprehensively evaluate our AutoHAS. Their details (Denget al., 2009; Zhou et al., 2017; Xiao et al., 2016; Krizhevsky & Hinton, 2009; Krause et al., 2013;Nilsback & Zisserman, 2010) are described in Table 1.Table 1: Benchmark datasets – ImageNet and Places365 are two commonly used large-scale datasetsfor image classification, while the other five are small-scaled datasets.Name #Classes #Train Data #Eval Data Hold-out Dtrain Hold-out DvalImageNet 1000 1.28M 50K 1.23M 50KPlaces365 365 1.8M 50K 1.69M 112KCIFAR-10 10 50K 10K 45K 5KCIFAR-100 100 50K 10K 45K 5KStanford Cars 196 8144 8041 6494 1650Oxford Flower 102 2040 6149 1020 1020SUN-397 397 19850 19850 15880 3970Searching settings. We call the hyperparameters that control the behavior of AutoHAS as metahyperparameters – the optimizer and learning rate for RL controller, the momentum ratio for RLbaseline, and the warm-up ratio. Warm-upping the REINFORCE algorithm indicates that we do notupdate the parameters of the controller at the beginning. In addition, when the search space includesarchitecture choices, we also uses the warm-up technique described in Bender et al. (2020). Forthese meta hyperparameters, we use Adam, momentum as 0.95, warm-up ratio as 0.3. The metalearning rate is selected from f0.01, 0.02, 0.05, 0.1 gaccording to the validation performance. Whenthe architecture choices are in the search space, we will use the absolute reward function (Benderet al., 2020) to constrain the FLOPs of the searched model to be the same as the baseline model.For experiments on ImageNet and Places365, we use the batch size of 4096, search for 100 epochs,and use 44 Cloud TPU V3 chips. For experiments on other datasets, we use the batch size of 512,search for 15K steps, and use 2 2 Cloud TPU V3 chips.Training settings . 
Once we complete the searching procedure, we re-train the model using theAutoHAS-discovered hyperparameter and architecture. For the components that are not searched for,we keep it the same as the baseline models. For each experiment, we run three times and report themean (and variance) of the accuracy.3.2 C OMPARISON WITHHPO AND NAS1357911131517192123Search Time Cost (Hours)697071727374ImageNet Accuracy (%)AutoHASMobileNetV2IFTHGDRandom SearchBayesian OptimizationFigure 2: Comparison between AutoHAS andprevious HPO methods on ImageNet. AutoHASachieves better accuracy than HGD, and uses muchless search time cost than others.AutoHAS shows better performance thanother HPO methods. We choose MobileNet-V2 as the baseline model. We search for themixup ratio from [0, 0.2] and drop-path ratiofrom [0, 0.5] for each MBConv layer. We usethe training schedule in (Bender et al., 2020).Results compared with four representative HPOmethods are shown in Fig. 2. Multi-trial searchmethods, Random Search (Bergstra & Bengio,2012) or Bayesian optimization (Golovin et al.,2017), must train and evaluate many candidates,and thus are inefficient. Even using 10 moretime, they still cannot match the accuracy ofAutoHAS. HGD (Baydin et al., 2018) can onlysearch for the learning rate and the searchedlearning rate is much worse than the baseline.IFT (Lorraine et al., 2020) is an efficientgradient-based HPO method. With the samesearch space, AutoHAS gets higher accuracy than IFT.AutoHAS is feasible for jointly searching hyperparameter and architecture. As a proof ofconcept for the joint search, we follow MNasNet (Tan et al., 2019) and ProxylessNAS (Cai et al.,5Under review as a conference paper at ICLR 20212019) to design a architecture search space (i.e., kernel size f3x3, 5x5gand expansion ratio f3, 6gontop of MobileNetV2), and a joint search space with additional hyperparmater search options (i.e.,mixup and dropout ratio). We then compare AutoHAS performance on these two search spaces. Witharchitecture-only search, AutoHAS achieves comparable results (e.g., 74% accuracy @ 300M flops)as MnasNet/ProxylessNAS, but with the joint search, AutoHAS can further improve accuracy by0.2% with the same FLOPs, suggesting the potential benefit of jointly optimizing architectures andhyperparameters. Notebly, NAS methods are infeasible to optimze the hyperparameters.Table 2: AutoHAS improves ResNet-50 and EfficientNet-B0 on ImageNet – For each training, werepeat the training three times and the variance is less than 0:16.Model Method #Params #FLOPs Top-1 AccuracyResNet-50 (He et al., 2016)Human 25.6 M 4110 M 77.20AutoHAS 25.6 M 4110 M 77.83 (+0.63)EfficientNet-B0 (Tan & Le, 2019)NAS 5.3 M 398 M 77.15AutoHAS 5.2 M 418 M 77.92 (+0.77)AutoHAS improves SoTA ImageNet models. To investigate the effect of AutoHAS over the state-of-the-art models. We apply AutoHAS to two strong baselines. Firstly, we choose ResNet-50.The baseline strategy is to train it by 200 epochs, start the learning rate at 1.6 and decay it by 0.1for every13of the whole training procedure, use EMA with the decay rate of 0.9999, and applySGD with the momentum of 0.9. This can provide higher accuracy than the original paper. Forreference, the reported top-1 accuracy is 76.15% for ResNet-50 in TorchVision, whereas our baselineis 77.2% accuracy. Since previous methods usually do not tune the architecture of ResNet-50, weonly use AutoHAS to search for its hyperparameters including learning rate and mixup ratio for dataaugmentation. 
From Table 2, AutoHAS improves this strong baseline by 0.63%.Secondly, we choose a NAS-searched model, EfficientNet-B0. The baseline strategy is to train itby 600 epochs and use the same learning rate schedule as in the original paper. As EfficientNet-B0already tunes the kernel size and expansion ratio, we choose a different architecture space. Specifically,in each MBConv layer, we search for the number of groups for all the 1-by-1 convolution layer, thenumber of depth-wise convolution layer, whether to use a residual branch or not. In terms of thehyperparameter space, we search for the per-layer drop-connect ratio, mixup ratio, and the learningrate. We use phase-wise AutoHAS to first search for the architecture and then for the hyperparameters.From Table 2, we improves the strong EfficientNet-B0 baseline by 0.77% ImageNet top-1 accuracy.100 200 300 400 500 600Parameters (MB)535455565758Places365 Accuracy (%)AlexNetGoogleLeNetVGG-16ResNet-152ResNeXt-101CRU-Net-116DPN-92 (32x3d)B0B0 + AutoHASFigure 3: AutoHAS improves accuracy by 1%for EfficientNet-B0 on Places365.AutoHAS improves SoTA Places36 models. Be-side ImageNet, we have also evaluated Auto-HAS on another popular dataset: Places365 (Zhouet al., 2017). Similarly, we apply AutoHASto EfficientNet-B0 to search for better architec-tures and hyperparameters on this dataset. Fig. 3shows the results: Although EfficientNet-B0 is astrong baseline with significantly better parameter-accuracy trade-offs than other models, AutoHAScan still further improve its accuracy 1% and ob-tain a new state-of-the-art accuracy on Places365.Note that B0andB0 + AutoHAS only uses sin-gle crop evaluation, while other models use 10crops.3.3 A BLATION STUDIESWhy choose RL instead of a differentiable strategy? Differentiable search methods have beenextensively studied for its simplicity in many previous literature (Liu et al., 2019; Dong & Yang, 2019a;Wan et al., 2020; Xie et al., 2019), but these methods usually require much higher memory cost inorder to train the entire super model. In our AutoHAS framework, we employ a simple reinforcementlearning algorithm – REINFORCE Williams (1992) – to optimize the controller: instead of training6Under review as a conference paper at ICLR 2021the whole super model, we only train a subset of the super model and therefore significantly reduce thetraining memory cost. Notably, the REINFORCE could also be simply replaced by a differentiable-based algorithm with the supervision of validation loss. We investigate the difference betweendifferentiable and REINFORCE search in Table 3. We use a small variant of MobileNetV2 with depthmultiplier 0.3 as our baseline model (in order to fit our device memory constraint for the differentiableapproach), and then apply them to the same search space. Not surprisingly, differentiable searchrequires much higher memory cost (6.1x more than baseline) as it needs to maintain the feature orgradient tensors for all the super model, whereas our REINFORMCE-based AutoHAS is much morememory efficient: reducing the memory cost by 70% than the differentiable approach. Empirically,we observe they achieve similar accuracy gains in this case, but AutoHAS enables us to search formuch larger models such as EfficientNet-B0 and ResNet-50 as shown in Table 2.Table 3: Differentiable Search vs. AutoHAS REINFORCE Search – Both are applied to the samebaseline model with the same hyperparamter and architecture search space. Baseline model has nosearch cost, but we list its standalone training cost as a reference. 
Compared to the differentiablesearch, our AutoHAS achieves slightly better accuracy with much less search memory cost.#Params #FLOPs Accuracy Search Cost(M) (M) (%) Memory(GB) Time(Hour)Baseline model 1.5 35.9 50.96 (1.0) (1.4)Differentiable 1.5 36.1 52.17 6.1 2.9AutoHAS(REINFORCE) 1.5 36.3 53.01 1.8 1.7AutoHAS on different search spaces and datasets . To evaluate the generalization ability, we haveevaluated AutoHAS in different hyperparameter and architecture spaces for five more datasets. Forsimplicity, we choose the standard MobileNetV2 as our baseline model. Table 4 shows the results. Weobserve: (1) The accuracy gains for many of these datasets are much larger than ImageNet/Places365,possible because the hyperparameter and architecture of the baseline are not heavily optimized onthese scenarios, leaving us a larger headroom for performance optimization. In particular, AutoHASachieves up to 11% accuracy gain on Flower dataset, suggesting that AutoHAS could be more usefulfor less optimized or new model/dataset scenarios. (2) Joint search and phase-wise search havesimilar performance, possibly due to the difficulty of navigating through a large and complex searchspace and the interactions between different hyperparamters. Suppose phase-wise search has twophases with search space size O(m) and O(n), then its total search space size is O(m + n), but itscorresponding joint search space size would be much larger O(m * n), making the joint searchproblem much more difficult. While this paper mainly focuses on unifying the architecture andhyperparameter search, it is still an open challenge how to navigate through the very large joint searchspace while still obtaining the optimal solution, which would be our future work.Table 4: AutoHAS Accuracy for Different Search Space on five Datasets – Weight decay and MixUpare for hyperparameters, and Arch is for architectures. joint indicates the joint search; phaseindicates the phase-wise search. Each experiment is repeated three times and the average accuracy isreported (standard deviation is about 0.2%).Image Classification Top-1 Accuracy (%)CIFAR-10 CIFAR-100 Stanford Cars Oxford Flower SUN-397Baseline 94.1 76.3 83.8 74.0 46.3WeightDecay 95.0 77.8 89.0 84.4 49.1MixUp 94.1 77.0 85.2 79.6 47.4Arch 94.5 76.8 84.1 76.4 46.3MixUp + Arch (joint) 94.4 77.4 84.8 78.2 47.3MixUp + Arch (phase) 94.4 77.6 85.5 79.6 48.3WeightDecay + MixUp (joint) 95.0 (+0.9) 78.4 (+2.1) 89.9 84.4 50.5WeightDecay + MixUp (phase) 94.9 78.2 90.5 (+6.8) 85.4 (+11.4) 50.8 (+4.5)7Under review as a conference paper at ICLR 20214 R ELATED WORKSNeural Architecture Search (NAS). Since the seminal works (Baker et al., 2017; Zoph & Le, 2017)show promising improvements over manually designed architectures, more efforts have been devotedto NAS. The accuracy of NAS models has been improved by carefully designed search space (Zophet al., 2018), better search method (Real et al., 2019), or compound scaling (Tan & Le, 2019). Themodel size and latency have been reduced by Pareto optimization (Tan et al., 2019; Wu et al., 2019;Cai et al., 2019; 2020) and enlarged search space of neural size (Cai et al., 2020; Dong & Yang,2019b). The efficiency of NAS algorithms has been improved by weight sharing (Pham et al., 2018),differentiable optimization (Liu et al., 2019), or stochastic sampling (Dong & Yang, 2019a; Xie et al.,2019). 
As these NAS methods use fixed hyperparamters during search, we have empirically observedthat they often lead to sub-optimal results, because different architectures tend to favor their ownhyperparameters. In addition, even if the manual optimization of architecture design is avoided byNAS, they still need to tune the hyperparameters after a good architecture is discovered.Hyperparameter optimization (HPO). Black-box and multi-fidelity HPO methods have a longstanding history (Bergstra & Bengio, 2012; Hutter, 2009; Hutter et al., 2011; 2019; Kohavi & John,1995; Hutter et al., 2019). Black-box methods, e.g., grid search and random search (Bergstra &Bengio, 2012), regard the evaluation function as a black-box. They sample some hyperparametersand evaluate them one by one to find the best. Bayesian methods can make the sampling procedurein random search more efficient (Jones et al., 1998; Shahriari et al., 2015; Snoek et al., 2015).They employ a surrogate model and an acquisition function to decide which candidate to evaluatenext (Thornton et al., 2013). Multi-fidelity optimization methods accelerate the above methodsby evaluating on a proxy task, e.g., using less training epochs or a subset of data (Domhan et al.,2015; Jaderberg et al., 2017; Kohavi & John, 1995; Li et al., 2017). These HPO methods arecomputationally expensive to search for deep learning models (Krizhevsky et al., 2012).Recently, gradient-based HPO methods have shown better efficiency (Baydin et al., 2018; Lorraineet al., 2020), by computing the gradient with respect to the hyperparameters. For example, Maclaurinet al. (2015) calculate the extract gradients w.r.t. hyperparameters. Pedregosa (2016) leveragesthe implicit function theorem to calculate approximate hypergradient. Following that, differentapproximation methods have been proposed (Lorraine et al., 2020; Pedregosa, 2016; Shaban et al.,2019). Despite of their efficiency, they can only be applied to differentiable hyperparameters suchas weight decay, but not non-differentiable hyperparameters, such as learning rate (Lorraine et al.,2020) or optimizer (Shaban et al., 2019). Our AutoHAS is not only as efficient as gradient-basedHPO methods but also applicable to both differentiable and non-differentiable hyperparameters.Moreover, we show significant improvements on state-of-the-art models with large-scale datasets,which supplements the lack of strong empirical evidence in previous HPO methods.Hyperparameter and Architecture Search. Few approaches have been developed for the jointsearching of hyperparameter and architecture (Klein & Hutter, 2019; Zela et al., 2018). However, theyfocus on small datasets and small search spaces. These methods are more computationally expensivethan AutoHAS. Concurrent to our AutoHAS, FBNet-V3 (Dai et al., 2020) learns an acquisitionfunction to predict the performance for the pair of hyperparameter and architecture. They requireto evaluate thousands of pairs to optimize this function and thus costs much more computationalresources than ours.5 C ONCLUSIONIn this paper, we proposed an automated and unified framework AutoHAS, which can efficientlysearch for both hyperparameters and architectures. AutoHAS provides a novel perspective of AutoMLalgorithms by generalizing the weight sharing technique from architectures to hyperparameters.Specifically, AutoHAS first unifies the representation of both continuous and categorical choices bythe discretizing strategy. 
Then AutoHAS leverages the weight sharing technique to train a single supermodel for different hyperparameter and architecture candidates. In parallel, AutoHAS introducesREINFORCE to learn a controller that can sample good hyperparameter and architecture candidates.Experimentally, AutoHAS significantly improves the baseline models on seven datasets. For thehighly-optimized ResNet/EfficientNet, AutoHAS improves ImageNet top-1 accuracy by 0.8%; forother less-optimized scenarios (e.g., Oxford Flower), it improves the accuracy by 11.4%.8Under review as a conference paper at ICLR 2021<|im_end|>
<|im_start|>assistant
### Review Title
Another tricky paper
### Review Text
This paper proposes a search framework that is very similar to "Neural Architecture Search with Reinforcement Learning", except that the authors claim their method can search discrete training hyper-parameters. The authors evaluate their method on several datasets and claims to achieve SoTA results. 1. REINFORCE V.S. Bayesian Optimization and other derivative free optimizations I strongly encourage authors to take a comprehensive review of literatures in policy gradients (REINFORCE), and derivative free optimizations. There is a weird trend in NAS community that re-makes the wheels in the search. I believe several claims made by the authors are questionable: a. Sample-efficiency is a well known issue in RL, and RL usually requires millions of trajectories before working well. The author now claims a well known policy gradient method is actually the most efficient one. In fact, [1] also uses REINFORCE to update the controller, and it required a lot of samples to work. b. the comparison to HPO methods is unfair and several claims are wrong. First, I'd like clarify one point, in derivative free optimization, we maximize f(x) s.t. some constraints, and x can be anything, including the configurations of training pipeline, architecture hyper-parameters, etc.. So, adding training hyper-parameters into x and optimize f(x) is not a well justified research problem. Besides, as you argue in many places in the paper, existing derivative free solvers support the search over a mixture of [continuous, discrete] variables. Check this package for example. https://github.com/facebookresearch/nevergrad. You can see some paper claiming they can do NAS really fast, e.g. [2], simply because they are using a supernet or using bi-level optimizations in DARTS. They terminate the training of supernet earlier, then use some tricks to fine-tune the final architecture to a reasonable result (simply because the search space is well defined). That does not necessarily mean HPO methods cannot be applied with these tricks; in fact, using HPO together with a supernet has achieved far better results. FYI, you can get gradients in the discrete space using finite difference. Therefore, calculating gradients over discrete variables can not be counted as a contribution. c. questionable experiment results: I highly doubt the Bayesian Optimization in Fig.2 is not setup correctly. Please also plot the figure by samples. If the authors use different tricks to reduce the search time for your agent, please also apply to Bayesian Optimization to ensure a fair apple-to-apple evaluation. 2. ResNet-50 baseline used in sec.3.2 is questionable, and please follow the setup below: https://github.com/rwightman/pytorch-image-models/blob/master/results/results-imagenet.csv, where they achieve 79.039 top-1 accuracy with resnet 50. 3. ImageNet results are far from SoTA: [3] shows 300 MFLOPS model achieve 79.6 top-1 accuracy. I understand you may use different tricks. but given the current situation, it is really hard for a reviewer to judge if two paper use the same tricks. 4. I'm not sure if it is still meaningful to claim NAS from 5 hours -> 1 hour. Training a CIFAR-10 model from scratch to SoTA result takes 3 days. Now NAS becomes a task even easier than training a model. Do you really believe that? or perhaps NAS has over exploited our prior knowledge in the development of CNN. 
If I draw the first sample from a well defined search space and apply lots of hacks to boost the network performance to a reasonable level, does it make sense claim NAS in 1 second? In summary, this paper is more like an engineering study, rather than a rigorous scientific research. My main concern is that this paper does not provide any good insights. [1] Neural Architecture Search with Reinforcement Learning [2] Searching for A Robust Neural Architecture in Four GPU Hours [3] Neural Architecture Transfer
### Review Rating
4: Ok but not good enough - rejection
### Review Confidence
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|>
<|im_end|> |
r1lI3ertwH | ICLR.cc/2020/Conference | 2020 | WHAT DATA IS USEFUL FOR MY DATA: TRANSFER LEARNING WITH A MIXTURE OF SELF-SUPERVISED EXPERTS | ["Xi Yan", "David Acuna", "Sanja Fidler"] | "Transfer learning has proven to be a successful way to train high performing deep learning models i(...TRUNCATED) | "[\"data\", \"mixture\", \"useful\", \"transfer\", \"target domain\", \"work\", \"relevant\", \"data(...TRUNCATED) | "ABSTRACTTransfer learning has proven to be a successful way to train high performing deeplearning m(...TRUNCATED) | HkgeoYJ5qS | Official Blind Review #2 | 3: Weak Reject | "This paper is focused on simplifying the use of larger datasets (via pretraining models) for the pu(...TRUNCATED) | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
|
S1L-hCNtl | ICLR.cc/2017/workshop | 2017 | Generative Adversarial Learning of Markov Chains | ["Jiaming Song", "Shengjia Zhao", "Stefano Ermon"] | "We investigate generative adversarial training methods to learn a transition operator for a Markov (...TRUNCATED) | ["Deep learning", "Unsupervised Learning"] | "ABSTRACTWe investigate generative adversarial training methods to learn a transition op-erator for (...TRUNCATED) | rkfqGpgie | Simple, appealing idea but no clear improvement over std. GANs | 6: Marginally above acceptance threshold | "The authors propose to use an adversarial objective to train a transition operator for a Markov cha(...TRUNCATED) | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
HyecJGP5ge | ICLR.cc/2017/conference | 2017 | NEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD | ["Sahil Garg", "Irina Rish", "Guillermo Cecchi", "Aurelie Lozano"] | "In this paper, we focus on online representation learning in non-stationary environments which may (...TRUNCATED) | "[\"Unsupervised Learning\", \"Computer vision\", \"Transfer Learning\", \"Optimization\", \"Applica(...TRUNCATED) | "ABSTRACTIn this paper, we focus on online representation learning in non-stationary envi-ronments w(...TRUNCATED) | SkDONYuVx | Simple interesting modified online dictionary learning | 7: Good paper, accept | "The authors propose a simple modification of online dictionary learning: inspired by neurogenesis, (...TRUNCATED) | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
Sks9_ajex | ICLR.cc/2017/conference | 2017 | "Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via (...TRUNCATED) | ["Sergey Zagoruyko", "Nikos Komodakis"] | "Attention plays a critical role in human visual experience. Furthermore, it has recently been demon(...TRUNCATED) | ["Computer vision", "Deep learning", "Supervised Learning"] | "ABSTRACTAttention plays a critical role in human visual experience. Furthermore, it hasrecently bee(...TRUNCATED) | H1BT0bwVe | Review | 6: Marginally above acceptance threshold | "The paper presented a modified knowledge distillation framework that minimizes the difference of th(...TRUNCATED) | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
B1lf43A5Y7 | ICLR.cc/2019/Conference | 2019 | How to learn (and how not to learn) multi-hop reasoning with memory networks | ["Jifan Chen", "Greg Durrett"] | "Answering questions about a text frequently requires aggregating information from multiple places i(...TRUNCATED) | ["NLP", "Reading Comprehension", "Memory Networks", "Multi-hop Reasoning"] | "ABSTRACTAnswering questions about a text frequently requires aggregating informationfrom multiple p(...TRUNCATED) | Byxo2ZUq3Q | Interesting investigation but insufficient proposition and results | 5: Marginally below acceptance threshold | "The paper proposes to investigate the well-known problem of memory network learning and more precis(...TRUNCATED) | "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the rel(...TRUNCATED) | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
H1Xw62kRZ | ICLR.cc/2018/Conference | 2018 | Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis | ["Rudy Bunel", "Matthew Hausknecht", "Jacob Devlin", "Rishabh Singh", "Pushmeet Kohli"] | "Program synthesis is the task of automatically generating a program consistent with\na specificatio(...TRUNCATED) | ["Program Synthesis", "Reinforcement Learning", "Language Model"] | "ABSTRACTProgram synthesis is the task of automatically generating a program consistent witha specif(...TRUNCATED) | Hk4_Jw9xG | Good paper, could be more clearly written. | 5: Marginally below acceptance threshold | "The authors consider the task of program synthesis in the Karel DSL. Their innovations are to use r(...TRUNCATED) | 3: The reviewer is fairly confident that the evaluation is correct | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
KCzRX9N8BIH | ICLR.cc/2021/Conference | 2021 | It Is Likely That Your Loss Should be a Likelihood | ["Mark Hamilton", "Evan Shelhamer", "William T. Freeman"] | "Many common loss functions such as mean-squared-error, cross-entropy, and reconstruction loss are u(...TRUNCATED) | "[\"Adaptive Losses\", \"Outlier Detection\", \"Adaptive Regularization\", \"Recalibration\", \"Robu(...TRUNCATED) | "ABSTRACTMany common loss functions such as mean-squared-error, cross-entropy, and re-construction l(...TRUNCATED) | h4g2A3zgEa2 | An interesting idea but writing and presentation should be improved. | 6: Marginally above acceptance threshold | "# Summary:\nThe paper proposes the use of complete parametrized likelihoods for providing supervisi(...TRUNCATED) | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
rkl5CjC9Fm | ICLR.cc/2019/Conference | 2019 | Dual Importance Weight GAN | ["Gahye Lee", "Seungkyu Lee"] | "Generative Adversarial Networks (GAN) are trained to generate a sample image of interest. To this e(...TRUNCATED) | "[\"gan\", \"generative network\", \"distribution\", \"dual importance weights\", \"generated sample(...TRUNCATED) | "ABSTRACTGenerative Adversarial Networks (GAN) are trained to generate a sample imageof interest. To(...TRUNCATED) | BJxGzPMc2X | Exploiting importance sampling in the latent space of auto-encoder to alleviate mode collapse | 5: Marginally below acceptance threshold | "This paper proposed a new regularizer for the objective of GAN’s generator, with the purpose of a(...TRUNCATED) | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | "<|im_start|>system\nYou are a helpful assistant that reviews papers and provides feedback on their (...TRUNCATED) |
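Records like the ones above can be loaded with the Hugging Face `datasets` library. The snippet below is only a minimal sketch: the repository id is a placeholder (substitute this dataset's actual id), and the field names should be inspected rather than assumed.

```python
from datasets import load_dataset

# "org/paper-reviews" is a placeholder repository id, not the real one.
ds = load_dataset("org/paper-reviews", split="train")

example = ds[0]
print(example.keys())  # inspect the actual field names of a record
```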