Columns: title (string, lengths 28–121), abstract (string, lengths 697–11.5k), introduction (string, lengths 382–11.9k)
Ding_Network_Expansion_for_Practical_Training_Acceleration_CVPR_2023
Abstract Recently, the sizes of deep neural networks and training datasets have both increased drastically in pursuit of better performance in a practical sense. With the prevalence of transformer-based models in vision tasks, even more pressure is laid on GPU platforms to train these heavy models, which consumes a large amount of time and computing resources. Therefore, it is crucial to accelerate the training process of deep neural networks. In this paper, we propose a general network expansion method to reduce the practical time cost of the model training process. Specifically, we utilize both width- and depth-level sparsity of dense models to accelerate the training of deep neural networks. First, we pick a sparse sub-network from the original dense model by reducing the number of parameters as the starting point of training. Then the sparse architecture gradually expands during the training procedure and finally grows into a dense one. We design different expanding strategies to grow CNNs and ViTs respectively, due to the great heterogeneity between the two architectures. Our method can be easily integrated into popular deep learning frameworks, which saves considerable training time and hardware resources. Extensive experiments show that our acceleration method can significantly speed up the training process of modern vision models on general GPU devices with negligible performance drop (e.g., 1.42× faster for ResNet-101 and
1.34× faster for DeiT-base on ImageNet-1k). The code is available at https://github.com/huawei-noah/Efficient-Computing/tree/master/TrainingAcceleration/NetworkExpansion and https://gitee.com/mindspore/hub/blob/master/mshub_res/assets/noah-cvlab/gpu/1.8/networkexpansion_v1.0_imagenet2012.md.

1. Introduction Deep neural networks have demonstrated excellent performance on multiple vision tasks, such as classification [15, 30, 44], object detection [12, 43], semantic segmentation [32, 35], etc. In spite of their success, these networks usually come with heavy architectures and severe over-parameterization, and therefore it takes many days or even weeks to train such networks from scratch. The ever-increasing model complexity [23, 24, 34, 42] and training time cause not only a serious slowdown of the research schedule, but also a huge waste of time and computing resources. However, CNNs are still going deeper and bigger for higher capacity to cope with extremely large datasets [27, 45]. Recently, a new type of architecture named vision transformers (ViTs) has emerged and soon achieved state-of-the-art performance on multiple computer vision tasks [16, 48, 52, 57]. Originating from natural language processing, the vision transformer has a different network topology and larger computational complexity than CNNs. Besides, transformer-based models usually require more epochs to converge. From another perspective, compared with purchasing expensive GPU servers, many researchers and personal users nowadays choose cloud computing services to run experiments and pay their bills by GPU-hours. Thus, an accelerated training framework is obviously cost-efficient. On the other hand, shortened training time leads to not only quicker idea verification but also more refined hyper-parameter tuning, which is crucial to the punctual completion of projects and on-time product delivery. There are some existing methods for efficient model training [36, 51, 53, 55], but few of them achieve high practical acceleration on general GPU platforms. [53] proposes to prune the gradients of feature maps during back-propagation to reduce train-time FLOPs, and achieves training speedup on CPU platforms. [51] conducts efficient CNN training on ARM and FPGA devices to reduce power consumption. [36] prunes weights of the network to achieve training acceleration but eventually yields a pruned sparse model with a non-negligible performance drop. [55] skips easy samples that contribute little to loss reduction by using an assistant model asynchronously running on CPU, yet it requires sophisticated engineering implementation. Though the prior works claim ideal theoretical acceleration ratios, none of them achieve obvious practical acceleration on common GPU platforms. Most of these works overlook the most general scenario, i.e., accelerating training on general GPU platforms with popular deep learning frameworks such as PyTorch [40] and TensorFlow [1]. The lack of related research is probably because GPU servers are not as power-constrained as edge devices.
In this paper, we propose a general training acceleration framework (network expansion) for both CNN and ViT models to reduce the practical training time. We first sample a sub-network from the original dense model as the starting point of training. Then this sparse architecture gradually expands its network topology by adding new parameters, which increases the model capacity along the training procedure. When performing network expansion, we follow the principle of avoiding the introduction of redundant parameters. For CNNs, new filters are progressively added, and their weights are initialized by imposing filter-level orthogonality. This reduces the correlation between old and new feature maps and improves the expressiveness of the convolutional network. For vision transformers, we first train a shallow sub-network with fewer layers and create an exponential moving average (EMA) version of the trained model. As the training continues, some layers of the EMA model are inserted into the trained model to construct a deeper one. With the network expansion training paradigm, the sampled sub-network eventually grows into the desired dense architecture, and thus the total training FLOPs and time are greatly reduced. Our method can be easily integrated into popular deep learning frameworks on general GPU platforms. Without changing the original optimizer and hyper-parameters (such as epochs and learning rate), our method achieves 1.42× wall-time acceleration for training ResNet-101 and 1.34× wall-time acceleration for training DeiT-base on the ImageNet-1k dataset, with a negligible top-1 accuracy gap compared with the normal training baseline. Moreover, experiments show that our acceleration framework can generalize to downstream tasks such as semantic segmentation.
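As a concrete illustration of the width-level expansion for CNNs described above, the sketch below grows a single convolution layer and initializes the new filters to be orthogonal to the span of the existing ones. The function name expand_conv and the QR-based initialization are illustrative assumptions on our part, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def expand_conv(conv: nn.Conv2d, new_out_channels: int) -> nn.Conv2d:
    """Grow a Conv2d along the output-channel (width) dimension.

    Existing filters are copied; newly added filters are initialized to be
    orthogonal to the span of the existing ones, so the new feature maps are
    decorrelated from the old ones at the moment of expansion.
    """
    old_out, in_c, kh, kw = conv.weight.shape
    assert new_out_channels > old_out
    expanded = nn.Conv2d(in_c, new_out_channels, (kh, kw), stride=conv.stride,
                         padding=conv.padding, bias=conv.bias is not None)

    # Keep the trained filters.
    expanded.weight[:old_out] = conv.weight

    # Orthonormal basis of the span of the flattened old filters.
    old_flat = conv.weight.reshape(old_out, -1)      # (old_out, in_c*kh*kw)
    q, _ = torch.linalg.qr(old_flat.t())             # columns span the old filters
    # Random new filters with their component inside that span removed.
    new_flat = torch.randn(new_out_channels - old_out, old_flat.shape[1])
    new_flat = new_flat - (new_flat @ q) @ q.t()
    expanded.weight[old_out:] = F.normalize(new_flat, dim=1).reshape(-1, in_c, kh, kw)

    if conv.bias is not None:
        expanded.bias[:old_out] = conv.bias
        expanded.bias[old_out:] = 0.0
    return expanded
```

In a full training loop, such an expansion step would be applied at scheduled epochs to every convolution of the sub-network, with the optimizer state extended accordingly.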
Cai_Multi-Centroid_Task_Descriptor_for_Dynamic_Class_Incremental_Inference_CVPR_2023
Abstract Incremental learning can be roughly divided into two categories, i.e., class- and task-incremental learning. The main difference is whether the task ID is given during evaluation. In this paper, we show this task information is indeed strong prior knowledge, which brings significant improvement over a class-incremental learning baseline, e.g., DER [39]. Based on this observation, we propose a gate network to predict the task ID for class-incremental inference. This is challenging as there is no explicit semantic relationship between categories in the concept of a task. Therefore, we propose a multi-centroid task descriptor by assuming the data within a task can form multiple clusters. The cluster centers are optimized by pulling relevant sample-centroid pairs together while pushing others away, which ensures that there is at least one centroid close to a given sample. To select relevant pairs, we use class prototypes as proxies and solve a bipartite matching problem, making the task descriptor representative yet not degenerate to uni-modal. As a result, our dynamic inference network is trained independently of the baseline and provides a flexible, efficient solution to distinguish between tasks. Extensive experiments show our approach achieves state-of-the-art results, e.g., we achieve 72.41% average accuracy on CIFAR100-B0S50, outperforming DER by 3.40%.
1. Introduction As a rapidly developing task in machine learning, incremental learning [7, 27] (IL) aims to continually learn new concepts (classes), where the training data comes as a sequence of tasks, each including a couple of new classes at a time. Such a training strategy allows the network to incrementally learn novel knowledge [28] and has become more prevalent in real-world applications.

Figure 1. (a): Illustration of DER (left) and our dynamic inference method (right). (b): Step accuracy on CIFAR100-B0S10 with DER under three different evaluation procedures: i) Given Task ID: use the t-th branch to infer samples of the t-th task; ii) All branches: concatenate all branches like the original DER; iii) Random Path: randomly select one branch for inference.

Generally speaking, incremental learning can be roughly divided into two categories, i.e., Task-IL vs. Class-IL [35], depending on whether the task ID is given during inference. For example, Task-IL can use this task ID information to search for predictions in a narrowed label space and is considered easier than Class-IL. Consequently, the accuracy of Task-IL always outperforms Class-IL, and we can consider the task ID as strong prior knowledge for incremental learning. That motivates us to think about a critical problem: can we utilize the task ID to improve the performance of Class-IL approaches? To answer this question, we select the Class-IL SOTA method DER [39] as our baseline, shown on the left of Fig. 1(a). Once a new task arrives, DER expands the current network with a new task-specific branch Φ. This multi-branch architecture allows us to conveniently adapt DER to a Task-IL approach. That is, when a couple of categories are trained in a specific task, a test sample belonging to this task is sent to the corresponding branch for inference. So we compare the results of DER when the task ID is given against directly using the original inference strategy, and show the step accuracy in Fig. 1(b). It is striking that even though the training procedure is the same, the accuracy gap between Task-IL and Class-IL is significantly large. Based on this observation, we propose to design a gate network that automatically predicts the task ID for class-incremental inference, which is named dynamic inference in this paper. A straightforward solution to this aim is to treat the task ID prediction problem as a classification task. But this is rather difficult since there is no explicit semantic relationship between categories in the concept of a task, so the data in one task has high variance and also lacks criteria to discriminate them. To address the aforementioned issues, we propose a new multi-centroid task descriptor, by assuming the data within a task can form multiple clusters. Their centers, i.e., centroids, are then optimized to representatively describe a task. By pulling every relevant sample-centroid pair together while pushing others away, it is ensured that there is at least one centroid close to a given sample, which enables our network to distinguish between tasks.
To select relevant pairs, we use class prototypes as proxies and solve a linear sum assignment problem [21], i.e., a bipartite matching problem, such that for a given sample we are able to find its matched centroid based on the prototype-centroid matching results. During inference, we compare each instance feature with the task descriptor and then find the most relevant branch for inference. The whole framework (dynamic inference network) is trained independently of the baseline, which allows our approach to be flexibly integrated into trained DER or other multi-branch models. We validate our approach on three commonly used benchmarks, including CIFAR-100, ImageNet-100, and ImageNet-1000. The results demonstrate the effectiveness of our approach, which obtains state-of-the-art results. The main contributions of our work are: 1) Based on the idea of Task-IL, we propose a standalone gate network to automatically predict the task ID for class-incremental inference. The prediction is efficient, has no restrictions on test data, and is able to distinguish between tasks for large-scale Class-IL for the first time. 2) We propose a trainable multi-centroid task descriptor to describe the complicated task distribution. To make this descriptor representative, we solve a prototype-centroid bipartite matching problem to select relevant sample-centroid pairs for optimization (see the sketch below). 3) Extensive experiments on the large-scale benchmarks CIFAR-100, ImageNet-100 and ImageNet-1000 demonstrate the superiority of our approach compared with the state-of-the-art.
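Below is a minimal sketch of the prototype-centroid bipartite matching step described above, using a standard linear sum assignment solver; the function name and the negative-cosine cost are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_prototypes_to_centroids(prototypes, centroids):
    """Prototype-centroid bipartite matching.

    prototypes: (num_classes_in_task, d) class prototypes (e.g. mean features per class).
    centroids:  (num_centroids, d) trainable centroids of the multi-centroid task descriptor.
    Returns {class_index: matched_centroid_index}, minimizing the total negative
    cosine similarity, so every class is tied to its own centroid.
    """
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    cost = -p @ c.T                              # higher similarity -> lower cost
    rows, cols = linear_sum_assignment(cost)     # Hungarian / linear sum assignment
    return {int(r): int(col) for r, col in zip(rows, cols)}
```

A sample of class y can then be pulled towards the centroid matched to its prototype while being pushed away from the centroids of other tasks.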
Cao_Physics-Guided_ISO-Dependent_Sensor_Noise_Modeling_for_Extreme_Low-Light_Photography_CVPR_2023
Abstract Although deep neural networks have achieved astonishing performance in many vision tasks, existing learning-based methods are far inferior to physical model-based solutions in extreme low-light sensor noise modeling. To tap the potential of learning-based sensor noise modeling, we investigate the noise formation in a typical imaging process and propose a novel physics-guided ISO-dependent sensor noise modeling approach. Specifically, we build a normalizing flow-based framework to represent the complex noise characteristics of CMOS camera sensors. Each component of the noise model is dedicated to a particular kind of noise under the guidance of physical models. Moreover, we take into consideration the ISO dependence of the noise model, which is not completely considered by existing learning-based methods. For training the proposed noise model, a new dataset is further collected with paired noisy-clean images, as well as flat-field and bias frames covering a wide range of ISO settings. Compared to existing methods, the proposed noise model is equipped with a flexible structure and accurate modeling capabilities, which is beneficial for better denoising performance in extreme low-light scenes. The dataset and code are available at https://github.com/happycaoyue/LLD.
1. Introduction In recent years, learning-based image denoising methods have achieved tremendous success with pairwise training samples [22, 32]. However, it is still challenging to recover high-quality results in extreme low-light scenarios, mainly due to limited data [9, 31]. Considering the difficulty of collecting enormous pairwise training data, noise modeling [25, 31, 33] becomes an alternative solution by simulating noises that match the extreme low-light distribution. The noises in extreme low-light scenarios contain severe striping artifacts and color bias, and the damage to image quality is inconsistent across different ISO settings and locations. To model such complicated noises, physics-based methods [9, 29, 31, 33] build cumbersome statistical models according to the physical process from photons (i.e., the light) to digital signals (i.e., the raw-RGB image). Nevertheless, the noise parameter calibration relies on a large number of flat-field and bias frames, which is laborious and expensive. For example, PMN [9] requires 400 bias frames at each ISO setting to calibrate the noise parameters. To circumvent the tedious parameter calibration process, learning-based methods [1, 5, 25] directly learn the mapping from clean images to their noisy counterparts. Yet the performance is still far inferior to the physics-based statistical methods [31, 33]. To boost the performance of learning-based sensor noise modeling, we delve into such inferiority and attribute the major cause to the inconsistency between the noise models and the imaging process. For example, NoiseFlow [1] utilizes the distribution matching ability of normalizing flows but is unable to model striping artifacts and color bias. Starlight [25] leverages various noise sources like heteroscedastic Gaussian noise, row noise, and fixed-pattern noise. However, it mixes the noises with the clean image and delivers them into a GAN model. In other words, the noises are entangled with each other, which increases the difficulty of describing the noise distributions. Moreover, these methods [1, 5, 25] either ignore the ISO dependency of the noise or assume a small range of ISO settings, further limiting the performance of learning-based noise modeling methods. As a remedy, we propose a refined noise model to tap the potential of learning-based sensor noise modeling. As shown in Tab. 1, the proposed noise model covers the most common kinds of noise in the imaging process, including shot noise Nshot, dark current fixed-pattern noise NFP, black level error noise NBLE, dark current shot noise NDCSN, read noise Nread, row noise Nrow, and quantization noise Nq. Among them, NFP, NBLE, and NDCSN jointly model the dark current noise NDC.

Table 1. Comparison between noise modeling methods (S means that the noise is sampled from real images). Columns: Method, Category, noise components covered (Nshot, NFP, NBLE, NDCSN, Nread, Nrow, Nq), Learnability, ISO dependence.
ELLE [29] | Physics | ✔ ✔ ✔ ✔ ✔ ✔ | None | Incomplete
ELD [31] | Physics | ✔ S ✔ ✔ ✔ ✔ | None | Incomplete
SFRN [33] | Physics | ✔ S S S S S ✔ | None | Incomplete
PMN [9] | Physics | ✔ ✔ ✔ ✔ | None | Incomplete
NoiseFlow [1] | Learn | ✔ ✔ ✔ ✔ | Complete | Incomplete
Starlight [25] | Learn | ✔ ✔ ✔ ✔ ✔ ✔ | Incomplete | Incomplete
Ours | Learn | ✔ ✔ ✔ ✔ ✔ ✔ ✔ | Complete | Complete
Besides, the ISO dependence is also better considered compared to existing methods. The noise model is implemented in the normalizing flow framework, and each component corresponds to a specific type of noise. Such a configuration leverages the explicit distribution modeling ability of normalizing flow models, and the network architecture is also flexible enough to align accurately with our proposed noise model. Apart from the inconsistency between existing noise models and the imaging process, another key factor impeding accurate noise modeling is the dataset. For obtaining the reference clean images, the SIDD dataset [2] overlays multiple noisy images, which leaves the black level error noise [9, 29, 31] and fixed-pattern noise [4, 21, 23] unremoved. Another commonly used dataset, SID [6], adopts the same ISO setting for pairwise long- and short-exposure images, which results in the long-exposure reference images still containing noise such as fixed-pattern noise. Therefore, for better training the proposed noise model, we have also collected a low-light image denoising (LLD) dataset, which contains pairs of noisy (short exposure, high ISO) and clean (long exposure, low ISO) images. Furthermore, we also provide flat-field and bias frames at various ISO settings in the LLD dataset, hoping it can facilitate the understanding of real noise and image denoising research. With the LLD dataset, we train the proposed noise model in a two-stage manner. Specifically, the noises are divided into two groups, i.e., the fixed-pattern noise and the random noise. The noise model is first trained to describe the random noise and then fitted to the fixed-pattern noise. Thanks to the flexible structure and accurate modeling capabilities, the proposed noise model can better capture the characteristics of real noise. Image denoising methods can also benefit from our noise model and achieve superior performance in extreme low-light scenes. To sum up, the main contributions of this work include:
• We investigate the noise formation process in extreme low-light scenarios and propose a novel physics-guided noise model. The ISO dependence is taken into consideration in the proposed method.
• We collect a dataset for extreme low-light image denoising. The dataset contains pairwise noisy-clean images captured by two cameras (i.e., Sony A7S2 and Nikon D850). We also provide flat-field and bias frames covering a wide range of ISO settings.
• While the learning-based nature eliminates the labor-intensive parameter hand-calibration process, our proposed method achieves superior noise modeling accuracy and boosts image denoising performance.
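For intuition about the noise components listed in Tab. 1, the following is a simplified, purely parametric sketch of how such components could be composed to synthesize a noisy raw frame. The distributions and parameter values are illustrative assumptions; the paper instead learns these components with a normalizing flow.

```python
import numpy as np

def synthesize_noisy_raw(clean_electrons, iso_gain, rng=None):
    """Compose the noise sources of Tab. 1 into a synthetic noisy raw frame.

    clean_electrons: (H, W) clean image expressed in photoelectrons (before gain).
    iso_gain: system gain corresponding to the chosen ISO setting.
    All distribution parameters below are illustrative placeholders.
    """
    rng = rng or np.random.default_rng()
    h, w = clean_electrons.shape
    # Shot noise: Poisson statistics of the collected photoelectrons.
    signal = rng.poisson(np.clip(clean_electrons, 0, None)).astype(np.float64)
    # Dark current shot noise: Poisson noise of thermally generated electrons.
    signal += rng.poisson(0.2, size=(h, w))
    # ISO-dependent amplification.
    signal *= iso_gain
    # Dark current fixed-pattern noise: per-pixel offset, fixed across frames.
    signal += rng.normal(0.0, 0.5, size=(h, w))
    # Black level error: a small frame-wise offset.
    signal += rng.normal(0.0, 1.0)
    # Read noise (Gaussian here; long-tailed in practice).
    signal += rng.normal(0.0, 2.0, size=(h, w))
    # Row noise: one value per row, causing horizontal striping.
    signal += rng.normal(0.0, 1.0, size=(h, 1))
    # Quantization to integer digital numbers.
    return np.round(signal)
```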
Jeon_Context-Based_Trit-Plane_Coding_for_Progressive_Image_Compression_CVPR_2023
Abstract Trit-plane coding enables deep progressive image compression, but it cannot use autoregressive context models. In this paper, we propose the context-based trit-plane coding (CTC) algorithm to achieve progressive compression more compactly. First, we develop the context-based rate reduction module to estimate trit probabilities of latent elements accurately and thus encode the trit-planes compactly. Second, we develop the context-based distortion reduction module to refine partial latent tensors from the trit-planes and improve the reconstructed image quality. Third, we propose a retraining scheme for the decoder to attain better rate-distortion tradeoffs. Extensive experiments show that CTC outperforms the baseline trit-plane codec significantly, e.g., by −14.84% in BD-rate on the Kodak lossless dataset, while increasing the time complexity only marginally. The source codes are available at https://github.com/seungminjeon-github/CTC.
1. Introduction Image compression is a fundamental problem in both image processing and low-level vision. A lot of traditional codecs have been developed, including the standards JPEG [47], JPEG2000 [40], and VVC [11]. Many of these codecs are based on the discrete cosine transform or wavelet transform. Using handcrafted modules, they provide decent rate-distortion (RD) results. However, with the rapidly growing usage of image data, it is still necessary to develop advanced image codecs with better RD performance. Deep learning has been explored with the advance of big data analysis and computational power, and it has also been successfully adopted for image compression. Learning-based codecs have similar structures to traditional ones: they transform an image into latent variables and then encode those variables into a bitstream. They often adopt convolutional neural networks (CNNs) for the transformation. Several innovations have been made to improve RD performance, including differentiable quantization approximations [5, 6], hyperpriors [7], context models [20, 32, 33], and prior models [13, 15]. As a result, deep image codecs are competitive with or even superior to traditional ones.

Figure 1. Illustration of the proposed context models: CRR reduces the bitrate, while CDR improves the image quality, as compared with the context-free baseline [27] (baseline: 0.210 bpp at 32.47 dB; with context models: 0.191 bpp at 33.27 dB).

It is desirable to compress images progressively in applications where a single bitstream should be used for multiple users with different bandwidths. But relatively few deep codecs support such progressive compression or scalable coding [35]. Many codecs should train their networks multiple times to achieve compression at as many bitrates [7, 13, 33, 53]. Some codecs support variable-rate coding [15, 51], but they should generate multiple bitstreams for different bitrates. It is more efficient to truncate a single bitstream to satisfy different bitrate requirements. Lu et al. [30] and Lee et al. [27] are such progressive codecs, based on nested quantization and trit-plane coding, respectively. But they cannot use existing context models [20, 26, 32–34], which assume the synchronization of the latent elements, used as contexts, in the encoder and the decoder. Those latent elements are at different states depending on the bitrate. In this paper, we propose the context-based trit-plane coding (CTC) algorithm for progressive image compression, based on novel context models. First, we develop the context-based rate reduction (CRR) module, which entropy-encodes trit-planes more compactly by exploiting already decoded information. Second, we develop the context-based distortion reduction (CDR) module, which refines partial latent tensors after entropy decoding for higher-quality image reconstruction. Also, we propose a simple yet effective retraining scheme for the decoder to achieve better RD tradeoffs. It is demonstrated that CTC outperforms the existing progressive codecs [27, 30] significantly. This paper has the following major contributions:
• We propose the first context models, CRR and CDR, for deep progressive image compression.
As illustrated in Figure 1, CRR reduces the bitrate, while CDR improves the image quality effectively, in comparison with the baseline trit-plane coding [27].
• We develop a decoder retraining scheme, which adapts the decoder to the latent tensors refined by CDR to improve the RD performance greatly.
• The proposed CTC algorithm outperforms the state-of-the-art progressive codecs [27, 30] significantly. Relative to [27], CTC yields BD-rates of −14.84% on the Kodak dataset [3], −14.75% on the CLIC validation set [4], and −17.00% on the JPEG-AI test set [1].
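To make the trit-plane representation concrete, the sketch below decomposes a (shifted, non-negative) quantized latent into base-3 digit planes and reconstructs it from a truncated prefix of those planes; this illustrates the baseline progressive representation of [27], not the proposed CRR/CDR context models.

```python
import numpy as np

def to_trit_planes(latent_q, num_planes):
    """Decompose a non-negative integer latent into trit-planes (base-3 digits).

    planes[0] is the most significant trit-plane; planes are transmitted in order,
    so truncating the stream after k planes still allows a coarse reconstruction.
    """
    return [(latent_q // 3 ** p) % 3 for p in range(num_planes - 1, -1, -1)]

def from_trit_planes(received_planes, num_planes):
    """Progressive reconstruction from the first len(received_planes) trit-planes."""
    rec = np.zeros_like(received_planes[0])
    for i, plane in enumerate(received_planes):
        rec += plane * 3 ** (num_planes - 1 - i)
    # Not-yet-received planes are implicitly treated as zero here; a real decoder
    # would use the expected value of the remaining trits instead.
    return rec
```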
Cao_Self-Supervised_Learning_for_Multimodal_Non-Rigid_3D_Shape_Matching_CVPR_2023
Abstract The matching of 3D shapes has been extensively studied for shapes represented as surface meshes, as well as for shapes represented as point clouds. While point clouds are a common representation of raw real-world 3D data (e.g. from laser scanners), meshes encode rich and expressive topological information, but their creation typically requires some form of (often manual) curation. In turn, methods that purely rely on point clouds are unable to meet the matching quality of mesh-based methods that utilise the additional topological structure. In this work we close this gap by introducing a self-supervised multimodal learning strategy that combines mesh-based functional map regularisation with a contrastive loss that couples mesh and point cloud data. Our shape matching approach allows us to obtain intramodal correspondences for triangle meshes, complete point clouds, and partially observed point clouds, as well as correspondences across these data modalities. We demonstrate that our method achieves state-of-the-art results on several challenging benchmark datasets even in comparison to recent supervised methods, and that our method reaches previously unseen cross-dataset generalisation ability. Our code is available at https://github.com/dongliangcao/Self-Supervised-Multimodal-Shape-Matching.
1. Introduction Matching 3D shapes, i.e. finding correspondences between their parts, is a fundamental problem in computer vision and computer graphics that has a wide range of applications [11, 16, 31]. Even though it has been studied for decades [56, 57], the non-rigid shape matching problem remains highly challenging. One often faces a large variability in terms of shape deformations, or input data with severe noise and topological changes. With the recent success of deep learning, many learning-based approaches were proposed for 3D shape matching [17, 19, 28, 33]. While recent approaches demonstrate near-perfect matching accuracy without requiring ground truth annotations [8, 17], they are limited to 3D shapes represented as triangle meshes and strongly rely on clean data.

Figure 1. Left: Our method obtains accurate correspondences for triangle meshes, point clouds and even partially observed point clouds. Right: Proportion of correct keypoints (PCK) curves and mean geodesic errors (scores in the legend) on the SHREC'19 dataset [34] for meshes (solid lines) and point clouds (dashed lines): Deep Shells 0.075, Deep Shells (PC) 0.117, DPC (PC) 0.176, Ours 0.040, Ours (PC) 0.045. Existing point cloud matching methods (DPC [26], green line), or mesh-based methods applied to point clouds (Deep Shells [17], red dashed line), are unable to meet the matching performance of mesh-based methods (solid lines). In contrast, our method is multimodal and can process both meshes and point clouds, while enabling accurate shape matching with comparable performance for both modalities (blue lines).

Since point clouds are a common representation for real-world 3D data, many unsupervised learning approaches were specifically designed for point cloud matching [20, 26, 63]. These methods are often based on learning per-point features, so that point-wise correspondences are obtained by comparing feature similarities. The learned features were shown to be robust under large shape deformations and severe noise. However, although point clouds commonly represent samples of a surface, respective topological relations are not explicitly available and thus cannot effectively be used during training. In turn, existing point cloud correspondence methods are unable to meet the matching performance of mesh-based methods, as can be seen in Fig. 1. Moreover, when applying state-of-the-art unsupervised methods designed for meshes (e.g. Deep Shells [17]) to point clouds, one can observe a significant drop in matching performance. In this work, we propose a self-supervised learning framework to address these shortcomings. Our method uses
a combination of triangle meshes and point clouds (extracted from the meshes) for training. We first utilise the structural properties of functional maps for triangle meshes as strong unsupervised regularisation. At the same time, we introduce a self-supervised contrastive loss between triangle meshes and corresponding point clouds, enabling the learning of consistent feature representations for both modalities. With that, our method does not require computing functional maps for point clouds at inference time, but directly predicts correspondences based on feature similarity comparison. Overall, our method is the first learning-based approach that combines a unique set of desirable properties, i.e. it can be trained without ground-truth correspondence annotations, is designed for both triangle meshes and point clouds (throughout this paper we refer to this as multimodal), is robust against noise, allows for partial shape matching, and requires only a small amount of training data, see Tab. 1.

Table 1. Method comparison. Our method is the first learning-based approach that combines a unique set of desirable properties.
Method | Unsup. | Mesh | Point Cloud | FM-based | Partiality | Robustness | w.o. Refinement | Required train data∗∗
FMNet [28] | ✗ | ✓ | ✗∗ | ✓ | ✗ | ✗ | ✓ | Small
GeomFMaps [13] | ✗ | ✓ | ✗∗ | ✓ | ✗ | ✗ | ✓ | Small
DiffFMaps [32] | ✗ | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | Moderate
DPFM [2] | ✗ | ✓ | ✗∗ | ✓ | ✓ | ✗ | ✓ | Small
3D-CODED [19] | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | Large
IFMatch [55] | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | Moderate
UnsupFMNet [21] | ✓ | ✓ | ✗∗ | ✓ | ✗ | ✗ | ✓ | Small
SURFMNet [46, 51] | ✓ | ✓ | ✗∗ | ✓ | ✗ | ✗ | ✓ | Small
Deep Shells [17] | ✓ | ✓ | ✗∗ | ✓ | ✗ | ✗ | ✗ | Small
ConsistFMaps [8] | ✓ | ✓ | ✗∗ | ✓ | ✓ | ✗ | ✓ | Small
CorrNet3D [63] | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | Large
DPC [26] | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | Moderate
Ours | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Small
∗Methods are originally designed for meshes; directly applying them to point clouds leads to a large performance drop. ∗∗Categorisation according to the amount of training data: Small (<1000), Moderate (≈5,000) and Large (>10,000).

In summary, our main contributions are:
• For the first time we enable multimodal non-rigid 3D shape matching under a simple yet efficient self-supervised learning framework.
• Our method achieves accurate matchings for triangle meshes based on functional map regularisation, while ensuring matching robustness for less structured point cloud data through deep feature similarity.
• Our method outperforms state-of-the-art unsupervised and even supervised methods on several challenging 3D shape matching benchmark datasets and shows previously unseen cross-dataset generalisation ability.
• We extend the SURREAL dataset [58] by SURREAL-PV, which exhibits disconnected components in partial views as they occur in 3D scanning scenarios.

2. Related work Shape matching is a long-standing problem in computer vision and graphics. In the following, we will focus on reviewing those methods that are most relevant to our work. A more comprehensive overview can be found in [56, 57].
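As an illustration of the mesh/point-cloud coupling described above, the sketch below shows a cross-modal contrastive (InfoNCE-style) loss between mesh and point cloud features, assuming per-vertex mesh features and features of points sampled from the mesh are in row-wise correspondence; the functional map regularisation term and the exact loss used in the paper are not reproduced here.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(feat_mesh, feat_pc, temperature=0.07):
    """InfoNCE-style loss coupling mesh and point cloud features.

    feat_mesh: (N, d) per-vertex features from the triangle mesh.
    feat_pc:   (N, d) features of points sampled from the same mesh, so row i of
               both tensors describes the same surface point (positive pair).
    """
    z_m = F.normalize(feat_mesh, dim=1)
    z_p = F.normalize(feat_pc, dim=1)
    logits = z_m @ z_p.t() / temperature                   # (N, N) similarity matrix
    targets = torch.arange(z_m.shape[0], device=z_m.device)
    # Symmetric cross-entropy: matching rows/columns are positives, the rest negatives.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```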
Gehrig_Recurrent_Vision_Transformers_for_Object_Detection_With_Event_Cameras_CVPR_2023
Abstract We present Recurrent Vision Transformers (RVTs), a novel backbone for object detection with event cameras. Event cameras provide visual information with sub-millisecond latency at a high dynamic range and with strong robustness against motion blur. These unique properties offer great potential for low-latency object detection and tracking in time-critical scenarios. Prior work in event-based vision has achieved outstanding detection performance but at the cost of substantial inference time, typically beyond 40 milliseconds. By revisiting the high-level design of recurrent vision backbones, we reduce inference time by a factor of 6 while retaining similar performance. To achieve this, we explore a multi-stage design that utilizes three key concepts in each stage: first, a convolutional prior that can be regarded as a conditional positional embedding; second, local and dilated global self-attention for spatial feature interaction; third, recurrent temporal feature aggregation to minimize latency while retaining temporal information. RVTs can be trained from scratch to reach state-of-the-art performance on event-based object detection, achieving an mAP of 47.2% on the Gen1 automotive dataset. At the same time, RVTs offer fast inference (<12 ms on a T4 GPU) and favorable parameter efficiency (5× fewer than prior art). Our study brings new insights into effective design choices that can be fruitful for research beyond event-based vision. Code: https://github.com/uzh-rpg/RVT
1. Introduction Time matters for object detection. In 30 milliseconds, a human can run 0.3 meters, a car on public roads covers up to 1 meter, and a train can travel over 2 meters. Yet, during this time, an ordinary camera captures only a single frame. Frame-based sensors must strike a balance between latency and bandwidth. Given a fixed bandwidth, a frame-based camera must trade off camera resolution and frame rate. However, in highly dynamic scenes, reducing the resolution or the frame rate may come at the cost of missing essential scene details, and, in safety-critical scenarios like automotive, this may even cause fatalities.

Figure 1. Detection performance vs inference time of our RVT models on the 1 Mpx detection dataset using a T4 GPU. The circle areas are proportional to the model size.

In recent years, event cameras have emerged as an alternative sensor that offers a different trade-off. Instead of counterbalancing bandwidth requirements and perceptual latency, they provide visual information at sub-millisecond latency but sacrifice absolute intensity information. Instead of capturing intensity images, event cameras measure changes in intensity at the time they occur. This results in a stream of events, which encode time, location, and polarity of brightness changes [14]. The main advantages of event cameras are their sub-millisecond latency, very high dynamic range (>120 dB), strong robustness to motion blur, and ability to provide events asynchronously in a continuous manner. In this work, we aim to utilize these outstanding properties of event cameras for object detection in time-critical scenarios. Therefore, our objective is to design an approach that reduces the processing latency as much as possible while maintaining high performance. This is challenging because event cameras asynchronously trigger binary events that are spread over pixel space and time. Hence, we need to develop detection algorithms that can continuously associate features in the spatio-temporal domain while simultaneously satisfying strict latency requirements. Recent work has shown that dynamic graph neural networks (GNNs) [28, 43] and sparse neural networks [10, 34, 55, 57] can theoretically achieve low-latency inference for event-based object detection. Yet, to achieve this in practical scenarios they either require specialized hardware or their detection performance needs to be improved. An alternative thread of research approaches the problem from the view of conventional, dense neural network designs [7, 19, 20, 26, 38]. These methods show impressive performance on event-based object detection, especially when using temporal recurrence in their architectures [26, 38]. Still, the processing latency of these approaches remains beyond 40 milliseconds, such that the low-latency aspect of event cameras cannot be fully leveraged. This raises the question: how can we achieve both high accuracy and efficiency without requiring specialized hardware? We notice that common design choices yield a suboptimal trade-off between performance and compute. For example, prior work uses expensive convolutional LSTM (Conv-LSTM) cells [44] extensively in the feature extraction stage [26, 38] or relies on heavy backbones such as the VGG architecture [26].
Sparse neural networks instead struggle to model global mixing of features, which is crucial to correctly locate and classify large objects in the scene. To achieve our main objective, we fundamentally revisit the design of vision backbones for event-based object detection. In particular, we take inspiration from neural network design for conventional frame-based object detection and combine it with ideas that have proven successful in the event-based vision literature. Our study deliberately focuses on the macro design of the object detection backbone to identify key components for both high performance and fast inference on GPUs. The resulting neural network is based on a single block that is repeated four times to form a multi-stage hierarchical backbone that can be used with off-the-shelf detection frameworks. We identify three key components that enable an excellent trade-off between detection performance and inference time. First, we find that interleaved local and dilated global self-attention [50] is ideally suited to mix both local and global features while offering linear complexity in the input resolution. Second, this attention mechanism is most effective when preceded by a simple convolution that also downsamples the spatial resolution from the previous stage. This convolution effectively provides a strong prior about the grid structure of the pixel array and also acts as a conditional positional embedding for the transformer layers [9]. Third, temporal recurrence is paramount to achieve strong detection performance with events. Differently from prior work, we find that Conv-LSTM cells can be replaced by plain LSTM cells [18] that operate on each feature separately (equivalent to a 1×1 kernel in a Conv-LSTM cell). By doing so, we dramatically reduce the number of parameters and latency while also slightly improving the overall performance. Our full framework achieves competitive performance and higher efficiency compared to state-of-the-art methods. Specifically, we reduce parameter count (from 100M to 18.5M) and inference time (from 72 ms to 12 ms) by up to a factor of 6 compared to prior art [26]. At the same time, we train our networks from scratch, showing that these benefits do not originate from large-scale pretraining. Our paper can be summarized as follows: (1) We re-examine predominant design choices in event-based object detection pipelines and reveal a set of key enablers for high performance in event-based object detection. (2) We propose a simple, composable stage design that unifies the crucial building blocks in a compact way. We build a 4-stage hierarchical backbone that is fast, lightweight and still offers performance comparable to the best reported so far. (3) We achieve state-of-the-art object detection performance of 47.2% mAP on the Gen1 detection dataset [11] and a highly competitive 47.4% mAP on the 1 Mpx detection dataset [38] while training the proposed architecture from scratch. In addition, we also provide insights into effective data augmentation techniques that contribute to these results.
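The sketch below illustrates the third component, temporal aggregation with a plain LSTM cell applied independently at each spatial location (equivalent to a 1×1-kernel Conv-LSTM); the class name and the flattened state layout are our own simplifications, and the convolutional and attention parts of the stage are omitted.

```python
import torch
import torch.nn as nn

class PixelwiseLSTM(nn.Module):
    """Plain LSTM cell applied independently at every spatial location.

    Treating each (h, w) position as its own sequence is equivalent to a
    Conv-LSTM with a 1x1 kernel: temporal aggregation without spatial mixing,
    which keeps parameter count and latency low.
    """

    def __init__(self, channels):
        super().__init__()
        self.cell = nn.LSTMCell(channels, channels)

    def forward(self, x, state=None):
        # x: (B, C, H, W) feature map of the current time step.
        b, c, h, w = x.shape
        tokens = x.permute(0, 2, 3, 1).reshape(b * h * w, c)   # fold pixels into the batch
        h_t, c_t = self.cell(tokens, state)                    # state: ((B*H*W, C), (B*H*W, C)) or None
        out = h_t.reshape(b, h, w, c).permute(0, 3, 1, 2)      # back to (B, C, H, W)
        return out, (h_t, c_t)
```

The recurrent state is kept in flattened (B·H·W, C) form between time steps and would need to be re-initialized whenever the batch or spatial resolution changes.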
Arkushin_Ham2Pose_Animating_Sign_Language_Notation_Into_Pose_Sequences_CVPR_2023
Abstract Translating spoken languages into Sign languages is necessary for open communication between the hearing and hearing-impaired communities. To achieve this goal, we propose the first method for animating a text written in HamNoSys, a lexical Sign language notation, into signed pose sequences. As HamNoSys is universal by design, our proposed method offers a generic solution invariant to the target Sign language. Our method gradually generates pose predictions using transformer encoders that create meaningful representations of the text and poses while considering their spatial and temporal information. We use weak supervision for the training process and show that our method succeeds in learning from partial and inaccurate data. Additionally, we offer a new distance measurement that considers missing keypoints, to measure the distance between pose sequences using DTW-MJE. We validate its correctness using AUTSL, a large-scale Sign language dataset, show that it measures the distance between pose sequences more accurately than existing measurements, and use it to assess the quality of our generated pose sequences. Code for the data pre-processing, the model, and the distance measurement is publicly released for future research.
1. Introduction Sign languages are an important communicative tool within the deaf and hard-of-hearing (DHH) community and a central property of Deaf culture. According to the World Health Organization, there are more than 70 million deaf people worldwide [56], who collectively use more than 300 different Sign languages [29]. Using the visual-gestural modality to convey meaning, Sign languages are considered natural languages [40], with their own grammar and lexicons. They are not universal and are mostly independent of spoken languages. For example, American Sign Language (ASL), used predominantly in the United States, and British Sign Language (BSL), used predominantly in the United Kingdom, are entirely different, despite English being the predominant spoken language in both. As such, the translation task between each signed and spoken language pair is different and requires different data. Building a robust system that translates spoken languages into Sign languages and vice versa is fundamental to alleviating communication gaps between the hearing-impaired and the hearing communities.

Figure 1. German Sign Language sign for “Haus” (gloss: HOUSE3). Gloss is a unique semantic identifier; HamNoSys and SignWriting describe the phonology of a sign: two flat hands with fingers closed, rotated towards each other, touching, then symmetrically moving diagonally downwards.

While translation research from Sign languages into spoken languages has rapidly advanced in recent years [2, 5, 6, 30, 36, 37], translating spoken languages into Sign languages, also known as Sign Language Production (SLP), remains a challenge [41, 42, 47, 48]. This is partially due to a misconception that deaf people are comfortable reading spoken language and do not require translation into Sign language. However, there is no guarantee that someone whose first language is, for example, BSL, exhibits high literacy in written English. SLP is usually done through an intermediate notation system such as a semantic notation system, e.g. gloss (Sec. 2.1), or a lexical notation system, e.g. HamNoSys or SignWriting (Sec. 2.2). The spoken language text is translated into the intermediate notation, which is then translated into the relevant signs. The signs can either be animated avatars or pose sequences later converted into videos. Previous work has shown progress in translating spoken language text to Sign language lexical notations, namely HamNoSys [54] and SignWriting [21], and in converting pose sequences into videos [8, 43, 55]. There has been some work on animating HamNoSys into avatars [3, 12, 13, 58], with unsatisfactory results (Sec. 3.1), but no work on the task of animating HamNoSys into pose sequences. Hence, in this work, we focus on animating HamNoSys into signed pose sequences, thus facilitating the task of SLP with a generic solution for all Sign languages. To do this, we collect and combine data from multiple HamNoSys-to-video datasets [25, 26, 32], extract pose keypoints from the videos using a pose estimation model, and process these further as detailed in Sec. 4.1. We use the pose features as weak labels to train a model that gets HamNoSys text and a single pose frame as inputs and gradually generates the desired pose sequence from them.
Despite the pose features being inaccurate and incomplete, our model still learns to produce the correct motions. Additionally, we offer a new distance measurement that considers missing keypoints, to measure the distance between pose sequences using DTW-MJE [20]. We validate its correctness using AUTSL, a large-scale Sign language dataset [44], and show that it measures the distance between pose sequences more accurately than currently used measurements. Overall, our main contributions are: 1. We propose the first method for animating HamNoSys into pose sequences.
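The sketch below shows one way a DTW-MJE-style distance can ignore missing keypoints by masking them out of the per-frame mean joint error; the exact masking and normalization used in the paper may differ.

```python
import numpy as np

def masked_mje(frame_a, frame_b, conf_a, conf_b):
    """Mean joint error between two pose frames, ignoring missing keypoints.

    frame_*: (K, D) keypoint coordinates; conf_*: (K,) confidences, 0 for missing.
    """
    valid = (conf_a > 0) & (conf_b > 0)
    if not valid.any():
        return 0.0                       # no shared keypoints: contributes no error
    return float(np.linalg.norm(frame_a[valid] - frame_b[valid], axis=1).mean())

def dtw_mje(seq_a, seq_b, conf_a, conf_b):
    """Dynamic time warping accumulated over the masked per-frame MJE."""
    n, m = len(seq_a), len(seq_b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = masked_mje(seq_a[i - 1], seq_b[j - 1], conf_a[i - 1], conf_b[j - 1])
            acc[i, j] = d + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]
```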
Boudiaf_Open-Set_Likelihood_Maximization_for_Few-Shot_Learning_CVPR_2023
Abstract We tackle the Few-Shot Open-Set Recognition (FSOSR) problem, i.e. classifying instances among a set of classes for which we only have a few labeled samples, while simultaneously detecting instances that do not belong to any known class. We explore the popular transductive setting, which leverages the unlabelled query instances at inference. Motivated by the observation that existing transductive methods perform poorly in open-set scenarios, we propose a generalization of the maximum likelihood principle, in which latent scores down-weighing the influence of potential outliers are introduced alongside the usual parametric model. Our formulation embeds supervision constraints from the support set and additional penalties discouraging overconfident predictions on the query set. We proceed with a block-coordinate descent, with the latent scores and parametric model co-optimized alternately, thereby benefiting from each other. We call our resulting formulation Open-Set Likelihood Optimization (OSLO). OSLO is interpretable and fully modular; it can be applied on top of any pre-trained model seamlessly. Through extensive experiments, we show that our method surpasses existing inductive and transductive methods on both aspects of open-set recognition, namely inlier classification and outlier detection. Code is available at https://github.com/ebennequin/few-shot-open-set.
1. Introduction Few-shot classification consists in recognizing concepts for which we have only a handful of labeled examples. These form the support set, which, together with a batch of unlabeled instances (the query set), constitutes a few-shot task. Most few-shot methods classify the unlabeled query samples of a given task based on their similarity to the support instances in some feature space [36]. This implicitly assumes a closed-set setting for each task, i.e. query instances are supposed to be constrained to the set of classes explicitly defined by the support set. However, the real world is open and this closed-set assumption may not hold in practice, especially for limited support sets. Whether they are unexpected items circulating on an assembly line, a new dress not yet included in a marketplace's catalog, or a previously undiscovered species of fungi, open-set instances occur everywhere. When they do, a closed-set classifier will falsely label them as the closest known class. This drove the research community toward open-set recognition, i.e. recognizing instances with the awareness that they may belong to unknown classes. In large-scale settings, the literature abounds with methods designed specifically to detect open-set instances while maintaining good accuracy on closed-set instances [1, 32, 51]. Very recently, the authors of [21] introduced a Few-Shot Open-Set Recognition (FSOSR) setting, in which query instances may not belong to any known class. The study in [21], together with other recent follow-up works [15, 16], exposed FSOSR to be a difficult task. To help alleviate the scarcity of labeled data, transduction [38] was recently explored for few-shot classification [24], and has since become a prominent research direction, fueling a large body of works, e.g. [3, 4, 9, 14, 23, 26, 40, 43, 52], among many others. By leveraging the statistics of the query set, transductive methods yield performances that are substantially better than their inductive counterparts [4, 40] in the standard closed-set setting. In this work, we seek to explore transduction for the FSOSR setting. We argue that, theoretically, transduction has the potential to enable both the classification and outlier detection (OD) modules to act symbiotically. Indeed, the classification module can reveal valuable structure of the inliers' marginal distribution that the OD module seeks to estimate, such as the number of modes or conditional distributions, while the OD part indicates the "usability" of each unlabelled sample. However, transductive principles currently adopted for few-shot learning heavily rely on the closed-set assumption on the unlabelled data, leading them to match the classification confidence for open-set instances with that of closed-set instances. In the presence of outliers, this not only harms their predictive performance on closed-set instances, but also makes prediction-based outlier detection substantially harder than with simple inductive baselines. Contributions. In this work, we aim at designing a principled framework that reconciles transduction with the open nature of the FSOSR problem.
Our idea is simple but powerful: instead of finding heuristics to assess the outlierness of each unlabelled query sample, we treat this score as a latent variable of the problem. Based on this idea, we propose a generalization of the maximum likelihood principle, in which the introduced latent scores weigh potential outliers down, thereby preventing the parametric model from fitting those samples. Our generalization embeds additional supervision constraints from the support set and penalties discouraging overconfident predictions. We proceed with a block-coordinate descent optimization of our objective, with the closed-set soft assignments, outlierness scores, and parametric models co-optimized alternately, thereby benefiting from each other. We call our resulting formulation Open-Set Likelihood Optimization (OSLO). OSLO provides highly interpretable and closed-form solutions within each iteration for the soft assignments, outlierness variables, and parametric model. Additionally, OSLO is fully modular; it can be applied on top of any pre-trained model seamlessly. Empirically, we show that OSLO significantly surpasses its inductive and transductive competitors alike for both outlier detection and closed-set prediction. Applied on a wide variety of architectures and training strategies and without any re-optimization of its parameters, OSLO's improvement over a strong baseline remains large and consistent. This modularity allows our method to fully benefit from the latest advances in standard image recognition. Before diving into the core content, let us summarize our contributions: 1. To the best of our knowledge, we realize the first study and benchmarking of transductive methods for the Few-Shot Open-Set Recognition setting. We reproduce and benchmark five state-of-the-art transductive methods.
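For intuition only, the following sketch shows a block-coordinate loop in the spirit described above, alternating between closed-set soft assignments, latent inlierness scores that down-weight likely outliers, and an update of class centroids; the update rules here are illustrative placeholders and not OSLO's actual closed-form solutions.

```python
import torch

@torch.no_grad()
def transductive_open_set_inference(z_s, y_s, z_q, num_classes, num_iters=10, temp=10.0):
    """Block-coordinate sketch of transductive open-set inference.

    z_s, y_s: support features (Ns, d) and labels; z_q: query features (Nq, d).
    Returns closed-set soft assignments p (Nq, K) and inlierness scores xi (Nq,);
    1 - xi serves as an outlierness score.
    """
    mu = torch.stack([z_s[y_s == k].mean(0) for k in range(num_classes)])  # class centroids
    xi = torch.full((z_q.shape[0],), 0.5)                                  # latent inlierness scores
    for _ in range(num_iters):
        dist = torch.cdist(z_q, mu)                        # (Nq, K)
        # (1) Closed-set soft assignments given the current parametric model.
        p = (-temp * dist ** 2).softmax(dim=1)
        # (2) Latent scores: queries far from every centroid are down-weighted.
        d_min = dist.min(dim=1).values
        xi = torch.sigmoid(d_min.median() - d_min)
        # (3) Parametric model update, weighting query samples by xi * p.
        w = xi.unsqueeze(1) * p
        for k in range(num_classes):
            num = z_s[y_s == k].sum(0) + (w[:, k:k + 1] * z_q).sum(0)
            den = (y_s == k).sum() + w[:, k].sum()
            mu[k] = num / den
    return p, xi
```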
Huang_Boosting_Accuracy_and_Robustness_of_Student_Models_via_Adaptive_Adversarial_CVPR_2023
Abstract Distilled student models in teacher-student architectures are widely considered for computationally efficient deployment in real-time applications and edge devices. However, there is a higher risk of student models encountering adversarial attacks at the edge. Popular enhancing schemes such as adversarial training have limited performance on compressed networks. Thus, recent studies concern adversarial distillation (AD), which aims to inherit not only the prediction accuracy but also the adversarial robustness of a robust teacher model under the paradigm of robust optimization. In the min-max framework of AD, existing AD methods generally use fixed supervision information from the teacher model to guide the inner optimization for knowledge distillation, which often leads to an overcorrection towards model smoothness. In this paper, we propose an adaptive adversarial distillation (AdaAD) that involves the teacher model in the knowledge optimization process in a way that interacts with the student model to adaptively search for the inner results. Compared with state-of-the-art methods, the proposed AdaAD can significantly boost both the prediction accuracy and adversarial robustness of student models in most scenarios. In particular, the ResNet-18 model trained by AdaAD achieves top-rank performance (54.23% robust accuracy) on RobustBench under AutoAttack.
1. Introduction Although demonstrating great success in dealing with large-scale data, deep neural networks (DNNs) are often over-parameterized in practice and require huge storage as well as computational cost [18, 22, 26]. In many real-time applications, it is desirable to deploy lightweight models in mobile devices with limited resources for prompt inference results. Teacher-student architectures have been considered as a means of computationally efficient and high-performing deployment in such applications [23, 29, 45]. Due to the limited budget when deploying at the edge, small (student) models generally lack sufficient protection mechanisms. Compared with large-scale models, however, they are more prone to the risk of being exposed to a potential attacker, e.g., one who crafts adversarial attacks for malicious purposes [3, 21, 43]. Therefore, it is essential to improve the adversarial robustness of small models against malicious attacks when applying them to real applications. As a defense scheme, adversarial training (AT) has been studied and demonstrated effective in improving adversarial robustness for deep models [21, 24, 27, 32, 36]. Several studies have shown that AT is more effective on over-parameterized models with high capacity rather than on small models [27, 31, 48]. Recently, adversarial distillation (AD) was proposed as an alternative scheme for improving adversarial robustness in teacher-student architectures [20, 28, 49, 50]. Like AT, AD can also be formulated as a min-max optimization problem. It aims to enable the student model to inherit not only the prediction accuracy but also the adversarial robustness from a robust teacher model under the paradigm of robust optimization. Existing AD methods generally utilize teacher models to produce fixed soft labels to guide the distillation optimization process [20, 49, 50]. However, fitting a neighborhood region with a fixed label inevitably imposes an overcorrection towards model smoothness, leading to a severe trade-off between accuracy and robustness [12, 17]. Furthermore, these AD methods do not fully interact with the teacher models to minimize the prediction discrepancy between student and teacher models, thereby limiting the prediction accuracy and robustness inherited by the student model. In this paper, we propose adaptive adversarial distillation (AdaAD), which fully involves a robust teacher model to adaptively search for more representative inner results in the knowledge distillation process. Specifically, in the inner optimization of AdaAD, we adaptively search for the points representing the upper bound of the prediction discrepancy between the two models as the inner results. In the outer optimization, we minimize this upper bound to perform distillation. In this way, we enable the student model to better inherit the prediction accuracy and adversarial robustness from the teacher model. Our main contributions can be summarized as:
• We formulate a new AD objective by maximizing the prediction discrepancy between teacher and student models in the min-max framework, and provide detailed analysis to explain why the proposed method can achieve better distillation performance.
• We design an adaptive adversarial distillation scheme, namely AdaAD, that adaptively searches for optimal match points in the inner optimization (a minimal sketch of this search is given below). This enables a much larger search radius (also known as the perturbation limit) in local neighborhoods, which significantly enhances the robustness of student models. • Extensive experimental results verify that the performance of our method is significantly superior to that of state-of-the-art AT and AD methods in various scenarios. In particular, the ResNet-18 model trained over the CIFAR-10 dataset by AdaAD achieves top-rank performance (54.23% robust accuracy) on the RobustBench leaderboard under AutoAttack.
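To make the min-max structure concrete, here is a minimal PyTorch-style sketch of the kind of adaptive inner search and outer distillation step described above: the inner loop perturbs the input to maximize the teacher-student prediction discrepancy (measured here with a KL divergence, an assumption on our part), and the outer step minimizes that discrepancy at the searched point. Function names, step counts and step sizes are illustrative, not the authors' exact settings.

```python
import torch
import torch.nn.functional as F

def adaad_inner_search(student, teacher, x, epsilon=8/255, step_size=2/255, steps=10):
    """Search the epsilon-ball around x for the point that maximizes the
    teacher-student prediction discrepancy (sketch of the inner optimization)."""
    x_adv = (x + 0.001 * torch.randn_like(x)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Discrepancy between the two models at the current perturbed point.
        s_log_prob = F.log_softmax(student(x_adv), dim=1)
        t_prob = F.softmax(teacher(x_adv), dim=1)
        discrepancy = F.kl_div(s_log_prob, t_prob, reduction="batchmean")
        grad = torch.autograd.grad(discrepancy, x_adv)[0]
        # Gradient ascent on the discrepancy, projected back into the L-inf ball.
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()

def adaad_outer_loss(student, teacher, x_adv):
    """Outer step: minimize the same discrepancy at the searched point."""
    with torch.no_grad():
        t_prob = F.softmax(teacher(x_adv), dim=1)
    s_log_prob = F.log_softmax(student(x_adv), dim=1)
    return F.kl_div(s_log_prob, t_prob, reduction="batchmean")
```

In a training loop, the student would be updated by backpropagating adaad_outer_loss on the points returned by adaad_inner_search, with the teacher kept frozen.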
Ding_CAP_Robust_Point_Cloud_Classification_via_Semantic_and_Structural_Modeling_CVPR_2023
Abstract Recently, deep neural networks have shown great success on 3D point cloud classification tasks, which simultaneously raises concern about adversarial attacks that cause severe damage to real-world applications. Moreover, defending against adversarial examples in point cloud data is extremely difficult due to the emergence of various attack strategies. In this work, based on the insight that adversarial examples in this task still preserve the same semantic and structural information as the original input, we design a novel defense framework for improving the robustness of existing classification models, which consists of two main modules: attention-based pooling and dynamic contrastive learning. In addition, we develop an algorithm to theoretically certify the robustness of the proposed framework. Extensive empirical results on two datasets and three classification models show the robustness of our approach against various attacks; e.g., the average attack success rate on PointNet decreases from 70.2% to 2.7% on the ModelNet40 dataset under 9 common attacks.
1. Introduction With the rapid development of 3D sensors such as the LiDAR used in autonomous vehicles, point cloud data, which represents real-world objects by a set of 3D coordinates of points, has been widely applied in various 3D vision applications [30]. Powered by deep and non-linear structures, a number of deep learning models have proved to be effective in modeling the geometric pattern underlying point cloud data, such as the multi-layer perceptron (MLP) [36], convolutional neural network (CNN) [24] and graph neural network (GNN) [47]. Despite their effectiveness, the extensive usage of DNNs also raises the concern of adversarial examples, where the input point clouds are slightly manipulated by an adversary to cause the misbehavior of a model [22, 48, 52]. Considering the severe consequences and damage to real-world applications, the study of adversarial examples on point cloud data has been attracting more and more attention from both industry and academia. Figure 1. The demonstration of different adversarial attack strategies in point cloud classification. Owing to the unique data format of point clouds, i.e., a set of 3D coordinates, the design of adversarial attacks varies in multiple aspects [30]. From the view of perturbations, adversaries can shift existing points to create adversarial examples [52], which is similar to adversarial attacks on images [13]. Besides, adversaries can also delete [49, 63] or add points [28, 52] to conduct the attack. Recent studies show that generative models, i.e., transforming the original point cloud into a new one [15, 64], are also effective at finding adversarial examples. From the view of restrictions, the constraints on the perturbations may differ across approaches, e.g., limiting the number of altered points [18, 22], restricting the maximal/average distance of shifted points [40], and constraining the shape similarity between the adversarial examples and the original ones [52]. Recently, many efforts have been made to mitigate potential adversarial examples in point cloud data [26, 65], which mainly fall into two categories. •Adversarial Training-based (AT): this line of research takes inspiration from work in the image domain [32], which proposes to pair adversarial examples with correct labels and put them into the training set [26, 40]. In the context of point cloud data, the main drawback of AT-based methods is that they are only robust to certain kinds of seen adversarial examples [61]. For instance, when the adversaries leverage an attack method that differs from the methods used in AT, the attack success rate can rise from 0.6% to 100.0% (see Sec. 5 for the detailed experimental setting and results). Considering the diverse attack strategies in this task, it is difficult to find adversarial examples for AT-based methods that generalize to various kinds of attacks. •Recovery-based: this line of research reveals that adversarial examples in point cloud data often contain outlier points [52]. Based on this, recovery-based methods propose to restore a clean sample from an adversarial example before feeding it to the classification model.
For instance, SOR utilizes a rule-based strategy to filter outlier points [65], while DUP-Net [65] leverages a deep generative network to recover the samples better [57]. Different from AT-based methods, recovery-based methods do not focus on certain kinds of attack strategies; however, they can be evaded by shape-invariant attacks [42, 48] which take the geometric pattern of the perturbations into account, e.g., generating points that smoothly lie on the surface of the object [48], to make the recovery less effective. Our Work. Despite the variety of attack strategies, the semantic and structural information of different adversarial examples can hardly change [18, 22, 48, 52, 63]. For instance, an adversarial example of a car should still look like a car no matter how the adversary chooses the attack strategy, e.g., adding, deleting, shifting or transforming. However, previous works point out that existing classifiers often pay attention to limited segments or local features of the whole object to conduct the prediction, leading to a potential risk under different adversarial attacks [49, 50, 63]. This motivates us to improve the robustness of existing classifiers by enhancing the modeling of semantic and structural information. Based on this, we develop a novel defense framework called contrastive and attentional point cloud learning (CAP), which is mainly composed of two modules: (1) attention-based feature pooling and (2) a dynamic contrastive learning paradigm. The first module aims to capture the global structural information of the object by recognizing critical points among the point cloud data. To this end, we design a multi-head attention layer to assign critical points higher weights, which are used for obtaining the global representation of the input. We also introduce a temperature coefficient and random sampling techniques to prevent the module from focusing on a few fixed segments. The second module aims to characterize the semantic information of different objects by disentangling the features of objects with different labels while gathering those with the same label. To this end, we design a dynamic contrastive learning paradigm, which divides the learning goal into a coarse-to-fine process and helps the learning converge better. With the aid of the proposed CAP, we can significantly improve the robustness against various adversarial examples for existing classification models such as PointNet/PointNet++ [37], DGCNN [47] and PointCNN [24]. Furthermore, we show that the robustness of CAP is theoretically certified. Specifically, given a certain constraint on the perturbations, we can evaluate whether the trained model is robust under arbitrary attack strategies, e.g., whether an adversary would be able to add perturbations to a chair to obtain a prediction of a car. To this end, we first measure the changes in features after adding perturbations based on manifold learning theory. Then we leverage extreme value theory to estimate the upper bound of the potential changes, which corresponds to the optimal attack that aims to move the adversarial example across the decision boundary. After that, the robustness of the model can be measured based on the estimated upper bound. With the proposed certified defense, a user can estimate the potential risk of adversarial attacks in real-world applications before deployment.
We validate the proposed CAP on two benchmark datasets and seven attack methods. In summary, the main contributions of this work are: • We propose a novel and general solution for improving the robustness of existing point cloud classification models by modeling semantic and structural information, which makes it possible to train robust models against various kinds of adversarial attacks. • We present an algorithm to theoretically certify the robustness of the proposed framework. With the aid of manifold learning and extreme value theory, the estimated robustness is highly consistent with the actual empirical results, i.e., the attack success rate. • Extensive experiments on two benchmark datasets show that CAP significantly improves the robustness of different classification models (PointNet/PointNet++, DGCNN and PointCNN); e.g., the attack success rate on PointNet decreases from 70.2% to 2.7% on average on the ModelNet40 dataset.
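As a concrete illustration of the attention-based feature pooling described above, the following is a minimal PyTorch-style sketch of multi-head attention pooling with a temperature-scaled softmax over points. The layer sizes, head count and temperature value are illustrative assumptions, not CAP's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Pool per-point features into a global representation: each head scores
    every point, a temperature-scaled softmax turns the scores into weights,
    and the pooled feature is the weighted sum over points."""
    def __init__(self, feat_dim=256, num_heads=4, temperature=0.5):
        super().__init__()
        self.score = nn.Linear(feat_dim, num_heads)   # one score per head per point
        self.temperature = temperature

    def forward(self, point_feats):                   # (B, N, C) per-point features
        scores = self.score(point_feats)              # (B, N, H)
        weights = torch.softmax(scores / self.temperature, dim=1)
        # (B, N, H) x (B, N, C) -> (B, H, C): one pooled vector per head
        pooled = torch.einsum("bnh,bnc->bhc", weights, point_feats)
        return pooled.flatten(1)                      # (B, H*C) global representation
```

A lower temperature sharpens the weights toward a few critical points, while a higher one spreads attention over more of the object; random sampling of the input points, as mentioned above, can be applied before this layer.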
Chen_Local-to-Global_Registration_for_Bundle-Adjusting_Neural_Radiance_Fields_CVPR_2023
Abstract Neural Radiance Fields (NeRF) have achieved photorealistic novel view synthesis; however, the requirement of accurate camera poses limits their application. Although analysis-by-synthesis extensions exist for jointly learning neural 3D representations and registering camera frames, they are susceptible to suboptimal solutions if poorly initialized. We propose L2G-NeRF, a Local-to-Global registration method for bundle-adjusting Neural Radiance Fields: first, a pixel-wise flexible alignment, followed by a frame-wise constrained parametric alignment. Pixel-wise local alignment is learned in an unsupervised way via a deep network which optimizes photometric reconstruction errors. Frame-wise global alignment is performed using differentiable parameter estimation solvers on the pixel-wise correspondences to find a global transformation. Experiments on synthetic and real-world data show that our method outperforms the current state-of-the-art in terms of high-fidelity reconstruction and resolving large camera pose misalignment. Our module is an easy-to-use plugin that can be applied to NeRF variants and other neural field applications. The code and supplementary materials are available at https://rover-xingyu.github.io/L2G-NeRF/.
1. Introduction Recent success with neural fields [47] has caused a resurgence of interest in visual computing problems, where coordinate-based neural networks that represent a field gain traction as a useful parameterization of 2D images [4, 7, 40] and 3D scenes [27, 29, 34]. Commonly, these coordinates are warped to a global coordinate system by camera parameters obtained via computing homography, structure from motion (SfM), or simultaneous localization and mapping (SLAM) [17] with off-the-shelf tools like COLMAP [39], before being fed to the neural fields. Figure 1. We present L2G-NeRF, a new bundle-adjusting neural radiance field method employing local-to-global registration that is much more robust than the current state-of-the-art BARF [24]. This paper considers the generic problem of simultaneously reconstructing the neural fields from RGB images and registering the given camera frames, which is known as a long-standing chicken-and-egg problem: registration is needed to reconstruct the fields, and reconstruction is needed to register the cameras. One straightforward way to solve this problem is to jointly optimize the camera parameters with the neural fields via backpropagation. Recent work can be broadly placed into two camps: parametric and non-parametric. Parametric methods [10, 20, 24, 44] directly optimize global geometric transformations (e.g., rigid, homography). Non-parametric methods [22, 31] do not make any assumptions on the type of transformation, and attempt to directly optimize some pixel agreement metric (e.g., the brightness constancy constraint in optical flow and stereo). However, both approaches have flaws: parametric methods fail to minimize the photometric errors (falling into suboptimal solutions) if poorly initialized, as shown in Fig. 1, while non-parametric methods have trouble dealing with large displacements (e.g., although the photometric errors are minimized, the alignments do not obey the geometric constraint). It is natural, therefore, to consider a hybrid approach, combining the benefits of parametric and non-parametric methods. In this paper, we propose L2G-NeRF, a local-to-global process integrating parametric and non-parametric methods for bundle-adjusting neural radiance fields, i.e., the joint problem of reconstructing the neural fields and registering the camera parameters, which can be regarded as a type of classic photometric bundle adjustment (BA) [3, 12, 25]. Fig. 2 shows an overview. In the first, non-parametric stage, we initialize the alignment by predicting a local transformation field for each pixel of the camera frames. This is achieved by self-supervised training of a deep network to optimize standard photometric reconstruction errors.
In the second stage, differentiable parameter estimation solvers are applied to a set of pixel-wise correspondences to obtain a global alignment, which is then used to apply a soft constraint to the local alignment. In summary, we present the following contributions: • We show that the optimization of bundle-adjusting neural fields is sensitive to initialization, and we present a simple yet effective strategy for local-to-global registration on neural fields. • We introduce two differentiable parameter estimation solvers, for rigid and homography transformations respectively, which play a crucial role in calculating the gradient flow from the global alignment to the local alignment (a sketch of the rigid solver is given below). • Our method is agnostic to the particular type of neural field; specifically, we show that the local-to-global process works well on 2D neural images and 3D Neural Radiance Fields (NeRF) [29], allowing for applications such as image reconstruction and novel view synthesis.
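For the rigid case referenced in the second contribution, a standard differentiable closed-form solver is the weighted Procrustes/Kabsch fit, sketched below in PyTorch. This is one common realization of such a solver under the stated assumptions; it is not necessarily the authors' exact implementation.

```python
import torch

def weighted_rigid_fit(src, dst, weights=None):
    """Find R, t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2 in closed
    form via SVD (Kabsch / Procrustes). src, dst: (N, 3) corresponding points;
    weights: optional (N,) non-negative confidences. Fully differentiable."""
    if weights is None:
        weights = torch.ones(src.shape[0], device=src.device)
    w = weights / weights.sum()
    src_mean = (w[:, None] * src).sum(dim=0)
    dst_mean = (w[:, None] * dst).sum(dim=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = (w[:, None] * src_c).T @ dst_c                # 3x3 weighted covariance
    U, S, Vt = torch.linalg.svd(cov)
    # Reflection handling keeps the result a proper rotation (det = +1).
    d = torch.sign(torch.det(Vt.T @ U.T))
    D = torch.diag(torch.stack([torch.ones_like(d), torch.ones_like(d), d]))
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Because every step is a differentiable tensor operation, gradients of a downstream constraint on (R, t) flow back to the pixel-wise correspondences, which is exactly the role such a solver plays between the global and local alignments.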
Alzayer_DC2_Dual-Camera_Defocus_Control_by_Learning_To_Refocus_CVPR_2023
Abstract Smartphone cameras today are increasingly approaching the versatility and quality of professional cameras through a combination of hardware and software advancements. However, a fixed aperture remains a key limitation, preventing users from controlling the depth of field (DoF) of captured images. At the same time, many smartphones now have multiple cameras with different fixed apertures: specifically, an ultra-wide camera with a wider field of view and deeper DoF, and a higher-resolution primary camera with a shallower DoF. In this work, we propose DC2, a system for defocus control that synthetically varies camera aperture, focus distance and arbitrary defocus effects by fusing information from such a dual-camera system. Our key insight is to leverage a real-world smartphone camera dataset by using image refocus as a proxy task for learning to control defocus. Quantitative and qualitative evaluations on real-world data demonstrate our system's efficacy, where we outperform the state-of-the-art on defocus deblurring, bokeh rendering, and image refocus. Finally, we demonstrate creative post-capture defocus control enabled by our method, including tilt-shift and content-based defocus effects. 1. Introduction Smartphone cameras are the most common modality for capturing photographs today [13]. Recent advancements in computational photography such as burst photography [18], synthetic bokeh via portrait mode [48], super-resolution [55], and more have been highly effective at closing the gap between professional DSLR and smartphone photography. However, a key limitation for smartphone cameras today is depth-of-field (DoF) control, i.e., controlling the parts of the scene that appear in (and out of) focus. This is primarily an artifact of their relatively simple optics and imaging systems (e.g., fixed aperture, smaller imaging sensors, etc.). To bridge the gap, modern smartphones tend to computationally process images for further post-capture enhancements such as synthesizing shallow DoF (e.g., portrait mode [37, 48]). However, this strategy alone does not allow for DoF extension or post-capture refocus. In this work, we propose Dual-Camera Defocus Control (DC2), a framework that can provide post-capture defocus control leveraging the multi-camera systems prevalent in smartphones today. Figure 1 shows example outputs from our framework for various post-capture DoF variations. In particular, our method is controllable and enables image refocus, DoF extension, and reduction.
Additionally, it requires realistic blur formation and blending around object boundaries. Most prior work has addressed defocus deblurring and synthesizing defocus blur as two isolated tasks. There has been less work on post-capture defocus control (e.g., image refocusing [22, 34, 41]). The image refocusing literature [22, 34] has focused on light-field data captured with specialized hardware. While the results in [51, 52] are the state-of-the-art, light-field data is not representative of smartphone and DSLR cameras, lacking realistic defocus blur and spatial resolution [12]. Most modern smartphones are now equipped with two or more rear cameras to assist with computational imaging. The primary camera, often referred to as the wide camera or W, has a higher-resolution sensor and a longer focal length lens but a relatively shallower DoF. Alongside W is the ultra-wide (UW) camera, often with a lower-resolution sensor, a shorter focal length (wider field of view) and a wider DoF. Our critical insight is to leverage this unique camera setup and cross-camera DoF variations to design a system for realistic post-capture defocus control. Differently from prior work, we tackle the problem of defocus control (deblurring and adding blur) and propose using real-world data easily captured with a smartphone device to train our learning-based system. Our primary contributions in this work are as follows: • We propose a learning-based system for defocus control on dual-camera smartphones. This subsumes the tasks of defocus deblurring, depth-based blur rendering and image refocusing, and enables arbitrary post-capture defocus control. • In the absence of defocus-control ground truth, we enable training our system on real-world data captured from a smartphone device. To achieve that, we reformulate the problem of defocus control as learning to refocus and define a novel training strategy to serve this purpose. • We collect a dataset of diverse scenes with focus stack data at controlled lens positions for the W camera and accompanying UW camera images for training our system. Additionally, we compute all-in-focus images using the focus stacks to quantitatively evaluate the image refocus, defocus deblurring and depth-based blurring tasks, and demonstrate superior performance compared to state-of-the-art (SoTA) methods across all three tasks. • Finally, we demonstrate creative defocus control effects enabled by our system, including tilt-shift and content-based defocus. 2. Related Work Defocus Deblurring. Defocus blur leads to a loss of detail in the captured image. To recover lost details, a line of work follows a two-stage approach: (1) estimate an explicit defocus map, (2) use a non-blind deconvolution guided by the defocus map [23, 42]. With the current advances in learning-based techniques, recent works perform single-image deblurring directly by training a neural network end-to-end to restore the deblurred image [2, 27, 31, 39, 40, 43]. Due to the difficulty of the defocus deblurring task, other works try to utilize additional signals, such as dual-pixel (DP) data, to improve deblurring performance [4, 5, 35, 59, 60]. DP data is useful for deblurring as it provides the model with defocus disparity that can be used to inform deblurring. While DP data provides valuable cues for the amount of defocus blur at each pixel, the DP views are extracted from a single camera.
Therefore, the performance of DP deblurring methods drops noticeably, and they suffer from unappealing visual artifacts in severely blurred regions. In the same vein, we aim to exploit the UW image as a complementary signal already available in modern smartphones yet ignored for DoF control. By using the UW image with different DoF arrangements, we can deblur regions with severe defocus blur that existing methods cannot handle because of the fundamental information loss. Nevertheless, we are aware that using another camera adds other challenges like image misalignment, occlusion, and color mismatches, which we address in Section 4.3. Bokeh Rendering. Photographers can rely on a shallow DoF to highlight an object of interest and add an artistic effect to the photo. The blur kernel is spatially variant based on depth as well as the camera and optics. To avoid the need for estimating depth, one work magnifies the existing defocus in the image to make the blur more apparent without an explicit depth estimate [7]. Since recent work in depth estimation has improved significantly [30, 44], many shallow-DoF rendering methods assume depth is given [37] or estimate depth in the process [48, 57]. Using an input or estimated depth map, a shallow DoF can be synthesized using classical rendering methods [9, 17, 38, 48], using a neural network to add the synthetic blur [21, 33, 50], or using a combination of classical and neural rendering [37]. Figure 2. Image refocus as a proxy task. (a) Dual camera image refocus dataset. (b) Proxy task: learn to refocus. (c) Evaluate on arbitrary defocus maps. Since we cannot gather a real dataset for arbitrary focus manipulation, our idea is to train a model to perform image refocus using a target defocus map as an input. At test time, our trained model can perform arbitrary focus manipulation by feeding it an arbitrary target defocus map. With that said, shallow-DoF synthesis methods typically assume an all-in-focus image or an input with a deep DoF. Our proposed framework learns to blur as a byproduct of learning to refocus, with the insight that the refocus task involves both deblurring and selective blurring. Unlike prior work that addressed either defocus deblurring or image bokeh rendering, we introduce a generic framework that facilitates full post-capture defocus control (e.g., image refocusing). Image Refocus and DoF Control. At capture time, the camera focus can be adjusted automatically (i.e., autofocus [3, 6, 19]) or manually by moving the lens or adjusting the aperture. Once the image is captured, it can still be post-processed to manipulate the focus. Nevertheless, post-capture image refocus is challenging as it requires both deblurring and blurring. Prior work uses specialized hardware to record a light field, which allows post-capture focus control [34, 53]. However, light field cameras have low spatial resolution and are not representative of smartphone cameras.
An alternative to requiring custom hardware is to capture a focus stack and then merge the frames required to simulate the desired focus distance and DoF [10, 22, 29, 36], but the long capture time restricts focus stacks to static scenes. Research on single-image refocus is limited due to its difficulty, but the typical approach is to deblur to obtain an all-in-focus image followed by blurring. Previous work used classical deblurring and blurring [8] to obtain single-image refocus, and the most notable recent single-image-based refocus method is RefocusGAN [41], which trains a two-stage GAN to perform refocusing. The limited research on software-based image refocus is likely due to the challenging nature of the task, which involves both defocus deblurring and selective blurring. In our work, we provide a practical setup for post-capture image refocus without the restrictions of inaccessible hardware or the constraint of capturing a focus stack. We do so by leveraging the dual camera that is available in modern smartphones. Image Fusion. Combining information from images with complementary information captured using different cameras [36, 47] or the same camera with different capture settings [15, 18] can enhance images in terms of sharpness [22, 36, 47], illuminant estimation [1], exposure [11, 15, 18, 36], or other aspects [16, 32, 47, 49]. With the recent prevalence of dual-camera smartphones, researchers have pursued works that target this setup. One line of work has used dual cameras for super-resolution to take advantage of the different resolutions of the cameras in still photos [51, 56, 64] as well as in videos [26]. The dual-camera setup has also been used in multiple commercial smartphones; e.g., Google Pixel devices deblur faces by capturing an ultra-wide image with a faster shutter time and fusing it with the wide photo [25]. To our knowledge, we are the first to investigate using the dual-camera setup for defocus control. 3. Learning to Refocus as a Proxy Task As mentioned, smartphone cameras tend to have fixed apertures, limiting DoF control at capture time. In our work, we aim to unlock the ability to synthetically control the aperture by transferring sharper details where present and synthesizing realistic blur. However, to train such a model, we run into a chicken-and-egg problem: we require a dataset of images captured with different apertures, which is not possible with smartphones. An alternative solution could be to generate such a dataset synthetically, but modeling a realistic point spread function (PSF) for the blur kernel is non-trivial [5]. Professional DSLRs provide yet another alternative [20] but often require paired smartphone / DSLR captures to reduce the domain gap. Ideally, we would like to use the same camera system for both training and evaluation.
Figure 3. Data processing and high-level architecture. (Left) To be able to use the reference inputs for our Detail Fusion Network, we need to align the inputs and a depth estimate to approximate the defocus map of the reference W and the target defocus map we would like to synthesize. We use flow-based alignment with PWCNet [45] and the stereo depth estimated using portrait mode [48]. (Right) Our Detail Fusion Network (DFNet) consists of refinement modules that refine the reference inputs, combined with a fusion module that predicts blending masks to combine the two refined inputs. To resolve this, we observe that a somewhat parallel task is image refocus. When we change the focus distance, the defocus radius is adjusted in different parts of the image, involving a combination of pixels getting deblurred and blurred. This suggests that image refocus is at least as hard as scaling the DoF. Motivated by this observation, we hypothesize that by training a model on image refocus as a proxy task, we can use the same model to control the DoF at test time, as we show in Figure 2. The key idea is to provide the model with reference and target defocus maps (Section 4.1) as input, and at test time control the model's behavior by manipulating this target defocus map. 4. Method To train a model on our proxy task, we need to collect a dataset of focus stacks for the wide camera and a paired ultra-wide frame, which can be used as a guide due to its deeper DoF. In Figure 3 we show the high-level structure of our method, dubbed DC2. The primary module that we train is the Detail Fusion Network (DFNet), which requires a reference wide frame, an (aligned) reference ultra-wide frame, and estimated defocus maps. In Section 4.1, we describe how we collect the focus stack data and process it to obtain the inputs needed for DFNet. We then describe the architecture details of DFNet in Section 4.2, which is motivated by the dual-camera input setup. 4.1. Data Processing Using the Google Pixel 6 Pro as our camera platform, we captured a dataset of 100 focus stacks of diverse scenes, including indoor and outdoor scenarios. For each scene, we sweep the focus plane of the wide camera and capture a complete focus stack. We simultaneously capture a frame from the ultra-wide camera, which has a smaller aperture, deeper DoF, and fixed focus. For each frame, we use optical-flow-based warping with PWCNet [45], following prior work [25], to align the ultra-wide frame with the wide frame. Since the alignment is imperfect (e.g., in textureless regions and at occluded boundaries), we estimate an occlusion mask that can be used to flag potentially misaligned regions for the model. To estimate defocus maps, we require metric depth. We use the depth map embedded in the Pixel camera's portrait mode output, which estimates metric depth using dual-camera stereo algorithms [63] with a known camera baseline. To compute the defocus map associated with each frame, we use the following formula for the radius of the circle of confusion c: c = A \frac{|S_2 - S_1|}{S_2} \frac{f}{S_1 - f}, (1) where A is the camera aperture, S_1 is the focus distance, S_2 is the pixel depth, and f is the focal length. In Figure 2a, we show a visualization of a focus stack, the associated UW frame, the stereo depth, and a collection of sample scenes. 4.2. Model Architecture Our method performs detail fusion on two primary inputs: the reference wide (W) and ultra-wide (UW) images. Since we train the model to refocus, W is expected to be treated as a base image, while UW is a guide for missing high-frequency details. Based on this intuition, we propose the Detail Fusion Network (DFNet), which has two refinement paths, a W refinement path (Φ_ref^W) and a UW refinement path (Φ_ref^UW), and a fusion module (Φ_fusion) that predicts blending masks for the refined W and refined UW. Note that the W refinement path never sees the UW frame and vice versa.
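A small NumPy sketch of the per-pixel circle-of-confusion computation in Eq. (1) above. The function and argument names, and the toy lens values in the usage example, are illustrative assumptions rather than the authors' code or the device's actual specifications.

```python
import numpy as np

def defocus_map(depth, focus_dist, aperture, focal_length):
    """Per-pixel circle-of-confusion radius following Eq. (1):
        c = A * |S2 - S1| / S2 * f / (S1 - f),
    where A is the aperture, S1 the focus distance, S2 the per-pixel depth,
    and f the focal length, all in consistent metric units."""
    S1, S2, A, f = focus_dist, depth, aperture, focal_length
    return A * np.abs(S2 - S1) / S2 * f / (S1 - f)

# Usage with toy lens parameters: focus at 1.2 m, scene depth of 3 m everywhere.
depth = np.full((480, 640), 3.0)
c = defocus_map(depth, focus_dist=1.2, aperture=0.0036, focal_length=0.0066)
```

The resulting map (here on the order of tens of micrometers on the sensor) is what the reference and target defocus maps fed to DFNet encode; manipulating the target map at test time is what enables arbitrary defocus control.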
We use a network architecture based on the Dynamic Residual Blocks Network (DRBNet) [39] for our refinement modules, with multi-scale refinements. For the fusion module, we use a sequence of atrous convolutions [14] for an increased receptive field and predict a blending mask for each scale. To preserve high-frequency details in the blending mask, we add an upsampling layer and residual connections when predicting the blending mask at the larger scale. During training, we blend the outputs of Φ_ref^W and Φ_ref^UW and compute the loss at all scales for improved performance. In Figure 3 we show a high-level diagram of our architecture and how each component interacts with the others. By visualizing the intermediate outputs between our different modules, we observe that the network indeed attempts to maintain the low-frequency signal from W while utilizing high-frequency signals from UW. Please refer to the supplementary material for the detailed model architecture and a deeper analysis of model behavior and visualizations. 4.3. Training Details We train our model by randomly sampling slices from the focus stacks in our training scenes. For each element in the batch, we randomly sample a training scene, and sample two frames to use as reference and target images, respectively. While we can approximate depth from all pairs, severely blurry frames can have unreliable depth. To address that, we use the stereo pair with the greatest number of matched features for the scene depth when computing the defocus maps. We train on randomly cropped 256x256 patches, using a batch size of 8 and a learning rate of 10^-4 for 200k iterations, and then reduce the learning rate to 10^-5 for another 200k iterations, using Adam [24]. Our reconstruction loss is a combination of an L1 loss on pixels and gradient magnitudes, an SSIM loss [54], and a perceptual loss [61]. For a target wide frame W_tgt and a model output y, the loss is L_{total} = L_1(W_{tgt}, y) + L_1(\nabla W_{tgt}, \nabla y) + L_{SSIM}(W_{tgt}, y) + L_{VGG}(W_{tgt}, y). (2) 5. Experimental Results We train our method to perform defocus control through training on the proxy task of image refocus. As a result, our model can perform a variety of related defocus control tasks. Specifically, we evaluate our method on defocus deblurring, synthesizing shallow DoF, and image refocus. Evaluation metrics. We use the standard signal processing metrics, i.e., the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). We also report the learned perceptual image patch similarity (LPIPS) [62].
Table 1. Defocus deblurring evaluation. Performance on generating all-in-focus images from a single slice in the focus stack. The best results are in bold.
Method | PSNR↑ | SSIM↑ | LPIPS↓
MDP [2] | 23.50 | 0.674 | 0.394
IFAN [27] | 23.48 | 0.679 | 0.371
DRBNet [39] | 24.27 | 0.681 | 0.377
Ours | 24.79 | 0.704 | 0.351
5.1. Defocus Deblurring Task. The goal of defocus deblurring is to remove the defocus blur
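A minimal PyTorch-style sketch of the combined reconstruction loss in Eq. (2). The SSIM and perceptual (VGG) terms are passed in as externally provided callables since their implementations are not specified here, and applying the L1 gradient term to per-direction finite differences (rather than an explicit gradient magnitude) is a simplification on our part.

```python
import torch
import torch.nn.functional as F

def image_gradients(img):
    # Finite-difference gradients along x and y for a (B, C, H, W) tensor.
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def reconstruction_loss(y, w_tgt, ssim_loss, perceptual_loss):
    """Combined loss of Eq. (2): L1 on pixels, L1 on image gradients, plus SSIM
    and perceptual terms. `ssim_loss` and `perceptual_loss` are assumed helper
    callables (e.g., an SSIM implementation and a VGG feature loss)."""
    dx_y, dy_y = image_gradients(y)
    dx_t, dy_t = image_gradients(w_tgt)
    loss = F.l1_loss(y, w_tgt)
    loss = loss + F.l1_loss(dx_y, dx_t) + F.l1_loss(dy_y, dy_t)
    loss = loss + ssim_loss(y, w_tgt) + perceptual_loss(y, w_tgt)
    return loss
```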
Guan_Self-Supervised_Implicit_Glyph_Attention_for_Text_Recognition_CVPR_2023
Abstract The attention mechanism has become the de facto module in scene text recognition (STR) methods, due to its capability of extracting character-level representations. These methods can be summarized as implicit attention based and supervised attention based, depending on how the attention is computed; i.e., implicit attention and supervised attention are learned from sequence-level text annotations and character-level bounding box annotations, respectively. Implicit attention, as it may extract coarse or even incorrect spatial regions as character attention, is prone to suffering from an alignment-drift issue. Supervised attention can alleviate the above issue, but it is character-category-specific, requires extra laborious character-level bounding box annotations, and would be memory-intensive when handling languages with larger character categories. To address the aforementioned issues, we propose a novel attention mechanism for STR, self-supervised implicit glyph attention (SIGA). SIGA delineates the glyph structures of text images by jointly self-supervised text segmentation and implicit attention alignment, which serve as the supervision to improve attention correctness without extra character-level annotations. Experimental results demonstrate that SIGA performs consistently and significantly better than previous attention-based STR methods, in terms of both attention correctness and final recognition performance, on publicly available context benchmarks and our contributed contextless benchmarks.
1. Introduction Scene text recognition (STR) aims to recognize texts from natural images, which has wide applications in handwriting recognition [36, 47, 53], industrial print recognition [12, 16, 32], and visual understanding [8, 23, 31]. Recently, attention-based models with encoder-decoder architectures have typically been developed to address this task by attending to important regions of text images to extract character-level representations. Figure 1. Three different supervision manners for STR: (a) implicit attention, (b) supervised attention, and (c) self-supervised implicit glyph attention (SIGA). These methods can be summarized into implicit attention methods (a) and supervised attention methods (b), as shown in Figure 1, according to the annotation type used for supervising the attention. Specifically, implicit attention is learned from sequence-level text annotations by computing attention scores across all locations over a 1D or 2D space. For example, the 1D sequential attention weights [3, 39] are generated at different decoding steps to extract important items of the encoded sequence. The 2D attention weights [15, 50] are generated by executing a cross-attention operation between the embedded time-dependent sequences and visual features at all spatial locations. However, implicit attention methods, which only extract coarse or even unaligned spatial regions as character attention, may encounter alignment-drifted attention. In contrast, supervised attention is learned from extra character-level bounding box annotations by generating character segmentation maps. Although these supervised attention methods [20, 28, 30, 42] can alleviate the above issue, they rely on labour-intensive character-level bounding box annotations, and their attention maps with respect to character categories might be memory-intensive when the number of character categories is large. To address the aforementioned issues, we propose a novel attention-based method for STR (Figure 1(c)), self-supervised implicit glyph attention (SIGA). As briefly shown in Figure 2, SIGA delineates the glyph structures of text images by jointly self-supervised text segmentation and implicit attention alignment, which serve as the supervision for learning attention maps during training to improve attention correctness. Specifically, the glyph structures are generated by modulating the learned text foreground representations with sequence-aligned attention vectors. The text foreground representations are distilled from self-supervised segmentation results according to the internal structures of images [18]; the sequence-aligned attention vectors are obtained by applying an orthogonal constraint to the 1D implicit attention vectors [3]. They then serve as the position information of each character in a text image to modulate the text foreground representations and generate glyph pseudo-labels online.
By introducing glyph pseudo-labels as the supervision of attention maps, the learned glyph attention encouragesthe text recognition network to focus on the structural re-gions of glyphs to improve attention correctness. Differ-ent from supervised attention methods, the glyph attention maps bring no additional cost to enable character and de-coding order consistency when handling languages with larger character categories. For recognizing texts with linguistic context, SIGA achieves state-of-the-art results on seven publicly availablecontext benchmarks. We also encapsulate our glyph atten-tion module as a plug-in component to other attention-basedmethods, achieving average performance gains of 5.68% and 1.34% on SRN [ 50] and ABINet [ 15], respectively. It is worth mentioning that SIGA shows its prominent superiority in recognizing contextless texts widely used in industrial scenarios ( e.g., workpiece serial numbers [ 16] and identification codes [ 32]). Specifically, we contribute two large-scale contextless benchmarks (real-world MPSCand synthetic ArbitText) with random character sequences that differ from legal words. Experiments demonstrate that SIGA improves the accuracy of contextless text recognitionby a large margin, which is 7.0% and 10.3% higher thanMGP-STR [ 44] on MPSC and ArbitText, respectively. In summary, the main contributions are as follows: • We propose a novel attention mechanism for scene text recognition, SIGA, which is able to delineate glyph structures of text images by jointly self-supervised text segmentation and implicit attention alignment to improve attention correctness without character-level bounding box annotations. • Extensive experiments demonstrate that the proposed glyph attention is essential for improving the perfor-mance of vision models. Our method achieves the state-of-the-art performance on publicly available con-text benchmarks and our contributed large-scale con-textless benchmarks (MPSC and ArbitText).
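The introduction above mentions applying an orthogonal constraint to the 1D implicit attention vectors to obtain sequence-aligned attention. One plausible way to realize such a constraint (an assumption on our part, not necessarily SIGA's exact loss) is a Gram-matrix penalty that pushes the attention vectors of different decoding steps toward mutual orthogonality:

```python
import torch

def attention_orthogonality_loss(attn):
    """Encourage A @ A^T to be close to the identity so that different decoding
    steps attend to non-overlapping regions. `attn` has shape (B, T, L):
    T decoded characters, L spatial positions."""
    attn = attn / (attn.norm(dim=-1, keepdim=True) + 1e-6)   # row-normalize
    gram = attn @ attn.transpose(1, 2)                       # (B, T, T)
    eye = torch.eye(attn.shape[1], device=attn.device).expand_as(gram)
    return ((gram - eye) ** 2).mean()
```

Added to the recognition objective with a small weight, a penalty of this kind discourages overlapping character attention without requiring any character-level bounding box annotations.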
Hong_ACL-SPC_Adaptive_Closed-Loop_System_for_Self-Supervised_Point_Cloud_Completion_CVPR_2023
Abstract Point cloud completion addresses filling in the missing parts of a partial point cloud obtained from depth sensors and generating a complete point cloud. Although there has been steep progress in supervised methods on the synthetic point cloud completion task, they are hardly applicable in real-world scenarios due to the domain gap between synthetic and real-world datasets or the requirement of prior information. To overcome these limitations, we propose a novel self-supervised framework, ACL-SPC, for point cloud completion that trains and tests on the same data. ACL-SPC takes a single partial input and attempts to output the complete point cloud using an adaptive closed-loop (ACL) system that enforces the same output under variations of the input. We evaluate ACL-SPC on various datasets to prove that it can successfully learn to complete a partial point cloud as the first self-supervised scheme. Results show that our method is comparable with unsupervised methods and achieves superior performance on the real-world dataset compared to supervised methods trained on the synthetic dataset. Extensive experiments justify the necessity of self-supervised learning and the effectiveness of our proposed method for the real-world point cloud completion task. The code is publicly available from this link.
1. Introduction Along with the development of autonomous driving cars and robotics, the usage of depth sensors such as LiDAR has increased. These sensors can collect numerous points in 3D space, and the combination of these points forms a 3D representation called a point cloud. Point cloud representation has been widely used in many applications, as it is highly convertible to other 3D data representations, e.g., voxels and meshes, and accessible for obtaining information from the real world. However, point clouds obtained from a real-world sensor, e.g., a LiDAR, are often incomplete and sparse due to occlusion, limitations of sensor resolution, and viewing angle [49], leading to the loss of some geometric information and difficulty in proceeding with further applications, e.g., object detection [26] and object segmentation [7]. We define such point clouds as partial point clouds. Therefore, point cloud completion, which infers complete geometric 3D shapes from such partial point cloud observations, is a crucial task. Figure 1. Overview of our proposed pipeline. We first generate C0 using the initial partial point cloud. Then, multiple synthetic point clouds Pv are generated from random views of C0. We input the generated Pv to the network and obtain predicted complete point clouds. We take the loss between C0 and Cv to optimize the parameters of the network fθ. With the advent of deep learning, previous data-driven works [40, 43, 49] have been able to solve this task using complete point cloud ground truths. Even though such methods have achieved decent performance, they are not applicable in real-world scenarios where ground-truth point clouds are not easy to obtain. For these reasons, researchers have recently attempted to overcome the lack of high-quality and large-scale paired training data using multiple views of the point cloud in unsupervised and weakly-supervised manners. In particular, recent methods [15, 21] leverage multi-view consistency of the desired object, which has shown effectiveness in supervising 3D shape prediction. PointPnCNet [21] claims that its method is based on self-supervised learning. However, combining multi-view consistency enables reconstructing a complete 3D point cloud and can be considered weak supervision. Moreover, collecting multiple partial views of an object in real-world scenarios is as difficult as gathering ground-truth point clouds. Therefore, the necessity of multi-view consistency prevents this method from being fully self-supervised. Meanwhile, other methods [6, 13, 41, 50] exploit unpaired partial and complete point clouds [6, 41] or models pre-trained [13, 50] on synthetic data to overcome the difficulty of collecting ground truth. However, the need for unpaired data limits the methods' applicability to a few categories. To overcome the challenges mentioned above, we propose a novel and first self-supervised method, called ACL-SPC, for point cloud completion using only a single partial point cloud. We develop an adaptive closed-loop (ACL) [2] system, as shown in Figure 1, to design our self-supervised point cloud completion framework ACL-SPC.
In ACL-SPC, an encoder adaptively reacts to the vari-ance in the input by adjusting its parameters to generate the same output. Using our developed ACL, our method tries to generate a complete point cloud from a single partial in-put captured from an unknown viewpoint without any prior information or multi-view consistency and also simulates several synthetic partial point clouds from the reconstructed point cloud. Under our defined novel loss function, our ACL-SPC can learn to generate the same complete point cloud from all such synthetic point clouds and the initial partial point cloud without any supervision. In the experi-ments, we demonstrate the ability of our method to restore a complete point cloud and the effect of our designed loss functions on saving fine details and improving quantitative performance. We also evaluate our method with various datasets, including real-world scenarios, and verify that our method can be applied in practice. Evaluation results show that our method is comparable to other unsupervised meth-ods and performs better than the supervised method trained on a synthetic dataset. Our main contributions can be summarized as follows: • We propose ACL-SPC by developing an adaptive control-loop ACL framework to solve the point cloud completion problem in a self-supervised manner. • We also design an effective self-supervised loss func-tion to train our method without requiring any other information and using only a single partial point cloud taken from an unknown viewpoint. • Our method achieves superior performance in real-world scenarios compared to methods trained on syn-thetic datasets and comparative performance among other unsupervised methods.
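Following the pipeline summarized in Figure 1 (complete the real partial input, re-complete synthetic partial views of that output, and penalize any disagreement), a minimal PyTorch-style sketch of one training step might look as follows. `partial_view_generator` is a hypothetical helper that simulates a partial scan of a point cloud from a random viewpoint, and treating C0 as a detached target is a simplification, not necessarily the paper's exact loss.

```python
import torch

def chamfer_distance(a, b):
    # Symmetric Chamfer distance between point sets a: (B, N, 3) and b: (B, M, 3).
    d = torch.cdist(a, b)                                    # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

def acl_spc_step(f, partial, partial_view_generator, optimizer, num_views=4):
    """One closed-loop step: the network f must map every synthetic partial
    view of its own completion back to that same completion."""
    c0 = f(partial)                                          # initial completion C0
    loss = 0.0
    for _ in range(num_views):
        pv = partial_view_generator(c0.detach())             # synthetic partial view Pv
        cv = f(pv)                                           # re-completion Cv
        loss = loss + chamfer_distance(cv, c0.detach())      # enforce the same output
    loss = loss / num_views
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key property is that no ground-truth complete shape and no second real view are ever needed: the only real data touched by the step is the single partial input.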
Gong_DiffPose_Toward_More_Reliable_3D_Pose_Estimation_CVPR_2023
Abstract Monocular 3D human pose estimation is quite challenging due to inherent ambiguity and occlusion, which often lead to high uncertainty and indeterminacy. On the other hand, diffusion models have recently emerged as an effective tool for generating high-quality images from noise. Inspired by their capability, we explore a novel pose estimation framework (DiffPose) that formulates 3D pose estimation as a reverse diffusion process. We incorporate novel designs into DiffPose to facilitate the diffusion process for 3D pose estimation: a pose-specific initialization of pose uncertainty distributions, a Gaussian Mixture Model-based forward diffusion process, and a context-conditioned reverse diffusion process. Our proposed DiffPose significantly outperforms existing methods on the widely used pose estimation benchmarks Human3.6M and MPI-INF-3DHP. Project page: https://gongjia0208.github.io/Diffpose/.
1. Introduction 3D human pose estimation, which aims to predict the 3D coordinates of human joints from images or videos, is an important task with a wide range of applications, including augmented reality [5], sign language translation [21] and human-robot interaction [40], and it has attracted a lot of attention in recent years [23, 46, 50, 52]. Generally, the mainstream approach is to conduct 3D pose estimation in two stages: the 2D pose is first obtained with a 2D pose detector, and then 2D-to-3D lifting is performed (the lifting process is the primary aspect that most recent works [2, 10, 16, 17, 19, 32, 54] focus on). Yet, despite considerable progress, monocular 3D pose estimation remains challenging. In particular, it can be difficult to accurately predict 3D pose from monocular data due to many challenges, including the inherent depth ambiguity and potential occlusion, which often lead to high indeterminacy and uncertainty. Figure 1. Overview of our DiffPose framework. In the forward process (denoted with blue dotted arrows), we gradually diffuse a "ground truth" 3D pose distribution H0 with low indeterminacy towards a 3D pose distribution with high uncertainty HK by adding noise ϵ at every step, which generates intermediate distributions to guide model training. Before the reverse process, we first initialize the indeterminate 3D pose distribution HK from the input. Then, during the reverse process (denoted with red solid arrows), we use the diffusion model g, conditioned on the context information from the 2D pose sequence, to progressively transform HK into a 3D pose distribution H0 with low indeterminacy. On the other hand, diffusion models [12, 38] have recently become popular as an effective way to generate high-quality images [33]. Generally, diffusion models are capable of generating samples that match a specified data distribution (e.g., natural images) from random (indeterminate) noise through multiple steps in which the noise is progressively removed [12, 38]. Intuitively, such a paradigm of progressive denoising helps to break down the large gap between distributions (from a highly uncertain one to a determinate one) into smaller intermediate steps [39] and thus helps the model converge towards smoothly generating samples from the target data distribution. Inspired by the strong capability of diffusion models to generate realistic samples even from a starting point with high uncertainty (e.g., random noise), we aim to tackle 3D pose estimation, which also involves handling the uncertainty and indeterminacy of 3D poses, with diffusion models. In this paper, we propose DiffPose, a novel framework that represents a new brand of diffusion-based 3D pose estimation approach and also follows the mainstream two-stage pipeline. In short, DiffPose models the 3D pose estimation procedure as a reverse diffusion process, where we progressively transform a 3D pose distribution with high uncertainty and indeterminacy towards a 3D pose with low uncertainty.
Intuitively, we can consider the determinate ground truth 3D pose as particles in the context of thermodynamics, where particles can be neatly gathered and form a clear pose with low indeterminacy at the start; then eventually these particles stochastically spread over the space, leading to high indeterminacy. This process of particles evolving from low indeterminacy to high indeterminacy is the for-ward diffusion process . The pose estimation task aims to perform precisely the opposite of this process, i.e., the re-verse diffusion process . We receive an initial 2D pose that is indeterminate and uncertain in 3D space, and we want to shed the indeterminacy to obtain a determinate 3D pose distribution containing high-quality solutions. Overall, our DiffPose framework consists of two oppo-site processes: the forward process and the reverse process , as shown in Fig. 1. In short, the forward process generates supervisory signals of intermediate distributions for training purposes, while the reverse process is a key part of our 3D pose estimation pipeline that is used for both training and testing. Specifically, in the forward process, we gradually diffuse a “ground truth” 3D pose distribution H0with low indeterminacy towards a 3D pose distribution with high in-determinacy that resembles the 3D pose’s underlying uncer-tainty distribution HK. We obtain samples from the inter-mediate distributions along the way, which are used during training as step-by-step supervisory signals for our diffu-sion model g. To start the reverse process, we first initialize the indeterminate 3D pose distribution ( HK) according to the underlying uncertainty of the 3D pose. Then, our diffu-sion model gis used in the reverse process to progressively transform HKinto a 3D pose distribution with low indeter-minacy ( H0). The diffusion model gis optimized using the samples from intermediate distributions (generated in the forward process), which guide it to smoothly transform the indeterminate distribution HKinto accurate predictions. However, there are several challenges in the above for-ward and reverse process. Firstly, in 3D pose estimation, we start the reverse diffusion process from an estimated 2D pose which has high uncertainty in 3D space, instead of starting from random noise like in existing image genera-tion diffusion models [12, 38]. This is a significant differ-ence, as it means that the underlying uncertainty distribution of each 3D pose can differ. Thus, we cannot design the out-put of the forward diffusion steps to converge to the same Gaussian noise like in previous image generation diffusion works [12, 38]. Moreover, the uncertainty distribution of 3D poses can be irregular and complicated, making it hard to characterize via a single Gaussian distribution. Lastly, it can be difficult to perform accurate 3D pose estimationwith just HKas input. This is because our aim is not just to generate any realistic 3D pose, but rather to predict accurate 3D poses corresponding to our estimated 2D poses, which often requires more context information to achieve. To address these challenges, we introduce several novel designs in our DiffPose. Firstly, we initialize the indetermi-nate 3D pose distribution HKbased on extracted heatmaps, which captures the underlying uncertainty of the desired 3D pose. 
Secondly, during forward diffusion, to generate the indeterminate 3D pose distributions that eventually (after K steps) resemble HK, we add noise to the ground truth 3D pose distribution H0, where the noise is modeled by a Gaussian Mixture Model (GMM) that characterizes the uncertainty distribution HK. Thirdly, the reverse diffusion process is conditioned on context information from the input video or frame in order to better leverage the spatial-temporal relationship between frames and joints. Then, to effectively use the context information and perform the progressive denoising to obtain accurate 3D poses, we design a GCN-based diffusion model g. The contributions of this paper are threefold: (i) We propose DiffPose, a novel framework which represents a new brand of method with the diffusion architecture for 3D pose estimation, which can naturally handle the indeterminacy and uncertainty of 3D poses. (ii) We propose various designs to facilitate 3D pose estimation, including the initialization of 3D pose distribution, a GMM-based forward diffusion process and a conditional reverse diffusion process. (iii) DiffPose achieves state-of-the-art performance on two widely used human pose estimation benchmarks.
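As a concrete illustration of the GMM-based forward step described above, the short PyTorch sketch below draws a step-k sample whose noise comes from a pose-specific Gaussian mixture instead of a unit Gaussian. It is only a sketch: the schedule name alpha_bar, the gmm parameter dictionary, and all function names are illustrative assumptions, not the DiffPose implementation.

import torch

def gmm_noise(weights, means, stds, like):
    # Sample noise shaped like `like` from a K-component Gaussian mixture.
    # weights: (K,), means/stds: (K, D) with D = like.shape[-1].
    n = like.numel() // like.shape[-1]
    comp = torch.multinomial(weights, n, replacement=True)           # (n,)
    eps = means[comp] + stds[comp] * torch.randn(n, like.shape[-1])  # (n, D)
    return eps.reshape(like.shape)

def forward_diffuse(h0, k, alpha_bar, gmm):
    # One forward-diffusion draw at step k for a ground-truth pose h0 (J, 3).
    # alpha_bar is a cumulative noise schedule; gmm holds 'weights', 'means',
    # 'stds' describing the pose-uncertainty distribution, used in place of
    # standard Gaussian noise (an assumption for illustration).
    eps = gmm_noise(gmm["weights"], gmm["means"], gmm["stds"], h0)
    hk = alpha_bar[k].sqrt() * h0 + (1.0 - alpha_bar[k]).sqrt() * eps
    return hk, eps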
Fang_Learning_Analytical_Posterior_Probability_for_Human_Mesh_Recovery_CVPR_2023
Abstract Despite various probabilistic methods for modeling the uncertainty and ambiguity in human mesh recovery, their overall precision is limited because existing formulations for joint rotations are either not constrained to SO(3) or difficult for neural networks to learn. To address such an issue, we derive a novel analytical formulation for learning posterior probability distributions of human joint rotations conditioned on bone directions in a Bayesian manner, and based on this, we propose a new posterior-guided framework for human mesh recovery. We demonstrate that our framework is not only superior to existing SOTA baselines on multiple benchmarks but also flexible enough to seamlessly incorporate additional sensors due to its Bayesian nature. The code is available at https://github.com/NetEase-GameAI/ProPose.
1. Introduction Human mesh recovery is the task of recovering body meshes and 3D joint rotations of human actors from images, and it has ubiquitous applications in animation production, sports analysis, etc. To achieve this goal, various approaches have been proposed in the computer vision community. Existing methods can be divided into two categories, i.e., direct and indirect. Direct methods use neural networks to regress the rotations (e.g., axis angle [22], rotation matrix [34], 6D vector [29,37,74]) of each humanoid joint in an end-to-end way, while indirect methods recover joint rotations based on some intermediately predicted proxies (e.g., 3D human keypoints [19, 36, 42], 2D heatmaps [52] or part segmentation [27]). However, both kinds of methods have obvious weaknesses. Generally, the estimated poses from direct solutions are not so well-aligned with the images (Fig. 1(a)), because joint rotations are more difficult to regress compared with keypoints [19,36]. On the contrary, though indirect solutions tend to have better estimation precision, their performance relies heavily on the precision of the intermediate proxies, and they are thus vulnerable to noise and error in the predicted keypoints or part segmentation (Fig. 1(b)). Figure 1. Comparisons of (a) the direct method [29], (b) the indirect method [36], and (c) our method. To simultaneously achieve high precision and high robustness, some probabilistic methods have been developed which, instead of seeking a unique solution, try to explicitly model the uncertainty of human poses by learning some kind of probability distribution. Prevalent ways of modeling the distribution include multivariate Gaussian distributions [48, 56], normalizing flows [31], and neural networks [51, 53]. In practice, these learned probability distributions can notably improve the estimation results in some extreme cases (e.g., under large occlusion); however, only minor differences can be found in terms of the overall performance on large datasets. One reason is that these probability models cannot truly reflect the rotational uncertainty since they are not strictly constrained to SO(3). Recently, [55] proposes to adopt the matrix Fisher distribution over SO(3) [8, 25] to model the rotational uncertainty caused by depth ambiguity. However, even with this mathematically-correct formulation, the actual performance does not improve much either, because the parameters of the matrix Fisher distribution are not easy for deep neural networks to learn directly. To address this problem, we propose a new learning-friendly and mathematically-correct formulation for learning probability distributions for human mesh recovery. Our formulation is derived based on the facts that (i) the joint rotations follow the matrix Fisher distribution over SO(3), (ii) the unit directions of bones follow the von Mises-Fisher distribution [44], and (iii) the bone direction can be viewed as the observation of the joint rotation (i.e., the latent variable).
It can be proven that the probability distributions of joint ro-tations conditioned on bone directions still follow the ma-trix Fisher distribution, which allows us to regress the pos-terior probability distribution of the 3D joint rotations in a Bayesian manner, and more importantly, in an analytical form. Moreover, we mathematically prove that the posterior probability of human joint rotations is more concentrated than the prior probability. Our experimental results demon-strate that such a characteristic makes the posterior proba-bility an easier form to learn (for neural networks) than its prior counterpart. Apart from the theoretical contributions, we also pro-pose a new human mesh recovery framework that can uti-lize the learned analytical posterior probability. We demon-strate that this framework successfully achieves high preci-sion and high robustness at the same time, and outperforms existing SOTA baselines. Furthermore, our framework en-ables seamless integration with additional sensors that can yield directional/rotational observations ( e.g., multi-view cameras, optical markers, IMUs) due to its Bayesian na-ture. Different from naive multi-sensor fusion algorithms (e.g., Kalman filter [21]) that typically perform fusion at the inference stage, our framework allows fusion in the training stage to learn the noise characteristics of sensors, and thus has the potential to produce better precision. We demon-strate that our fusion mechanism can achieve similar ef-fects to fusing the latent features from multiple sensor input branches, but is much more flexible since it does not require modification of the main backbone. The key contributions of this paper are thereby: • We derive a novel analytical formulation for learn-ing probability distributions for human joint rotations, and theoretically prove that such formulation allows the regression of posterior probability distribution in a Bayesian manner. • We propose a new framework for human mesh recov-ery by leveraging the learned analytical posterior prob-ability and show that this framework outperforms ex-isting SOTA baselines. • We introduce a novel and flexible multi-sensor fusion mechanism that allows fusing different observations in the training stage.
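The conjugacy that makes the posterior analytical can be sketched in one line of algebra. The notation below is illustrative (a single joint with template bone direction n, observed unit bone direction d, and concentration κ) and is not necessarily the paper's exact parameterization:

p(R) \propto \exp\big(\mathrm{tr}(F^{\top} R)\big), \qquad
p(d \mid R) \propto \exp\big(\kappa\, d^{\top} R\, n\big),
\quad\text{and since}\quad d^{\top} R\, n = \mathrm{tr}\big((d\, n^{\top})^{\top} R\big),
\qquad p(R \mid d) \propto \exp\Big(\mathrm{tr}\big((F + \kappa\, d\, n^{\top})^{\top} R\big)\Big).

The posterior is therefore again a matrix Fisher distribution with an updated parameter, which is consistent with the observation above that the posterior can be regressed in an analytical, Bayesian form and tends to be more concentrated than the prior.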
Han_FashionSAP_Symbols_and_Attributes_Prompt_for_Fine-Grained_Fashion_Vision-Language_Pre-Training_CVPR_2023
Abstract Fashion vision-language pre-training models have shown efficacy for a wide range of downstream tasks. However, general vision-language pre-training models pay less attention to fine-grained domain features, while these features are important in distinguishing the specific do-main tasks from general tasks. We propose a method for fine-grained fashion vision-language pre-training based on fashion Symbols and Attributes Prompt (FashionSAP) to model fine-grained multi-modalities fashion attributes and characteristics. Firstly, we propose the fashion symbols, a novel abstract fashion concept layer, to represent differ-ent fashion items and to generalize various kinds of fine-grained fashion features, making modelling fine-grained attributes more effective. Secondly, the attributes prompt method is proposed to make the model learn specific at-tributes of fashion items explicitly. We design proper prompt templates according to the format of fashion data. Compre-hensive experiments are conducted on two public fashion benchmarks, i.e., FashionGen and FashionIQ, and Fash-ionSAP gets SOTA performances for four popular fashion tasks. The ablation study also shows the proposed abstract fashion symbols, and the attribute prompt method enables the model to acquire fine-grained semantics in the fashion domain effectively. The obvious performance gains from FashionSAP provide a new baseline for future fashion task research.1
1. Introduction Vision-Language pre-training (VLP) attracts wide attention [10,16–18,43] as a foundation for general multi-modal tasks. VLP methods aim at learning multimodal knowledge from large-scale text-image pair data containing common objects in daily life. (Corresponding author. 1 The source code is available at https://github.com/hssip/FashionSAP.) Figure 1. Two text-image instances from the general (a) and fashion (b) domains. (a) General item caption: “a young man in a suit securing his tie.” (b) Fashion item caption: “long sleeve shirt in red, white, and black plaid, single-button barrel cuffs, …”; attributes (b): Season: spring-summer; Gender: men, …. The captions of the general domain only describe object-level (underlined words) image content, while fashion-domain captions emphasise attribute-level semantics. For example, MSCOCO [19], a public vision-language benchmark, is introduced with common object labels. The fashion domain is an important application of VLP methods, where the online retail market needs retrieval and recommendation services. To satisfy such requirements, the VLP model needs to learn high-quality representations containing fine-grained attributes from the fashion data. Many works have adapted general VLP models to fashion tasks directly. However, the general pre-training models are not effective for learning fashion knowledge to describe fashion items comprehensively, as the fashion descriptions are usually associated with fine-grained attribute-level features. As illustrated in Fig. 1, the description text of a fashion item (right) refers to fine-grained attributes like long sleeves, while such features are ignored by the descriptions from general vision-language data (left).
Table 1. Fashion symbols and corresponding categories with definition rules:
TOPS: tops, shirt, polo, sweater, ... (upper body); DRESSES: dress, suit, shift, ... (up-to-lower body); SKIRTS: skirt, sarong, slit, kilt, ... (lower body); COATS: jacket, parka, blazer, duffle, ... (associated with others); PANTS: jeans, shorts, breeches, ... (lower body); SHOES: boots, sneakers, pump, loafers, ... (feet); BAGS: clutches, pouches, wristlet, ... (bag & decorative); ACCESSORIES: ring, sunglasses, accessories, hat, necklace, ... (decorative & optional); OTHERS: swim-wear, lingerie, lounge-wear, ...
Moreover, public fashion producers on fashion platforms attach great importance to some definite attributes (e.g., season, gender) of fashion data and especially provide fashion attribute annotations. However, these high-quality attributes are largely neglected by existing fashion VLP models. It is important for fashion VLP models to focus on these fine-grained attributes and learn fashion-specific knowledge. Fashion attributes describe not only item details but also the overall item features. The category of fashion items is an essential attribute highlighted by many benchmark datasets [31, 42, 50]. We notice that categories have a deep correlation to fine-grained attributes, although they describe the general information of a fashion item. For example, length is an important attribute for both pants and jeans, while it is rarely mentioned in the description of a pair of shoes.
However, most existing fashion VLP methods neglect the importance of the relationship between similar categories. In this paper, we explore the usage of category attributes as a global concept layer during pre-training. Judging from how humans describe a fashion product, we believe categories convey the basic understanding of a fashion product. Therefore, we attach the fashion category to the beginning of captions to guide representation learning. Since fashion products are designed for the decoration of people, we summarize nine fashion symbols corresponding to human body parts, as shown in Tab. 1, to unify all the categories of fashion items. We propose a method for the fashion domain to learn fine-grained semantics. This method is able to capture the similarity of fine-grained features based on fashion symbols and to learn explicit fine-grained fashion attributes through prompts. Our method achieves SOTA performance on four popular fashion tasks over two public datasets, and the clear performance gains provide new baselines for further research. Our main contributions are summarized below: • An effective fine-grained vision-language pre-training model is proposed to learn attribute-level fashion knowledge. • An abstract fashion concept layer is proposed, and 9 fashion symbols are summarized to represent various fashion concepts according to their similarities in body parts and product functions. • The attributes prompt method enables the cross-modality pre-training model to explicitly learn fine-grained fashion characteristics.
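A toy sketch of the two ingredients above, prepending an abstract fashion symbol to the caption and turning annotated attributes into prompts, is given below. The mapping, template strings, and function names are illustrative assumptions rather than the released FashionSAP code.

# Hypothetical sketch of the prompt construction described above.
CATEGORY_TO_SYMBOL = {
    "shirt": "TOPS", "sweater": "TOPS", "jeans": "PANTS",
    "sneakers": "SHOES", "clutches": "BAGS",  # ... remaining categories
}

def build_caption(category: str, caption: str) -> str:
    # Prepend the abstract fashion symbol so the model reads the coarse
    # concept before the fine-grained description.
    symbol = CATEGORY_TO_SYMBOL.get(category.lower(), "OTHERS")
    return f"{symbol} {caption}"

def build_attribute_prompt(name: str, value: str) -> str:
    # Turn an annotated attribute pair into a natural-language prompt that
    # the text encoder can consume.
    return f"the {name} of the item is {value}"

print(build_caption("shirt", "long sleeve shirt in red, white, and black plaid"))
print(build_attribute_prompt("season", "spring-summer"))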
Chang_An_Erudite_Fine-Grained_Visual_Classification_Model_CVPR_2023
Abstract Current fine-grained visual classification (FGVC) mod-els are isolated. In practice, we first need to identify the coarse-grained label of an object, then select the corre-sponding FGVC model for recognition. This hinders the ap-plication of FGVC algorithms in real-life scenarios. In this paper, we propose an erudite FGVC model jointly trained by several different datasets1, which can efficiently and ac-curately predict an object’s fine-grained label across the combined label space. We found through a pilot study that positive and negative transfers co-occur when differ-ent datasets are mixed for training, i.e., the knowledge from other datasets is not always useful. Therefore, we first propose a feature disentanglement module and a fea-ture re-fusion module to reduce negative transfer and boost positive transfer between different datasets. In detail, we reduce negative transfer by decoupling the deep features through many dataset-specific feature extractors. Subse-quently, these are channel-wise re-fused to facilitate pos-itive transfer. Finally, we propose a meta-learning based dataset-agnostic spatial attention layer to take full advan-tage of the multi-dataset training data, given that localisa-tion is dataset-agnostic between different datasets. Exper-imental results across 11 different mixed-datasets built on four different FGVC datasets demonstrate the effectiveness of the proposed method. Furthermore, the proposed method can be easily combined with existing FGVC methods to obtain state-of-the-art results. Our code is available at https://github.com/PRIS-CV/An-Erudite-FGVC-Model .
1. Introduction In daily life, most people can quickly identify the coarse-grained label of an object (e.g., car, bird, or aircraft). Then, if we want to go further and identify its fine-grained labels (e.g., “Ferrari FF Coupe” [20], “Sayornis” [36], “Boeing 727-200” [25]), we must learn and master the relevant knowledge [7]. (* indicates the corresponding author. 1 In this paper, different datasets mean different fine-grained visual classification datasets.) Figure 1. How to identify the fine-grained labels of an object? Current paradigms require two stages: coarse-grained visual classification and fine-grained visual classification. This paper transforms the two stages of recognition into an erudite fine-grained visual classification model, which can directly recognise the fine-grained labels of objects across different coarse-grained label spaces. However, it is impossible to master the knowledge and the classification topology of all objects in the world. A critical way to address this problem is to develop FGVC algorithms which can assist humans in recognising the fine-grained labels of different objects. Moreover, with the rapid development of deep learning, current FGVC algorithms have already abandoned the reliance on additional information [2, 5] (e.g., attributes, bounding boxes) and have achieved recognition performance of over 90% on a wide range of fine-grained datasets [38], making them ready to be applied in practice. However, current FGVC algorithms are all based on a single source of training data, e.g., a model trained on the CUB-200-2011 [36] dataset can only be used to recognise the species of a bird. If we want to identify the model of a car, we have to use another FGVC model. Specifically, as shown in Figure 1, if we want to recognise the fine-grained label of an object, we first need to know its coarse-grained label (e.g., birds vs. cars) through a coarse-grained visual classification model, then select its corresponding fine-grained model from the FGVC model zoo and recognise its fine-grained label. This two-stage approach faces four challenges. Firstly, the inference time becomes longer (first coarse-grained image recognition, then fine-grained image recognition). Secondly, more storage space is required (different FGVC datasets require different fine-grained models to be stored). Thirdly, errors accumulate (the accuracy of coarse-grained image recognition directly affects the accuracy of fine-grained recognition). Fourthly, the positive and negative transfers between different datasets are ignored. The above challenges greatly hinder the application of FGVC algorithms in practice. A key solution to these challenges is to jointly train an erudite FGVC model with all training data from different datasets, as shown in Figure 1. However, our pilot study found that a vanilla erudite model fails to make accurate predictions because both positive and negative transfer occur between different datasets.
Specifically, after joint training, although each dataset's overall distribution of features almost always becomes better (i.e., larger inter-class variance and intra-class similarity) than training alone, it becomes clear that only some categories get a better feature representation, and others get a worse feature representation. At the same time, the boundaries between different datasets sometimes become more blurred, confounding the model's predictions between them. Unfortunately, negative transfer dominates in practice, resulting in a significant drop in the test accuracy of the model on each dataset compared to training alone. To make the erudite model more accurate, in this paper, we propose a feature disentanglement module and a feature re-fusion module to balance the positive and negative transfer between different datasets. In detail, we decouple the deep features through many dataset-specific feature extractors to obtain dataset-specific features, thus reducing the negative transfer. However, after decoupling the features, we need to know which dataset-specific classifier to use at the inference stage (but we cannot access the coarse-grained label of an object), and we lose the positive transfer between datasets. Therefore, inspired by the mixture of experts (MoE) [26], we propose a gating-based feature re-fusion module to channel-wise re-fuse the dataset-specific features and facilitate positive transfer between different datasets. Finally, we obtain features with higher inter-class variance and intra-class similarity while maintaining positive transfer and suppressing negative transfer between datasets. Meanwhile, an advantage of joint training with many different datasets is that we have more training data. Although the feature representations should be dataset-specific, salient feature localisation should be dataset-agnostic. Therefore, we can take full advantage of the increased training data to train the model to locate many different discriminative regions. Naturally, we can use a traditional spatial attention layer to locate regions that are useful for FGVC. However, directly applying a traditional spatial attention layer fails to work well due to domain shift when training on different datasets [21, 28, 48]. To address this issue, we propose a meta-learning based spatial attention layer that drives the model to acquire a dataset-agnostic spatial attention that enhances the model's localisation ability to further increase performance. We demonstrate our resulting framework on 11 different mixed-datasets built on four different FGVC datasets, and show that it can easily be combined with existing FGVC methods to obtain state-of-the-art results.
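The gating-based channel-wise re-fusion can be illustrated with a small PyTorch module. This is a sketch under our own assumptions (1×1 convolutions standing in for dataset-specific experts, a linear gate on globally pooled features); the actual architecture in the released code may differ.

import torch
import torch.nn as nn

class GatedReFusion(nn.Module):
    # Minimal sketch of MoE-style channel-wise re-fusion of dataset-specific
    # features (illustrative, not the released implementation).
    def __init__(self, num_datasets: int, channels: int):
        super().__init__()
        # One lightweight "expert" head per dataset produces dataset-specific
        # features from the shared backbone output.
        self.experts = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=1) for _ in range(num_datasets)]
        )
        # The gate predicts per-expert, per-channel mixing weights from pooled
        # features, so no coarse-grained label is needed at inference time.
        self.gate = nn.Linear(channels, num_datasets * channels)
        self.num_datasets = num_datasets
        self.channels = channels

    def forward(self, x):                      # x: (B, C, H, W)
        pooled = x.mean(dim=(2, 3))            # (B, C)
        w = self.gate(pooled).view(-1, self.num_datasets, self.channels)
        w = torch.softmax(w, dim=1)            # normalize over experts per channel
        expert_feats = torch.stack([e(x) for e in self.experts], dim=1)  # (B, K, C, H, W)
        fused = (w.unsqueeze(-1).unsqueeze(-1) * expert_feats).sum(dim=1)
        return fused                           # (B, C, H, W)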
Chen_Affordance_Grounding_From_Demonstration_Video_To_Target_Image_CVPR_2023
Abstract Humans excel at learning from expert demonstrations and solving their own problems. To equip intelligent robots and assistants, such as AR glasses, with this ability, it is essential to ground human hand interactions ( i.e., affor-dances) from demonstration videos and apply them to a target image like a user’s AR glass view. This video-to-image affordance grounding task is challenging due to (1) the need to predict fine-grained affordances, and (2) the lim-ited training data, which inadequately covers video-image discrepancies and negatively impacts grounding. To tackle them, we propose Affordance Transformer (Afformer), which has a fine-grained transformer-based decoder that gradually refines affordance grounding. Moreover, we introduce Mask Affordance Hand (MaskAHand), a self-supervised pre-training technique for synthesizing video-image data and simulating context changes, enhancing af-fordance grounding across video-image discrepancies. Af-former with MaskAHand pre-training achieves state-of-the-art performance on multiple benchmarks, including a sub-stantial 37% improvement on the OPRA dataset. Code is made available at https://github.com/showlab/afformer.
1. Introduction Humans frequently learn from observing others interact with objects to enhance their own experiences, such as following an online tutorial to operate a novel appliance. To equip AI systems with this ability, a key challenge lies in comprehending human interaction across videos and images. Specifically, a robot must ascertain the points of interaction (i.e., affordances [19]) in a demonstration video and apply them to a new target image, such as the user's view through AR glasses. This process is formulated as video-to-image affordance grounding, recently proposed by [13], which presents a more challenging setting than previous affordance-related tasks, including affordance detection [11, 59], action-to-image grounding [18,40,41,44], and forecasting [16,35,36]. (†Corresponding Author.) Figure 1. This figure demonstrates the video-to-image affordance grounding task, which aims to identify the area of human hand interaction (i.e., affordance) in a demonstration video and map it to a target image (e.g., AR glass view). Our contributions include (1) proposing a self-supervised pre-training approach for affordance grounding, and (2) establishing a new model that excels remarkably in fine-grained heatmap decoding. The complexity of this setting stems from two factors: (1) Fine-grained grounding: Unlike conventional grounding tasks that usually localize coarse affordance positions (e.g., identifying all buttons related to “press”), video-to-image affordance grounding predicts fine-grained positions specific to the query video (e.g., only buttons pressed in the video). (2) Grounding across various video-image discrepancies: Demonstration videos and images are often captured in distinct environments, such as a store camera's perspective versus a user's view in a kitchen, which complicates the grounding of affordances from videos to images. Moreover, annotating for this task is labor-intensive, as it necessitates thoroughly reviewing the entire video, correlating it with the image, and pinpointing affordances. As a result, affordance grounding performance may be limited by insufficient data on diverse video-image discrepancies. To enable fine-grained affordance grounding, we propose a simple yet effective Affordance transformer (Afformer), which progressively refines coarse-grained predictions into fine-grained affordance grounding outcomes. Previous methods [13,40,44] either simply employ large-stride upsampling or deconvolution for coarse-grained affordance heatmap prediction (e.g., 8×8→256×256 [13]), or just evaluate at low resolution (e.g., 28×28 [40, 44]). As a result, these methods struggle with fine-grained affordance grounding, particularly when potential affordance regions are closely situated (e.g., densely packed buttons on a microwave). Our approach employs cross-attention [4, 5] between multi-scale feature pyramids to facilitate gradual decoding of fine-grained affordance heatmaps.
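A minimal sketch of the coarse-to-fine idea, cross-attending a set of video-conditioned queries to progressively finer target-image features before scoring pixels, is shown below. The layer sizes, the query construction, and the final scoring step are our assumptions for illustration, not the Afformer architecture itself.

import torch
import torch.nn as nn

class ProgressiveHeatmapDecoder(nn.Module):
    # Illustrative sketch of coarse-to-fine affordance heatmap decoding.
    def __init__(self, dim: int = 256, levels: int = 3):
        super().__init__()
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads=8, batch_first=True) for _ in range(levels)]
        )

    def forward(self, query, pyramid):
        # query: (B, N, dim) video-conditioned queries
        # pyramid: target-image features, coarse -> fine, each (B, dim, H, W)
        for attn, feat in zip(self.attn, pyramid):
            kv = feat.flatten(2).transpose(1, 2)           # (B, H*W, dim)
            query, _ = attn(query, kv, kv)                 # refine queries at this scale
        # Score pixels of the finest feature map with the refined queries.
        fine = pyramid[-1]                                 # (B, dim, Hf, Wf)
        heat = torch.einsum("bnc,bchw->bnhw", query, fine)
        return heat.mean(dim=1, keepdim=True)              # (B, 1, Hf, Wf)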
To address the limited data issue that inadequately covers video-image differences and hampers affordance grounding performance, we present a self-supervised pre-training method, Masked Affordance Hand (MaskAHand), which can leverage vast online videos to improve video-to-image affordance grounding. MaskAHand can automatically generate target images from demonstration videos by masking hand interactions and simulating contextual differences between videos and images. In the generated target image, the task involves estimating the interacting hand regions by watching the original video, thereby enhancing the similarity capabilities crucial for video-to-image affordance grounding. Our approach uniquely simulates context changes, an aspect overlooked by previous affordance pre-training techniques [18, 36]. Furthermore, we also rely less on the external off-the-shelf tools [1,39,48,50] used in [18,36]. We conducted comprehensive experiments to evaluate our Afformer and MaskAHand methods on three video-to-image affordance benchmarks, namely OPRA [13], EPIC-Hotspot [44], and AssistQ [57]. Compared to prior architectures, our most lightweight Afformer variant achieves a relative 33% improvement in fine-grained affordance grounding (256×256) on OPRA and relative gains of 10% to 20% for coarse-grained affordance prediction (28×28) on OPRA and EPIC-Hotspot. Utilizing MaskAHand pre-training for Afformer results in zero-shot prediction performance that is comparable to previously reported fully-supervised methods on OPRA [13], with fine-tuning further enhancing the improvement to 37% relative gains. Moreover, we demonstrate the advantage of MaskAHand when the data scale for downstream tasks is limited: on AssistQ, with only approximately 600 data samples, MaskAHand boosts Afformer's performance by a relative 28%.
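The MaskAHand-style target generation can likewise be sketched: hide the hand-interaction regions of a frame and perturb the view to mimic the video-image context gap. The sketch below assumes integer hand boxes are already available and uses a random rotation as the context perturbation; both are illustrative assumptions rather than the paper's pipeline.

import torch
import torchvision.transforms.functional as TF

def make_maskahand_target(frame, hand_boxes, out_size=(256, 256)):
    # frame: (3, H, W) tensor in [0, 1]; hand_boxes: list of (x1, y1, x2, y2) ints.
    img = frame.clone()
    heatmap = torch.zeros(1, *img.shape[1:])
    for x1, y1, x2, y2 in hand_boxes:
        img[:, y1:y2, x1:x2] = 0.5            # gray out the hand interaction area
        heatmap[:, y1:y2, x1:x2] = 1.0        # the region the model must recover
    # Simulate a context change between the demonstration video and the target view.
    angle = float(torch.empty(1).uniform_(-15, 15))
    img = TF.rotate(img, angle)
    heatmap = TF.rotate(heatmap, angle)
    img = TF.resize(img, list(out_size))
    heatmap = TF.resize(heatmap, list(out_size))
    return img, heatmap                       # synthetic target image, supervision heatmap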
Feng_RONO_Robust_Discriminative_Learning_With_Noisy_Labels_for_2D-3D_Cross-Modal_CVPR_2023
Abstract Recently, with the advent of Metaverse and AI Gener-ated Content, cross-modal retrieval becomes popular with a burst of 2D and 3D data. However, this problem is challenging given the heterogeneous structure and semantic discrepancies. Moreover, imperfect annotations are ubiq-uitous given the ambiguous 2D and 3D content, thus in-evitably producing noisy labels to degrade the learning per-formance. To tackle the problem, this paper proposes a ro-bust 2D-3D retrieval framework (RONO) to robustly learn from noisy multimodal data. Specifically, one novel Robust Discriminative Center Learning mechanism (RDCL) is pro-posed in RONO to adaptively distinguish clean and noisy samples for respectively providing them with positive and negative optimization directions, thus mitigating the nega-tive impact of noisy labels. Besides, we present a Shared Space Consistency Learning mechanism (SSCL) to capture the intrinsic information inside the noisy data by minimizing the cross-modal and semantic discrepancy between com-mon space and label space simultaneously. Comprehen-sive mathematical analyses are given to theoretically prove the noise tolerance of the proposed method. Furthermore, we conduct extensive experiments on four 3D-model mul-timodal datasets to verify the effectiveness of our method by comparing it with 15 state-of-the-art methods. Code is available at https://github.com/penghu-cs/RONO.
1. Introduction Point-cloud retrieval (PCR) is fundamental and crucial for processing and analyzing 3D data [14]: it could provide the direct technical support for a 3D data search engine, and thus has compelling application prospects and practical value in the fields of robotics [7, 32], autonomous driving [23, 31], virtual/augmented reality [9], medicine [29], etc. Different from 2D images, 3D point clouds can depict the internal architecture and external appearance of objects from distinct views/modalities. (*Corresponding author: Peng Hu (penghu.ml@gmail.com).) Hence, PCR is often accompanied by retrieval across diverse modalities, termed 2D-3D cross-modal retrieval [19]. On the other hand, it is extremely expensive and labor-intensive to label such a huge amount of data points [17, 41], not to mention the additional challenges of the missing color and texture of the point clouds. To reduce the labeling cost, we could utilize open-source or low-cost annotation tools (e.g., point-cloud-annotation-tool [18], LabelHub, etc.), but this will inevitably introduce label noise due to non-expert annotation. However, almost all existing works rely heavily on well-labeled data [19, 20, 44], which makes them vulnerable to noisy labels and leads to unavoidable performance degradation. To address the aforementioned issues, we propose a robust 2D-3D retrieval framework (RONO) to robustly learn from noisy multimodal data, as shown in Figure 1. Our RONO framework consists of two mechanisms: 1) a novel Robust Discriminative Center Learning mechanism (RDCL) to robustly and discriminatively tackle clean and noisy samples, and 2) a Shared Space Consistency Learning mechanism (SSCL) to alleviate and even eliminate the heterogeneity and semantic gaps across different modalities. More specifically, RDCL is presented to adaptively divide the noisy data into clean and noisy samples based on the memorization effect of deep neural networks (DNNs) [3], and then endow them with positive and negative optimization directions, respectively. In brief, RDCL compacts the clean points towards the corresponding category centers while scattering the noisy ones away from the noisy centers in the common space, thus alleviating the interference of noisy labels. In addition, our SSCL aims at mitigating the inherent gaps in the common space, i.e., the heterogeneity and semantic gaps. On the one hand, to bridge the heterogeneity gap across different modalities, our SSCL enforces modality-specific samples from the same instance to collapse into a single point in the common space, thus producing modality-invariant representations. On the other hand, our SSCL narrows the gap between the representation space and the shared label space to explicitly eliminate the semantic discrepancy, thus encapsulating common discrimination into common representations. Figure 1. The pipeline of our robust 2D-3D retrieval framework (RONO). First, modality-specific extractors project the different modalities {Xj, Yj} (j = 1, ..., M) into a common space. Second, our Robust Discriminative Center Learning mechanism (RDCL) is conducted in the common space to divide the clean and noisy data while rectifying the optimization direction of the noisy ones, leading to robustness against noisy labels. Finally, RONO employs a Shared Space Consistency Learning mechanism (SSCL) to bridge the intrinsic gaps between the common space and the label space. To be specific, SSCL narrows the cross-modal gap with a Multimodal Gap loss (MG) while minimizing the semantic discrepancy between the common space and the label space using a Common Representations Classification loss (CRC) Lcrc, thus endowing representations with modality-invariant discrimination. Our main contributions can be summarized as follows: • We propose a robust 2D-3D cross-modal retrieval framework (RONO) to robustly learn common discriminative and modality-invariant representations from noisy labels. To the best of our knowledge, this work could be one of the first attempts to learn with noisy labels for 2D-3D cross-modal retrieval. • To mitigate the impact of noisy labels, a novel Robust Discriminative Center Learning mechanism (RDCL) is proposed to adaptively distinguish clean and noisy samples, and then provide them with positive and negative optimization directions, respectively. • To construct discriminative and modality-invariant representations, a Shared Space Consistency Learning mechanism (SSCL) is presented to alleviate the intrinsic gaps across the heterogeneity, representation, and label spaces. • We theoretically and experimentally demonstrate the robustness of the proposed method under both synthetic symmetric/asymmetric and real-world noisy labels. Our RONO remarkably outperforms the state-of-the-art methods on 3D object benchmarks of different scales with noisy labels, without bells and whistles.
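The clean/noisy split and its two optimization directions can be summarized in a few lines. The following is a simplified sketch: cosine similarities, a fixed threshold alpha, and equal weighting are our assumptions, and the paper's adaptive division and loss weighting are more involved.

import torch
import torch.nn.functional as F

def rdcl_loss(z, labels, centers, alpha=0.5):
    # z: (B, D) common-space features, labels: (B,) long, centers: (C, D).
    z = F.normalize(z, dim=1)
    centers = F.normalize(centers, dim=1)
    sim = z @ centers.t()                                  # (B, C) cosine similarities
    sim_labeled = sim.gather(1, labels.view(-1, 1)).squeeze(1)
    clean = sim_labeled > alpha                            # samples that agree with their label
    # Pull clean samples toward the labeled center ...
    pull = (1.0 - sim_labeled[clean]).mean() if clean.any() else z.sum() * 0.0
    # ... and push suspected-noisy samples away from the (possibly wrong) labeled center.
    push = F.relu(sim_labeled[~clean]).mean() if (~clean).any() else z.sum() * 0.0
    return pull + push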
Frey_Probing_Neural_Representations_of_Scene_Perception_in_a_Hippocampally_Dependent_CVPR_2023
Abstract Deep artificial neural networks (DNNs) trained through backpropagation provide effective models of the mam-malian visual system, accurately capturing the hierarchy of neural responses through primary visual cortex to inferior temporal cortex (IT) [41, 43]. However, the ability of these networks to explain representations in higher cortical areas is relatively lacking and considerably less well researched. For example, DNNs have been less successful as a model of the egocentric to allocentric transformation embodied by circuits in retrosplenial and posterior parietal cortex. We describe a novel scene perception benchmark inspired by a hippocampal dependent task, designed to probe the ability of DNNs to transform scenes viewed from different egocen-tric perspectives. Using a network architecture inspired by the connectivity between temporal lobe structures and the hippocampus, we demonstrate that DNNs trained using a triplet loss can learn this task. Moreover, by enforcing a factorized latent space, we can split information propaga-tion into ”what” and ”where” pathways, which we use to reconstruct the input. This allows us to beat the state-of-the-art for unsupervised object segmentation on the CATER and MOVi-A,B,C benchmarks.
1. Introduction Recently, it has been shown that neural networks trained with large datasets can produce coherent scene understanding and are capable of synthesizing novel views [12, 21]. These models are trained on egocentric (self-centred) sensory input and can construct allocentric (world-centred) responses. In animals, this transformation is governed by structures along the hierarchy from the visual cortex to the hippocampal formation, an important model system related to navigation and memory [20,30]. Notably, the hippocampus is a necessary component of the network supporting memory and perception of places and events and is one of the first brain regions compromised during the progression of Alzheimer's disease (AD) [3,34]. However, experimental knowledge regarding the interplay across multiple interacting brain regions is limited, and new computational models are needed to better explain the single-cell responses across the whole transformation circuit. Here, we developed a scene recognition model to better understand the intrinsic computations governing the transformation from egocentric to allocentric reference frames, which controls successful view synthesis in humans and other animals. For this, we developed a novel hippocampally dependent task, inspired by the 4-Mountains-Test [18], which is used in clinics to predict early-onset Alzheimer's disease [40]. We tested this task by creating a biologically realistic model inspired by recent work in scene perception, in which scenes need to be re-imagined from several different viewpoints. The main contributions of our paper are the following: • We introduce and open-source the allocentric scene perception (ASP) benchmark for training view synthesis models based on a hippocampally dependent task which is frequently used to predict AD. • We show that a biologically realistic neural network model trained using a triplet loss can accurately distinguish between hundreds of scenes across many different viewpoints and that it can disentangle object information from location information when using a factorized latent space. • Lastly, we show that by using a reconstruction loss combined with a pixel-wise decoder we can perform unsupervised object segmentation, outperforming the state-of-the-art models on the CATER and MOVi-A,B,C benchmarks.
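The view-invariant training signal mentioned in the second contribution is a standard triplet objective; a minimal PyTorch version is given below (the margin value and function name are illustrative).

import torch
import torch.nn.functional as F

def scene_triplet_loss(anchor, positive, negative, margin=0.2):
    # anchor/positive: two viewpoints of the same scene, negative: a different
    # scene; all are (B, D) embeddings. Pull same-scene views together and
    # push different scenes apart by at least `margin`.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Usage sketch: loss = scene_triplet_loss(f(view_a), f(view_b), f(other_scene_view))
# (torch.nn.TripletMarginLoss implements the same objective.)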
Brahma_A_Probabilistic_Framework_for_Lifelong_Test-Time_Adaptation_CVPR_2023
Abstract Test-time adaptation (TTA) is the problem of updating a pre-trained source model at inference time given test input(s) from a different target domain. Most existing TTA approaches assume a setting in which the target domain is stationary, i.e., all the test inputs come from a single target domain. However, in many practical settings, the test input distribution might exhibit a lifelong/continual shift over time. Moreover, existing TTA approaches also lack the ability to provide reliable uncertainty estimates, which is crucial when distribution shifts occur between the source and target domains. To address these issues, we present PETAL (Probabilistic lifElong Test-time Adaptation with seLf-training prior), which solves lifelong TTA using a probabilistic approach and naturally results in (1) a student-teacher framework, where the teacher model is an exponential moving average of the student model, and (2) regularization of the model updates at inference time using the source model as a regularizer. To prevent model drift in the lifelong/continual TTA setting, we also propose a data-driven parameter restoration technique which contributes to reducing the error accumulation and maintaining the knowledge of recent domains by restoring only the irrelevant parameters. In terms of predictive error rate as well as uncertainty-based metrics such as Brier score and negative log-likelihood, our method achieves better results than the current state-of-the-art for online lifelong test-time adaptation across various benchmarks, such as the CIFAR-10C, CIFAR-100C, ImageNetC, and ImageNet3DCC datasets. The source code for our approach is accessible at https://github.com/dhanajitb/petal.
1. Introduction Deep learning models exhibit excellent performance in settings where the model is evaluated on data from the same distribution as the training data. However, the performance of such models degrades drastically when the distribution of the test inputs at inference time is different from the distribution of the training (source) data [11, 16, 36]. Thus, there is a need to robustify the network to handle such scenarios. A particularly challenging setting is when we do not have any labeled target-domain data to finetune the source model, and unsupervised adaptation must happen at test time when the unlabeled test inputs arrive. This problem is known as test-time adaptation (TTA) [28, 35, 38]. Moreover, due to the difficulty of training a single model to be robust to all potential distribution changes at test time, standard fine-tuning is infeasible, and TTA becomes necessary. Another challenge in TTA is that the source-domain training data may no longer be available due to privacy/storage requirements, and we only have access to the source pre-trained model. Current approaches addressing the problem of TTA [28, 35, 38, 41] are based on techniques like self-training based pseudo-labeling or entropy minimization in order to enhance performance under distribution shift during testing. One crucial challenge faced by existing TTA methods is that real-world machine learning systems work in non-stationary and continually changing environments. Even though the self-training based approaches perform well when test inputs are from a different domain but still i.i.d., it has been found that the performance is unstable when target test inputs come from a continually changing environment [32]. Thus, it becomes necessary to perform test-time adaptation in a continual manner. Such a setting is challenging because the continual adaptation of the model in the long term makes it more difficult to preserve knowledge about the source domain. A continually changing test distribution causes pseudo-labels to become noisier and miscalibrated [10] over time, leading to error accumulation [3], which is more likely to occur if early predictions are incorrect. When adapting to new test inputs, the model tends to forget source-domain knowledge, triggering catastrophic forgetting [27, 31, 33]. Moreover, existing TTA methods do not account for model/predictive uncertainty, which can result in miscalibrated predictions. Recently, [41] proposed CoTTA, an approach to address the continual/lifelong TTA setting using a stochastic parameter reset mechanism to prevent forgetting. Their reset mechanism, however, is based on randomly choosing a subset of weights to reset and is not data-driven. Moreover, their method does not take into account model/predictive uncertainty and is therefore susceptible to overconfident and miscalibrated predictions. Figure 1. Left: Problem setup of online lifelong TTA. During adaptation on a test input, the source-domain data is no longer available, and only the model pre-trained on the source domain is provided. Test inputs from different domains arrive continually, and the model has no knowledge about the change in domain. Right: Our proposed probabilistic framework for online lifelong TTA. We obtain a source-domain pre-trained model from the posterior density learned using training data from the source domain. The posterior density is used to initialize the student model. A test sample is provided as input to the student model. Using multiple augmentations of the test sample, we obtain an augmentation-averaged prediction from the teacher model. The loss consists of log-posterior and cross-entropy terms utilizing the student and teacher model predictions. We use backpropagation to update the student model and an exponential moving average to update the teacher model. To improve upon these challenges of continual/lifelong TTA, we propose a principled, probabilistic framework for lifelong TTA. Our framework (shown in Fig. 1 (Right)) constructs a posterior distribution over the source model weights and a data-dependent prior, which results in a self-training based cross-entropy loss with a regularizer term in the learning objective. This regularizer arises from terms corresponding to the posterior, which incorporates knowledge of the source (training) domain data. Moreover, our framework also offers a probabilistic perspective on, and justification for, the recently proposed CoTTA [41] approach, which arises as a special case of our probabilistic framework. In particular, considering only the data-driven prior in our approach, without the regularizer term, corresponds to the student-teacher based cross-entropy loss used in CoTTA. Further, to improve upon the stochastic restore used by [41], we present a data-driven parameter restoration based on the Fisher Information Matrix (FIM). In terms of improving accuracy and enhancing calibration during distribution shift, our approach surpasses existing approaches on various benchmarks. Main Contributions 1. From a probabilistic perspective, we arrive at the student-teacher training framework in our proposed Probabilistic lifElong Test-time Adaptation with seLf-training prior (PETAL) approach. Inspired by the self-training framework [19, 42], the teacher model is the exponential moving average of the student model, as depicted in Fig. 1 (Right).
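A compact sketch of the student-teacher loop just described is given below: the teacher is an EMA of the student, its predictions are averaged over augmentations, and the student is additionally pulled toward the source model as a stand-in for the log-posterior term. The isotropic quadratic regularizer, the hyper-parameters, and the function names are simplifying assumptions of this sketch, not the PETAL implementation.

import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # Teacher weights are an exponential moving average of the student's.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

def adaptation_loss(student, teacher, source_params, x, augment, n_aug=4, lam=1e-3):
    # Self-training cross-entropy against augmentation-averaged teacher
    # predictions, plus a quadratic pull toward the source model weights.
    with torch.no_grad():
        probs = torch.stack([teacher(augment(x)).softmax(dim=1) for _ in range(n_aug)])
        pseudo = probs.mean(dim=0)                          # averaged teacher prediction
    logp = student(x).log_softmax(dim=1)
    ce = -(pseudo * logp).sum(dim=1).mean()                 # soft cross-entropy
    reg = sum(((p - q) ** 2).sum() for p, q in zip(student.parameters(), source_params))
    return ce + lam * reg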
Gu_Text_With_Knowledge_Graph_Augmented_Transformer_for_Video_Captioning_CVPR_2023
Abstract Video captioning aims to describe the content of videos using natural language. Although significant progress has been made, there is still much room to improve performance for real-world applications, mainly due to the long-tail words challenge. In this paper, we propose a text with knowledge graph augmented transformer (TextKG) for video captioning. Notably, TextKG is a two-stream transformer, formed by the external stream and the internal stream. The external stream is designed to absorb additional knowledge; it models the interactions between the additional knowledge, e.g., a pre-built knowledge graph, and the built-in information of videos, e.g., the salient object regions, speech transcripts, and video captions, to mitigate the long-tail words challenge. Meanwhile, the internal stream is designed to exploit the multi-modality information in videos (e.g., the appearance of video frames, speech transcripts, and video captions) to ensure the quality of caption results. In addition, the cross attention mechanism is also used between the two streams for sharing information. In this way, the two streams can help each other produce more accurate results. Extensive experiments conducted on four challenging video captioning datasets, i.e., YouCookII, ActivityNet Captions, MSR-VTT, and MSVD, demonstrate that the proposed method performs favorably against the state-of-the-art methods. Specifically, the proposed TextKG method outperforms the best published results by improving 18.7% absolute CIDEr scores on the YouCookII dataset.
1. Introduction Video captioning aims to generate a complete and natural sentence to describe video content, and it has attracted much attention in recent years. (*Corresponding author: libo@iscas.ac.cn. Libo Zhang was supported by the Youth Innovation Promotion Association, CAS (2020111). This work was done during an internship at ByteDance Inc.) Generally, most existing methods [21, 38, 41, 58] require a large amount of paired video and description data for model training. Several datasets, such as YouCookII [69] and ActivityNet Captions [19], have been constructed to promote the development of the video captioning field. Meanwhile, some methods [29, 40, 48, 72] also use the large-scale narrated video dataset HowTo100M [30] to pretrain the captioning model to further improve accuracy. Although significant progress has been witnessed, it is still a challenge for video captioning methods to be applied in real applications, mainly due to the long-tail issue of words. Most existing methods [29, 40, 48, 72] attempt to design powerful neural networks, trained on large-scale video-text datasets, to make the network learn the relations between video appearances and descriptions. However, it is quite difficult for such networks to accurately predict the objects, properties, or behaviors that appear infrequently or never in the training data. Some methods [14, 71] attempt to use knowledge graphs to exploit the relations between objects for the long-tail challenge in image or video captioning, which produces promising results. In this paper, we present a text with knowledge graph augmented transformer (TextKG), which integrates additional knowledge from a knowledge graph and exploits the multi-modality information in videos to mitigate the long-tail words challenge. TextKG is a two-stream transformer, formed by the external stream and the internal stream. The external stream is used to absorb additional knowledge to help mitigate the long-tail words challenge by modeling the interactions between the additional knowledge in a pre-built knowledge graph and the built-in information of videos, such as the salient object regions in each frame, speech transcripts, and video captions. Specifically, the information is first retrieved from the pre-built knowledge graphs based on the detected salient objects. After that, we combine the features of the retrieved information, the appearance features of the detected salient objects, and the features of the speech transcripts and captions, and then feed them into the external stream of TextKG to model the interactions. The internal stream is designed to exploit the multi-modality information in videos, such as the appearance of video frames, speech transcripts and video captions, which can ensure the quality of caption results. To share information between the two streams, the cross attention mechanism is introduced. In this way, the two streams can obtain the required modal information from each other to generate more accurate results. The architecture of the proposed method is shown in Figure 1. Several experiments conducted on four challenging datasets, i.e., YouCookII [69], ActivityNet Captions [19], MSR-VTT [56], and MSVD [3], demonstrate that the proposed method performs favorably against the state-of-the-art methods.
Notably, our TextKG method outperforms the best published results by improving 18.7% and 3.2% absolute CIDEr scores in the paragraph-level evaluation mode on the YouCookII and ActivityNet Captions datasets, respectively.
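One two-stream layer with cross-attention for information sharing can be sketched as follows; the dimensions, layer layout, and the symmetric use of cross-attention in both directions are illustrative assumptions rather than the TextKG architecture.

import torch
import torch.nn as nn

class TwoStreamBlock(nn.Module):
    # Sketch of a two-stream layer: each stream self-attends, then each stream
    # queries the other so information flows in both directions.
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.self_int = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_ext = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_int = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_ext = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, internal, external):
        # internal: frame/transcript/caption tokens; external: those tokens plus
        # object-region features and retrieved knowledge-graph entries.
        internal = internal + self.self_int(internal, internal, internal)[0]
        external = external + self.self_ext(external, external, external)[0]
        internal = internal + self.cross_int(internal, external, external)[0]
        external = external + self.cross_ext(external, internal, internal)[0]
        return internal, external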
Bohdal_Meta_Omnium_A_Benchmark_for_General-Purpose_Learning-To-Learn_CVPR_2023
Abstract Meta-learning and other approaches to few-shot learn-ing are widely studied for image recognition, and are in-creasingly applied to other vision tasks such as pose estima-tion and dense prediction. This naturally raises the question of whether there is any few-shot meta-learning algorithm capable of generalizing across these diverse task types? To support the community in answering this question, we intro-duce Meta Omnium, a dataset-of-datasets spanning multi-ple vision tasks including recognition, keypoint localization, semantic segmentation and regression. We experiment with popular few-shot meta-learning baselines and analyze their ability to generalize across tasks and to transfer knowledge between them. Meta Omnium enables meta-learning re-searchers to evaluate model generalization to a much wider array of tasks than previously possible, and provides a sin-gle framework for evaluating meta-learners across a wide suite of vision applications in a consistent manner. Code and dataset are available at https://github.com/ edi-meta-learning/meta-omnium .
1. Introduction Meta-learning is a long-standing research area that aims to replicate the human ability to learn from a few examples by learning-to-learn from a large number of learning problems [61]. This area has become increasingly important recently, as a paradigm with the potential to break the data bottleneck of traditional supervised learning [26, 70]. While the largest body of work is applied to image recognition, few-shot learning algorithms have now been studied in most corners of computer vision, from semantic segmentation [37] to pose estimation [49] and beyond. Nevertheless, most of these applications of few-shot learning are advancing independently, with increasingly divergent application-specific methods and benchmarks. This makes it hard to evaluate whether few-shot meta-learners can solve diverse vision tasks. Importantly, it also discourages the development of meta-learners with the ability to learn-to-learn across tasks, transferring knowledge from, e.g., keypoint localization to segmentation, a capability that would be highly valuable for vision systems if achieved. Figure 1. Illustration of the diverse visual domains and task types in Meta Omnium. Meta-learners are required to generalize across multiple task types, multiple datasets, and held-out datasets. The overall trend in computer vision [20, 52] and AI [5, 55] more generally is towards more general-purpose models and algorithms that support many tasks and ideally leverage synergies across them. However, it has not yet been possible to explore this trend in meta-learning, due to the lack of few-shot benchmarks spanning multiple tasks. State-of-the-art benchmarks [63, 65] for visual few-shot learning are restricted to image recognition across a handful of visual domains. There is no few-shot benchmark that poses the more substantial challenge [57, 77] of generalizing across different tasks.
Table 1. Feature comparison between Meta Omnium and other few-shot meta-learning benchmarks. Meta Omnium uniquely combines a rich set of tasks and visual domains with a lightweight size for accessible use.
Dataset | Num Tasks | Num Domains | Num Imgs | Categories | Size | Lightweight | Multi-Task | Multi-Domain
Omniglot [30] | 1 | 1 | 32K | 1623 | 148MB | ✓ | ✗ | ✗
miniImageNet [66] | 1 | 1 | 60K | 100 | 1GB | ✓ | ✗ | ✗
Meta-Dataset [63] | 1 | 7∼10 | 53M | 43∼1500 | 210GB | ✗ | ✗ | ✓
VTAB [80] | 1 | 3∼19 | 2.2M | 2∼397 | 100GB | ✗ | ✗ | ✓
FSS1000 [37] | 1 | 1 | 10000 | 1000 | 670MB | ✓ | ✗ | ✗
Meta-Album [65] | 1 | 10∼40 | 1.5M | 19∼706 | 15GB | ✓ | ✗ | ✓
Meta Omnium | 4 | 21 | 160K | 2∼706 | 3.1GB | ✓ | ✓ | ✓
We introduce Meta Omnium, a dataset-of-datasets spanning multiple vision tasks including recognition, semantic segmentation, keypoint localization/pose estimation, and regression as illustrated in Figure 1. Specifically, Meta Omnium provides the following important contributions: (1) Existing benchmarks only test the ability of meta-learners to learn-to-learn within tasks such as classification [63, 65] or dense prediction [37]. Meta Omnium uniquely tests the ability of meta-learners to learn across multiple task types. (2) Meta Omnium covers multiple visual domains (from natural to medical and industrial images). (3) Meta Omnium provides the ability to thoroughly evaluate both in-distribution and out-of-distribution generalisation. (4) Meta Omnium has a clear hyper-parameter tuning (HPO) and model selection protocol, to facilitate future fair comparison across current and future meta-learning algorithms. (5) Unlike popular predecessors [63], and despite the diversity of tasks, Meta Omnium has been carefully designed to be of moderate computational cost, making it accessible for research in modestly-resourced universities as well as large institutions. Table 1 compares Meta Omnium to other relevant meta-learning datasets. We expect Meta Omnium to advance the field by encouraging the development of meta-learning algorithms capable of knowledge transfer across different tasks – as well as across learning episodes within individual tasks as is popularly studied today [16, 70]. In this regard, it provides the next level of the currently topical challenge of dealing with heterogeneity in meta-learning [1, 35, 63, 67]. While existing benchmarks have tested multi-domain heterogeneity (e.g., recognition of written characters and plants within a single network) [63, 65] and shown it to be challenging, Meta Omnium tests multi-task learning (e.g., character recognition vs plant segmentation). This is substantially more ambitious when considered from the perspective of common representation learning. For example, a representation tuned for recognition might benefit from rotation invariance, while one tuned for segmentation might benefit from rotation equivariance [11, 15, 71]. Thus, in contrast to conventional within-task meta-learning benchmarks that have been criticized as relying more on common representation learning than learning-to-learn [53, 62], Meta Omnium better tests the ability of learning-to-learn since the constituent tasks require more diverse representations.
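To make the task/episode terminology concrete, the following minimal sketch shows how a multi-task episode with a support and query set might be represented and dispatched to a general few-shot meta-learner. The class and method names (Episode, adapt, score) are hypothetical illustrations, not the benchmark's actual API.
```python
from dataclasses import dataclass
from typing import List
import torch

@dataclass
class Episode:
    """One meta-learning episode: a support set to adapt on and a query set to evaluate on.
    task_type carries the multi-task sense of 'task' (classification, segmentation, ...)."""
    task_type: str            # e.g. "classification", "segmentation", "keypoints", "regression"
    support_x: torch.Tensor   # [N_support, C, H, W]
    support_y: torch.Tensor   # labels, masks, keypoints or regression targets, depending on task_type
    query_x: torch.Tensor     # [N_query, C, H, W]
    query_y: torch.Tensor

def evaluate_meta_learner(meta_learner, episodes: List[Episode]) -> float:
    """Average query-set score of one meta-learner over episodes drawn from several task types."""
    scores = []
    for ep in episodes:
        # The meta-learner adapts on the support set and is then scored on the query set.
        adapted = meta_learner.adapt(ep.support_x, ep.support_y, ep.task_type)
        scores.append(adapted.score(ep.query_x, ep.query_y))
    return sum(scores) / len(scores)
```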
Cheng_BoxTeacher_Exploring_High-Quality_Pseudo_Labels_for_Weakly_Supervised_Instance_Segmentation_CVPR_2023
Abstract Labeling objects with pixel-wise segmentation requires a huge amount of human labor compared to bounding boxes. Most existing methods for weakly supervised instance segmentation focus on designing heuristic losses with priors from bounding boxes. However, we find that box-supervised methods can produce some fine segmentation masks, and we wonder whether the detectors could learn from these fine masks while ignoring low-quality masks. To answer this question, we present BoxTeacher, an efficient and end-to-end training framework for high-performance weakly supervised instance segmentation, which leverages a sophisticated teacher to generate high-quality masks as pseudo labels. Considering that massive noisy masks hurt the training, we present a mask-aware confidence score to estimate the quality of pseudo masks, and propose the noise-aware pixel loss and noise-reduced affinity loss to adaptively optimize the student with pseudo masks. Extensive experiments demonstrate the effectiveness of the proposed BoxTeacher. Without bells and whistles, BoxTeacher remarkably achieves 35.0 mask AP and 36.5 mask AP with ResNet-50 and ResNet-101 respectively on the challenging COCO dataset, which outperforms the previous state-of-the-art methods by a significant margin and bridges the gap between box-supervised and mask-supervised methods.
1. Introduction Instance segmentation, aiming at recognizing and segmenting objects in images, is a fairly challenging task in computer vision. Fortunately, the rapid development of object detection methods [7, 40, 50] has greatly advanced the emergence of a number of successful methods [5, 6, 23, 49, 54, 55] for effective and efficient instance segmentation.
⋆ This work was done when Tianheng Cheng and Shaoyu Chen were interns at Horizon Robotics. † Wenyu Liu is the corresponding author: liuwy@hust.edu.cn
Figure 1. (a) Segmentation Masks from BoxInst. BoxInst (ResNet-50 [24]) can produce some fine segmentation masks with weak supervision from bounding boxes and images. (b) Self-Training with Pseudo Masks on COCO val. We explore self-training to train a CondInst [49] with the pseudo labels generated by BoxInst. However, the improvements are limited.
With the fine-grained human annotations, recent instance segmentation methods can achieve impressive results on the challenging COCO dataset [34]. Nevertheless, labeling instance-level segmentation is much more complicated and time-consuming, e.g., labeling an object with polygon-based masks requires 10.3× more time than that with a 4-point bounding box [11]. Recently, a few works [25, 31–33, 51, 53] explore weakly supervised instance segmentation with box annotations or low-level colors. These weakly supervised methods can effectively train instance segmentation methods [23, 49, 55] without pixel-wise or polygon-based annotations and obtain fine segmentation masks. As shown in Fig. 1(a), BoxInst [51] can output a few high-quality segmentation masks and segment well on the object boundary, e.g., the person, even performing better than the ground-truth mask in details, though other objects may be badly segmented. Naturally, we wonder if the generated masks of box-supervised methods, especially the high-quality masks, could be qualified as pseudo segmentation labels to further improve the performance of weakly supervised instance segmentation. To answer this question, we first employ naive self-training to evaluate the performance of using box-supervised pseudo masks. Given the generated instance masks from BoxInst, we propose a simple yet effective box-based pseudo mask assignment to assign pseudo masks to ground-truth boxes. We then train CondInst [49] with the pseudo masks; it has the same architecture as BoxInst and consists of a detector [50] and a dynamic mask head. Fig. 1(b) shows that using self-training brings minor improvements and fails to unleash the power of high-quality pseudo masks, which can be attributed to two obstacles, i.e., (1) the naive self-training fails to filter low-quality masks, and (2) the noisy pseudo masks hurt the training using a fully-supervised pixel-wise loss. Besides, the multi-stage self-training is inefficient. To address these problems, we present BoxTeacher, an end-to-end training framework, which takes advantage of high-quality pseudo masks produced by box supervision.
BoxTeacher is composed of a sophisticated Teacher and a perturbed Student, in which the teacher generates high-quality pseudo instance masks along with the mask-aware confidence scores to estimate the quality of masks. Then the proposed box-based pseudo mask assignment will assign the pseudo masks to the ground-truth boxes. The student is normally optimized with the ground-truth boxes and pseudo masks through the box-based loss and noise-aware pseudo mask loss, and then progressively updates the teacher via Exponential Moving Average (EMA). In contrast to the naive multi-stage self-training, BoxTeacher is simpler and more efficient. The proposed mask-aware confidence score effectively reduces the impact of low-quality masks. More importantly, pseudo labeling can mutually improve the student and, in turn, encourage the teacher to generate higher-quality masks, hence pushing the limits of box supervision. BoxTeacher can serve as a general training paradigm and is agnostic to the underlying instance segmentation method. To benchmark the proposed BoxTeacher, we adopt CondInst [49] as the basic segmentation method. On the challenging COCO dataset [34], BoxTeacher surprisingly achieves 35.0 and 36.5 mask AP based on ResNet-50 [24] and ResNet-101 respectively, which remarkably outperforms the counterparts. We provide extensive experiments on PASCAL VOC and Cityscapes to demonstrate its effectiveness and generalization ability. Furthermore, BoxTeacher with Swin Transformer [37] obtains 40.6 mask AP as a weakly supervised approach for instance segmentation. Overall, the contributions can be summarized as follows:
• We solve the box-supervised instance segmentation problem from a new perspective, i.e., self-training with pseudo masks, and illustrate its effectiveness.
• We present BoxTeacher, a simple yet effective framework, which leverages pseudo masks with the mask-aware confidence score and noise-aware pseudo mask loss. Besides, we propose a pseudo mask assignment to assign pseudo masks to ground-truth boxes.
• We improve weakly supervised instance segmentation by large margins and bridge the gap between box-supervised and mask-supervised methods, e.g., BoxTeacher achieves 36.5 mask AP on COCO compared to 39.1 AP obtained by CondInst.
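A minimal sketch of the teacher–student mechanics described above, under stated assumptions: the teacher and student are generic networks returning per-instance mask logits, the box-based loss is abstracted away behind a callable, and the mask-aware confidence here is a simple placeholder (mean foreground probability) rather than BoxTeacher's exact formulation.
```python
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Progressively update the teacher as an exponential moving average of the student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

@torch.no_grad()
def pseudo_masks_with_confidence(teacher, images, conf_thresh=0.5):
    """Teacher predicts soft masks; a crude confidence score flags which masks to trust.

    Assumes teacher(images) returns per-instance mask logits of shape [B, N_inst, H, W].
    The mean foreground probability stands in for the paper's mask-aware confidence score.
    """
    probs = teacher(images).sigmoid()
    hard = (probs > 0.5).float()
    conf = (probs * hard).sum(dim=(-2, -1)) / hard.sum(dim=(-2, -1)).clamp(min=1.0)  # [B, N_inst]
    return hard, conf, conf > conf_thresh

def training_step(student, teacher, images, gt_boxes, box_loss_fn, mask_loss_fn, optimizer):
    """One BoxTeacher-style step: box supervision plus confidence-weighted pseudo-mask supervision."""
    pseudo, conf, keep = pseudo_masks_with_confidence(teacher, images)
    pred_masks = student(images).sigmoid()
    loss = box_loss_fn(pred_masks, gt_boxes)
    if keep.any():
        # Weight the pseudo-mask loss by the confidence so low-quality masks contribute less.
        loss = loss + conf[keep].mean() * mask_loss_fn(pred_masks[keep], pseudo[keep])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```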
Cui_KD-DLGAN_Data_Limited_Image_Generation_via_Knowledge_Distillation_CVPR_2023
Abstract Generative Adversarial Networks (GANs) rely heavily on large-scale training data for training high-quality image generation models. With limited training data, the GAN discriminator often suffers from severe overfitting, which directly leads to degraded generation, especially in generation diversity. Inspired by the recent advances in knowledge distillation (KD), we propose KD-DLGAN, a knowledge-distillation based generation framework that introduces pre-trained vision-language models for training effective data-limited generation models. KD-DLGAN consists of two innovative designs. The first is aggregated generative KD that mitigates the discriminator overfitting by challenging the discriminator with harder learning tasks and distilling more generalizable knowledge from the pre-trained models. The second is correlated generative KD that improves the generation diversity by distilling and preserving the diverse image-text correlation within the pre-trained models. Extensive experiments over multiple benchmarks show that KD-DLGAN achieves superior image generation with limited training data. In addition, KD-DLGAN complements the state-of-the-art with consistent and substantial performance gains. The code will be released.
1. Introduction Generative Adversarial Networks (GANs) [12] have become the cornerstone technique in various image generation tasks. On the other hand, effective training of GANs relies heavily on large-scale training images that are usually laborious and expensive to collect. With limited training data, the discriminator in GANs often suffers from severe overfitting [40, 53], leading to degraded generation as shown in Fig. 1. Recent works attempt to address this issue from two major perspectives: i) massive data augmentation that aims to expand the distribution of the limited training data [53]; ii) model regularization that introduces regularizers to modulate the discriminator learning [40]. We intend to mitigate the discriminator overfitting from a new perspective.
*corresponding author.
Figure 1. With limited training samples, state-of-the-art GANs such as BigGAN suffer from clear discriminator overfitting which directly leads to degraded generation. Recent work attempts to mitigate the overfitting via massive data augmentation in DA [53] or regularization in LeCam-GAN [40]. The proposed KD-DLGAN distills the rich and diverse text-image knowledge from the powerful vision-language model to the discriminator, which greatly mitigates the discriminator overfitting. Additionally, KD-DLGAN is designed specifically for image generation tasks and also greatly outperforms vanilla knowledge distillation (DA+KD).
Recent studies show that knowledge distillation (KD) from powerful vision-language models such as CLIP [36] can effectively relieve network overfitting in visual recognition tasks [2, 9, 29, 42]. Inspired by these prior studies, we explore KD for data-limited image generation, aiming to mitigate the discriminator overfitting by distilling the rich image-text knowledge from vision-language models. One intuitive approach is to adopt existing KD methods [17, 37] for training data-limited GANs, e.g., by forcing the discriminator to mimic the representation space of vision-language models. However, such an approach does not work well, as most existing KD methods are designed for visual recognition instead of GANs, as illustrated by DA+KD in Fig. 1. We propose KD-DLGAN, a knowledge-distillation based image generation framework that introduces the idea of generative KD for training effective data-limited GANs. KD-DLGAN is designed based on two observations in data-limited image generation: 1) the overfitting is largely attributed to the simplicity of the discriminator task, i.e., the discriminator can easily memorize the limited training samples and distinguish them with little effort; 2) the degradation in data-limited generation is largely attributed to poor generation diversity, i.e., the trained data-limited GAN models tend to generate similar images. Inspired by the two observations, we design two generative KD techniques that jointly distill knowledge from CLIP [36] to the GAN discriminator for effective training of data-limited GANs. The first is aggregated generative KD (AGKD) that challenges the discriminator by forcing fake samples to be similar to real samples while mimicking CLIP’s visual feature space.
It mitigates the discriminator overfitting by aggregating features of real and fake samples and distilling generalizable CLIP knowledge concurrently. The second is correlated generative KD (CGKD) that strives to distill CLIP image-text correlations to the GAN discriminator. It improves the generation diversity by enforcing the diverse correlations between images and texts, ultimately improving the generation performance. The two designs distill the rich yet diverse CLIP knowledge, which effectively mitigates the discriminator overfitting and improves the generation, as illustrated by KD-DLGAN in Fig. 1. The main contributions of this work can be summarized in three aspects. First, we propose KD-DLGAN, a novel image generation framework that introduces knowledge distillation for effective GAN training with limited training data. To the best of our knowledge, this is the first work that exploits the idea of knowledge distillation in data-limited image generation. Second, we design two generative KD techniques, including aggregated generative KD and correlated generative KD, that mitigate the discriminator overfitting and improve the generation performance effectively. Third, extensive experiments over multiple widely adopted benchmarks show that KD-DLGAN achieves superior image generation and also complements the state-of-the-art with consistent and substantial performance gains.
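The exact loss formulations are not given above, so the sketch below is only one plausible reading of the two designs: AGKD pulls the discriminator features of fake samples toward CLIP features of real samples, and CGKD matches the discriminator's image–text similarity structure to CLIP's over a set of text prompts. Function names, the cosine/KL forms, and the temperature are illustrative assumptions, not the paper's definitions.
```python
import torch
import torch.nn.functional as F

def aggregated_generative_kd(disc_feat_fake, clip_feat_real):
    """AGKD (illustrative): pull fake-sample discriminator features toward the CLIP
    features of real samples, aggregating real/fake while mimicking CLIP's space.
    disc_feat_fake, clip_feat_real: [B, D] (a projection head mapping discriminator
    features to CLIP's embedding size is assumed)."""
    f = F.normalize(disc_feat_fake, dim=-1)
    c = F.normalize(clip_feat_real, dim=-1)
    return (1.0 - (f * c).sum(dim=-1)).mean()  # cosine-distance style alignment

def correlated_generative_kd(disc_img_feat, clip_img_feat, clip_txt_feat, tau=0.07):
    """CGKD (illustrative): distill CLIP's image-text correlation into the discriminator
    by matching the two similarity distributions over T text prompts."""
    d = F.normalize(disc_img_feat, dim=-1)   # [B, D], projected to CLIP dimension
    ci = F.normalize(clip_img_feat, dim=-1)  # [B, D]
    ct = F.normalize(clip_txt_feat, dim=-1)  # [T, D]
    sim_student = d @ ct.t() / tau           # [B, T] image-text similarities (discriminator)
    sim_teacher = ci @ ct.t() / tau          # [B, T] image-text similarities (CLIP)
    return F.kl_div(F.log_softmax(sim_student, dim=-1),
                    F.softmax(sim_teacher, dim=-1), reduction="batchmean")
```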
Fostiropoulos_Batch_Model_Consolidation_A_Multi-Task_Model_Consolidation_Framework_CVPR_2023
Abstract In Continual Learning (CL), a model is required to learn a stream of tasks sequentially without significant performance degradation on previously learned tasks. Current approaches fail for a long sequence of tasks from diverse domains and difficulties. Many of the existing CL approaches are difficult to apply in practice due to excessive memory cost or training time, or are tightly coupled to a single device. With the intuition derived from the widely applied mini-batch training, we propose Batch Model Consolidation (BMC) to support more realistic CL under conditions where multiple agents are exposed to a range of tasks. During a regularization phase, BMC trains multiple expert models in parallel on a set of disjoint tasks. Each expert maintains weight similarity to a base model through a stability loss, and constructs a buffer from a fraction of the task’s data. During the consolidation phase, we combine the learned knowledge on ‘batches’ of expert models using a batched consolidation loss on memory data that aggregates all buffers. We thoroughly evaluate each component of our method in an ablation study and demonstrate the effectiveness on the standardized benchmark datasets Split-CIFAR-100, Tiny-ImageNet, and the Stream dataset composed of 71 image classification tasks from diverse domains and difficulties. Our method outperforms the next best CL approach by 70% and is the only approach that can maintain performance at the end of 71 tasks.
1. Introduction Continual Learning (CL) has allowed deep learning models to learn in a real world that is constantly evolving, in which data distributions change, goals are updated, and critically, much of the information that any model will encounter is not immediately available [2]. Current approaches in CL provide a trade-off to the stability-plasticity dilemma [3], where improving performance for a novel task leads to catastrophic forgetting. Continual Learning benchmarks are composed of a limited number of tasks and of tasks with non-distinct domains, such as Split-CIFAR100 [4] and Split-Tiny-ImageNet [5].
Figure 1. The loss contours of sequential training compared with batch task training [1] (shaded areas as low-error zones for each task). Intuition: similar to mini-batch training, batched task training can reduce the local minima and improve the convexity of the loss landscape.
Previous approaches in Continual Learning suffer significant performance degradation when faced with a large number of tasks, or tasks from diverse domains [1]. Additionally, the cost of many methods increases with the number of tasks [6, 7] and becomes ultimately unacceptable for certain applications, while other methods [8–11] are tightly coupled to training on a single device and therefore cannot benefit from scaling in distributed settings. As such, current approaches are impractical for many real-world applications, where multiple devices are trained on a diverse and disjoint set of tasks with the goal of maintaining a single model. Motivated by the performance, memory cost, training time, and flexibility issues of current approaches, we propose Batch Model Consolidation (BMC), a Continual Learning framework that supports distributed training on multiple streams of diverse tasks, but also improves performance when applied on a single long task stream. Our method trains and consolidates multiple workers that each become an expert in a task that is disjoint from all other tasks. In contrast, for Federated Learning the training set is composed of a single task of heterogeneous data [12]. Our method is composed of two phases. First, during the regularization phase, a set of expert models is trained on new tasks in parallel with their weights regularized to a base model. Second, during the consolidation phase, the expert models are combined into the base model in a way that better retains the performance on the current tasks of all experts and all previously learned tasks.
Figure 2. A single incremental step of BMC. On the right figure, the updating of a base model with Multi-Expert Training: after receiving the data of the new tasks D_i, ..., D_{i+k}, a batch of experts θ_i, ..., θ_{i+k} are trained separately on their corresponding tasks with a stability loss applied from the base model. The newly trained experts then sample a subset of their training data and combine it with the memory to perform batched distillation on the base model. On the left figure, the regularization helps the batched distillation to update the model closer to the regularization boundary and towards the jointly low-error zone of the old tasks and two new tasks.
The main advantage of our method is that it provides a better approximation to the multi-task gradient of all tasks from all expert models, Fig. 2. Lastly, BMC better retains performance for significantly more tasks than current baselines, while reducing the total time of training when compared to training on the same task stream in a sequential manner. The primary contributions of our paper are as follows.
1. We propose Batch Model Consolidation (BMC) to support CL for training multiple expert models on a single task stream composed of tasks from diverse domains.
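A schematic sketch of the two phases under stated assumptions: the stability loss is taken here as an L2 penalty pulling each expert toward the base model, and consolidation as distillation of the experts' averaged predictions on the aggregated memory; the experts are trained sequentially below for clarity even though the method trains them in parallel, and the actual BMC losses may differ in detail.
```python
import copy
import torch
import torch.nn.functional as F

def train_expert(base_model, task_loader, task_loss_fn, stability_weight=0.1, lr=1e-3, buffer_frac=0.1):
    """Regularization phase: train one expert on its task while staying close to the base model."""
    expert = copy.deepcopy(base_model)
    opt = torch.optim.SGD(expert.parameters(), lr=lr)
    buffer = []
    for x, y in task_loader:
        loss = task_loss_fn(expert(x), y)
        # Stability loss: keep the expert's weights near the base model's weights.
        stability = sum((pe - pb.detach()).pow(2).sum()
                        for pe, pb in zip(expert.parameters(), base_model.parameters()))
        (loss + stability_weight * stability).backward()
        opt.step()
        opt.zero_grad()
        if torch.rand(1).item() < buffer_frac:  # keep a small fraction of the task's data
            buffer.append((x, y))
    return expert, buffer

def consolidate(base_model, experts, memory, lr=1e-3, epochs=1):
    """Consolidation phase: distill a batch of experts into the base model on the joint memory."""
    opt = torch.optim.SGD(base_model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in memory:
            with torch.no_grad():
                # Average the experts' soft predictions as the distillation target.
                target = torch.stack([F.softmax(e(x), dim=-1) for e in experts]).mean(0)
            loss = F.kl_div(F.log_softmax(base_model(x), dim=-1), target, reduction="batchmean")
            loss.backward()
            opt.step()
            opt.zero_grad()
    return base_model
```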
Fan_SelfME_Self-Supervised_Motion_Learning_for_Micro-Expression_Recognition_CVPR_2023
Abstract Facial micro-expressions (MEs) refer to brief spontaneous facial movements that can reveal a person’s genuine emotion. They are valuable in lie detection, criminal analysis, and other areas. While deep learning-based ME recognition (MER) methods have achieved impressive success, these methods typically require pre-processing using conventional optical flow-based methods to extract facial motions as inputs. To overcome this limitation, we propose a novel MER framework using self-supervised learning to extract facial motion for ME (SelfME). To the best of our knowledge, this is the first work using an automatically self-learned motion technique for MER. However, the self-supervised motion learning method might suffer from ignoring symmetrical facial actions on the left and right sides of faces when extracting fine features. To address this issue, we developed a symmetric contrastive vision transformer (SCViT) to constrain the learning of similar facial action features for the left and right parts of faces. Experiments were conducted on two benchmark datasets showing that our method achieved state-of-the-art performance, and ablation studies demonstrated the effectiveness of our method.
1. Introduction Personality and emotions are crucial aspects of human cognition and play a vital role in human understanding and human-computer interaction [20]. Facial expressions provide an important cue for understanding human emotions [3]. According to neuropsychological research, micro-expressions (MEs) are revealed when voluntary and involuntary expressions collide [8]. As a slight leakage of expression, MEs are subtle in terms of intensity, brief in duration (occurring in less than 0.5 seconds), and affect small facial areas [3]. Because MEs are hard to control, they are more likely to reflect genuine human emotions, and thus have been implemented in various fields, such as national security, political psychology, and medical care [41]. Despite the fact that MEs are valuable, their unique characteristics bring multifarious challenges for ME analysis.
Figure 1. MEs may be imperceptible to the naked eye, but the motion between the onset (the moment when the facial action begins to grow stronger) and the apex (the moment when the facial action reaches its maximum intensity) makes them readily observable. SelfME learns this motion automatically.
The task of ME recognition (MER) is to classify MEs by the type of emotion [3]. Due to the small sample size of datasets, almost all methods for MER have used hand-crafted features, which can be non-optical flow-based [2, 44, 50] or optical flow-based [25, 32, 42]. The extracted features were then fed into either traditional classification models [40, 43, 47] or deep learning-based classification models [12, 24, 51, 52]. Although recently proposed methods claimed to be deep learning-based, the ones with the highest performance often rely on traditional optical flow [10, 49] between the onset and apex frames as inputs. These optical flow methods are computed in a complex manner, and a number of researchers have even proposed approaches for further processing the optical flow features prior to inputting them into the networks to improve their performance. This type of pipeline may hinder the development of MER in the deep learning era. To overcome this limitation, we propose a novel MER framework using self-supervised learning to extract facial motion for ME (SelfME) in this work. To the best of our knowledge, this is the first work with an automatically self-learned motion technique for MER. We visualize the motion learned by SelfME in Fig. 1. The symmetry of facial actions is important in MER, as spontaneous expressions are more symmetric than posed ones, or have intensity differences between left and right faces that are negligible [9, 13]. However, the learned motion may suffer from ignoring symmetrical facial actions on the left and right sides of the face when extracting fine features. To address this issue, we developed a symmetric contrastive vision transformer (SCViT) to constrain the learning of similar facial action features for the left and right parts of the faces, mitigating asymmetry information irrelevant to MER. Experiments were conducted on two benchmark datasets, showing that our method achieved state-of-the-art performance. In addition, the ablation studies demonstrated the effectiveness of our method. The rest of this paper is organized as follows.
Section 2 reviews the related work. Section 3 describes our methodology. Section 4 presents the experiments with analysis and discussions. Limitations and ethical concerns are discussed in Section 5. Section 6 concludes the paper.
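SCViT is only described at a high level above, so the sketch below shows one generic way such a left–right symmetry constraint could be imposed: mirror the right half of a spatial feature map and penalize its distance to the left half. The assumption of a roughly frontal, centered face, the feature layout, and the cosine-based loss form are illustrative choices, not the paper's exact design.
```python
import torch
import torch.nn.functional as F

def symmetry_loss(feat_map):
    """Encourage similar features for the left and right halves of the face.

    feat_map: [B, C, H, W] spatial features aligned with a roughly frontal, centered face.
    The right half is flipped horizontally so corresponding facial regions line up.
    """
    w = feat_map.shape[-1]
    left = feat_map[..., : w // 2]
    right = torch.flip(feat_map[..., w - w // 2 :], dims=[-1])
    left = F.normalize(left.flatten(2), dim=1)    # [B, C, H*(W//2)]
    right = F.normalize(right.flatten(2), dim=1)
    # Cosine-similarity based penalty: 0 when left/right features match exactly.
    return (1.0 - (left * right).sum(dim=1)).mean()
```
In training, such a term would be added to the recognition loss with a small weight so that symmetry acts as a regularizer rather than the main objective.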
Huang_T-SEA_Transfer-Based_Self-Ensemble_Attack_on_Object_Detection_CVPR_2023
Abstract Compared to query-based black-box attacks, transfer-based black-box attacks do not require any information about the attacked models, which ensures their secrecy. However, most existing transfer-based approaches rely on ensembling multiple models to boost the attack transferability, which is time- and resource-intensive, not to mention the difficulty of obtaining diverse models for the same task. To address this limitation, in this work, we focus on the single-model transfer-based black-box attack on object detection, utilizing only one model to achieve a high-transferability adversarial attack on multiple black-box detectors. Specifically, we first make observations on the patch optimization process of the existing method and propose an enhanced attack framework by slightly adjusting its training strategies. Then, we analogize patch optimization with regular model optimization, proposing a series of self-ensemble approaches on the input data, the attacked model, and the adversarial patch to efficiently make use of the limited information and prevent the patch from overfitting. The experimental results show that the proposed framework can be applied with multiple classical base attack methods (e.g., PGD and MIM) to greatly improve the black-box transferability of the well-optimized patch on multiple mainstream detectors, meanwhile boosting white-box performance. Our code is available at https://github.com/VDIGPKU/T-SEA.
1. Introduction With the rapid development of computer vision, deep learning-based object detectors are being widely applied to many aspects of our lives, many of which are highly related to our personal safety, including autonomous driving and intelligent security.
*These authors contributed equally to this work. †Corresponding author.
Figure 1. Model optimization usually augments the training data and drops out neurons to increase generalization, motivating us to propose self-ensemble methods for adversarial patch optimization. Specifically, inspired by data augmentation in model optimization, we augment the data x and the model f via constrained data augmentation and model ShakeDrop, respectively. Meanwhile, inspired by dropout in model optimization, we cut out the training patch τ to prevent it from overfitting on specific models or images.
Unfortunately, recent works [16, 19, 34, 37] have proved that adversarial examples can successfully disrupt the detectors in both digital and physical domains, posing a great threat to detector-based applications. Hence, the mechanism of adversarial examples on detectors should be further explored to help us improve the robustness of detector-based AI applications. In real scenes, attackers usually cannot obtain the details of the attacked model, so black-box attacks naturally receive more attention from both academia and industry. Generally speaking, black-box adversarial attacks can be classified into 1) query-based and 2) transfer-based. For the former, we usually pre-train an adversarial perturbation on the white-box model and then fine-tune it via the information from the target black-box model, assuming we can access the target model for free. However, frequent queries may expose the attack intent, weakening the covertness of the attack. Contrarily, transfer-based black-box attacks utilize the adversarial examples’ model-level transferability to attack the target model without querying and ensure the secrecy of the attack. Thus, how to enhance the model-level transferability is a key problem of transfer-based black-box attacks. Most existing works apply model ensemble strategies to enhance the transferability among black-box models; however, finding proper models for the same task is not easy, and training an adversarial patch on multiple models is laborious and costly. To address these issues, in this work, we focus on how to enhance the model-level transferability with only one accessible model instead of model ensembling. Though the investigation of adversarial transferability is still in its early stage, the generalizability of neural networks has been investigated for a long time. Intuitively, the association between model optimization and patch optimization can be established by shifting the formal definitions.
Given an input pair of data x ∈ X and label y ∈ Y, classical model learning is to find a parametric model f_θ such that f_θ(x) = y, while learning an adversarial patch treats the model f_θ, the original optimization target, as a fixed input and aims to find a hypothesis h of a parametric patch τ that corrupts the trained model such that h_τ(f_θ, x) ≠ y. Hence, it is straightforward to analogize patch optimization with regular model optimization. Motivated by the classical approaches for increasing model generalization, we propose our Transfer-based Self-Ensemble Attack (T-SEA), ensembling the input x, the attacked model f_θ, and the adversarial patch τ from themselves to boost the adversarial transferability of the attack. Specifically, we first introduce an enhanced attack baseline based on [32]. Observing from Fig. 4 that the original training strategies have some limitations, we slightly adjust its learning rate scheduler and training patch scale to revise [32] as our enhanced baseline (E-baseline). Then, as shown in Fig. 1, motivated by input augmentation in model optimization (e.g., training data augmentation), we introduce constrained data augmentation (data self-ensemble) and model ShakeDrop (model self-ensemble), virtually expanding the inputs of patch optimization (i.e., the input data x and the attacked model f) to increase the transferability of the patch against different data and models. Meanwhile, motivated by the dropout technique in model optimization, which utilizes sub-networks of the optimizing model to overcome overfitting and thus increase model generalization, we propose patch cutout (patch self-ensemble), randomly performing cutout on the training patch τ to overcome overfitting; a sketch of this operation is given after the contribution list below. Through comprehensive experiments, we prove that the proposed E-baseline and self-ensemble strategies perform very well on widely-used detectors with mainstream base attack methods (e.g., PGD [24], MIM [13]). Our contributions can be summarized as the following:
• We propose a transfer-based black-box attack T-SEA, requiring only one attacked model to achieve a high adversarial transferability attack on object detectors.
• Observing the issues of the existing approach, we slightly adjust the training strategies to craft an enhanced baseline and increase its performance.
• Motivated by approaches increasing the generalization of deep learning models, we propose a series of strategies to self-ensemble the input data, attacked model, and adversarial patch, which significantly increases the model-level adversarial transferability without introducing extra information.
• The experimental results demonstrate that the proposed T-SEA can greatly reduce the AP on multiple widely-used detectors in the black-box setting compared to the previous methods, while concurrently performing well with multiple base attack methods.
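A minimal sketch of the patch cutout idea (patch self-ensemble): at each optimization step a random square region of the adversarial patch is withheld before the patch is applied to the images, so no single region of the patch is always relied upon. The cutout size, the fill value, and the helper names in the usage comment (apply_patch, detector_attack_loss) are illustrative placeholders, not T-SEA's actual implementation.
```python
import torch

def patch_cutout(patch, cutout_frac=0.3, fill=0.0):
    """Randomly cut out a square region of the adversarial patch during training.

    patch: [C, H, W] tensor being optimized. Returns a masked copy so that gradients
    still flow to the untouched area of the original patch.
    """
    _, h, w = patch.shape
    ch, cw = int(h * cutout_frac), int(w * cutout_frac)
    top = torch.randint(0, h - ch + 1, (1,)).item()
    left = torch.randint(0, w - cw + 1, (1,)).item()
    mask = torch.ones_like(patch)
    mask[:, top : top + ch, left : left + cw] = 0.0
    return patch * mask + fill * (1.0 - mask)

# Usage inside a PGD-style patch optimization loop (schematic; helpers are placeholders):
# adv = patch_cutout(patch)                               # self-ensemble the patch
# loss = detector_attack_loss(apply_patch(images, adv))   # attack objective on the detector
# loss.backward(); patch.data -= step * patch.grad.sign(); patch.grad.zero_()
```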
Hoyer_MIC_Masked_Image_Consistency_for_Context-Enhanced_Domain_Adaptation_CVPR_2023
Abstract In unsupervised domain adaptation (UDA), a model trained on source data (e.g. synthetic) is adapted to target data (e.g. real-world) without access to target annotation. Most previous UDA methods struggle with classes that have a similar visual appearance on the target domain, as no ground truth is available to learn the slight appearance differences. To address this problem, we propose a Masked Image Consistency (MIC) module to enhance UDA by learning spatial context relations of the target domain as additional clues for robust visual recognition. MIC enforces the consistency between predictions of masked target images, where random patches are withheld, and pseudo-labels that are generated based on the complete image by an exponential moving average teacher. To minimize the consistency loss, the network has to learn to infer the predictions of the masked regions from their context. Due to its simple and universal concept, MIC can be integrated into various UDA methods across different visual recognition tasks such as image classification, semantic segmentation, and object detection. MIC significantly improves the state-of-the-art performance across the different recognition tasks for synthetic-to-real, day-to-nighttime, and clear-to-adverse-weather UDA. For instance, MIC achieves an unprecedented UDA performance of 75.9 mIoU and 92.8% on GTA→Cityscapes and VisDA-2017, respectively, which corresponds to an improvement of +2.1 and +3.0 percent points over the previous state of the art. The implementation is available at https://github.com/lhoyer/MIC.
1. Introduction In order to train state-of-the-art neural networks for visual recognition tasks, large-scale annotated datasets are necessary. However, the collection and annotation process can be very time-consuming and tedious. For instance, the annotation of a single image for semantic segmentation can take more than one hour [10, 66]. Therefore, it would be beneficial to resort to existing or simulated datasets, which are easier to annotate.
Figure 1. (a) Previous UDA methods such as HRDA [31] struggle with similarly looking classes on the unlabeled target domain. Here, the interior of the sidewalk is wrongly segmented as road, probably due to the ambiguous local appearance. (b) The proposed Masked Image Consistency (MIC) enhances the learning of context relations to consider additional context clues such as the curb in the foreground. With MIC, the adapted network is able to correctly segment the sidewalk. (c) MIC can be plugged into most existing UDA methods. It enforces the consistency of the predictions of a masked target image with the pseudo-label of the original image. So, the network is trained to better utilize context clues on the target domain. Further details are shown in Fig. 3.
However, a network trained on such a source dataset usually performs worse when applied to the actual target dataset, as neural networks are sensitive to domain gaps. To mitigate this issue, unsupervised domain adaptation (UDA) methods adapt the network to the target domain using unlabeled target images, for instance, with adversarial training [20, 27, 57, 73] or self-training [30, 31, 72, 79, 97]. UDA methods have remarkably progressed in the last few years. However, there is still a noticeable performance gap compared to supervised training. A common problem is the confusion of classes with a similar visual appearance on the target domain such as road/sidewalk or pedestrian/rider as
Considering semantic segmentation for illustration, MIC masks out a random selection of target image patches and trains the network to predict the semantic segmentation result of the entire image including the masked-out parts. In that way, the network has to utilize the context to infer the se-mantics of the masked regions. As there are no ground truth labels for the target domain, we resort to pseudo-labels, gen-erated by an EMA teacher that uses the original, unmasked target images as input. Therefore, the teacher can utilize both context and local clues to generate robust pseudo-labels. Over the course of the training, different parts of objects are masked out so that the network learns to utilize different context clues, which further increases the robustness. After UDA with MIC, the network is able to better exploit context clues and succeeds in correctly segmenting difficult areas that rely on context clues such as the sidewalk in Fig. 1 b). To the best of our knowledge, MIC is the first UDA ap-proach to exploit masked images to facilitate learning con-text relations on the target domain. Due to its universality and simplicity, MIC can be straightforwardly integrated into various UDA methods across different visual recognition tasks, making it highly valuable in practice. MIC achieves significant and consistent performance improvements for dif-ferent UDA methods (including adversarial training, entropy-minimization, and self-training) on multiple visual recog-nition tasks (image classification, semantic segmentation, and object detection) with different domain gaps (synthetic-to-real, clear-to-adverse-weather, and day-to-night) and dif-ferent network architectures (CNNs and Transformer). It sets a new state-of-the-art performance on all tested bench-marks with significant improvements over previous methods as shown in Fig. 2. For instance, MIC respectively improves the state-of-the-art performance by +2.1, +4.3, and +3.0 per-cent points on GTA →Cityscapes(CS), CS →DarkZurich, and 40 60 80 100 Cls. Accuracy, Seg. mIoU, or Det. mAPDet. CS Foggy CS Segm. CS FoggyZ. Segm. CS DarkZ. Segm. Synthia CS Segm. CS ACDC Segm. GTA CS Cls. OfficeHome Cls. VisDA] +3.6 ] +3.7 ] +4.3 ] +1.5 ] +2.4 ] +2.1 ] +1.9 ] +3.0w/o MIC w/ MICFigure 2. MIC significantly improves state-of-the-art UDA methods across different UDA benchmarks and recognition tasks such as image classification (Cls.), semantic segmentation (Segm.), and object detection (Det.). Detailed results can be found in Sec. 4. VisDA-2017 and achieves an unprecedented UDA perfor-mance of 75.9 mIoU, 60.2 mIoU, and 92.8%, respectively.
Gosala_SkyEye_Self-Supervised_Birds-Eye-View_Semantic_Mapping_Using_Monocular_Frontal_View_Images_CVPR_2023
Abstract Bird’s-Eye-View (BEV) semantic maps have become an essential component of automated driving pipelines due to the rich representation they provide for decision-making tasks. However, existing approaches for generating these maps still follow a fully supervised training paradigm and hence rely on large amounts of annotated BEV data. In this work, we address this limitation by proposing the first self-supervised approach for generating a BEV semantic map using a single monocular image from the frontal view (FV). During training, we overcome the need for BEV ground truth annotations by leveraging the more easily available FV semantic annotations of video sequences. Thus, we propose the SkyEye architecture that learns based on two modes of self-supervision, namely, implicit supervision and explicit supervision. Implicit supervision trains the model by enforcing spatial consistency of the scene over time based on FV semantic sequences, while explicit supervision exploits BEV pseudolabels generated from FV semantic annotations and self-supervised depth estimates. Extensive evaluations on the KITTI-360 dataset demonstrate that our self-supervised approach performs on par with the state-of-the-art fully supervised methods and achieves competitive results using only 1% of direct supervision in BEV compared to fully supervised approaches. Finally, we publicly release both our code and the BEV datasets generated from the KITTI-360 and Waymo datasets.
1. Introduction Bird’s-Eye-View (BEV) maps are an integral part of an autonomous driving pipeline as they allow the vehicle to perceive the environment using a feature-rich yet computationally-efficient representation. These maps capture both static and dynamic obstacles in the scene while encoding their absolute distances in the metric scale using a low-cost 2D representation. Such characteristics allow them to be used in many distance-based time-sensitive applications such as trajectory estimation and collision avoidance [12, 14].
*Equal contribution
Figure 1. SkyEye: The first self-supervised framework for semantic BEV mapping. We use sequences of FV semantic annotations to train the network to estimate a semantic map in BEV using a single RGB input.
Existing approaches that estimate BEV maps from frontal view (FV) images and/or LiDAR scans require large datasets annotated in the BEV as they are trained in a fully supervised manner [6, 19, 23, 43]. However, BEV ground truth generation relies on the presence of HD maps, annotated 3D point clouds, and/or 3D bounding boxes, which are extremely arduous to obtain [27]. Recent approaches [29, 36] circumvent this problem of requiring BEV ground truths by leveraging data from simulation environments. However, these approaches suffer from the large domain gap between simulated and real-world images, which results in their reduced performance in the real world. In this work, we address the aforementioned limitations by proposing SkyEye, the first self-supervised learning framework for generating an instantaneous semantic map in BEV, given a single monocular FV image. During training, our approach, depicted in Fig. 1, overcomes the need for BEV ground truths by leveraging FV semantic ground truth labels along with the spatial and temporal consistency offered by video sequences. FV semantic ground truth labels can easily be obtained with reduced human annotation effort due to the relatively small domain gap between FV images of different datasets, which allows for efficient label transfer [15, 17, 37]. Additionally, no range sensor is required for data recording. During inference, our model only uses a single monocular FV image to generate the semantic map in BEV. Our proposed self-supervised learning framework leverages two supervision signals, namely, implicit and explicit supervision. Implicit supervision generates the training signal by enforcing spatial and temporal consistency of the scene. To this end, our model generates the FV semantic predictions for the current and future time steps using the FV image of only the current time step. These predictions are supervised using the corresponding ground truth labels in FV. Explicit supervision, in contrast, supervises the network using BEV semantic pseudolabels generated from FV semantic ground truths using a self-supervised depth estimation network augmented with a dedicated post-processing procedure. We perform extensive evaluations of SkyEye on the KITTI-360 dataset and demonstrate its generalizability on the Waymo dataset. Results demonstrate that SkyEye performs on par with the state-of-the-art fully-supervised approaches and achieves competitive performance with only 1% of pseudolabels in BEV. Further, we outperform all baseline methods w.r.t.
generalization capabilities. Our main contributions can thus be stated as follows:
• The first self-supervised framework for generating semantic BEV maps from monocular FV images.
• An implicit supervision strategy that leverages semantic annotations in FV to encode semantic and spatial information into a latent voxel grid.
• A pseudolabel generation pipeline to create BEV pseudolabels from FV semantic ground truth labels.
• A novel semantic BEV dataset derived from Waymo.
• Extensive evaluations as well as ablation studies to show the impact of our contributions.
• Publicly available code for our SkyEye framework at http://skyeye.cs.uni-freiburg.de.
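A schematic combination of the two supervision signals described above, assuming a model that predicts FV semantics for the current and K future steps plus one BEV map from the current image alone. The model interface, per-term weights, and the pseudo-label pipeline itself are assumptions outside this sketch.
```python
import torch.nn.functional as F

def skyeye_loss(model, fv_image_t, fv_labels_seq, bev_pseudolabels, bev_weight=1.0):
    """Implicit supervision (FV semantics over time) + explicit supervision (BEV pseudolabels).

    fv_image_t:        current frontal-view image, [B, 3, H, W]
    fv_labels_seq:     list of FV semantic labels for t, t+1, ..., t+K, each [B, H, W]
    bev_pseudolabels:  BEV semantic pseudolabels for time t, [B, Hb, Wb]
    """
    fv_preds, bev_pred = model(fv_image_t)   # assumed to return FV logits for t..t+K and one BEV map
    # Implicit: the single-image latent representation must explain the scene at future steps too.
    implicit = sum(F.cross_entropy(p, y) for p, y in zip(fv_preds, fv_labels_seq)) / len(fv_labels_seq)
    # Explicit: direct supervision in BEV from pseudolabels derived from FV annotations and depth.
    explicit = F.cross_entropy(bev_pred, bev_pseudolabels)
    return implicit + bev_weight * explicit
```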
Gong_MMG-Ego4D_Multimodal_Generalization_in_Egocentric_Action_Recognition_CVPR_2023
Abstract In this paper, we study a novel problem in egocentric action recognition, which we term as “Multimodal Generalization” (MMG). MMG aims to study how systems can generalize when data from certain modalities is limited or even completely missing. We thoroughly investigate MMG in the context of standard supervised action recognition and the more challenging few-shot setting for learning new action categories. MMG consists of two novel scenarios, designed to support security and efficiency considerations in real-world applications: (1) missing modality generalization, where some modalities that were present during the training time are missing during the inference time, and (2) cross-modal zero-shot generalization, where the modalities present during the inference time and the training time are disjoint. To enable this investigation, we construct a new dataset MMG-Ego4D containing data points with video, audio, and inertial motion sensor (IMU) modalities. Our dataset is derived from the Ego4D [27] dataset, but processed and thoroughly re-annotated by human experts to facilitate research in the MMG problem. We evaluate a diverse array of models on MMG-Ego4D and propose new methods with improved generalization ability. In particular, we introduce a new fusion module with modality dropout training, contrastive-based alignment training, and a novel cross-modal prototypical loss for better few-shot performance. We hope this study will serve as a benchmark and guide future research in multimodal generalization problems. The benchmark and code are available at https://github.com/facebookresearch/MMG Ego4D
1. Introduction Action recognition systems are typically trained on data captured from a third-person or spectator perspective [37, 56]. However, in areas such as robotics and augmented reality, we capture data through the eyes of agents, i.e., in a first-person or egocentric perspective. With head-mounted devices such as Ray-Ban Stories becoming popular, action recognition from egocentric videos is critical to enable downstream applications, such as contextual recommendations or reminders.
*Equal contribution †Work done during an internship at Meta Reality Labs.
Figure 1. Overview of the MMG-Ego4D challenge. In a typical evaluation setting (a), networks are trained for the supervised setting or the few-shot setting using training/support sets with data from all modalities and evaluated on data points with all modalities. However, there can often be a mismatch between training and testing modalities. Our proposed challenge contains two tasks to mimic these settings. In (b) missing modality evaluation, the model can only use a subset of training modalities for inference. In (c) cross-modal zero-shot evaluation, the models are evaluated on modalities unseen during training.
However, egocentric action recognition is fundamentally different and more challenging [6, 7, 43, 55]. While third-person video clips are often curated, egocentric video clips are uncurated and have low-level corruptions, such as large motion blur due to head motion. Moreover, egocentric perception requires a careful understanding of the camera wearer’s physical surroundings, and must interpret the objects and interactions from the wearer’s perspective. Recognizing egocentric activity exclusively from one modality can often be ambiguous.
Figure 2. Multimodal data is crucial for egocentric perception. Input data consists of three modalities: video, audio, and IMU. (top) Video action recognition identifies the clip with a tool and much grass in the background as the class trim grass with other tools. The audio action recognition system classifies the periodic rubbing sound as put trash in a trash can. The IMU model classifies the head movement action into the class wipe a table. (bottom) The multimodal action recognition system correctly combines the video feed and audio feed and identifies the activity as collect dry leaves on the ground.
This is because we want to perceive what the device’s wearer is performing instead of what the camera feed is capturing. To this end, multimodal information can be crucial for understanding and disambiguating the user’s intent or action. We demonstrate it through an example in Fig. 2. In the example, the video feed shows a tool in the background of the grassland.
An activity recognition model exclusively based on video recognizes it as the class trim grass with other tools. Similarly, a model exclusively trained on audio identifies the rubbing sounds in the clip as the class put trash in a trash can, and an IMU model mistakes the head motion as wipe a table. However, a multimodal system correctly identifies the class as collect dry leaves on the ground by combining video, audio, and IMU signals. While using multimodal information is essential to achieve state-of-the-art performance, it also presents a unique challenge: we may not be able to use all modalities in the real world due to security or efficiency considerations. For example, a user might be located in a sensitive environment and decide to turn off the camera due to security concerns. Similarly, users may turn off microphones so that their voices are not heard. In these situations, multimodal systems must be able to generalize to missing modalities (Fig. 1 (b)), i.e., work with an incomplete set of modalities at inference, and make a robust prediction. These challenges are not just limited to inference time but could manifest in restrictions during training. For example, if a user has to train a system, often in a few-shot setting, computationally expensive modalities like video are best trained on the cloud. However, the user might prefer that their data stays on the device. Yet video consumes 60× more storage and 43× more compute compared to cheaper modalities like IMU (see Tab. 1), significantly increasing the difficulty of training on devices with limited compute and storage.
Table 1. Compute and memory cost for different modalities. Memory used per second for each modality is computed by averaging the memory used by 1000 data points drawn randomly from Ego4D [27]. The provided compute number corresponds to the forward pass cost of MViT [15] for video, AST [26] for audio, and a ViT [11] based transformer model for IMU data.
Modality | video | audio | IMU
Memory per second of data (KB) | 593.92 | 62.76 | 9.44
Typical model FLOPs (G) | 70.50 | 42.08 | 1.65
In this situation, we may want to enable training with computationally less demanding modalities like audio while maintaining the flexibility of performing inference on more informative modalities like video. Multimodal systems should robustly generalize across modalities. In this work, we propose MMG-Ego4D: a challenge designed to measure the generalization ability of egocentric activity recognition models. Our challenge consists of two novel tasks: (1) missing modality generalization, aimed at measuring the generalization ability of models when evaluated on an incomplete set of modalities (shown in Fig. 1 (b)), and (2) cross-modal zero-shot generalization, aimed at measuring the generalization ability of models in generalizing to unseen modalities during test time (shown in Fig. 1 (c)). We evaluate several widely-used architectures using this benchmark and introduce a novel approach that enhances generalization capability in the MMG-Ego4D challenge, while also improving performance in standard full-modalities settings. Our primary contributions are:
• MMG Problem. We present MMG, a novel and practical problem with two tasks, missing modality generalization and cross-modal zero-shot generalization, for evaluating the generalization ability of multimodal action recognition models.
These tasks are designed to support real-world security and efficiency considerations, and we define them in both supervised and more challenging few-shot settings.
• MMG-Ego4D Dataset. To facilitate the study of the MMG problem in the egocentric action recognition task, we introduce a new dataset, MMG-Ego4D, which is derived from the Ego4D [27] dataset by preprocessing the data points and thoroughly re-annotating them by human experts to suit the task. To the best of our knowledge, this is the first work to introduce these novel evaluation tasks and a benchmark challenge of its kind.
• Strong Baselines. We present a new method that achieves strong performance on the generalization ability benchmark and also improves the performance under the normal full-modalities setting. Our method employs a Transformer-based fusion module, which allows for flexible input of different modalities. We employ a cross-modal contrastive alignment loss to project features of different modalities into a unified space. Finally, a novel loss function is introduced, called the cross-modal prototypical loss, achieving state-of-the-art results in multimodal few-shot settings. Extensive ablation studies are performed to identify each proposed component’s contribution.
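A minimal sketch of modality-dropout training for an attention-based fusion module: per-modality token sequences are randomly dropped during training so the fused representation remains usable when only a subset of modalities is available at inference. The module layout below is an assumption for illustration, not the paper's exact architecture.
```python
import random
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse per-modality token sequences with a small transformer encoder."""
    def __init__(self, dim=256, heads=4, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # learnable class token for the fused output

    def forward(self, modality_tokens, drop_prob=0.0):
        # modality_tokens: list of [B, T_m, dim] tensors (e.g. video, audio, IMU features).
        if self.training and drop_prob > 0 and len(modality_tokens) > 1:
            kept = [t for t in modality_tokens if random.random() > drop_prob]
            modality_tokens = kept if kept else [random.choice(modality_tokens)]  # keep at least one
        b = modality_tokens[0].shape[0]
        tokens = torch.cat([self.cls.expand(b, -1, -1)] + list(modality_tokens), dim=1)
        return self.encoder(tokens)[:, 0]   # fused embedding read from the class token
```
At inference, any subset of the modality token lists can be passed in, mirroring the missing-modality evaluation; the contrastive alignment and prototypical losses would operate on the embeddings produced by this module.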
Du_Dual-Bridging_With_Adversarial_Noise_Generation_for_Domain_Adaptive_rPPG_Estimation_CVPR_2023
Abstract The remote photoplethysmography (rPPG) technique can estimate pulse-related metrics (e.g. heart rate and respiratory rate) from facial videos and has a high potential for health monitoring. The latest deep rPPG methods can model in-distribution noise due to head motion, video compression, etc., and estimate high-quality rPPG signals under similar scenarios. However, deep rPPG models may not generalize well to the target test domain with unseen noise and distortions. In this paper, to improve the generalization ability of rPPG models, we propose a dual-bridging network to reduce the domain discrepancy by aligning intermediate domains and synthesizing the target noise in the source domain for better noise reduction. To comprehensively explore the target domain noise, we propose a novel adversarial noise generation in which the noise generator indirectly competes with the noise reducer. To further improve the robustness of the noise reducer, we propose hard noise pattern mining to encourage the generator to learn hard noise patterns contained in the target domain features. We evaluated the proposed method on three public datasets with different types of interferences. Under different cross-domain scenarios, the comprehensive results show the effectiveness of our method.
1. Introduction With the development of rPPG technology, physiological metrics such as heart rate [27], heart rate variability [34], and respiratory rate [21] can also be estimated from facial videos. Deep learning-based rPPG methods overcome non-physiological intensity variations [30, 49] and model noise in training samples [24, 28]. Despite the high accuracy under intra-dataset evaluations, the deep rPPG models may not be able to generalize well to unseen interferences in the test domain. The domain gap is mainly from unseen non-physiological interferences such as lighting conditions, camera sensors, video compression algorithms, facial expressions, etc. They can induce distortions in estimated rPPG signals and reduce both the accuracy and the reliability of pulse-related metrics estimation. Considering it is hard to cover all interferences during the training stage, to improve the usability of rPPG in realistic applications, one main challenge is how to boost the generalizability of rPPG models to unseen scenarios.

*Equal contribution
Figure 1. The comparison between (a) typical intra-dataset adversarial rPPG noise modeling, (b) an intuitive UDA framework for rPPG feature alignment, and (c) our proposed dual-bridging network with adversarial noise modeling and hard noise pattern mining (HM). Here SYN denotes synthetic data, DT is for the denoised target domain, and G, D, and NR denote the generator, domain classifier, and noise reducer, respectively.

In recent research of rPPG, both deep learning-based frameworks and mechanisms [45, 46, 49] are proposed to overcome the non-physiological intensity variations. GAN-based disentanglement learning has also been adopted to reduce the noise from pseudo [28] or synthesized [24] noisy features. We summarize this approach in Figure 1 (a), where a discriminator is employed to distinguish the generated feature (SYN in the figure) from the original one. These methods can perform well under intra-dataset evaluation settings since the in-distribution noise patterns are thoroughly investigated with a large number of adversarial learning iterations. However, they may fail when encountering unseen domains in real application scenarios since noise patterns may be different from the ones of training data. Intuitively, the unsupervised domain adaptation (UDA) technique can help in bridging the gap between source and target domain [7, 14, 17, 44]. As shown in Figure 1 (b), a noise reducer module NR that aims to obtain noise-free domain-invariant representations can be learned by fighting against the domain classifier D. However, this intuitive solution may not work well since the domain classification may not be able to give sufficient information for NR to identify whether the feature components are noise or physiological information. Directly aligning the rPPG features from different domains may end up distorting the physiological information since they are from different subjects.
The ground-truth PPG (GT-PPG) signal with detailed waveform information helps preserve the physiological information and can provide much more informative guidance with the regression task. However, GT-PPG is available in the source domain but not the target domain. How to leverage the source domain GT-PPG to train NR to be robust to the noise from the target domain is the key issue to be solved in this work. To achieve it, we propose the dual-bridging noise modeling network as shown in Figure 1 (c). The first bridging works as high-level guidance where the denoised target domain feature is adversarially pulled to the source domain feature (as in Figure 1 (b)). On top of it, the second bridging aims to help synthesize the target domain noise and inject it into the source domain denoised feature so that the GT-PPG regression can help finetune the NR for better robustness in the target domain. An adversarial noise generation module (G|NR) is designed where the generator is conditioned on the NR so that it keeps on overcoming the complex noise patterns that can hardly be solved in the first bridging. With the high-level guidance (first bridging) and detailed signal regression (second bridging), the NR can handle the target domain noise better and therefore improve the accuracy of rPPG estimation in the target domain. To further discover the remaining noise vestige, we build a hard noise pattern mining mechanism to squeeze the unsolved local noise pattern from the denoised target feature so that G|NR can thoroughly synthesize it.

In sum, the contributions of this work are: (1) A dual-bridging noise modeling network that adapts target domain noise in a coarse-to-fine manner. (2) An adversarial noise generation mechanism to progressively synthesize and inject the hard target domain noisy features into the source domain while keeping the physiological information. (3) A hard noise pattern mining mechanism to further explore the target domain noise patterns with larger variations. We evaluated the proposed method on three public datasets with various types of interferences including facial motion and expression, video compression, skin tone, and heartbeat ranges. Under different cross-domain scenarios, the comprehensive results show the effectiveness of our method.
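The first bridging described above adversarially pulls denoised target features toward the source domain via a domain classifier. Below is a minimal, hypothetical sketch of that kind of domain-adversarial alignment with a gradient-reversal layer; the module names and sizes are illustrative, and the paper's conditional noise generator G|NR and hard noise pattern mining are not shown.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass, flips the gradient sign in the backward pass.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class DomainClassifier(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 2))
    def forward(self, feat, lam=1.0):
        return self.net(GradReverse.apply(feat, lam))

# Training-step sketch: the noise reducer NR produces features for source and
# (denoised) target samples; fooling D pulls the two feature distributions together.
dim = 128
noise_reducer = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
D = DomainClassifier(dim)
src_feat = noise_reducer(torch.randn(16, dim))
tgt_feat = noise_reducer(torch.randn(16, dim))
feats = torch.cat([src_feat, tgt_feat])
labels = torch.cat([torch.zeros(16, dtype=torch.long), torch.ones(16, dtype=torch.long)])
adv_loss = nn.functional.cross_entropy(D(feats), labels)
adv_loss.backward()  # gradient reversal makes the noise reducer maximize D's confusion
```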
He_3D_Video_Object_Detection_With_Learnable_Object-Centric_Global_Optimization_CVPR_2023
Abstract We explore long-term temporal visual correspondence-based optimization for 3D video object detection in this work. Visual correspondence refers to one-to-one mappings for pixels across multiple images. Correspondence-based optimization is the cornerstone for 3D scene reconstruction but is less studied in 3D video object detection, because moving objects violate multi-view geometry constraints and are treated as outliers during scene reconstruction. We address this issue by treating objects as first-class citizens during correspondence-based optimization. In this work, we propose BA-Det, an end-to-end optimizable object detector with object-centric temporal correspondence learning and featuremetric object bundle adjustment. Empirically, we verify the effectiveness and efficiency of BA-Det for multiple baseline 3D detectors under various setups. Our BA-Det achieves SOTA performance on the large-scale Waymo Open Dataset (WOD) with only marginal computation cost. Our code is available at https://github.com/jiaweihe1996/BA-Det .
1. Introduction 3D object detection is an important perception task, especially for indoor robots and autonomous-driving vehicles. Recently, image-only 3D object detection [23, 52] has been proven practical and has made great progress. In real-world applications, cameras capture video streams instead of unrelated frames, which suggests abundant temporal information is readily available for 3D object detection. In single-frame methods, despite simply relying on the prediction power of deep learning, finding correspondences plays an important role in estimating per-pixel depth and the object pose in the camera frame. Popular correspondences include Perspective-n-Point (PnP) between pre-defined 3D keypoints [22, 52] and their 2D projections in monocular 3D object detection, and Epipolar Geometry [6, 12] in multi-view 3D object detection. However, unlike the single-frame case, temporal visual correspondence has not been explored much in 3D video object detection.

As summarized in Fig. 1, existing methods in 3D video object detection can be divided into three categories, while each has its own limitations. Fig. 1a shows methods with object tracking [3], especially using a 3D Kalman Filter to smooth the trajectory of each detected object. This approach is detector-agnostic and thus widely adopted, but it is just an output-level smoothing process without any feature learning. As a result, the potential of video is under-exploited. Fig. 1b illustrates the temporal BEV (Bird's-Eye View) approaches [14, 23, 26] for 3D video object detection. They introduce multi-frame temporal cross-attention or concatenation for BEV features in an end-to-end fusion manner. As for utilizing temporal information, temporal BEV methods rely solely on feature fusion while ignoring explicit temporal correspondence. Fig. 1c depicts stereo-from-video methods [46, 47]. These methods explicitly construct a pseudo-stereo view using ego-motion and then utilize the correspondence on the epipolar line of two frames for depth estimation. However, the use of explicit correspondence in these methods is restricted to only two frames, thereby limiting its potential to utilize more temporal information. Moreover, another inevitable defect of these methods is that moving objects break the epipolar constraints, which cannot be well handled, so monocular depth estimation has to be reused.

Considering the aforementioned shortcomings, we seek a new method that can handle both static and moving objects, and utilize long-term temporal correspondences. Firstly, in order to handle both static and moving objects, we draw experience from the object-centric global optimization with reprojection constraints in Simultaneous Localization and Mapping (SLAM) [21, 48]. Instead of directly estimating the depth for each pixel from temporal cues, we utilize them to construct useful temporal constraints to refine the object pose prediction from network prediction. Specifically, we construct a non-linear least-squares optimization problem with the temporal correspondence constraint in an
object-centric manner to optimize the pose of objects no matter whether they are moving or not. Secondly, for long-term temporal correspondence learning, hand-crafted descriptors like SIFT [27] or ORB [35] are no longer suitable for our end-to-end object detector. Besides, the long-term temporal correspondence needs to be robust to viewpoint changes and severe occlusions, where these traditional sparse descriptors are incompetent. So, we expect to learn a dense temporal correspondence for all available frames.

Figure 1. Illustration of how to leverage temporal information in different 3D video object detection paradigms: (a) Temporal Filtering, (b) Temporal BEV, (c) Stereo from Video, (d) BA-Det (Ours).

In this paper, as shown in Fig. 1d, we propose a 3D video object detection paradigm with learnable long-term temporal visual correspondence, called BA-Det. Specifically, the detector has two stages. In the first stage, a CenterNet-style monocular 3D object detector is applied for single-frame object detection. After associating the same objects in the video, the second-stage detector extracts RoI features for the objects in the tracklet and matches dense local features on the object among multiple frames, called the object-centric temporal correspondence learning (OTCL) module. To make traditional object bundle adjustment (OBA) learnable, we formulate featuremetric OBA. At training time, with the featuremetric OBA loss, the object detection and temporal feature correspondence are learned jointly. During inference, we use the 3D object estimation from the first stage as the initial pose and associate the objects with a 3D Kalman Filter. The object-centric bundle adjustment refines the pose and 3D box size of the object in each frame at the tracklet level, taking the initial object pose and temporal feature correspondence from OTCL as the input. Experiment results on the large-scale Waymo Open Dataset (WOD) show that our BA-Det could achieve state-of-the-art performance compared with other single-frame and multi-frame object detectors. We also conduct extensive ablation studies to demonstrate the effectiveness and efficiency of each component in our method.

In summary, our work has the following contributions:
• We present a novel object-centric 3D video object detection approach, BA-Det, by learning object detection and temporal correspondence jointly.
• We design the second-stage object-centric temporal correspondence learning module and the featuremetric object bundle adjustment loss.
• We achieve state-of-the-art performance on the large-scale WOD. The ablation study and comparisons show the effectiveness and efficiency of our BA-Det.
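BA-Det refines per-frame object pose at the tracklet level by solving an object-centric bundle adjustment over temporal correspondences. The sketch below is a purely geometric toy version of that idea (reprojection-error least squares over one tracklet), not the paper's learnable featuremetric OBA; the intrinsics, tracklet size, and noise levels are invented for illustration, and rotations and box size are omitted.

```python
import numpy as np
from scipy.optimize import least_squares

K = np.array([[1000., 0., 640.], [0., 1000., 360.], [0., 0., 1.]])  # assumed intrinsics
T, P = 5, 8                      # frames in the tracklet, tracked points on the object
rng = np.random.default_rng(0)
pts_obj_gt = rng.normal(scale=1.0, size=(P, 3))                     # object-frame points
trans_gt = np.stack([[0.2 * t, 0.0, 10.0 + t] for t in range(T)])   # per-frame translation

def project(pts_obj, trans):
    cam = pts_obj[None] + trans[:, None]          # (T, P, 3) camera-frame points
    uv = cam @ K.T
    return uv[..., :2] / uv[..., 2:3]             # (T, P, 2) pixel coordinates

obs = project(pts_obj_gt, trans_gt) + rng.normal(scale=0.5, size=(T, P, 2))  # noisy tracks

def residuals(x):
    trans = x[:T * 3].reshape(T, 3)
    pts = x[T * 3:].reshape(P, 3)
    return (project(pts, trans) - obs).ravel()

x0 = np.concatenate([(trans_gt + 0.5).ravel(), (pts_obj_gt + 0.3).ravel()])  # coarse init
sol = least_squares(residuals, x0)                # tracklet-level refinement
print("final reprojection RMSE:", np.sqrt(np.mean(sol.fun ** 2)))
```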
Dobler_Robust_Mean_Teacher_for_Continual_and_Gradual_Test-Time_Adaptation_CVPR_2023
Abstract Since experiencing domain shifts during test-time is inevitable in practice, test-time adaptation (TTA) continues to adapt the model after deployment. Recently, the area of continual and gradual test-time adaptation (TTA) emerged. In contrast to standard TTA, continual TTA considers not only a single domain shift, but a sequence of shifts. Gradual TTA further exploits the property that some shifts evolve gradually over time. Since in both settings long test sequences are present, error accumulation needs to be addressed for methods relying on self-training. In this work, we propose and show that in the setting of TTA, the symmetric cross-entropy is better suited as a consistency loss for mean teachers compared to the commonly used cross-entropy. This is justified by our analysis with respect to the (symmetric) cross-entropy's gradient properties. To pull the test feature space closer to the source domain, where the pre-trained model is well posed, contrastive learning is leveraged. Since applications differ in their requirements, we address several settings, including having source data available and the more challenging source-free setting. We demonstrate the effectiveness of our proposed method "robust mean teacher" (RMT) on the continual and gradual corruption benchmarks CIFAR10C, CIFAR100C, and ImageNet-C. We further consider ImageNet-R and propose a new continual DomainNet-126 benchmark. State-of-the-art results are achieved on all benchmarks.¹
1. Introduction Assuming that training and test data originate from the same distribution, deep neural networks achieve remarkable performance. In the real world, this assumption is often violated for a deployed model, as many environments are non-stationary. Since the occurrence of a data shift [35] during test-time will likely result in a performance drop, domain generalization aims to improve robustness and generalization already during training [11, 13, 32, 43, 45]. However, these approaches are often limited, due to the wide range of potential data shifts [30] that are unknown during training. To gain insight into the current distribution shift, recent approaches leverage the test samples encountered during model deployment to adapt the pre-trained model. This is also known as test-time adaptation (TTA) and can be done either offline or online. While offline TTA assumes to have access to all test data at once, online TTA considers the setting where the predictions are needed immediately and the model is adapted on the fly using only the current test batch.

*Equal contribution. ¹Code is available at: https://github.com/mariodoebler/test-time-adaptation

While adapting the batch normalization statistics during test-time can already significantly improve the performance [38], more sophisticated methods update the model weights using self-training based approaches, like entropy minimization [49]. However, the effectiveness of most TTA methods is only demonstrated for a single domain shift at a time. Since encountering just one domain shift is very unlikely in real world applications, [50] introduced continual test-time adaptation where the model is adapted to a sequence of domain shifts. As pointed out by [50], adapting the model to long test sequences in non-stationary environments is very challenging, as self-training based methods are prone to error accumulation due to miscalibrated predictions. Although it is always possible to reset the model after it has been updated, this prevents exploiting previously acquired knowledge, which is undesirable for the following reason: While some domain shifts occur abruptly in practice, there are also several shifts which evolve gradually over time [17]. In [26], this setting is denoted as gradual test-time adaptation. [17, 26] further showed that in the setting of gradual shifts, pseudo-labels are more reliable, resulting in a better model adaptation to large domain gaps. However, if the model is reset and the domain gap increases over time, model adaptation through self-training or self-supervised learning may not be successful [17, 24].

To tackle the aforementioned challenges, we introduce a robust mean teacher (RMT) that exploits a symmetric cross-entropy (SCE) loss instead of the commonly used cross-entropy (CE) loss to perform self-training. This is motivated by our findings that the CE loss has undesirable gradient properties in a mean teacher framework which are compensated for when using an SCE loss. Furthermore, RMT uses a multi-viewed contrastive loss to pull test features towards the initial source space and learn invariances with regard to the input space.
While our framework performs well for both continual and gradual domain shifts, we observe that mean teachers are especially well suited for easy-to-hard problems. We empirically demonstrate this not only for gradually shifting test sequences, but also for the case where the domain difficulty with respect to the error of the initial source model increases. Since source data might not be available during test-time due to privacy or accessibility reasons, recent approaches in TTA focus on the source-free setting. Lacking labeled source data, source-free approaches can be susceptible to error accumulation. Therefore, as an extension to our framework, we additionally look into the setting where source data is accessible. We summarize our contributions as follows:
• By analyzing the gradient properties, we motivate and propose that in the setting of TTA, the symmetric cross-entropy is better suited for a mean teacher than the commonly used cross-entropy.
• We present a framework for both continual and gradual TTA that achieves state-of-the-art results on the existing corruption benchmarks, ImageNet-R, and a newly proposed continual DomainNet-126 benchmark.
• For our framework, we address a wide range of practical requirements, including the source-free setting and having source data available.
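Since the key contribution above is replacing the cross-entropy consistency loss of a mean teacher with a symmetric cross-entropy, a minimal sketch of that consistency term (together with the usual EMA teacher update) is given below. The equal weighting of the two terms is an assumption and not necessarily the weighting used in RMT.

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(student_logits, teacher_logits):
    """Symmetric CE between student and (EMA) teacher predictions on the same batch.
    In practice the teacher logits come from a no_grad / detached forward pass."""
    p_s = student_logits.softmax(dim=1)
    p_t = teacher_logits.softmax(dim=1)
    log_s = student_logits.log_softmax(dim=1)
    log_t = teacher_logits.log_softmax(dim=1)
    ce = -(p_t * log_s).sum(dim=1)       # teacher -> student (the usual consistency term)
    rce = -(p_s * log_t).sum(dim=1)      # student -> teacher (the reverse term)
    return (0.5 * ce + 0.5 * rce).mean()

def ema_update(teacher, student, momentum=0.999):
    """Teacher weights as an exponential moving average of the student weights."""
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)
```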
Higgins_MOVES_Manipulated_Objects_in_Video_Enable_Segmentation_CVPR_2023
Abstract Our method uses manipulation in video to learn to understand held objects and hand-object contact. We train a system that takes a single RGB image and produces a pixel-embedding that can be used to answer grouping questions (do these two pixels go together) as well as hand-association questions (is this hand holding that pixel). Rather than painstakingly annotate segmentation masks, we observe people in realistic video data. We show that pairing epipolar geometry with modern optical flow produces simple and effective pseudo-labels for grouping. Given people segmentations, we can further associate pixels with hands to understand contact. Our system achieves competitive results on hand and hand-held object tasks.
1. Introduction Fig. 1 shows someone making breakfast. Despite having never been there, you understand the bag the hand is holding as an object, recognize that the hand is holding the bag, and recognize that the milk carton in the background is a distinct object. The goal of this paper is to build a computer vision system with such capabilities: grouping held objects (the bag), recognizing contact (the hand holding the bag), and grouping non-held objects (the carton). We accomplish our aim by pairing modern optical flow with 3D geometry and, to associate objects with hands, per-pixel human masks. Our results show that direct discriminative training on simple pseudo-labels generated by epipolar geometry produces strong feature representations that we can use to solve a variety of hand-held object-related tasks.

Figure 1. Given an input image, MOVES produces features (shown using PCA to project to RGB) that easily group with ordinary clustering systems and can also be used to associate hands with the objects they hold. The clusters are often sufficient for defining objects, but additional cues such as a box further improve them. At training time, MOVES learns this feature space from direct discriminative training on simple pseudo-labels. While MOVES learns only from objects that hands are actively holding (such as the semi-transparent bag), we show that it works well on inactive objects as well (such as the milk carton).

The topic of understanding hands and the objects they hold has been a subject of intense interest from the computer vision community for decades. Recently, this has often taken the form of extensive efforts annotating hands and hand-held objects [9, 13, 36, 47]. These methods often go beyond standard detection and segmentation approaches [17, 25] by producing associations between hands and objects and by detecting on any held object, as opposed to a fixed set of pre-defined object classes. Since these require expensive annotations, many researchers have started focusing on using weaker supervision [12, 37] by starting with a few readily obtained cues (e.g., basic information about humans, flow). These weakly-supervised methods, however, have not matched supervised methods. Regardless of supervision, methods like [36, 37] only understand objects when they are held and cannot group un-held objects.

We propose a simple approach based on directly predicting two properties: grouping, or whether pixels move together (the classic Gestalt law of common fate [43]); as well as hand association, whether a hand pixel is likely holding another pixel. We show that these can be learned from automatically generated pseudo-labels that use optical flow [21], epipolar geometry [15], and person masks [19]. Our network, named MOVES, learns a mapping to a per-pixel embedding; this embedding is then analyzed by grouping and association heads that are trained by cross-entropy to predict the pseudo-labels. While the pseudo-labels themselves are poor and incomplete, we show that the learned classifiers are effective and that the embeddings are good enough to be analyzed by off-the-shelf, unspecialized algorithms like HDBSCAN [31].
Excitingly, even though our signal comes only when objects are picked up, our features generalize to objects that are not currently being interacted with. We train and evaluate MOVES on challenging egocentric data, including EPIC-KITCHENS [6, 7] and EGO4D [13]. Our experiments show that once trained, MOVES features enable strong performance on a number of tasks related to hands and the objects they hold. First, using MOVES on the COHESIV [37] hand-object segmentation benchmark for EPIC-KITCHENS [6] improves by 31% relative (19.5→25.7) over the recent weakly-supervised COHESIV method [37] in object segmentation. Second, we show that distance in MOVES feature space is strongly predictive of two pixels being part of the same object, as well as a Box2Seg task where MOVES features are trivially analyzed to upgrade bounding-box annotations to segments. We show that Box2Seg shows strong performance on both objects that are currently being held as well as objects that are not held (unlike past work). In particular, compared to COHESIV, we show a strong gain on segmenting held objects (8.9→44.2 mIoU) as well as non-held objects (7.5→45.0 mIoU). Finally, we show that we can train an instance segmentation model [26] on the Box2Seg annotations and get good models for rough instance segmentation.
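MOVES derives its grouping pseudo-labels by pairing optical flow with epipolar geometry: flow correspondences that violate the epipolar constraint of the camera motion are likely to lie on independently moving (e.g., manipulated) surfaces. The snippet below is a hedged illustration of that test using the Sampson distance; the fundamental matrix F is assumed to be estimated elsewhere (e.g., by RANSAC on flow matches), and the threshold is arbitrary.

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """x1, x2: (N, 2) matched pixel coords between two frames; F: (3, 3) fundamental matrix."""
    x1h = np.concatenate([x1, np.ones((len(x1), 1))], axis=1)
    x2h = np.concatenate([x2, np.ones((len(x2), 1))], axis=1)
    Fx1 = x1h @ F.T                                   # epipolar lines in image 2
    Ftx2 = x2h @ F
    num = np.square(np.sum(x2h * Fx1, axis=1))
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    return num / np.maximum(den, 1e-9)

def moving_pixel_pseudo_labels(F, pts, pts_plus_flow, thresh=2.0):
    """Pixels whose flow violates the epipolar constraint are pseudo-labeled as
    independently moving; the rest are treated as consistent with camera motion."""
    d = sampson_distance(F, pts, pts_plus_flow)
    return d > thresh          # boolean pseudo-label per sampled pixel
```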
Cai_NeuDA_Neural_Deformable_Anchor_for_High-Fidelity_Implicit_Surface_Reconstruction_CVPR_2023
Abstract This paper studies implicit surface reconstruction leveraging differentiable ray casting. Previous works such as IDR [34] and NeuS [27] overlook the spatial context in 3D space when predicting and rendering the surface, thereby may fail to capture sharp local topologies such as small holes and structures. To mitigate the limitation, we propose a flexible neural implicit representation leveraging hierarchical voxel grids, namely Neural Deformable Anchor (NeuDA), for high-fidelity surface reconstruction. NeuDA maintains the hierarchical anchor grids where each vertex stores a 3D position (or anchor) instead of the direct embedding (or feature). We optimize the anchor grids such that different local geometry structures can be adaptively encoded. Besides, we dig into the frequency encoding strategies and introduce a simple hierarchical positional encoding method for the hierarchical anchor structure to flexibly exploit the properties of high-frequency and low-frequency geometry and appearance. Experiments on both the DTU [8] and BlendedMVS [32] datasets demonstrate that NeuDA can produce promising mesh surfaces.
1. Introduction 3D surface reconstruction from multi-view images is one of the fundamental problems of the community. Typical Multi-view Stereo (MVS) approaches perform cross-view feature matching, depth fusion, and surface reconstruction (e.g., Poisson Surface Reconstruction) to obtain triangle meshes [9]. Some methods have exploited the possibility of training end-to-end deep MVS models or employing deep networks to improve the accuracy of sub-tasks of the MVS pipeline. Recent advances show that neural implicit functions are promising to represent scene geometry and appearance [12, 14–16, 18–21, 27, 28, 33, 34, 37]. For example, several works [6, 27, 30, 34] define the implicit surface as a zero-level set and have captured impressive topologies. Their neural implicit models are trained in a self-supervised manner by rendering faithful 2D appearance of geometry leveraging differentiable rendering. However, the surface prediction and rendering formulations of these approaches have not explored the spatial context in 3D space. As a result, they may struggle to recover fine-grained geometry in some local spaces, such as boundaries, holes, and other small structures (see Fig. 1).

*Corresponding author.
Figure 1. We show the surface reconstruction results produced by NeuDA and the two baseline methods, including NeuS [27] and Instant-NeuS [17, 27]. Instant-NeuS is the reproduced NeuS leveraging the multi-resolution hash encoding technique [17]. We can see NeuDA can promisingly preserve more surface details. Please refer to Figure 5 for more qualitative comparisons.

A straightforward solution is to query scene properties of a sampled 3D point by fusing its nearby features. For example, we can represent scenes as neural voxel fields [3, 13, 22, 24, 25] where the embedding (or feature) at each vertex of the voxel encodes the geometry and appearance context. Given a target point, we are able to aggregate the features of the surrounding eight vertices.

Figure 2. We elaborate on the main differences between the hierarchical deformable anchors representation and some baseline variants. From left to right: (1) Methods such as NeuS [27], volSDF [33], and UNISURF [19] sample points along a single ray; (2, 3) Standard voxel grid approaches store a learnable embedding (or feature) at each vertex. Spatial context could be simply handled through the feature aggregation operation. The multi-resolution (or hierarchical) voxel grid representation can further explore different receptive fields; (4) Our method maintains a 3D position (or anchor point) instead of a feature vector at each vertex. We optimize the anchor points such that different geometry structures can be adaptively represented.

As the scope of neighboring information is limited by the resolution of grids, multi-level (or hierarchical) voxel grids have been adopted to study different receptive fields [17, 21, 25, 28, 30, 36]. These approaches do obtain sharper surface details compared to baselines for most cases, but still cannot capture detailed regions well. A possible reason is that the geometry features held by the voxel grids are uniformly distributed around 3D surfaces, while small structures have complicated topologies and may need more flexible representations.
Contributions: Motivated by the above analysis, we introduce Neural Deformable Anchor (NeuDA), a new neural implicit representation for high-fidelity surface reconstruction leveraging multi-level voxel grids. Specifically, we store the 3D position, namely the anchor point, instead of the regular embedding (or feature) at each vertex. The input feature for a query point is obtained by directly interpolating the frequency embedding of its eight adjacent anchors. The anchor points are optimized through backpropagation, thus showing flexibility in modeling different fine-grained geometric structures. Moreover, drawing inspiration that high-frequency geometry and texture are likely encoded by the finest grid level, we present a simple yet effective hierarchical positional encoding policy that adopts a higher frequency band for a finer grid level. Experiments on DTU [8] and BlendedMVS [32] show that NeuDA is superior in recovering high-quality geometry with fine-grained details in comparison with baselines and SOTA methods. It is worth mentioning that NeuDA employs a shallower MLP (4 vs. 8 for NeuS and volSDF) to achieve better surface reconstruction performance due to the promising scene representation capability of the hierarchical deformable anchor structure.

2. Related Work

Neural Implicit Surface Reconstruction Recently, neural surface reconstruction has emerged as a promising alternative to traditional 3D reconstruction methods due to its high reconstruction quality and its potential to recover fine details. NeRF [16] proposes a new avenue combining neural implicit representation with volume rendering to achieve high-quality rendering results. The surface extracted from NeRF often contains conspicuous noise; thus, its recovered geometry is far from satisfactory. To obtain an accurate scene surface, DVR [18], IDR [34], and NLR [10] have been proposed to use accurate object masks to promote reconstruction quality. Furthermore, NeuS [27], UNISURF [19], and volSDF [33] learn an implicit surface via volume rendering without the need for masks and shrink the sample region of volume rendering to refine the reconstruction quality. Nevertheless, the above approaches extract geometry features from a single point along a casting ray, which may hinder the neighboring information sharing across sampled points around the surface. The quality of the reconstructed surface depends heavily on the capacity of the MLP network to induce spatial relationships between neighboring points. Thereby, NeuS [27], IDR [34], and volSDF [33] adopt deep MLP networks and still struggle with fitting smooth surfaces and details. It is worth mentioning that Mip-NeRF [1] brings the neighboring information into the rendering procedure by tracing an anti-aliased conical frustum instead of a ray through each pixel. But it is difficult to apply this integrated positional encoding to surface reconstruction since this encoding relies on the radius of the casting cone.

Neural Explicit Representation The neural explicit representation that integrates traditional 3D representation methods, e.g. voxels [13, 24, 35] and point clouds [31], has made great breakthroughs in recent years. This explicit representation makes it easier to inject the neighborhood information into the geometry feature during model optimization. DVGO [24] and Plenoxels [22] represent the scene as a voxel grid, and compute the opacity and color of each sampled point via trilinear interpolation of the neighboring voxels.
Voxurf [30] further extends this single-level voxel feature to a hierarchical geometry feature by concatenating the neighboring features stored in voxel grids from different levels. Instant-NGP [17] and MonoSDF [36] use multi-resolution hash encoding to achieve fast convergence and capture high-frequency and local details, but they might suffer from hash collision due to its compact representation. Both of these methods leverage a multi-level grid scheme to enlarge the receptive field of the voxel grid and encourage more information sharing among neighboring voxels. Although the voxel-based methods have further improved the details of surface geometry, they may be suboptimal in that the geometry features held by the voxel grids are uniformly distributed around 3D surfaces, while small structures have complicated topologies and may need a more flexible representation.

Point-based methods [2, 11, 31] bypass this problem, since the point clouds, initially estimated from COLMAP [23], are naturally distributed on the 3D surface with complicated structures. Point-NeRF [31] proposes to model a point-based radiance field, which uses an MLP network to aggregate the neural points in its neighborhood to regress the volume density and view-dependent radiance at that location. However, the point-based methods are also limited in practical application, since their reconstruction performance depends on the initially estimated point clouds that often have holes and outliers.

3. Method

Our primary goal is to flexibly exploit spatial context around the object surfaces to recover more fine-grained topologies and, as a result, boost the reconstruction quality. This section begins with a brief review of NeuS [27], which is our main baseline, in Sec. 3.1. Then, we explain the deformable anchor technique in Sec. 3.2, and present the hierarchical pos
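The record above is cut off mid-sentence, but the deformable-anchor idea it describes — each grid vertex stores a learnable 3D anchor whose frequency embedding is trilinearly interpolated for a query point — can be sketched roughly as follows. This is a hypothetical single-level simplification (no hierarchy and no SDF MLP); the resolution and number of frequency bands are arbitrary choices.

```python
import math
import torch
import torch.nn as nn

def freq_encode(x, num_bands):
    """Sinusoidal positional encoding of 3D points: (..., 3) -> (..., 6 * num_bands)."""
    feats = []
    for k in range(num_bands):
        feats += [torch.sin((2 ** k) * math.pi * x), torch.cos((2 ** k) * math.pi * x)]
    return torch.cat(feats, dim=-1)

class DeformableAnchorGrid(nn.Module):
    """One grid level: every vertex stores a learnable 3D anchor (initialized at the
    vertex position); a query point interpolates the encodings of its 8 cell corners."""
    def __init__(self, res=32, num_bands=4):
        super().__init__()
        lin = torch.linspace(0.0, 1.0, res)
        grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1)
        self.anchors = nn.Parameter(grid.clone())    # (res, res, res, 3), optimized
        self.res, self.num_bands = res, num_bands

    def forward(self, pts):                          # pts: (N, 3) in [0, 1]
        g = pts * (self.res - 1)
        i0 = g.floor().long().clamp(0, self.res - 2)
        w = g - i0.float()                           # trilinear weights, (N, 3)
        feat = 0.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    idx = i0 + torch.tensor([dx, dy, dz])
                    a = self.anchors[idx[:, 0], idx[:, 1], idx[:, 2]]  # (N, 3) anchors
                    wgt = ((w[:, 0] if dx else 1 - w[:, 0]) *
                           (w[:, 1] if dy else 1 - w[:, 1]) *
                           (w[:, 2] if dz else 1 - w[:, 2]))
                    feat = feat + wgt[:, None] * freq_encode(a, self.num_bands)
        return feat                                  # (N, 6 * num_bands), fed to an SDF MLP

grid = DeformableAnchorGrid()
features = grid(torch.rand(1024, 3))
```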
Bhalgat_A_Light_Touch_Approach_to_Teaching_Transformers_Multi-View_Geometry_CVPR_2023
Abstract Transformers are powerful visual learners, in large part due to their conspicuous lack of manually-specified priors. This flexibility can be problematic in tasks that involve multiple-view geometry, due to the near-infinite possible variations in 3D shapes and viewpoints (requiring flexibility), and the precise nature of projective geometry (obeying rigid laws). To resolve this conundrum, we propose a "light touch" approach, guiding visual Transformers to learn multiple-view geometry but allowing them to break free when needed. We achieve this by using epipolar lines to guide the Transformer's cross-attention maps during training, penalizing attention values outside the epipolar lines and encouraging higher attention along these lines since they contain geometrically plausible matches. Unlike previous methods, our proposal does not require any camera pose information at test-time. We focus on pose-invariant object instance retrieval, where standard Transformer networks struggle, due to the large differences in viewpoint between query and retrieved images. Experimentally, our method outperforms state-of-the-art approaches at object retrieval, without needing pose information at test-time.
1. Introduction Recent advances in computer vision have been characterized by using increasingly generic models fitted with large amounts of data, with attention-based models (e.g. Transformers) at one extreme [12, 13, 20, 24, 34, 41]. There are many such recent examples, where shedding priors in favour of learning from more data has proven to be a successful strategy, from image classification [1, 13, 20, 29, 90], action recognition [7, 23, 27, 50, 58], to text-image matching [36, 45, 62, 71] and 3D recognition [40, 91]. One area where this strategy has proven more difficult to apply is solving tasks that involve reasoning about multiple-view geometry, such as object retrieval – i.e. finding all instances of an object in a database given a single query image. This has applications in image search [37, 39, 51, 82, 92], including identifying landmarks from images [53, 61, 85], recognizing artworks in images [80], retrieving relevant product images in e-commerce databases [14, 55] or retrieving specific objects from a scene [3, 38, 46, 60].

Figure 1. Top-4 retrieved images with (1) global retrieval (left column), (2) Reranking Transformer (RRT) [74] (middle), and (3) RRT trained with our proposed Epipolar Loss (right column). Correct retrievals are green, incorrect ones are red. The Epipolar Loss imbues RRT with an implicit geometric understanding, allowing it to match images from extremely diverse viewpoints.

The main challenges in object retrieval include overcoming variations in viewpoint and scale. The difficulty in viewpoint-invariant object retrieval can be partially explained by the fact that it requires disambiguating similar objects by small differences in their unique details, which can have a smaller impact on an image than a large variation in viewpoint. For this reason, several works have emphasized geometric priors in deep networks that deal with multiple-view geometry [22, 88]. It is natural to ask whether these priors are too restrictive, and harm a network's ability to model the data when it deviates from the geometric assumptions. As a step in this direction, we explore how to "guide" attention-based networks with soft guardrails that encourage them to respect multi-view geometry, without constraining them with any rigid mechanism to do so.

In this work, we focus on post-retrieval reranking methods, wherein an initial ranking is obtained using global (image-level) representations and then local (region- or patch-level) representations are used to rerank the top-ranked images either with the classic Geometric Verification [57], or by directly predicting similarity scores of image pairs using a trained deep network [31, 74]. Reranking can be easily combined with any other retrieval method while significantly boosting the precision of the underlying retrieval algorithm. Recently, PatchNetVLAD [31], DELG [11], and Reranking Transformers [74] have shown that learned reranking can achieve state-of-the-art performance on object retrieval. We show that the performance of such reranking methods can be further improved by implicitly inducing geometric knowledge, specifically the epipolar relations between two images arising from relative pose, into the underlying image similarity computation.
This raises the question of whether multiple view relations should be incorporated into the two-view architecture explicitly rather than implicitly. In the explicit case, the epipolar relations between the two images are supplied as inputs. For example, this is the approach taken in the Epipolar Transformers architecture [33] where candidate correspondences are explicitly sampled along the epipolar line, and in [88] where pixels are tagged with their epipolar planes using a Perceiver IO architecture [34]. The disadvantage of the explicit approach is that epipolar geometry must be supplied at inference time, requiring a separate process for its computation, and being problematic when images are not of the same object (as the epipolar geometry is then not defined). In contrast, in the implicit approach the epipolar geometry is only required at training time and is applied as a loss to encourage the model to learn to (implicitly) take advantage of epipolar constraints when determining a match.

We bring the following three contributions in this work: First, we propose a simple but effective Epipolar Loss to induce epipolar constraints into the cross-attention layer(s) of transformer-based reranking models. We only need the relative pose (or epipolar geometry) information during training to provide the epipolar constraint. Once trained, the reranking model develops an implicit understanding of the relative geometry between any given image pair and can effectively match images containing an object instance from very diverse viewpoints without any additional input. Second, we set up an object retrieval benchmark on top of the CO3Dv2 [63] dataset which contains ground-truth camera poses and provide a comprehensive evaluation of the proposed method, including a comparison between implicit and explicit incorporation of epipolar constraints. The benchmark configuration is detailed in Sec. 4. Third, we evaluate on the Stanford Online Products [55] dataset using both zero-shot and fine-tuning, outperforming previous methods on this standard object instance retrieval benchmark.
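The Epipolar Loss described above supervises cross-attention maps with epipolar geometry that is available only at training time. A minimal, hypothetical form of such a penalty — attention mass falling far from each query's epipolar line — is sketched below; the margin, the patch-center parameterization, and the omission of the "encourage attention on the line" term are simplifications, not the paper's exact loss.

```python
import torch

def epipolar_attention_loss(attn, query_xy, key_xy, F, margin=2.0):
    """attn: (Q, K) cross-attention weights from query patches (image 1) to key patches
    (image 2); query_xy: (Q, 2) and key_xy: (K, 2) patch-center pixel coordinates;
    F: (3, 3) fundamental matrix, available at training time only."""
    q = torch.cat([query_xy, torch.ones(len(query_xy), 1)], dim=1)   # (Q, 3)
    k = torch.cat([key_xy, torch.ones(len(key_xy), 1)], dim=1)       # (K, 3)
    lines = q @ F.t()                                                # epipolar lines in image 2
    # Point-to-line distance of every key patch from every query's epipolar line.
    num = (lines @ k.t()).abs()                                      # (Q, K)
    den = lines[:, :2].norm(dim=1, keepdim=True).clamp_min(1e-9)
    dist = num / den
    off_line = (dist > margin).float()
    # Penalize attention mass that lands far away from the epipolar line.
    return (attn * off_line).sum(dim=1).mean()
```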
Gupta_Class_Prototypes_Based_Contrastive_Learning_for_Classifying_Multi-Label_and_Fine-Grained_CVPR_2023
Abstract The recent growth in the consumption of online media by children during early childhood necessitates data-driven tools enabling educators to filter out appropriate educational content for young learners. This paper presents an approach for detecting educational content in online videos. We focus on two widely used educational content classes: literacy and math. For each class, we choose prominent codes (sub-classes) based on the Common Core Standards. For example, literacy codes include ‘letter names’, ‘letter sounds’, and math codes include ‘counting’, ‘sorting’. We pose this as a fine-grained multilabel classification problem as videos can contain multiple types of educational content and the content classes can be visually similar (e.g., ‘letter names’ vs ‘letter sounds’). We propose a novel class prototypes based supervised contrastive learning approach that can handle fine-grained samples associated with multiple labels. We learn a class prototype for each class and a loss function is employed to minimize the distances between a class prototype and the samples from the class. Similarly, distances between a class prototype and the samples from other classes are maximized. As the alignment between visual and audio cues is crucial for effective comprehension, we consider a multimodal transformer network to capture the interaction between visual and audio cues in videos while learning the embedding for videos. For evaluation, we present a dataset, APPROVE, employing educational videos from YouTube labeled with fine-grained education classes by education researchers. APPROVE consists of 193 hours of expert-annotated videos with 19 classes. The proposed approach outperforms strong baselines on APPROVE and other benchmarks such as YouTube-8M and COIN. The dataset is available at https://nusci.csl.sri.com/project/APPROVE .

*Work partly done during an internship at SRI International.

1. Introduction With the expansion of internet access and the ubiquitous availability of smart devices, children increasingly spend a significant amount of time watching online videos. A recent nationally representative survey reported that 89% of parents of children aged 11 or younger say their child watches videos on YouTube [4]. Moreover, it is estimated that young children in the age range of two to four years consume 2.5 hours and five to eight years consume 3.0 hours per day on average [45, 46]. Childhood is typically a key period for education, especially for learning basic skills such as literacy and math [20, 25]. Unlike generic online videos, watching appropriate educational videos supports healthy child development and learning [7, 22, 23]. Thus, analyzing the content of these videos may help parents, teachers, and media developers increase young children's exposure to high-quality education videos, which has been shown to produce meaningful learning gains [22]. As the amount of online content produced grows exponentially, automated content understanding methods are essential to facilitate this.

In this work, given a video, our goal is to determine whether the video contains any educational content and characterize the content. Detecting educational content requires identifying multiple distinct types of content in a video while distinguishing between similar content types.
The task is challenging as the education codes by Common Core Standards [3, 40] can be similar, such as ‘letter names’ and ‘letter sounds’, where the former focuses on the name of the letter and the latter is based on the phonetic sound of the letter. Also, understanding education content requires analyzing both visual and audio cues simultaneously, as both signals are to be present to ensure effective learning [3, 40]. This is in contrast to standard video classification benchmarks such as the sports or generic YouTube videos in UCF101 [54], Kinetics400 [53], and YouTube-8M [1], where visual cues are often sufficient to detect the different classes. Finally, unlike standard well-known action videos, education codes are more structured and not accessible to common users. Thus, it requires a carefully curated set of videos and expert annotations to create a dataset to enable a data-driven approach. In this work, we focus on two widely used educational content classes: literacy and math. For each class, we choose prominent codes (sub-classes) based on the Common Core Standards that outline age-appropriate learning standards [3, 40]. For example, literacy codes include ‘letter names’, ‘letter sounds’, ‘rhyming’, and math codes include ‘counting’, ‘addition subtraction’, ‘sorting’, ‘analyze shapes’.

We formulate the problem as a multilabel fine-grained video classification task as a video may contain multiple types of content that can be similar. We employ multimodal cues since, besides visual cues, audio cues provide important cues to distinguish between similar types of educational content. We propose a class prototypes based supervised contrastive learning approach to address the above-mentioned challenges. We learn a prototype embedding for each class. Then a loss function is employed to minimize the distance between a class prototype and the samples associated with the class label. Similarly, the distance between a class prototype and the samples without that class label is maximized. This is unlike the standard supervised contrastive learning setup where inter-class distance is maximized and intra-class distance is minimized by considering classwise positive and negative samples. This approach is shown to be effective for single-label setups [26]. However, it is not straightforward to extend this for the proposed multilabel setup as samples cannot be identified as positive or negative due to the multiple labels. We jointly learn the embedding of the class prototypes and the samples. The embeddings are learned by a multimodal transformer network (MTN) that captures the interaction between visual and audio cues in videos. We employ automatic speech recognition (ASR) to transcribe text from the audio. The MTN consists of video and text encoders that learn modality-specific embeddings, and a cross-attention mechanism is employed to capture the interaction between them. The MTN is learned end-to-end through the contrastive loss.

Due to the lack of suitable datasets for evaluating fine-grained classification of education videos, we propose a new dataset, called APPROVE, of curated YouTube videos annotated with educational content. We follow Common Core Standards [3, 40] to select education content suitable for the kindergarten level. We consider two high-level classes of educational content: literacy and math. For each of these content classes, we select a set of codes.
We consider two high-level classes of educational content: literacy and math. For each of these content classes, we select a set of codes. For the lit-eracy class, we select 7 codes and for the math class, we se-lect 11 codes. Each video is associated with multiple labels corresponding to these codes. The videos are annotated by trained education researchers following standard validation protocol [41] to ensure correctness. APPROVE also con-sists of carefully chosen background videos, i.e., without educational content, that are visually similar to the videos with educational content. APPROVE consists of 193 hours of expert-annotated videos with 19 classes (7 literacy codes, 11 math codes, and a background) where each video has 3 labels on average. Our contributions can be summarized as follows: • APPROVE, a fine-grained multi-label dataset of edu-cation videos, to promote exploration in this field. • Class prototypes based contrastive learning frame-work along with a multi-modal fusion transformer suit-able for the problem where videos have multiple fine-grained labels. • Outperforming relevant baselines on three datasets: APPROVE, YouTube-8M [1] and COIN [55]. 2. Related Works Self-Supervised Contastive Learning (CL) has been an effective paradigm for visual representation learning. Meth-ods such as SimCLR [8], MoCo (Momentum Contrastive learning) [18], Augmented Multiscale Deep InfoMax (AMDIM) [5], Contrastive Predictive Coding (CPC) [39], MoCov2 [10], MoCov3 [12] and SimCLRv2 [9] have achieved strong performance on image classification bench-marks. The shared property between these CL frameworks is that data augmentation is used to generate positive pairs for CL from a single instance, where other data instances are treated as negatives. Prototypical Contrastive Learn-ing (PCL) [30] extends self-supervised contrastive learn-ing with the idea of clustering data representations during training to generate unsupervised prototypes which repre-sent intra-class variation . We utilize class prototypes in-stead in the supervised setting, to learn fine-grained distinc-tions between classes . Supervised CL methods such as SupCon [26] utilize la-bels to enhance contrastive learning by forming positive and negative pairs using labels instead of data augmenta-tion. Supervised Contrastive Learning has also been used for other tasks such as image segmentation [57] and clas-sification in the presence of noisy labels [32]. Hierarchical CL [63] extends SupCon to the hierarchical classification case. However, SupCon cannot be extended to the multi-label case
in a straightforward manner, as pairs of data samples with multiple labels cannot be clearly classified just into positives and negatives.

Weakly-Supervised Multi-Modal CL: Weakly aligned text-image/video datasets scraped from the web such as Conceptual Captions [50] and WebVid-10M [6] enable learning of multi-modal representations. CLIP [42] applies a cross-modal contrastive loss to train individual text and image encoders. Everything at Once [51] is able to additionally utilize the audio modality and incorporates a pairwise fusion encoder which encodes pairs of modalities; as a result, 6 forward passes of the fusion model are required for 3 modalities. Frozen in Time [6] is able to utilize both image-text and video-text datasets through the use of a Space-Time Transformer Visual Encoder. Visual Conditioned GPT [37] uses a single cross-attention fusion layer to combine pre-trained CLIP text and visual features. Flamingo [2] adds cross-attention layers interleaved with language decoder layers to fuse visual information into text generation. MERLOT [60, 61] and Triple Contrastive Learning [58] combine contrastive learning and generative language modeling to learn aligned text-image representations.

Figure 1. Sample video frames from the APPROVE dataset. Videos belong to the (a) literacy classes, (b) math classes, and (c) background. Background videos do not contain educational content but share visual similarities with educational videos. The videos are labeled with fine-grained sub-classes, e.g., letter names vs letter sounds.

Supervised Multi-Modal Learning: Supervised Multi-Modal Learning typically relies on crowd-captioned datasets such as Flickr30k [59] and MS-COCO Captions [11]. Some prior works such as OSCAR [33] and VinVL [62] have utilized pre-trained object detectors and multi-modal transformers to learn image captioning using supervised aligned datasets. BLIP [28] takes a hybrid approach where it bootstraps an image captioner using a labeled dataset and uses it to generate captions for web images. This generated corpus is then filtered and used for learning an aligned representation. ALign BEfore Fuse [29] highlights the importance of aligning text and image tokens before fusing them using a multi-modal transformer.

In this paper, we focus on the fine-grained classification of multilabel educational videos. Due to the lack of suitable datasets, we propose a new dataset, APPROVE, which is described next.

3. APPROVE Dataset

We propose a dataset, called APPROVE, of curated YouTube videos annotated with educational content. APPROVE consists of 193 hours of expert-annotated videos with 19 classes (7 literacy codes, 11 math, and background) and each video is associated with approximately 3 labels on average. We follow the Common Core Standards [3, 40] to select education content suitable for the kindergarten level. The Common Core Standards outline what students are expected to know and do at various age ranges and grades. This is a widely accepted standard followed by a range of educators. We consider two high-level classes of educational content: literacy and math. For each of these content classes, we select a set of codes.
For the literacy class, we select 7 codes including letter names, letter sounds, follow words, sight words, letters in words, sounds in words, and rhyming. For the math class, we select 11 codes including counting, individual number, comparing groups, addition subtraction, measurable attributes, sorting, spatial language, shape identification, building drawing, analyzing and comparing shapes. More details about the standard and the description of the codes are provided in the supplementary material. APPROVE also consists of carefully chosen background videos, i.e., without educational content, that are visually similar to the videos with educational content. We present frames corresponding to these classes in Fig. 1.

Figure 2. Frequency of the classes in APPROVE. Math codes are in Orange and literacy codes in Blue.

To ensure the quality and correctness of the annotations, we consider educational researchers to annotate the videos and follow a standard validation protocol [41]. Each annotator is trained by an expert and annotations on a selected set are examined before engaging the annotator for the final annotation. Annotators start once they reach more than 90% agreement with the expert. Further, we estimate inter-annotator consistency to filter out anomalies. Details about the validation process are provided in the supplementary material. It takes a month to train an education researcher to match expert-level coding accuracy. On average, it takes the trained annotators 1 min to annotate 1 min of video.

Figure 3. Distribution of the number of labels per video.

The videos are curated from YouTube and are annotated by the trained annotators to determine educational content in them. Each video can have multiple class labels that are quite similar, making the task a multi-label and fine-grained classification problem. For example, ‘letter names’ and ‘letter sounds’, where visual letters are shown in both but in ‘letter sounds’, the phonetic sound of the letter is emphasized (Fig. 1 (a)). Similarly, in both ‘build and draw shapes’ and ‘analyzing and comparing shapes’, multiple shapes can appear but the latter focuses on comparing multiple shapes by shape and size (Fig. 1 (b)). Class-wise stats are presented in Fig. 2. Note that the task is different from common video classification setups where either multi-label or fine-grained aspects are dealt with separately. Single-label datasets such as HMDB51 [27], UCF101 [54], Kinetics700 [53] and multi-label ones such as Charades [52] are widely used benchmarks for this problem. YouTube-Birds and YouTube-Cars [64] are analogous datasets for object recognition from videos. Multi-Sports [34] and FineGym [49] label fine-grained action classes for sports. HVU [15] also adds scenes and attributes annotations along with action and objects. However, action, object and scene recognition are not enough for fine-grained video understanding. For instance, videos from a given education provider might share similar objects (person, chalkboard, etc.) and actions (writing on chalkboard) while covering different topics (counting, shape recognition, etc.) in each video.

4. Proposed Approach

In this section, we first describe the proposed class prototypes based contrastive learning framework suitable for videos containing multiple educational codes. Then we present the approach to learning the class prototypes and finally describe the multimodal transformer network that learns features by fusing visual and text cues from videos. 4.1.
4.1. Class prototypes based contrastive learning
Table 1. APPROVE dataset compared with selected prior datasets. V→Video Frames, A→Audio, T→Text, F→Features only.
Dataset   | Size (in hr) | Multi-Label | Fine-Grained | Type  | Annotators
HMDB      | 5            | ✘           | ✘            | V     | Authors
UCF       | 27           | ✘           | ✘            | V     | Authors
Kinetics  | 800          | ✘           | ✘            | V+A   | Crowd
COIN      | 476          | ✘           | ✘            | V+A   | Crowd
YT-8M     | -            | ✔           | ✘            | F     | Machine
APPROVE   | 193          | ✔           | ✔            | V+T+A | Experts
(HMDB, UCF, and Kinetics are action recognition datasets; COIN, YT-8M, and APPROVE are video classification datasets.)
In a contrastive learning framework, feature representations are typically learned by simultaneously minimizing the distance between positive samples and maximizing the distance between negative samples (see Figure 4 (a)). The positive and negative samples are determined with respect to an anchor sample, usually based on the class labels. For example, supervised contrastive learning (SupCon) [26] learns a representation to minimize the intra-class distances and maximize inter-class distances. We denote x_i and y_i as the i-th sample and its label, respectively. Let us define z_i as the representation of the i-th sample in a batch A, and sim(z_i, z_j) = (z_i · z_j) / (|z_i| |z_j|) the cosine similarity; then the SupCon loss [26] is defined as:
L_SupCon = \sum_{i \in A} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(\mathrm{sim}(z_i, z_p)/\tau)}{\sum_{a \in A \setminus i} \exp(\mathrm{sim}(z_i, z_a)/\tau)},   (1)
where P(i) is the set of positive samples, i.e., those with the same label as z_i, in the batch excluding i, and a ∈ A\i indexes all samples in the batch excluding the i-th sample. τ is a scalar temperature parameter used for scaling similarity values. The positive pairs are grouped into the numerator, so minimizing the loss minimizes their distance in the learned representation, and vice versa for negative pairs. SupCon is known to be effective for classifying samples with a single label. However, it is not straightforward to extend this to the multilabel setup: beyond positive samples, where all labels are the same, and negative samples, where none of the labels is the same, there can be a third scenario where labels are partially overlapping. Though SupCon has been extended to hierarchical classification [63], it cannot be directly extended to the true multi-label case. To address this issue, we learn class prototypes as the representative for each class and consider these as anchors while determining positive and negative samples. Specifically, for a specific class prototype, a representation is learned to minimize distances between the prototype and samples with this class label and maximize the distances between the prototype and samples without this class label. We compare
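To make Eq. (1) and the prototype-anchored extension described above concrete, the snippet below gives a minimal PyTorch sketch of both losses. The tensor shapes, the temperature value, and the function names are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, tau=0.1):
    """Eq. (1): z is (B, D) embeddings, labels is (B,) integer class ids."""
    z = F.normalize(z, dim=1)                       # dot products become cosine similarities
    sim = z @ z.t() / tau
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude i from the denominator
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = ((labels[:, None] == labels[None, :]) & ~self_mask).float()
    return -((log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)).mean()

def prototype_supcon_loss(z, multi_hot, prototypes, tau=0.1):
    """Prototype-anchored variant for multi-label samples: multi_hot is (B, C) in {0, 1},
    prototypes is (C, D) learnable class prototypes used as anchors."""
    z, p = F.normalize(z, dim=1), F.normalize(prototypes, dim=1)
    sim = p @ z.t() / tau                            # (C, B) prototype-to-sample similarities
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = multi_hot.t().float()                      # samples that carry each class label
    return -((log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)).mean()
```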
Cai_Source-Free_Adaptive_Gaze_Estimation_by_Uncertainty_Reduction_CVPR_2023
Abstract
Gaze estimation across domains has been explored recently because the training data are usually collected under controlled conditions while the trained gaze estimators are used in natural and diverse environments. However, due to privacy and efficiency concerns, simultaneous access to annotated source data and to-be-predicted target data can be challenging. In light of this, we present an unsupervised source-free domain adaptation approach for gaze estimation, which adapts a source-trained gaze estimator to unlabeled target domains without source data. We propose the Uncertainty Reduction Gaze Adaptation (UnReGA) framework, which achieves adaptation by reducing both sample and model uncertainty. Sample uncertainty is mitigated by enhancing image quality and making the images gaze-estimation-friendly, whereas model uncertainty is reduced by minimizing prediction variance on the same inputs. Extensive experiments are conducted on six cross-domain tasks, demonstrating the effectiveness of UnReGA and its components. Results show that UnReGA outperforms other state-of-the-art cross-domain gaze estimation methods under both protocols, with and without source data. The code is available at https://github.com/caixin1998/UnReGA .
1. Introduction
Gaze encodes rich information about the attention and psychological factors of an individual. Techniques that use eye tracking to infer human intentions and understand human emotions have found increasingly wide utilization in fields including human-computer interaction [20, 35, 36], affective computing [11], and medical diagnosis [21, 46]. The most prevalent way to estimate human gaze is using commercial eye trackers, which suffer from high cost or custom invasive hardware. To overcome the limitations on devices and environments, researchers have made great progress on appearance-based gaze estimation methods with the development of deep learning [4, 6, 12, 56, 57].
*This work is partially supported by the National Key R&D Program of China (No. 2018AAA0102405) and the National Natural Science Foundation of China (No. 62176248).
Figure 1. (a) The source-trained model shows high uncertainty on samples from different domains. (b) Statistics of errors and model uncertainty by the same gaze estimator on different samples. The error increases as the uncertainty grows. (c) To accomplish unsupervised source-free domain adaptation, UnReGA reduces the sample uncertainty by enhancing the input images and reduces the model uncertainty by minimizing the prediction variance.
Notwithstanding the achievements, the appearance-based gaze estimators meet the most challenging problem that their performance drops significantly when they are trained and tested on different domains, e.g., domains with different subjects, image quality, background environments, or illuminations. Usually, gaze estimators are trained on data collected under controlled conditions where the true gaze is feasible to measure and record with the deployed devices. Then, these gaze estimators are applied in much different and uncontrolled environments. To adapt the source-data-trained model to the target data, researchers have proposed methods to narrow the gap between the different domains [16, 34, 42, 45]. Most of the methods require data from both the source and target domains during the adaptation. However, in the application of gaze estimation, the source data is likely to be neither available nor efficient to use during the adaptation. First, most gaze models are trained with face images, which might not be accessible due to privacy or bandwidth issues. Secondly, processing source data might not be computationally practical in real-time gaze estimation on the target domain. Therefore, we formulate gaze estimation as an unsupervised source-free domain adaptation problem, where we cannot access the source data when fitting the model to the target.
To address the source-free domain adaptation issue, we propose to adapt the source-trained gaze estimators to the target domain by reducing both the sample uncertainty and model uncertainty on the unlabeled target data. Sample uncertainty captures noise inherent in the input images, such as sensor noise and motion blur, which is also referred to as aleatoric uncertainty [24]. Model uncertainty is determined by the inconsistency of predictions under model perturbations, which is also referred to as epistemic uncertainty [15, 24]. We formulate it as the variance of different estimators' predictions on the same sample. We assume that reducing the two uncertainties helps to reduce the gaze estimator's errors across different domains due to three observations: 1) Estimators show high model uncertainty on samples that are distributed far away from the training data and show low uncertainty on the nearby samples [24, 28]. As shown in Fig. 1(a), the ETH-XGaze-trained estimator has average model uncertainties of 0.66, 0.98, and 1.21 on the samples from ETH-XGaze [53], MPIIGaze [57], and EyeDiap [14], respectively. EyeDiap has the most different distribution from ETH-XGaze and shows the highest model uncertainty. 2) Reducing the sample uncertainty pulls together the source and target data, and accordingly reduces the estimator's model uncertainty on target data. In Fig. 1(a), the model uncertainties on MPIIGaze/EyeDiap decrease when we reduce the sample uncertainty by image enhancement, because by doing this, we reduce the image quality discrepancy between MPIIGaze/EyeDiap and ETH-XGaze. 3) Model uncertainty empirically shows a positive correlation with gaze estimation error in cross-domain scenarios. Fig. 1(b) plots how the errors change with model uncertainty. We train 10 gaze estimators on ETH-XGaze and then, for each sample in MPIIGaze, we compute the model uncertainty and the mean error of the estimators' predictions. We sort the samples by the model uncertainty in ascending order and group them by every 10th percentile. The height of each bar in Fig. 1(b) denotes the averaged mean error over the samples within each group. As can be seen, the top 10 percent of the model uncertainty corresponds to the smallest error. To this end, we propose an Uncertainty Reduction Gaze Adaptation (UnReGA) framework that accomplishes the source-free adaptation by minimizing both the sample and model uncertainty. As illustrated in Fig. 1(c), we first transfer the input images into a gaze-estimation-friendly domain by introducing a face enhancer to enhance input images without changing the gaze. Compared with low-quality images, high-quality images convey more details about the eyes and contribute to less sample uncertainty and better generalization ability of the source-trained gaze estimators. Next, we update an ensemble of source gaze estimators by minimizing the variance of their predictions on the unlabeled target data. Finally, we merge the updated estimators into a single model for inference. Our empirical experiments demonstrate that the updated estimator outperforms the non-adapted source estimator on the target domain. Our contributions are summarized as follows:
1. We formulate gaze estimation as an unsupervised source-free domain adaptation problem and propose an Uncertainty Reduction Gaze Adaptation (UnReGA) framework that adapts the trained model to the target domain without the source data by reducing both the sample uncertainty and model uncertainty.
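As a minimal sketch of how the two uncertainties could be computed and minimized on unlabeled target data, the PyTorch fragment below treats model uncertainty as the variance of an ensemble's predictions; the ensemble, the face enhancer, and the 2-D (pitch, yaw) output format are assumptions for illustration and not the exact UnReGA implementation.

```python
import torch

def model_uncertainty(estimators, images):
    """Variance of K gaze estimators' predictions on the same inputs: (K, B, 2) -> (B,)."""
    preds = torch.stack([f(images) for f in estimators])
    return preds.var(dim=0).mean(dim=1)

def adaptation_step(estimators, enhancer, images, optimizer):
    enhanced = enhancer(images)                     # reduce sample uncertainty first
    loss = model_uncertainty(estimators, enhanced).mean()
    optimizer.zero_grad()
    loss.backward()                                 # update the ensemble on unlabeled target data
    optimizer.step()
    return loss.item()
```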
Du_SuperDisco_Super-Class_Discovery_Improves_Visual_Recognition_for_the_Long-Tail_CVPR_2023
Abstract
Modern image classifiers perform well on populated classes, while degrading considerably on tail classes with only a few instances. Humans, by contrast, effortlessly handle the long-tailed recognition challenge, since they can learn the tail representation based on different levels of semantic abstraction, making the learned tail features more discriminative. This phenomenon motivated us to propose SuperDisco, an algorithm that discovers super-class representations for long-tailed recognition using a graph model. We learn to construct the super-class graph to guide the representation learning to deal with long-tailed distributions. Through message passing on the super-class graph, image representations are rectified and refined by attending to the most relevant entities based on the semantic similarity among their super-classes. Moreover, we propose to meta-learn the super-class graph under the supervision of a prototype graph constructed from a small amount of imbalanced data. By doing so, we obtain a more robust super-class graph that further improves the long-tailed recognition performance. Consistent state-of-the-art results in experiments on the long-tailed CIFAR-100, ImageNet, Places and iNaturalist demonstrate the benefit of the discovered super-class graph for dealing with long-tailed distributions.
1. Introduction
This paper strives for long-tailed visual recognition, a computer vision challenge that has received renewed attention in the context of representation learning, as real-world deployment demands moving from balanced to imbalanced scenarios. Three active strands of work involve class re-balancing [15, 22, 32, 43, 65], information augmentation [34, 51, 54] and module improvement [29, 31, 76]. Each of these strands is intuitive and has proven empirically successful. However, all these approaches seek to improve the classification performance in the original feature space. In this paper, we instead explore a graph learning algorithm to discover the imbalanced super-class space hidden in the original feature representation.
*Currently with United Imaging Healthcare, Co., Ltd., China.
Figure 1. SuperDisco learns to project the original class space (a) into a relatively balanced super-class space. Different color curves indicate the different imbalance factors on the long-tailed CIFAR-100 dataset. Like the 20 ground-truth super-classes (b), our discovered super-classes for 16 (c) or 32 (d) super-classes provide a much better balance than the original classes.
The fundamental problem in long-tailed recognition [18, 32, 44, 77] is that the head features and the tail features are indistinguishable. Since the head data dominate the feature distribution, they cause the tail features to fall within the head feature space. Nonetheless, humans effortlessly handle long-tailed recognition [2, 16] by leveraging semantic abstractions existing in language to gain better representations of tail objects. This intuition hints that we may discover the semantic hierarchy from the original feature space and use it for better representations of tail objects. Moreover, intermediate concepts have been shown advantageous for classification [5, 36] by allowing the transfer of shared features across classes. Nevertheless, it remains unexplored to exploit intermediate super-classes in long-tailed visual recognition that rectify and refine the original features. In the real world, each category has a corresponding super-class, e.g., bus, taxi, and train all belong to the vehicle super-class. This observation raises the question: are super-classes of categories also distributed along a long tail? We find empirical evidence that within the super-class space of popular datasets, the long-tailed distribution almost disappears, and each super-class has essentially the same number of samples. In Figure 1, we show the number of training samples for each of the original classes and their corresponding super-classes in the long-tailed CIFAR-100 dataset. We observe that the data imbalance of the super-classes is considerably lower than that of the original classes. This reflects the fact that the original imbalanced data hardly affect the degree of imbalance of the super-classes, which means the distribution of the super-classes and the original data is relatively independent. These balanced super-class features could be used to guide the original tail data away from the dominant role of the head data, thus making the tail data more discriminative.
Therefore, if the super-classes on different levels of semantic abstraction over the original classes can be accurately discovered, it will help the model generalize over the tail classes. As not all datasets provide labels for super-classes, we propose to learn to discover the super-classes in this paper. Inspired by the above observation, we make two algorithmic contributions in this paper. First, we propose in Section 3 an algorithm that learns to discover the super-class graph for long-tailed visual recognition, which we call SuperDisco. We construct a learnable graph that discovers the super-classes in a hierarchy of semantic abstraction to guide feature representation learning. By message passing on the super-class graph, the original features are rectified and refined, attending to the most relevant entities according to the similarity between the original image features and super-classes. Thus, the model is endowed with the ability to free the original tail features from the dominance of the head features using the discovered and relatively balanced super-class representations. Even when faced with severe class imbalance challenges, e.g., iNaturalist, our SuperDisco can still refine the original features by finding a more balanced super-class space using a more complex hierarchy. As a second contribution, we propose in Section 4 a meta-learning variant of our SuperDisco algorithm to discover the super-class graph, enabling the model to achieve even more balanced image representations. To do so, we use a small amount of balanced data to construct a prototype-based relational graph, which captures the underlying relationship behind samples and alleviates the potential effects of abnormal samples. Last, in Section 5 we report experiments on four long-tailed benchmarks: CIFAR-100-LT, ImageNet-LT, Places-LT, and iNaturalist, and verify that our discovered super-class graph performs better for tail data in each dataset. Before detailing our contributions, we first embed our proposal in related work.
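The following fragment is one way to picture the rectification step described above: image features attend over a small set of learnable super-class nodes and add back the aggregated message. The dimensionality, the number of super-classes, and the residual form are illustrative assumptions rather than SuperDisco's exact graph module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperClassRectifier(nn.Module):
    def __init__(self, dim=256, num_super=16):
        super().__init__()
        self.super_nodes = nn.Parameter(torch.randn(num_super, dim))  # learnable super-class graph nodes
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                 # x: (B, dim) image features
        attn = F.softmax(x @ self.super_nodes.t() / x.size(1) ** 0.5, dim=1)
        message = attn @ self.super_nodes                 # message passed from the most relevant super-classes
        return x + self.proj(message)                     # rectified and refined features

rectified = SuperClassRectifier()(torch.randn(8, 256))
```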
Bangunharcana_DualRefine_Self-Supervised_Depth_and_Pose_Estimation_Through_Iterative_Epipolar_Sampling_CVPR_2023
Abstract
Self-supervised multi-frame depth estimation achieves high accuracy by computing matching costs of pixel correspondences between adjacent frames, injecting geometric information into the network. These pixel-correspondence candidates are computed based on the relative pose estimates between the frames. Accurate pose predictions are essential for precise matching cost computation as they influence the epipolar geometry. Furthermore, improved depth estimates can, in turn, be used to align pose estimates. Inspired by traditional structure-from-motion (SfM) principles, we propose the DualRefine model, which tightly couples depth and pose estimation through a feedback loop. Our novel update pipeline uses a deep equilibrium model framework to iteratively refine depth estimates and a hidden state of feature maps by computing local matching costs based on epipolar geometry. Importantly, we used the refined depth estimates and feature maps to compute pose updates at each step. This update in the pose estimates slowly alters the epipolar geometry during the refinement process. Experimental results on the KITTI dataset demonstrate competitive depth prediction and odometry prediction performance surpassing published self-supervised baselines.¹
1. Introduction
The optimization of the coordinates of observed 3D points and camera poses forms the basis of structure-from-motion (SfM). Estimation of both lays the foundation for robotics [34, 35, 75], autonomous driving [20], and AR/VR applications [60]. Traditionally, however, SfM techniques are susceptible to errors in scenes with texture-less regions, dynamic objects, etc. This has motivated the development of deep learning models that can learn to predict depth from monocular images [14, 15, 18, 48, 50]. These models can accurately predict depth based solely on image cues, without requiring geometric information.
¹https://github.com/antabangun/DualRefine
Figure 1. (a) The estimated pose of a camera affects the epipolar geometry. (b) The epipolar line in the source image, calculated from yellow points in the target image, for the PoseNet-based [43] initial pose regression (red) and our refined pose (green). The yellow point in the source image is calculated based on our final depth and pose estimates.
In recent years, self-supervised training of depth and pose models has become an attractive method, as it alleviates the need for ground truth while demonstrating precision comparable to that of supervised counterparts [7, 19, 22, 23, 26, 28, 30, 61, 70, 74, 83, 87, 98, 106, 108]. Such an approach uses depth and pose predictions to synthesize neighboring images in a video sequence and enforce consistency between them. As the image sequence is also available at test time, recent self-supervised methods also study the use of multiple frames during inference [91]. These typically involve the construction of cost volumes from multiple views to compute pixel correspondences, bearing similarities to (multi-view) stereo models [4, 44, 77]. By incorporating multi-frame data, geometric information is integrated to make depth predictions, improving the performance as well as the robustness. In such a multi-frame matching-based model, the accuracy of the matching cost computation is essential. Recent work in DepthFormer [29] demonstrates its importance, as they designed a Transformer [84]-based module to improve matching costs and achieve state-of-the-art (SoTA) depth accuracy. However, their approach came with a large memory cost. Unlike stereo tasks, the aforementioned self-supervised multi-frame models do not assume known camera poses and use estimates learned by a teacher network, typically a PoseNet [43]-based model. This network takes two images as input and regresses a 6-DoF pose prediction. As the estimated pose affects the computation of the epipolar geometry (Fig. 1(a)), the accuracy of the pose estimates is crucial to obtain accurate correspondence matches between multiple frames. However, as noted in recent studies [72], pure learning-based pose regression generally still lags behind its traditional counterpart, due to the lack of geometric reasoning. By refining the pose estimates, we can improve the accuracy of the matching costs, potentially leading to better depth estimates as well. In Fig. 1(b), we show that the epipolar lines calculated from the regressed poses do not align with our refined estimates. Conversely, a better depth prediction may lead to a better pose prediction.
Thus, instead of building the cost volume once using regressed poses, we choose to perform refinements of both depth and pose in parallel and sample updated local cost volumes at each iteration. This approach is fundamentally inspired by traditional SfM optimization and is closely aligned with feedback-based models that directly couple depth and pose predictions [27]. In this work, we propose a depth and pose refinement model that drives both towards an equilibrium, trained in a self-supervised framework. We accomplish this by making the following contributions: First, we introduce an iterative update module that is based on epipolar geometry and direct alignment. We sample candidate matches along the epipolar line that evolves based on the current pose estimates. Then the sampled matching costs are used to infer per-pixel confidences that are used to compute depth refinements. The updated depth estimates are then used in direct feature-metric alignments to refine the pose updates towards convergence. As a result, our model can perform geometrically consistent depth and pose updates. Second, these updates refine the initial estimates made by the single-frame model. By doing so, we do not rely on full cost volume construction and base our updates only on local cost volumes, making it simpler, more memory efficient, and more robust. Lastly, we design our method within a deep equilibrium (DEQ) framework [3] to implicitly drive the predictions towards a fixed point. Importantly, DEQ allows for efficient training with low training memory, improving upon the huge memory consumption of previous work. With our proposed novel design, we show improved depth estimates through experiments that are competitive with the SoTA models. Furthermore, our model demonstrates improved global consistency of visual odometry results, outperforming other learning-based models.
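To illustrate why the sampled matching candidates move when the pose estimate is refined (Fig. 1), the short NumPy sketch below computes the epipolar line in the source image from intrinsics and a relative pose using the standard F = K^-T [t]_x R K^-1 relation; the intrinsics and pose values are toy numbers, not KITTI calibration.

```python
import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

def epipolar_line(K, R, t, uv):
    """Line (a, b, c) with a*u + b*v + c = 0 in the source image for a target-image pixel uv."""
    F = np.linalg.inv(K).T @ skew(t) @ R @ np.linalg.inv(K)   # fundamental matrix from (R, t)
    return F @ np.array([uv[0], uv[1], 1.0])

K = np.array([[720.0, 0.0, 620.0], [0.0, 720.0, 190.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.02, 0.0, 0.95])      # an initial relative pose estimate
print(epipolar_line(K, R, t, (400.0, 150.0)))      # candidates are sampled along this line;
                                                   # refining (R, t) shifts it, changing the matches
```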
Hui_Unifying_Layout_Generation_With_a_Decoupled_Diffusion_Model_CVPR_2023
Abstract
Layout generation aims to synthesize realistic graphic scenes consisting of elements with different attributes including category, size, position, and between-element relation. It is a crucial task for reducing the burden of heavy-duty graphic design work for formatted scenes, e.g., publications, documents, and user interfaces (UIs). Diverse application scenarios impose a big challenge in unifying various layout generation subtasks, including conditional and unconditional generation. In this paper, we propose a Layout Diffusion Generative Model (LDGM) to achieve such unification with a single decoupled diffusion model. LDGM views a layout with arbitrary missing or coarse element attributes as an intermediate diffusion status of a completed layout. Since different attributes have their individual semantics and characteristics, we propose to decouple the diffusion processes for them to improve the diversity of training samples, and learn the reverse process jointly to exploit global-scope contexts for facilitating generation. As a result, our LDGM can generate layouts either from scratch or conditioned on arbitrary available attributes. Extensive qualitative and quantitative experiments demonstrate that our proposed LDGM outperforms existing layout generation models in both functionality and performance.
1. Introduction
Layout determines the placements and sizes of primitive elements on a page of formatted scenes (e.g., publications, documents, UIs), which has critical impacts on how viewers understand and interact with the information on this page [13]. Layout generation is an emerging task of synthesizing realistic and attractive graphic scenes with primitive elements of different categories, sizes, positions, and relations. It is in high demand for reducing the burden of heavy-duty graphic design work in diverse application scenarios.
*This work was done when Mude Hui was an intern at MSRA.
Figure 1. The layout generation tasks can be unified into a diffusion (noise-adding) process and a generation (denoising) process.
Recently, there have been some research works studying unconditional generation [1, 7, 10, 16, 28], conditional generation based on user-specified inputs (e.g., element types [12, 13, 15], element types and sizes [13, 16] or element relations [12, 15]), conditional refinement based on coarse attributes [24], and conditional completion based on partially available elements [7], etc. However, none of them can cope with all these application scenarios simultaneously. This imposes a big challenge in unifying various layout generation subtasks with a single model, including conditional generation upon various specified attributes and unconditional generation from scratch. Towards this goal, the prior work UniLayout [9] takes a further step by proposing a multi-task framework to handle six subtasks for layout generation with a single model. However, the supported subtasks are pre-defined and could not cover all application scenarios, e.g., conditional generation based on specified element sizes. Besides, it does not take into account the combinational cases of several subtasks, e.g., the case wherein some elements have missing attributes to be generated while the others have coarse attributes to be refined in the same layout. Generally, a layout comprises a series of elements with multiple attributes, i.e., category, position, size and between-element relation. Each element attribute has three possible statuses: precise, coarse or missing. Different layout generation subtasks supported by previous works are defined as a limited number of cases where the attribute statuses are fixed upon attribute types, as shown in Figure 2. From a unified perspective, all missing or coarse attributes can be viewed as the corrupted results of their corresponding targets. With this key insight in mind, we innovatively propose to unify various forms of user inputs as intermediate statuses of a diffusion (corruption) process, while modeling generation as a reverse (denoising) process. Furthermore, attributes with different corruption degrees are likely to appear at once in user inputs. And different attributes have their own semantics and characteristics.
These in fact impose a challenge for the diffusion process to create diverse training samples as a comprehensive simulation of various user inputs. In this work, we propose a decoupled diffusion model, LDGM, to address this challenge. The meaning of "decoupled" here is twofold: (i) we design attribute-specific forward diffusion processes upon the attribute types; (ii) we decouple the forward diffusion process from the reverse denoising process, wherein the forward processes are individual for different types of attributes, whereas the reverse processes are integrated into one to be jointly performed. In this way, our proposed LDGM includes not only attribute-aware forward diffusion processes for different attributes to ensure the diversity of generation results, but also a joint denoising process with full message passing over the global-scope elements for improving the generation quality. Our contributions can be summarized in the following:
• We present that various layout generation subtasks can be comprehensively unified with a single diffusion model.
• We propose the Layout Diffusion Generative Model (LDGM), which allows parallel decoupled diffusion processes for different attributes and a joint denoising process for generation with sufficient global message passing and context exploitation. It conforms to the characteristics of layouts and achieves high generation quality.
• Extensive qualitative and quantitative experiment results demonstrate that our proposed scheme outperforms existing layout generation models in terms of functionality and performance on different benchmark datasets.
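The snippet below sketches what "decoupled" forward corruption could look like for one layout: the category attribute follows its own discrete masking chain while the geometric attributes follow a continuous noise chain, each with an independently drawn step. The schedules, the MASK token, and the box parameterization are assumptions for illustration only, not LDGM's exact diffusion design.

```python
import torch

MASK_TOKEN = -1          # stands in for a [MISSING] category state

def corrupt_layout(categories, boxes, t_cat, t_geo, num_steps=100):
    """categories: (N,) long; boxes: (N, 4) in [0, 1]; t_cat, t_geo: per-attribute steps."""
    drop = torch.rand(categories.shape) < (t_cat / num_steps)            # discrete chain
    noisy_cat = torch.where(drop, torch.full_like(categories, MASK_TOKEN), categories)
    sigma = 0.5 * t_geo / num_steps                                      # continuous chain
    noisy_boxes = (boxes + sigma * torch.randn_like(boxes)).clamp(0, 1)
    return noisy_cat, noisy_boxes      # a single joint denoiser learns to invert both chains

cats, boxes = torch.tensor([0, 1, 2]), torch.rand(3, 4)                  # e.g. Text, Icon, Button
print(corrupt_layout(cats, boxes, t_cat=30, t_geo=80))
```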
Choi_Dynamic_Neural_Network_for_Multi-Task_Learning_Searching_Across_Diverse_Network_CVPR_2023
Abstract
In this paper, we present a new MTL framework that searches for structures optimized for multiple tasks with diverse graph topologies and shares features among tasks. We design a restricted DAG-based central network with read-in/read-out layers to build topologically diverse task-adaptive structures while limiting search space and time. We search for a single optimized network that serves as multiple task-adaptive sub-networks using our three-stage training process. To make the network compact and discretized, we propose a flow-based reduction algorithm and a squeeze loss used in the training process. We evaluate our optimized network on various public MTL datasets and show that ours achieves state-of-the-art performance. An extensive ablation study experimentally validates the effectiveness of the sub-modules and schemes in our framework.
1. Introduction
Multi-task learning (MTL), which learns multiple tasks simultaneously with a single model, has gained increasing attention [3, 13, 14]. MTL improves the generalization performance of tasks while limiting the total number of network parameters to a lower level by sharing representations across tasks. However, as the number of tasks increases, it becomes more difficult for the model to learn the shared representations, and improper sharing between less related tasks causes negative transfer that sacrifices the performance of multiple tasks [15, 36]. To mitigate the negative transfer in MTL, some works [6, 25, 32] separate the shared and task-specific parameters in the network. More recent works [21, 29, 38] have been proposed to dynamically control the ratio of shared parameters across tasks using a Dynamic Neural Network (DNN) to construct a task-adaptive network. These works mainly apply cell-based architecture search [19, 27, 41] for fast search times, so the optimized sub-networks of each task consist of fixed or simple structures whose layers are simply branched, as shown in Fig. 1a. They primarily focus on finding branching patterns in specific aspects of the architecture, and feature-sharing ratios across tasks. However, exploring optimized structures in restricted network topologies has the potential to cause performance degradation in heterogeneous MTL scenarios due to unbalanced task complexity.
*Corresponding author
We present a new MTL framework searching for sub-network structures, optimized for each task across diverse network topologies in a single network. To search the graph topologies from a richer search space, we apply a Directed Acyclic Graph (DAG) for the homo/heterogeneous MTL frameworks, inspired by the work in NAS [19, 27, 40]. MTL in the DAG search space causes a scalability issue, where the number of parameters and the search time increase quadratically as the number of hidden states increases. To solve this problem, we design a restricted DAG-based central network with read-in/read-out layers that allows our MTL framework to search across diverse graph topologies while limiting the search space and search time. Our flow restriction eliminates the low-importance long skip connections among network structures for each task, and reduces the required number of parameters from O(N^2) to O(N). The read-in layer is the layer that directly connects all the hidden states from the input state, and the read-out layer is the layer that connects all the hidden states to the last feature layer. These are key to having various network topological representations, such as polytree structures, with early-exiting and multi-embedding. Then, we optimize the central network to have compact task-adaptive sub-networks using a three-stage training procedure. To accomplish this, we propose a squeeze loss and a flow-based reduction algorithm. The squeeze loss limits the upper bound on the number of parameters. The reduction algorithm prunes the network based on the weighted adjacency matrix measured by the amount of information flow in each layer. In the end, our MTL framework constructs a compact single network that serves as multiple task-specific networks with unique structures, such as chain, polytree, and parallel diverse topologies, as presented in Fig. 1b. It also dynamically controls the amount of shared representation among tasks.
Figure 1. Graph representation of various neural networks. (a) Graph representation of existing dynamic neural networks for multitask learning (i)–(ii) and ours (iii). (b) Topologies of a complete Directed Acyclic Graph (DAG) (i) and the output sub-graph of the DAG structure (ii).
The experiments demonstrate that our framework successfully searches the task-adaptive network topologies of each task and leverages the knowledge among tasks to make a generalized feature. The proposed method outperforms state-of-the-art methods on all common benchmark datasets for MTL. Our contributions can be summarized as follows:
• We present for the first time an MTL framework that searches both task-adaptive structures and sharing patterns among tasks. It achieves state-of-the-art performance on all public MTL datasets.
• We propose a new DAG-based central network composed of a flow restriction scheme and read-in/out layers, which has diverse graph topologies in a reasonably restricted search space.
• We introduce a new training procedure that optimizes the MTL framework for compactly constructing various task-specific sub-networks in a single network.
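As a rough picture of the connectivity argument above (not the paper's exact restriction scheme), the sketch below builds an adjacency matrix in which every hidden state receives a read-in edge from the input, sends a read-out edge to the output, and otherwise connects only to a small window of nearby predecessors, so the edge count grows linearly in the number of hidden states; the window size is an assumed hyper-parameter.

```python
import numpy as np

def restricted_dag_adjacency(n_states, window=2):
    A = np.zeros((n_states + 2, n_states + 2), dtype=int)   # node 0 = input, last node = output
    inp, out = 0, n_states + 1
    for h in range(1, n_states + 1):
        A[inp, h] = 1                        # read-in: input connects to every hidden state
        A[h, out] = 1                        # read-out: every hidden state reaches the last layer
        for p in range(max(1, h - window), h):
            A[p, h] = 1                      # only short-range skips between hidden states
    return A

A = restricted_dag_adjacency(6)
print(int(A.sum()), "edges versus", 6 * 5 // 2 + 2 * 6, "in the unrestricted DAG")
```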
Feng_Probing_Sentiment-Oriented_Pre-Training_Inspired_by_Human_Sentiment_Perception_Mechanism_CVPR_2023
Abstract
Pre-training of deep convolutional neural networks (DCNNs) plays a crucial role in the field of visual sentiment analysis (VSA). Most proposed methods employ off-the-shelf backbones pre-trained on large-scale object classification datasets (i.e., ImageNet). While this boosts performance by a big margin over initializing model states from random, we argue that DCNNs simply pre-trained on ImageNet may excessively focus on recognizing objects, but fail to provide high-level concepts in terms of sentiment. To address this long-overlooked problem, we propose a sentiment-oriented pre-training method that is built upon the human visual sentiment perception (VSP) mechanism. Specifically, we factorize the process of VSP into three steps, namely stimuli taking, holistic organizing, and high-level perceiving. By imitating each VSP step, a total of three models are separately pre-trained via our devised sentiment-aware tasks that contribute to excavating sentiment-discriminated representations. Moreover, along with our elaborated multi-model amalgamation strategy, the prior knowledge learned from each perception step can be effectively transferred into a single target model, yielding substantial performance gains. Finally, we verify the superiority of our proposed method through extensive experiments, covering mainstream VSA tasks from single-label learning (SLL) and multi-label learning (MLL) to label distribution learning (LDL). Experiment results demonstrate that our proposed method leads to unanimous improvements in these downstream tasks. Our code is released at https://github.com/tinglyfeng/sentiment_pretraining .
1. Introduction
Visual sentiment analysis aims to understand the sentiment embedded in an image, and has gradually become a critical computer vision task that enables numerous applications from opinion mining [45] and entertainment assistance [5] to business intelligence [18]. Given an image, the main goal of VSA is to recognize the emotion induced by viewers, providing either categorical emotion states (CES) [9, 30] or dimensional emotion space (DES) [23, 41] representations.
* Equal contribution. † Corresponding author.
Traditional methods proposed for VSA normally involve extracting sentiment-related hand-crafted features like line directions [48], textures and colors [30], etc. These features are then sent to a classifier, e.g., a support vector machine (SVM), to predict the emotional states. However, due to the affective gap [15], the low-level features can hardly meet the high-level attribute requirements of VSA, thus resulting in relatively unsatisfying performance. Entering the deep learning era, DCNNs are now the dominant tools applied to various computer vision tasks, such as image classification, object detection, etc. Blessed with impressive high-level feature extraction capabilities, DCNNs have demonstrated superior advantages for modern VSA, as proven by a lot of milestone works [3, 50, 56]. Beneath the success, many may ignore one important factor that largely determines the performance of VSA, namely the pre-trained model. Due to the data-hungry nature of DCNNs, initializing model parameters from models trained on large-scale datasets has been a go-to technique for most tasks to improve their generalization abilities. When it comes to VSA, the lack of data has been exacerbated by the arduous annotation process (every image needs to be annotated by multiple people due to the subjectivity of emotion), resulting in its especially heavy reliance on pre-training. In our experiments on the FI dataset [57], the ResNet50 [16] pre-trained on ImageNet [8] outperforms the one trained from scratch by 20 percent in terms of accuracy, revealing the undeniably crucial role the pre-trained model plays in VSA. Today's deep models proposed for VSA are mostly initialized from models pre-trained on ImageNet to achieve satisfactory performance [59]. However, different from many other computer vision tasks that mainly depend on objective semantics, VSA requires a relatively higher level of understanding of an image. Therefore, pre-training only on ImageNet, which is specially designed for object classification, may not be the best practice for VSA. In this paper, we argue that the models pre-trained on ImageNet
fail to achieve sentiment-related initial states to relieve the burden of learning sentiment representations from limited data. Also, due to the psychological and physiological nature of VSA, we believe that only if we fully understand how human sentiment is internally constructed can we thoroughly unveil the potential of VSA pre-training. Therefore, our proposed pre-training method is built upon the human visual sentiment perception mechanism. Summarized from numerous existing research in the fields of psychology and neuroscience [24, 26], we factorize the process of VSP into three steps in chronological order: 1) Stimuli Taking (ST): the procedure starts with the retina receiving light signals composed of colors and textures [29]. 2) Holistic Organizing (HO): the second step, taking place in the primary visual cortex (V1) of our brain, is to construct a whole map determining the overall context and global organization of the scene [10, 43]. 3) High-level Perceiving (HP): the other parts of our brain help us separate the main objects from ambient light and build our high-level awareness [13, 19, 39]. Inspired by these theories, we build our pre-training framework by instructing the DCNNs to mimic the behavior of humans.
Figure 1. Overview of our pre-training method. We split a CNN backbone into three stages, each of which is responsible for extracting features corresponding to a certain VSP step. To fully excavate sentiment-related knowledge in terms of each step, a total of three models are separately trained to perform our elaborated tasks shown at the bottom.
In this work, we separately perform three groups of pre-training tasks, each of which corresponds to one VSP step and is intentionally designed to excavate the key sentiment features. To fully leverage the sentiment knowledge learned from the pre-trained models, we then elaborate an amalgamation strategy to effectively distill their abilities into a single target model. The amalgamation process is performed by squeezing the gap between the target model and the sentiment-aware pre-trained models on both the logits and features at various levels. Moreover, the pre-trained models still participate in the whole downstream training, which further unleashes the potential learning abilities of DCNNs to accommodate the specialties of the training data. We apply our method to multiple downstream VSA tasks including single-label learning, multi-label learning, and label distribution learning. Extensive experiments have demonstrated favorable improvements from our proposed pre-training method. Our contributions are three-fold.
1) We propose a sentiment-oriented pre-training method to separately train a total of three models, each of which is dedicated to mimicking the human sentiment perception mechanism through performing pre-training tasks. 2) We devise an amalgamation strategy to aggregate the sentiment-discriminated knowledge from pre-trained models into a single target model during the training of downstream tasks, yielding favorable performance gains. 3) We conduct extensive experiments on various backbones and diverse VSA datasets. The experiment results demonstrate that our proposed method can unanimously improve the performance of a wide variety of VSA tasks.
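A hedged sketch of the amalgamation objective described above: the single target model is pulled toward the three perception-step teachers on both intermediate features and logits while still optimizing the downstream sentiment loss. The loss weights, the temperature, and the assumption that each teacher exposes matching feature and logit tensors are illustrative, not the paper's exact formulation.

```python
import torch.nn.functional as F

def amalgamation_loss(student_feat, student_logits, teacher_feats, teacher_logits,
                      labels, alpha=1.0, beta=0.5, tau=4.0):
    task = F.cross_entropy(student_logits, labels)                   # downstream VSA objective
    feat = sum(F.mse_loss(student_feat, tf) for tf in teacher_feats)
    kd = sum(F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                      F.softmax(tl / tau, dim=1), reduction='batchmean') * tau * tau
             for tl in teacher_logits)
    return task + alpha * feat + beta * kd                           # squeeze the gap to all three teachers
```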
Chen_Imitation_Learning_As_State_Matching_via_Differentiable_Physics_CVPR_2023
Abstract
Existing imitation learning (IL) methods such as inverse reinforcement learning (IRL) usually have a double-loop training process, alternating between learning a reward function and a policy, and tend to suffer from long training time and high variance. In this work, we identify the benefits of differentiable physics simulators and propose a new IL method, i.e., Imitation Learning as State Matching via Differentiable Physics (ILD), which gets rid of the double-loop design and achieves significant improvements in final performance, convergence speed, and stability. The proposed ILD incorporates the differentiable physics simulator as a physics prior into its computational graph for policy learning. ILD unrolls the dynamics by sampling actions from a parameterized policy and minimizing the distance between the expert trajectory and the agent trajectory. It back-propagates the gradient into the policy via temporal physics operators, which improves the transferability to unseen environments and yields higher final performance. ILD has a single-loop structure that stabilizes and speeds up training. It dynamically selects learning objectives for each state during optimization to simplify the complex optimization landscape. Experiments show that ILD outperforms state-of-the-art methods in continuous control tasks with Brax, and can be applied to deformable object manipulation tasks and generalized to unseen configurations.¹
1. Introduction
In a variety of applications ranging from games to real-world robotic tasks [13, 18, 38], imitation learning (IL) is popularly applied. However, collecting high-quality expert data is expensive, and existing IL methods tend to suffer from long training time, an unstable training process, high variance of the learned IL policies, and suboptimal final performance. Classical behavioral cloning (BC) methods learn policies directly from labeled data, but often suffer from the covariate shift problem. This problem can be tackled in DAGGER [32] by interacting with the environment and querying experts online, which however requires significant human effort to label the actions. Other IL methods mainly include inverse reinforcement learning (IRL), adversarial imitation learning (AIL), and combinations of them. IRL learns a reward function to match expert demonstrations [11, 19, 41], and AIL learns a discriminator to identify whether the action comes from an expert demonstration [18, 24]. However, both IRL and AIL learn an additional intermediate signal, which introduces three main limitations: 1) the intermediate signal learning leads to a double-loop training process, which means long training time and complex implementation; 2) the learning signal is a noisy and frequently updated moving target, and as a result, the policy learning tends to have a high variance; 3) the intermediate signal, e.g., the reward function in IRL, inevitably loses the rich information embedded in the trajectories, e.g., the environment dynamics.
‡This work is completed at the SEA AI Lab.
¹The link to the code: https://github.com/sail-sg/ILD
In this work, we propose a new approach to IL, named Imitation Learning as State Matching via Differentiable Physics (ILD), which recovers expert behavior by exploiting a Differentiable Physics Simulator (DPS) [12, 20]. Different from standard environments, a DPS implements low-level physics operations with a differentiable function and allows the gradients to flow through the dynamics. ILD takes advantage of DPSs by considering the environment dynamics as a physics prior and incorporating it into its computational graph during back-propagation of the policy, such that the learned policy fully captures both the expert demonstration and the environment specifications. To achieve this, ILD simply minimizes the state-wise distance of a rollout trajectory generated by a parameterized policy to the expert demonstration, which also gives a single-loop design and avoids learning intermediate signals. Nevertheless, the gradients of physics operators are highly non-convex, which often introduces a complex optimization landscape, and consequently, a naive implementation is often stuck in a local minimum [12]. To alleviate this issue, we introduce a simple yet effective Chamfer-α distance for trajectory matching. For each state in the rollout trajectory, instead of exactly matching the corresponding expert state, we dynamically select the easiest local goal as the optimization target
and gradually proceed to the harder ones as training progresses. The Chamfer-α distance naturally forms a curriculum learning setup, simplifies the optimization task, and eventually gives better final performance. A short comparison of some useful properties of the IL methods can be found in Table 1.
Table 1. Useful properties among IL methods.
Property / Method Family             | IRL             | AIL           | ILD (ours)
Layers of training loop              | Double-loop     | Double-loop   | Single-loop
Source of the learning signal        | Reward function | Discriminator | Differentiable dynamics
Transferability in changing dynamics | Depends         | No            | Yes
In contrast to the IRL and AIL methods, ILD does not introduce new intermediate signals and therefore requires no switching between policy learning and intermediate signal learning. In terms of the learning paradigm, IRL learns a reward function, AIL learns a discriminator, and ILD uses the differentiable dynamics, which makes the learned policy aware of the environment dynamics and transferable to unseen environment configurations. Empirically, we validate ILD on a set of MuJoCo-like continuous control tasks from Brax [12] and a challenging cloth manipulation task. We show that ILD achieves significant improvements over the state-of-the-art IRL and AIL methods in terms of convergence time, training stability, and final performance. Given a fixed one-hour training time, ILD achieves 36% higher performance based on the normalized score over all the tasks and baselines.
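A minimal sketch of the Chamfer-α idea described above, assuming (T, D) state trajectories and treating α as the half-width of the local matching window (the exact definition in the paper may differ): each rollout state is pulled toward its easiest nearby expert state rather than the strictly time-aligned one.

```python
import torch

def chamfer_alpha_loss(rollout, expert, alpha=5):
    """rollout, expert: (T, D) state trajectories; gradients flow from `rollout`
    back through the differentiable simulator into the policy parameters."""
    T = rollout.size(0)
    loss = rollout.new_zeros(())
    for t in range(T):
        lo, hi = max(0, t - alpha), min(T, t + alpha + 1)
        dists = ((rollout[t] - expert[lo:hi]) ** 2).sum(dim=1)   # distances to local goals
        loss = loss + dists.min()                                # match the easiest local goal
    return loss / T
```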
Chen_Multivariate_Multi-Frequency_and_Multimodal_Rethinking_Graph_Neural_Networks_for_Emotion_CVPR_2023
Abstract
Complex relationships of high arity across modality and context dimensions are a critical challenge in the Emotion Recognition in Conversation (ERC) task. Yet, previous works tend to encode multimodal and contextual relationships in a loosely-coupled manner, which may harm relationship modelling. Recently, Graph Neural Networks (GNNs), which show advantages in capturing data relations, offer a new solution for ERC. However, existing GNN-based ERC models fail to address some general limits of GNNs, including assuming pairwise formulation and erasing high-frequency signals, which may be trivial for many applications but crucial for the ERC task. In this paper, we propose a GNN-based model that explores multivariate relationships and captures the varying importance of emotion discrepancy and commonality by valuing multi-frequency signals. We empower GNNs to better capture the inherent relationships among utterances and deliver more sufficient multimodal and contextual modelling. Experimental results show that our proposed method outperforms previous state-of-the-art works on two popular multimodal ERC datasets.
1. Introduction
Human beings constantly express their feelings in everyday communication. Emotion Recognition in Conversation (ERC) aims at enabling machines to detect interactive human emotions in a dialogue, utilizing multi-sensory data including textual, visual and acoustic information [5, 13, 18, 24]. Unlike traditional affective computing tasks that are performed on single modalities (e.g., text, speech or facial images) [12, 28, 32] and/or in non-conversational scenarios [15, 23, 33], there exists a distinct and essential challenge in the ERC task: the complex multivariate relationships among multiple modalities and conversational context. In other words, the emotional dependencies of an utterance are usually of high arity, and involve multi-source information across both modality and context dimensions.
Corresponding author: Jie Shao. This work is supported by the National Natural Science Foundation of China (No. 61832001 and No. 62276047), Natural Science Foundation of Sichuan Province (No. 2023NSFSC1972) and Science and Technology Program of Yibin Sanjiang New Area (No. 2023SJXQYBKJJH001).
Figure 1. An example of multimodal dialogue (left) and the complex multivariate relationships of u3 and u6 (right).
Figure 1 presents a sample conversation between two speakers. Take the utterance u3 as an example. The visual and acoustic messages of utterance u3 (an expressionless face and a flat tone) are ambiguous, but imply a veiled anger if coupled with the text. Moreover, the emotion behind u3 is also related to the preceding context u1 and u2. In particular, the change from calling by nickname in u1 to calling by full name in u3 suggests an emotion shift caused by u2, since another speaker tries to make a joke with a pretended lightness. Therefore, the relationships in {u1, u2, u3} are complex and multivariate, and involve interdependencies across both modality and context dimensions.
A routine so-lution is to construct a heterogeneous graph where each modality of an utterance is regarded as a node, and con-nected with other modalities of the same utterance as well as connected with the utterances in same modality in the same dialogue. Carefully-tweaked edge-weighting strategies usu-ally follow. On this basis, multimodal and contextual de-pendencies among utterances can be modelled simultane-ously through message passing, and thus deliver tighter en-tanglement and richer interaction. Powerful as these GNN-based methods are, they still suffer from two limitations: i)Insufficient multivariate relationships . Conven-tional GNNs assume pairwise relationships of ob-jects of interest, and can only offer an approximation of higher-order and multivariate relationships through multiple pairs [1, 10]. However, degeneration of those multivariate relationships into pairwise formulation may harm the expressiveness [20,30]. Therefore, com-plex multivariate relationships in ERC may not be suf-ficiently modelled by previous GNN-based methods. ii)Underestimated high-frequency information . It has been shown that the propagation rule of GNNs (i.e., ag-gregating and smoothing messages from neighbours) is an analogy to a fixed low-pass filter [26, 31], and it is mainly low-frequency messages that flow in the graph while the effects of high-frequency ones are much weakened. Moreover, Bo et al . [2] show that low-frequency messages, which retain the commonality of node features, perform better on assortative graphs (in which the linked nodes tend to have similar features and share the same label). In contrast, high-frequency information that mirrors discrepancy and inconsistency is more crucial on disassortative graphs. For ERC, the constructed graphs are in general highly disassorta-tive, where inconsistent emotional messages may exist among modalities (say being sarcastic) or short-term context. Hence, high-frequency information may pro-vide crucial guidance, which is however badly ignored by previous GNN-based ERC models, incurring bottle-neck of performance improvement.To address these issues, in this work we propose Multivariate Multi-frequency Multimodal Graph Neural Network (M3Net), which aims to capture more sufficient multivariate relationships among modalities and context, while benefiting from multi-frequency information within the graph. At the core of M3Net are two parallel compo-nents, multivariate propagation and multi-frequency propa-gation. Concretely, we first construct a hypergraph neural network with edge-dependent node weights [7] for multi-variate propagation, in which each modality of an utterance is represented as a node. We construct multimodal and con-textual hyperedges, which can connect arbitrary number of nodes, and thus can naturally encode relationships of higher arity. Meanwhile, we model multi-frequency information upon an undirected GNN, by adapting a set of frequency fil-ters [2, 8] to distil different frequency constituents from the node features. We adaptively integrate different frequency signals to capture the varying importance of emotion dis-crepancy and emotion commonality in the local neighbour-hood, so as to achieve adaptive information sharing pattern. The effectiveness of our work is further demonstrated by extensive experimental studies on two popular multimodal ERC datasets IEMOCAP [3] and MELD [27]. We show that M3Net outperforms previous state-of-the-art methods.
Clark_Where_We_Are_and_What_Were_Looking_At_Query_Based_CVPR_2023
Abstract
Determining the exact latitude and longitude at which a photo was taken is a useful and widely applicable task, yet it remains exceptionally difficult despite the accelerated progress of other computer vision tasks. Most previous approaches have opted to learn single representations of query images, which are then classified at different levels of geographic granularity. These approaches fail to exploit the different visual cues that give context to different hierarchies, such as the country, state, and city level. To this end, we introduce an end-to-end transformer-based architecture that exploits the relationship between different geographic levels (which we refer to as hierarchies) and the corresponding visual scene information in an image through hierarchical cross-attention. We achieve this by learning a query for each geographic hierarchy and scene type. Furthermore, we learn a separate representation for different environmental scenes, as different scenes in the same location are often defined by completely different visual features. We achieve state-of-the-art accuracy on 4 standard geo-localization datasets: Im2GPS, Im2GPS3k, YFCC4k, and YFCC26k, and qualitatively demonstrate how our method learns different representations for different visual hierarchies and scenes, which has not been demonstrated in previous methods. The above testing datasets mostly consist of iconic landmarks or images taken from social media, which reduces the task to simple memorization or biases it towards certain places. To address this issue we introduce a much harder testing dataset, Google-World-Streets-15k, comprised of images taken from Google Streetview covering the whole planet, and present state-of-the-art results on it. Our code can be found at https://github.com/AHKerrigan/GeoGuessNet.
1. Introduction
Image geo-localization is the task of determining the GPS coordinates of where a photo was taken as precisely as possible. For certain locations, this may be an easy task, as most cities will have noticeable buildings, landmarks, or statues that give away their location. For instance, given an image of the Eiffel Tower one could easily assume it was taken somewhere in Paris. Noticing some of the finer features, like the size of the tower in the image and other buildings that might be visible, a prediction within a few meters could be fairly easy. However, given an image from a small town outside of Paris, it may be very hard to predict its location. Certain trees or a building's architecture may indicate the image is in France, but localizing finer than that can pose a serious challenge. Adding in different times of day, varying weather conditions, and different views of the same location makes this problem even more complex, as two images from the same location could look wildly different.
Many works have explored solutions to this problem, with nearly all focusing on the retrieval task, where query images are matched to a gallery of geo-tagged images to retrieve the matching geo-tagged image [14, 16, 17, 20, 24, 25]. There are two variations of the retrieval approach to this problem, same-view and cross-view. In same-view, both the query and gallery images are taken at ground level. However, in cross-view the query images are ground level while the gallery images are from an aerial view, either by satellite or drone.
This creates a challenging task, as images with the exact same location can look very different from one another. Regardless of same-view or cross-view, the evaluation of the retrieval task is costly, as features need to be extracted and compared for every possible match with geo-tagged gallery images, making global-scale geo-localization costly if not infeasible.
If, instead, the problem is approached as a classification task, it's possible to localize on the global scale given enough training data [8, 11, 12, 15, 21, 22]. These approaches segment the Earth into Google's S2 cells (https://code.google.com/archive/p/s2-geometry-library/) that are assigned GPS locations and serve as classes, speeding up evaluation. Most previous classification-based visual geo-localization approaches use the same strategy as any other classification task: using an image backbone (either a Convolutional Neural Network or a Vision Transformer [2]), they learn a set of image features and output a probability distribution for each possible location (or class) using an MLP. In more recent works [11, 12], using multiple sets of classes that represent different global scales, as well as utilizing information about the scene characteristics of the image, has been shown to improve results. These approaches produce one feature vector for an image and presume that it is good enough to localize at every geographic level. However, that is not how a human would reason about finding out their location. If a person had no idea where they were, they would likely search for visual cues for a broad location (country, state) before considering finer areas. Thus, a human would look for a different set of features for each geographic level they want to predict.
In this paper, we introduce a novel approach toward worldwide visual geo-localization inspired by human experts. Typically, humans do not evaluate the entirety of a scene and reason about its features, but rather identify important objects, markers, or landmarks and match them to a cache of knowledge about various known locations. In our approach, we emulate this by using a set of learned latent arrays called "hierarchy queries" that learn a different set of features for each geographic hierarchy. These queries also learn to extract features relative to specific scene types (e.g. forests, sports fields, industrial, etc.). We do this so that our queries can focus more specifically on features relevant to their assigned scene as well as the features related to their assigned hierarchy. This is done via a Transformer Decoder that cross-attends our hierarchy and scene queries with image features that are extracted from a backbone. We also implement a "hierarchy dependent decoder" that ensures our model learns the specifics of each individual hierarchy. To do this, our "hierarchy dependent decoder" separates the queries according to their assigned hierarchy, and has independent weights for the Self-Attention and Feed-Forward stages that are specific to each hierarchy.
We also note that the existing testing datasets contain implicit biases which make them unfit to truly measure a model's geo-location accuracy. For instance, the Im2GPS [4, 21] datasets contain many images of iconic landmarks, which only tests whether a model has seen and memorized the locations of those landmarks.
Also, the YFCC [18, 21] testing sets are composed entirely of images posted online that contained geo-tags in their metadata. This creates a bias towards locations that are commonly visited and posted online, like tourist sites. Previous work has found this introduces significant geographical and often racial biases into the datasets [7], which we demonstrate in Figure 4. To this end, we introduce a challenging new testing dataset called Google-World-Streets-15k, which is more evenly distributed across the Earth and consists of real-world images from Google Streetview.
The contributions of our paper include: (1) The first Transformer Decoder for worldwide image geo-localization. (2) The first model to produce multiple sets of features for an input image, and the first model capable of extracting scene-specific information without needing a separate network for every scene. (3) A new testing dataset that reduces landmark bias and reduces biases created by social media. (4) A significant improvement over previous SOTA methods on all datasets. (5) A qualitative analysis of the features our model learns for every hierarchy and scene query.
2. Related Works
2.1. Retrieval Based Image Geo-Localization
The retrieval method for geo-localization attempts to match a query image to target image(s) from a reference database (gallery). Most methods train by using separate models for the ground and aerial views, bringing the features of paired images together in a shared space. Many different approaches have been proposed to overcome the domain gap, with some methods implementing GANs [3] that map images from one view to the other [14], others using a polar transform that makes use of prior geometric knowledge to alter aerial views to look like ground views [16, 17], and a few even combining the two techniques in an attempt to have the images appear even more similar [20]. Most methods assume that the ground and aerial images are perfectly aligned spatially. However, this is not always the case. In circumstances where orientation and spatial alignment aren't perfect, the issue can be accounted for ahead of time or even predicted [17]. VIGOR [25] creates a dataset where the spatial location of a query image could be located anywhere within the view of its matching aerial image. Zhu [24] strays from the previous methods by using a non-uniform crop that selects the most useful patches of aerial images and ignores others.
2.2. Image Geo-Localization as Classification
By segmenting the Earth's surface into distinct classes and assigning a GPS coordinate to each class, a model is allowed to predict a class directly instead of comparing features to a reference database. Treating geo-localization this way was first introduced by Weyand et al. [22]. In their paper, they introduce a technique to generate classes that utilizes Google's S2 library and a set of training GPS coordinates to partition the Earth into cells, which are treated as classes.
Figure 1. A visualization of all 7 hierarchies used. The tmax value is set to 25000, 10000, 5000, 2000, 1000, 750, and 500 respectively for hierarchies 1 to 7, while the tmin value is set at 50 for every hierarchy. This generates 684, 1744, 3298, 7202, 12893, 16150, and 21673 classes for hierarchies 1 to 7 respectively.
Vo [21] was the first to introduce using multiple different partitions of varying granularity. In contrast, CPlaNet [15] develops a technique that uses combinatorial partitioning.
This approach uses multiple different coarse partitions and encodes each of them as a graph, then refines the graph by merging nodes. More details on class generation will be discussed in Section 3.1.
Up until Individual Scene Networks (ISNs) [11], no information other than the image itself was used at training time. The insight behind ISNs was that different image contexts require different features to be learned in order to accurately localize the image. They make use of this by having three separate networks for indoor, natural, and urban images respectively. This way each network can learn the important features for each scene and more accurately predict locations. The use of hierarchical classes was also introduced in [11]. While previous papers had utilized multiple geographic partitions, the authors in [11] observed that these partitions could be connected through a hierarchical structure. To make use of this, they proposed a new evaluation technique that combines the predictions of multiple partitions, similar to YOLO9000 [13], which helps refine the overall prediction.
Kordopatis-Zilos [8] developed a method that combines classification and retrieval. Their network uses classification to get a predicted S2 cell, then retrieval within that cell to get a refined prediction. Most recently, TransLocator [12] was introduced, which learns from not only the RGB image but also the segmentation map produced by a trained segmentation network. Providing the segmentation map allows TransLocator to rely on the segmentation if there are any variations in the image, like weather or time of day, that would impact a normal RGB-based model. All of these methods fail to account for features that are specific to different geographic hierarchies and don't fully utilize scene-specific information. We solve these problems with our query-based learning approach.
3. Method
In our approach, we treat discrete locations as classes, obtained by dividing the planet into Schneider-2 cells at different levels of geographic granularity. The size of each cell is determined by the number of training images available in the given region, with the constraint that each cell has approximately the same number of samples. We exploit the hierarchical nature of geo-location by learning different sets of features for each geographic hierarchy and for each scene category from an input image. Finally, we classify a query image by selecting the set of visual features correlated with the most confident scene prediction. We use these sets of features to map the image to an S2 cell at each hierarchical level and combine the predictions at all levels into one refined prediction using the finest hierarchy.
3.1. Class Generation
With global geo-localization comes the problem of separating the Earth into classes. A naive way to do this would be to simply tessellate the Earth into the rectangles that are created by latitude and longitude lines. This approach has a few issues; for one, the surface area of each rectangle will vary with the distance from the poles, producing large class imbalances. Instead, we utilize Schneider 2 cells using Google's S2 Library. This process initially projects the Earth onto 6 sides of a cube, thereby resulting in an initial 6 S2 cells. To create balanced classes, we split each cell with more than tmax images from the training set located inside of it. We ignore any cells that have fewer than tmin images to ensure that classes have a significant number of images. The cells are split recursively until all cells fall within tmin and tmax images. This creates a set of balanced classes that cover the entire Earth. These classes and hierarchies are visualized in Figure 1, where we can see the increasing specificity of our hierarchies. We begin with 684 classes at our coarsest hierarchy and increase that to 21673 at our finest. During evaluation we define the predicted location as the mean of the locations of all training images inside a predicted class.
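The recursive splitting procedure of Section 3.1 can be sketched as follows. For illustration only, a simple latitude/longitude quadtree is used as a stand-in for Google's S2 cells, and the default thresholds mirror the coarsest hierarchy's values quoted in the Figure 1 caption; the real method operates on S2 cells produced by the S2 library.

```python
import numpy as np

def split_cells(coords, bounds=(-90.0, 90.0, -180.0, 180.0), t_min=50, t_max=25000):
    """Recursively split a geographic cell until every kept cell holds between
    t_min and t_max training images. coords: (N, 2) array of (lat, lon) of the
    training images located inside this cell. Points on the upper boundary are
    ignored for brevity."""
    lat_lo, lat_hi, lon_lo, lon_hi = bounds
    n = len(coords)
    if n < t_min:
        return []                  # too few samples: discard the cell
    if n <= t_max:
        return [bounds]            # balanced cell: keep it as one class
    lat_mid, lon_mid = (lat_lo + lat_hi) / 2, (lon_lo + lon_hi) / 2
    cells = []
    for la0, la1 in ((lat_lo, lat_mid), (lat_mid, lat_hi)):
        for lo0, lo1 in ((lon_lo, lon_mid), (lon_mid, lon_hi)):
            mask = ((coords[:, 0] >= la0) & (coords[:, 0] < la1) &
                    (coords[:, 1] >= lo0) & (coords[:, 1] < lo1))
            cells += split_cells(coords[mask], (la0, la1, lo0, lo1), t_min, t_max)
    return cells
```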
Figure 2. Our proposed network. We randomly initialize a set of learned queries for each hierarchy and scene. An image is first encoded by a Transformer Encoder and decoded by two decoders. The first decoder consists of N layers as a Hierarchy Independent Decoder, followed by E layers of our Hierarchy Dependent Decoder; this decoder only performs self-attention within each hierarchy, instead of across all hierarchies, and has separate Feed-Forward Networks for each hierarchy. To determine which scene to use for prediction, the scene with the highest average confidence (denoted by the 0th channel) is selected and queries are fed to their corresponding classifier to geo-localize at each hierarchy. We get a final prediction by multiplying the class probabilities of the coarser hierarchies into the finer ones so that a prediction using all hierarchical information can be made.
3.2. Model
Our model is shown in Figure 2; it consists of a SWIN encoder, two decoders, and seven hierarchy classifiers. Here we outline the details behind our model's design. One problem faced in geo-localization is that two images in the same geographic cell can share very few visual similarities. Two images from the same location could be taken at night or during the day, in sunny or rainy weather, or simply from the same location but one image faces North while the other faces South. Additionally, some information in a scene can be relevant to one geographic hierarchy (e.g. state) but not another (e.g. country). To that end, we propose a novel decoder-based architecture designed to learn unique sets of features for each of these possible settings. We begin by defining our geographic queries as GQ ∈ R^{HS×D}, where H is the number of geographic hierarchies, S is the number of scene labels, and D is the dimension of the features. We define each individual geographic query as gq_s^h, where h and s represent the index of the hierarchy and scene, respectively. The scene labels we use are provided by the Places2 dataset [23]. We implement a pre-trained scene classification model to get the initial scene label from the coarsest set of labels, and finer labels are extracted using their hierarchical structure. We find that the middle set of 16 scenes gives the best results for our model; we show an ablation on this in the supplementary material.
3.3. GeoDecoder
Hierarchy Independent Decoder. The geographic queries are passed into our GeoDecoder, whose primary function is, for each hierarchical query, to extract geographical information relevant to its individual task from the image tokens which have been produced by a Swin encoder [10]. As previously stated, our decoder performs operations on a series of learned latent arrays called Geographic Queries in a manner inspired by the Perceiver [6] and DETR [1]. We define X as the image tokens and GQ^k as the geographic queries at the k-th layer of the decoder. Each layer performs multi-head self-attention (MSA) on the layer-normalized (LN) geographic queries, followed by cross-attention between the output of self-attention and the image patch encodings, where cross-attention is defined as CA(Q, K) = softmax(QK^T / √d_k) K, with Q and K the Query and Key, respectively. Finally, we normalize the output of the cross-attention operation and feed it into a feed-forward network (FFN) to produce the output of the decoder layer. Therefore, one decoder layer is defined as
y_SA = MSA(LN(GQ^{k-1})) + GQ^{k-1},   (1)
y_CA = CA(LN(y_SA), LN(X)) + y_SA,     (2)
GQ^k = FFN(LN(y_CA)) + y_CA.           (3)
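A minimal PyTorch sketch of one hierarchy-independent decoder layer following Eqs. (1)-(3) is given below. It uses the standard nn.MultiheadAttention module with keys reused as values, which approximates the CA definition above; the width, head count and MLP ratio are assumptions.

```python
import torch
import torch.nn as nn

class GeoDecoderLayer(nn.Module):
    """One hierarchy-independent decoder layer following Eqs. (1)-(3)."""
    def __init__(self, dim=768, heads=8, mlp_ratio=4):
        super().__init__()
        self.norm_q1 = nn.LayerNorm(dim)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_q2 = nn.LayerNorm(dim)
        self.norm_x = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_ffn = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))

    def forward(self, gq, img_tokens):
        # gq: (B, H*S, dim) geographic queries; img_tokens: (B, L, dim) encoder output
        q = self.norm_q1(gq)
        y_sa = self.self_attn(q, q, q, need_weights=False)[0] + gq          # Eq. (1)
        q2, x = self.norm_q2(y_sa), self.norm_x(img_tokens)
        y_ca = self.cross_attn(q2, x, x, need_weights=False)[0] + y_sa      # Eq. (2)
        return self.ffn(self.norm_ffn(y_ca)) + y_ca                         # Eq. (3)
```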
Hierarchy Dependent Decoder. We find that a traditional transformer decoder structure for the entire GeoDecoder results in a homogeneity of all hierarchical queries. Therefore, in the final layers of the decoder, we perform self-attention only in an intra-hierarchical manner rather than between all hierarchical queries. Additionally, we assign each hierarchy its own feed-forward network at the end of each layer rather than allowing hierarchies to share one network. We define the set of geographic queries specifically for hierarchy h at layer k as GQ_h^k. The feed-forward network for hierarchy h is referred to as FFN_h:
y_SA = MSA(LN(GQ_h^{k-1})) + GQ_h^{k-1},   (4)
y_CA = CA(LN(y_SA), LN(X)) + y_SA,         (5)
GQ_h^k = FFN_h(LN(y_CA)) + y_CA.           (6)
After each level, each GQ_h^k is concatenated to reform the full set of queries GQ. In the ablations (Table 4), we show the results of these hierarchy-dependent layers.
3.4. Losses
As shown in Figure 2, our network is trained with two losses. The first loss is a scene prediction loss, L_scene, which is a Cross-Entropy loss between the predicted scene label ŝ_i and the ground-truth scene label s_i. Our second loss is a geo-location prediction loss, L_geo, which is a combination of Cross-Entropy losses for each hierarchy. Given an image X, we define the set of location labels as h_1, h_2, ..., h_7, where h_i denotes the ground-truth class distribution in hierarchy i, and the respective predicted distribution as ĥ_i. We define L_scene(X) = CE(s_i, ŝ_i), L_geo(X) = Σ_{i=1}^{7} CE(h_i, ĥ_i), and L(X) = L_geo(X) + L_scene(X).
3.5. Inference
With the output of our GeoDecoder, GQ_out, we can geo-localize the image. As our system is designed to learn different latent embeddings for different visual scenes, we must first choose which features to proceed with. For gq_s^h ∈ GQ, we assign the confidence that the image belongs to scene s to that vector's 0th element. This minimizes the need for an additional individual scene network like in [11], while allowing specific weights within the decoder's linear layers to specialize in differentiating visual scenes. Once we have GQ_out, the queries are separated and sent to the classifier that is assigned to their hierarchy. This gives us 7 different sets of class probabilities, one for each hierarchy. To condense this information into one class prediction, and to exploit the hierarchical nature of our classes, we multiply the probabilities of the classes in the coarser hierarchies by their sub-classes found in the finer hierarchies. If we define a class as C_j^{Hi}, where i denotes the hierarchy and j denotes the class label within that hierarchy, we can define the probability of predicting a class C_a^{H7} for image X as p(X|C_a^{H7}) = p(X|C_a^{H7}) · p(X|C_b^{H6}) · ... · p(X|C_g^{H1}), given that C_a^{H7} is a subclass of C_b^{H6}, C_b^{H6} is a subclass of C_c^{H5}, and so on. We perform this for every class in our finest hierarchy so that we can use the finest geographic granularity while also using the information learned for all of the hierarchies.
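The coarse-to-fine refinement of Section 3.5 can be implemented by walking up the class hierarchy and accumulating the ancestors' probabilities. The sketch below assumes a hypothetical parent_maps structure (one parent look-up table per adjacent pair of hierarchies, on the same device as the probabilities); the actual bookkeeping in the released code may differ.

```python
import torch

def refine_finest_probs(probs_per_level, parent_maps):
    """Combine per-hierarchy class probabilities into a refined prediction for the
    finest level by multiplying each fine class with its ancestors' probabilities.
    probs_per_level: list of 7 tensors, probs_per_level[i] has shape (B, C_i).
    parent_maps: list of 6 LongTensors; parent_maps[i][c] is the index at level i
    of the parent of class c at level i + 1 (index layout is an assumption)."""
    refined = probs_per_level[-1]                                   # hierarchy 7
    ancestor = torch.arange(refined.shape[1], device=refined.device)
    for level in range(5, -1, -1):                                  # walk up 6 -> 1
        ancestor = parent_maps[level][ancestor]                     # fine class -> ancestor id
        refined = refined * probs_per_level[level][:, ancestor]
    return refined                                                  # (B, C_7)
```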
Figure 3. Example images from 16 different countries in the Google-World-Streets-15k dataset.
4. Google-World-Streets-15K Dataset
We propose a new testing dataset collected using Google Streetview, called Google-World-Streets-15k (see Figure 3 for some representative examples). As previous testing datasets contain biases towards commonly visited locations or landmarks, the goal of our dataset is to eliminate those biases and have a more even distribution across the Earth. In total, our dataset contains 14,955 images covering 193 countries.
In order to collect a fair distribution of images, we utilize a database of 43,000 cities (https://simplemaps.com/data/world-cities), as well as the surface area of every country. We first sample a country with a probability proportional to its surface area compared to the Earth's total surface area. Then, we select a random city within that country and a GPS coordinate within a 5 km radius of the center of the city to sample from the Google Streetview API. This ensures that the dataset is evenly distributed according to landmass and not biased towards the countries and locations that people post online. Google Streetview also blurs out any faces found in the photos, so a model that is using people's faces to predict a location will have to rely on other features in the image to get a prediction.
In Figure 4 we show a heatmap of Google-World-Streets-15k compared to heatmaps of YFCC26k and Im2GPS3k. We note that a majority of YFCC26k and Im2GPS3k are located in North America and Europe, with very little representation in the other 4 populated continents. While Google-World-Streets-15k's densest areas are still the Northeastern US and Europe, we provide a much more even sampling of the Earth with images on all populated continents. We also note that the empty locations on our dataset's heatmap are mostly deserts, tundras, and mountain ranges.
5. Experiments
5.1. Training Data
Our network is trained on the MediaEval Placing Tasks 2016 (MP-16) dataset [9]. This dataset consists of 4.72 million randomly chosen geo-tagged images from the
Chen_Human_Guided_Ground-Truth_Generation_for_Realistic_Image_Super-Resolution_CVPR_2023
Abstract
How to generate the ground-truth (GT) image is a critical issue for training realistic image super-resolution (Real-ISR) models. Existing methods mostly take a set of high-resolution (HR) images as GTs and apply various degradations to simulate their low-resolution (LR) counterparts. Though great progress has been achieved, such an LR-HR pair generation scheme has several limitations. First, the perceptual quality of HR images may not be high enough, limiting the quality of Real-ISR outputs. Second, existing schemes do not consider much human perception in GT generation, and the trained models tend to produce over-smoothed results or unpleasant artifacts. With the above considerations, we propose a human guided GT generation scheme. We first elaborately train multiple image enhancement models to improve the perceptual quality of HR images, and enable one LR image to have multiple HR counterparts. Human subjects are then involved to annotate the high quality regions among the enhanced HR images as GTs, and label the regions with unpleasant artifacts as negative samples. A human guided GT image dataset with both positive and negative samples is then constructed, and a loss function is proposed to train the Real-ISR models. Experiments show that the Real-ISR models trained on our dataset can produce perceptually more realistic results with fewer artifacts. Dataset and codes can be found at https://github.com/ChrisDud0257/HGGT.
1. Introduction
Owing to the rapid development of deep learning techniques [14, 18, 19, 22, 44], recent years have witnessed great progress in image super-resolution (ISR) [2, 8-10, 12, 13, 23, 26-29, 31-33, 35, 45, 46, 48, 51, 52, 54, 56], which aims at generating a high-resolution (HR) version of the low-resolution (LR) input. Most of the ISR models (e.g., CNN [37, 38] or transformer [5, 29] based ones) are trained on a large amount of LR-HR image pairs, while the generation of LR-HR image pairs is critical to the real-world performance of ISR models.
Figure 1. From left to right and top to bottom: one original HR image (Ori) in the DIV2K [1] dataset, two of its enhanced positive versions (Pos-1 and Pos-2) and one negative version (Neg). The positive versions generally have clearer details and better perceptual quality, while the negative version has some unpleasant visual artifacts. Please zoom in for better observation.
Most of the existing ISR methods take the HR images (or after some sharpening operations [46]) as ground-truths (GTs), and use them to synthesize the LR images to build the LR-HR training pairs. In the early stage, bicubic downsampling was commonly used to synthesize the LR images from their HR counterparts [8, 9, 23, 33, 42, 56]. However, the ISR models trained on such HR-LR pairs can hardly generalize to real-world images, whose degradation process is much more complex. Therefore, some researchers proposed to collect HR-LR image pairs by using long-short camera focal lengths [3, 4]. While such a degradation process is more reasonable than bicubic downsampling, it only covers a small subspace of possible image degradations. Recently, researchers [12, 20, 30, 32, 34, 46, 50, 51, 59] have proposed to shuffle or combine different degradation factors, such as Gaussian/Poisson noise, (an-)isotropic blur kernels, downsampling/upsampling, JPEG compression and so on, to synthesize LR-HR image pairs, largely improving the generalization capability of ISR models to real-world images.
Though great progress has been achieved, existing LR-HR training pair generation schemes have several limitations. First, the original HR images are used as the GTs to supervise the ISR model training. However, the perceptual quality of HR images may not be high enough (Fig. 1 shows an example), limiting the performance of the trained ISR models. Second, existing schemes do not consider much human perception in GT generation, and the trained ISR models tend to produce over-smoothed results. When adversarial losses [27, 40, 48] are used to improve the ISR details, many unpleasant artifacts can be introduced.
In order to tackle the aforementioned challenges, we propose a human guided GT data generation strategy to train perceptually more realistic ISR (Real-ISR) models. First, we elaborately train multiple image enhancement models to improve the perceptual quality of HR images. Meanwhile, one LR image can have multiple enhanced HR counterparts instead of only one.
Second, to discriminate the visual quality between the original and enhanced images, human subjects are introduced to annotate the regions in enhanced HR images as "Positive", "Similar" or "Negative" samples, which represent better, similar or worse perceptual quality compared with the original HR image. Consequently, a human guided multiple-GT image dataset is constructed, which has both positive and negative samples. With the help of the human annotation information in our dataset, positive and negative LR-GT training pairs can be generated (examples of the positive and negative GTs can be seen in Fig. 1), and a new loss function is proposed to train the Real-ISR models. Extensive experiments are conducted to validate the effectiveness and advantages of the proposed GT image generation strategy. With the same backbone, the Real-ISR models trained on our dataset can produce more perceptually realistic details with fewer artifacts than models trained on the current datasets.
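The exact form of the proposed loss is given in the paper's method section; purely as an illustration of how positive and negative GTs could be used together, the sketch below pulls the super-resolved output toward an annotated positive GT while pushing it away from regions labelled as negative. The L1/margin formulation, the weights and the mask convention are hypothetical.

```python
import torch
import torch.nn.functional as F

def human_guided_loss(sr, gt_pos, gt_neg, neg_mask, margin=0.05):
    """Illustrative loss over human-annotated GTs (not the paper's exact formulation).
    sr, gt_pos, gt_neg: (B, C, H, W); neg_mask: (B, 1, H, W), 1 where a region was
    labelled as a negative (artifact-prone) sample."""
    # pull the output toward the human-preferred positive ground-truth
    l_pos = F.l1_loss(sr, gt_pos)
    # push the output away from annotated artifact regions, up to a small margin
    dist_to_neg = (sr - gt_neg).abs().mean(dim=1, keepdim=True)
    l_neg = (neg_mask * F.relu(margin - dist_to_neg)).sum() / neg_mask.sum().clamp(min=1)
    return l_pos + 0.5 * l_neg
```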
He_Align_and_Attend_Multimodal_Summarization_With_Dual_Contrastive_Losses_CVPR_2023
Abstract
The goal of multimodal summarization is to extract the most important information from different modalities to form summaries. Unlike unimodal summarization, the multimodal summarization task explicitly leverages cross-modal information to help generate more reliable and high-quality summaries. However, existing methods fail to leverage the temporal correspondence between different modalities and ignore the intrinsic correlation between different samples. To address this issue, we introduce Align and Attend Multimodal Summarization (A2Summ), a unified multimodal transformer-based model which can effectively align and attend the multimodal input. In addition, we propose two novel contrastive losses to model both inter-sample and intra-sample correlations. Extensive experiments on two standard video summarization datasets (TVSum and SumMe) and two multimodal summarization datasets (Daily Mail and CNN) demonstrate the superiority of A2Summ, achieving state-of-the-art performance on all datasets. Moreover, we collected a large-scale multimodal summarization dataset BLiSS, which contains livestream videos and transcribed texts with annotated summaries. Our code and dataset are publicly available at https://boheumd.github.io/A2Summ/.
1. Introduction
With the development of multimodal learning, multimodal summarization has drawn increasing attention [1-9]. Different from traditional unimodal summarization tasks, such as video summarization [10-17] and text summarization [18-22], multimodal summarization aims at generating summaries by utilizing the information from different modalities. With the explosively growing amount of online content (e.g., news, livestreams, vlogs, etc.), multimodal summarization can be applied in many real-world applications. It provides summarized information to the users, which is especially useful for redundant long videos such as livestream and product review videos.
Figure 1. A2Summ is a unified multimodal summarization framework, which aligns and attends multimodality inputs while leveraging time correspondence (e.g., video and transcript) and outputs the selected important frames and sentences as summaries.
Previous multimodal summarization methods [2, 4, 23, 24] leverage the additional modality information but can only generate the main modality summary, i.e., either a video summary or a text summary, severely limiting the use of complementary benefits in the additional modality. Recently, multimodal summarization with multimodal output (MSMO) has been explored in several studies [1, 6, 25, 26], which aim at generating both video and text summaries using a joint model. Compared to previous methods, which only produce a unimodal summary, MSMO provides a better user experience with an easier and faster way to get useful information. However, we find that the existing MSMO methods still have the following limitations. First, even if both modalities are learned together, the correspondence between different modalities is not exploited. For example, given a video and its transcripts, which are automatically matched along the time axis, no existing method utilizes the mutual temporal alignment information; the two modalities are treated separately. Second, previous works adopt simple strategies to model the cross-modal correlation by sequence modeling and attention operations [1, 4, 25, 26], which requires a large amount of annotated multimodal data that is hard to obtain.
Motivated by the above observations, we propose a novel architecture for multimodal summarization based on a unified transformer model, as shown in Figure 1. First, to leverage the alignment information between different modalities, we propose an alignment-guided self-attention module to align the temporal correspondence between video and text modalities and fuse cross-modal information in a unified manner.
Second, inspired by the success of self-supervised training [27-29], which utilizes the intrinsic cross-modality correlation within the same video and between different videos, we propose dual contrastive losses, the combination of an inter-sample and an intra-sample contrastive loss, to model the cross-modal correlation at different granularities. Specifically, the inter-sample contrastive loss is applied across different sample pairs within a batch, which leverages the intrinsic correlation between each video-text pair and contrasts them against the remaining unmatched samples to provide more training supervision. Meanwhile, the intra-sample contrastive loss operates within each sample pair, which exploits the mutual similarities between ground-truth video and text summaries and contrasts the positive features against hard-negative features.
To facilitate the research of long video summarization with multimodal information, we also collected a large-scale livestream video dataset from the web. Livestream broadcasting is growing rapidly, and the summarization of livestream videos is still an unexplored area with great potential. Previous video summarization datasets consist of short videos with great variations in scene transitions. On the contrary, livestream videos are significantly longer (in hours as opposed to minutes) and the video content changes much more slowly over time, which makes the summarization task even harder. Besides, there has been a lack of annotated datasets with a focus on transcript summarization, which can be a great complement to livestream video summarization. Therefore, we collect a large-scale multimodal summarization dataset with livestream videos and transcripts, which are both annotated with ground-truth summaries by selecting important frames and sentences.
To summarize, our contributions include:
• We propose A2Summ, a unified transformer-based architecture for multimodal summarization. It can handle multimodal input with time correspondences, which previous work neglects.
• We present dual contrastive losses that account for modeling cross-modal information at different levels. Extensive experiments on multiple datasets demonstrate the effectiveness and superiority of our design.
• A large-scale Behance LiveStream Summarization (BLiSS) dataset is collected containing livestream videos and transcripts with multimodal summaries.
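To make the inter-sample term concrete, the sketch below shows a standard symmetric InfoNCE loss over matched video-text pairs in a batch, which is the general pattern the description above follows; the feature pooling, temperature and equal weighting are assumptions rather than the exact A2Summ configuration.

```python
import torch
import torch.nn.functional as F

def inter_sample_contrastive(video_feats, text_feats, temperature=0.07):
    """Illustrative inter-sample contrastive loss: matched video/text pairs in a
    batch are positives, all other pairings in the batch are negatives.
    video_feats, text_feats: (B, D) pooled per-sample features."""
    v = F.normalize(video_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    logits = v @ t.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(v.shape[0], device=v.device)
    # symmetric InfoNCE over the two directions (video->text and text->video)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```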
Huang_Self-Supervised_AutoFlow_CVPR_2023
Abstract
Recently, AutoFlow has shown promising results on learning a training set for optical flow, but requires ground truth labels in the target domain to compute its search metric. Observing a strong correlation between the ground truth search metric and self-supervised losses, we introduce self-supervised AutoFlow to handle real-world videos without ground truth labels. Using self-supervised loss as the search metric, our self-supervised AutoFlow performs on par with AutoFlow on Sintel and KITTI where ground truth is available, and performs better on the real-world DAVIS dataset. We further explore using self-supervised AutoFlow in the (semi-)supervised setting and obtain competitive results against the state of the art.
1. Introduction
Data is the new oil. — Clive Humby, 2006 [13]
This well-known analogy not only foretold the critical role of data for developing AI algorithms in the last decade but also revealed the importance of data curation. Like refined oil, data must be carefully curated to be useful for AI algorithms to succeed. For example, one key ingredient for the success of AlexNet [21] is ImageNet [36], a large dataset created by extensive manual labeling.
The manual labeling process, however, is either not applicable or difficult to scale to many low-level vision tasks, such as optical flow. A common practice for optical flow is to pre-train models using large-scale synthetic datasets, e.g., FlyingChairs [6] and FlyingThings3D [26], and then fine-tune them on limited in-domain datasets, e.g., Sintel [4] or KITTI [28]. While this two-step process works better than directly training on the limited target datasets, there exists a domain gap between synthetic data and the target domain. To narrow the domain gap, AutoFlow [41] learns to render a training dataset to optimize performance on a target dataset, obtaining superior results on Sintel and KITTI where the ground truth is available. As obtaining ground truth optical flow for most real-world data is still an open challenge, it is of great interest to remove this dependency on ground truth to apply AutoFlow to real-world videos.
In this paper, we introduce a way to remove this reliance by connecting learning to render with another independent line of research on optical flow, self-supervised learning (SSL). SSL methods for optical flow [15, 23-25, 53] use a set of self-supervised losses to train models using only image pairs in the target domain. We observe a strong correlation between these self-supervised losses and the ground truth errors, as shown in Fig. 2. This motivates us to connect these two lines of research by adopting self-supervised losses as a search metric for AutoFlow [41], calling our approach "Self-supervised AutoFlow".
Self-supervised AutoFlow obtains similar performance to AutoFlow on Sintel [4] and KITTI [28], and it can learn a better dataset for the real-world DAVIS data [29] where ground truth is not available. To further narrow the domain gap between synthetic data and the target domain, we also explore new ways to better synergize techniques from learning to render and self-supervised learning. Numerous self-supervised methods still rely on pre-training on a synthetic dataset. Our method replaces this pre-training with supervised training on self-supervised AutoFlow data generated using self-supervised metrics. This new pipeline is still self-supervised and obtains competitive performance among all self-supervised methods. We further demonstrate that our method provides a strong initialization for supervised fine-tuning and obtains competitive results against the state of the art.
We make the following main contributions:
• We introduce self-supervised AutoFlow to learn to render a training set for optical flow using self-supervision on the target domain, connecting two independently studied directions for optical flow: learning to render and self-supervised learning.
• Self-supervised AutoFlow performs competitively against AutoFlow [41], which uses ground truth, on Sintel and KITTI, and performs better on DAVIS where ground truth is not available.
• We further analyze self-supervised AutoFlow in semi-supervised and supervised settings and obtain competitive performance against the state of the art.
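As an illustration of the kind of self-supervised metric referred to above, the sketch below computes a photometric warping error plus a first-order smoothness term for a predicted flow field. Practical self-supervised flow losses typically add occlusion masking and census-transform matching; the flow channel convention and the smoothness weight here are assumptions.

```python
import torch
import torch.nn.functional as F

def self_supervised_flow_loss(img1, img2, flow, smooth_weight=0.1):
    """Illustrative self-supervised metric for optical flow.
    img1, img2: (B, 3, H, W) image pair; flow: (B, 2, H, W) with channel 0 = x
    displacement and channel 1 = y displacement (an assumed convention)."""
    b, _, h, w = img1.shape
    # build a sampling grid that displaces each pixel of img2 by the predicted flow
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(flow.device).unsqueeze(0)
    grid = base + flow
    grid_x = 2.0 * grid[:, 0] / (w - 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / (h - 1) - 1.0
    warped = F.grid_sample(img2, torch.stack((grid_x, grid_y), dim=-1),
                           align_corners=True)
    photometric = (img1 - warped).abs().mean()
    smoothness = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean() + \
                 (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
    return photometric + smooth_weight * smoothness
```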
Chen_MagicNet_Semi-Supervised_Multi-Organ_Segmentation_via_Magic-Cube_Partition_and_Recovery_CVPR_2023
Abstract
We propose a novel teacher-student model for semi-supervised multi-organ segmentation. In the teacher-student model, data augmentation is usually adopted on unlabeled data to regularize the consistent training between teacher and student. We start from a key perspective that fixed relative locations and variable sizes of different organs can provide distribution information where a multi-organ CT scan is drawn. Thus, we treat the prior anatomy as a strong tool to guide the data augmentation and reduce the mismatch between labeled and unlabeled images for semi-supervised learning. More specifically, we propose a data augmentation strategy based on partition-and-recovery of N^3 cubes cross- and within-labeled and unlabeled images. Our strategy encourages unlabeled images to learn organ semantics in relative locations from the labeled images (cross-branch) and enhances the learning ability for small organs (within-branch). For the within-branch, we further propose to refine the quality of pseudo labels by blending the learned representations from small cubes to incorporate local attributes. Our method is termed MagicNet, since it treats the CT volume as a magic-cube and the N^3-cube partition-and-recovery process matches the rule of playing a magic-cube. Extensive experiments on two public CT multi-organ datasets demonstrate the effectiveness of MagicNet, which noticeably outperforms state-of-the-art semi-supervised medical image segmentation approaches, with +7% DSC improvement on the MACT dataset with 10% labeled images. Code is available at https://github.com/DeepMed-Lab-ECNU/MagicNet.
1. Introduction
Abdominal multi-organ segmentation in CT images is an essential task in many clinical applications such as computer-aided intervention [25, 33].
Figure 1. Two data augmentation strategies in MagicNet. Left: Although labeled and unlabeled images are not aligned, the latter can be regarded as a shifted version of the former. Co-shift of cubes transfers organ semantics in relative locations from the labeled data to unlabeled data. Right: Segmenting small organs from original images is difficult due to the cluttered background. Small cubes mitigate the impact from the background and focus more on local attributes.
However, training an accurate multi-organ segmentation model usually requires a large amount of labeled data, whose acquisition process is time-consuming and expensive. Semi-supervised learning (SSL) has shown great potential to handle the scarcity of data annotations, which attempts to transfer mass prior knowledge learned from the labeled to unlabeled images. SSL has attracted more and more attention in the field of medical image analysis in recent years.
Popular SSL medical image segmentation methods mainly focus on segmenting a single target or targets in a local region, such as segmenting the pancreas or the left atrium [4, 9, 14, 15, 18, 23, 31, 35, 38, 39]. Multi-organ segmentation is more challenging than single-organ segmentation, due to the complex anatomical structures of the organs, e.g., the fixed relative locations (the duodenum is always located at the head of the pancreas), the appearances of different organs, and the large variations of size. Transferring current SSL medical segmentation methods to multi-organ segmentation encounters severe problems. Multiple organs introduce much more variance compared with a single organ. Although labeled and unlabeled images are always drawn from the same distribution, due to the limited number of labeled images, it's hard to estimate the precise distribution from them [32]. Thus, the estimated distribution between labeled and unlabeled images always suffers from mismatch problems, and the mismatch is enlarged even further by multiple organs. The aforementioned SSL medical segmentation methods lack the ability to handle such a large distribution gap, which requires sophisticated anatomical structure modeling. A few semi-supervised multi-organ segmentation methods have been proposed: DMPCT [43] designs a co-training strategy to mine consensus information from multiple views of a CT scan, and UMCT [36] further proposes an uncertainty estimation of each view to improve the quality of the pseudo-label. Though these methods take advantage of multi-view properties in a CT scan, they inevitably ignore the internal anatomical structures of multiple organs, resulting in suboptimal results.
The teacher-student model is a widely adopted framework for semi-supervised medical image segmentation [28].
The student network takes labeled images and unlabeled strongly augmented images as input, which attempts to minimize the distribution mismatch between labeled and unlabeled images from the model level. That is, data augmentation is adopted on unlabeled data, whose role is to regularize the consistent training between teacher and student. As mentioned, semi-supervised multi-organ segmentation suffers from a large distribution alignment mismatch between labeled and unlabeled images. Reducing the mismatch mainly from the model level is insufficient to solve the problem. Thanks to the prior anatomical knowledge from CT scans, which provides the distribution information where a multi-organ CT scan is drawn, it is possible to largely alleviate the mismatch problem from the data level.
To this end, we propose a novel teacher-student model, called MagicNet, matching the rule of playing a magic-cube. More specifically, we propose a partition-and-recovery N^3-cubes learning paradigm: (1) We partition each CT scan, termed a magic-cube, into N^3 small cubes. (2) Two data augmentation strategies are then designed, as shown in Fig. 1. First, to encourage unlabeled data to learn organ semantics in relative locations from the labeled data, small cubes are mixed across labeled and unlabeled images while keeping their relative locations. Second, to enhance the learning ability for small organs, small cubes are shuffled and fed into the student network. (3) We recover the magic-cube to form the original 3D geometry to map with the ground-truth or the supervisory signal from the teacher. Furthermore, the quality of pseudo labels predicted by the teacher network is refined by blending with the learned representations of the small cubes. The cube-wise pseudo-label blending strategy incorporates local attributes, e.g., texture, luster and boundary smoothness, which mitigates the inferior performance on small organs.
The main contributions can be summarized as follows:
• We propose a data augmentation strategy based on partition-and-recovery of N^3 cubes cross- and within-labeled and unlabeled images, which encourages unlabeled images to learn organ semantics in relative locations from the labeled images and enhances the learning ability for small organs.
• We propose to correct the original pseudo-label by cube-wise pseudo-label blending via incorporating crucial local attributes for identifying targets, especially small organs.
• We verify the effectiveness of our method on the BTCV [13] and MACT [11] datasets. The segmentation performance of our method exceeds all state-of-the-arts by a large margin, with 7.28% (10% labeled) and 6.94% (30% labeled) improvement on the two datasets respectively (with V-Net as the backbone) in DSC.
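A minimal sketch of the partition-and-recovery idea and the cross-branch cube mixing is given below; it assumes volume dimensions divisible by N and a random swap ratio, both of which are illustrative choices rather than the paper's exact recipe. In the real method, the corresponding label/pseudo-label cubes would have to be mixed in exactly the same way, which the sketch omits.

```python
import torch

def partition_cubes(volume, n=3):
    """Split a CT volume (C, D, H, W) into n^3 small cubes (dims divisible by n)."""
    c, d, h, w = volume.shape
    cubes = volume.reshape(c, n, d // n, n, h // n, n, w // n)
    return cubes.permute(1, 3, 5, 0, 2, 4, 6).reshape(n ** 3, c, d // n, h // n, w // n)

def recover_volume(cubes, n=3):
    """Inverse of partition_cubes: reassemble n^3 cubes into the full volume."""
    _, c, d, h, w = cubes.shape
    vol = cubes.reshape(n, n, n, c, d, h, w).permute(3, 0, 4, 1, 5, 2, 6)
    return vol.reshape(c, n * d, n * h, n * w)

def cross_image_mix(labeled, unlabeled, n=3, ratio=0.5):
    """Illustrative cross-branch augmentation: swap a random subset of cubes between
    a labeled and an unlabeled volume while keeping each cube at its original
    relative location."""
    lab, unl = partition_cubes(labeled, n), partition_cubes(unlabeled, n)
    swap = torch.rand(n ** 3) < ratio
    mixed_lab, mixed_unl = lab.clone(), unl.clone()
    mixed_lab[swap], mixed_unl[swap] = unl[swap], lab[swap]
    return recover_volume(mixed_lab, n), recover_volume(mixed_unl, n)
```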
Han_FAME-ViL_Multi-Tasking_Vision-Language_Model_for_Heterogeneous_Fashion_Tasks_CVPR_2023
Abstract
In the fashion domain, there exists a variety of vision-and-language (V+L) tasks, including cross-modal retrieval, text-guided image retrieval, multi-modal classification, and image captioning. They differ drastically in each individual input/output format and dataset size. It has been common to design a task-specific model and fine-tune it independently from a pre-trained V+L model (e.g., CLIP). This results in parameter inefficiency and inability to exploit inter-task relatedness. To address such issues, we propose a novel FAshion-focused Multi-task Efficient learning method for Vision-and-Language tasks (FAME-ViL) in this work. Compared with existing approaches, FAME-ViL applies a single model for multiple heterogeneous fashion tasks, therefore being much more parameter-efficient. It is enabled by two novel components: (1) a task-versatile architecture with cross-attention adapters and task-specific adapters integrated into a unified V+L model, and (2) a stable and effective multi-task training strategy that supports learning from heterogeneous data and prevents negative transfer. Extensive experiments on four fashion tasks show that our FAME-ViL can save 61.5% of parameters over alternatives, while significantly outperforming the conventional independently trained single-task models. Code is available at https://github.com/BrandonHanx/FAME-ViL.
1. Introduction
A variety of real-world multi-modal, particularly Vision-and-Language (V+L), tasks exist in the fashion domain, including multi-modal recognition [44, 53, 61], multi-modal retrieval [21, 83] and image captioning [85]. The models developed for these tasks have been applied in diverse e-commerce applications, improving product discoverability, seller-buyer engagement, and customer conversion rate after catalogue browsing.
Figure 1. By multi-task learning a single model for heterogeneous fashion tasks, our FAME-ViL can significantly improve parameter efficiency, while boosting the model performance per task over existing independently fine-tuned single-task models. Note, each axis is normalized according to the respective maximum value for easier visualization.
Intrinsically, those V+L tasks are heterogeneous in terms of (1) different input and output formats (e.g., text-guided garment retrieval [83] and image captioning [85] have completely different inputs and outputs); (2) different dataset sizes, as the annotation difficulty of each task differs (e.g., the labeling effort for text-guided image retrieval is much harder than that for text-to-image retrieval [48, 83]). Due to the heterogeneous nature of the V+L fashion tasks, existing methods [21, 24, 33, 87, 94] typically take a pre-trained generic V+L model [7, 38, 41-43, 49, 60, 67, 72, 79] and fine-tune it on every single task independently. Such an approach suffers from two limitations. (1) Low parameter efficiency: Each real-world application requires the deployment of its dedicated fine-tuned model, where there is no parameter or inference computation sharing. This leads to a linearly increasing storage and inference compute redundancy in the long run. (2) Lack of inter-task relatedness: Though the fashion tasks are heterogeneous in nature, the fundamental components of the models are closely related in that all tasks require a deep content (image/sentence) understanding. Exploiting the shared information across tasks thus has the potential to improve model generalization capability, leading to a performance boost.
Perhaps a natural solution would be applying Multi-Task Learning (MTL) [13]. However, most existing multi-task training methods [8, 36, 46, 56, 63] are designed for homogeneous tasks (i.e., one dataset with multi-task labels) and thus cannot be directly applied to the heterogeneous fashion tasks. In our case, we are facing two challenges in building the fashion-domain MTL model: (1) Architecturally, it is non-trivial to model the diverse tasks in one unified architecture. Taking the popular CLIP [60] as an example, its two-stream architecture is designed for image-text alignment [52] and thus lacks the modality fusion mechanism required by many V+L fashion tasks (e.g., text-guided image retrieval [2, 83] and image captioning [85]). (2) In terms of optimization, a fashion-domain MTL model is prone to the notorious negative transfer problem [8, 13, 36, 46, 56, 63] due to both task input/output format differences and imbalanced dataset sizes. To the best of our knowledge, there has been no attempt at V+L MTL for the fashion domain.
In this work, we introduce a novel FAshion-focused Multi-task Efficient learning method for various Vision-and-Language based fashion tasks, dubbed FAME-ViL. It achieves superior performance across a set of diverse fashion tasks with much fewer parameters, as in Fig. 1. Specifically, we design a task-versatile architecture on top of a pre-trained generic V+L model (i.e., CLIP [60]). To adapt the simple two-stream architecture of CLIP to various fashion tasks, we introduce a lightweight Cross-Attention Adapter (XAA) to enable the cross-modality interaction between the two streams. It makes the model flexible to support multiple task modes (e.g., contrastive mode for retrieval, fusion mode for understanding, and generative mode for generation). To address the negative transfer challenge, we introduce a Task-Specific Adapter (TSA) to absorb inter-task input/output format incompatibilities by introducing lightweight additional per-task parameters. For further handling the dataset imbalance problem, a multi-teacher distillation scheme [12] is formulated for our heterogeneous MTL problem. It leverages the pre-trained per-task teachers to guide the optimization of our multi-task model, mitigating the overfitting risks of those tasks with smaller training dataset sizes.
Figure 2. An illustration of four diverse fashion V+L tasks studied in this work: cross-modal retrieval, text-guided image retrieval, sub-category recognition, and fashion image captioning. Note, all predictions shown in this figure are made by our FAME-ViL. Green box indicates the ground truth matches of retrieval tasks.
Our contributions are summarized as follows: (I) For the first time, we investigate the problem of multi-task learning on heterogeneous fashion tasks, eliminating the parameter redundancy and exploiting the inter-task relatedness. (II) We propose FAME-ViL with two novel adapters, adapting a pre-trained CLIP model to all tasks. (III) We introduce an efficient and effective multi-task training strategy supporting heterogeneous task modes in one unified model. (IV) Comprehensive experiments on four diverse fashion tasks (i.e., cross-modal retrieval [52, 61], text-guided image retrieval [75, 83], multi-modal classification [61, 94], and image captioning [85]) show that our method significantly outperforms the previous single-task state-of-the-art with 61.5% parameter saving (see Fig. 1).
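To illustrate the Task-Specific Adapter idea, the sketch below inserts a small residual bottleneck MLP per task into a (frozen) transformer block; the bottleneck width, the task names and the placement within the block are assumptions, and the real XAA/TSA designs are detailed in the paper.

```python
import torch
import torch.nn as nn

class TaskSpecificAdapter(nn.Module):
    """Illustrative per-task bottleneck adapter with a residual connection."""
    def __init__(self, dim=512, bottleneck=64, tasks=("xmr", "tgir", "scr", "fic")):
        super().__init__()
        self.adapters = nn.ModuleDict({
            t: nn.Sequential(nn.LayerNorm(dim),
                             nn.Linear(dim, bottleneck), nn.GELU(),
                             nn.Linear(bottleneck, dim))
            for t in tasks})

    def forward(self, x, task):
        # the residual path preserves the frozen backbone's behaviour when the
        # adapter output is small; only the selected task's weights are used
        return x + self.adapters[task](x)
```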
Jiang_Neural_Intrinsic_Embedding_for_Non-Rigid_Point_Cloud_Matching_CVPR_2023
Abstract
As a primitive 3D data representation, point clouds are prevailing in 3D sensing, yet short of intrinsic structural information of the underlying objects. Such discrepancy poses great challenges in directly establishing correspondences between point clouds sampled from deformable shapes. In light of this, we propose Neural Intrinsic Embedding (NIE) to embed each vertex into a high-dimensional space in a way that respects the intrinsic structure. Based upon NIE, we further present a weakly-supervised learning framework for non-rigid point cloud registration. Unlike the prior works, we do not require expensive and sensitive off-line basis construction (e.g., eigen-decomposition of Laplacians), nor do we require ground-truth correspondence labels for supervision. We empirically show that our framework performs on par with or even better than the state-of-the-art baselines, which generally require more supervision and/or more structural geometric input.
1. Introduction Estimating correspondences between non-rigidly aligned point clouds serves as a critical building block in many computer vision and graphics applications, including animation [21, 34], robotics [15, 44], and autonomous driving [10, 54], to name a few. In contrast to the well-known rigid case, more sophisticated deformation models are needed to characterize non-rigid motions, for instance, the articulated movements of human shapes. To address this challenge, extrinsic methods in principle approximate a complex global non-rigid deformation with a set of local rigid and/or affine transformations, e.g., point-wise affine transformations [24, 50, 53], deformation graphs [6, 7, 25], and patch-based deformations [23, 52]. While intuitive and straightforward, extrinsic deformation models are in general redundant and lack global structure. On the other hand, intrinsic methods [2, 8, 19, 29, 30, 33] first transform extrinsic coordinates into an alternative representation, in which shape alignment is performed. For instance, the seminal functional maps framework [32] utilizes the eigenbasis of the Laplace-Beltrami operator as spectral embeddings and turns non-rigid 3D shape matching into rigid alignment of high-dimensional spectral embeddings, under the isometric deformation assumption. However, spectral embeddings are generally obtained by an inefficient, non-differentiable off-line eigen-decomposition of the Laplacian operator defined on shapes, represented either as polygonal meshes [36] or as point clouds [42]. Moreover, spectral embeddings are sensitive to various practical artifacts such as noise, partiality, and disconnectedness, to name a few. To this end, we follow the isometric assumption and first propose a learning-based framework, Neural Intrinsic Embedding (NIE), to embed point clouds into a high-dimensional space. In particular, we expect our embedding to satisfy the following desiderata: (1) it is aware of the intrinsic geometry of the underlying surface; (2) it is computationally efficient; (3) it is robust to the typical artifacts manifested in point clouds. Our key insight is that geodesics on a deformable surface, which are inherently related to the Riemannian metric, contain rich information about the intrinsic geometry. Therefore, NIE is trained such that the Euclidean distance between embeddings approximates the geodesic distance between the corresponding points on the underlying surface. In particular, considering the local tracing manner of geodesic computation, we choose DGCNN [49] as our backbone, which efficiently gathers local features at different abstraction levels. We also carefully formulate a set of losses and design network modifications to overcome practical learning issues, including rank deficiency and sensitivity to point sampling density. As a consequence, NIE manages to learn an intrinsic-aware embedding from merely unstructured point clouds. Fig. 1 demonstrates that we obtain the segmentation result closest to the ground truth based on geodesic distances. Furthermore, based on NIE, we propose a Neural Intrinsic Mapping (NIM) network, a weakly supervised learning framework for non-rigid point cloud matching.
Though closely related to the Deep Functional Maps (DFM) frameworks, our method replaces the spectral embedding with the trained NIE and further learns to extract optimal features based on a self-supervised loss borrowed from [14]. In the end, we establish a pipeline for weakly supervised non-rigid point cloud matching, which only requires all point clouds to be rigidly aligned and, for the training point clouds, access to their geodesic distance matrices. Our overall pipeline is simple and geometrically informative. We conduct a set of experiments to demonstrate its effectiveness. In particular, we highlight that (1) our method performs on par with or even better than the competing baselines, which generally require more supervision and/or more structural geometric input, on near-isometric point cloud matching; (2) our method achieves sensible generalization performance, thanks to our tailored design that reduces the bias of point sampling density; (3) our method is robust to several artifacts, including noise and various forms of partiality.
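To illustrate the core training signal described above (Euclidean distances between embeddings regressed onto geodesic distances), the following is a minimal sketch. The pair-sampling scheme, the loss form, and the generic `embed_net` interface are assumptions; the paper uses a DGCNN backbone and additional losses and network modifications not shown here.

```python
import torch
import torch.nn.functional as F

def nie_metric_loss(embed_net, points, geodesic, num_pairs=2048):
    """Illustrative NIE-style objective (a sketch, not the authors' exact losses).

    embed_net: any module mapping (B, N, 3) points to (B, N, D) per-point embeddings.
    points:    (B, N, 3) rigidly aligned point clouds.
    geodesic:  (B, N, N) geodesic distance matrices, available for training shapes only.
    """
    B, N, _ = points.shape
    emb = embed_net(points)                                      # (B, N, D)
    # Sample random point pairs per shape.
    i = torch.randint(0, N, (B, num_pairs), device=points.device)
    j = torch.randint(0, N, (B, num_pairs), device=points.device)
    e_i = torch.gather(emb, 1, i.unsqueeze(-1).expand(-1, -1, emb.shape[-1]))
    e_j = torch.gather(emb, 1, j.unsqueeze(-1).expand(-1, -1, emb.shape[-1]))
    pred = (e_i - e_j).norm(dim=-1)                              # Euclidean distance in embedding space
    target = geodesic[torch.arange(B, device=points.device).unsqueeze(1), i, j]
    # Regress embedding-space distances onto surface geodesic distances.
    return F.l1_loss(pred, target)
```

At test time only `embed_net` is needed, so no eigen-decomposition is required, which matches the efficiency desideratum stated above.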
Bu_Rate_Gradient_Approximation_Attack_Threats_Deep_Spiking_Neural_Networks_CVPR_2023
Abstract Spiking Neural Networks (SNNs) have attracted signif-icant attention due to their energy-efficient properties and potential application on neuromorphic hardware. State-of-the-art SNNs are typically composed of simple Leaky Integrate-and-Fire (LIF) neurons and have become comparable to ANNs in image classification tasks on large-scale datasets. However, the robustness of these deep SNNs has not yet been fully uncovered. In this paper, we first experimentally observe that layers in these SNNs mostly communicate by rate coding. Based on this rate coding property, we develop a novel rate coding SNN-specified attack method, Rate Gradient Approximation Attack (RGA). We generalize the RGA attack to SNNs composed of LIF neurons with different leaky parameters and input encoding by designing surrogate gradients. In addition, we develop the time-extended enhancement to generate more effective adversarial examples. The experiment results indicate that our proposed RGA attack is more effective than the previous attack and is less sensitive to neuron hyperparameters. We also conclude from the experiment that rate-coded SNN composed of LIF neurons is not secure, which calls for exploring training methods for SNNs composed of complex neurons and other neuronal codings. Code is available at https://github.com/putshua/SNN attack RGA
1. Introduction As the third generation of artificial neural networks [47], Spiking Neural Networks (SNNs) have gained increasing attention due to their spatio-temporal dynamics, discrete representation, and event-driven properties. These bio-inspired neural networks borrow the characteristics of spiking representations and neuronal dynamics from biological brains [23, 75]. Unlike traditional Analog Neural Networks (ANNs), SNNs utilize spiking neurons as their essential components, which accumulate current over time, emit spikes when the membrane potential exceeds the threshold, and pass on information through spike trains. The natural sparsity of the spike trains leads to the low power consumption of SNNs [59, 69]. SNNs are competitive in real-world vision applications. The development of neuromorphic computing [10, 11, 20, 54, 56, 76] has further magnified the low-power advantage of SNNs, so that they can be deployed in power-limited scenarios [8, 64], such as edge computing or mobile applications. Meanwhile, the training algorithms of SNNs are also improving: the most practical are ANN-SNN conversion [7], supervised training [72], and hybrid training [57, 58]. When SNNs are applied to safety-critical systems, their reliability should be a major concern. The adversarial attack is one of the most significant categories of threats to model security [24, 68]. Similar to ANNs, SNNs can also be fooled by adversarial examples that are imperceptible to human eyes and crafted via gradient-based backpropagation [62], which may lead to catastrophic consequences when SNNs are deployed in safety-related scenarios. Nevertheless, SNNs are still considered to be more robust than ANNs. This robustness comes from inherent neural dynamics, such as forgetting historical information and discrete spikes [63]. Besides, the robustness of SNNs can be improved through special structural enhancements [9] or training techniques [37, 45, 71]. Effective attack examples for ANNs can be crafted from well-defined gradients on the activation functions [68]. For SNNs, a common way to construct gradient-based attacks is to backpropagate through a surrogate function over the discrete spikes. In this way, the gradient may suffer from explosion and vanishing in temporal and layer-by-layer communication [72]; at the same time, the membrane potentials of all historical time steps need to be saved during backpropagation, which requires a large amount of memory. Currently, high-performance SNNs typically combine leaky integrate-and-fire neurons and rate-encoded inputs. While the rate coding scheme brings excellent performance to SNNs, it also exposes a shortcoming: if the rate coding nature of SNNs is taken into account, can we construct a more powerful attack? After all, the activation functions of many ANNs are inspired by the firing rate of biological neurons [52]. In this paper, we develop a novel Rate Gradient Approximation Attack (RGA) based on the rate coding components of high-performance SNNs. The RGA attack is more effective than previously used attacks as it makes better use of the rate coding feature. We expect our work to provide benchmarks for SNN defense against adversarial attacks and to inspire future research on SNNs. (*Corresponding author.)
The main contributions of this paper are: • We observe that layers in SNNs mainly communicate via rate coding, whether the SNN is converted from an ANN or trained with surrogate gradients. • We develop the Rate Gradient Approximation Attack based on rate coding and apply it to SNNs composed of different types of neurons and input codings. We further propose a time-extended variant to generate more effective adversarial examples. • Experiments show that the RGA attack outperforms the STBP attack and is less sensitive to neuron hyperparameters. Based on the proposed attack, we compare the robustness of SNNs using different leaky parameters with that of ANNs and show that SNNs composed of LIF neurons cannot provide strong enough security. This conclusion motivates further research on networks with more complex neurons and other neuronal codings.
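The excerpt does not spell out the RGA formulation, so the sketch below only illustrates the underlying idea of attacking through rate coding: gradients are taken through the time-averaged (rate-coded) output of the SNN rather than through individual spike events, here in a single FGSM-style step. The `snn(image, T)` signature, the per-timestep logits shape, and the epsilon budget are assumptions, and this is not the paper's exact attack.

```python
import torch
import torch.nn.functional as F

def rate_based_fgsm(snn, image, label, eps=8 / 255, T=8):
    """Illustrative rate-coding attack sketch (FGSM-style), not the exact RGA formulation.

    snn(image, T) is assumed to return per-timestep logits of shape (T, B, num_classes).
    """
    image = image.clone().detach().requires_grad_(True)
    logits_t = snn(image, T)                   # (T, B, C): one readout per simulation step
    rate_logits = logits_t.mean(dim=0)         # rate-coded prediction averaged over time
    loss = F.cross_entropy(rate_logits, label)
    grad, = torch.autograd.grad(loss, image)   # gradient flows through the rate, not single spikes
    adv = (image + eps * grad.sign()).clamp(0, 1)
    return adv.detach()
```

A stronger, iterative (PGD-style) variant would repeat this step with a smaller step size, which is in the spirit of the time-extended enhancement mentioned above.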
He_Few-Shot_Geometry-Aware_Keypoint_Localization_CVPR_2023
Abstract Supervised keypoint localization methods rely on large manually labeled image datasets, where objects can deform, articulate, or occlude. However, creating such large keypoint labels is time-consuming and costly, and is often error-prone due to inconsistent labeling. Thus, we desire an approach that can learn keypoint localization with fewer yet consis-tently annotated images. To this end, we present a novel formulation that learns to localize semantically consistent keypoint definitions, even for occluded regions, for varying object categories. We use a few user-labeled 2D images as input examples, which are extended via self-supervision us-ing a larger unlabeled dataset. Unlike unsupervised methods, the few-shot images act as semantic shape constraints for object localization. Furthermore, we introduce 3D geometry-aware constraints to uplift keypoints, achieving more ac-curate 2D localization. Our general-purpose formulation paves the way for semantically conditioned generative mod-eling and attains competitive or state-of-the-art accuracy on several datasets, including human faces, eyes, animals, cars, and never-before-seen mouth interior (teeth) localiza-tion tasks, not attempted by the previous few-shot methods. Project page: https://xingzhehe.github.io/FewShot3DKP/
1. Introduction Keypoint localization is a long-standing problem in computer vision with applications in classification [7, 8], image generation [45, 66], character animation [64, 65], 3D modeling [15, 55], and anti-spoofing [9], among others. Traditional supervised keypoint localization approaches require a large dataset of annotated images with balanced data distributions to train robust models that generalize to unseen observations [18, 84, 87]. However, annotating keypoints in images and videos is expensive, and usually requires several annotators with domain expertise [44, 80, 83]. Manual annotations can be inaccurate due to low-resolution imagery [5] and temporal variations in illumination and appearance [29, 79], or even subjective, especially in the presence of external occlusions [38, 89] and image blur effects [68, 96]. Besides, modeling self-occluded object parts is proven to be an ambiguous task, since 3D-consistent keypoint annotations are needed [97]. As a consequence, supervised approaches are prone to learning suboptimal models from noisy training data. [Figure 1 (columns: Image, Keypoints, Uncertainty, Structure). All results are obtained by 10-shot learning except Tigers, where 20 examples are used. The left/right/middle keypoints are marked in red/blue/green. Using only a few shots, the model learns semantically consistent and human-interpretable keypoints. Uncertainty modeling helps us identify occlusions and ambiguous boundaries, as shown in the mouth, eye, and car examples.] (*Work was done while interning at Flawless AI.) Unsupervised keypoint detection methods can predict consistent keypoint structures [19–21, 27, 43, 98], but they lack human interpretability or may be insufficient, e.g., for editing tasks requiring detailed manipulation of object parts. Jakab et al. [28] pioneered adding interpretability with a cycle loss between unsupervised images and unpaired pose examples. However, their focus is different, and the proposed CycleGAN struggles in a few-shot setting as it does not exploit paired examples. Unsupervised methods can be extended to few-shot setups either by learning a mapping that regresses detected keypoints to human-labeled annotations [73], which requires hundreds or thousands of examples, or by attaching few-shot annotated examples to the unsupervised training batch as weak supervision [51]. However, we show that neither approach produces competitive predictions when added to state-of-the-art unsupervised keypoint localization methods. Our contributions focus on making the latter approach work, building upon the unsupervised reconstruction in [20] and the skeleton formulation in [28]. Recent advances in semi-supervised keypoint localization have shown significant progress in the field. Still, most existing methods are specialized for a single object category such as faces [3, 57, 85] and X-rays [6, 94, 95, 100], or require hundreds or thousands of annotated examples to achieve competitive performance [51, 56, 81]. Our approach, however, only needs a few dozen examples. In an orthogonal direction, Honari et al. [24] assist keypoint localization via equivariance transforms and classification labels, though the latter are not always available. Generative image labeling has also shown great promise [69, 90, 99].
However, an-notating StyleGAN-generated images is prone to artifacts and noise. Besides, generative approaches are limited to the underlying data distribution biases [ 53,71], thus decreasing overall keypoint localization performance. Current limitations create the need for an approach that can leverage a smaller yet semantically consistent corpus of human labeled annotations while generalizing to a much larger unlabelled image set. This paper presents a novel formulation that learns to localize semantically consistent keypoint definitions, even for occluded regions, for various object categories with complex geometry using only a few user-labeled images. We use as input a few example-based user-labeled 2D images with predefined keypoint definitions and their linkages to learn to localize keypoints. Unlike un-supervised methods, the user-selected few-shot images act as semantic shape constraints for human-interpretable key-point localization. To enable generalization to the target data distribution, we extend our approach via self-supervision using a larger unlabeled dataset. In addition, we introduce 3D geometry-aware constraints to model depth and uplift 2D keypoints in 3D with viewpoint consistency, thus achieving more accurate 2D localization. Experimental results demonstrate that our proposed ap-proach competes with or outperforms state-of-the-art meth-ods in few-shot keypoint localization for human faces, eyes, animals, and cars using a only few user-defined semantic examples. We also show the capabilities of our keypointlocalization approach on a novel data distribution, specifi-cally the mouth interior, which has not been attempted with previous few-shot localization approaches. Thus, our novel general-purposed formulation paves the way for semantically conditional generative modeling with a few user-labeled ex-amples. We hope it will enable a broader set of downstream applications, including fast dataset labeling, and in-the-wild modeling and tracking of complex objects, among others. Our key contributions are summarized as follows: 1. A novel formulation for few-shot 3D geometry-aware keypoint localization that works on diverse data distributions.
Anciukevicius_RenderDiffusion_Image_Diffusion_for_3D_Reconstruction_Inpainting_and_Generation_CVPR_2023
Abstract Diffusion models currently achieve state-of-the-art performance for both conditional and unconditional image generation. However, so far, image diffusion models do not support tasks required for 3D understanding, such as view-consistent 3D generation or single-view object reconstruction. In this paper, we present RenderDiffusion, the first diffusion model for 3D generation and inference, trained using only monocular 2D supervision. Central to our method is a novel image denoising architecture that generates and renders an intermediate three-dimensional representation of a scene in each denoising step. This enforces a strong inductive structure within the diffusion process, providing a 3D-consistent representation while only requiring 2D supervision. The resulting 3D representation can be rendered from any view. We evaluate RenderDiffusion on the FFHQ, AFHQ, ShapeNet, and CLEVR datasets, showing competitive performance for generation of 3D scenes and inference of 3D scenes from 2D images. Additionally, our diffusion-based approach allows us to use 2D inpainting to edit 3D scenes. Project page: https://github.com/Anciukevicius/RenderDiffusion 1. Introduction Image diffusion models now achieve state-of-the-art performance on both generation and inference tasks. Compared to alternative approaches (e.g., GANs and VAEs), they are able to model complex datasets more faithfully, particularly for long-tailed distributions, by explicitly maximizing the likelihood of the training data. Many exciting applications have emerged in only the last few months, including text-to-image generation [49, 55], inpainting [54], object insertion [3], and personalization [53]. However, in 3D generation and understanding, their success has so far been limited, both in terms of quality and diversity of the results. Some methods have successfully applied diffusion models directly to point cloud or voxel data [35, 67], or optimized a NeRF using a pre-trained diffusion model [48]. This limited success in 3D is due to two problems: first, an explicit 3D representation (e.g., voxels) leads to significant memory demands and affects convergence speed; and more importantly, a setup that requires access to explicit 3D supervision is problematic, as 3D model repositories contain orders of magnitude less data than their image counterparts; this is a particular problem for large diffusion models, which tend to be more data-hungry than GANs or VAEs. In this work, we present RenderDiffusion, the first diffusion method for 3D content that is trained using only 2D images. Like previous diffusion models, we train our model to denoise 2D images. Our key insight is to incorporate a latent 3D representation into the denoiser. This creates an inductive bias that allows us to recover 3D objects while training only to denoise in 2D, without explicit 3D supervision. This latent 3D structure consists of a triplane representation [8] that is created from the noisy image by an encoder, and a volumetric renderer [37] that renders the 3D representation back into a (denoised) 2D image. With the triplane representation, we avoid the cubic memory growth of volumetric data, and by working directly on 2D images, we avoid the need for 3D supervision.
Compared to latent diffusion models that work on a pre-trained latent space [5,50], working directly on 2D images also allows us to obtain sharper generation and inference results. Note that RenderDiffusion does assume that we have the intrinsic and extrinsic camera parameters available at training time. We evaluate RenderDiffusion on in-the-wild (FFHQ, AFHQ) and synthetic (CLEVR, ShapeNet) datasets and show that it generates plausible and diverse 3D-consistent scenes (see Figure 1). Furthermore, we demonstrate that it successfully performs challenging inference tasks such as monocular 3D reconstruction and inpainting 3D scenes from masked 2D images, without specific training for those tasks. We show improved reconstruction accuracy over a state-of-the-art method [8] in monocular 3D reconstruction that was also trained with only monocular supervision. In short, our key contribution is a denoising architecture with an explicit latent 3D representation, which enables us to build the first 3D-aware diffusion model that can be trained purely from 2D images . 2. Related Work Generative models. To achieve high-quality image synthe-sis, diverse generative models have been proposed, includ-ing GANs [1, 18, 27], V AEs [30, 61], independent compo-nent estimation [16], and autoregressive models [60]. Re-cently, diffusion models [22, 58] have achieved state-of-the-art results on image generation and many other im-age synthesis tasks [15, 23, 29, 34, 56]. Uniquely, diffu-sion models can avoid mode collapse (a common challenge for GANs), achieve better density estimation than other likelihood-based methods, and lead to high sample quality when generating images. We aim to extend such power-ful image diffusion models from 2D image synthesis to 3D content generation and inference. While many generative models (like GANs) have been extended for 3D generation tasks [21, 40, 41], applying diffusion models on 3D scenes is still relatively an un-explored area. A few recent works build 3D diffusion models using point-, voxel-, or SDF-based representations[13, 24, 32, 35, 38, 67], relying on 3D (geometry) supervi-sion. Except for the concurrent works [13, 38], these meth-ods focus on shape generation only and do not model sur-face color or texture – which are important for rendering the resulting shapes. Instead, we combine diffusion models with advanced neural field representations, leading to com-plete 3D content generation. Unlike implicit models [63], our model generates both shape and appearance, allowing for realistic image synthesis under arbitrary viewpoints. Neural field representations. There has been exponential progress in the computer vision community on representing 3D scenes as neural fields [4,11,37,39,59,64], allowing for high-fidelity rendering in various reconstruction and image synthesis tasks [2, 33, 45, 47, 66]. We utilize the recent tri-plane based representation [8, 11] in our diffusion model, allowing for compact and efficient 3D modeling with vol-ume rendering. Recent NeRF-based methods have developed generaliz-able networks [12, 65] trained across scenes for efficient few-shot 3D reconstruction. While our model is trained only on single-view images, our method can achieve high inference quality comparable to such a method like Pix-elNeRF [65] that requires multi-view data during training. Several concurrent approaches use 2D diffusion models as priors for this task [14,20,68]; unlike ours they cannot syn-thesise new images or scenes a priori . 
Neural field representations have also been extended to building 3D generative models [9, 31], where most methods are based on GANs [8, 17, 19, 44]. Two concurrent works, DreamFusion [48] (extending DreamFields [25]) and Latent-NeRF [36], leverage pre-trained 2D diffusion models as priors to drive NeRF optimization, achieving promising text-driven 3D generation. We instead seek to build a 3D diffusion model and achieve direct object generation via sampling. Another concurrent work, GAUDI [5], introduces a 3D generative model that first learns a triplane-based latent space using multi-view data, and then builds a diffusion model over this latent space. In contrast, our approach only requires single-view 2D images and enables end-to-end 3D generation from image diffusion without pre-training any 3D latent space. DiffDreamer [7] casts 3D scene generation as repeated inpainting of RGB-D images rendered from a moving camera; this inpainting uses a 2D diffusion model. However, this method cannot generate scenes a priori. 3. Method Our method builds on the successful training and generation setup of 2D image diffusion models, which are trained to denoise input images that have various amounts of added noise [22]. At test time, novel images are generated by applying the model in multiple steps to progressively recover an image starting from pure noise samples. We keep this training and generation setup, but modify the architecture of the main denoiser to encode the noisy input image into a 3D representation of the scene that is volumetrically rendered to obtain the denoised output image. This introduces an inductive bias that favors 3D scene consistency, and allows us to render the 3D representation from novel viewpoints. Figure 2 shows an overview of our architecture. [Figure 2. Architecture overview: images are generated by iteratively applying the denoiser $g_\theta$ to noisy input images, progressively removing the noise. Unlike traditional 2D diffusion models, our denoiser contains 3D structure in the form of a triplane representation $P$ that is inferred from a noisy input image by the encoder $e_\phi$. A small MLP $s_\psi$ converts triplane features at arbitrary sample points into colors and densities that can then be rendered back into a denoised output image using a volumetric renderer.] In the following, we first briefly review 2D image diffusion models (Section 3.1), then describe the novel architectural changes we introduce to obtain a 3D-aware denoiser (Section 3.2). 3.1. Image Diffusion Models Diffusion models generate an image $x_0$ by moving a starting image $x_T \sim \mathcal{N}(0, I)$ progressively closer to the data distribution through multiple denoising steps $x_{T-1}, \ldots, x_0$.
Forward process. To train the model, noisy images $x_1, \ldots, x_T$ are created by repeatedly adding Gaussian noise starting from a training image $x_0$: $q(x_t \mid x_{t-1}) \sim \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t I\right)$, (1) where $\beta_t$ is a variance schedule that increases from $\beta_0 = 0$ to $\beta_T = 1$ and controls how much noise is added in each step. We use a cosine schedule [42] in our experiments. To avoid unnecessary iterations, we can directly obtain $x_t$ from $x_0$ in a single step using the closed form: $q(x_t \mid x_0) = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$, $\bar{\alpha}_t := \prod_{s=1}^{t} \alpha_s$, and $\alpha_t := (1-\beta_t)$. (2) Reverse process. The reverse process aims at reversing the steps of the forward process by finding the posterior distribution for the less noisy image $x_{t-1}$ given the more noisy image $x_t$: $q(x_{t-1} \mid x_t, x_0) \sim \mathcal{N}(x_{t-1};\ \mu_t,\ \sigma_t^2 I)$, (3) where $\mu_t := \frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1-\bar{\alpha}_t}\, x_0 + \frac{\sqrt{\alpha_t}\,(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_t}\, x_t$ and $\sigma_t^2 := \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\,\beta_t$. Note that $x_0$ is unknown (it is the image we want to generate), so we cannot directly compute this distribution; instead we train a denoiser $g_\theta$ with parameters $\theta$ to approximate it. Typically only the mean $\mu_t$ needs to be approximated by the denoiser, as the variance does not depend on the unknown image $x_0$. Please see Ho et al. [22] for a derivation. We could directly train a denoiser to predict the mean $\mu_t$; however, Ho et al. [22] show that a denoiser $g_\theta$ can be trained more stably and efficiently by directly predicting the total noise $\epsilon$ that was added to the original image $x_0$ in Eq. 2. We follow Ho et al., but since our denoiser $g_\theta$ will also be tasked with reconstructing a 3D version of the scene shown in $x_0$ as an intermediate representation, we train $g_\theta$ to predict $x_0$ instead of the noise $\epsilon$: $\mathcal{L} := \| g_\theta(x_t, t) - x_0 \|_1$, (4) where $\mathcal{L}$ denotes the training loss. Once trained, at generation time, the model $g_\theta$ can then approximate the mean $\mu_t$ of the posterior $q(x_{t-1} \mid x_t, x_0) = \mathcal{N}(x_{t-1};\ \mu_t,\ \sigma_t^2 I)$ as: $\mu_t \approx \frac{1}{\sqrt{\alpha_t}}\left( x_t - \frac{1-\alpha_t}{1-\bar{\alpha}_t}\left( x_t - \sqrt{\bar{\alpha}_t}\, g_\theta(x_t, t) \right) \right)$. (5) This approximate posterior is sampled in each generation step to progressively get the less noisy image $x_{t-1}$ from the more noisy image $x_t$.
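The following is a minimal PyTorch sketch of Eqs. (2), (4), and (5) above: the closed-form forward noising, the $x_0$-prediction loss, and one reverse step using the approximated posterior mean. The cosine-schedule helper, 0-based time indexing, and the `denoiser(xt, t)` interface are assumptions for illustration, not the authors' released code.

```python
import torch

def make_schedule(T, s=0.008):
    # Cosine schedule for alpha_bar (an assumption matching the "[42]" reference above).
    t = torch.linspace(0, T, T + 1) / T
    f = torch.cos((t + s) / (1 + s) * torch.pi / 2) ** 2
    alpha_bar = f / f[0]
    betas = (1 - alpha_bar[1:] / alpha_bar[:-1]).clamp(1e-5, 0.999)
    alphas = 1 - betas
    return alphas, torch.cumprod(alphas, dim=0)

def diffuse(x0, t, alpha_bar):
    # Eq. (2): sample x_t from x_0 in a single closed-form step.
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    return ab.sqrt() * x0 + (1 - ab).sqrt() * eps

def x0_prediction_loss(denoiser, x0, alpha_bar, T):
    # Eq. (4): the denoiser regresses the clean image x_0 under an L1 loss.
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    xt = diffuse(x0, t, alpha_bar)
    return (denoiser(xt, t) - x0).abs().mean()

@torch.no_grad()
def reverse_step(denoiser, xt, t, alphas, alpha_bar):
    # Eqs. (3) and (5): posterior mean computed from the predicted x_0, then sampled.
    a, ab = alphas[t], alpha_bar[t]
    ab_prev = alpha_bar[t - 1] if t > 0 else torch.ones_like(ab)
    x0_hat = denoiser(xt, torch.full((xt.shape[0],), t, device=xt.device))
    mu = (xt - (1 - a) / (1 - ab) * (xt - ab.sqrt() * x0_hat)) / a.sqrt()
    var = (1 - ab_prev) / (1 - ab) * (1 - a)
    return mu if t == 0 else mu + var.sqrt() * torch.randn_like(xt)
```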
3.2. 3D-Aware Denoiser The denoiser $g_\theta$ takes a noisy image $x_t$ as input and outputs a denoised image $\tilde{x}_0$. In existing methods [22, 51], the denoiser $g_\theta$ is typically implemented by a type of UNet [52]. This works well in the 2D setting but does not encourage the denoiser to reason about the 3D structure of a scene. We introduce a latent 3D representation into the denoiser based on triplanes [8, 46]. For this purpose, we modify the architecture of the denoiser to incorporate two additional components: a triplane encoder $e_\phi$ that transforms the input image $x_t$, posed using camera view $v$, into a 3D triplane representation, and a triplane renderer $r_\psi$ that renders the 3D triplane representation back into a denoised image $\tilde{x}_0$, such that $g_\theta(x_t, t, v) := r_\psi(e_\phi(x_t, t), v)$, (6) where $\theta$ denotes the concatenated parameters $\phi$ and $\psi$ of the encoder and renderer. The output image is a denoised version of the input image, thus it has to be rendered from the same viewpoint. We assume the viewpoint $v$ of the input image to be available. Note that the noise is applied directly to the source/rendered images. Triplane representation. A triplane representation $P$ factorizes a full 3D feature grid into three 2D feature maps placed along the three (canonical) coordinate planes, giving a significantly more compact representation [8, 46]. Each feature map has a resolution of $N \times N \times n_f$, where $n_f$ is the number of feature channels. The feature for any given 3D point $p$ is then obtained by projecting the point to each coordinate plane, interpolating each feature map bilinearly, and summing up the three results to get a single feature vector of size $n_f$. We denote this process of bilinear sampling from $P$ as $P[p]$. Triplane encoder. The triplane encoder $e_\phi$ transforms an input image $x_t$ of size $M \times M \times 3$ into a triplane representation of size $N \times N \times 3n_f$, where $N \geq M$. We use the U-Net architecture commonly employed in diffusion models [22] as a basis, but append additional layers (without skip connections) to output feature maps of the size of the triplanes. More architectural details are given in the supplementary material. Triplane renderer. The triplane renderer $r_\psi$ performs volume rendering using the triplane features and outputs an image $x_{t-1}$ of size $M \times M \times 3$. At each 3D sample point $p$ along rays cast from the image, we obtain density $\gamma$ and color $c$ with an MLP as $(\gamma, c) = s_\psi(p, P[p])$. The final color for a pixel is produced by integrating colors and densities along a ray using the same explicit volume rendering approach as MipNeRF [4]. We use the two-pass importance sampling approach of NeRF [37]: the first pass uses stratified sampling to place samples along each ray, and the second pass importance-samples the results of the first pass. 3.3. Score-Distillation Regularization To avoid solutions with trivial geometry on FFHQ and AFHQ, we found it helpful to regularize the model with a score distillation loss [48]. This encourages the model to output a scene that looks plausible from a random viewpoint $v_r$ sampled from the training set, not just the viewpoint $v$ of the input image $x_t$. Specifically, at each training step, we render the denoised 3D scene $e_\phi(x_t, t)$ from $v_r$, giving the image $\tilde{x}_r$, and compute a score distillation loss for $\tilde{x}_r$ as $\| \tilde{x}_r - g_\theta(\sqrt{\bar{\alpha}_t}\, \tilde{x}_r + \sqrt{1-\bar{\alpha}_t}\, \epsilon,\ t,\ v_r) \|_1$ with $\epsilon \sim \mathcal{N}(0, I)$. 3.4. 3D Reconstruction Unlike existing 2D diffusion models, we can use RenderDiffusion to reconstruct 3D scenes from 2D images. To reconstruct the scene shown in an input image $x_0$, we pass it through the forward process for $t_r \leq T$ steps, and then denoise it in the reverse process using our learned denoiser $g_\theta$. In the final denoising step, the triplanes encode a 3D scene that can be rendered from novel viewpoints. The choice of $t_r$ introduces an interesting control that is not available in existing 3D reconstruction methods. It allows us to trade off reconstruction fidelity against generalization to out-of-distribution input images: at $t_r = 0$, no noise is added to the input image and the 3D reconstruction reproduces the scene shown in the input as accurately as possible; however, out-of-distribution images cannot be handled. With larger values of $t_r$, input images that are increasingly out-of-distribution can be handled, as the denoiser can move the input images towards the learned distribution. This comes at the cost of reduced reconstruction fidelity, as the added noise removes some detail from the input image, which the denoiser fills in with generated content.
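To make the triplane lookup $P[p]$ and the small MLP $s_\psi$ from Section 3.2 concrete, the following is an illustrative sketch. The tensor layout, normalization to [-1, 1], and the MLP width are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sample_triplane(planes, points):
    """Illustrative triplane lookup P[p] (a sketch, not the released code).

    planes: (B, 3, C, N, N) feature maps on the xy, xz, and yz coordinate planes.
    points: (B, M, 3) query coordinates, assumed normalized to [-1, 1].
    Returns (B, M, C): bilinear samples from each plane, summed.
    """
    B, _, C, _, _ = planes.shape
    # Project each 3D point onto the three canonical coordinate planes.
    coords = [points[..., [0, 1]], points[..., [0, 2]], points[..., [1, 2]]]
    feats = 0
    for i, uv in enumerate(coords):
        grid = uv.view(B, -1, 1, 2)                       # (B, M, 1, 2) for grid_sample
        f = F.grid_sample(planes[:, i], grid, mode="bilinear", align_corners=True)
        feats = feats + f.view(B, C, -1).transpose(1, 2)  # (B, M, C)
    return feats

class DensityColorMLP(torch.nn.Module):
    """Tiny stand-in for s_psi, mapping (point, feature) to (density, color)."""
    def __init__(self, c_feat, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(c_feat + 3, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 4))  # 1 density + 3 color channels

    def forward(self, p, feat):
        out = self.net(torch.cat([p, feat], dim=-1))
        return out[..., :1], torch.sigmoid(out[..., 1:])
```

Integrating the resulting densities and colors along camera rays (as in NeRF/MipNeRF) then yields the rendered, denoised image.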
4. Experiments We evaluate RenderDiffusion on three tasks: monocular 3D reconstruction, unconditional generation, and 3D-aware inpainting. Datasets. For training and evaluation we use a real-world human face dataset (FFHQ), a cat face dataset (AFHQv2), as well as generated datasets of scenes from CLEVR [26] and ShapeNet [10]. We adopt FFHQ and AFHQv2 from EG3D [8], which uses an off-the-shelf estimator to extract approximate extrinsics and augments the dataset with horizontal image flips. We generate a variant of the CLEVR dataset which we call the CLEVR1 dataset, where each scene contains a single object standing on the plane at the origin. We randomize the objects in different scenes to have different colors, shapes, and sizes and generate 900 scenes, where 400 are used for training and the rest for testing. To evaluate our method on more complex shapes, we use objects from three categories of the ShapeNet dataset. ShapeNet contains man-made objects of various categories that are represented as textured meshes. We use shapes from the car, plane, and chair categories, each placed on a ground plane. We use a total of 3200 objects from each category: 2700 for training and 500 for testing. To render the scenes, we sample 100 viewpoints uniformly on a hemisphere centered at the origin. Viewing angles that are too shallow (below 12°) are re-sampled. 70 of these viewpoints are used for training, and 30 are reserved for testing. We use Blender [6] to render each of the objects from each of the 100 viewpoints.

Table 1. 3D reconstruction performance. We compare to EG3D [8] and PixelNeRF [65] on ShapeNet and our variant of the CLEVR dataset (CLEVR1). Since PixelNeRF has a significant advantage due to training with multi-view supervision, we keep it in a separate category and denote with bold numbers the best among the two methods with single-view supervision, i.e., EG3D and RenderDiffusion.

                          ShapeNet car     ShapeNet plane   ShapeNet chair   ShapeNet average   CLEVR1
                          PSNR   SSIM      PSNR   SSIM      PSNR   SSIM      PSNR   SSIM        PSNR   SSIM
  PixelNeRF [65]          27.3   0.838     29.4   0.887     30.2   0.884     28.9   0.870       43.4   0.988
  EG3D [8]                21.8   0.714     25.0   0.803     25.5   0.803     24.1   0.773       33.2   0.910
  RenderDiffusion (ours)  25.4   0.805     26.3   0.834     26.6   0.830     26.1   0.823       39.8   0.976

4.1. Monocular 3D Reconstruction We evaluate 3D reconstruction on test scenes from each of our three ShapeNet categories. Since these images are drawn from the same distribution as the training data, we do not add noise, i.e., we set $t_r = 0$. Reconstruction is performed on the non-noisy image with one iteration of our denoiser $g_\theta$. Note that our method does not require the camera viewpoint of the image as input. [Figure 3 (panels: input, depth, reconstruction, novel view; generation results; generation results with shaded depth; inpainting results). RenderDiffusion results on FFHQ and AFHQ. We show reconstruction (top four rows), unconditional generation (bottom left), and 3D-aware inpainting (bottom right).] Baselines. We compare to two state-of-the-art methods that are also trained without 3D supervision. EG3D [8] is a generative model that uses triplanes as its 3D representation and, like our method, trains with only single images as supervision. As it does not have an encoder, we perform GAN inversion to obtain a 3D reconstruction
Bravo_Open-Vocabulary_Attribute_Detection_CVPR_2023
Abstract Vision-language modeling has enabled open-vocabulary tasks where predictions can be queried using any text prompt in a zero-shot manner. Existing open-vocabulary tasks focus on object classes, whereas research on object attributes is limited due to the lack of a reliable attribute-focused evaluation benchmark. This paper introduces the Open-Vocabulary Attribute Detection (OVAD) task and the corresponding OVAD benchmark. The objective of the novel task and benchmark is to probe object-level attribute information learned by vision-language models. To this end, we created a clean and densely annotated test set cov-ering 117 attribute classes on the 80 object classes of MS COCO. It includes positive and negative annotations, which enables open-vocabulary evaluation. Overall, the bench-mark consists of 1.4 million annotations. For reference, we provide a first baseline method for open-vocabulary at-tribute detection. Moreover, we demonstrate the bench-mark’s value by studying the attribute detection perfor-mance of several foundation models.
1. Introduction One of the main goals of computer vision is to develop models capable of localizing and recognizing an open set of visual concepts in an image. This has been the main direction for the recently proposed Open-Vocabulary Detection (OVD) task [50] for object detection, where the goal is to detect a flexible set of object classes that are only defined at test time via a text query. Classical supervised object detection methods are bound to predict objects from a fixed set of pre-defined classes, and extending them to a very large number of classes is limited by the annotation effort. [Figure 1. Example from the presented open-vocabulary attribute detection benchmark. The objective is to detect all objects and the visual attributes of each object in the image. Objects and attributes are only specified at test time via text prompts.] (Acknowledgements: This work was supported by the German Academic Exchange Service (DAAD) - 57440921 Research Grants - Doctoral Programmes in Germany, 2019/20. This work was partially funded by the German Research Foundation (DFG) - 401269959 and 417962828. We thank Philipp Schröppel, Silvio Galesso, and Jan Bechtold for their critical feedback on the paper. We thank all OVAD annotators, especially Mariana Sarmiento and Jorge Bravo, for their help, time, and effort.) OVD methods overcome this constraint by utilizing vision-language modeling to learn about novel objects using the weak supervision of image-text pairs. OVD methods for object detection have made fast progress and have even surpassed supervised baselines for rare (tail) classes [16]. The best OVD methods [16, 35, 53, 54] train with extra weak supervision using image classification datasets, which are focused on retrieving object information. However, it is unclear how well OVD methods generalize information beyond the object class. This paper focuses on object-level attribute information, such as the object's state, size, and color. Attributes play a significant role in an object's identity. A small change of an attribute in a description can modify our understanding of an object's appearance and perception. Imagine driving in a forest where you encounter a bear like the one in Figure 1. Even if you do not distinguish or know the type of bear, recognizing that it is made of wood is enough to realize that it is fake and harmless. A model capable of detecting object attributes enables richer reasoning by combining objects and attributes, and potentially allows the model to extrapolate to novel object classes. In this paper, we introduce the Open-Vocabulary Attribute Detection (OVAD) task. Its objective is to detect and recognize an open set of objects in an image together with an open set of attributes for every object. Both sets are defined by text queries during inference, without knowledge of the tested classes during training. The OVAD task is a two-stage task. The first stage, referred to as open-vocabulary object detection [50], seeks to detect all objects in the image, including novel objects for which no bounding box or class annotation is available during training. The second stage seeks to determine all attributes present for each detected object. None of the attributes is annotated; therefore, all attributes are novel.
Testing the OV AD task requires an evaluation bench-mark with unambiguous and dense attribute annotations to identify misses as well as false positive predictions. Current datasets [32, 33] for predicting attributes in-the-wild come with many missing or erroneous annotations, as discussed in more detail in Section 3.2. Thus, in this paper, we introduce the OV AD benchmark, an evaluation benchmark for open-vocabulary attribute detection. It is based on images of the MS COCO [29] dataset and only contains visually identifi-able attributes. On average, the proposed benchmark has 98 attribute annotations per object instance, with 7.2 objects per image, for a total of 1.4 million attribute annotations, making it the most densely annotated object-level attribute dataset. It has a large coverage with 80 object categories and 117 attribute categories. It also provides negative at-tribute annotations, which enables quantifying false posi-tive predictions. The benchmark is devoid of various label-ing errors since it is manually annotated and quality-tested for annotation consistency. Our OV AD benchmark also ex-tends the OVD benchmark [50] by including all 80 COCO object classes. This extension increases the novel set of ob-jects from 17 to 32 classes. Together with the benchmark, we provide a first baseline method that learns the OV AD task to a reasonable degree. It learns the task from image-caption pairs by using all components of the caption, not only nouns. We also compare the performance of several off-the-shelf OVD models to get an insight of how much attribute information is implicitly comprised in nouns (e.g., puppy implies a young dog). Moreover, we demonstrate the value of the benchmark by evaluating object-level attribute information learned by several open-source vision-language models, some-times also referred to as foundation models, including CLIP [34], Open CLIP [20], BLIP [26], ALBEF [27], and X-VLM [51]. Such models learn from the weak supervi-sion of image-text pairs, which is assumed to be available particularly via web content. The results show the extent to which the present success of foundation models on object classes generalizes to attributes. Contributions (1) We introduce the Open-VocabularyAttribute Detection (OV AD) task, where the objective is to detect all objects and predict their associated attributes. These objects and attributes belong to an open set of classes and can be queried using textual input. (2) We propose the OV AD benchmark: a clean and densely annotated evalua-tion dataset for open-vocabulary attribute detection, which can be used to evaluate open-vocabulary methods as well as foundation models. (3) We provide an attribute-focused baseline method for the OV AD task, which outperforms the existing open-vocabulary models that only aim for the ob-ject classes. (4) We test the performance of several open-source foundation models on visual attribute detection.
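Since attributes are queried purely via text in this benchmark, a simple way to picture the second stage is the scoring sketch below: per-box embeddings from a vision-language detector are compared against embeddings of attribute prompts. The `text_encoder` interface, the prompt style, the temperature, and the sigmoid readout are assumptions for illustration, not the paper's baseline implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def score_attributes(box_features, text_encoder, attribute_prompts, temperature=0.01):
    """Illustrative open-vocabulary attribute scoring sketch.

    box_features:      (N, D) embeddings of detected objects.
    attribute_prompts: list of strings, e.g. "a photo of a wooden object".
    Returns an (N, A) matrix of attribute scores queried purely from text.
    """
    text_emb = text_encoder(attribute_prompts)             # (A, D), assumed CLIP-style encoder
    box = F.normalize(box_features, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    # Sigmoid rather than softmax: several attributes can hold for the same object.
    return torch.sigmoid(box @ txt.t() / temperature)
```

Evaluating such scores against the benchmark's positive and negative attribute annotations is what allows false positives to be quantified, as described above.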
Fang_TBP-Former_Learning_Temporal_Birds-Eye-View_Pyramid_for_Joint_Perception_and_Prediction_CVPR_2023
Abstract Vision-centric joint perception and prediction (PnP) has become an emerging trend in autonomous driving research. It predicts the future states of the traffic participants in the surrounding environment from raw RGB images. How-ever, it is still a critical challenge to synchronize features obtained at multiple camera views and timestamps due to inevitable geometric distortions and further exploit those spatial-temporal features. To address this issue, we pro-pose a temporal bird’s-eye-view pyramid transformer (TBP-Former) for vision-centric PnP , which includes two novel designs. First, a pose-synchronized BEV encoder is pro-posed to map raw image inputs with any camera pose at any time to a shared and synchronized BEV space for bet-ter spatial-temporal synchronization. Second, a spatial-temporal pyramid transformer is introduced to compre-hensively extract multi-scale BEV features and predict fu-ture BEV states with the support of spatial priors. Ex-tensive experiments on nuScenes dataset show that our proposed framework overall outperforms all state-of-the-art vision-based prediction methods. Code is available at: https://github.com/MediaBrain-SJTU/TBP-Former
1. Introduction As one of the most fascinating engineering projects, autonomous driving has been an aspiration for many researchers and engineers for decades. Although significant progress has been made, it remains an open question how to design a practical solution that achieves the goal of full self-driving. A traditional and common solution consists of a sequential stack of perception, prediction, planning, and control. Despite the idea of divide-and-conquer having achieved tremendous success in developing software systems, a long stack could cause cascading failures in an autonomous system. Recently, there has been a trend to combine multiple parts of an autonomous system into a joint module, cutting down the stack. For example, [25, 46] consider joint perception and prediction, and [5, 43] explore joint prediction and planning. This work focuses on joint perception and prediction. [Figure 1. Two major challenges in vision-based perception and prediction are (a) how to avoid distortion and deficiency when aggregating features across time and camera views; and (b) how to achieve spatial-temporal feature learning for prediction. Our Pose-Synchronized BEV Encoder can precisely map the visual features into a synchronized BEV space, and the Spatial-Temporal Pyramid Transformer extracts features at multiple scales.] (*These authors contributed equally to this work. †Corresponding author.) The task of joint perception and prediction (PnP) aims to predict the current and future states of the surrounding environment from multi-frame raw sensor data. The output current and future states directly serve as the input for motion planning. Recently, many PnP methods have been proposed based on diverse sensor input choices. For example, [4, 25, 34] take multi-frame LiDAR point clouds as input and simultaneously achieve encouraging 3D detection and trajectory prediction performance. Recently, the rapid development of vision-centric methods offers a new possibility to provide a cheaper and easy-to-deploy solution for PnP. For instance, [1, 16, 17] only use RGB images collected by multiple cameras to build PnP systems. Meanwhile, without precise 3D measurements, vision-centric PnP is more technically challenging. Therefore, this work aims to advance this direction. The core of vision-centric PnP is to learn appropriate spatial-temporal feature representations from temporal image sequences. This is the crux of the problem and is difficult for three reasons. First, since the input and the output of vision-centric PnP are supported in the camera front view (FV) and the bird's-eye view (BEV), respectively, one has to deal with distortion issues during the geometric transformation between the two views. Second, when the vehicle is moving, the view of the image input is time-varying, and it is thus nontrivial to precisely map visual features across time into a shared and synchronized space. Third, since the information in temporal image sequences is sufficiently rich for humans to accurately perceive the environment, we need a powerful learning model to comprehensively exploit spatial-temporal features. To tackle these issues, previous works on vision-centric PnP consider diverse strategies.
For example, [16, 56] fol-lows the method in [38] to map FV features to BEV fea-tures, then synchronizes BEV features across time via rigid transformation, and finally uses a recurrent network to ex-ploit spatial-temporal features. However, due to the image discretization nature and depth estimation uncertainty, sim-ply relying on rigid geometric transformations would cause inevitable distortion; see Fig. 1. Some other work [49] transforms the pseudo feature point cloud to current ego coordinates and then pools the pseudo-lidar to BEV fea-tures; however, this approach encounters deficiency due to the limited sensing range in perception. Meanwhile, many works [16,17,56] simply employ recurrent neural networks to learn the temporal features from multiple BEV represen-tations, which is hard to comprehensively extract spatial-temporal features. To promote more reliable and comprehensive feature learning across views and time, we propose the tempo-ral bird’s-eye-view pyramid transformer (TBP-Former) for vision-centric PnP. The proposed TBP-Former includes two key innovations: i) pose-synchronized BEV encoder, which leverages a pose-aware cross-attention mechanism to di-rectly map a raw image input with any camera pose at any time to the corresponding feature map in a shared and synchronized BEV space; and ii) spatial-temporal pyra-mid transformer, which leverages a pyramid architecture with Swin-transformer [28] blocks to learn comprehen-sive spatial-temporal features from sequential BEV maps at multiple scales and predict future BEV states with a set of future queries equipped with spatial priors. Compared to previous works, the proposed TBP-Former brings benefits from two aspects. First, previous works [16, 17, 24, 56] consider FV-to-BEV transformation and tempo-ral synchronization as two separate steps, each of which could bring distortion due to discrete depth estimation and rigid transformation; while we merge them into one step and leverage both geometric transformation and attention-based learning ability to achieve spatial-temporal synchronization. Second, previous works [16, 53] use RNNs or 3D convolu-tions to learn spatial-temporal features; while we leverage a powerful pyramid transformer architecture to comprehen-sively capture spatial-temporal features, which makes pre-diction more effective. To summarize, the main contributions of our work are: • To tackle the distortion issues in mapping temporal image sequences to a synchronized BEV space, we propose a pose-synchronized BEV encoder (PoseSync BEV Encoder) based on cross-view attention mecha-nism to extract quality temporal BEV features. • We propose a novel Spatial-Temporal Pyramid Trans-former (STPT) to extract multi-scale spatial-temporal features from sequential BEV maps and predict future BEV states according to well-elaborated future queries integrated with spatial priors. • Overall, we propose TBP-Former, a vision-based joint perception and prediction framework for autonomous driving. TBP-Former achieves state-of-the-art perfor-mance on nuScenes [2] dataset for the vision-based prediction task. Extensive experiments show that both PoseSync BEV Encoder and STPT contribute greatly to the performance. Due to the decoupling property of the framework, both proposed modules can be eas-ily utilized as alternative modules in any vision-based BEV prediction framework.
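The description above of the pose-synchronized BEV encoder (a pose-aware cross-attention from a BEV grid to multi-view image features) can be pictured with the following sketch. The module name, the way the ego pose is injected (a linear embedding of the flattened 4x4 pose added to the BEV queries), and all shapes are assumptions for illustration only; the actual TBP-Former design is more elaborate.

```python
import torch
import torch.nn as nn

class BEVCrossAttention(nn.Module):
    """Illustrative pose-aware BEV encoder block (an assumption, not the released TBP-Former code)."""
    def __init__(self, dim=256, bev_h=50, bev_w=50, heads=8):
        super().__init__()
        self.bev_query = nn.Parameter(torch.randn(bev_h * bev_w, dim) * 0.02)
        self.pose_embed = nn.Linear(16, dim)   # flattened 4x4 relative ego pose -> embedding
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, img_feats, ego_pose):
        # img_feats: (B, N_cam * H * W, dim) flattened multi-camera image features
        # ego_pose:  (B, 4, 4) pose of this frame relative to the current ego frame
        B = img_feats.shape[0]
        q = self.bev_query.unsqueeze(0).expand(B, -1, -1)
        q = q + self.pose_embed(ego_pose.flatten(1)).unsqueeze(1)  # tag queries with the pose
        bev, _ = self.attn(q, img_feats, img_feats)                # cross-attention to image features
        bev = self.norm1(q + bev)
        return self.norm2(bev + self.ffn(bev))                     # (B, bev_h * bev_w, dim)
```

The intent of such a block matches the text above: frames taken at different times and camera poses are mapped into one shared BEV space in a single learned step rather than by separate view transformation and rigid warping.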
Jeong_DistractFlow_Improving_Optical_Flow_Estimation_via_Realistic_Distractions_and_Pseudo-Labeling_CVPR_2023
Abstract We propose a novel data augmentation approach, Dis-tractFlow, for training optical flow estimation models by introducing realistic distractions to the input frames. Based on a mixing ratio, we combine one of the frames in the pair with a distractor image depicting a similar domain, which allows for inducing visual perturbations congruent with natural objects and scenes. We refer to such pairs as distracted pairs. Our intuition is that using semantically meaningful distractors enables the model to learn related variations and attain robustness against challenging devia-tions, compared to conventional augmentation schemes fo-cusing only on low-level aspects and modifications. More specifically, in addition to the supervised loss computed between the estimated flow for the original pair and its ground-truth flow, we include a second supervised loss de-fined between the distracted pair’s flow and the original pair’s ground-truth flow, weighted with the same mixing ra-tio. Furthermore, when unlabeled data is available, we ex-tend our augmentation approach to self-supervised settings through pseudo-labeling and cross-consistency regulariza-tion. Given an original pair and its distracted version, we enforce the estimated flow on the distracted pair to agree with the flow of the original pair. Our approach allows increasing the number of available training pairs signifi-cantly without requiring additional annotations. It is agnos-tic to the model architecture and can be applied to training any optical flow estimation models. Our extensive evalua-tions on multiple benchmarks, including Sintel, KITTI, and SlowFlow, show that DistractFlow improves existing mod-els consistently, outperforming the latest state of the art.
1. Introduction Recent years have seen significant progress in optical flow estimation thanks to the development of deep learning, e.g., [4, 7, 8, 23]. Among the latest works, many focus on developing novel neural network architectures, such as PWC-Net [29], RAFT [30], and FlowFormer [6]. Other studies investigate how to improve different aspects of supervised training [27], e.g., gradient clipping, learning rate, and training compute load. More related to our paper are those incorporating data augmentation during training (e.g., [30]), including color jittering, random occlusion, cropping, and flipping. While these image manipulations can effectively expand the training data and enhance the robustness of the neural models, they fixate on the low-level aspects of the images. [Figure 1. Existing augmentation schemes apply low-level visual modifications, such as color jittering, block-wise random occlusion, and flipping, to augment the training data (top), while DistractFlow introduces high-level semantic perturbations to the frames (bottom). DistractFlow can further leverage unlabeled data to generate a self-supervised regularization. Our training leads to more accurate and robust optical flow estimation models, especially in challenging real-world settings.] (†Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.) Since obtaining ground-truth optical flow on real data is very challenging, another line of work investigates how to leverage unlabeled data. To this end, semi-supervised methods [9, 12] that utilize frame pairs with ground-truth flow annotations in conjunction with unlabeled data during training have been proposed. For instance, FlowSupervisor [9] adopts a teacher-student distillation approach to exploit unlabeled data. This method, however, does not consider localized uncertainty but computes the loss between the teacher and student networks over the entire image. In this paper, we present a novel approach, DistractFlow, which performs semantically meaningful data augmentation by introducing images of real objects and natural scenes as distractors or perturbations to training frame pairs.
As we shall see in our experimental validation, the use of realistic distractions in training can provide a bigger boost to performance. Figure 1 provides a high-level outline of DistractFlow. We apply DistractFlow in supervised learning settings using the ground-truth flow of the original pair. Distracted pairs contribute to the backpropagated loss proportionally to the mixing ratios used in their construction. Additionally, when unlabeled frame pairs are available, DistractFlow allows us to impose a self-supervised regularization by further leveraging pseudo-labeling. Given an unlabeled pair of frames, we create a distracted version. Then, we enforce the estimated flow on the distracted pair to match that on the original pair. In other words, the prediction of the original pair is treated as a pseudo ground truth flow for the distracted pair. Since the estimation on the original pair can be erroneous, we further derive and impose a confidence map to employ only highly confident pixel-wise flow estimations as the pseudo ground truth. This prevents the model from reinforcing incorrect predictions, leading to a more stable training process. In summary, our main contributions are as follows:
• We introduce DistractFlow, a novel data augmentation approach that improves optical flow estimation by utilizing distractions from natural images. Our method provides augmentations with realistic semantic contents compared to existing augmentation schemes.
• We present a semi-supervised learning scheme for optical flow estimation that adopts the proposed distracted pairs to leverage unlabeled data. We compute a confidence map to generate uncertainty-aware pseudo labels and to enhance training stability and overall performance.
• We demonstrate the effectiveness of DistractFlow in supervised [6, 14, 30] and semi-supervised settings, showing that DistractFlow outperforms the very recent FlowSupervisor [9], which requires additional in-domain unlabeled data.
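To make the two training objectives above concrete, the following is a minimal PyTorch-style sketch of (i) the supervised loss in which the distracted pair is trained against the original pair's ground truth with a weight equal to the mixing ratio, and (ii) the confidence-gated pseudo-labeling loss for unlabeled pairs. The `model` interface, the EPE-style loss, and the externally supplied confidence map are assumptions for illustration; this is not the authors' released implementation.

```python
import torch


def epe_loss(flow_pred, flow_gt, valid=None):
    """Average end-point error, optionally restricted to a validity/confidence mask."""
    err = torch.norm(flow_pred - flow_gt, dim=1)            # (B, H, W)
    if valid is not None:
        return (err * valid).sum() / valid.sum().clamp(min=1.0)
    return err.mean()


def distract(frame2, distractor, alpha):
    """Overlay a same-domain distractor image onto the second frame (mixing ratio alpha)."""
    return alpha * frame2 + (1.0 - alpha) * distractor


def supervised_distractflow_loss(model, f1, f2, distractor, flow_gt, alpha=0.8):
    """Loss on the original pair plus an alpha-weighted loss on the distracted pair,
    both computed against the original pair's ground-truth flow."""
    loss_orig = epe_loss(model(f1, f2), flow_gt)
    loss_dist = epe_loss(model(f1, distract(f2, distractor, alpha)), flow_gt)
    return loss_orig + alpha * loss_dist


def self_supervised_distractflow_loss(model, f1, f2, distractor, confidence, alpha=0.8):
    """For an unlabeled pair: the flow predicted on the original pair acts as a pseudo
    label for the distracted pair, gated by a per-pixel confidence map (assumed to be
    computed elsewhere, e.g. from forward-backward consistency)."""
    with torch.no_grad():
        pseudo_flow = model(f1, f2)                          # treated as a fixed target
    student_flow = model(f1, distract(f2, distractor, alpha))
    return epe_loss(student_flow, pseudo_flow, valid=confidence)
```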
Bagad_Test_of_Time_Instilling_Video-Language_Models_With_a_Sense_of_CVPR_2023
Abstract Modelling and understanding time remains a challenge in contemporary video understanding models. With lan-guage emerging as a key driver towards powerful gener-alization, it is imperative for foundational video-language models to have a sense of time. In this paper, we consider a specific aspect of temporal understanding: consistency of time order as elicited by before/after relations. We estab-lish that seven existing video-language models struggle to understand even such simple temporal relations. We then question whether it is feasible to equip these foundational models with temporal awareness without re-training them from scratch. Towards this, we propose a temporal adapta-tion recipe on top of one such model, VideoCLIP , based on post-pretraining on a small amount of video-text data. We conduct a zero-shot evaluation of the adapted models on six datasets for three downstream tasks which require vary-ing degrees of time awareness. We observe encouraging performance gains especially when the task needs higher time awareness. Our work serves as a first step towards probing and instilling a sense of time in existing video-language models without the need for data and compute-intense training from scratch.
1. Introduction Self-supervised pretraining at scale on multimodal web corpora tied with powerful architectures [107] has led to foundational models [12] for images [2, 49, 59, 83, 84] and videos [2, 6, 26, 109, 119, 126]. These models have enabled remarkable improvements on a plethora of downstream video-language tasks such as video-text retrieval, video question-answering, and action recognition. Given the cost and difficulty of video annotations, even for a small amount of downstream data, such foundational models are emerging as the de-facto backbone for zero-shot [119, 122, 127] and few-shot generalization [2]. However, it remains unclear if these video-language models capture essential properties of a video beyond what can be learned from static images, most notably: time.
Figure 1. Can you match the correct video-text pairs? Understanding the time order of events across video and language is necessary to be able to solve this task. See footnote on next page for answers. (The figure pairs four videos with captions such as "Dog runs away before it brings a ball to the man" / "The dog brings a ball to the man before it runs away" and "The baby eats food after it looks into the camera" / "The baby looks into the camera after it eats food".)
Many before us have shown that existing video-language models [6, 57, 66, 119] can achieve impressive performance on several video benchmarks [22, 41, 120] without reliably encoding time [13, 56, 59]. For example, Buch et al. [13] show that a model that uses a single (carefully selected) frame often outperforms recent video-language models [57, 119] on standard video benchmarks such as MSR-VTT [120]. Lei et al. [56] report similar findings with a single-frame pretraining approach. These findings hint at a lack of time awareness in video models. However, it remains unclear if these findings are caused, indeed, by the lack of time in video models or whether the benchmarks themselves do not mandate time awareness. Furthermore, there is no clear definition of what it means for a model to be time aware. In this paper, we strive to shed light on all these factors of time awareness in video-language models. As a first step, we consider a simple notion of understanding time, i.e., understanding temporal relations such as before and after [4]. Consider the task presented in Fig. 1. A time invariant model shall be able to associate (A) with (1) or (2) and (B) with (3) or (4) based on static frames alone. But to distinguish between (1) and (2), one needs to be able to understand time order and connect it across video and language¹. Thus, the first question we ask in Section 3: do the representations learnt by foundational video-language models encode this sense of time? To reliably attribute lack of time awareness to models and not existing benchmarks, we design our own synthetic dataset to probe models for this sense of time. We create video-language pairs that show a sequence of two events. Then, we alter the order of events either in the text or the video and check if models can connect the order in video and language. We find that existing video-language models indeed struggle to associate the time order across video and language.
In light of these findings, the second question we ask in Section 4 is: can we adapt a video-language model, without expensive re-training from scratch, to instill this sense of time? Towards this, we take inspiration from literature on understanding time in natural language, where there has been much work on developing time aware language models [20, 36, 37, 130, 131]. Our objective is to instill time awareness in a video-language model without having to pretrain from scratch. To do that, we propose TACT: Temporal Adaptation by Consistent Time-ordering based on two key components: (i) we artificially create samples that provide temporal signal, for example, by flipping the order of events in the video or the text, and (ii) we introduce a modified contrastive loss to learn time order consistency based on these samples. Instead of training from scratch, we adapt an existing video-language model, VideoCLIP [61], using the paradigm of post-pretraining on a small amount of video-text data [66, 121]. We demonstrate the effectiveness of TACT in connecting the time order in video and language on four diverse real datasets in Section 5. Finally, in line with the original motive of video-language models for zero-shot generalization, we evaluate in Section 6 our TACT-adapted model for three sets of tasks on six downstream datasets which require a varying degree of time awareness. On tasks that need higher time awareness, with the appropriate choice of adaptation dataset, TACT outperforms a strong baseline that is based on post-pretraining on canonical clip-text pairs without consideration of time-order. In summary, our contributions are: (i) We show that existing video-language models struggle to associate time order in video and language through controlled experiments on synthetic data and several evaluations on real datasets. (ii) Based on VideoCLIP [119], we propose TACT, a method for temporal adaptation using this time order consistency without having to pretrain from scratch. (iii) We demonstrate improved zero-shot generalizability of TACT-adapted models on tasks that require higher time awareness.
¹Answers: (A)-(2), (B)-(1), (C)-(4), (D)-(3).
2. Background and Related Work
We briefly discuss recent advances in video-language models followed by their consideration of time. Foundational video-language models. Large-scale datasets, self-supervision, and the advent of Transformers [107] have led to the emergence of powerful encoders for images [21,39,103], videos [5,11,24,104,117], language models [19,64,69,86] and even universal encoders [32,46]. These encoders form the basis for several vision-language foundational models. Popular image-language models such as CLIP [83] and ALIGN [49] are trained on massive datasets by using web images and alt-text. Similarly, video-language models are catching up and can be categorised into two broad directions: (i) adapting image-language models for videos [8, 23, 50, 51, 63, 66, 71, 110, 112, 121], and (ii) pure video-based models that are learned using large video-text datasets [3,7,27–29,31,58,62,65,67,68,95,119]. Recently, a new paradigm of post-pretraining has emerged where an existing image- or video-language model goes through another stage of self-supervised pretraining on a small amount of video data before it is evaluated on downstream tasks [66, 121]. This is promising as it circumvents the prohibitive cost of pretraining on large datasets from scratch.
In [66], the post-pretraining uses time-invariant mean-pooling, while [121] strives to bridge the domain gap between image captions and video subtitles. In contrast, our proposed temporal adaptation involves post-pretraining of VideoCLIP [119] with a small amount of data that instills the time-order of events in a video into the model. Time in vision. Time separates videos from static images or an unordered set of frames. While modeling time remains a challenge, it also presents a natural source of supervision that has been exploited for self-supervised learning, for example as a proxy signal in pretext tasks involving spatio-temporal jigsaw [1, 44, 53], video speed [10, 17, 48, 94, 111, 124], arrow of time [78, 80, 114], frame/clip ordering [25, 70, 90, 97, 118], video continuity [61], or tracking [45, 108, 113]. Several works have also used contrastive learning to obtain spatio-temporal representations by (i) contrasting temporally augmented versions of a clip [47, 77, 81], or (ii) encouraging consistency between local and global temporal contexts [9, 18, 85, 123]. Nevertheless, it remains unclear if the learnt representations actually encode time reliably. Time-aware features have also been explored for specific downstream tasks such as action recognition [30,100,101]. There has been some very recent work on evaluating self-supervised video representations [87, 99] on their temporal recognition ability instead of only relying on time as a guidance for training. In the same spirit, a related direction pursues evaluation and benchmarking of time awareness in video datasets [88], models [13, 14, 30, 56, 89, 125] or both [43, 92]. Huang et al. [43] measure the effect of motion on temporal action recognition to find that only a subset of classes in UCF-101 and Kinetics-400 require motion information. Ghodrati et al. [30] propose new tasks to evaluate temporal asymmetry, continuity and causality in video models. Our work derives inspiration from these but applies more generally to video-language models as language provides a basis for open-world generalization. Time in language. Time has also been extensively studied in the natural language literature. Early works identified temporal structures in language such as temporal prepositions and quantifiers [4, 79]. More recent literature focuses on tasks such as extracting temporal relations [35, 72–74], as well as temporal reasoning [36, 37, 82, 130, 131]. For example, Han et al. [36, 37] and Zhou et al. [131] pretrain language models specifically to focus on understanding temporal relations such as before, after, during, etc. The emergence of large language models has also spurred an increased interest in dev
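To illustrate the TACT recipe sketched in the introduction above (artificially time-reversed samples plus a contrastive loss enforcing time-order consistency), the snippet below contrasts the correctly ordered video-text pair against its order-flipped counterparts with an InfoNCE-style objective. The video/text encoders and the batched two-event inputs are placeholders; this is an illustrative sketch, not the released TACT code.

```python
import torch
import torch.nn.functional as F


def info_nce(anchor, positives, negatives, temperature=0.07):
    """InfoNCE over one positive and a set of negatives per anchor.
    anchor, positives: (B, D); negatives: (B, N, D)."""
    pos = F.cosine_similarity(anchor, positives) / temperature                           # (B,)
    neg = F.cosine_similarity(anchor.unsqueeze(1), negatives, dim=-1) / temperature      # (B, N)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)
    labels = torch.zeros(len(anchor), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)


def time_order_consistency_loss(video_enc, text_enc, clip_ab, clip_ba, cap_ab, cap_ba):
    """Time-order consistency loss on a batch of two-event samples.

    clip_ab / cap_ab describe events in the order A-then-B; clip_ba / cap_ba are the
    time-reversed versions (events swapped in the video / in the caption). The correct
    pairing (v_ab, t_ab) is contrasted against order-flipped pairings, so the encoders
    must attend to before/after relations rather than static content alone."""
    v_ab, v_ba = video_enc(clip_ab), video_enc(clip_ba)      # (B, D) embeddings
    t_ab, t_ba = text_enc(cap_ab), text_enc(cap_ba)
    # Video-anchored term: the matching-order caption is positive, the flipped one negative.
    loss_v = info_nce(v_ab, t_ab, t_ba.unsqueeze(1))
    # Text-anchored (symmetric) term.
    loss_t = info_nce(t_ab, v_ab, v_ba.unsqueeze(1))
    return 0.5 * (loss_v + loss_t)
```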
Chakravarthula_Seeing_With_Sound_Long-range_Acoustic_Beamforming_for_Multimodal_Scene_Understanding_CVPR_2023
Abstract Mobile robots, including autonomous vehicles rely heav-ily on sensors that use electromagnetic radiation like lidars, radars and cameras for perception. While effective in most scenarios, these sensors can be unreliable in unfavorable environmental conditions, including low-light scenarios and adverse weather, and they can only detect obstacles within their direct line-of-sight. Audible sound from other road users propagates as acoustic waves that carry information even in challenging scenarios. However, their low spatial resolution and lack of directional information have made them an overlooked sensing modality. In this work, we intro-duce long-range acoustic beamforming of sound produced by road users in-the-wild as a complementary sensing modal-ity to traditional electromagnetic radiation-based sensors. To validate our approach and encourage further work in the field, we also introduce the first-ever multimodal long-range acoustic beamforming dataset. We propose a neural aper-ture expansion method for beamforming and demonstrate its effectiveness for multimodal automotive object detection when coupled with RGB images in challenging automotive scenarios, where camera-only approaches fail or are unable to provide ultra-fast acoustic sensing sampling rates. Data and code can be found here1.
1. Introduction Autonomous mobile robots of today predominantly rely on several electromagnetic (EM) radiation-based sensing modalities such as camera, radar and lidar for diverse scene understanding tasks, including object detection, semantic segmentation, lane detection, and intent prediction. The most promising approaches rely on fused data input from these camera, lidar and radar sensor configurations [7, 42, 50] and robust data-driven perception algorithms using convolutional neural networks or vision transformers. However, existing camera/radar/lidar stacks do not return signal for objects with low reflectance and in conditions where light-based sensors struggle, such as severe scattering due to fog. All existing EM radiation-based sensor systems (active or passive) are fundamentally limited by the propagation of EM waves.
¹light.princeton.edu/seeingwithsound
Acoustic waves are an alternative and complementary sensing modality that are not subject to these limitations. Every automotive vehicle generates noise due to engine/transmission, aerodynamics, braking, and contact with the road. Even electric vehicles are required by law to emit sound to alert pedestrians [36]. However, acoustic sensing is not without challenges. Spatially resolving the acoustic spectrum at meter wavelengths (e.g., a 1 kHz sound wave has a wavelength of about 35 cm in air) has limited existing approaches to low-resolution tracking of 3D spatial coordinates [11–13, 32, 44]. In this work, we show that acoustic sensing is complementary to existing EM wave-based sensors, robust to challenging scenarios, and achieves improved performance when combined with existing vision-only approaches. To this end, we captured a large multimodal dataset with a prototype vehicle equipped with a 1024 (32x32 grid) microphone array and a plethora of vision sensors, and had them labeled by human annotators, which we release as the first multimodal long-range beamforming dataset. To the best of our knowledge, there is no such large and diverse multimodal acoustic beamforming dataset, as also illustrated in Table 1. We additionally propose a neural acoustic beamforming method for small aperture microphone arrays via learned aperture expansion. The aperture-expanded beamforming maps recover spatial resolution typically lost in sound measurements, and facilitate fusion with visual inference tasks. We assess multimodal visual and acoustic vision tasks in diverse real-world driving scenarios. We validate that visual and acoustic signals can complement each other in challenging automotive scenarios and can enable future frame predictions at kHz frequencies. We also demonstrate that object detection using vision and acoustic signals outperforms that of vision-only signals in challenging low-light scenarios. Furthermore, we show the applicability of acoustic sensing in non-line-of-sight and partially occluded scenes where purely vision-based sensing fails.
Figure 1 (panels: Prototype Vehicle, Beamforming Map, RGB-only Detection, Proposed Multimodal Detection). We capture a large dataset of acoustic pressure signals at several frequencies from roadside noise using our prototype test vehicle (left).
Of the available 250-5000 Hz frequency bands in the dataset, we visualize beamformed signals at the 4000 Hz octave band here (middle left). Using RGB only results in missed and inaccurate detections at night (middle right). The complementary nature of acoustic signals, on the other hand, helps robustly detect the objects in challenging night scenarios (right).
Table 1. Existing beamforming works and datasets are limited to just a few hundred processed beamforming maps and data from a single RGB camera. In stark contrast, our dataset is very large, with acoustic signals captured at 40 kHz across 11 frequency bands in diverse urban scenarios.
Dataset                        Michel et al. [35]   Zunino et al. [51]   Guidati [21]   Proposed
Ego Motion                     Static               Static               Static         Dynamic
Frequency Bands                1                    1                    1              11
Frequency Range                ✗                    500-6400 Hz          ✗              1 Hz-20 kHz
Processed Beamforming Frames   ✗                    151                  ✗              42,250
RGB Cameras                    1                    1                    ✗              5
RGB Frames                     ✗                    151                  ✗              3.2 Mio
Lidar Point Clouds             ✗                    ✗                    ✗              480,000
Annotated Frames               ✗                    ✗                    ✗              16,324
Specifically, we make the following contributions:
• We introduce long-range acoustic beamforming of road noise as a complementary sensing modality for automotive perception, and introduce the first annotated long-range acoustic beamforming dataset comprising sound measurements from a planar microphone array, lidar, RGB images, GPS and IMU data, in urban driving scenarios.
• We propose neural acoustic beamforming for small-aperture microphone arrays via learned aperture expansion. We validate that this beamforming approach can learn features with a spatial resolution that allows for fusion with existing RGB vision tasks.
• We validate that the proposed method complements existing modalities and outperforms existing RGB-only and audio-only detection methods in challenging scenarios with occlusion or poor lighting.
Scope. As the proposed acoustic sensing modality relies on passive sound from traffic participants, beamforming measurements are fundamentally limited to sound-producing vehicles. Beamforming of quieter traffic participants such as pedestrians and bicycles is challenging. However, we show that infusing existing vision stacks with acoustic signals can enable robust scene understanding in challenging scenarios such as night scenes and under severe occlusion.
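For readers unfamiliar with how a beamforming map such as the one in Figure 1 is formed, the sketch below implements classical narrow-band delay-and-sum beamforming for a planar microphone array; the proposed neural aperture expansion builds on such maps but is not reproduced here. The array geometry, grid, sampling rate, and random signal values are illustrative assumptions.

```python
import numpy as np


def delay_and_sum_map(signals, mic_xyz, grid_xyz, freq, fs=40_000, c=343.0):
    """Narrow-band delay-and-sum beamforming for a planar microphone array.

    signals : (M, T) time samples for M microphones
    mic_xyz : (M, 3) microphone positions in metres
    grid_xyz: (G, 3) candidate source positions (the beamforming grid)
    Returns a (G,) map of beamformed power at frequency `freq`."""
    # Narrow-band snapshot: Fourier coefficient of each channel at `freq`.
    t = np.arange(signals.shape[1]) / fs
    snapshot = signals @ np.exp(-2j * np.pi * freq * t)                               # (M,)

    # Steering vectors: phase delays from every grid point to every microphone.
    dists = np.linalg.norm(grid_xyz[:, None, :] - mic_xyz[None, :, :], axis=-1)       # (G, M)
    steering = np.exp(2j * np.pi * freq * dists / c)                                  # (G, M)

    # Coherently sum the channels toward each grid point and return the power.
    return np.abs(steering.conj() @ snapshot) ** 2 / mic_xyz.shape[0] ** 2


# Toy example with a 32x32 planar array and a coarse grid 10 m ahead -- purely illustrative.
mics = np.stack(np.meshgrid(np.linspace(-0.5, 0.5, 32),
                            np.linspace(-0.5, 0.5, 32)), -1).reshape(-1, 2)
mic_xyz = np.concatenate([mics, np.zeros((1024, 1))], axis=1)
grid = np.stack(np.meshgrid(np.linspace(-5, 5, 41), np.linspace(-2, 2, 17)), -1).reshape(-1, 2)
grid_xyz = np.concatenate([grid, np.full((len(grid), 1), 10.0)], axis=1)
power_map = delay_and_sum_map(np.random.randn(1024, 4096), mic_xyz, grid_xyz, freq=4000.0)
```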
Chen_Movies2Scenes_Using_Movie_Metadata_To_Learn_Scene_Representation_CVPR_2023
Abstract Understanding scenes in movies is crucial for a variety of applications such as video moderation, search, and recom-mendation. However, labeling individual scenes is a time-consuming process. In contrast, movie level metadata (e.g., genre, synopsis, etc.) regularly gets produced as part of the film production process, and is therefore significantly more commonly available. In this work, we propose a novel contrastive learning approach that uses movie metadata to learn a general-purpose scene representation. Specifically, we use movie metadata to define a measure of movie sim-ilarity, and use it during contrastive learning to limit our search for positive scene-pairs to only the movies that are considered similar to each other. Our learned scene repre-sentation consistently outperforms existing state-of-the-art methods on a diverse set of tasks evaluated using multiple benchmark datasets. Notably, our learned representation offers an average improvement of 7.9% on the seven classi-fication tasks and 9.7% improvement on the two regression tasks in LVU dataset. Furthermore, using a newly collected movie dataset, we present comparative results of our scene representation on a set of video moderation tasks to demon-strate its generalizability on previously less explored tasks.
1. Introduction Automatic understanding of movie scenes is a challenging problem [53] [26] that offers a variety of downstream applications including video moderation, search, and recommendation. However, the long-form nature of movies makes labeling of their scenes a laborious process, which limits the effectiveness of traditional end-to-end supervised learning methods for tasks related to automatic scene understanding. The general problem of learning from limited labels has been explored from multiple perspectives [51], among which contrastive learning [28] has emerged as a particularly promising direction. Specifically, using natural language supervision to guide contrastive learning [41] has shown impressive results especially for zero-shot image-classification tasks. However, these methods rely on image-text pairs which are hard to collect for long-form videos. Another important set of methods within the space of contrastive learning use a pretext task to contrast similar data-points with randomly selected ones [23] [9]. However, most of the standard data-augmentation schemes [23] used to define the pretext tasks for these approaches have been shown to be not as effective for scene understanding [8]. To address these challenges, we propose a novel contrastive learning approach to find a general-purpose scene representation that is effective for a variety of scene understanding tasks. Our key intuition is that commonly available movie metadata (e.g., co-watch, genre, synopsis) can be used to effectively guide the process of learning a generalizable scene representation. Specifically, we use such movie metadata to define a measure of movie-similarity, and use it during contrastive learning to limit our search for positive scene-pairs to only the movies that are considered similar to each other. This allows us to find positive scene-pairs that are not only visually similar but also semantically relevant, and can therefore provide us with a much richer set of geometric and thematic data-augmentations compared to previously employed augmentation schemes [23] [8] (see Figure 1 for illustration). Furthermore, unlike previous contrastive learning approaches that mostly focus on images [23] [9] [13] or shots (§3 for definition) [8], our approach builds on the recent developments in vision transformers [12] to allow using variable-length multi-shot inputs. This enables our method to seamlessly incorporate the interplay among multiple shots resulting in a more general-purpose scene representation. Using a newly collected internal dataset MovieCL30K containing 30,340 movies to learn our scene representation, we demonstrate the flexibility of our approach to handle both individual shots as well as multi-shot scenes provided as inputs to outperform existing state-of-the-art results on diverse downstream tasks using multiple public benchmark datasets [53] [26] [42]. Furthermore, as an important practical application of long-form video understanding, we apply our scene representation to another newly collected dataset MCD focused on large-scale video moderation with 44,581 video clips from 18,330 movies and TV episodes containing sex, violence, and drug-use activities. We show that learning our general-purpose scene representation is crucial
to recognize such age-appropriate video-content where existing representations learned for short-form action recognition or image classification are significantly less effective.
Figure 1. Approach Overview – We employ commonly available movie metadata (e.g., co-watch, genre, synopsis) to define movie similarities. The figure illustrates a pair of similar movies where movie similarity is defined based on co-watch information, i.e., viewers who watched one movie often watched the second movie as well. Our approach automatically selects thematically similar scenes from such similar movie-pairs and uses them to learn scene-level representations that can be used for a variety of downstream tasks.
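A minimal sketch of the sampling-plus-contrastive-objective idea described above follows: positive scene pairs are drawn only from movies marked as similar (e.g., by co-watch), and every other scene in the batch serves as a negative. The similarity mapping, the scene encoder, and the NT-Xent-style loss are illustrative stand-ins rather than the paper's exact training pipeline.

```python
import random
import torch
import torch.nn.functional as F


def sample_positive_pair(scenes_by_movie, similar_movies):
    """Pick a positive scene pair from two movies deemed similar (e.g., frequently
    co-watched); negatives come from the rest of the batch at loss time."""
    movie_a = random.choice(list(similar_movies))
    movie_b = random.choice(similar_movies[movie_a])
    return random.choice(scenes_by_movie[movie_a]), random.choice(scenes_by_movie[movie_b])


def contrastive_scene_loss(encoder, scenes_a, scenes_b, temperature=0.1):
    """NT-Xent style loss: scenes_a[i] and scenes_b[i] form a positive pair drawn from
    similar movies; every other scene in the batch acts as a negative."""
    z_a = F.normalize(encoder(scenes_a), dim=-1)          # (B, D)
    z_b = F.normalize(encoder(scenes_b), dim=-1)
    logits = z_a @ z_b.t() / temperature                  # (B, B) similarity matrix
    labels = torch.arange(len(z_a), device=z_a.device)    # matching indices are positives
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```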
Jia_Think_Twice_Before_Driving_Towards_Scalable_Decoders_for_End-to-End_Autonomous_CVPR_2023
Abstract End-to-end autonomous driving has made impressive progress in recent years. Existing methods usually adopt the decoupled encoder-decoder paradigm, where the en-coder extracts hidden features from raw sensor data, and the decoder outputs the ego-vehicle’s future trajectories or actions. Under such a paradigm, the encoder does not have access to the intended behavior of the ego agent, leaving the burden of finding out safety-critical regions from the mas-sive receptive field and inferring about future situations to the decoder. Even worse, the decoder is usually composed of several simple multi-layer perceptrons (MLP) or GRUs while the encoder is delicately designed (e.g., a combina-tion of heavy ResNets or Transformer). Such an imbalanced resource-task division hampers the learning process. In this work, we aim to alleviate the aforementioned problem by two principles: (1) fully utilizing the capac-ity of the encoder; (2) increasing the capacity of the de-coder. Concretely, we first predict a coarse-grained fu-ture position and action based on the encoder features. Then, conditioned on the position and action, the future scene is imagined to check the ramification if we drive ac-cordingly. We also retrieve the encoder features around the predicted coordinate to obtain fine-grained information about the safety-critical region. Finally, based on the pre-dicted future and the retrieved salient feature, we refine the coarse-grained position and action by predicting its offset from ground-truth. The above refinement module could be stacked in a cascaded fashion, which extends the capac-ity of the decoder with spatial-temporal prior knowledge about the conditioned future. We conduct experiments on the CARLA simulator and achieve state-of-the-art perfor-mance in closed-loop benchmarks. Extensive ablation stud-ies demonstrate the effectiveness of each proposed module.
1. Introduction With the advance in deep learning, autonomous driving has attracted attention from both academia and industry. End-to-end autonomous driving [46, 48] aims to build a fully differentiable learning system that is able to map the raw sensor input directly to a control signal or a future trajectory. Due to its efficiency and ability to avoid cumulative errors, impressive progress has been achieved in recent years [3, 10, 12, 15, 16, 56]. State-of-the-art works [9,27,49,57,67,68] all adopt the encoder-decoder paradigm. The encoder module extracts information from raw sensor data (camera, LiDAR, Radar, etc.) and generates a representation feature. Taking the feature as input, the decoder directly predicts way-points or control signals. Under such a paradigm, the encoder does not have access to the intended behavior of the ego agent, which leaves the burden of finding out the safety-critical regions from the large perceptive field of massive sensor inputs and inferring about the future situations to the decoder. For example, when the ego vehicle is at the intersection, if it decides to go straight, it should check the traffic light across the road, which might consist of only several pixels. If it decides to go right, then it should check whether there are any agents on its potential route and think about how they would react to the ego vehicle's action. Even worse, the decoder is usually several simple multi-layer perceptrons (MLP) or GRUs while the encoder is a delicately designed combination of the heavy ResNet or Transformer. Such unmatched resource-task division hampers the overall learning process. To address the aforementioned issues, we design our new model based on two principles:
• Fully utilize the capacity of the encoder. Instead of leaving all future-related tasks to the decoder, we should reuse the features from the encoder conditioned on the predicted decision.
• Extend the capacity of the decoder with dense supervision. Instead of simply adding depth/width to the MLP, which would cause severe overfitting, we should enlarge the decoder with prior structure and corresponding supervision so that it could capture the inherent driving logical reasoning.
To instantiate these two principles, we propose a cascaded decoder paradigm to predict the future action of the ego vehicle in a coarse-to-fine fashion as shown in Fig. 1. Concretely, (i) We first adopt an MLP similar to classical approaches to generate the coarse future trajectory and action. (ii) We then retrieve features around the predicted future location from the encoder and further feed them into several convolutional layers to obtain goal-related scene features (we denote the module as Look Module and the feature as Look Feature). This follows the intuition that human drivers would check their intended target to ensure safety and legitimacy. (iii) Inspired by the fact that human drivers would anticipate other agents' future motion to avoid possible collisions, we design a Prediction Module, which takes the coarse action and features of the current scene as input and generates future scene representation features (denoted as Prediction Feature).
Considering the difficulty of obtaining supervision of the future scene representation conditioned on the predicted action during open-loop imitation learning, we adopt the teacher-forcing technique [2]: during training, we additionally feed samples with the ground-truth action/trajectory into the Prediction Module and supervise the corresponding Prediction Feature with the ground-truth future scene. As for the target of the supervision, we choose features from Roach [77], an RL-based teacher network with privileged input, which contains decision-related information. (iv) Based on the Look Feature and Prediction Feature, we predict the offset between the coarse prediction and the ground-truth for refinement. The aforementioned process could be stacked in a cascaded fashion, which enlarges the capacity of the decoder with spatial-temporal prior knowledge about the conditioned future. We conducted experiments on two competitive closed-loop autonomous driving benchmarks with CARLA [21] and achieved state-of-the-art performance. We also conducted extensive ablation studies to demonstrate the effectiveness of the components of the proposed method. In summary, our work has three-fold contributions: 1. We propose a scalable decoder paradigm for end-to-end autonomous driving, which, to the best of our knowledge, is the first to emphasize the importance of enlarging the capacity of the decoder in this field.
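The skeleton below sketches the coarse-to-fine decoder described in steps (i)-(iv): a coarse head predicts a goal position and action, and stacked refinement blocks combine a goal-conditioned "look" feature with an action-conditioned "prediction" feature to output offsets. The module internals (simple MLPs, and a single scene feature vector in place of spatial feature retrieval and the teacher-supervised future) are simplifying assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn


class RefineBlock(nn.Module):
    """One refinement stage: look at the predicted goal region, imagine the future
    scene conditioned on the coarse action, then predict a residual offset."""
    def __init__(self, feat_dim, act_dim):
        super().__init__()
        self.look = nn.Sequential(nn.Linear(feat_dim + 2, feat_dim), nn.ReLU())              # placeholder Look Module
        self.predict_future = nn.Sequential(nn.Linear(feat_dim + act_dim, feat_dim), nn.ReLU())  # placeholder Prediction Module
        self.offset_head = nn.Linear(2 * feat_dim, 2 + act_dim)

    def forward(self, scene_feat, coarse_xy, coarse_act):
        look_feat = self.look(torch.cat([scene_feat, coarse_xy], dim=-1))
        pred_feat = self.predict_future(torch.cat([scene_feat, coarse_act], dim=-1))
        offset = self.offset_head(torch.cat([look_feat, pred_feat], dim=-1))
        return coarse_xy + offset[:, :2], coarse_act + offset[:, 2:]


class CascadedDecoder(nn.Module):
    """Coarse prediction followed by a stack of refinement blocks."""
    def __init__(self, feat_dim=256, act_dim=3, num_stages=2):
        super().__init__()
        self.coarse_head = nn.Linear(feat_dim, 2 + act_dim)     # coarse goal (x, y) and action
        self.stages = nn.ModuleList([RefineBlock(feat_dim, act_dim) for _ in range(num_stages)])

    def forward(self, scene_feat):
        coarse = self.coarse_head(scene_feat)
        xy, act = coarse[:, :2], coarse[:, 2:]
        for stage in self.stages:
            xy, act = stage(scene_feat, xy, act)
        return xy, act


# Usage: decoder = CascadedDecoder(); xy, act = decoder(torch.randn(8, 256))
```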
Chen_NeuralEditor_Editing_Neural_Radiance_Fields_via_Manipulating_Point_Clouds_CVPR_2023
Abstract This paper proposes NeuralEditor that enables neural radiance fields (NeRFs) natively editable for general shape editing tasks. Despite their impressive results on novel-view synthesis, it remains a fundamental challenge for NeRFs to edit the shape of the scene. Our key insight is to exploit the explicit point cloud representation as the underlying struc-ture to construct NeRFs, inspired by the intuitive interpreta-tion of NeRF rendering as a process that projects or “plots” the associated 3D point cloud to a 2D image plane. To this end, NeuralEditor introduces a novel rendering scheme based on deterministic integration within K-D tree-guided density-adaptive voxels, which produces both high-quality rendering results and precise point clouds through opti-mization. NeuralEditor then performs shape editing via mapping associated points between point clouds. Exten-sive evaluation shows that NeuralEditor achieves state-of-the-art performance in both shape deformation and scene morphing tasks. Notably, NeuralEditor supports both zero-shot inference and further fine-tuning over the edited scene. Our code, benchmark, and demo video are available at im-mortalco.github.io/NeuralEditor.
1. Introduction In perhaps the most memorable shot of the film Transformers, Optimus Prime is seamlessly transformed between a humanoid and a Peterbilt truck – such free-form editing of 3D objects and scenes is a fundamental task in 3D computer vision and computer graphics, directly impacting applications such as the visual simulation, movie, and game industries. In these applications, we are often required to manipulate a scene or objects in the scene by editing or modifying its shape, color, light condition, etc., and generate visually-faithful rendering results on the edited scene efficiently. Among the various editing operations, shape editing has received continued attention but remains challenging, where the scene is deformed in a human-guided way, while all of its visual attributes (e.g., shape, color, brightness, and light condition) are supposed to be natural and consistent with the ambient environment. State-of-the-art rendering models are based on implicit neural representations, as exemplified by neural radiance field (NeRF) [27] and its variants [3,33,37,39,48]. Despite their impressive novel-view synthesis results, most of the NeRF models substantially lack the ability for users to adjust, edit, or modify the shape of scene objects. On the other hand, shape editing operations can be natively applied to explicit 3D representations such as point clouds and meshes. Inspired by this, we propose NeuralEditor – a general and flexible approach to editing neural radiance fields via manipulating point clouds (Fig. 1). Our key insight is to benefit from the best of both worlds: the superiority in rendering performance from the implicit neural representation combined with the ease of editing from the explicit point cloud representation.
While the recent method Point-NeRF [43] has demonstrated improved novel-view synthe-sis capability based on point clouds, it is not supportive to shape editing. Our idea then is to exploit the underlying point cloud in ways of not only optimizing its structure and features ( e.g., adaptive voxels) for rendering, but also ex-tracting additional useful attributes ( e.g., normal vectors) to guide the editing process. To this end, we introduce K-D trees [4] to construct density-adaptive voxels for efficient and stable rendering, together with a novel deterministic in-tegration strategy. Moreover, we model the color with the Phong reflection [31] to decompose the specular color and better represent the scene geometry. With a much more precise point cloud attributed to these improvements, our NeuralEditor achieves high-fidelity ren-dering results on deformed scenes compared with prior work as shown in Fig. 1, even in a zero-shot inference man-ner without additional training. Through fast fine-tuning, the visual quality of the deformed scene is further enhanced, almost perfectly consistent with the surrounding light con-dition. In addition, under the guidance of a point cloud dif-fusion model [24], NeuralEditor can be naturally extended for smooth scene morphing across multiple scenes, which is difficult for existing NeRF editing work. Our contributions are four-fold. (1) We introduce NeuralEditor, a flexible and versatile approach that makes neural radiance fields editable through manipulating pointclouds. (2) We propose a point cloud-guided NeRF model based on K-D trees and deterministic integration, which produces precise point clouds and supports general scene editing. (3) Due to the lack of publicly available bench-marks for shape editing, we construct and release a repro-ducible benchmark that promotes future research on shape editing. (4) We investigate a wide range of shape editing tasks, covering both shape deformation (as studied in exist-ing NeRF editing work) and challenging scene morphing (a novel task addressed here). NeuralEditor achieves state-of-the-art performance on all shape editing tasks in a unified framework, without extra information or supervision.
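Because editing reduces to moving points and re-rendering, the core user-facing operation can be sketched in a few lines: apply a deformation to the point positions while their learned radiance features travel with them, then feed the edited cloud to the same point-cloud-guided renderer (omitted here). The toy point cloud, feature dimensions, and the twist deformation below are illustrative assumptions, not NeuralEditor's implementation.

```python
import torch


def edit_point_cloud(points, features, deform_fn):
    """Apply a user-specified deformation to the scene's point cloud while keeping the
    per-point radiance features attached to their points."""
    new_points = deform_fn(points)                 # (N, 3) -> (N, 3)
    return new_points, features                    # features travel with the points


def twist_about_z(points, radians_per_metre=0.5):
    """Example deformation: twist the scene around the z-axis, proportional to height."""
    angle = radians_per_metre * points[:, 2]
    cos, sin = torch.cos(angle), torch.sin(angle)
    x = cos * points[:, 0] - sin * points[:, 1]
    y = sin * points[:, 0] + cos * points[:, 1]
    return torch.stack([x, y, points[:, 2]], dim=-1)


# Zero-shot editing, conceptually: deform the points, then render the edited scene
# with the same point-cloud-guided NeRF renderer (fine-tuning can further refine it).
points = torch.rand(10_000, 3) * 2 - 1             # toy point cloud in [-1, 1]^3
features = torch.randn(10_000, 32)                 # per-point radiance features
edited_points, edited_features = edit_point_cloud(points, features, twist_about_z)
```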
Huo_GeoVLN_Learning_Geometry-Enhanced_Visual_Representation_With_Slot_Attention_for_Vision-and-Language_CVPR_2023
Abstract Most existing works solving Room-to-Room VLN prob-lem only utilize RGB images and do not consider lo-cal context around candidate views, which lack sufficient visual cues about surrounding environment. Moreover, natural language contains complex semantic information thus its correlations with visual inputs are hard to model merely with cross attention. In this paper, we propose GeoVLN, which learns Geometry-enhanced visual repre-sentation based on slot attention for robust Visual-and-Language Navigation. The RGB images are compensated with the corresponding depth maps and normal maps pre-dicted by Omnidata as visual inputs. Technically, we intro-duce a two-stage module that combine local slot attention and CLIP model to produce geometry-enhanced represen-tation from such input. We employ V&L BERT to learn a cross-modal representation that incorporate both language and vision informations. Additionally, a novel multiway at-tention module is designed, encouraging different phrases of input instruction to exploit the most related features from visual input. Extensive experiments demonstrate the effec-tiveness of our newly designed modules and show the com-pelling performance of the proposed method.
1. Introduction With the rapid development of vision, robotics, and AI research in the past decade, asking robots to follow human instructions to complete various tasks is no longer an unattainable dream. To achieve this, one of the fundamental problems is, given a natural language instruction, to let the robot (agent) make its decision about the next move automatically based on past and current visual observations. This is referred to as Vision-and-Language Navigation (VLN) [2]. Importantly, such navigation abilities should also work well in previously unseen environments.
*Equal contributions. †Corresponding authors. Yanwei Fu is with School of Data Science, Fudan University, Shanghai Key Lab of Intelligent Information Processing, and Fudan ISTBI–ZJNU Algorithm Centre for Brain-inspired Intelligence, Zhejiang Normal University, Jinhua, China.
Figure 1. Illustration of our learning geometry-enhanced visual representation (GeoVLN) for visual-and-language navigation. Critically, our GeoVLN utilizes the slot attention mechanism.
In the popular Room-to-Room navigation task [2], the agent is typically assumed to be equipped with a single RGB camera. At each time step, given a set of visual observations captured from different view directions and several navigation options, the goal is to choose an option as the next station. The process will be repeated until the agent reaches the end point described by the user instruction. Involving both natural language and vision information, the main challenge here is to learn a cross-modal representation that incorporates the correlations between the user instruction and the current surrounding environment to aid decision-making. As solutions, early studies [2,8,30] resort to LSTM [13] to process the temporal visual data stream. However, recent works [4, 11, 15, 18, 19, 22, 25] have taken advantage of the superior performance of the Transformer [31] and typically employ this attention-based model to facilitate representation learning with cross attention and predict actions in either recurrent [15] or one-shot [4] fashion. Despite their advantages, these approaches still have several limitations.
• 1) They only rely on RGB images which provide very limited 2D visual cues and lack geometry information. Thus it is hard for the agent to build scene understanding about novel environments;
• 2) they process each candidate view independently without considering local spatial context, leading to inaccurate decisions;
• 3) natural language contains high-level semantic features and different phrases within an instruction may focus on various aspects of visual information, e.g. texture, geometry. Nevertheless, we empirically find that constructing a cross-modal representation with a naïve attention mechanism leads to suboptimal performance.
To address these problems, we propose a novel framework, named GeoVLN, which learns Geometry-enhanced visual representation based on slot attention for robust Visual-and-Language Navigation. Our framework is illustrated in Fig. 1.
In particular, beyond RGB images, we also utilize the corresponding depth maps and normal maps as observations at each time step (Fig. 1), as they provide rich geometry information about environment that facili-tates decision-making. Crucially, these additional mid-level cues are estimated by the recent scalable data generation framework Omnidata [7, 17] rather than sensor captured or user provided. We design a novel two-stage slot attention [20] based module to learn geometry-enhanced visual representation from the above multimodal observations. Note that the slot attention is originally proposed to learn object-centric rep-resentation for complex scenes from single/multi-view im-ages, but we utilize its feature learning capability and ex-tend it to work together with multimodal observations in the VLN tasks. Particularly, we treat each candidate RGB im-age as a query, and choose its nearby views as keys and val-ues to perform slot attention within a local spatial context. The key insight is that our model can implicitly learn view-to-view correspondences via slot attention, and thus encour-age the candidates to pool useful features from surrounding neighbors. Additionally, we process all complementary ob-servations, including depth maps and normal maps, through a pre-trained CLIP [26] image encoder to obtain respective latent vectors. These vectors are then concatenated with the output of slot attention module to form our final geometry-enhanced visual representation. On the other hand, we employ BERT as language en-coder to acquire global latent state and word embeddings from the input instruction. Given the respective latent em-beddings for language and vision inputs, we adopt V&L BERT [11] to merge multimodal features and learn cross-modal representation for the final decision-making in a re-current fashion following [15]. Different from previous works [21,30] that directly output probabilities of each can-didate option, we present a multi-way attention module toencourage different phrases of input instruction to focus on the most informative visual observation, which boosts the performance of our network, especially in unseen environ-ments. To summarize, we propose the following contributions that solve the Room-to-Room VLN task with compelling accuracy and robustness: • We extend slot attention to work on VLN task, which is combined with CLIP image encoder to learn geometry-enhanced visual representations for accurate and robust navigation. • A novel multiway attention module encouraging dif-ferent phrases of input instruction to focus on the most informative visual observation, e.g. texture, depth. • We compensate RGB images with the corresponding depth maps and normal maps predicted with off-the-shelf method, improving the performance yet not in-volving additional training data. • We integrate all the above technical innovations into a unified framework, named GeoVLN, and the extensive experiments validate our design choices.
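As a rough sketch of the local aggregation step described above, the module below lets each candidate view (query) pool features from its neighbouring views (keys/values) within a local spatial context. The full method performs iterative slot-attention updates and combines the result with CLIP embeddings of the predicted depth and normal maps; the single-pass attention and the dimensions here are simplifying assumptions.

```python
import torch
import torch.nn as nn


class LocalViewAttention(nn.Module):
    """Simplified stand-in for the local slot attention described above: each candidate
    view acts as a query and pools features from its neighbouring views to build a
    context-aware ("slot-enhanced") representation."""
    def __init__(self, dim=512):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, candidate_feat, neighbor_feats):
        # candidate_feat: (B, D); neighbor_feats: (B, K, D) features of nearby views.
        q = self.to_q(candidate_feat).unsqueeze(1)                    # (B, 1, D)
        k, v = self.to_k(neighbor_feats), self.to_v(neighbor_feats)   # (B, K, D)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        pooled = (attn @ v).squeeze(1)                                # (B, D)
        return self.norm(candidate_feat + pooled)

# The enhanced candidate feature can then be concatenated with CLIP embeddings of the
# predicted depth and normal maps to form the geometry-enhanced representation.
```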
Huang_KiUT_Knowledge-Injected_U-Transformer_for_Radiology_Report_Generation_CVPR_2023
Abstract Radiology report generation aims to automatically gen-erate a clinically accurate and coherent paragraph from the X-ray image, which could relieve radiologists from the heavy burden of report writing. Although various image caption methods have shown remarkable performance in the natural image field, generating accurate reports for medical images requires knowledge of multiple modalities, including vision, language, and medical terminology. We propose a Knowledge-injected U-Transformer (KiUT) to learn multi-level visual representation and adaptively dis-till the information with contextual and clinical knowledge for word prediction. In detail, a U-connection schema be-tween the encoder and decoder is designed to model in-teractions between different modalities. And a symptom graph and an injected knowledge distiller are developed to assist the report generation. Experimentally, we outper-form state-of-the-art methods on two widely used bench-mark datasets: IU-Xray and MIMIC-CXR. Further experi-mental results prove the advantages of our architecture and the complementary benefits of the injected knowledge.
1. Introduction Radiology images (e.g., chest X-ray) play an indispensable role in routine diagnosis and treatment, and the radiology reports of images are essential in facilitating later treatments. Getting a hand-crafted report is a time-consuming and error-prone process. Given a radiology image, only experienced radiologists can accurately interpret the image and write down the corresponding findings. Therefore, automatically generating high-quality radiology reports is urgently needed to help radiologists cope with the overwhelming volume of radiology images. In recent years, radiology report generation has attracted much attention in the deep learning and medical domains. The encoder-decoder architecture inspired by neural machine translation [34] has been widely adopted by most existing methods [17, 24, 40, 42].
Figure 1. A transformer architecture with U-connection is adopted to generate reports from radiology images. The process involves injecting and distilling visual, clinical, and contextual knowledge. The color labels in the image and report represent the different abnormal regions and their corresponding descriptions, respectively.
With the recent advent of the attention mechanism, the architecture's capability is greatly ameliorated. Despite the remarkable performance, these models restrained themselves within the methodology of image captioning [6, 7, 36, 39, 43], and suffer from such data biases: 1) the normal cases dominate the dataset over the abnormal cases; 2) the descriptions of normal regions dominate the entire report. Recently, some methods have been proposed to either alleviate the case-level bias by utilizing posterior and prior knowledge [27] or relieve the region-level bias by distilling the contrastive information of abnormal regions [28]. Thus, in the medical field's cross-modal task, a model needs to not only capture visual details of every abnormal region but also consider the interaction between the visual and textual modalities among different levels. Moreover, external clinical knowledge is required to achieve a radiologist-like ability in radiology image understanding and report writing. The external knowledge, e.g., the clinical entities and relationships, could be pre-defined by experts or mined from medical documents. However, directly adopting the knowledge brings inconsistencies due to the heterogeneous context embedding space [21]. And too complex knowledge may be prone to distract the visual encoder and divert the representation [27].
Instead of using external knowledge to augment the feature extraction like previous approaches [27, 44], we propose to introduce the injected knowledge in the final decoding stage. A graph with the clinical entities, i.e., symptoms and their relationships, is constructed under the guidance of professional doctors. These entities share a homogeneous embedding space with the training corpus, and this signal could be injected smoothly with visual and contextual information. We further design the Injected Knowledge Distiller on top of the decoder to distill contributive knowledge from visual, contextual, and clinical knowledge. Following these premises, we explore a novel framework dubbed Knowledge-injected U-Transformer (KiUT) to achieve the radiologist-like ability to understand radiology images and write reports. As Fig. 1 shows, it consists of a Region Relationship Encoder and Decoder with U-connection architecture and an Injected Knowledge Distiller. Our contributions can be summarized as follows:
• We propose a novel model following the encoder-decoder architecture with U-connection that fully exploits different levels of visual information instead of only one single input from the visual modality. In our experiments, the U-connection schema presents improvement not only in radiology report generation but also in the natural image captioning task.
• Our proposed model injects clinical knowledge by constructing a symptom graph, combining it with the visual and contextual information, and distilling them when generating the final words in the decoding stage.
• The Region Relationship Encoder is developed to restore the extrinsic and intrinsic relationships among image regions for extracting abnormal region features, which are crucial in the medical domain.
• We evaluate our approach on two public radiology report generation datasets, IU-Xray [8] and MIMIC-CXR [18]. KiUT achieves state-of-the-art performance on the two benchmark datasets.
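As a loose illustration of how an injected-knowledge distiller can weigh the three signals at word-prediction time, the sketch below gates visual, contextual, and clinical features with learned weights before producing next-word logits. The gating design, dimensions, and vocabulary size are assumptions for illustration only and do not reproduce the paper's distiller.

```python
import torch
import torch.nn as nn


class InjectedKnowledgeDistiller(nn.Module):
    """Sketch of a distiller that, at each decoding step, weighs visual, contextual,
    and clinical (symptom-graph) signals before word prediction. The gating scheme
    here is illustrative, not the paper's exact design."""
    def __init__(self, dim=512, vocab_size=10_000):
        super().__init__()
        self.gate = nn.Linear(4 * dim, 3)          # one weight per knowledge source
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, hidden, visual, contextual, clinical):
        # hidden: (B, D) decoder state; the three knowledge signals: (B, D) each.
        weights = torch.softmax(
            self.gate(torch.cat([hidden, visual, contextual, clinical], dim=-1)), dim=-1)
        fused = (weights[:, 0:1] * visual +
                 weights[:, 1:2] * contextual +
                 weights[:, 2:3] * clinical)
        return self.out(hidden + fused)            # next-word logits


# distiller = InjectedKnowledgeDistiller()
# logits = distiller(torch.randn(4, 512), torch.randn(4, 512),
#                    torch.randn(4, 512), torch.randn(4, 512))
```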
Gao_Flexible-Cm_GAN_Towards_Precise_3D_Dose_Prediction_in_Radiotherapy_CVPR_2023
Abstract Deep learning has been utilized in knowledge-based ra-diotherapy planning in which a system trained with a set of clinically approved plans is employed to infer a three-dimensional dose map for a given new patient. However, previous deep methods are primarily limited to simple sce-narios, e.g., a fixed planning type or a consistent beam an-gle configuration. This in fact limits the usability of such approaches and makes them not generalizable over a larger set of clinical scenarios. Herein, we propose a novel con-ditional generative model, Flexible-CmGAN, utilizing ad-ditional information regarding planning types and various beam geometries. A miss-consistency loss is proposed to deal with the challenge of having a limited set of condi-tions on the input data, e.g., incomplete training samples. To address the challenges of including clinical preferences, we derive a differentiable shift-dose-volume loss to incor-porate the well-known dose-volume histogram constraints. During inference, users can flexibly choose a specific plan-ning type and a set of beam angles to meet the clinical re-quirements. We conduct experiments on an illustrative face dataset to show the motivation of Flexible-CmGAN and fur-ther validate our model’s potential clinical values with two radiotherapy datasets. The results demonstrate the supe-rior performance of the proposed method in a practical het-erogeneous radiotherapy planning application compared to existing deep learning-based approaches.
1. Introduction Radiation therapy (RT) is an essential modality for cancer treatment and is applicable to about 50% of patients [12, 25]. However, many studies demonstrate that millions of patients currently do not have access to radiotherapy due to limited infrastructures and trained experts to handle complex planning procedures [15, 20, 59]. RT treatment planning is a process that involves a multi-disciplinary team (e.g., oncologists, therapists, physicists) to figure out the treatment beam configurations and intensity for cancer patients [25]. The modern RT treatments can be divided into two broad categories using static or dynamic beams. Intensity modulated radiotherapy (IMRT) [56] and volumetric modulated arc therapy (VMAT) [48, 57] are the most common static- and dynamic-beam types, respectively [10]. IMRT uses several personalized but fixed beam angles, delivering radiation precisely to the tumor while sparing the surrounding normal tissues according to the location of the tumor and anatomical organs at risk (OARs). During VMAT, the treatment beam is on while its treatment head is moving on an arc trajectory [10]. As shown in Figure 1, the dose maps of static- and dynamic-beam RT plans can look significantly different, which results from the different nature of energy fluence delivery in those two modes. Moreover, even using the same planning mode, different configurations (e.g., beam angles, isocenter) are needed for different patients due to different tumor locations/shapes, anatomy structures, and other clinical parameters.
Figure 1. Vanilla image-to-image translation (a) and dose prediction (b)-(d): (a) edge vs. shoe (2D), (b) CT vs. dose (beam-dynamic), (c) CT vs. dose (beam-static), (d) CT vs. dose (beam-static). (a) has a clear shape match between the source and target domains. (b)-(d) illustrate our challenges, including heterogeneous patterns, no clear match between source and target, and 3D data (showing 2D for simplicity) that is harder than 2D.
Knowledge-based planning (KBP) aims to use computer technologies to reduce the time for individualized treatment plans [6, 43]. Historically, KBP technologies relied on statistical models or handcrafted features [46, 54]. While providing promising results, these methods are hard to generalize beyond an inherently targeted limited set of scenarios [33]. Advanced artificial intelligence (e.g., deep learning [37]) has shown great potential to alter the way oncology therapies are administered [25, 53]. An integral part of KBP methods is to predict the dose distribution that should be delivered to a patient [6, 32, 59].
In this paper, we propose a novel conditional generative model, flexible-multiple-condition GAN, short for Flexible-CmGAN or FCGAN, for precise 3D dose prediction in heterogeneous RT contexts. In addition to the conditions (CT, PTV/OAR masks) that other methods in literature have used, we further integrate two conditions: the planning mode (i.e., static-or dynamic-beam) and the angle config-uration. Furthermore, we show that our model is robust in those scenarios where angle configuration may not be avail-able. Briefly, our contributions include •We proposed a novel GAN variant, FCGAN, that con-siders multi-level conditions and handles missing con-dition values with a new miss-consistency loss. •We derived a differentiable andspatially-unbiased loss function from a widely applied dose-volume histogram and show its effectiveness within the deep learning training for 3D dose prediction. •We introduced the deep 3D dose prediction for prac-tical heterogeneous treatment scenarios (i.e., multi-type, multi-beam configuration), which enables easy and fast user-interaction by changing input conditions and checking results interactively during inference. •We conducted experiments on two clinical radiother-apy datasets and a face dataset, validating that our ap-proach is superior to state-of-the-art deep models.
Jin_Randomized_Adversarial_Training_via_Taylor_Expansion_CVPR_2023
Abstract In recent years, there has been an explosion of research into developing more robust deep neural networks against adversarial examples. Adversarial training appears as one of the most successful methods. To deal with both the robustness against adversarial examples and the accuracy on clean examples, many works develop enhanced adversarial training methods to achieve various trade-offs between them [19,38,80]. Leveraging the studies [8,32] showing that smoothed weight updates during training may help find flat minima and improve generalization, we suggest reconciling the robustness-accuracy trade-off from another perspective, i.e., by adding random noise to deterministic weights. The randomized weights enable our design of a novel adversarial training method via Taylor expansion over a small Gaussian noise, and we show that the new adversarial training method can flatten the loss landscape and find flat minima. With PGD, CW, and Auto Attacks, an extensive set of experiments demonstrates that our method enhances state-of-the-art adversarial training methods, boosting both robustness and clean accuracy. The code is available at https://github.com/Alexkael/Randomized-Adversarial-Training .
1. Introduction The trade-off between adversarial robustness and clean accuracy has recently been intensively studied [63,68,80] and demonstrated to exist [34,58,70,77]. Many different techniques have been developed to alleviate the loss of clean accuracy when improving robustness, including data augmentation [1,5,29], early stopping [60,81], instance reweighting [3,82], and various adversarial training methods [11,36,42,74,80]. Adversarial training is believed to be the most effective defense against adversarial attacks, and is usually formulated as a minimax optimization problem where the network weights are assumed deterministic in each alternating iteration. Given that both clean and adversarial examples are drawn from unknown distributions which interact with one another through the network weights, it is reasonable to relax the assumption that neural networks are simply deterministic models whose weights are scalar values. This paper is based on the view, illustrated in Fig. 1, that randomized models enable the training optimization to consider multiple directions within a small area and may achieve smoothed weight updates (in a way different from checkpoint averaging [8,32]) and obtain robust models against new clean/adversarial examples.
Figure 1. A conceptual illustration of decision boundaries learned via (a) adversarial training of TRADES and (b) our method. (a) shows that TRADES considers a deterministic model and optimizes the distance between adversarial data and the boundary only through one direction. Our method in (b) takes into account randomized weights (perturbed boundaries) and optimizes the distance between adversarial data and the boundary via multiple directions in a small area. The boundary learned by our method can be smoother and more robust against new data.
Building upon the above view, we find a way, drastically different from most existing studies in adversarial training, to balance robustness and clean accuracy: embedding neural network weights with random noise. While the randomized-weights framework is not new in statistical learning theory, where it has been used in many previous works, e.g., for generalization analysis [18,52,75], we hope to advance the empirical understanding of the robustness-accuracy trade-off problem in adversarial training by leveraging the rich tool set of statistical learning. Remarkably, it turns out that adversarial training with optimization over randomized weights can improve state-of-the-art adversarial training methods in both adversarial robustness and clean accuracy.
By modeling weights as random variables with an artificially injected weight perturbation, we start with an empirical analysis of the flatness of the loss landscape in Sec. 3. We show that our method can flatten the loss landscape and find flatter minima in adversarial training, which is generally regarded as an indicator of good generalization ability. After the flatness analysis, we show how to optimize with randomized weights during adversarial training in Sec. 4. A novel adversarial training method based on TRADES [80] is proposed to reconcile adversarial robustness with clean accuracy by closing the gap between the clean latent space and the adversarial latent space over randomized weights.
Specifically, we utilize a Taylor series to expand the objective function over the weights, in such a way that we can deconstruct the function into its Taylor terms (zeroth term, first term, second term, etc.). From an algorithmic viewpoint, these Taylor terms can thus replace the objective function effectively and time-efficiently. As Fig. 1 shows, since our method takes randomized models into consideration during training, the learned boundary is smoother and the learned model is more robust in a small perturbed area.
We validate the effectiveness of our optimization method with the first- and second-derivative terms of the Taylor series. In consideration of training complexity and efficiency, we omit the third and higher derivative terms. Through an extensive set of experiments on a wide range of datasets (CIFAR-10 [40], CIFAR-100, SVHN [51]) and model architectures (ResNet [27], WideResNet [78], VGG [65], MobileNetV2 [62]), we find that our method can further enhance the state-of-the-art adversarial training methods on both adversarial robustness and clean accuracy, consistently across the datasets and the model architectures. Overall, this paper makes the following contributions:
• We conduct a pilot study of the trade-off between adversarial robustness and clean accuracy with randomized weights, and offer a new insight on the smoothed weight updates and the flat minima during adversarial training (Sec. 3).
• We propose a novel adversarial training method under a randomized model to smooth the weights. The key enabler is the Taylor series expansion (in Sec. 4) of the robustness loss function over randomized weights (deterministic weights with random noise), so that the optimization can be done simultaneously over the zeroth, first, and second orders of the Taylor series. In doing so, the proposed method can effectively enhance adversarial robustness without a significant compromise on clean accuracy.
• An extensive set of empirical results is provided to demonstrate that our method can improve both robustness and clean accuracy consistently across different datasets and different network architectures (Sec. 5).
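The following is a minimal sketch (not the authors' implementation) of what optimizing through the zeroth-, first-, and second-order Taylor terms of a loss around noisy weights can look like in PyTorch; the toy model, the noise scale sigma, and the single-sample Hessian-vector-product estimate of the second-order term are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy setup: a small classifier and a batch of (possibly adversarial) inputs.
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))
    params = list(model.parameters())
    sigma = 1e-2                                    # scale of the injected Gaussian weight noise

    loss0 = F.cross_entropy(model(x), y)            # zeroth-order term: loss at the mean weights
    grads = torch.autograd.grad(loss0, params, create_graph=True)

    eps = [sigma * torch.randn_like(p) for p in params]       # sampled weight perturbation
    first = sum((g * e).sum() for g, e in zip(grads, eps))    # first-order term: grad . eps

    # Second-order term 0.5 * eps^T H eps via a Hessian-vector product (double backward).
    hvp = torch.autograd.grad(first, params, create_graph=True)
    second = 0.5 * sum((h * e).sum() for h, e in zip(hvp, eps))

    surrogate = loss0 + first + second              # truncated Taylor expansion of the noisy loss
    surrogate.backward()                            # gradients for a standard optimizer step

In practice, the second-order term costs one extra backward pass per noise sample; dropping it recovers a cheaper first-order surrogate.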
Huang_Learning_To_Measure_the_Point_Cloud_Reconstruction_Loss_in_a_CVPR_2023
Abstract For point cloud reconstruction-related tasks, reconstruction losses that evaluate the shape differences between reconstructed results and the ground truths are typically used to train the task networks. Most existing works measure the training loss with point-to-point distance, which may introduce extra defects, as predefined matching rules may deviate from the real shape differences. Although some learning-based works have been proposed to overcome the weaknesses of manually-defined rules, they still measure the shape differences in 3D Euclidean space, which may limit their ability to capture defects in reconstructed shapes. In this work, we propose a learning-based Contrastive Adversarial Loss (CALoss) to measure the point cloud reconstruction loss dynamically in a non-linear representation space by combining a contrastive constraint with an adversarial strategy. Specifically, we use the contrastive constraint to help CALoss learn a representation space with shape similarity, while we introduce the adversarial strategy to help CALoss mine differences between reconstructed results and ground truths. According to experiments on reconstruction-related tasks, CALoss can help task networks improve reconstruction performance and learn more representative representations.
1. Introduction Point clouds, as a common description of 3D shapes, have been broadly used in many areas such as 3D detection [17,18] and surface reconstruction [9,13,19]. For point cloud reconstruction-related tasks [5,7,11,16], networks need to predict point clouds as similar as possible to the ground truths. Reconstruction losses that can differentiably calculate the shape differences between reconstructed results and ground truths are required to train the task networks.
Figure 1. Sg and So denote ground truths and point clouds generated by the task network. Sp is a positive sample with a shape similar to Sg, acquired by perturbation [2]. Matching-based losses measure distances between points matched by different predefined rules. PCLoss [6] learns to extract descriptors in 3D Euclidean space by linearly weighting coordinates according to their distances to predicted center points, while our method dynamically measures the shape differences with distances between learned global representations in the constructed representation space. Lp and Lr denote representation distances between Sg, Sp and Sg, So, respectively. Ladv_r is an adversarial loss to maximize the representation distance between Sg and So. Ladv_r and Lp are used to optimize CALoss, while Lr is adopted to train the task network.
Existing works often use the matching-based reconstruction losses Chamfer Distance (CD) and Earth Mover's Distance (EMD) to constrain shape differences. CD first matches points with their nearest neighbors in the other point cloud and then calculates the shape difference as the average point-to-point distance, while EMD calculates the average point-to-point distance under an optimization-based global matching. In other words, CD and EMD actually measure the distances between matched points instead of the distances between shapes. As the predefined matching rules are static and unlearnable, training with CD and EMD may fall into inappropriate local minima where the reconstruction losses are small but the shapes are obviously different. Learning-based losses in PFNet [8], PUGAN [10], and CRN [21] introduce extra supervision from discriminators trained with the adversarial strategy to find the detailed differences. Their reconstruction performance is improved by introducing adversarial losses, but they still need CD/EMD to evaluate the basic shape distances and cannot fully get rid of the influence of predefined matching rules. PCLoss [6] presents a reconstruction loss measured by the distances between extracted intermediate descriptors in 3D Euclidean space without any manually-defined matching rule, which is updated together with the task network in a generative-adversarial process to search for shape differences during training. However, the descriptor extraction in Euclidean space actually limits the search for shape differences, while the training efficiency is also restricted because the descriptors are constructed with dense connections between multiple predicted center points and all points. In summary, existing reconstruction losses mainly rely on distances in 3D Euclidean space to measure the shape differences.
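For reference, a minimal PyTorch-style sketch of the Chamfer Distance described above, which makes the fixed nearest-neighbor matching rule explicit (batching and efficiency are ignored; EMD would instead require solving a global assignment, e.g., with a Hungarian or Sinkhorn solver):

    import torch

    def chamfer_distance(p, q):
        """Symmetric Chamfer Distance between point clouds p (N, 3) and q (M, 3)."""
        d = torch.cdist(p, q)   # (N, M) pairwise Euclidean distances
        # Each point is matched to its nearest neighbor in the other cloud: the predefined rule.
        return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()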
In this work, we propose a novel framework named Contrastive Adversarial Loss (CALoss) that learns to measure the point cloud reconstruction loss dynamically in a high-dimensional representation space constructed by a series of fully differentiable structures. The differences between our work and existing works are presented in Fig. 1. CALoss is composed of Lp, Lr, and Ladv_r, acquired from distances between global representations. Ladv_r and Lp are used to optimize CALoss, while Lr is used to train the task network.
We introduce Lp as the contrastive constraint to help CALoss construct a representation space with shape similarity, i.e., similar shapes should have close representations. In this way, by adding an adversarial loss on representations, Ladv_r can guide CALoss to search for the shape differences between ground truths Sg and reconstructed results So. By updating dynamically according to the reconstructed results in each iteration, CALoss can continuously find existing defects in reconstructed shapes and prevent the task network from falling into unexpected local minima. As the measurement of shape differences is implemented in a non-linear representation space, CALoss has a more extensive search space. Besides, the representations adopted to measure shape differences are aggregated with a global pooling operation without any requirement of dense connections as in PCLoss [6], which improves training efficiency. Our contributions in this work can be summarized as follows:
• We propose a novel Contrastive Adversarial Loss (CALoss) that learns to measure the point cloud reconstruction loss with distances between high-dimensional global representations.
• By combining the contrastive constraint and the adversarial training strategy, CALoss can construct a representation space where similar shapes have close representations and learn to search for shape differences in this space dynamically during training.
• Experiments on point cloud reconstruction, unsupervised classification, and point cloud completion confirm that CALoss can help the task network improve reconstruction performance and learn more representative representations with higher training efficiency.
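A rough sketch of how the three representation-space terms above could be computed, assuming a hypothetical global encoder f (e.g., a shared point-wise MLP followed by global pooling); this only illustrates the roles of Lp, Lr, and Ladv_r and the alternating updates, not the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def caloss_terms(f, S_g, S_p, S_o):
        """f: encoder mapping a point cloud (B, N, 3) to a global representation (B, D).
        S_g: ground truth, S_p: perturbed positive sample, S_o: task-network output."""
        z_g, z_p, z_o = f(S_g), f(S_p), f(S_o)
        L_p = F.mse_loss(z_g, z_p)     # contrastive constraint: similar shapes get close representations
        L_r = F.mse_loss(z_g, z_o)     # representation distance used as the task network's training loss
        L_adv = -L_r                   # adversarial term: the loss network tries to enlarge that distance
        return L_p, L_r, L_adv

    # Conceptual alternating scheme: update f with L_p + L_adv (treating S_o as fixed),
    # then update the task network with L_r (keeping f frozen).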
Chen_From_Node_Interaction_To_Hop_Interaction_New_Effective_and_Scalable_CVPR_2023
Abstract Existing Graph Neural Networks (GNNs) follow the message-passing mechanism that conducts information interaction among nodes iteratively. While considerable progress has been made, such node interaction paradigms still have the following limitations. First, the scalability limitation precludes the broad application of GNNs in large-scale industrial settings, since the node interaction among rapidly expanding neighbors incurs high computation and memory costs. Second, the over-smoothing problem restricts the discrimination ability of nodes, i.e., node representations of different classes will converge to be indistinguishable after repeated node interactions. In this work, we propose a novel hop interaction paradigm to address these limitations simultaneously. The core idea is to convert the interaction target among nodes to pre-processed multi-hop features inside each node. We design a simple yet effective HopGNN framework that can easily utilize existing GNNs to achieve hop interaction. Furthermore, we propose a multi-task learning strategy with a self-supervised learning objective to enhance HopGNN. We conduct extensive experiments on 12 benchmark datasets covering a wide range of domains, scales, and smoothness of graphs. Experimental results show that our methods achieve superior performance while maintaining high scalability and efficiency. The code is at https://github.com/JC-202/HopGNN .
1. Introduction Graph Neural Networks (GNNs) have recently become very popular and have demonstrated great results in a wide range of graph applications, including social networks [38], point cloud analysis [37], and recommendation systems [21]. The core success of GNNs lies in the message-passing mechanism that iteratively conducts information interaction among nodes [8,17,18]. Each node in a graph convolution layer first aggregates information from local neighbors and combines it with a non-linear transformation to update the self-representation [20,27,42]. After stacking K layers, nodes can capture long-range K-hop neighbor information and obtain representative representations for downstream tasks [29,45]. However, despite the success of such popular node interaction paradigms, the number of neighbors for each node grows exponentially with the number of layers [2,40], resulting in the well-known scalability and over-smoothing limitations of GNNs.
Figure 1. Comparison of node interaction and hop interaction. The hop interaction first pre-computes multi-hop features and then conducts non-linear interaction among different hops via GNNs, which enjoys high efficiency and effectiveness.
The scalability limitation precludes the broad application of GNNs in large-scale industrial settings, since the node interaction among rapidly expanding neighbors incurs high computation and memory costs [15,51]. Although we can reduce the number of neighbors by sampling techniques [9,20], node interaction is still executed iteratively during training, and the performance is highly sensitive to the sampling quality [15]. Recently, scalable GNNs that focus on simplifying or decoupling node interactions have emerged [16,43,52]. Such decoupled GNNs first pre-compute the linear aggregation of K-hop neighbors to generate node features and then apply an MLP to each node without considering the graph structure during training and inference. However, despite high efficiency and scalability, such methods lead to suboptimal results due to the lack of non-linear interactions among nodes.
Another limitation is over-smoothing, which restricts the discriminative ability of nodes, i.e., node representations converge to indistinguishable values after repeated node interactions [5,31]. On the one hand, it causes performance degeneration when increasing the number of layers of GNNs [27,33]. On the other hand, on some heterophilous graphs where connected nodes are usually from different classes, shallow GNNs are also surprisingly inferior to pure Multi-Layer Perceptrons (MLPs) [36,53]. The reason is that interaction among massive local inter-class neighbors blurs the class boundaries of nodes [6,22,53]. Recently, to carefully consider the neighbor influence and maintain node discrimination, advanced node interaction GNNs, such as deep GNNs with residual connections [11,29] and heterophilic-graph-oriented GNNs with adaptive aggregation [4,35,39,46], have achieved promising results. However, these advanced node interactions suffer high computational costs and fail to handle large-scale datasets.
These two limitations have typically been studied separately, as addressing one often necessitates compromising the other.
However, can we bridge the two worlds, enjoying the low latency and node-interaction-free training of decoupled GNNs and the high discrimination ability of advanced node interaction GNNs simultaneously? We argue that it is possible to transform node interaction into a new hop interaction paradigm without losing performance while drastically reducing the computational cost. As shown in Figure 1, the core idea of hop interaction is to decouple the whole node interaction into two parts: non-parametric hop feature pre-processing and non-linear interaction among hops. Inspired by recommendation systems, non-linear interaction among different semantic features can enhance discrimination [19], e.g., modeling the co-occurrence of the career, sex, and age of a user to identify their interests. By treating the pre-computed L-hop neighbors as L semantic features within each node, we can consider node classification as a feature interaction problem, i.e., model the non-linear hop interaction to obtain discriminative node representations.
To this end, we design a simple yet effective HopGNN framework to address the above limitations simultaneously. It first pre-computes the multi-hop representation according to the graph structure (sketched below). Then, without loss of generality, we can utilize GNNs over a multi-hop feature graph inside each node to achieve hop interaction flexibly and explicitly. Specifically, we implement an attention-based interaction layer and average pooling for HopGNN to fuse multi-hop features and generate the final prediction. Furthermore, to show the generality and flexibility of our framework, we provide a multi-task learning strategy that combines a self-supervised objective to enhance performance. Our contributions are summarized as follows:
1. New perspective: We propose a new graph learning paradigm going from node to hop interaction. It conducts non-linear interactions among pre-processed multi-hop neighbor features inside each node.
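A minimal sketch of the parameter-free hop pre-computation step referred to above, assuming a normalized sparse adjacency matrix; an interaction module would then treat the K+1 hop features of each node as tokens and fuse them, e.g., with attention and average pooling. Variable names are illustrative.

    import torch

    def precompute_hop_features(adj, x, num_hops):
        """adj: (N, N) normalized sparse adjacency, x: (N, F) node features.
        Returns a (N, K+1, F) tensor: one feature vector per hop, per node."""
        hops, h = [x], x
        for _ in range(num_hops):
            h = torch.sparse.mm(adj, h)    # propagate one more hop; no learnable parameters involved
            hops.append(h)
        return torch.stack(hops, dim=1)    # done once as pre-processing, before any training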
Attaiki_Understanding_and_Improving_Features_Learned_in_Deep_Functional_Maps_CVPR_2023
Abstract Deep functional maps have recently emerged as a successful paradigm for non-rigid 3D shape correspondence tasks. An essential step in this pipeline consists in learning feature functions that are used as constraints to solve for a functional map inside the network. However, the precise nature of the information learned and stored in these functions is not yet well understood. Specifically, a major question is whether these features can be used for any other objective, apart from their purely algebraic role in solving for functional map matrices. In this paper, we show that under some mild conditions, the features learned within deep functional map approaches can be used as point-wise descriptors and thus are directly comparable across different shapes, even without the necessity of solving for a functional map at test time. Furthermore, informed by our analysis, we propose effective modifications to the standard deep functional map pipeline, which promote structural properties of learned features, significantly improving the matching results. Finally, we demonstrate that previously unsuccessful attempts at using extrinsic architectures for deep functional map feature extraction can be remedied via simple architectural changes, which encourage the theoretical properties suggested by our analysis. We thus bridge the gap between intrinsic and extrinsic surface-based learning, suggesting the necessary and sufficient conditions for successful shape matching. Our code is available at https://github.com/pvnieo/clover .
1. Introduction Computing dense correspondences between 3D shapes is a classical problem in Geometry Processing, Computer Vision, and related fields, and remains at the core of many tasks including statistical shape analysis [8,52], registration [80], deformation [7], and texture transfer [14], among others.
Since its introduction, the functional map (fmap) pipeline [49] has become a de facto tool for addressing this problem. This framework relies on representing correspondences as linear transformations across functional spaces, by encoding them as small matrices using the Laplace-Beltrami basis. Methods based on this approach have been successfully applied with hand-crafted features [5,67,71] to many scenarios, including near-isometric [20,30,44,61,70], non-isometric [17,32], and partial [12,35,36,63,76,77] shape matching. In recent years, a growing body of literature has advocated improving the functional map pipeline by using deeply learned features, pioneered by [34] and built upon by many follow-up works [3,16,19,29,39,40,68,69]. In all of these methods, the learned features are only used to constrain the linear system when estimating the functional maps inside the network. Thus, no attention is paid to their geometric nature, or potential utility beyond this purely algebraic role.
Figure 1. Left: Features learned in existing deep functional map pipelines are used in a purely algebraic manner as constraints for a linear system, thus lacking interpretability and clear geometric content. Right: We propose a theoretically-justified modification to this pipeline, which leads to learning robust and repeatable features that enable matching via nearest neighbor search.
On the other hand, features learned in other deep matching paradigms are the main focus of optimization, and they represent either robust, descriptive geometric features that are used directly for matching using nearest neighbor search [6,13,24,33,37,78,79], or distributions that, at every point, are interpreted as vertex indices on some template shape [42,46,53], or a deformation field that is used to deform the input shape to match a template [26].
In contrast, feature (also known as "probe" [50]) functions within deep functional maps are used purely as an optimization tool, and thus the information learned and stored in these functions is not yet well understood. In this work, we aim to show that features in deep functional map networks can, indeed, have geometric significance and that, under certain conditions, they are directly comparable across shapes and can be used for matching simply via nearest neighbor search; see Fig. 1. Specifically, we introduce the notion of feature completeness and show that under certain mild conditions, extracting a pointwise map from a functional map or via nearest neighbor search between learned features leads to the same result. Secondly, we propose a modification of the deep functional map pipeline, imposing that the learned functional maps satisfy the conditions suggested by our analysis. We show that this leads to a significant improvement in accuracy, allowing state-of-the-art results by simply performing a nearest-neighbor search between features at test time.
Finally, based on our theoretical results, we also propose a modification of some extrinsic feature extractors [73,75], which previously failed in the context of deep functional maps, improving their overall performance by a significant margin. Since our theoretical results hold for the functional map paradigm in general, they can be incorporated into any deep fmap method, hence improving previous methods and making future methods more robust. Overall, our contributions can be summarized as follows:
• We introduce the notions of feature completeness and basis-aligning functional maps and use them to establish a theoretical result about the nature of features learned in the deep functional map framework.
• Informed by our analysis, we propose simple modifications to this framework, which lead to state-of-the-art results in challenging scenarios.
• Based on our theoretical results, we propose a simple modification to some extrinsic feature extractors that were previously unsuccessful for deep functional maps, improving their overall accuracy and bridging the gap between intrinsic and extrinsic surface-based learning.
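For readers unfamiliar with the algebraic step referred to above, here is a minimal sketch of solving a least-squares functional map from learned features projected into truncated Laplace-Beltrami bases, alongside the direct nearest-neighbor matching of features discussed in the paper. It assumes eigenvectors that are orthonormal with respect to the mass matrix (so a plain transpose is used for projection), and omits the regularizers used in practice (e.g., Laplacian commutativity).

    import torch

    def solve_functional_map(feat_src, feat_tgt, evecs_src, evecs_tgt):
        """feat_*: (n_vertices, d) learned probe features; evecs_*: (n_vertices, k) LB eigenvectors."""
        A = evecs_src.T @ feat_src                       # (k, d) spectral coefficients on the source
        B = evecs_tgt.T @ feat_tgt                       # (k, d) spectral coefficients on the target
        # Least-squares fmap C with C @ A ~= B, i.e. solve A^T C^T = B^T.
        return torch.linalg.lstsq(A.T, B.T).solution.T   # (k, k)

    def nearest_neighbor_map(feat_src, feat_tgt):
        # Point-to-point map obtained by comparing features directly, as advocated by the analysis.
        return torch.cdist(feat_tgt, feat_src).argmin(dim=1)   # a source vertex for each target vertex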
Gao_Back_to_the_Source_Diffusion-Driven_Adaptation_To_Test-Time_Corruption_CVPR_2023
Abstract Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on source data when tested on shifted target data. Most methods update the source model by (re-)training on each target domain. While re-training can help, it is sensitive to the amount and order of the data and the hyperparameters for optimization. We update the target data instead, and project all test inputs toward the source domain with a generative diffusion model. Our diffusion-driven adaptation (DDA) method shares its models for classification and generation across all domains, training both on source then freezing them for all targets, to avoid expensive domain-wise re-training. We augment diffusion with image guidance and classifier self-ensembling to automatically decide how much to adapt. Input adaptation by DDA is more robust than model adaptation across a variety of corruptions, models, and data regimes on the ImageNet-C benchmark. With its input-wise updates, DDA succeeds where model adaptation degrades on too little data (small batches), on dependent data (correlated orders), or on mixed data (multiple corruptions).
1. Introduction Deep networks achieve state-of-the-art performance for visual recognition [3,8,25,26], but can still falter when there is a shift between the source data and the target data used for testing [38]. Shift can result from corruption [10,27]; adversarial attack [7]; or natural shifts between simulation and reality, different locations and times, and other such differences [17,36]. To cope with shift, adaptation and robustness techniques update predictions to improve accuracy on target data. In this work, we consider two fundamental axes of adaptation: what to adapt (the model or the input) and how much to adapt (using the update or not). We propose a test-time input adaptation method driven by a generative diffusion model to counter shifts due to image corruptions.
The dominant paradigm for adaptation is to train the model by joint optimization over the source and target [6,13,44,53,54]. However, train-time adaptation faces a crucial issue: not knowing how the data may differ during testing. While train-time updates can cope with known shifts, what if new and different shifts arise during deployment? In this case, test-time updates are needed to adapt the model (1) without the source data and (2) without halting inference. Source-free adaptation [15,19,20,23,51,55] satisfies (1) by re-training the model on new targets without access to the source. Test-time adaptation [46,51,56,58] satisfies (1) and (2) by iteratively updating the model during inference. Although updating the model can improve robustness, these updates have their own cost and risk. Model updates may be too computationally costly, which prevents scaling to many targets (as each needs its own model), and they may be sensitive to different amounts or orders of target data, which may result in noisy updates that do not help or even hinder robustness. In summary, most methods update the source model, but this does not improve all deployments.
We propose to update the target data instead. Our diffusion-driven adaptation method, DDA, learns a diffusion model on the source data during training, then projects inputs from all targets back to the source during testing. Figure 1 shows how just one source diffusion model enables adaptation on multiple targets. DDA trains a diffusion model to replace the source data, for source-free adaptation, and adapts target inputs while making predictions, for test-time adaptation. Figure 2 shows how DDA adapts the input and then applies the source classifier without model updates.
Our experiments compare and contrast input and model updates on robustness to corruptions. For input updates, we evaluate and ablate our DDA and compare it to DiffPure [30], the state of the art in diffusion for adversarial defense. For model updates, we evaluate entropy minimization methods (Tent [56] and MEMO [58]), the state of the art for online and episodic test-time updates, and BUFR [5], the state of the art for source-free offline updates. DDA achieves higher robustness than DiffPure and MEMO across ImageNet-C and helps where Tent degrades due to limited, ordered, or mixed data.
Figure 1. One diffusion model can adapt inputs from new and multiple targets during testing ((a) setting: multi-target adaptation; (b) cycle-consistent paired translation; (c) DDA (ours): many-to-one diffusion). Our adaptation method, DDA, projects inputs from all target domains to the source domain by a generative diffusion model. Having trained on the source data alone, our source diffusion model for generation and source classification model for recognition do not need any updating, and therefore scale to multiple target domains without potentially expensive and sensitive re-training optimization.
DDA is model-agnostic, by adapting the input, and improves across standard (ResNet-50) and state-of-the-art convolutional (ConvNeXt [26]) and attentional (Swin Transformer [25]) architectures without re-tuning. Our contributions:
• We propose DDA as the first diffusion-based method for test-time adaptation to corruption and include a novel self-ensembling scheme to choose how much to adapt.
• We identify and empirically confirm weak points for online model updates (small batches, ordered data, and mixed targets) and highlight how input updates address these natural but currently challenging regimes.
• We experiment on the ImageNet-C benchmark to show that DDA improves over existing test-time adaptation methods across corruptions, models, and data regimes.
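To make "projecting the input back to the source" concrete, here is a heavily simplified, DDIM-style sketch under assumed names (eps_model, alphas_cumprod): the corrupted test image is partially noised and then denoised with the source-trained diffusion model before the frozen source classifier is applied. DDA's image guidance and self-ensembling are deliberately omitted.

    import torch

    @torch.no_grad()
    def diffusion_project(x_target, eps_model, alphas_cumprod, t_start):
        """Partially noise x_target up to step t_start, then denoise it back toward the source domain."""
        a_bar = alphas_cumprod[t_start]
        x = a_bar.sqrt() * x_target + (1 - a_bar).sqrt() * torch.randn_like(x_target)  # forward noising
        for t in range(t_start, 0, -1):                                                # reverse process
            a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
            eps = eps_model(x, torch.full((x.shape[0],), t, device=x.device))
            x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()       # current estimate of the clean image
            x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps   # deterministic (eta = 0) update
        return x

    # Prediction on the adapted input: logits = source_classifier(diffusion_project(x, ...)).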
Geng_PartManip_Learning_Cross-Category_Generalizable_Part_Manipulation_Policy_From_Point_Cloud_CVPR_2023
Abstract Learning a generalizable object manipulation policy is vital for an embodied agent to work in complex real-world scenes. Parts, as the shared components across different object categories, have the potential to increase the generalization ability of the manipulation policy and achieve cross-category object manipulation. In this work, we build the first large-scale, part-based cross-category object manipulation benchmark, PartManip, which is composed of 11 object categories, 494 objects, and 1432 tasks in 6 task classes. Compared to previous work, our benchmark is also more diverse and realistic, i.e., having more objects and using sparse-view point clouds as input without oracle information like part segmentation. To tackle the difficulties of vision-based policy learning, we first train a state-based expert with our proposed part-based canonicalization and part-aware rewards, and then distill the knowledge to a vision-based student. We also find that an expressive backbone is essential to overcome the large diversity of different objects. For cross-category generalization, we introduce domain adversarial learning for domain-invariant feature extraction. Extensive experiments in simulation show that our learned policy can outperform other methods by a large margin, especially on unseen object categories. We also demonstrate that our method can successfully manipulate novel objects in the real world. Our benchmark has been released at https://pku-epic.github.io/PartManip.
1. Introduction We as humans are capable of manipulating objects in a wide range of scenarios with ease and adaptability. For building general-purpose intelligent robots that can work in unconstrained real-world environments, it is thus important to equip them with generalizable object manipulation skills. Towards this goal, recent advances in deep learning and reinforcement learning have led to the development of some generalist agents such as GATO [32] and SayCan [1]; however, their manipulation skills are limited to a set of known instances and fail to generalize to novel object instances. ManiSkill [25] proposes the first benchmark for learning category-level object manipulation, e.g., learning to open drawers on tens of drawer sets and testing on held-out ones. However, this generalization is limited to different instances from one object category, thus falling short of human-level adaptability. The most recent progress is shown in GAPartNet [53], which defines several classes of generalizable and actionable parts (GAParts), e.g., handles, buttons, and doors, that can be found across many different object categories yet are manipulated in similar ways. For these GAPart classes, the paper then finds a way to consistently define GAPart pose across object categories and devises heuristics to manipulate those parts based on part poses, e.g., pulling handles to open drawers. As a pioneering work, GAPartNet points to a promising way to perform cross-category object manipulation but leaves the manipulation policy learning unsolved.
In this work, we thus propose the first large-scale, part-based cross-category object manipulation benchmark, PartManip, built upon GAPartNet. Our cross-category benchmark requires agents to learn skills such as opening a door on storage furniture and generalizing to other object categories such as an oven or a safe, which presents a great challenge for policy learning: overcoming the huge geometry and appearance gaps among object categories.
Furthermore, our benchmark is more realistic and diverse. We use partial point clouds as input without any additional oracle information like the part segmentation masks used in the previous benchmark ManiSkill [17,25], making our setting very close to real-world applications. Our benchmark also has many more objects than ManiSkill. We selected around 500 object assets with more than 1400 parts from GAPartNet [11] and designed six classes of cross-category manipulation tasks in simulation. Thanks to the rich annotation provided in GAPartNet, we can define part-based dense rewards to ease policy learning.
Due to the difficulty presented by our realistic and diverse cross-category setting, we find that directly using state-of-the-art reinforcement learning (RL) algorithms to learn a vision-based policy does not perform well. Ideally, we wish the vision backbone to extract informative geometric and task-aware representations, which can help the actor take correct actions. However, the policy gradient in this case would be very noisy and thus hinder the vision backbone from learning, given the huge sampling space.
To mitigate this problem, we propose a two-stage training framework: first train a state-based expert that can access oracle part pose information using reinforcement learning, and then distill the expert policy to a vision-based student that only takes realistic inputs.
For state-based expert policy learning, we propose a novel part-based pose canonicalization method that transforms all state information into the part coordinate frame, which can significantly reduce the task variations and ease learning. In addition, we devise several part-aware reward functions that can access the pose of the part under interaction, providing more accurate guidance toward the manipulation objective. In combination, these techniques greatly improve policy training on diverse instances from different categories as well as generalization to unseen object instances and categories.
For vision-based student policy learning, we first introduce a 3D Sparse UNet-based backbone [16] to handle diverse objects, yielding much more expressivity than PointNet. To tackle the generalization issue, we propose to learn domain-invariant (category-independent) features by introducing an augmentation strategy and a domain adversarial training strategy [8,9,22]. These two strategies alleviate overfitting and greatly boost performance on unseen object instances and even categories. Finally, we propose a DAgger [33] + behavior cloning strategy to carefully distill the expert policy to the student and thus maintain the high performance of the expert.
Through extensive experiments in simulation, we validate our design choices and demonstrate that our approach outperforms previous methods by a significant margin, especially for unseen object categories. We also show real-world experiments.
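A small sketch of the part-based canonicalization idea described above: observed points (and, analogously, gripper poses) are re-expressed in the coordinate frame of the part being manipulated, so that the expert sees a task that varies much less across objects. The exact state layout of PartManip is not reproduced here; names and conventions are illustrative.

    import torch

    def canonicalize_to_part_frame(points, part_rotation, part_translation):
        """Express world-frame 3D points (N, 3) in the part's local coordinate frame.
        part_rotation: (3, 3) rotation of the part frame; part_translation: (3,) its origin."""
        # World point p = R x + t  =>  local point x = R^T (p - t); applied row-wise below.
        return (points - part_translation) @ part_rotation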
Inoue_LayoutDM_Discrete_Diffusion_Model_for_Controllable_Layout_Generation_CVPR_2023
Abstract Controllable layout generation aims at synthesizing a plausible arrangement of element bounding boxes with optional constraints, such as the type or position of a specific element. In this work, we try to solve a broad range of layout generation tasks in a single model that is based on discrete state-space diffusion models. Our model, named LayoutDM, naturally handles the structured layout data in the discrete representation and learns to progressively infer a noiseless layout from the initial input, where we model the layout corruption process by modality-wise discrete diffusion. For conditional generation, we propose to inject layout constraints in the form of masking or logit adjustment during inference. We show in the experiments that our LayoutDM successfully generates high-quality layouts and outperforms both task-specific and task-agnostic baselines on several layout tasks.
1. Introduction Graphic layouts play a critical role in visual communication. Automatically creating a visually pleasing layout has tremendous application benefits that range from the authoring of printed media [45] to designing application user interfaces [5], and there has been growing research interest in the community. The task of layout generation considers the arrangement of elements, where each element has a tuple of attributes, such as category, position, or size, and, depending on the task setup, there can be optional control inputs that specify part of the elements or attributes. Due to the structured nature of layout data, it is crucial to consider relationships between elements during generation. For this reason, current generation approaches either build an autoregressive model [2,11] or develop a dedicated inference strategy to explicitly consider relationships [19–21].
In this paper, we propose to utilize discrete state-space diffusion models [3,9,14] for layout generation tasks (code and models: https://cyberagentailab.github.io/layout-dm). Diffusion models have shown promising performance for various generation tasks, including images and text [13]. We formulate the diffusion process for layout structure by modality-wise discrete diffusion, and train a denoising backbone network to progressively infer the complete layout with or without conditional inputs. To support variable-length layout data, we extend the discrete state space with a special PAD token instead of the typical end-of-sequence token used in autoregressive models. Our model can incorporate complex layout constraints via logit adjustment, so that we can refine an existing layout or impose relative size constraints between elements without additional training.
Figure 1. Overview of LayoutDM. Top: LayoutDM is trained to gradually generate a complete layout from a blank state in discrete state space. Bottom: During sampling, we can steer LayoutDM to perform various conditional generation tasks without additional training or external models.
We discuss two key advantages of LayoutDM over existing models for conditional layout generation. Our model avoids the immutable dependency chain issue [20] that happens in autoregressive models [11]. Autoregressive models fail to perform conditional generation when the condition disagrees with the pre-defined generation order of elements and attributes. Unlike non-autoregressive models [20], our model can generate variable-length element sets. We empirically show in Sec. 4.5 that naively extending non-autoregressive models by padding results in suboptimal variable-length generation, while padding combined with our diffusion formulation leads to significant improvement.
We evaluate LayoutDM on various layout generation tasks tackled by previous works [20,21,33,36] using two large-scale datasets, Rico [5] and PubLayNet [45]. LayoutDM outperforms task-agnostic baselines in the majority of cases and shows promising performance compared with task-specific baselines.
We further conduct an ablation study to demonstrate the significant impact of our design choices in LayoutDM, including the quantization of continuous variables and the positional embedding. We summarize our contributions as follows:
• We formulate the discrete diffusion process for layout generation and propose modality-wise diffusion and a padding approach to model highly structured layout data.
• We propose to inject complex layout constraints via masking and logit adjustment during inference, so that our model can solve diverse tasks in a single model.
• We empirically show solid performance on various conditional layout generation tasks on public datasets.
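A minimal sketch (assumed shapes, not the released code) of the masking-style constraint injection mentioned above: at every sampling step, the denoiser's logits for tokens whose values the user has fixed are overridden with a one-hot-like distribution, so the conditioned attributes can never change.

    import torch

    def apply_condition(logits, known_tokens, known_mask, neg_inf=-1e9):
        """logits: (B, L, V) per-token categorical logits from the denoiser.
        known_tokens: (B, L) user-specified token ids; known_mask: (B, L) True where a token is fixed."""
        forced = torch.full_like(logits, neg_inf).scatter(-1, known_tokens.unsqueeze(-1), 0.0)
        # Where a token is conditioned, replace the model's logits; elsewhere keep them unchanged.
        return torch.where(known_mask.unsqueeze(-1), forced, logits)

Softer constraints (e.g., relative size relations) could be expressed in the same spirit by adding a bias to the logits instead of fully overriding them.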
Heppert_CARTO_Category_and_Joint_Agnostic_Reconstruction_of_ARTiculated_Objects_CVPR_2023
Abstract We present CARTO, a novel approach for reconstructing multiple articulated objects from a single stereo RGB observation. We use implicit object-centric representations and learn a single geometry and articulation decoder for multiple object categories. Despite training on multiple categories, our decoder achieves a reconstruction accuracy comparable to methods that train bespoke decoders separately for each category. Combined with our stereo image encoder, we infer the 3D shape, 6D pose, size, joint type, and joint state of multiple unknown objects in a single forward pass. Our method achieves a 20.4% absolute improvement in mAP 3D IoU50 for novel instances when compared to a two-stage pipeline. Inference is fast and runs at 1 Hz on an NVIDIA TITAN XP GPU when eight or fewer objects are present. While only trained on simulated data, CARTO transfers to real-world object instances. Code and evaluation data are available at: carto.cs.uni-freiburg.de
1. Introduction Reconstructing 3D shapes and inferring the 6D pose and size of objects from partially observed inputs remains a fundamental problem in computer vision, with applications in robotics [10,11,13,20] and AR/VR [8,44]. This object-centric 3D scene understanding problem is challenging and under-constrained, since inferring 6D pose and shape can be ambiguous without prior knowledge about the object of interest. Previous work has shown that it is possible to perform category-level 3D shape reconstruction and 6D pose estimation in real time [7], enabling the reconstruction of complete, fine-grained 3D shapes and textures. However, there is a wide variety of real-world objects that do not have a constant shape but can be articulated according to the object's underlying kinematics. There has been great progress in articulated object tracking [5,9,32,37] and reconstruction [10,26] from a sequence of observations. However, a sequence of observations is cumbersome, since it often requires prior interaction with the environment. In contrast, object reconstruction from a single stereo image, through inferring latent information about an object a priori, enables both grasping and manipulation of previously unknown articulated objects. Additionally, estimates from a single image can also serve as a good initial guess for object tracking approaches [37].
Previous approaches to articulated object reconstruction from a single observation use a two-stage approach [16] in which objects are first detected using, e.g., Mask-RCNN [4]. Then, based on the detection output, object properties, e.g., part poses and NOCS maps [35], are predicted and the object is reconstructed using backward optimization [24]. Such an approach is complex, error-prone, does not scale across many categories, and does not run in real time.
We then quantitatively compare our full pipeline to a two-stage approach on synthetic data and show qualitative results on a new real-world dataset. Our main contributions can be summarized as follows: •An approach for learning a shape and joint decoder jointly in a category-and joint-agnostic manner. •A single shot method, which in addition to predicting 3D shapes and 6D pose, also predicts the articulation amount and type (prismatic or revolute) for each object. • Large-scale synthetic and annotated real-world evalua-tion data for a set of articulated objects across 7 cate-gories. • Training and evaluation code for our method.
Achlioptas_ShapeTalk_A_Language_Dataset_and_Framework_for_3D_Shape_Edits_CVPR_2023
Abstract Editing 3D geometry is a challenging task requiring specialized skills. In this work, we aim to facilitate the task of editing the geometry of 3D models through the use of natural language. For example, we may want to modify a 3D chair model to “make its legs thinner” or to “open a hole in its back”. To tackle this problem in a manner that promotes open-ended language use and enables fine-grained shape edits, we introduce the most extensive existing corpus of natural language utterances describing shape differences: ShapeTalk. ShapeTalk contains over half a million discriminative utterances produced by contrasting the shapes of common 3D objects for a variety of object classes and degrees of similarity. We also introduce a generic framework, ChangeIt3D, which builds on ShapeTalk and can use an arbitrary 3D generative model of shapes to produce edits that align the output better with the edit or deformation description. Finally, we introduce metrics for the quantitative evaluation of language-assisted shape editing methods that reflect key desiderata within this editing setup. We note that ShapeTalk allows methods to be trained with explicit 3D-to-language data, bypassing the necessity of “lifting” 2D to 3D using methods like neural rendering, as required by extant 2D image-language foundation models. Our code and data are publicly available at https://changeit3d.github.io/ .
1. Introduction Visual content creation and adaptation, whether in 2D or 3D scenes, has traditionally been a time-consuming effort, requiring specialized skills, software, and multiple iterations. The use of natural language promises to democratize this process and let ordinary users perform semantically plausible content synthesis, as well as addition, deletion, and modification, by describing their intent in words, and then letting AI-powered tools translate that into edits of their content. There has been very strong recent interest in, and impressive results from, large visual language models able to transform text into 2D images, such as DALL-E 2 from OpenAI or Imagen from Google. The same need exists for 3D asset creation for video games, movies, as well as mixed-reality experiences, though fully automated tools in the 3D area are only now starting to appear [26,33,35]. The task of editing 2D or 3D content via language is even more challenging, as references to extant scene components have to be resolved, while unreferenced parts of the scene should be kept unchanged as much as possible.
This work focuses on the task of modifying the shape of a 3D object in a fine-grained manner according to the semantics of free-form natural language. Operating directly in a 3D representation has many advantages for downstream tasks that need to be 3D-aware, such as scene composition and manipulation, interaction, etc. Even if only 2D views are needed, 3D provides superior attribute disentanglement and guarantees view consistency. Furthermore, note that modifying the 3D geometry of an object in ways that are faithful to its class semantics is itself a highly non-trivial undertaking (e.g., stretching a sedan should keep the wheels circular) and has been the focus of recent work [44,45].
Our language-driven shape deformation task is applicable to many real-world situations, e.g., assisting visually-impaired users, graphic designers, or artists to interact with objects of interest and change them to better fit their design needs. We build a framework, ChangeIt3D, to address this task, consisting of three major components: the ShapeTalk large-scale dataset with an order of magnitude more utterances than in previous work (Section 2), a modular architecture for implementing edits on top of a variety of 3D shape representations, and a set of evaluation metrics to quantify the quality of the performed transformations.
The ShapeTalk dataset, linking 3D shapes and free-form language, contains over half a million discriminative utterances produced by contrasting pairs of common 3D objects for a variety of object classes and degrees of similarity. Shape differentiation helps focus the language on fine-grained
but important differences that might not rise to the surface when we describe object geometry individually, as in PartIt [19], where clearly different geometries can end up with very similar descriptions because they share a common underlying structure. Furthermore, unlike the dataset used by ShapeGlot [5], our goal is to obtain as complete descriptions of the geometry differences between two objects as possible, with the goal of enabling reconstruction of the differing object from the reference object and the language, going well beyond simple discrimination. Examples of utterances in ShapeTalk are provided in Figure 1.
Figure 1. Samples of contrastive utterances in ShapeTalk (e.g., "It has an arm for wall mounting", "The seat is thinner", "Its blade is not serrated"). For each paired distractor-target object, ultra-fine-grained shape differences are enumerated by an annotator in decreasing order of importance in the annotator's judgment. Interestingly, both continuous and discrete geometric features that objects share across categories naturally emerge in the language of ShapeTalk; e.g., humans describe the "thinness" of a chair leg or a vase lip (top row) or the presence of an "arm" that a lamp or a clock might have (bottom row).
We approach the task of language-based shape editing by enabling shape edits and deformations on top of a variety of 3D generative models of shapes, including Point-Cloud Auto-Encoders (PC-AE) [4], implicit neural methods (ImNet) [11], and Shape Gradient Fields (SGF) [8]. To this end, we train a network on ShapeTalk for the discriminative task of identifying the target within a distractor-target pair (examples in Figure 1) and show that the same network can guide edits done directly inside the latent spaces of these generative models. We note that a great deal of ShapeTalk refers to shape parts. Even though the underlying shape representations we deploy do not have explicit knowledge of parts, we demonstrate that our framework can apply a variety of part-based edits and deformations. This confirms a remarkable finding, already described in [24] and [20], that the notion of parts can be learned from language alone, without any geometric part supervision.
As already mentioned, making edits to an existing shape is more demanding than ab initio shape generation, as (a) it requires understanding of the input shape and its relation to the modification language, and (b) changes to parts not referenced in the modification utterance should be avoided. Hence, a further contribution of our work is a set of evaluation metrics for modification success and quality, reflecting the realism of the resulting shape, faithfulness to the language instructions, and stability, i.e., the avoidance of unnecessary changes. Such metrics are essential for encouraging further progress in the field.
In summary, this work introduces (1) a new large-scale multimodal dataset, ShapeTalk, with referential language that differentiates shapes of common objects with rich levels of detail, enabling a new setup for doing language-driven shape deformations directly in 3D. We approach the task of language-based shape editing with (2) a modular framework supporting diverse 3D shape representations and implementing fine-grained edits guided by a 3D-aware neural-listening network. To set the stage for future developments on the task, we introduce (3) a set of intuitive evaluation metrics for the shape edits and deformations performed.
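The record above describes guiding edits directly inside the latent spaces of pretrained 3D generative models using the trained neural listener. As a rough illustration only, the PyTorch sketch below performs listener-guided gradient optimization of a shape latent while penalizing unnecessary change; the `listener` interface, latent sizes, and regularization weight are hypothetical stand-ins, not the authors' released ChangeIt3D implementation.

```python
import torch

def language_guided_edit(z_source, text_emb, listener, num_steps=50, lr=0.05, reg=1.0):
    """Edit a shape latent by gradient ascent on a neural listener's score.

    z_source: (D,) latent of the shape to edit (e.g., from a pretrained shape AE).
    text_emb: (E,) embedding of the modification utterance.
    listener: callable (z_edit, z_source, text_emb) -> scalar logit; higher means
              "z_edit matches the utterance better than z_source does".
    """
    z_edit = z_source.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z_edit], lr=lr)
    for _ in range(num_steps):
        opt.zero_grad()
        score = listener(z_edit, z_source, text_emb)      # push toward the described change
        penalty = reg * (z_edit - z_source).pow(2).sum()  # discourage unnecessary changes
        (-score + penalty).backward()
        opt.step()
    return z_edit.detach()

# Toy usage with a stand-in listener; shapes and networks are placeholders.
if __name__ == "__main__":
    D, E = 128, 64
    mlp = torch.nn.Sequential(torch.nn.Linear(2 * D + E, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
    listener = lambda z_e, z_s, t: mlp(torch.cat([z_e, z_s, t])).squeeze()
    z_edited = language_guided_edit(torch.randn(D), torch.randn(E), listener)
    print(z_edited.shape)
```

The edited latent would then be decoded by whichever generative backbone (PC-AE, ImNet, SGF) produced it; that decoding step is omitted here.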
Cao_Event-Guided_Person_Re-Identification_via_Sparse-Dense_Complementary_Learning_CVPR_2023
Abstract Video-based person re-identification (Re-ID) is a prominent computer vision topic due to its wide range of video surveillance applications. Most existing methods utilize spatial and temporal correlations in frame sequences to obtain discriminative person features. However, inevitable degradation, e.g., motion blur contained in frames, leads to the loss of identity-discriminating cues. Recently, a new bio-inspired sensor called the event camera, which can asynchronously record intensity changes, brings new vitality to the Re-ID task. With microsecond resolution and low latency, it can accurately capture the movements of pedestrians even in degraded environments. In this work, we propose a Sparse-Dense Complementary Learning (SDCL) framework, which effectively extracts identity features by fully exploiting the complementary information of dense frames and sparse events. Specifically, for frames, we build a CNN-based module to aggregate the dense features of pedestrian appearance step by step, while for event streams, we design a bio-inspired spiking neural network (SNN) backbone, which encodes event signals into sparse feature maps in a spiking form, to extract the dynamic motion cues of pedestrians. Finally, a cross feature alignment module is constructed to fuse motion information from events and appearance cues from frames to enhance identity representation learning. Experiments on several benchmarks show that by incorporating events and SNNs into Re-ID, our method significantly outperforms competitive methods. The code is available at https://github.com/Chengzhi-Cao/SDCL. Figure 1. Visual examples of learned feature maps. From top to bottom: (a) original images, (b) corresponding events, (c) feature maps of events, (d) feature maps of frames in PSTA [49] (w/o events), (e) feature maps of frames in our network (w/ events).
1. Introduction Person re-identification (Re-ID) identifies a specific per-son in non-overlapping camera networks and is used in a variety of surveillance applications [15, 31, 32]. Due to the availability of video data, video-based person Re-ID has at-tracted considerable attention. Compared with image-based Re-ID methods, video sequences contain numerous detailed This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 17990 spatial and temporal information, which is beneficial to im-proving Re-ID performance [34, 46, 50]. Most existing video-based Re-ID approaches rely on spatial and temporal correlation modules, which are use-ful for deriving human representations that are resistant to temporal changes and noisy regions [19,33,51,53]. To gen-erate a person’s representation from a video, they focus on shared information across numerous frames while taking into consideration the temporal context. Although video data can provide a wealth of appearance cues for identity representation learning, they also bring motion blur, illu-mination variations, and occlusions [24, 54]. These data-inherent phenomena result in the loss and ambiguity of es-sential identity-discriminating shape cues, and cannot be well solved by existing video-based Re-ID solutions [43]. Instead of depending solely on video sequences, this work intends to exploit event streams captured by event cameras to compensate for lost information and guide feature extraction in frames [44, 61]. Since the novel bio-inspired event camera can record per-pixel intensity changes asynchronously, it has high temporal resolution, high dynamic range, and low latency [12], providing a new perspective for person Re-ID. In other words, unlike tradi-tional cameras that capture dense RGB pixels at a fixed rate, the event camera can accurately encode the time, location, and sign of the brightness changes [37, 41], offering robust motion information to identify a specific person. In this paper, we propose a sparse-dense complemen-tary learning network (SDCL) to fully extract complemen-tary features of consecutive dense frames and sparse event streams for video-based person Re-ID. First, for dense video sequences, we build a CNNs-based backbone to ag-gregate frame-level features step-by-step. For sparse event streams, we design a deformable spiking neural network to suit the sparse and asynchronous characteristics of events. Because spiking neural network (SNN) has a specific event-triggered computation characteristic that can respond to the events in a nearly latency-free way, it is naturally fit for processing events and can preserve the spatial and tempo-ral information of events by utilizing a discretized input representation. Meanwhile, we introduce deformable op-eration to deal with the degradation of spikes in deeper layers of SNN, better utilizing the spatial distribution of events to guide the deformation of the sampling grid. Fi-nally, to jointly utilize sparse-dense complementary infor-mation, we propose a cross-feature alignment module to exploit the clear movement information from events and ap-pearance cues from frames to enhance representation capac-ity. As shown in Figure 1, the feature maps of events still preserve the sparse distribution of events, which can guide the baseline to capture and learn discriminative representa-tion clearly. 
Compared with the baseline (without events) in the fourth row, the learned feature maps in the fifth row show that our method tends to focus on the most important semantic regions in the original frames and more easily selects the better-represented areas. The representation of events shows the contour and pose of a specific person, indicating that the sparse events can guide the baseline network to capture and learn discriminative representations clearly. The learned feature maps of dense RGB frames and sparse events tend to capture different semantic regions, yet they remain spatially correlated, and both contribute to the final results. This work makes the following contributions: • We introduce a new modality, event streams, and explore its dynamic properties to guide person Re-ID. To the best of our knowledge, this is the first event-guided solution for the video-based Re-ID task. • We propose a sparse-dense complementary learning network that fully utilizes sparse events and dense frames simultaneously to enhance identity representation learning in degraded conditions. • We design a deformable spiking neural network suited to the sparse characteristics of event streams, which exploits the spatial consistency of events to provide motion information for dense RGB frames in a lightweight architecture. Extensive experiments are conducted on multiple datasets to demonstrate how the bio-inspired event camera can improve the Re-ID performance of baseline models and achieve higher retrieval accuracy than state-of-the-art methods.
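Since the rest of the record assumes familiarity with event data, the sketch below shows one common way to discretize an asynchronous stream of (x, y, t, polarity) events into a fixed number of temporal bins before it is fed to a spiking or convolutional backbone. The binning scheme and tensor layout are illustrative assumptions, not the exact preprocessing used by SDCL.

```python
import torch

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate asynchronous events into a (num_bins, H, W) grid.

    events: (N, 4) tensor with columns [x, y, t, polarity], polarity in {-1, +1}.
    """
    grid = torch.zeros(num_bins, height, width)
    if events.numel() == 0:
        return grid
    x, y = events[:, 0].long(), events[:, 1].long()
    t, p = events[:, 2], events[:, 3]
    # Normalize timestamps to [0, num_bins - 1] and assign each event to a temporal bin.
    t_norm = (t - t.min()) / (t.max() - t.min() + 1e-9) * (num_bins - 1)
    b = t_norm.round().long().clamp(0, num_bins - 1)
    grid.index_put_((b, y, x), p.float(), accumulate=True)
    return grid

# Toy usage: 1000 random events on a 64x64 sensor, binned into 5 temporal slices.
if __name__ == "__main__":
    n = 1000
    ev = torch.stack([
        torch.randint(0, 64, (n,)).float(),        # x
        torch.randint(0, 64, (n,)).float(),        # y
        torch.rand(n),                             # t
        torch.randint(0, 2, (n,)).float() * 2 - 1  # polarity
    ], dim=1)
    print(events_to_voxel_grid(ev, num_bins=5, height=64, width=64).shape)
```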
Jang_Unsupervised_Contour_Tracking_of_Live_Cells_by_Mechanical_and_Cycle_CVPR_2023
Abstract Analyzing the dynamic changes of cellular morphology is important for understanding the various functions and characteristics of live cells, including stem cells and metastatic cancer cells. To this end, we need to track all points on the highly deformable cellular contour in every frame of a live cell video. Local shapes and textures on the contour are not evident, and their motions are complex, often with expansion and contraction of local contour features. Prior art on optical flow or deep point-set tracking is unsuited to the fluidity of cells, and previous deep contour tracking does not consider point correspondence. We propose the first deep learning-based tracking of cellular (or, more generally, viscoelastic material) contours with point correspondence, achieved by fusing dense representations between two contours with cross attention. Since it is impractical to manually label dense tracking points on the contour, unsupervised learning composed of mechanical and cycle consistency losses is proposed to train our contour tracker. The mechanical loss, which forces the points to move perpendicular to the contour, proves particularly effective. For quantitative evaluation, we labeled sparse tracking points along the contours of live cells from two live cell datasets taken with phase contrast and confocal fluorescence microscopes. Our contour tracker quantitatively outperforms the compared methods and produces qualitatively more favorable results. Our code and data are publicly available at https://github.com/JunbongJang/contour-tracking/
1. Introduction During cell migration, cells change their morphology by expanding or contracting their plasma membranes continu-ously like viscoelastic materials [ 21]. The dynamic change in the morphology of a live cell is called cellular morpho-* Corresponding authors: kimtaekyun@kaist.ac.kr and kwonmoo.lee@childrens.harvard.edu Figure 1. Visualization of contour tracking results. Dense point correspondences between adjacent contours are shown with white arrows overlaid on the first frame. The first frame’s contour points are in dark green, and the last frame’s contour points are in red. Only half of the contour points and correspondences are shown for visualization purposes. The trajectories of a few tracked points are shown on the right. dynamics and ranges from cellular to the subcellular move-ment of contour at varying spatiotemporal scales. While cellular morphodynamics plays a vital role in angiogene-sis, immune response, stem cell differentiation, and cancer invasiveness [ 6,17], it is challenging to understand the var-ious functions of cellular morphodynamics because its un-characterized heterogeneity could mask crucial mechanistic details. As an initial step to understanding cellular morpho-dynamics, cellular morphodynamics is quantified by track-ing every point along the cellular contour (contour track-ing) and estimating their velocity [ 12,13,21]. Then, quan-tification of cellular morphodynamics is further processed by other downstream machine learning tasks to characterize the drug-sensitive morphodynamic phenotypes with distinct molecular mechanisms [ 6,20,36]. Because contour tracking (e.g., Fig. 1) is the important first step, the tracking accuracy This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 227 is crucial in this live cell analysis. There are two main difficulties involved with contour tracking of a live cell. First, the live cell’s contour exhibits visual features that can be difficult to distinguish by human eyes, meaning that a pixel and its neighboring pixels have similar color values or features. Optical flow [ 14,31] can track every pixel in the current frame by assuming that the corresponding pixel in the next frame will have the same distinct feature, but this assumption is not sufficient to find corresponding pixels given cellular visual features. Second, the expansion and contraction of the cellular contour change the total number of tracking points due to one point split-ting into many points or many points converging into one. PoST [ 24] tracks a fixed number of a sparse set of points that cannot accurately represent the fluctuating shape of the cellular contour. Other deep contour tracking or video seg-mentation methods [ 10,27,39] do not provide dense point-to-point correspondence information between a contour and its next contour. Previous cellular contour tracking method (mechanical model) [ 21] evades the first problem by taking the segmen-tation of the cell body as inputs instead of raw images. Then, it finds the dense correspondences of all points be-tween two contours by minimizing the normal torsion force and linear spring force with the Marquard-Levenberg algo-rithm [ 23]. However, the mechanical model has limited ac-curacy because it does not consider visual features in raw images. 
Also, its linear spring force which keeps every dis-tance between points the same is less effective during the expansion and contraction of the cell, as shown in our ex-periments (see Tab. 1). Therefore, we present a deep learning-based contour tracker that can overcome these difficulties. Our contour tracker is comprised of a feature encoder, two cross atten-tions [ 35], and a fully connected neural network (FCNN) for offset regression, as shown in Fig. 2. Given two consec-utive images and their contours represented as a sequence of points, our contour tracker encodes the visual features of two images and samples their feature at the location of contours. The sampling makes our contour tracker focus on contour features and reduces the noise from irrelevant features unlike optical flow [ 14]. The cross attention [ 35] fuses the sampled features from two contours globally and locally and regresses the offset for each contour point of the first frame. To obtain the dense point-to-point correspon-dences between the current and the next contours, offset points from the current contour are matched with the closest contour points in the next frame. In every frame, some con-tour points merge due to contraction, so new contour points emerge in the next frame as shown in Fig. 1. With dense point-to-point correspondences, new contour points in the next contour are also tracked. The proposed architectural design achieves the best accuracy among variants, includingcircular convolutions [ 26], and correspondence matrix [ 4]. To the best of our knowledge, this is the first deep learning-based contour tracking with dense point-to-point correspon-dences for live cells. In this contour tracking, supervised learning is not feasi-ble because it is difficult to label every point of the contour manually. Instead, we propose to train our contour tracker solely by unsupervised learning comprised of mechanical and cycle consistency losses. Inspired by the mechanical model [ 21] that minimizes the normal torsion and linear spring force, we introduce the mechanical losses to end-to-end learning. The mechanical-normal loss that keeps the angle difference small between the offset point and the di-rection normal to the cellular contour played a significant role in boosting accuracy. Also, we implement cycle consis-tency loss to encourage all contour points tracked forward-then-backward to return to their original location. How-ever, previous approaches such as PoST [ 24] and Anima-tion Transformer (AnT) [ 4] rely on supervised learning in addition to cycle consistency loss or find mid-level corre-spondences [ 38] instead of pixel-level correspondences. We evaluate our contour tracker on the live cell dataset taken with a phase contrast microscope [ 13] and another live cell dataset taken with a confocal fluorescence micro-scope [ 36]. For a quantitative comparison of contour track-ing methods, we labeled sparse tracking points on the con-tour of live cells for all sampled frames. In total, we la-beled 13 live cell videos for evaluation. Evaluation with a sparse set of points is motivated by the fact that if tracking of dense contour points is accurate, tracking any one of con-tour points should be accurate also. We also qualitatively show our contour tracker works on another viscoelastic or-ganism, jellyfish [ 30]. Our contributions are summarized as follows. • We propose the first deep learning-based model that tracks cellular contours densely while surpassing the accuracy of other methods. 
• We present an unsupervised learning strategy based on a mechanical loss and a cycle consistency loss for contour tracking. • We demonstrate that the use of forward and backward cross attention with cycle consistency has a synergistic effect on finding accurate dense correspondences. • We label tracking points in the live cell videos and quantitatively evaluate cellular contour tracking for the first time. Figure 2. Our architecture on the left and unsupervised learning losses on the right. A shared encoder comprised of VGG16 and FPN encodes the first and second images. Point features are sampled at the locations of the ordered contour points, indicated by rainbow colors from red to purple, and are input as query or as key and value to the cross attentions. Lastly, a shared FCNN takes the fused features and regresses the forward O_{t→t+1} or backward O_{t+1→t} offsets. The cycle consistency, mechanical-normal, and mechanical-linear losses are shown in red.
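The two unsupervised losses introduced above can be written compactly. The sketch below assumes contours are ordered, closed 2D point sequences and that offsets are regressed in both temporal directions; it conveys the idea of the cycle consistency and mechanical-normal terms rather than reproducing the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def contour_normals(points):
    """Unit normals of an ordered, closed 2D contour of shape (N, 2)."""
    tangents = torch.roll(points, -1, dims=0) - torch.roll(points, 1, dims=0)
    normals = torch.stack([-tangents[:, 1], tangents[:, 0]], dim=1)
    return F.normalize(normals, dim=1)

def mechanical_normal_loss(offsets, points):
    """Penalize offset components that deviate from the contour normal direction."""
    n = contour_normals(points)
    o = F.normalize(offsets + 1e-9, dim=1)
    cos = (o * n).sum(dim=1)
    return (1.0 - cos.abs()).mean()

def cycle_consistency_loss(points, offsets_fwd, offsets_bwd_fn):
    """Track forward then backward; tracked points should return to where they started."""
    fwd = points + offsets_fwd
    back = fwd + offsets_bwd_fn(fwd)
    return (back - points).pow(2).sum(dim=1).mean()

# Toy usage with a circular contour and small random offsets.
if __name__ == "__main__":
    theta = torch.linspace(0, 2 * torch.pi, 100)[:-1]
    pts = torch.stack([theta.cos(), theta.sin()], dim=1)
    off = 0.05 * torch.randn_like(pts)
    print(mechanical_normal_loss(off, pts).item())
    print(cycle_consistency_loss(pts, off, lambda p: -off).item())
```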
Foo_System-Status-Aware_Adaptive_Network_for_Online_Streaming_Video_Understanding_CVPR_2023
Abstract Recent years have witnessed great progress in deep neural networks for real-time applications. However, most existing works do not explicitly consider the general case where the device's state and the available resources fluctuate over time, and none of them investigate or address the impact of varying computational resources on online video understanding tasks. This paper proposes a System-status-aware Adaptive Network (SAN) that considers the device's real-time state to provide high-quality predictions with low delay. The policy learned by our agent improves efficiency and robustness to fluctuations of the system status. On two widely used video understanding tasks, SAN obtains state-of-the-art performance while consistently keeping processing delays low. Moreover, training such an agent for various types of hardware configurations is not easy, as the labeled training data might not be available or the training can be computationally prohibitive. To address this challenging problem, we propose a Meta Self-supervised Adaptation (MSA) method that adapts the agent's policy to new hardware configurations at test time, allowing for easy deployment of the model onto unseen hardware platforms.
1. Introduction Online video understanding, where certain predictions are immediately made for each video frame by using in-formation in the current frame and potentially past frames, is an important task right at the intersection of video-based research and practical vision applications (e.g., self-driving vehicles [11], security surveillance [4], streaming services [32], and human-computer interactions [20]). In particular, in many of these real-world video-based applications, a fast and timely response is often crucial to ensure high usability and reduce potential security risk. Therefore, in many prac-tical online applications, it is essential to ensure that the model is working with low delay while maintaining a good † Equal contribution; § Currently at Meta; ‡ Corresponding authorperformance, which can be challenging for many existing deep neural networks. Recently, much effort has been made to reduce the de-lay of deep neural networks, including research into effi-cient network design [16,36,44], input-aware dynamic net-works [5,6,12,24], and latency-constrained neural architec-tures [1, 2, 21]. However, all these works do not explicitly consider the dynamic conditions of the hardware platform, and assume stable computation resources are readily avail-able. In practical scenarios, the accessible computing re-sources of the host devices can be fluctuating and dynamic due to the fact that multiple computationally expensive yet important threads are running concurrently. For example, in addition to performing vision-related tasks such as object detection, human activity recognition, and pose estimation, state-of-the-art robotic systems usually need to simultane-ously perform additional tasks like simultaneous localiza-tion and mapping (SLAM) to successfully interact with hu-mans and the environment. Those tasks are also often com-putationally heavy and could compete with vision tasks for computing resources. As a result, at times when the host device is busy with other processes, conducting inference for each model might require significantly more time than usual, leading to extremely long delays, which could cause safety issues and lagging responses in many real-world ap-plications. Therefore, the study and development of models providing reliable yet timely responses under various hard-ware devices and fluctuating computing resources is cru-cially important. Unfortunately, such studies are lacking in the field. To achieve and maintain low delay for online video understanding tasks under a dynamic computing resource budget, we propose a novel System-status-aware Adaptive Network ( SAN ). Different from previous works, SAN ex-plicitly considers the system status of its host device to make on-the-fly adjustments to its computational complexity , and is thus capable of processing video streams effectively and efficiently in a dynamic system environment. SAN com-prises of two components: a) a simple yet effective dynamic main module that offers reliable predictions under various This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 10514 network depths and input resolutions; b) a lightweight agent that learns a dynamic system-status-aware policy used to control the execution of the main module, which facilitates adaptation to the fluctuating system load. 
With the adaptiv-ity of the main module and the control policy generated by the agent, our SAN can achieve good performance on the online video understanding task while maintaining a low delay under fluctuating system loads. In various applications, we may need to deploy SAN onto different hardware platforms for online video under-standing. However, it is inconvenient to train SAN for each hardware platform, and it might also be difficult to find adequate storage to load the large labeled dataset on all platforms (e.g., mobile devices). In light of these dif-ficulties, we further propose a method for deployment-time self-supervised agent adaptation , which we call MetaSelf-supervised Adaptation (MSA). With MSA, we can conve-niently train a SAN model on a set of local platforms, and perform a quick deployment-time agent adaptation on a tar-get device, without the need for the original labeled training data. Specifically, our proposed MSA introduces an auxil-iary task of delay prediction together with a meta-learning procedure, that facilitates the adaptation to the target de-ployment device. In summary, the main contributions of this paper are: • We are the first to explicitly consider the fluctuating system status of the hardware device at inference time for online video understanding. To address this, we propose SAN , a novel system-status-aware network that adapts its behavior according to the video stream and the real-time status of the host system. • We further propose a novel Meta Self-supervised Adaptation method MSA that alleviates the training burden and allows our model to effectively adapt to new host devices with potentially unclear computation profiles at deployment time. • We empirically demonstrate that our proposed method achieves promising performance on the challenging online action recognition and pose estimation tasks, where we achieve low delays under a rapidly fluctu-ating system load without jeopardizing the quality of the predictions.
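To make the idea of a system-status-aware controller concrete, the sketch below shows a minimal agent that maps a frame descriptor plus the current system load and recent delay to a discrete execution policy (an input-resolution index and an exit-depth index). The state variables, action space, and network sizes are illustrative assumptions rather than the actual SAN design.

```python
import torch
import torch.nn as nn

class SystemAwareAgent(nn.Module):
    """Pick an (input resolution, exit depth) pair from the frame and system status."""

    def __init__(self, feat_dim=32, num_resolutions=3, num_depths=4):
        super().__init__()
        self.num_depths = num_depths
        # Inputs: frame descriptor + [current system load, recent average delay].
        self.policy = nn.Sequential(
            nn.Linear(feat_dim + 2, 64), nn.ReLU(),
            nn.Linear(64, num_resolutions * num_depths),
        )

    def forward(self, frame_feat, system_load, recent_delay):
        status = torch.stack([system_load, recent_delay], dim=-1)
        logits = self.policy(torch.cat([frame_feat, status], dim=-1))
        action = logits.argmax(dim=-1)
        return action // self.num_depths, action % self.num_depths  # (resolution idx, depth idx)

# Toy usage: one frame descriptor under 70% system load and 30 ms recent delay.
if __name__ == "__main__":
    agent = SystemAwareAgent()
    res_idx, depth_idx = agent(torch.randn(1, 32), torch.tensor([0.7]), torch.tensor([0.03]))
    print(res_idx.item(), depth_idx.item())
```

In the paper the policy is trained (and adapted at deployment time via MSA); the argmax readout above only illustrates how such a policy would be queried per frame.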
Chung_Parallel_Diffusion_Models_of_Operator_and_Image_for_Blind_Inverse_CVPR_2023
Abstract Diffusion model-based inverse problem solvers have demonstrated state-of-the-art performance in cases where the forward operator is known (i.e., non-blind). However, the applicability of the method to blind inverse problems has yet to be explored. In this work, we show that we can indeed solve a family of blind inverse problems by constructing another diffusion prior for the forward operator. Specifically, parallel reverse diffusion guided by gradients from the intermediate stages enables joint optimization of both the forward operator parameters and the image, such that both are jointly estimated at the end of the parallel reverse diffusion procedure. We show the efficacy of our method on two representative tasks — blind deblurring and imaging through turbulence — and show that it yields state-of-the-art performance, while remaining flexible enough to apply to general blind inverse problems when the functional form of the forward operator is known. Code available: https://github.com/BlindDPS/blind-dps
1. Introduction Inverse problems subsume a wide set of important prob-lems in science and engineering, where the objective is to recover the latent image from the corrupted measurement, generated by the forward operator. Considering the taxon-omy, they can be split into two major categories — non-blind inverse problems, and blind inverse problems. The former considers the cases where the forward operator is known, and hence eases the problem. In contrast, the latter considers the cases where the operator is unknown , and thus the operator needs to be estimated together with the recon-struction of the latent image. The latter problem is consider-ably harder than the former problem, as joint minimization is typically much less stable. In this work, we mainly focus on leveraging generative priors to solve inverse problems in imaging. Among many different generative model classes, diffusion models have established the new state-of-the-art. In diffusion models, we define the forward data noising process, which gradually corrupts the image into white Gaussian noise. The genera-tive process is defined by the reverse of such process, where each step of reverse diffusion is governed by the score func-tion [53]. With the recent surge of diffusion models, it has been demonstrated in literature that diffusion models are not only powerful generative models, but also excellent gener-ative priors to solve inverse problems. Namely, one can ei-ther resort to iterative projections to the measurement sub-space [13, 53], or estimate posterior sampling [11] to ar-rive at feasible solutions that meet the data consistency. For both linear [13,27,53] and some non-linear [11,51] inverse problems, guiding unconditional diffusion models to solve down-stream inverse problems were shown to have stronger performance even when compared to the fully supervised counterparts. Nevertheless, current solvers are strictly limited to cases where the forward operator is known and fixed. For ex-ample, [11, 27] consider non-blind deblurring with known kernels. The problem now boils down to optimizing only for the latent image, since the likelihood can be computed robustly. Unfortunately, in real world problems, knowing the kernel exactly is impractical. It is often the case where the kernel is also unknown, and we have to jointly estimate the image and the kernel. In such cases, not only do we need a prior model of the image, but we also need some proper prior model of the kernel [41, 55]. While conven-tional methods exploit, e.g. patch-based prior [55], sparsity prior [41], etc., they often fall short of accurate modeling of the distribution. In this work, we aim to leverage the ability of diffusion models to act as strong generative priors and propose Blind-DPS (Blind Diffusion Posterior Sampling) — constructing multiple diffusion processes for learning the prior of each component — which enable posterior sampling even whenthe operator is unknown. BlindDPS starts by initializing both the image and the operator parameter with Gaussian noise. Reverse diffusion progresses in parallel for both models, where the cross-talk between the paths are enforced from the approximate likelihood and the measurement, as can be seen in Fig. 2. With our method, both the image and the kernel starts with a coarse estimation, gradually getting closer to the ground truth as t→0(see Fig. 1(c)). 
In fact, our method can be thought of as a coarse-to-fine strategy that naturally admits a Gaussian scale-space representation [29, 36], which can be seen as a continuous generalization of the coarse-to-fine optimization strategy that most optimization-based methods take [41, 44]. Furthermore, our method is generally applicable to cases where we know the structure of the forward model a priori (e.g., convolution). To demonstrate this generality, we further show that our method can also be applied to imaging through turbulence. Our experiments show that the proposed method yields state-of-the-art performance while being generalizable to different inverse problems.
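A single step of the parallel guided sampling can be sketched as follows, assuming DPS-style measurement guidance, a convolutional measurement model y = k * x, and an abstracted unconditional reverse-diffusion update. The interfaces, guidance weights, and stand-in denoisers are placeholders, not the released BlindDPS code.

```python
import torch
import torch.nn.functional as F

def blind_guided_step(x_t, k_t, y, x0_from, k0_from, uncond_step, zeta_x=1.0, zeta_k=0.1):
    """One guided reverse step for the image x and blur kernel k, run in parallel.

    x0_from / k0_from: callables returning the denoised estimates x0_hat, k0_hat at this step.
    uncond_step:       callable implementing the unconditional reverse-diffusion update
                       given (current sample, denoised estimate); reused for both diffusions.
    """
    x_t = x_t.detach().requires_grad_(True)
    k_t = k_t.detach().requires_grad_(True)
    x0_hat, k0_hat = x0_from(x_t), k0_from(k_t)
    # Data-consistency residual evaluated on the denoised estimates (DPS-style guidance).
    y_hat = F.conv2d(x0_hat, k0_hat, padding="same")
    residual = (y - y_hat).pow(2).sum()
    grad_x, grad_k = torch.autograd.grad(residual, (x_t, k_t))
    # Parallel update: unconditional step minus the measurement-guidance gradient.
    x_next = uncond_step(x_t, x0_hat).detach() - zeta_x * grad_x
    k_next = uncond_step(k_t, k0_hat).detach() - zeta_k * grad_k
    return x_next, k_next

# Toy usage with identity "denoisers" and a plain relaxation as the unconditional step.
if __name__ == "__main__":
    x, k, y = torch.randn(1, 1, 32, 32), torch.randn(1, 1, 5, 5), torch.randn(1, 1, 32, 32)
    step = lambda z, z0: 0.9 * z + 0.1 * z0
    xn, kn = blind_guided_step(x, k, y, lambda z: z, lambda z: z, step)
    print(xn.shape, kn.shape)
```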
Choi_Local-Guided_Global_Paired_Similarity_Representation_for_Visual_Reinforcement_Learning_CVPR_2023
Abstract Recent vision-based reinforcement learning (RL) methods have found that extracting high-level features from raw pixels with self-supervised learning is effective for learning policies. However, these methods focus on learning global representations of images and disregard the local spatial structures present in consecutively stacked frames. In this paper, we propose a novel approach, termed self-supervised Paired Similarity Representation Learning (PSRL), for effectively encoding spatial structures in an unsupervised manner. Given the input frames, latent volumes are first generated individually using an encoder and are used to capture the variance in local spatial structures, i.e., correspondence maps among multiple frames. This provides plenty of fine-grained samples for training the encoder of deep RL. We further learn global semantic representations in an action-aware transform module that predicts future state representations using action vectors as a medium. The proposed method imposes similarity constraints on three latent volumes: query representations transformed by the estimated pixel-wise correspondence, query representations predicted by the action-aware transform model, and target representations of the future state, guiding the action-aware transform with the locality-inherent volume. Experimental results on complex tasks in Atari Games and the DeepMind Control Suite demonstrate that RL methods are significantly boosted by the proposed self-supervised learning of paired similarity representations.
1. Introduction Deep reinforcement learning (RL) has been an appealing tool for training agents to solve various tasks including com-plex control and video games [12]. While most approaches have focused on training RL agent under the assumption This work was partly supported by IITP grant funded by the Ko-rea government (MSIT) (No.RS-2022-00155966, AI Convergence Inno-vation Human Resources Development (Ewha Womans University)) and the Mid-Career Researcher Program through the NRF of Korea (NRF-2021R1A2C2011624).†Corresponding author: dbmin@ewha.ac.krthat compact state representations are readily available, this assumption does not hold in the cases where raw visual ob-servations ( e.g.images) are used as inputs for training the deep RL agent. Learning visual features from raw pixels only using a reward function leads to limited performance and low sample efficiency. To address this challenge, a number of deep RL ap-proaches [1,10,38,40,43,44,46] leverage the recent advance of self-supervised learning which effectively extracts high-level features from raw pixels in an unsupervised fashion. In [38, 46], they propose to train the convolutional encoder for pairs of images using a contrastive loss [24,50]. For train-ing the RL agent, given a query and a set of keys consisting of positive and negative samples, they minimize the contrastive loss such that the query matches with the positive sample more than any of the negative samples [38, 46]. While the parameters of the query encoder are updated through back-propagation using the contrastive loss [50], the parameters of the key encoder are computed with an exponential mov-ing average (EMA) of the query encoder parameters. The output representations of the query encoder are passed to the RL algorithm for training the agent. These approaches have shown compelling performance and high sample efficiency on the complex control tasks when compared to existing image-based RL approaches [31, 33, 51]. While these approaches can effectively encode the global semantic representations of images with the self-supervised representation learning, there has been no attention on the local fine-grained structures present in the consecutively stacked images. Our key observation is that spatial defor-mation, i.e., the change in terms of the spatial structures across the consecutive frames, can provide plenty of local samples for training the RL agent. Establishing dense corre-spondence [19, 34, 39, 42, 55], which has been widely used for various tasks such as image registration and recognition in computer vision, can be an appropriate tool in modeling the local spatial deformation. In this work, we propose a novel approach, termed self-supervised Paired Similarity Representation Learning (PSRL), that learns representations for deep RL by effec-tively encoding the spatial structures in a self-supervised This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 15072 fashion. The query representations generated from an en-coder are used to predict the correspondence maps among the input frames. A correspondence aware transform is then applied to generate future representations. 
We further extend our framework by introducing the concept of fu-ture state prediction, originally used for action planning in RL [8,11], into the proposed action aware transform in order to learn temporally-consistent global semantic representa-tions. The proposed method is termed ‘Paired Similarity’ as it encodes both local and global information of agent obser-vations. More structured details of the terms are provided in the supplementary material due to lack of space. To learn the proposed paired similarity representation, we impose similarity constraints on the three representations; trans-formed query representations by the estimated pixel-wise correspondence, predicted query representations from the action aware transform module, and target representations of future state. When applying the paired similarity constraint, the prediction and projection heads of global similarity con-straint are shared with the local constraint head, inducing locality-inherent volume to guide the global prediction. Fi-nally, the well-devised paired similarity representation is then used as input to the RL policy learner. We evaluate the proposed method with two challeng-ing benchmarks including Atari 2600 Games [31, 51] and DMControl Suite [48], which are the common benchmarks adopted to evaluate the performance of recent sample-efficient deep RL algorithms. The proposed method com-petes favorably compared to the state-of-the-arts in 13 out of 26 environments on Atari 2600 Games and in 4 out of 6 tasks on DMControl Suite, in terms of cumulative rewards per episode. We highlight our contributions as follows. •While prior approaches place emphasis only on encod-ing global representations, our method takes advan-tage of spatial deformation to learn local fine-grained structures together, providing sufficient supervision for training the encoder of deep RL. •We propose to impose the paired similarity constraints for visual deep RL by guiding the global prediction heads with locality-inherent volume. •We introduce the action aware transform module to self-supervised framework to learn temporally-consistent instance discriminability by using action as a medium.
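As a rough illustration of the correspondence-aware transform mentioned above, the sketch below estimates a dense correspondence map between two latent volumes with cosine similarity and uses it to warp one volume onto the other. The feature shapes and the softmax matching are assumptions made for illustration, not the exact PSRL modules.

```python
import torch
import torch.nn.functional as F

def correspondence_transform(feat_q, feat_k, temperature=0.07):
    """Warp feat_k onto feat_q using a dense cosine-similarity correspondence map.

    feat_q, feat_k: (C, H, W) latent volumes from consecutive frames.
    Returns a transformed representation of shape (C, H, W).
    """
    c, h, w = feat_q.shape
    q = F.normalize(feat_q.reshape(c, -1), dim=0)     # (C, HW), unit features per location
    k = F.normalize(feat_k.reshape(c, -1), dim=0)     # (C, HW)
    corr = q.t() @ k                                  # (HW, HW) pairwise similarities
    attn = torch.softmax(corr / temperature, dim=-1)  # soft correspondence per query location
    warped = attn @ feat_k.reshape(c, -1).t()         # (HW, C) aggregate matched features
    return warped.t().reshape(c, h, w)

# Toy usage on random 8x8 feature maps with 64 channels.
if __name__ == "__main__":
    a, b = torch.randn(64, 8, 8), torch.randn(64, 8, 8)
    print(correspondence_transform(a, b).shape)
```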
Harenstam-Nielsen_Semidefinite_Relaxations_for_Robust_Multiview_Triangulation_CVPR_2023
Abstract We propose an approach based on convex relaxations for certifiably optimal robust multiview triangulation. To this end, we extend existing relaxation approaches for non-robust multiview triangulation by incorporating a truncated least squares cost function. We propose two formulations, one based on epipolar constraints and one based on fractional reprojection constraints. The first is lower dimensional and remains tight under moderate noise and outlier levels, while the second is higher dimensional and therefore slower, but remains tight even under extreme noise and outlier levels. We demonstrate through extensive experiments that the proposed approaches allow us to compute provably optimal reconstructions even under significant noise and a large percentage of outliers.
1. Introduction Multiview triangulation is the problem of estimating the location of a point in 3D given two or more 2D observa-tions in images taken from cameras with known poses and intrinsics. The 2D observations are typically estimated by some form of feature matching pipeline, so they are always corrupted by noise and outliers. As a result the 3D point cannot be exactly recovered, and instead the solution has to be phrased as a nonconvex optimization problem. While solutions are typically computed using faster but sub-optimal local optimization methods, there have also been efforts to compute globally optimal triangulations us-ing semidefinite relaxations [1,4,13]. These relaxations can work well even in high-noise scenarios, but their practical use remains limited as they are not robust and even a single outlier can deteriorate the result significantly. In this work, inspired by recent advances in semidefinite relaxations for outlier-robust perception [28], we will show that [1, 4] can be extended to also handle significant amounts of outliers. Implementation: github.com/linusnie/robust-triangulation-relaxations (a) 22 views, no outliers (b) 22 views, 19 outliers Figure 1. Example of a triangulated point from the Reichstag dataset. Blue point: ground truth from [12]. Red point: non-robust global optimum found by the relaxation from [1] (see Eq. (T)). Green point: robust global optimum found by our proposed relax-ation in Eq. (RT). Semidefinite relaxations have the advantage of being globally solvable in polynomial time, meaning that they can be used to enable practical certifiably optimal algorithms. After solving the relaxed problem we either have that 1) the relaxation is tight and we provably recover the global opti-mum of the original problem, or 2) the relaxation is prov-ably not tight and we can report failure to find the global optimum. The key metric for the usefulness of a certifiably optimal algorithm is then the percentage of problem cases where the underlying relaxation is tight. Despite their often slower runtime, certifiably optimal methods offer several advantages: Firstly, in safety-critical systems it may be required or desirable to complement the computed solution with some guarantee that the solver is not stuck in a local optima. Secondly, in many offline appli-cations runtime is actually not as critical and then one may want to trade off better accuracy for extra runtime. Thirdly, globally optimal solutions of real-world problems can serve This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 749 as ground truth for assessing the performance of local opti-mization methods. In this work, we demonstrate a certifiably optimal ap-proach to robust triangulation by developing two convex re-laxations for the truncated least squares cost function. En-abling the combination of robustness with the capacity to compute certifiably optimal solutions. Our main contribu-tions can be summarized as follows: • We extend the convex triangulation methods from [1] and [4] with a truncated least squares cost function and propose two corresponding convex relaxations. • We validate empirically that both relaxations remain tight even under large amounts of noise and high out-lier ratios. • We show that the relaxations are tight in the noise-free and outlier-free case by explicitly constructing the dual solution. 
To the best of our knowledge, this is the first example of a successful semidefinite relaxation of a robust estimation problem with reprojection errors.
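As background on how robustness enters such relaxations, the truncated least squares (TLS) objective — with truncation threshold c and per-view residual r_i(X) — can be lifted with binary selection variables, which is the standard step that makes the problem amenable to semidefinite (moment/sum-of-squares) relaxation. The notation below is generic and is not copied from the paper:

```latex
\min_{X} \sum_{i=1}^{N} \min\!\left( \lVert r_i(X) \rVert^2,\; c^2 \right)
\;=\;
\min_{X,\;\theta \in \{0,1\}^N} \sum_{i=1}^{N}
\Big( \theta_i \, \lVert r_i(X) \rVert^2 + (1-\theta_i)\, c^2 \Big),
\qquad \theta_i^2 = \theta_i .
```

Since the binary constraints are polynomial, the joint problem in (X, θ) can be relaxed to a semidefinite program whose tightness can be checked a posteriori, which is what enables the certifiably optimal behavior discussed above.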
Gan_Collaborative_Noisy_Label_Cleaner_Learning_Scene-Aware_Trailers_for_Multi-Modal_Highlight_CVPR_2023
Abstract Movie highlights stand out from the screenplay for efficient browsing and play a crucial role on social media platforms. Based on existing efforts, this work makes two observations: (1) for different annotators, labeling highlights involves uncertainty, which leads to inaccurate and time-consuming annotations; and (2) beyond previous supervised or unsupervised settings, some existing video corpora can be useful, e.g., trailers, but they are often noisy and incomplete in covering the full highlights. In this work, we study a more practical and promising setting, i.e., reformulating highlight detection as “learning with noisy labels”. This setting does not require time-consuming manual annotations and can fully utilize existing abundant video corpora. First, based on movie trailers, we leverage scene segmentation to obtain complete shots, which are regarded as noisy labels. Then, we propose a Collaborative noisy Label Cleaner (CLC) framework to learn from noisy highlight moments. CLC consists of two modules: augmented cross-propagation (ACP) and multi-modality cleaning (MMC). The former exploits the closely related audio-visual signals and fuses them to learn unified multi-modal representations. The latter achieves cleaner highlight labels by observing the changes in losses among different modalities. To verify the effectiveness of CLC, we further collect a large-scale highlight dataset named MovieLights. Comprehensive experiments on the MovieLights and YouTube Highlights datasets demonstrate the effectiveness of our approach. Code has been made available at: https://github.com/TencentYoutuResearch/HighlightDetection-CLC
1. Introduction With the growing number of new publications of movies in theaters and streaming media, audiences become even This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18898 harder to choose their favorite one to enjoy for the next two hours. An effective solution is to watch the movie trailers before choosing the right movie. This is because trailers are generally carefully edited by filmmakers and contain the most prominent clips from the original movies. As a condensed version of full-length movies, trailers are elab-orately made with highlight moments to impress the audi-ences. Consequently, they are high potential in serving as supervision sources to train automatic video highlight de-tection algorithms and facilitating the mass production of derivative works for video creators in online video plat-forms, e.g., YouTube and TikTok. Existing video highlight detection (VHD) approaches are generally trained with annotated key moments of long-form videos. However, they are not suitable to tackle the movie highlight detection task by directly learning from trailers. The edited shots in trailers are not equivalent to ground-truth highlight annotations in movies. Although a previous work [43] leverages the officially-released trail-ers as the weak supervision to train a highlight detector, the highlighted ness of trailer shots is extremely noisy and varies with the preference of audiences, as shown in Fig. 1. On one hand, trailers tend to be purposefully edited to avoid spoilers, thus missing key moments of the storylines. On the other hand, some less important moments in the orig-inal movies are over-emphasized in the trailers because of some artistic or commercial factors. The subjective nature of trailer shots makes them noisy for the VHD task, which is ignored by existing VHD approaches. To alleviate the issue, we reformulate the highlight de-tection task as “learning with noisy labels”. Specifically, we first leverage a scene-segmentation model to obtain the movie scene boundaries. The clips containing trailers and clips from the same scenes as the trailers provide more com-plete storylines. They have a higher probability of being highlight moments but still contain some noisy moments. Subsequently, we introduce a framework named Collab-orative noisy Label Cleaner (CLC) to learn from these pseudo-noisy labels. The framework firstly enhances the modality perceptual consistency via the augmented cross-propagation (ACP) module, which exploits closely related audio-visual signals during training. In addition, a multi-modality cleaning (MMC) mechanism is designed to filter out noisy and incomplete labels. To support this study and facilitate benchmarking exist-ing methods in this direction, we construct MovieLights, a Movie Highlight Detection Dataset. MovieLights con-tains 174 movies and the highlight moments are all from officially released trailers. The total length of these videos is over 370 hours. We conduct extensive experiments on MovieLights, in which our CLC exhibits promising results. We also demonstrate that our proposed CLC achieves sig-nificant performance-boosting over the state-of-the-art onthe public VHD benchmarks. In summary, our major contributions are as follows: • We introduce a scene-aware paradigm to learn high-light moments in movies without any manual annota-tion. 
To the best of our knowledge, this is the first time that highlight detection is treated as learning with noisy labels. • We present an augmented cross-propagation module to capture interactions across modalities and a consistency loss to maximize agreement between the different modalities. • We incorporate a multi-modality noisy label cleaner to tackle label noise, which further improves the robustness of networks to annotation noise. • Experiments on movie datasets and benchmark datasets validate the effectiveness of our framework.
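The multi-modality cleaning idea of observing losses across modalities can be illustrated with a small-loss selection rule: the sketch below keeps only candidate shots that look clean to both the visual and audio branches. The keep-ratio and the intersection rule are assumptions for illustration, not the exact MMC module.

```python
import torch

def cross_modal_clean_mask(loss_visual, loss_audio, keep_ratio=0.7):
    """Select likely-clean highlight candidates from per-sample losses of two modalities.

    Following the small-loss assumption used in learning with noisy labels, a sample
    is kept only if it ranks among the smallest-loss samples in *both* modalities.
    """
    num_keep = max(1, int(keep_ratio * loss_visual.numel()))
    keep_v = torch.zeros_like(loss_visual, dtype=torch.bool)
    keep_a = torch.zeros_like(loss_audio, dtype=torch.bool)
    keep_v[loss_visual.topk(num_keep, largest=False).indices] = True
    keep_a[loss_audio.topk(num_keep, largest=False).indices] = True
    return keep_v & keep_a

# Toy usage: 10 candidate shots with random per-modality losses.
if __name__ == "__main__":
    mask = cross_modal_clean_mask(torch.rand(10), torch.rand(10))
    print(mask)
```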
Cao_Contrastive_Mean_Teacher_for_Domain_Adaptive_Object_Detectors_CVPR_2023
Abstract Object detectors often suffer from the domain gap between training (source domain) and real-world applications (target domain). Mean-teacher self-training is a powerful paradigm in unsupervised domain adaptation for object detection, but it struggles with low-quality pseudo-labels. In this work, we identify the intriguing alignment and synergy between mean-teacher self-training and contrastive learning. Motivated by this, we propose Contrastive Mean Teacher (CMT) – a unified, general-purpose framework with the two paradigms naturally integrated to maximize beneficial learning signals. Instead of using pseudo-labels solely for final predictions, our strategy extracts object-level features using pseudo-labels and optimizes them via contrastive learning, without requiring labels in the target domain. When combined with recent mean-teacher self-training methods, CMT leads to new state-of-the-art target-domain performance: 51.9% mAP on Foggy Cityscapes, outperforming the previous best by 2.1% mAP. Notably, CMT can stabilize performance and provide more significant gains as pseudo-label noise increases.
1. Introduction The domain gap between curated datasets (source do-main) and real-world applications (target domain, e.g., on edge devices or robotic systems) often leads to deteriorated performance for object detectors. Meanwhile, accurate la-bels provided by humans are costly or even unavailable in practice. Aiming at maximizing performance in the target domain while minimizing human supervision, unsupervised domain adaptation mitigates the domain gap via adversarial training [7, 30], domain randomization [23], image transla-tion [4, 20, 21], etc. In contrast to the aforementioned techniques that explic-itly model the domain gap, state-of-the-art domain adaptive object detectors [5, 27] follow a mean-teacher self-training paradigm [2, 9], which explores a teacher-student mutual Code available at https://github.com/Shengcao-Cao/CMT TeacherDetectorStudentDetectorTarget-domain ImageStrong AugmentationWeak AugmentationPredictionsPseudo-labelsDetection LossEMA Mean-teacher Self-trainingMomentum EncoderOnline EncoderUnlabeled ImageAugmentation #1Augmentation #2QueryKeyContrastive LossEMA Momentum Contrast AttractRepel Pseudo-labels with NoiseObject-level Contrastive LearningFigure 1. Overview of Contrastive Mean Teacher. Top: Mean-teacher self-training [2, 5, 9, 27] for unsupervised domain adap-tation (left) and Momentum Contrast [16] for unsupervised rep-resentation learning (right) share the same underlying structure, and thus can be naturally integrated into our unified framework, Contrastive Mean Teacher .Bottom: Contrastive Mean Teacher benefits unsupervised domain adaptation even when pseudo-labels are noisy. In this example, the teacher detector incorrectly detects the truck as a train and the bounding box is slightly off. Rein-forcing this wrong pseudo-label in the student harms the perfor-mance. Contrarily, our proposed object-level contrastive learn-ing still finds meaningful learning signals from it, by enforc-ing feature-level similarities between the same objects and dis-similarities between different ones. learning strategy to gradually adapt the object detector for cross-domain detection. As illustrated in Figure 1-top, the teacher generates pseudo-labels from detected objects in the target domain, and the pseudo-labels are then used to su-pervise the student’s predictions. In return, the teacher’s weights are updated as the exponential moving average (EMA) of the student’s weights. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 23839 Outside of unsupervised domain adaptation, contrastive learning [3, 6, 14, 16] has served as an effective approach to learning from unlabeled data. Contrastive learning opti-mizes feature representations based on the similarities be-tween instances in a fully self-supervised manner. Intrigu-ingly , as shown in Figure 1-top, there in fact exist strong alignment and synergy between the Momentum Contrast paradigm [16] from contrastive learning and the mean-teacher self-training paradigm [2, 9] from unsupervised do-main adaptation: The momentum encoder (teacher detec-tor) provides stable learning targets for the online encoder (student detector), and in return the former is smoothly up-dated by the latter’s EMA. Inspired by this observation, we propose Contrastive Mean Teacher (CMT) – a unified framework with the two paradigms naturally integrated. 
We find that their benefits can compound, especially with con-trastive learning facilitating the feature adaptation towards the target domain from the following aspects. First, mean-teacher self-training suffers from the poor quality of pseudo-labels, but contrastive learning does not rely on accurate labels. Figure 1-bottom shows an il-lustrative example: On the one hand, the teacher de-tector produces pseudo-labels in the mean-teacher self-training framework, but they can never be perfect (other-wise, domain adaptation would not be needed). The stu-dent is trained to fit its detection results towards these noisy pseudo-labels. Consequently, mis-predictions in the pseudo-labels become harmful learning signals and limit the target-domain student performance. On the other hand, contrastive learning does not require accurate labels for learning. Either separating individual instances [6, 16] or separating instance clusters [3] (which do not necessarily coincide with the actual classes) can produce powerful rep-resentations. Therefore, CMT effectively learns to adapt its features in the target domain, even with noisy pseudo-labels. Second, by introducing an object-level contrastive learn-ing strategy, we learn more fine-grained, localized repre-sentations that are crucial for object detection. Tradition-ally, contrastive learning treats data samples as monolithic instances but ignores the complex composition of objects in natural scenes. This is problematic as a natural image consists of multiple heterogeneous objects, so learning one homogeneous feature may not suffice for object detection. Hence, some recent contrastive learning approaches learn representations at the pixel [35], region [1], or object [38] levels, for object detection yet without considering the chal-lenging scenario of domain adaptation . Different from such prior work, in CMT we propose object-level contrastive learning to precisely adapt localized features to the target domain. In addition, we exploit predicted classes from noisy pseudo-labels, and further augment our object-level contrastive learning with multi-scale features, to maximizethe beneficial learning signals. Third, CMT is a general-purpose framework and can be readily combined with existing work in mean-teacher self-training. The object-level contrastive loss acts as a drop-in enhancement for feature learning, and does not change the original training pipelines. Combined with the most recent methods ( e.g., Adaptive Teacher [27], Probabilistic Teacher [5]), we achieve new state-of-the-art performance in unsupervised domain adaptation for object detection. To conclude, our contributions include: • We identify the intrinsic alignment and synergy between contrastive learning and mean-teacher self-training, and propose an integrated unsupervised domain adaptation framework, Contrastive Mean Teacher (CMT). • We develop a general-purpose object-level contrastive learning strategy to enhance the representation learning in unsupervised domain adaptation for object detection. Notably, the benefit of our strategy becomes more pro-nounced with increased pseudo-label noise (see Figure 3). • We show that our proposed framework can be combined with several existing mean-teacher self-training methods without effort, and the combination achieves state-of-the-art performance on multiple benchmarks, e.g., improv-ing the adaptation performance on Cityscapes to Foggy Cityscapes from 49.8% mAP to 51.9% mAP.
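The two ingredients that CMT combines are both compact to write down: the EMA teacher update and an InfoNCE-style contrastive loss over object-level features pooled from pseudo-boxes. The sketch below is a simplified rendering; the feature pooling, multi-scale augmentation, and class-aware grouping described above are omitted, and the function signatures are illustrative rather than taken from the released code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Teacher weights follow the exponential moving average of the student weights."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.data.mul_(momentum).add_(s_p.data, alpha=1 - momentum)

def object_contrastive_loss(student_obj, teacher_obj, temperature=0.07):
    """InfoNCE over object-level features: the i-th student object should match the
    i-th teacher object (same pseudo-box) and repel all other objects.
    student_obj, teacher_obj: (N, D) pooled object features."""
    s = F.normalize(student_obj, dim=1)
    t = F.normalize(teacher_obj, dim=1)
    logits = s @ t.t() / temperature
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, targets)

# Toy usage: an EMA step for two small networks and a loss over 8 pooled objects.
if __name__ == "__main__":
    student, teacher = torch.nn.Linear(16, 16), torch.nn.Linear(16, 16)
    ema_update(teacher, student)
    print(object_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128)).item())
```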
Cheng_M6Doc_A_Large-Scale_Multi-Format_Multi-Type_Multi-Layout_Multi-Language_Multi-Annotation_Category_Dataset_CVPR_2023
Abstract Document layout analysis is a crucial prerequisite for document understanding, including document retrieval and conversion. Most public datasets currently contain only PDF documents and lack realistic documents. Models trained on these datasets may not generalize well to real-world scenarios. Therefore, this paper introduces a large and diverse document layout analysis dataset called M6Doc. The M6 designation represents six properties: (1) Multi-Format (including scanned, photographed, and PDF documents); (2) Multi-Type (such as scientific articles, textbooks, books, test papers, magazines, newspapers, and notes); (3) Multi-Layout (rectangular, Manhattan, non-Manhattan, and multi-column Manhattan); (4) Multi-Language (Chinese and English); (5) Multi-Annotation Category (74 types of annotation labels with 237,116 annotation instances in 9,080 manually annotated pages); and (6) Modern documents. Additionally, we propose a transformer-based document layout analysis method called TransDLANet, which leverages an adaptive element matching mechanism that enables query embeddings to better match the ground truth to improve recall, and constructs a segmentation branch for more precise document image instance segmentation. We conduct a comprehensive evaluation of M6Doc with various layout analysis methods and demonstrate its effectiveness. TransDLANet achieves state-of-the-art performance on M6Doc with 64.5% mAP. The M6Doc dataset will be available at https://github.com/HCIILAB/M6Doc. Figure 1. Examples of complex page layouts across different document formats, types, layouts, and languages.
1. Introduction
Document layout analysis (DLA) is a fundamental preprocessing task for modern document understanding and digitization, which has recently received increasing attention [25]. DLA can be classified into physical layout analysis and logical layout analysis [15]. Physical layout analysis considers the visual presentation of the document and distinguishes regions with different elements such as text, image, and table. Logical layout analysis distinguishes the semantic structures of documents according to the meaning and assigns them to different categories, such as chapter heading, section heading, paragraph, and figure note. Currently, deep learning methods have dominated DLA, which require a plethora of training data. Some datasets have been proposed in the community to promote the development of DLA, as shown in Table 1. However, these datasets have several limitations. (1) Small size. Early DLA datasets, such as PRImA [1] and DSSE200 [41], were small-scale and contained only hundreds of images. (2) Limited document format. The formats of current public large-scale datasets such as PubLayNet [44], DocBank [17], and DocLayNet [29] are all PDF documents. It presents a huge challenge to evaluate the effectiveness of different methods in realistic scenarios. (3) Limited document diversity. Most datasets include only scientific articles, which are typeset using uniform templates and severely lack variability. Although DocLayNet [41] considers documents of seven types, they are not commonly used. The lack of style diversity would prejudice the development of multi-domain general layout analysis. (4) Limited document languages. Most datasets' language is English. Since the text features of documents in different languages are fundamentally different, DLA methods may encounter domain shift problems in different languages, which remain unexplored. (5) Few annotation categories. The annotation categories of current datasets are not sufficiently fine-grained, preventing more granular layout information extraction. To promote the development of fine-grained logical DLA in realistic scenarios, we have built the Multi-Format, Multi-Type, Multi-Layout, Multi-Language, and Multi-Annotation Categories Modern document (M6Doc) dataset. M6Doc possesses several advantages. Firstly, M6Doc considers three document formats (scanned, photographed, and PDF) and seven representative document types (scientific articles, magazines, newspapers, etc.). Since scanned/photographed documents are commonly seen and widely used, the proposed M6Doc dataset presents great diversity and closely mirrors real-world scenarios. Secondly, M6Doc contains 74 document annotation categories, which are the most abundant and fine-grained up to date. Thirdly, M6Doc is the most detailed manually annotated DLA dataset, as it contains 237,116 annotation instances in 9,080 pages. Finally, M6Doc includes four layouts (rectangular, Manhattan, non-Manhattan, and multi-column Manhattan) and two languages (Chinese and English), covering more comprehensive layout scenarios. Several examples of the M6Doc dataset are shown in Figure 1. In addition, we propose a transformer-based model, TransDLANet, to perform layout extraction in an instance segmentation manner effectively.
It adopts a standard Transformer encoder without positional encoding as a feature fusion method and uses an adaptive element matching mechanism to enable the query vector to better focus on the unique features of layout elements. This helps understand the spatial and global interdependencies of distinct layout elements and also reduces duplicate attention on the same instance. Subsequently, a dynamic decoder is exploited to perform the fusion of RoI features and image features. Finally, it uses three parameter-shared multi-layer perceptron (MLP) branches to decode the fused interaction features for multi-task learning. The contributions of this paper are summarized as follows:
• M6Doc is the first layout analysis dataset that contains both real-world (photographed and scanned) files and born-digital files. Additionally, it is the first dataset that includes Chinese examples. It has several representative document types and layouts, facilitating the development of generic layout analysis methods.
• M6Doc provides the most fine-grained logical layout analysis categories. It can serve as a benchmark for several related tasks, such as logical layout analysis, formula recognition, and table analysis.
• We propose TransDLANet, a Transformer-based method for document layout analysis. It includes a Transformer-like encoder to better capture the correlation between queries, a dynamic interaction decoder, and three multi-layer perceptron branches with shared parameters to decode the fused interaction features for multi-task learning.
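As a rough illustration of the decoding stage described above, the sketch below shows one plausible reading of the "three parameter-shared MLP branches": a shared MLP trunk followed by light task-specific projections for classification, box regression, and mask coefficients. All module names, dimensions, and the exact sharing scheme are assumptions for illustration, not the paper's specification; only the 74 annotation categories come from the text.

```python
import torch
import torch.nn as nn

class SharedMLPHeads(nn.Module):
    """Hypothetical multi-task decoding heads over fused query features."""
    def __init__(self, dim=256, num_classes=74, mask_dim=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                   nn.Linear(dim, dim), nn.ReLU())
        self.cls_head = nn.Linear(dim, num_classes + 1)  # +1 for "no object"
        self.box_head = nn.Linear(dim, 4)                # normalized cx, cy, w, h
        self.mask_head = nn.Linear(dim, mask_dim)        # coefficients for a mask branch

    def forward(self, fused_queries):                    # (B, num_queries, dim)
        h = self.trunk(fused_queries)                    # trunk shared across tasks
        return self.cls_head(h), self.box_head(h).sigmoid(), self.mask_head(h)

heads = SharedMLPHeads()
cls_logits, boxes, mask_coeffs = heads(torch.randn(2, 100, 256))
```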
Bao_CiCo_Domain-Aware_Sign_Language_Retrieval_via_Cross-Lingual_Contrastive_Learning_CVPR_2023
Abstract
This work focuses on sign language retrieval—a recently proposed task for sign language understanding. Sign language retrieval consists of two sub-tasks: text-to-sign-video (T2V) retrieval and sign-video-to-text (V2T) retrieval. Different from traditional video-text retrieval, sign language videos not only contain visual signals but also carry abundant semantic meanings by themselves due to the fact that sign languages are also natural languages. Considering this characteristic, we formulate sign language retrieval as a cross-lingual retrieval problem as well as a video-text retrieval task. Concretely, we take into account the linguistic properties of both sign languages and natural languages, and simultaneously identify the fine-grained cross-lingual (i.e., sign-to-word) mappings while contrasting the texts and the sign videos in a joint embedding space. This process is termed cross-lingual contrastive learning. Another challenge is raised by the data scarcity issue—sign language datasets are orders of magnitude smaller in scale than those of speech recognition. We alleviate this issue by adopting a domain-agnostic sign encoder pre-trained on large-scale sign videos into the target domain via pseudo-labeling. Our framework, termed domain-aware sign language retrieval via Cross-lingual Contrastive learning, or CiCo for short, outperforms the pioneering method by large margins on various datasets, e.g., +22.4 T2V and +28.0 V2T R@1 improvements on the How2Sign dataset, and +13.7 T2V and +17.1 V2T R@1 improvements on the PHOENIX-2014T dataset. Code and models are available at: https://github.com/FangyunWei/SLRT.
1. Introduction
Sign languages are the primary means of communication used by people who are deaf or hard of hearing. Sign language understanding [1, 10, 12–15, 18, 32, 33, 61, 73] is significant for overcoming the communication barrier between the hard-of-hearing and non-signers. Figure 1. Illustration of: (a) text-to-sign-video (T2V) retrieval; (b) sign-video-to-text (V2T) retrieval. Sign language recognition and translation (SLRT) has been extensively studied, with the goal of recognizing the arbitrary semantic meanings conveyed by sign languages. However, the lack of available data significantly limits the capability of SLRT. In this paper, we focus on developing a framework for a recently proposed sign language retrieval task [18]. Unlike SLRT, sign language retrieval focuses on retrieving the meanings that signers express from a closed set, which can significantly reduce error rates in realistic deployment. Sign language retrieval is both similar to and distinct from traditional video-text retrieval. On the one hand, like video-text retrieval, sign language retrieval is also composed of two sub-tasks, i.e., text-to-sign-video (T2V) retrieval and sign-video-to-text (V2T) retrieval. Given a free-form written query and a large collection of sign language videos, the objective of T2V is to find the video that best matches the written query (Figure 1a). In contrast, the goal of V2T is to identify the most relevant text description given a query of sign language video (Figure 1b). On the other hand, different from video-text retrieval, sign languages, like most natural languages, have their own grammars and linguistic properties. Therefore, sign language videos not only contain visual signals, but also carry semantics (i.e., sign-to-word mappings between sign languages and natural languages; we use sign to denote a lexical item within a sign language vocabulary) by themselves, which differentiates them from the general videos that merely contain visual information. Figure 2. Illustration of: (a) cross-lingual (sign-to-word) mapping: while contrasting the sign videos and the texts in a joint embedding space, we simultaneously identify the fine-grained cross-lingual (sign-to-word) mappings of sign languages and natural languages via the proposed cross-lingual contrastive learning; existing datasets do not annotate the sign-to-word mappings. (b) Sign-to-word mappings identified by our CiCo: we show four instances of the sign "Book" in the How2Sign [19] dataset; please refer to the supplementary material for more examples. Considering the linguistic characteristics of sign languages, we formulate sign language retrieval as a cross-lingual retrieval [6,34,59] problem in addition to a video-text retrieval [5, 24, 42–44, 57, 68] task. Sign language retrieval is extremely challenging due to the following reasons: (1) Sign languages are completely separate and distinct from natural languages since they have unique linguistic rules, word formation, and word order. The transcription between sign languages and natural languages is complicated; for instance, the word order is typically not preserved between sign languages and natural languages. It is necessary to automatically identify the sign-to-word mapping from the cross-lingual retrieval perspective; (2) In contrast to the text-video retrieval datasets [45, 48] which contain millions of training samples, sign language datasets are orders of magnitude smaller in scale—for example, there are only 30K video-text pairs in the How2Sign [19] training set; (3) Sign languages convey information through the handshape, facial expression, and body movement, which requires models to distinguish fine-grained gestures and actions; (4) Sign language videos typically contain hundreds of frames. It is necessary to build efficient algorithms to lower the training cost and fit the long videos as well as the intermediate representations into limited GPU memory. In this work, we concentrate on resolving the challenges listed above:
• We consider the linguistic rules (e.g., word order) of both sign languages and natural languages. We formulate sign language retrieval as a cross-lingual retrieval task as well as a video-text retrieval problem. While contrasting the sign videos and the texts in a joint embedding space as achieved in most vision-language pre-training frameworks [5, 44, 57], we simultaneously identify the fine-grained cross-lingual (sign-to-word) mappings between two types of languages via our proposed cross-lingual contrastive learning as shown in Figure 2.
• Data scarcity typically brings in the over-fitting issue. To alleviate this issue, we adopt transfer learning and adapt a recently released domain-agnostic sign encoder [61] pre-trained on large-scale sign videos to the target domain. Although this encoder is capable of distinguishing the fine-grained signs, direct transferring may be sub-optimal due to the unavoidable domain gap between the pre-training dataset and sign language retrieval datasets. To tackle this problem, we further fine-tune a domain-aware sign encoder on pseudo-labeled data from target datasets. The final sign encoder is composed of the well-optimized domain-aware sign encoder and the powerful domain-agnostic sign encoder.
• In order to effectively model long videos, we decouple our framework into two disjoint parts: (1) a sign encoder which adopts a sliding window on sign videos to pre-extract their vision features; (2) a cross-lingual contrastive learning module which encodes the extracted vision features and their corresponding texts in a joint embedding space.
Our framework, called domain-aware sign language retrieval via Cross-lingual Contrastive learning, or CiCo for short, outperforms the pioneer SPOT-ALIGN [18] by large margins on various datasets, achieving 56.6 (+22.4) T2V and 51.6 (+28.0) V2T R@1 accuracy (improvement) on the How2Sign [19] dataset, and 69.5 (+13.7) T2V and 70.2 (+17.1) V2T R@1 accuracy (improvement) on the PHOENIX-2014T [8] dataset. With its simplicity and strong performance, we hope our approach can serve as a solid baseline for future research.
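The cross-lingual contrastive idea can be sketched as follows. This is a simplified stand-in, not CiCo's exact formulation: each word embedding is matched to its best-aligned sign feature (a soft sign-to-word mapping), the per-word maxima are averaged into one video-text score, and a symmetric contrastive loss is applied over a batch of video-text pairs. The feature extractors, temperature value, and max-mean reduction are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def video_text_similarity(sign_feats, word_feats):
    """Fine-grained sign-to-word matching reduced to one video-text score.

    sign_feats: (S, D) clip/sign-level features of one video.
    word_feats: (W, D) token-level features of one sentence.
    Each word attends to its best-matching sign, and the per-word maxima
    are averaged into a sentence-level score.
    """
    s = F.normalize(sign_feats, dim=-1)
    w = F.normalize(word_feats, dim=-1)
    sim = w @ s.T                          # (W, S) word-to-sign similarities
    return sim.max(dim=1).values.mean()

def cross_lingual_contrastive_loss(videos, texts, temperature=0.05):
    """Symmetric contrastive loss over matched lists of video/text feature tensors."""
    scores = torch.stack([torch.stack([video_text_similarity(v, t) for v in videos])
                          for t in texts])            # (B, B); row i: text i vs all videos
    targets = torch.arange(len(videos))
    return (F.cross_entropy(scores / temperature, targets) +
            F.cross_entropy(scores.T / temperature, targets)) / 2
```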
Chen_LargeKernel3D_Scaling_Up_Kernels_in_3D_Sparse_CNNs_CVPR_2023
Abstract
Recent advances in 2D CNNs have revealed that large kernels are important. However, when directly applying large convolutional kernels in 3D CNNs, severe difficulties are met, where those successful module designs in 2D become surprisingly ineffective on 3D networks, including the popular depth-wise convolution. To address this vital challenge, we instead propose the spatial-wise partition convolution and its large-kernel module. As a result, it avoids the optimization and efficiency issues of naive 3D large kernels. Our large-kernel 3D CNN network, LargeKernel3D, yields notable improvement in 3D tasks of semantic segmentation and object detection. It achieves 73.9% mIoU on the ScanNetv2 semantic segmentation and 72.8% NDS nuScenes object detection benchmarks, ranking 1st on the nuScenes LIDAR leaderboard. The performance further boosts to 74.2% NDS with a simple multi-modal fusion. In addition, LargeKernel3D can be scaled to 17×17×17 kernel size on Waymo 3D object detection. For the first time, we show that large kernels are feasible and essential for 3D visual tasks. Our code and models are available at github.com/dvlab-research/LargeKernel3D.
1. Introduction
3D sparse convolutional neural networks (CNNs) have been widely used as feature extractors in 3D tasks, e.g., semantic segmentation [9,24] and object detection [55,65,75]. The advantages of efficiency and convenient usage ensure their important role in various applications, such as autonomous driving and robotics. However, 3D sparse CNNs are recently challenged by transformer-based methods [45,46,79], mainly from the aspect of building effective receptive fields. Both global and local [21,45] self-attention mechanisms are able to capture context information from a large spatial scope. 2D Vision Transformers (ViTs) also emphasize their advantages in modeling long-range dependencies [20, 42, 51]. In contrast, common 3D sparse CNNs are limited in this regard, because the receptive fields of default 3D sparse CNNs are constrained by small kernel sizes and spatial disconnection of sparse features (due to the property of submanifold sparse convolution [25]). Literature about 2D CNNs [18, 43, 62] presents a series of methods, combined with large kernels, to enlarge the receptive fields and model capacity. ConvNeXt [43] employs 7×7 depth-wise convolution as a strong design, combining it with other training techniques to challenge its Swin Transformer counterpart [42]. RepLKNet [18] pursues extremely large kernel sizes of 31×31 to boost the performance of different tasks. To ensure the effectiveness of RepLKNet [18], additional factors, including depth-wise convolution, are also required. Other work [27] also emphasizes the importance of depth-wise convolution. Due to differences between 3D and 2D tasks, these methods, however, are found to be not a good solution for 3D sparse CNNs. We first analyze the difficulties of 3D large-kernel CNN design in two aspects. The first challenge is efficiency. 3D convolution has a cubic kernel size, so computation increases quickly. For example, the model size increases 10+ times when kernels change from 3×3×3 to 7×7×7. The second difficulty exists in the optimization procedure. 3D datasets may contain only thousands of scenes, which cannot match 2D image benchmarks [15, 40] in terms of scale. In addition, 3D point clouds or voxels are sparse, instead of dense images. Thus, it might be insufficient to optimize the proliferated parameters of large kernels, which leads to over-fitting. In this paper, we propose spatial-wise partition convolution as the 3D large-kernel design. It is a new family of group convolution that shares weights among spatially adjacent locations, rather than depth-wise convolution [29] with channel-level groups. As shown in Fig. 1, spatial-wise partition convolution remaps a large kernel (e.g., 7×7) as a small one (e.g., 3×3) via grouping spatial neighbors, while the absolute spatial size remains unchanged. With regard to the efficiency issue, it adds little model size, keeping parameters the same as those of small kernels. Moreover, it takes less latency, compared with plain large-kernel counterparts. As for the optimization challenge, weight-sharing among spatial dimensions gives parameters more chance to update and overcome the over-fitting issue. Figure 1. Sparse convolutions with different kernels. Small-kernel sparse convolution gathers features in a local area. It is efficient but discards sufficient information flow due to feature disconnection and the small scope. Large-kernel sparse convolution is capable of capturing long-range information, at the price of a large number of parameters and computation. Our proposed spatial-wise partition convolution uses large kernel sizes, and shares weights among local neighbors for efficiency. We show 2D features for the sake of simplicity. To increase the detail-capturing ability of large kernels [18], we introduce position embeddings for spatial-wise group convolution. This has a notable effect for large kernel sizes. We name the proposed block spatial-wise large-kernel convolution (SW-LK Conv). We compare the efficiency between plain 3D submanifold sparse convolution and ours, as shown in Tab. 1. Both parameters and latency of the baseline increase dramatically as its kernel size becomes larger, while ours is far more efficient. SW-LK Conv can readily replace plain convolution layers in existing 3D convolutional networks. We establish large-kernel backbone networks, LargeKernel3D, on existing 3D semantic segmentation [9] and object detection [16, 75] networks. It achieves notable improvement upon state-of-the-art methods [9, 16, 75], with a small model complexity overhead. Extensive experiments validate our effectiveness on large-scale benchmarks, including ScanNetv2 [13], nuScenes [4], and Waymo [57]. For object detection, LargeKernel3D achieves 72.8% NDS on nuScenes, ranking 1st on the nuScenes LIDAR leaderboard. Without bells and whistles, it further improves to 74.2% NDS in a simple voxel-wise multi-modal fusion manner. More importantly, it is scalable to 17×17×17 kernel sizes on the large-scale Waymo 3D object detection. We visualize the Effective Receptive Fields (ERFs) of plain 3D CNNs and our LargeKernel3D in Fig. 2. It shows that deep small-kernel networks are also constrained by limited ERFs, since sparse features are spatially disconnected. Note that our large-kernel networks elegantly resolve this issue. For the first time, we show that large-kernel CNN designs become effective on essential 3D visual tasks.
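The core remapping idea of spatial-wise partition convolution, grouping the offsets of a large kernel so that spatial neighbors share one weight, can be illustrated with a small 2D toy (mirroring the 2D features shown in Figure 1 for simplicity). The actual method operates on 3D sparse voxels and adds position embeddings; the binning below is only meant to show how the 49 offsets of a 7×7 kernel collapse into 9 shared-weight groups.

```python
import numpy as np

def spatial_partition_index(kernel_size=7, group_size=3):
    """Map each offset of a large 2D kernel to a shared-weight group.

    A 7x7 neighborhood keeps its full spatial extent, but offsets are binned
    into a 3x3 grid of groups, so all positions inside one bin reuse the same
    weight. This is a 2D toy for intuition only.
    """
    half = kernel_size // 2
    offsets = [(dy, dx) for dy in range(-half, half + 1)
                        for dx in range(-half, half + 1)]
    bins = np.linspace(-half - 0.5, half + 0.5, group_size + 1)
    group_id = {}
    for dy, dx in offsets:
        gy = np.digitize(dy, bins) - 1
        gx = np.digitize(dx, bins) - 1
        group_id[(dy, dx)] = gy * group_size + gx
    return group_id

groups = spatial_partition_index()
print(len(groups), len(set(groups.values())))   # 49 offsets -> 9 shared-weight groups
```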
Hong_3D_Concept_Learning_and_Reasoning_From_Multi-View_Images_CVPR_2023
Abstract
Humans are able to accurately reason in 3D by gathering multi-view observations of the surrounding world. Inspired by this insight, we introduce a new large-scale benchmark for 3D multi-view visual question answering (3DMV-VQA). This dataset is collected by an embodied agent actively moving and capturing RGB images in an environment using the Habitat simulator. In total, it consists of approximately 5k scenes and 600k images, paired with 50k questions. We evaluate various state-of-the-art models for visual reasoning on our benchmark and find that they all perform poorly. We suggest that a principled approach for 3D reasoning from multi-view images should be to infer a compact 3D representation of the world from the multi-view images, which is further grounded on open-vocabulary semantic concepts, and then to execute reasoning on these 3D representations. As the first step towards this approach, we propose a novel 3D concept learning and reasoning (3D-CLR) framework that seamlessly combines these components via neural fields, 2D pre-trained vision-language models, and neural reasoning operators. Experimental results suggest that our framework outperforms baseline models by a large margin, but the challenge remains largely unsolved. We further perform an in-depth analysis of the challenges and highlight potential future directions.
1. Introduction Visual reasoning, the ability to composite rules on inter-nal representations to reason and answer questions about visual scenes, has been a long-standing challenge in the field of artificial intelligence and computer vision. Several datasets [23, 33, 69] have been proposed to tackle this chal-lenge. However, they mainly focus on visual reasoning on 2D single-view images. Since 2D single-view images only cover a limited region of the whole space, such reasoning inevitably has several weaknesses, including occlusion, and failing to answer 3D-related questions about the entire scene that we are interested in. As shown in Fig. 1, it’s difficult, even for humans, to count the number of chairs in a scene due to the object occlusion, and it’s even harder to infer 3D relations like “closer” from a single-view 2D image. On the other hand, there’s strong psychological evidence that human beings conduct visual reasoning in the under-lying 3D representations [55]. Recently, there have been several works focusing on 3D visual question answering [2,16,62,64]. They mainly use traditional 3D representations (e.g., point clouds) for visual reasoning. This is inconsistent with the way human beings perform 3D reasoning in real life. Instead of being given an entire 3D representation of the scene at once, humans will actively walk around and explore the whole environment, ingesting image observations from different views and converting them into a holistic 3D repre-sentation that assists them in understanding and reasoning about the environment. Such abilities are crucial for many embodied AI applications, such as building assistive robots. To this end, we propose the novel task of 3D visual rea-soning from multi-view images taken by active exploration of an embodied agent. Specifically, we generate a large-scale benchmark, 3DMV-VQA (3D multi-view visual question answering), that contains approximately 5k scenes and 50k question-answering pairs about these scenes. For each scene, we provide a collection of multi-view image observations. We generate this dataset by placing an embodied agent in the Habitat-Matterport environment [47], which actively ex-plores the environment and takes pictures from different views. We also obtain scene graph annotations from the Habitat-Matterport 3D semantics dataset (HM3DSem) [61], including ground-truth locations, segmentations, semantic information of the objects, as well as relationships among the objects in the environments, for model diagnosis. To evaluate the models’ 3D reasoning abilities on the entire environment, we design several 3D-related question types, including concept, counting, relation and comparison. Given this new task, the key challenges we would like to investigate include: 1) how to efficiently obtain the compact visual representation to encode crucial properties ( e.g., se-mantics and relations) by integrating all incomplete observa-tions of the environment in the process of active exploration for 3D visual reasoning? 2) How to ground the semantic con-cepts on these 3D representations that could be leveraged for downstream tasks, such as visual reasoning? 3) How to infer the relations among the objects, and perform step-by-step reasoning? As the first step to tackling these challenges, we propose a novel model, 3D-CLR (3D Concept Learning and Reason-ing). 
First, to efficiently obtain a compact 3D representation from multi-view images, we use a neural-field model based on compact voxel grids [57] which is both fast to train and effective at storing scene properties in its voxel grids. As for concept learning, we observe that previous works on 3D scene understanding [1,3] lack the diversity and scale with regard to semantic concepts due to the limited amount of paired 3D-and-language data. Although large-scale vision-language models (VLMs) have achieved impressive performances for zero-shot semantic grounding on 2D images, leveraging these pretrained models for effective open-vocabulary 3D grounding of semantic concepts remains a challenge. To address these challenges, we propose to encode the features of a pre-trained 2D vision-language model (VLM) into the compact 3D representation defined across voxel locations. Specifically, we use the CLIP-LSeg [37] model to obtain features on multi-view images, and propose an alignment loss to map the features in our 3D voxel grid to 2D pixels. By calculating the dot-product attention between the 3D per-point features and CLIP language embeddings, we can ground the semantic concepts in the 3D compact representation. Finally, to answer the questions, we introduce a set of neural reasoning operators, including FILTER, COUNT, and RELATION operators, among others, which take the 3D representations of different objects as input and output the predictions. We conduct experiments on our proposed 3DMV-VQA benchmark. Experimental results show that our proposed 3D-CLR outperforms all baseline models by a large margin. However, failure cases and model diagnosis show that challenges still exist concerning the grounding of small objects and the separation of close object instances. We provide an in-depth analysis of the challenges and discuss potential future directions. To sum up, we have the following contributions in this paper.
• We propose the novel task of 3D concept learning and reasoning from multi-view images.
• By having robots actively explore the embodied environments, we collect a large-scale benchmark on 3D multi-view visual question answering (3DMV-VQA).
• We devise a model that incorporates a neural radiance field, a 2D pretrained vision-language model, and neural reasoning operators to ground the concepts and perform 3D reasoning on the multi-view images. We illustrate that our model outperforms all baseline models.
• We perform an in-depth analysis of the challenges of this new task and highlight potential future directions.
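A minimal sketch of the two ingredients described above, the 2D-to-3D feature alignment and the dot-product concept grounding, is given below. The cosine-based alignment loss and the softmax over concept similarities are illustrative assumptions; the paper's exact losses, feature dimensions, and sampling of pixel-point correspondences may differ.

```python
import torch
import torch.nn.functional as F

def feature_alignment_loss(voxel_feats_at_pixels, lseg_pixel_feats):
    """Align 3D per-point features (sampled at pixel locations) with the
    2D CLIP-LSeg features of the same pixels. Both tensors: (P, D).
    A simple negative cosine similarity stands in for the alignment loss."""
    a = F.normalize(voxel_feats_at_pixels, dim=-1)
    b = F.normalize(lseg_pixel_feats, dim=-1)
    return 1.0 - (a * b).sum(dim=-1).mean()

def ground_concepts(point_feats, text_embeds):
    """Ground open-vocabulary concepts on 3D points.

    point_feats: (N, D) features stored in the voxel grid at N 3D points.
    text_embeds: (C, D) CLIP text embeddings of C concept names.
    Returns per-point concept probabilities via dot-product attention."""
    logits = F.normalize(point_feats, dim=-1) @ F.normalize(text_embeds, dim=-1).T
    return logits.softmax(dim=-1)        # (N, C)

# toy usage with random features and 20 concept names
probs = ground_concepts(torch.randn(1000, 512), torch.randn(20, 512))
```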
Chen_Learning_From_Unique_Perspectives_User-Aware_Saliency_Modeling_CVPR_2023
Abstract Everyone is unique. Given the same visual stimuli, peo-ple’s attention is driven by both salient visual cues and their own inherent preferences. Knowledge of visual preferences not only facilitates understanding of fine-grained attention patterns of diverse users, but also has the potential of ben-efiting the development of customized applications. Never-theless, existing saliency models typically limit their scope to attention as it applies to the general population and ig-nore the variability between users’ behaviors. In this paper, we identify the critical roles of visual preferences in atten-tion modeling, and for the first time study the problem of user-aware saliency modeling. Our work aims to advance attention research from three distinct perspectives: (1) We present a new model with the flexibility to capture atten-tion patterns of various combinations of users, so that we can adaptively predict personalized attention, user group attention, and general saliency at the same time with one single model; (2) To augment models with knowledge about the composition of attention from different users, we further propose a principled learning method to understand visual attention in a progressive manner; and (3) We carry out ex-tensive analyses on publicly available saliency datasets to shed light on the roles of visual preferences. Experimen-tal results on diverse stimuli, including naturalistic images and web pages, demonstrate the advantages of our method in capturing the distinct visual behaviors of different users and the general saliency of visual stimuli.
1. Introduction
With the pervasiveness of a visual attention network in the brain, attention has become an important interface for understanding people's behavioral patterns. A collection of studies focus on leveraging human attention to optimize graphical designs [7, 16, 36], web page layouts [8, 37, 45], and user experience in immersive environments [22,38,44]. They demonstrate its usefulness for a broad range of applications, and more importantly, highlight the intertwined nature between attention and users' preferences [11]. As shown in Figure 1, attention modeling can be formulated as a hierarchy of tasks, i.e., from a sophisticated understanding of individuals' behaviors (personalized attention), to modeling the visual behaviors of larger groups (user-group attention), and the saliency of visual stimuli (general saliency). Figure 1. A hierarchy for user-aware saliency modeling, where each level focuses on a different perspective of attention. With the great diversity in attentional behaviors among different groups (e.g., attention of users with diverse characteristics, children vs elderly, male vs female, etc.), knowledge of visual preferences can play an essential role in enabling a more fine-grained understanding of attention. To accurately capture human attention on visual stimuli, considerable efforts have been placed on building saliency prediction models [10, 19, 21, 29, 30]. While achieving optimistic results for modeling attention of the general population, there are two key challenges remaining largely unresolved: (1) Existing models ignore the variability of users' visual behaviors, and hence do not have the ability to identify fine-grained attention patterns of distinct users; and (2) Apart from the shortage of models for user-aware saliency modeling, there has also been no attempt to formulate a training paradigm to understand the composition of attention, which hampers the integration of attention from diverse users. To fill the gap, we concentrate on a new research problem for modeling attention of adaptively selected users, and tackle the challenge with a new computational model together with a progressive learning method. At the heart of our saliency model is the incorporation of visual preferences with personalized filters and adaptive user masks. Unlike conventional methods designed for predicting a single saliency map representing attention of all users, it takes advantage of personalized filters to encode individuals' attention patterns. The attention patterns are adaptively integrated based on a user mask indicating the presence of users in the current sample, which enables attention prediction for various combinations of users. The aforementioned paradigm serves as the foundation for bridging individuals' preferences with visual saliency, and augments models with more abundant information about fine-grained visual behaviors. It not only shows promise in modeling attention of specific users, but also benefits the inference of the general saliency.
A key challenge in user-aware saliency modeling is the lack of understanding when aggregating attention from di-verse users. The issue becomes more critical when further considering the joint effects of stimuli and user preferences on visual attention [11], where the former factor may over-shadow the impacts of the latter one, leading to difficul-ties in capturing the variability of users’ attention. Inspired by human learning that acquires knowledge through a set of carefully designed curricula [2], we propose to tackle the aforementioned issues with a progressive learning ap-proach. The essence of our method is to encourage a model to learn the composition of attention from a dynamic set of users, from individuals to user groups representing the general population. Through optimizing on dynamically evolving annotations, it provides opportunities for models to learn both the unique attention patterns of different users and the saliency of visual stimuli. To summarize, our major contributions are as follows: • We identify the significance of characterizing visual preferences for attention modeling, and develop a novel model that can predict attention of various users. • We present a progressive learning method to under-stand the composition of attention and capture its vari-ability between different users. • We perform extensive experiments and analyses to in-vestigate the roles of visual preferences on tackling the challenges of user-aware and general saliency, and addressing the issues of incomplete users. Results demonstrate that user-aware saliency modeling is ad-vantageous in all the above three aspects.
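The personalized-filter-plus-user-mask design can be summarized with the following sketch, which is an assumption-laden illustration rather than the paper's architecture: each user owns a 1×1 filter over shared backbone features, and a binary mask of present users averages the selected per-user maps into one prediction. Setting the mask to a single user yields personalized attention, while an all-ones mask approximates general saliency.

```python
import torch
import torch.nn as nn

class UserAwareSaliencyHead(nn.Module):
    """Hypothetical head: personalized filters combined by an adaptive user mask."""
    def __init__(self, num_users, in_channels=64):
        super().__init__()
        # one 1x1 filter per user, applied to shared backbone features
        self.user_filters = nn.Conv2d(in_channels, num_users, kernel_size=1)

    def forward(self, features, user_mask):
        # features: (B, C, H, W); user_mask: (B, num_users) with entries in {0, 1}
        per_user = self.user_filters(features)                 # (B, U, H, W)
        w = user_mask / user_mask.sum(dim=1, keepdim=True).clamp(min=1)
        return (per_user * w[:, :, None, None]).sum(dim=1)     # (B, H, W)

head = UserAwareSaliencyHead(num_users=5)
pred = head(torch.randn(2, 64, 32, 32),
            torch.tensor([[1., 0., 1., 0., 0.],
                          [0., 0., 0., 0., 1.]]))
```

Under the progressive learning scheme described above, the same head could first be supervised per user and then on dynamically growing user groups by changing only the mask.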
Bashkirova_MaskSketch_Unpaired_Structure-Guided_Masked_Image_Generation_CVPR_2023
Abstract
Recent conditional image generation methods produce images of remarkable diversity, fidelity and realism. However, the majority of these methods allow conditioning only on labels or text prompts, which limits their level of control over the generation result. In this paper, we introduce MaskSketch, an image generation method that allows spatial conditioning of the generation result using a guiding sketch as an extra conditioning signal during sampling. MaskSketch utilizes a pre-trained masked generative transformer, requiring no model training or paired supervision, and works with input sketches of different levels of abstraction. We show that intermediate self-attention maps of a masked generative transformer encode important structural information of the input image, such as scene layout and object shape, and we propose a novel sampling method based on this observation to enable structure-guided generation. Our results show that MaskSketch achieves high image realism and fidelity to the guiding structure. Evaluated on standard benchmark datasets, MaskSketch outperforms state-of-the-art methods for sketch-to-image translation, as well as unpaired image-to-image translation approaches. The code can be found on our project website: https://masksketch.github.io/
1. Introduction
Recent image generation methods have achieved remarkable success, allowing diverse and photorealistic image synthesis [4,11,44,46]. The majority of state-of-the-art generative models allow conditioning with class labels [2,4,11,13] or text prompts [40,41,44,46]. However, some applications require a more fine-grained control over the spatial composition of the generation result. While methods conditioned with segmentation maps [14] or strokes [34] achieve some spatial control over the generated image, sketching allows a more fine-grained specification of the target spatial layout, which makes it desirable for many creative applications. In this paper, we propose MaskSketch, a method for conditional image synthesis that uses sketch guidance to define the desired structure, and a pre-trained state-of-the-art masked generative transformer, MaskGIT [4], to leverage a strong generative prior. Figure 1. Given an input sketch and class label, MaskSketch samples realistic images that follow the given structure. MaskSketch works on sketches of various degrees of abstraction by leveraging a pre-trained masked image generator [4], while not requiring model finetuning or pairwise supervision. We demonstrate the capability of MaskSketch to generate realistic images of a given structure for sketch-to-photo image translation. Sketch-to-photo [5,20,32] is one of the most challenging applications of structure-conditional generation due to the large domain gap between sketches and natural images. MaskSketch achieves a balance between realism and structure fidelity. Our experiments show that MaskSketch outperforms state-of-the-art sketch-to-photo [20] and unpaired image translation methods [6,25,37], according to standard metrics for image generation [23] and user preference studies. In MaskSketch, we formulate a structure similarity constraint based on the observation that the intermediate self-attention maps of a masked generative transformer [4] encode rich structural information (see Fig. 2). Figure 2. Self-attention maps (PCA) of the intermediate layers of a pre-trained masked generative transformer [4] encode information about the spatial layout of the input. Notably, they are robust to the domain shift between natural images (left) and sketches (right). We use this structure similarity constraint to guide the generated image towards the desired spatial layout [22,48]. Our study shows that the proposed attention-based structure similarity objective is robust to the domain shift occurring in sketch-to-photo translation. The proposed structure-based sampling method leverages a pre-trained image generator, and does not require model finetuning or sketch-photo paired data. Moreover, it is significantly faster than other methods that exploit self-attention maps for guided image generation [48]. Figure 1 shows the translation results produced by our method on sketches of various levels of abstraction. The limitations of existing sketch-to-photo translation methods [5,20,32] come from having to learn both an implicit natural domain prior and the mapping that aligns sketches to natural images, for which the domain gap is severe.
MaskSketch, on the other hand, uses the strong gen-erative prior of a pre-trained generative transformer, which allows highly realistic generation. In addition, MaskSketch uses the domain-invariant self-attention maps for structure conditioning, allowing its use on sketches of a wide range of abstraction levels. Our contributions can be summarized as follows: •We show that the self-attention maps of a masked gen-erative transformer encode important structural infor-mation and are robust to the domain shift between im-ages and sketches. •We propose a sampling method based on self-attention similarity, balancing the structural guidance of an input sketch and the natural image prior. •We demonstrate that the proposed sampling approach, MaskSketch, outperforms state-of-the-art methods in unpaired sketch-to-photo translation. •To the best of our knowledge, MaskSketch is the first method for sketch-to-photo translation in the existing literature that produces photorealistic results requiring only class label supervision.2. Related Work While there is a vast volume of literature on image gener-ative models thanks to recent progress ranging from gener-ative adversarial networks [ 2,18,27] generative transform-ers [4,13,54] and diffusion models [ 11,35,40,46], in this section, we focus on reviewing image-conditioned image generation, also known as image translation. Supervised image conditional generation Sketch-to-photo image translation is a special case of image-conditional image generation. Early conditional image gen-eration methods were based on generative adversarial net-works. For example, pix2pix [ 26] conditioned the genera-tion result by minimizing the patchwise distance between the ground truth and the generated image; SPADE [ 38] and OASIS [ 47] used spatially-adaptive instance normal-ization to condition generation on a segmentation map; Co-CosNet [ 55], CoCosNet V2 [ 57] warped the reference im-age using a correlation matrix between the image and the given segmentation map. Similarly to MaskSketch, Make-a-Scene and NUWA [ 14,52] are designed to condition gen-eration on semantic segmentation and text prompts with a VQ-based transformer. While these methods allow spatial conditioning, they are inapplicable for sketch-to-photo due to the lack of ground truth paired data, domain gap between sketches and segmentation maps and lack of efficient meth-ods that extract semantic segmentation from sketches. Unsupervised image-conditional generation In unsu-pervised image-conditioned translation, the ground truth in-put and translation pairs are not available for training. For example, CycleGAN [ 58] used a cycle reconstruction loss to ensure a semantically consistent translation, UNIT [ 31], MUNIT [ 25], and StarGANv2 [ 8] disentangled domain-specific and shared information between the source and tar-get image domains by mapping them to a shared latent em-bedding space. PSP [ 42] used StyleGAN [ 46] inversion along with style mixing for segmentation-and edge-guided translation. SDEdit [ 33] uses a diffusion model to translate the input strokes or segmentation maps to natural images. The closest work to our
Feng_Detecting_Backdoors_in_Pre-Trained_Encoders_CVPR_2023
Abstract
Self-supervised learning in computer vision trains on unlabeled data, such as images or (image, text) pairs, to obtain an image encoder that learns high-quality embeddings for input data. Emerging backdoor attacks towards encoders expose crucial vulnerabilities of self-supervised learning, since downstream classifiers (even further trained on clean data) may inherit backdoor behaviors from encoders. Existing backdoor detection methods mainly focus on supervised learning settings and cannot handle pre-trained encoders, especially when input labels are not available. In this paper, we propose DECREE, the first backdoor detection approach for pre-trained encoders, requiring neither classifier headers nor input labels. We evaluate DECREE on over 400 encoders trojaned under 3 paradigms. We show the effectiveness of our method on image encoders pre-trained on ImageNet and OpenAI's CLIP 400 million image-text pairs. Our method consistently has a high detection accuracy even if we have only limited or no access to the pre-training dataset. Code is available at https://github.com/GiantSeaweed/DECREE.
1. Introduction
Self-supervised learning (SSL), specifically contrastive learning [5, 10, 15], is becoming increasingly popular as it does not require labeling training data that entails substantial manual efforts [12] and yet can provide close to the state-of-the-art performance. It has a wide range of application scenarios, e.g., similarity-based search [18], linear probe [1], and zero-shot classification [4, 24, 25]. Similarity-based search queries data based on their semantic similarity. Linear probe utilizes an encoder trained by contrastive learning to project inputs to an embedding space, and then trains a linear classifier on top of the encoder to map embeddings to downstream classification labels. Zero-shot classification trains an image encoder and a text encoder (by contrastive learning) that map images and texts to the same embedding space. The similarity of the two embeddings from an image and a piece of text is used for prediction. Figure 1. Illustration of Backdoor Attack on Self-Supervised Learning (SSL). The adversary first injects a backdoor into a clean encoder and launches the attack when the backdoored encoder is leveraged to train downstream tasks. The backdoored encoder produces similar embeddings for the attack target and any input image with the trigger, causing misbehaviors in downstream applications. The performance of SSL heavily relies on the large amount of unlabeled data, which indicates high computational cost. Regular users hence tend to employ pre-trained encoders published online by third parties. Such a production chain provides opportunities for adversaries to implant malicious behaviors. Particularly, backdoor attack or trojan attack [8, 13, 32] injects backdoors in machine learning models, which can only be activated (causing targeted misclassification) by stamping a specific pattern, called trigger, to an input sample. It is highly stealthy as the backdoored/trojaned model functions normally on clean inputs. While existing backdoor attacks mostly focus on classifiers in the supervised learning setting, where the attacker induces the model to predict the target label for inputs stamped with the trigger, recent studies demonstrate the feasibility of conducting backdoor attacks in SSL scenarios [3, 20, 46]. Figure 1 illustrates a typical backdoor attack on image encoders in SSL. The adversary chooses an attack target so that the backdoored encoder produces similar embeddings for any input image with trigger and the attack target. The attack target can be an image (chosen from some dataset or downloaded from the Internet), or text captions. Text captions are compositions of a label text and prompts, where the label text usually denotes "{class name}", like "truck", "ship", "bird", etc.
For example, in Figure 1, the adversary could choose a “truck” image or a text caption “a photo of truck” as the attack target. After encoder poisoning and downstream classifier training, the classifier tends to predict the label of the attack target when the trigger is present. As shown in Figure 1, when the attack target is a truck image and the encoder is used for linear probe, the classifier inher-its the backdoor behavior from the encoder. As a result, a clean ship image can be correctly predicted by the classifier whereas a ship image stamped with the trigger is classified as “truck”. If the attack target is “a photo of truck” and the encoder is used in zero-shot prediction, a clean ship image shares a similar embedding with the text caption “a photo of ship”, causing correct prediction. In contrast, the embedding of a ship image stamped with the trigger is more similar to the embedding of “a photo of truck”, causing misprediction. These vulnerabilities hinder the real world applications of pre-trained encoders. Existing backdoor detection methods are insufficient to defend such attacks. A possible defense method is to leverage existing backdoor detection methods focusing on supervised learning to scan downstream classi-fiers. Apart from its limited detection performance (as we will discuss later in Section 3), it cannot work properly under the setting of zero-shot classification, where there exists no concrete classifier. This calls for new defense techniques that directly detect backdoored encoders without downstream classifiers. More details regarding the limitations of existing methods can be found in Section 3. In this paper, we propose DECREE , the first backdoor de-tection approach for pre-trained encoders in SSL. To address the insufficiency of existing detection methods, DECREE directly scans encoders. Specifically, for a subject encoder, DECREE first searches for a minimal trigger pattern such that any inputs stamped with the trigger share similar em-beddings. The identified trigger is then utilized to decide whether the given encoder is benign or trojaned. We evaluate DECREE on 444 encoders and it significantly outperforms existing backdoor detection techniques. We also show the effectiveness of DECREE on large size image encoders pre-trained on ImageNet [12] and OpenAI’s CLIP [40] image encoders pre-trained on 400 million uncurated (image, text) pairs. DECREE consistently achieves high detection accu-racy even when it only has limited access or no access to the pre-training dataset. Threat Model. Our threat model is consistent with theliterature [3, 20]. We only consider backdoor attacks on vision encoders. We assume the attacker has the capabilities of injecting a small portion of samples into the training set of encoders. Once the encoder is trojaned, the attacker has no control over downstream applications. Given an encoder, the defender has limited or no access to the pre-training dataset and needs to determine whether the encoder is trojaned or not. She does not have any knowledge about the attack target either. We consider injected backdoors that are static (e.g. patch backdoors) and universal (i.e. all the classes except for the target class are the victim).
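The detection step described above, searching for a minimal trigger such that all stamped inputs share similar embeddings, follows the general shape of trigger-inversion defenses. The sketch below is a simplified reading, not DECREE's exact objective: a mask and a pattern are optimized to maximize the pairwise cosine similarity of stamped embeddings, while an L1 term keeps the trigger small; the size of the recovered trigger would then be compared against a threshold to flag the encoder as trojaned.

```python
import torch
import torch.nn.functional as F

def invert_trigger(encoder, images, steps=500, lr=0.1, lam=1e-3):
    """Search for a small trigger that collapses stamped embeddings together.

    encoder: frozen image encoder under test; images: (N, 3, H, W) clean inputs.
    `mask` decides where the trigger overwrites the image, `pattern` is its
    content; lam penalizes the mask area so the trigger stays minimal.
    """
    mask = torch.zeros(1, 1, *images.shape[-2:], requires_grad=True)
    pattern = torch.rand(1, 3, *images.shape[-2:], requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    for _ in range(steps):
        m = torch.sigmoid(mask)
        stamped = (1 - m) * images + m * torch.sigmoid(pattern)
        emb = F.normalize(encoder(stamped), dim=-1)     # (N, D)
        sim = emb @ emb.T                                # pairwise cosine similarity
        loss = -sim.mean() + lam * m.abs().sum()         # similar embeddings, small mask
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()
```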
Jin_TensoIR_Tensorial_Inverse_Rendering_CVPR_2023
Abstract
We propose TensoIR, a novel inverse rendering approach based on tensor factorization and neural fields. Unlike previous works that use purely MLP-based neural fields, thus suffering from low capacity and high computation costs, we extend TensoRF, a state-of-the-art approach for radiance field modeling, to estimate scene geometry, surface reflectance, and environment illumination from multi-view images captured under unknown lighting conditions. Our approach jointly achieves radiance field reconstruction and physically-based model estimation, leading to photo-realistic novel view synthesis and relighting results. Benefiting from the efficiency and extensibility of the TensoRF-based representation, our method can accurately model secondary shading effects (like shadows and indirect lighting) and generally support input images captured under single or multiple unknown lighting conditions. The low-rank tensor representation allows us to not only achieve fast and compact reconstruction but also better exploit shared information under an arbitrary number of capturing lighting conditions. We demonstrate the superiority of our method to baseline methods qualitatively and quantitatively on various challenging synthetic and real-world scenes.
1. Introduction
Inverse rendering is a long-standing problem in computer vision and graphics, aiming to reconstruct physical attributes (like shape and materials) of a 3D scene from captured images and thereby supporting many downstream applications such as novel view synthesis, relighting and material editing. This problem is inherently challenging and ill-posed, especially when the input images are captured in the wild under unknown illumination. Recent works [6, 7, 28, 41] address this problem by learning neural field representations in the form of multi-layer perceptrons (MLP) similar to NeRF [22]. However, pure MLP-based methods usually suffer from low capacity and high computational costs, greatly limiting the accuracy and efficiency of inverse rendering. Figure 1. Given multi-view captured images of a real scene (a), our approach – TensoIR – is able to achieve high-quality shape and material reconstruction with high-frequency details (b). This allows us to render the scene under novel lighting and viewpoints (c), and also change its material properties (d). In this work, we propose a novel inverse rendering framework that is efficient and accurate. Instead of purely using MLPs, we build upon the recent TensoRF [11] scene representation, which achieves fast, compact, and state-of-the-art quality on radiance field reconstruction for novel view synthesis. Our tensor factorization-based inverse rendering framework can simultaneously estimate scene geometry, materials, and illumination from multi-view images captured under unknown lighting conditions. Benefiting from the efficiency and extensibility of the TensoRF representation, our method can accurately model secondary shading effects (like shadows and indirect lighting) and generally support input images captured under a single or multiple unknown lighting conditions. Similar to TensoRF, our approach models a scene as a neural voxel feature grid, factorized as multiple low-rank tensor components. We apply multiple small MLPs on the same feature grid and regress volume density, view-dependent color, normal, and material properties, to model the scene geometry and appearance. This allows us to simultaneously achieve both radiance field rendering – using density and view-dependent color, as done in NeRF [22] – and physically-based rendering – using density, normal and material properties, as done in inverse rendering methods [3, 20]. We supervise both renderings with the captured images to jointly reconstruct all scene components. In essence, we reconstruct a scene using both a radiance field and a physically-based model to reproduce the scene's appearance. While inverse rendering is our focus and primarily enabled by the physically-based model, modeling the radiance field is crucial for the success of the reconstruction (see Fig. 3), in significantly facilitating the volume density reconstruction and effectively regularizing the same tensor features shared by the physically-based model.
Despite that previous works [41] similarly reconstruct NeRFs in inverse rendering, their radiance field is pre-computed and fixed in the subsequent inverse rendering stage; in contrast, our ra-diance field is jointly reconstructed and also benefits the physically-based rendering model estimation during opti-mization, leading to much higher quality. Besides, our ra-diance field rendering can also be directly used to provide accurate indirect illumination for the physically-based ren-dering, further benefiting the inverse rendering process. Accounting for indirect illumination and shadowing is a critical challenge in inverse rendering. This is especially challenging for volume rendering, since it requires sam-pling a lot of secondary rays and computing the integrals along the rays by performing ray marching. Limited by the high-cost MLP evaluation, previous NeRF-based meth-ods and SDF-based methods either simply ignore secondary effects [6, 7, 39], or avoid online computation by approx-imating these effects in extra distilled MLPs [41, 42], re-quiring expensive pre-computation and leading to degrada-tion in accuracy. In contrast, owing to our efficient tensor-factorized representation, we are able to explicitly compute the ray integrals online for accurate visibility and indirect lighting with the radiance field rendering using low-cost second-bounce ray marching. Consequently, our approach enables higher accuracy in modeling these secondary ef-fects, which is crucial in achieving high-quality scene re-construction (see Tab. 2). In addition, the flexibility and efficiency of our tensor-factorized representation allows us to perform inverse ren-dering from multiple unknown lighting conditions with lim-ited GPU memory. Multi-light capture is known to be ben-eficial for inverse rendering tasks by providing useful pho-tometric cues and reducing ambiguities in material estima-tion, thus being commonly used [13,15,18]. However, since each lighting condition corresponds to a separate radiancefield, this can lead to extremely high computational costs if reconstructing multiple purely MLP-based NeRFs like pre-vious works [28,41,42]. Instead, we propose to reconstruct radiance fields under multi-light in a joint manner as a fac-torized tensor. Extending from the original TensoRF repre-sentation that is a 4D tensor, we add an additional dimen-sion corresponding to different lighting conditions, yielding a 5D tensor. Specifically, we add an additional vector factor (whose length equals the number of lights) per tensor com-ponent to explain the appearance variations under different lighting conditions, and we store this 5D tensor by a small number of bases whose outer-product reconstructs the 5D tensor. When multi-light capture is available, our frame-work can effectively utilize the additional photometric cues in the data, leading to better reconstruction quality than a single-light setting (see Tab. 1). As shown in Fig. 1, our approach can reconstruct high-fidelity geometry and reflectance of a complex real scene captured under unknown natural illumination, enabling photo-realistic rendering under novel lighting conditions and additional applications like material editing. We eval-uate our framework extensively on both synthetic and real data. 
Our approach outperforms previous inverse rendering methods [41,42] by a large margin qualitatively and quanti-tatively on challenging synthetic scenes, achieving state-of-the-art quality in scene reconstruction – for both geometry and material properties – and rendering – for both novel view synthesis and relighting. Owing to our efficient tenso-rial representation and joint reconstruction scheme, our ap-proach also leads to a much faster reconstruction speed than previous neural field-based reconstruction methods while achieving superior quality. In summary, • We propose a novel tensor factorization-based inverse rendering approach that jointly achieves physically-based rendering model estimation and radiance field reconstruction, leading to state-of-the-art scene recon-struction results; • Our framework includes an efficient online visibility and indirect lighting computation technique, providing accurate second-bounce shading effects; • We enable efficient multi-light reconstruction by mod-eling an additional lighting dimension in the factorized tensorial representation.
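To make the multi-light contribution above more concrete, the sketch below shows one way a per-light vector factor can be attached to a CP-style factorized feature grid, so appearance under each lighting condition is explained by modulating shared rank-1 components. This is only a rough illustration under assumed shapes (component count, resolution, number of lights); TensoRF/TensoIR use a more elaborate vector-matrix factorization, and the class name and sizes here are hypothetical.

```python
import torch
import torch.nn as nn

class MultiLightCPGrid(nn.Module):
    """CP-style factorized 5D feature grid over (channel, x, y, z, light).

    Each rank-1 component is an outer product of one vector per spatial axis,
    one channel vector, and one per-light vector, mirroring the idea of adding
    a lighting dimension to a factorized radiance-field tensor.
    Hypothetical sketch, not the released TensoIR code.
    """

    def __init__(self, n_comp=16, channels=8, res=64, n_lights=3):
        super().__init__()
        self.chan = nn.Parameter(torch.randn(n_comp, channels) * 0.1)
        self.vx = nn.Parameter(torch.randn(n_comp, res) * 0.1)
        self.vy = nn.Parameter(torch.randn(n_comp, res) * 0.1)
        self.vz = nn.Parameter(torch.randn(n_comp, res) * 0.1)
        # one extra vector factor per component to explain per-light appearance
        self.vlight = nn.Parameter(torch.ones(n_comp, n_lights))

    def query(self, ix, iy, iz, light_id):
        # ix, iy, iz: (N,) long tensors of voxel indices; light_id: int
        w = self.vx[:, ix] * self.vy[:, iy] * self.vz[:, iz]   # (R, N) component weights
        w = w * self.vlight[:, light_id].unsqueeze(-1)         # modulate by lighting condition
        return torch.einsum('rc,rn->nc', self.chan, w)         # (N, channels)

grid = MultiLightCPGrid()
feat = grid.query(torch.tensor([1, 2]), torch.tensor([3, 4]), torch.tensor([5, 6]), light_id=0)
print(feat.shape)  # torch.Size([2, 8])
```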
He_Primitive_Generation_and_Semantic-Related_Alignment_for_Universal_Zero-Shot_Segmentation_CVPR_2023
Abstract We study universal zero-shot segmentation in this work to achieve panoptic, instance, and semantic segmentation for novel categories without any training samples. Such zero-shot segmentation ability relies on inter-class relationships in semantic space to transfer the visual knowledge learned from seen categories to unseen ones. Thus, it is desired to well bridge semantic-visual spaces and apply the semantic relationships to visual feature learning. We introduce a generative model to synthesize features for unseen categories, which links semantic and visual spaces as well as addresses the issue of lack of unseen training data. Furthermore, to mitigate the domain gap between semantic and visual spaces, firstly, we enhance the vanilla generator with learned primitives, each of which contains fine-grained attributes related to categories, and synthesize unseen features by selectively assembling these primitives. Secondly, we propose to disentangle the visual feature into the semantic-related part and the semantic-unrelated part that contains useful visual classification clues but is less relevant to semantic representation. The inter-class relationships of semantic-related visual features are then required to be aligned with those in semantic space, thereby transferring semantic knowledge to visual feature learning. The proposed approach achieves state-of-the-art performance on zero-shot panoptic segmentation, instance segmentation, and semantic segmentation.
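A minimal sketch of the relationship alignment stated in the abstract: compute inter-class cosine similarities for class prototypes in the (semantic-related) visual space and in the semantic-embedding space, and penalize their mismatch. The function name, shapes, and the MSE form are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def relation_alignment_loss(visual_protos, semantic_embeds):
    """Encourage inter-class relations of semantic-related visual prototypes to
    match those of the semantic embeddings (illustrative sketch only).

    visual_protos:   (K, Dv) one prototype feature per class
    semantic_embeds: (K, Ds) word/semantic embedding per class
    """
    v = F.normalize(visual_protos, dim=-1)
    s = F.normalize(semantic_embeds, dim=-1)
    rel_v = v @ v.t()          # (K, K) cosine similarities in visual space
    rel_s = s @ s.t()          # (K, K) cosine similarities in semantic space
    return F.mse_loss(rel_v, rel_s)

loss = relation_alignment_loss(torch.randn(10, 256), torch.randn(10, 300))
```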
1. Introduction Image segmentation aims to group pixels with different semantics, e.g., category or instance [11]. Deep learning methods [9, 11, 27, 34, 35, 44] have greatly advanced the performance of image segmentation with the powerful learning ability of CNNs [28] and Transformers [54]. However, since deep learning methods are data-driven, great challenges are induced by the intense demand for large-scale labeled training samples, which are labor-intensive and time-consuming.
Figure 1. Zero-shot image segmentation aims to transfer the knowledge learned from seen classes to unseen ones (i.e., never shown up in training) with the help of semantic knowledge.
To address this issue, zero-shot learning (ZSL) [36,47] is proposed to classify novel objects with no training samples. Recently, ZSL is extended into segmentation tasks like zero-shot semantic segmentation (ZSS) [4, 57] and zero-shot instance segmentation (ZSI) [63]. Herein, we further introduce zero-shot panoptic segmentation (ZSP) and aim to build a universal framework for zero-shot panoptic/semantic/instance segmentation with the help of semantic knowledge, as shown in Fig. 1. Different from image classification, segmentation requires pixel-wise classification and is more challenging in terms of class representation learning. Substantial efforts have been devoted to zero-shot semantic segmentation [4, 57] and can be categorized into projection-based methods [19, 57, 61] and generative model-based methods [4, 25, 38]. The generative model-based methods are usually superior to the projection-based methods because they produce synthetic training features for the unseen group, which contribute to alleviating the crucial bias issue [49] of tending to classify objects into seen classes. Owing to the above merits, we follow the paradigm of generative model-based methods to address zero-shot segmentation tasks. However, the current generative model-based methods are usually in the form of per-pixel-level generation, which is not robust enough in more complicated scenarios. Recently, several works propose to decouple the segmentation into class-agnostic mask prediction and object-level classification [8, 11, 29, 56]. We follow this strategy and degenerate the pixel-level generation to a more robust object-level generation. Moreover, previous generative works [4, 25, 38] usually learn a direct mapping from semantic embedding to visual features. Such a generator does not consider the visual-semantic gap in feature granularity, i.e., images contain much richer information than language. The direct mapping from coarse to fine-grained information results in low-quality synthetic features. To address this issue, we propose to utilize abundant primitives with very fine-grained semantic attributes to compose visual representations.
Different assemblies of these primitives construct different class representations, where the assembly is decided by the relevance between primitives and semantic embeddings. Primitives greatly enhance the expressive diversity and effectiveness of the generator, especially in terms of rich fine-grained attributes, making the synthetic features for different classes more reliable and discriminative. However, there are only real image features of seen classes to supervise the generator, leaving unseen classes unsupervised. To provide more constraints for the feature generation of unseen classes, we propose to transfer the inter-class relationships in semantic space to visual space. The category relationships obtained by semantic embed-dings are employed to constrain the inter-class relationships of visual features. With such constraint, the visual features, especially the synthesized features for unseen classes, are promoted to have a homogeneous inter-class structure as in semantic space. Nevertheless, there is a discrepancy between the visual space and the semantic space [10, 52], so as to their inter-class relationships. Visual features contain richer information and cannot be fully aligned with semantic embeddings. Directly aligning two disjoint relationships inevitably compromises the discriminative of visual features. To address this issue, we propose to disen-tangle visual features into semantic-related and semantic-unrelated features, where the former is better aligned with the semantic embedding while the latter is noisy to semantic space. We only use semantic-related features for relation-ship alignment. The proposed relationship alignment and feature disentanglement are mutually beneficial. Feature disentanglement builds semantic-related visual space to facilitate relationship alignment and excludes semantic-unrelated features that are noisy for alignment. Relationship alignment in turn contributes to disentangling semantic-related features by providing semantic clues.Overall, the main contributions are as follows: • We study universal zero-shot segmentation and pro-pose Primitive generation with collaborative relation-shipAlignment and feature Disentanglement learn ing (PADing ) as a unified framework for ZSP/ZSI/ZSS. • We propose a primitive generator that employs lots of learned primitives with fine-grained attributes to synthesize visual features for unseen categories, which helps to address the bias issue and domain gap issue. • We propose a collaborative relationship alignment and feature disentanglement learning approach to facilitate the generator producing better synthetic features. • The proposed approach PADing achieves new state-of-the-art performance on zero-shot panoptic segmen-tation (ZSP), zero-shot instance segmentation (ZSI), and zero-shot semantic segmentation (ZSS).
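As a rough illustration of the primitive generator described above, the sketch below softly assembles a bank of learned primitives with attention weights derived from a class semantic embedding (plus noise so one class yields diverse samples). All module names, dimensions, and the attention form are assumptions for illustration only, not PADing's released implementation.

```python
import torch
import torch.nn as nn

class PrimitiveGenerator(nn.Module):
    """Synthesize a class feature by softly assembling learned primitives,
    weighted by their relevance to the class semantic embedding (sketch)."""

    def __init__(self, n_prims=100, prim_dim=256, sem_dim=300):
        super().__init__()
        self.primitives = nn.Parameter(torch.randn(n_prims, prim_dim) * 0.02)
        self.query_proj = nn.Linear(sem_dim, prim_dim)

    def forward(self, sem_embed, noise_scale=0.1):
        # sem_embed: (B, sem_dim) semantic embeddings of (unseen) classes
        q = self.query_proj(sem_embed)
        q = q + noise_scale * torch.randn_like(q)            # diversity per sample
        attn = torch.softmax(q @ self.primitives.t() / q.shape[-1] ** 0.5, dim=-1)
        return attn @ self.primitives                        # (B, prim_dim) synthetic features

gen = PrimitiveGenerator()
fake_unseen_feats = gen(torch.randn(4, 300))
```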
Du_Object-Goal_Visual_Navigation_via_Effective_Exploration_of_Relations_Among_Historical_CVPR_2023
Abstract Object-goal visual navigation aims at steering an agent toward an object via a series of moving steps. Previous works mainly focus on learning informative visual repre-sentations for navigation, but overlook the impacts of nav-igation states on the effectiveness and efficiency of nav-igation. We observe that high relevance among naviga-tion states will cause navigation inefficiency or failure for existing methods. In this paper, we present a History-inspired Navigation Policy Learning (HiNL) framework to estimate navigation states effectively by exploring relation-ships among historical navigation states. In HiNL, we pro-pose a History-aware State Estimation (HaSE) module to alleviate the impacts of dominant historical states on the current state estimation. Meanwhile, HaSE also encour-ages an agent to be alert to the current observation changes, thus enabling the agent to make valid actions. Furthermore, we design a History-based State Regularization (HbSR) to explicitly suppress the correlation among navigation states in training. As a result, our agent can update states more ef-fectively while reducing the correlations among navigation states. Experiments on the artificial platform AI2-THOR (i.e., iTHOR and RoboTHOR) demonstrate that HiNL sig-nificantly outperforms state-of-the-art methods on both Suc-cess Rate and SPL in unseen testing environments.
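The HbSR idea from the abstract, suppressing correlation between a navigation state and its non-adjacent predecessors, can be illustrated with a simple penalty over a trajectory of per-step states. This is a hedged sketch under assumed shapes; the paper's exact regularizer may be formulated differently.

```python
import torch

def hbsr_loss(states):
    """History-based state regularization (sketch): push the correlation between
    each state and all earlier states -- except its direct predecessor -- toward
    zero. `states` is (T, D), one navigation state per step."""
    T = states.shape[0]
    s = states - states.mean(dim=1, keepdim=True)
    s = s / (s.norm(dim=1, keepdim=True) + 1e-8)
    corr = s @ s.t()                                          # (T, T) correlation-like matrix
    mask = torch.tril(torch.ones(T, T), diagonal=-2).bool()   # pairs at temporal lag >= 2
    return corr[mask].abs().mean() if mask.any() else corr.new_zeros(())

penalty = hbsr_loss(torch.randn(8, 512))
```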
1. Introduction Object-goal visual navigation is to direct an agent to move consecutively toward an object of a specific category. Without knowing the environment map beforehand, at each navigation step, an agent first needs to represent its visual observations, then estimate its navigation states from the visual representations and the preceding states, and at last predict the corresponding action. Therefore, to achieve an effective and efficient navigation system, learning instructive visual representations and navigation states is critical. Prevailing visual navigation works [13, 14, 50] focus on extracting informative visual representations, while some methods [13, 49] adjust navigation policy during inference.
Figure 1. Motivation of our proposed History-inspired Navigation Learning (HiNL) framework. (a) Demonstration of inefficient action predictions caused by highly-correlated navigation states: the agent is stuck by an obstacle, i.e., a low-profile sofa, and repeatedly predicts an invalid action, i.e., MoveAhead. (b) Demonstration of the correlation coefficients among navigation states trained in two manners: navigation states estimated via LSTM are highly relevant, whereas our HiNL produces low-correlated navigation states.
All these approaches commonly employ recurrent neural networks (e.g., LSTM) to estimate navigation states. However, we observe that the navigation states of existing methods [13, 14, 49] exhibit high relevance, as demonstrated in Figure 1b, and the highly-correlated navigation states would lead to inefficient navigation policy (i.e., failure to respond to observation changes rapidly). For instance, as shown in Figure 1a, an agent is stuck by the low-profile sofa and fails to take proper actions to circumvent the obstacle. Hence, we aim to endow an agent with the capability of updating its navigation states effectively while avoiding producing highly-correlated states. In this work, we propose a History-inspired Navigation Learning (HiNL) framework to obtain informative navigation states by exploiting the relationships among historical navigation states. HiNL consists of two novel components: (i) a History-aware State Estimation (HaSE) module, and (ii) a History-based State Regularization (HbSR). Here, our HaSE module is designed to generate a state that can be promptly updated according to visual observations. Specifically, HaSE first analyzes the correlations among historical navigation states and then eliminates the influence of dominant historical states on the current state estimation. As a result, an agent is able to predict navigation states which can dynamically react to the current visual observations and then make sensible navigation actions. Furthermore, existing reinforcement learning-based object-goal navigation systems [13,14,49] often assume the navigation state transition exhibits the first-order Markov property. This would allow the emergence of high correlations among navigation states, leading to inferior navigation policy.
To address this issue, we introduce an explicit constraint on the correlation among all the states, namely History-based State Regularization (HbSR). To be specific, HbSR enforces the relevance (i.e., correlations) between a state and all its preceding states (except its previous state) to be low. Here, we do not constrain states of two consecutive steps because temporally close states generally have relevance in practice considering the navigation continuity. After training with our HbSR, the correlations among the navigation states become much lower (see Figure 1b). This phenomenon also indicates HiNL effectively updates states. Hence, our navigation system can respond to observation changes adaptively. To demonstrate the superiority of HiNL, we conduct experiments in the widely-adopted artificial environments iTHOR [26] and RoboTHOR [11]. HiNL outperforms the state-of-the-art by a large margin. To be specific, we improve the Success Rate (SR) from 72.2% to 80.1% and Success weighted by Path Length (SPL) from 0.449 to 0.498 in iTHOR. Overall, our major contributions are summarized as follows: • We propose a History-inspired Navigation Policy (HiNL) framework to effectively estimate navigation states by utilizing historical states. • We design a History-aware State Estimation (HaSE) to eliminate dominant historical states in the current state estimation. Therefore, the agent reduces the impact of distant navigation states on the state estimation, and thus reacts dynamically to the observation changes. • We introduce a History-based State Regularization (HbSR) to explicitly constrain the correlations among navigation states. By doing this, the agent can effectively update navigation states with low relevance. 2. Related Works Visual navigation. Traditional works [3, 4, 32] often leverage an entire environment map for navigation and divide the task into three parts: mapping, localization, and path planning. However, environment maps are generally unavailable in unseen environments. Dissanayake et al. [12] adopt simultaneous localization and mapping (SLAM) to infer robot positions. Campari et al. [5] learn agent states via a Taskonomy model bank [52], but they need an RGB-D sensor to construct an online map during navigation. Recently, due to significant advancements in deep learning [16,22,45,51], reinforcement learning-based navigation methods [28, 29, 31, 33, 54] take visual observations as inputs and predict navigation actions. Vision-Language Navigation (VLN) approaches [9, 10, 15, 39, 40] steer an agent to the target based on its visual observations and navigation guidance in natural language. Similar to VLN, point-goal visual navigation methods [44,48] aim at driving an agent to a given point with step-wise directional indications. Moreover, audio-visual navigation methods [7, 8, 18] utilize additional audio signals to move a robot to the target position. Al-Halah et al. [1] propose a transfer learning model for multiple navigation tasks by embedding various navigation goals, e.g., image, sketch, and audio. Our work falls in the field of object-goal visual navigation [27,31,36,47,50]. However, existing object-goal navigation methods mainly focus on representing visual features comprehensively while we investigate the impact of navigation states on navigation performance. Wortsman et al. [49] exploit word embedding (i.e., GloVe embedding [35]) to represent the target category and introduce a meta network that mimics a reward function during inference. Du et al.
[13] introduce an object relation graph, dubbed ORG, to encode visual observations. They also design a tentative policy for deadlock avoidance and adjust the navigation policy in unseen testing environments. The Hierarchical Object-to-Zone (HOZ) graph [53] offers coarse-to-fine guidance based on real-time updates. Additionally, VTNet [14] incorporates object and region features with location cues, and EmbCLIP [24] leverages the contrastive language image pretraining encoder for visual navigation tasks. Correlation Modeling in Reinforcement Learning. Several methods [37, 43] explore correlations in hidden Markov models for inverse reinforcement learning (IRL). For action prediction, Hester et al. [20] propose Texplore to model correlations within the transition dynamics via a random forest. Vsovsic et al. [42] introduce a Bayesian approach to learn policy from demonstrations of experts by capturing correlations among actions. Alt et al. [2] design a Bayesian learning framework to establish temporal and spatial correlations among actions.
Figure 2. Our History-inspired Navigation Policy Learning (HiNL) framework. HiNL takes visual representations as input and outputs navigation actions. HiNL involves two innovative parts: a History-aware State Estimation (HaSE) module and a History-based State Regularization (HbSR). HaSE is proposed to estimate navigation states that can reflect current observation changes from the perspective of network design, while HbSR is designed to enforce the informativeness of states from the view of the training objective. Both of them help to achieve effective and efficient navigation policy.
Furthermore, Sermanet [41] propose a self-supervised TCN for learning robotic behaviors and representations from unlabeled multi-viewpoint videos. TCN uses a metric learning loss to creat
Huang_Twin_Contrastive_Learning_With_Noisy_Labels_CVPR_2023
Abstract Learning from noisy data is a challenging task that sig-nificantly degenerates the model performance. In this paper, we present TCL, a novel twin contrastive learning model to learn robust representations and handle noisy labels for classification. Specifically, we construct a Gaussian mix-ture model (GMM) over the representations by injecting the supervised model predictions into GMM to link label-free latent variables in GMM with label-noisy annotations. Then, TCL detects the examples with wrong labels as the out-of-distribution examples by another two-component GMM, taking into account the data distribution. We further propose a cross-supervision with an entropy regularization loss that bootstraps the true targets from model predictions to handle the noisy labels. As a result, TCL can learn discriminative representations aligned with estimated labels through mixup and contrastive learning. Extensive experimental results on several standard benchmarks and real-world datasets demonstrate the superior performance of TCL. In particular, TCL achieves 7.5% improvements on CIFAR-10 with 90% noisy label—an extremely noisy scenario. The source code is available at https://github.com/Hzzone/TCL .
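The abstract's two-component GMM for separating clean from wrongly labeled samples can be illustrated with a common recipe: fit a two-component Gaussian mixture to a per-sample statistic and treat the low-mean component as clean. TCL's actual criterion is built on the learned data distribution, so the scalar statistic and threshold used here are simplifying assumptions, not the paper's exact detector.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_clean_noisy(per_sample_scores, threshold=0.5):
    """Fit a two-component GMM to a per-sample statistic (e.g., a loss or a
    negative log-likelihood under the class-conditional model) and treat the
    low-mean component as 'clean'. Simplified stand-in for OOD-style detection."""
    x = np.asarray(per_sample_scores, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, covariance_type='full', reg_covar=1e-4)
    gmm.fit(x)
    clean_comp = int(np.argmin(gmm.means_.ravel()))   # component with the lower mean
    p_clean = gmm.predict_proba(x)[:, clean_comp]
    return p_clean > threshold, p_clean

is_clean, prob_clean = split_clean_noisy(np.random.rand(1000))
```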
1. Introduction Deep neural networks have shown exciting performance for classification tasks [13]. Their success largely results from the large-scale curated datasets with clean human annotations, such as CIFAR-10 [19] and ImageNet [6], in which the annotation process, however, is tedious and cumbersome. In contrast, one can easily obtain datasets with some noisy annotations—from online shopping websites [40], crowdsourcing [42, 45], or Wikipedia [32]—for training a classification neural network. Unfortunately, the mislabelled data are prone to significantly degrade the performance of deep neural networks. Therefore, there is considerable interest in training noise-robust classification networks in recent years [20, 21, 25, 29, 31, 48]. To mitigate the influence of noisy labels, most of the methods in literature propose the robust loss functions [37, 47], reduce the weights of noisy labels [35, 39], or correct the noisy labels [20, 29, 31]. In particular, label correction methods have shown great potential for better performance on the dataset with a high noise ratio. Typically, they correct the labels by using the combination of noisy labels and model predictions [31], which usually require an essential iterative sample selection process [1, 20, 21, 29]. For example, Arazo et al. [1] uses the small-loss trick to carry out sample selection and correct labels via the weighted combination. In recent years, contrastive learning has shown promising results in handling noisy labels [21, 21, 29]. They usually leverage contrastive learning to learn discriminative representations, and then clean the labels [21, 29] or construct the positive pairs by introducing the information of nearest neighbors in the embedding space. However, using the nearest neighbors only considers the label noise within a small neighborhood, which is sub-optimal and cannot handle extreme label noise scenarios, as the neighboring examples may also be mislabeled at the same time. To address this issue, this paper presents TCL, a novel twin contrastive learning model that explores the label-free unsupervised representations and label-noisy annotations for learning from noisy labels. Specifically, we leverage contrastive learning to learn discriminative image representations in an unsupervised manner and construct a Gaussian mixture model (GMM) over its representations. Unlike unsupervised GMM, TCL links the label-free GMM and label-noisy annotations by replacing the latent variable of GMM with the model predictions for updating the parameters of GMM. Then, benefitting from the learned data distribution, we propose to formulate label noise detection as an out-of-distribution (OOD) problem, utilizing another two-component GMM to model the samples with clean and wrong labels. The merit of the proposed OOD label noise detection is to take the full data distribution into account, which is robust to the neighborhood with strong label noise. Furthermore, we propose a bootstrap cross-supervision with an entropy regularization loss to reduce the impact of wrong labels, in which the true labels of the samples with wrong labels are estimated from another data augmentation.
Last, to further learn robust representations, we leverage contrastive learning and Mixup techniques to inject the structural knowl-edge of classes into the embedding space, which helps align the representations with estimated labels. The contributions are summarized as follows: We present TCL, a novel twin contrastive learning model that explores the label-free GMM for unsuper-vised representations and label-noisy annotations for learning from noisy labels. We propose a novel OOD label noise detection method by modeling the data distribution, which excels at han-dling extremely noisy scenarios. We propose an effective cross-supervision, which can bootstrap the true targets with an entropy loss to regu-larize the model. Experimental results on several benchmark datasets and real-world datasets demonstrate that our method outperforms the existing state-of-the-art methods by a significant margin. In particular, we achieve 7.5% improvements in extremely noisy scenarios.
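A hedged sketch of the cross-supervision with entropy regularization summarized above: targets for one augmented view are bootstrapped from the other view's predictions blended with the given (possibly noisy) labels according to a per-sample clean probability, plus an entropy term on the mean prediction to avoid class collapse. The blending rule and weights are illustrative, not TCL's exact objective.

```python
import torch
import torch.nn.functional as F

def bootstrap_cross_supervision(logits_view1, logits_view2, noisy_onehot, w_clean):
    """Sketch: the target for view 1 mixes the given label and view 2's prediction,
    weighted by the per-sample clean probability w_clean in [0, 1]."""
    with torch.no_grad():
        p2 = F.softmax(logits_view2, dim=1)
        w = w_clean.unsqueeze(1)
        target = w * noisy_onehot + (1.0 - w) * p2            # bootstrapped target
    log_p1 = F.log_softmax(logits_view1, dim=1)
    ce = -(target * log_p1).sum(dim=1).mean()
    p_mean = F.softmax(logits_view1, dim=1).mean(dim=0)
    neg_entropy = (p_mean * torch.log(p_mean + 1e-8)).sum()   # minimizing spreads predictions
    return ce + neg_entropy

loss = bootstrap_cross_supervision(
    torch.randn(16, 10), torch.randn(16, 10),
    F.one_hot(torch.randint(0, 10, (16,)), 10).float(), torch.rand(16))
```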
Hu_TriVol_Point_Cloud_Rendering_via_Triple_Volumes_CVPR_2023
Abstract Existing learning-based methods for point cloud rendering adopt various 3D representations and feature querying mechanisms to alleviate the sparsity problem of point clouds. However, artifacts still appear in rendered images, due to the challenges in extracting continuous and discriminative 3D features from point clouds. In this paper, we present a dense while lightweight 3D representation, named TriVol, that can be combined with NeRF to render photo-realistic images from point clouds. Our TriVol consists of triple slim volumes, each of which is encoded from the point cloud. TriVol has two advantages. First, it fuses receptive fields at different scales and thus extracts local and non-local features for discriminative representation. Second, since the volume size is greatly reduced, our 3D decoder can be efficiently inferred, allowing us to increase the resolution of the 3D space to render more point details. Extensive experiments on different benchmarks with varying kinds of scenes/objects demonstrate our framework's effectiveness compared with current approaches. Moreover, our framework has excellent generalization ability to render a category of scenes/objects without fine-tuning. The source code is available at https://github.com/dvlab-research/TriVol.git.
1. Introduction Photo-realistic point cloud rendering (without hole artifacts and with clear details) approaches can be employed for a variety of real-world applications, e.g., the visualization of automatic drive [6, 8, 23, 29], digital humans [1, 19, 27], and simulated navigation [5,11,43]. Traditional point cloud renderers [36] adopt graphics-based methods, which do not require any learning-based models. They project existing points as image pixels by rasterization and composition. However, due to the complex surface materials in real-world scenes and the limited precision of the 3D scanners, there are a large number of missing points in the input point cloud, leading to vacant and blurred image areas inevitably as illustrated in Fig. 1. In recent years, learning-based approaches [1, 10, 35, 45, 49] have been proposed to alleviate the rendering problem in graphics-based methods. They use a variety of querying strategies in the point cloud to obtain continuous 3D features for rendering, e.g., the ball querying employed in PointNet++ [34] and the KNN-based querying in Point-NeRF [45]. However, if the queried position is far away from its nearest points, the feature extraction process will usually fail and have generalization concerns. To guarantee accurate rendering, two groups of frameworks are further proposed. The first group [1, 35, 38] projects the features of all points into the 2D plane and then trains 2D neural networks, like UNet [37], to restore and sharpen the images. However, since such a 2D operation is individual for different views, the rendering outcomes among nearby views are inconsistent [17, 40], i.e., the appearance of the same object might be distinct under different views. To overcome the artifact, physical-based volume rendering [24] in Neural Radiance Fields (NeRF) [26] without 2D neural rendering is an elegant solution [40].
The second group [10, 47] applies a 3D Convolutional Network (ConvNet) to extract a dense 3D feature volume, then acquires the 3D feature of arbitrary point by trilinear interpolation for subsequent volume ren-dering. Nevertheless, conducting such a dense 3D network is too heavy for high-resolution 3D representation, limiting their practical application. In summary, to the best of our knowledge, there is currently no lightweight point cloud renderer whose results are simultaneously view-consistent and photo-realistic. In this paper, we propose a novel 3D representation, named TriV ol, to solve the above issues. Our TriV ol is composed of three slender volumes which can be efficiently encoded from the point cloud. Compared with dense grid voxels, the computation on TriV ol is efficient and the re-spective fields are enlarged, allowing the 3D representation with high resolution and multi-scale features. Therefore, TriV ol can provide discriminative and continuous features for all points in the 3D space. By combining TriV ol with NeRF [26], the point cloud rendering results show a sig-nificant quantitative and qualitative improvement compared with the state-of-the-art (SOTA) methods. In our framework, we develop an efficient encoder to transform the point cloud into the Initial TriVol and then adopt a decoder to extract the Feature TriVol . Although the encoder can be implemented with the conventional point-based backbones [33, 34], we design a simple but effective grouping strategy that does not need extra neural models. The principle is first voxelizing the point cloud into grid voxels and then re-organizing the voxels on each of three axes individually, which is empirically proven to be a better method. As for the decoder, it can extract the feature rep-resentation for arbitrary 3D points. Hence, we utilize three independent 3D ConvNet to transfer each volume into dense feature representations. Due to the slender shape of the vol-umes, the computation of the 3D ConvNet is reduced. Also, the 3D ConvNet can capture more non-local information in the grouped axis via a larger receptive field, and extract lo-cal features in the other two directions. With the acquired dense Feature TriVol , the feature of any 3D points can be queried via trilinear interpolation. By combining the queried features with the standard NeRF [26] pipeline, the photo-realistic results are achieved. Exten-sive experiments are conducted on three representative pub-lic datasets (including both datasets of scene [9] and object [4, 12]) level, proving our framework’s superiority over re-cent methods. In addition, benefiting from the discrimina-tive and continuous feature volume in TriV ol, our frame-work has a remarkable generalization ability to render un-seen scenes or objects of the same category, when there is no further fine-tuning process. In conclusion, our contributions are three-fold. • We propose a dense yet efficient 3D representation called TriV ol for point cloud rendering. It is formed by three slim feature volumes and can be efficiently transformed from the point cloud. • We propose an effective encoder-decoder framework for TriV ol representation to achieve photo-realistic and view-consistent rendering from the point cloud. • Extensive experiments are conducted on various benchmarks, showing the advantages of our frame-work over current works in terms of rendering quality. 2. Related Works 2.1. 3D Representation 3D representation is very important when analyzing and understanding 3D objects or scenes. 
There are several im-portant 3D representations, including point cloud, dense voxels [10], sparse voxels [7], Multi-Plane Images (MPI), triple-plane (triplane) [3,13,28,40], multi-plane [25,39,41], and NeRF [26], designed from different tasks. A point cloud is a set of discrete data points in space representing a 3D object or scene. Each point location has a coordi-nate value and could further contain the color. The point cloud is an efficient 3D representation that is usually cap-tured from a real-world scanner or obtained via Multi-view Stereo (MVS) [14]. Each voxel in the dense voxels rep-resents a value on a regular grid in the 3D space. By us-ing interpolation, the continuous 3D features of all 3D po-sitions can be obtained. The sparse voxels [7,15,16] are the compressive representation of the dense voxels since only parts of the voxels have valid values. The triplane repre-sentation [3, 40] is also the simplification of dense voxels, obtained by projecting the 3D voxels to three orthogonal planes. The MPI [10, 25, 41] represents the target scene as a set of RGB and alpha planes within a reference view frustum. Moreover, NeRF [26] is a recently proposed im-plicit 3D representation, which can represent the feature of any input 3D coordinate with an MLP. The MLP maps the continuous input 3D coordinate to the geometry and appear-ance of the scene at that location. We propose TriV ol as a new 3D representation of the point cloud and demonstrate its advantages in point cloud rendering. 2.2. Point-based Rendering Point cloud rendering can be implemented with graphics-and learning-based approaches. The points are 20733 projected to the 2D plane via rasterization and composi-tion in the graphics-based algorithms [36]. The learning-based methods [7, 19, 20, 45] design various strategies to compensate the missing information from the sparse point cloud. For example, ME [7] first conducts sparse ConvNet to extract features for existing points, then computes the features of arbitrary 3D points by ball querying in the lo-cal space. Obviously, most of the points in the whole 3D space have no meaningful features. Point-NeRF [45] makes use of multi-view images to enhance the features of the in-put point cloud, formulating the sparse 3D feature volume and then querying any point features by KNearest Neigh-bors (KNN) [34]. Also, quite a few learning-based methods [1, 31, 32, 35, 38, 44, 48] project the point cloud onto the 2D plane and utilize the 2D networks to recover the hole ar-tifacts caused by the point cloud’s discrete property. For instance, NPBG [1] renders the point cloud with learned neural features in multiple scales and sets a 2D UNet for refinement. Furthermore, several approaches construct 3D feature volumes for rendering [10], e.g., NPCR [10] uses 3D ConvNet to obtain 3D volumes from point clouds and produce multiple depth layers to synthesize novel views. 3. Approach We aim to train a category-specific point renderer Rto directly generate photo-realistic images I(the image height and width are denoted as HandW) from the colored point cloud P, given camera parameters (intrinsic parameter K and extrinsic parameters Randt). When rendering novel point clouds of the same category, no fine-tuning process is required. The rendering process can be represented as I=R(P|R, t, K ), (1) where Pis usually obtained from MVS [2, 14], LiDAR scanners [9], or sampled from synthesized mesh models. In this section, we first encode the
point cloud as the proposed TriVol, then utilize three 3D UNets to decode it into the feature representation. Finally, we combine NeRF [26] by querying point features from TriVol at sampled locations to render the final images. An overview of our method is illustrated in Fig. 2. 3.1. TriVol Representation Grid Voxels. To begin with, we voxelize the sparse point cloud P into grid voxels V with shape R^{C×S×S×S}, where S is the resolution of each axis, and C is the number of feature channels. Since V is a sparse tensor, directly querying within V will only get meaningless values for most of the 3D locations, leading to the vacant areas in the rendered images. Therefore, the critical step is transforming the sparse V into a dense and discriminative representation. One approach is employing a 3D encoder-decoder, e.g., 3D UNet. Nevertheless, such a scheme is not efficient to represent a high-resolution space and render fine-grained details. The reason is two-fold: 1) conducting 3D ConvNet on V requires a lot of computations and memory resources, leading to a small value of S [10]; 2) the sparsity of V impedes the feature propagation since a regular 3D ConvNet only has a small kernel size and receptive field. From grid voxels to TriVol. To overcome the above two issues of V, we propose TriVol, including V_x, V_y, V_z, as a novel 3D representation. As illustrated in Fig. 2, each item in TriVol is a thin volume whose resolution of one axis is obviously smaller than S, and the others are the same as S. As a consequence, the number of total voxels is reduced for lightweight computations. Note that our TriVol is different from triple-plane representations that are employed in existing works, e.g., ConvOnet [30] and EpiGRAF [40]. Their point features are projected to three standard planes, thus much 3D information might be lost [30]. 3.1.1 Encoder for Initial TriVol We first encode the input point cloud into the Initial TriVol {V̄_x, V̄_y, V̄_z}. This process can be completed by existing point-cloud-based networks, such as PointNet [33], PointNet++ [34], and Dense/Sparse ConvNet [7, 15]. Nevertheless, these networks bring an additional and heavy computation burden. Instead, we design an efficient strategy without an explicit learning model. The main step in our encoder is the x-grouping, y-grouping, and z-grouping along different axes. The procedure can be denoted as {V̄_x, V̄_y, V̄_z} = E(V), where E = {E_x, E_y, E_z}. Specifically, to obtain the slim volumes of the x-axis, we first divide V into G×S×S groups along the x-axis, thus each group contains N = S/G voxels. Then we concatenate all N voxels in each group as one new feature voxel to obtain V̄_x ∈ R^{(C·N)×G×S×S}, where C·N is the number of feature channels for each new voxel. V̄_y and V̄_z are encoded by the similar grouping method but along the y and z axes, respectively. Therefore, E can be formulated as
V̄_x = E_x(V) ∈ R^{(C·N)×G×S×S},
V̄_y = E_y(V) ∈ R^{(C·N)×S×G×S},
V̄_z = E_z(V) ∈ R^{(C·N)×S×S×G}.   (2)
Our encoder is simple and introduces two benefits. Firstly, we can set different sizes of G and S to balance the performance and computation. When G ≪ S, huge computing resources are not required, compared with grid voxels V. Thus, we can increase the resolution S to model more point cloud details.
Secondly, since the voxels in the same group share the identical receptive field, the receptive field on the grouped axis is amplified N times, allowing the subsequent decoder to capture more non-local information in the grouped axis via a larger receptive field, and extract local features in the other two directions. For instance, in volume V̄_x, we can extract non-local features on the x-axis and local features on the y and z axes.
Figure 2. Overview of the proposed TriVol for point cloud rendering. (a) Encoding: the input point cloud is encoded to our Initial TriVol along the x, y, and z axes; (b) Decoding: each volume is decoded to a dense feature volume via a unique 3D UNet; (c) Feature Querying: any point's feature is queried by trilinear interpolation in the Feature TriVol; (d) Rendering: we combine the queried point feature with NeRF to render the final image.
3.1.2 Decoder for Feature TriVol After obtaining the Initial TriVol {V̄_x, V̄_y, V̄_z} from the encoder, we utilize 3D ConvNet to decode them as the Feature TriVol {V_x, V_y, V_z}. Our Feature TriVol decoder consists of three 3D UNet [37] modules D = {D_x, D_y, D_z}. Each 3D UNet can acquire non-local features on the grouped axis with the amplifying receptive field and can extract local features on the ungrouped two axes that preserve the standard local receptive field. The decoding procedure can be represented as
V_x = D_x(V̄_x), V_y = D_y(V̄_y), V_z = D_z(V̄_z),   (3)
where V_x ∈ R^{F×G×S×S}, V_y ∈ R^{F×S×G×S}, and V_z ∈ R^{F×S×S×G}, and F denotes the channel number of the Feature TriVol. Although three 3D UNets are required, the small number of G still makes it possible for us to set a large resolution S without increasing computing resources (verified in Sec. 4), resulting in realistic images with rich details. 3.2. TriVol Rendering The encoder and decoder modules have transformed the sparse point cloud into a dense and continuous Feature TriVol. Therefore, the feature of any 3D location can be directly queried by trilinear interpolation in {V_x, V_y, V_z}. Finally, the rendered images can be obtained from the point cloud by following the feature querying and volume rendering pipeline of NeRF [26]. 3.2.1 Feature Querying The feature querying consists of point sampling along the casting ray and feature interpolation in the TriVol. Point sampling. Given the camera parameters {R, t, K}, we can calculate a random ray with camera center r_o ∈ R^3 and normalized direction r_d ∈ R^3; we adopt the same coarse-to-fine sampling strategy as NeRF [26] to collect the queried points x ∈ R^3 along the ray, as
x = r_o + z · r_d, z ∈ [z_n, z_f],   (4)
where z_n, z_f are the near and far depths of the ray. Querying. For the Feature TriVol {V_x, V_y, V_z} and a queried location x, we first utilize trilinear interpolation to calculate 3 feature vectors: V_x(x) ∈ R^F, V_y(x) ∈ R^F, and V_z(x) ∈ R^F, as shown in Fig. 2. Then, we concatenate them as the final feature F(x) for location x, as
F(x) = V_x(x) ⊕ V_y(x) ⊕ V_z(x),   (5)
where ⊕ is the concatenation operation. 3.2.2 Volume Rendering Implicit mapping. For the queried features of all points on the ray, we set a Multi-Layer Perceptron (MLP) as an implicit function g to map the interpolated feature F(x) to their densities σ ∈ R^+ and colors c ∈ R^3, with the view direction r_d as the condition, as
σ, c = g(F(x), r_d).   (6)
Rendering.
The final color of each pixel ĉ can be computed by accumulating the radiance on the ray through the pixel using volume density [24]. Suppose there are M points on the ray, such a volume rendering can be described as
ĉ = Σ_{i=1}^{M} T_i α_i c_i,  α_i = 1 − exp(−σ_i δ_i),  T_i = exp(−Σ_{j=1}^{i−1} σ_j δ_j),   (7)
where T_i represents volume transmittance, and δ_j is the distance between neighboring samples along the ray r.
Table 1. Compare different point cloud renderers in 3D representation, feature extraction, feature query, and rendering strategy.
Methods | 3D Representation | Feature Extraction | Feature Query | Rendering
NPBG++ [35] | Point Cloud | 2D UNet | - | Graphics & CNN
Dense 3D ConvNet | Grid Voxels | 3D UNet | Trilinear Interpolation | NeRF
Sparse 3D ConvNet | Sparse Voxels | 3D Sparse UNet | Ball Query | NeRF
Point-NeRF [46] | Point Cloud | MLP | KNN | NeRF
Ours | Triple Volumes (TriVol) | 3D UNet | Trilinear Interpolation | NeRF
3.3. Loss Function Our model is only supervised by the rendering loss, which is calculated by the mean square error between rendered colors and ground-truth colors, as
{ĉ_1, ..., ĉ_{H×W}} = R(P | R, t, K),  R = {D_x, D_y, D_z, g},  L = ||ĉ_i − c̄_i||_2^2,   (8)
where I = {ĉ_1, ..., ĉ_{H×W}} are the rendered colors, and {c̄_1, ..., c̄_{H×W}} are the ground-truth colors. 4. Experiments 4.1. Datasets We evaluate the effectiveness of our framework with the TriVol representation on object-level and scene-level datasets. For the object level, we use both synthesized and real-world scanned datasets, including ShapeNet [4] and Google Scanned Objects (GSO) [12]. ShapeNet is a richly-annotated and large-scale 3D synthesized dataset. There are about 51,300 unique 3D textured mesh models, and we choose the common Car category for evaluation. The GSO dataset contains over 1000 high-quality 3D-scanned household items, and we perform experiments on the category of shoe. The point clouds in object-level datasets can be obtained by 3D mesh sampling [21], and ground-truth rendered images are generated by Blender [18] under random camera poses. For the scene level, we conduct the evaluation on the ScanNet [9] dataset, which contains over 1500 indoor scenes. Each scene is constructed from an RGBD camera. We split the first 1,200 scenes as a training set and the rest as a testing set.
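Referring back to Eq. (7), the accumulation along a ray is standard NeRF-style alpha compositing and can be written in a few lines; the sketch below assumes the per-sample densities, colors, and inter-sample distances are already computed. Exactly the same routine applies whether the per-sample features come from dense voxels, sparse voxels, or the TriVol representation; only the feature query differs.

```python
import torch

def composite_ray(sigma, color, delta):
    """Accumulate radiance along one ray as in Eq. (7).

    sigma: (M,) densities, color: (M, 3) colors, delta: (M,) sample spacings.
    alpha_i = 1 - exp(-sigma_i * delta_i);  T_i = prod_{j<i} (1 - alpha_j).
    """
    alpha = 1.0 - torch.exp(-sigma * delta)                       # (M,) opacity per sample
    ones = alpha.new_ones(1)
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-10]), dim=0)[:-1]  # (M,) T_i
    weights = trans * alpha                                        # (M,)
    return (weights.unsqueeze(-1) * color).sum(dim=0)              # (3,) pixel color

pixel = composite_ray(torch.rand(64), torch.rand(64, 3), torch.full((64,), 0.03))
```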
Its feature querying is performed by a ball query. •ConvOnet [30] : ConvOnet converts the point cloud features into a triple-plane representation for 3D re-construction. We replace its occupancy prediction with NeRF
Hu_Density-Insensitive_Unsupervised_Domain_Adaption_on_3D_Object_Detection_CVPR_2023
Abstract 3D object detection from point clouds is crucial in safety-critical autonomous driving. Although many works have made great efforts and achieved significant progress on this task, most of them suffer from expensive annotation cost and poor transferability to unknown data due to the do-main gap. Recently, few works attempt to tackle the do-main gap in objects, but still fail to adapt to the gap of varying beam-densities between two domains, which is crit-ical to mitigate the characteristic differences of the LiDAR collectors. To this end, we make the attempt to propose a density-insensitive domain adaption framework to address the density-induced domain gap. In particular, we first in-troduce Random Beam Re-Sampling (RBRS) to enhance the robustness of 3D detectors trained on the source domain to the varying beam-density. Then, we take this pre-trained de-tector as the backbone model, and feed the unlabeled target domain data into our newly designed task-specific teacher-student framework for predicting its high-quality pseudo la-bels. To further adapt the property of density-insensitivity into the target domain, we feed the teacher and student branches with the same sample of different densities, and propose an Object Graph Alignment (OGA) module to con-struct two object-graphs between the two branches for en-forcing the consistency in both the attribute and relation of cross-density objects. Experimental results on three widely adopted 3D object detection datasets demonstrate that our proposed domain adaption method outperforms the state-of-the-art methods, especially over varying-density data. Code is available at https://github.com/WoodwindHu/DTS.
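The beam re-sampling idea in the abstract can be approximated by binning points into pseudo-beams by zenith angle and randomly keeping a subset of beams, as sketched below; beam interpolation (for densifying) is omitted, and the binning scheme and ratio are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def random_beam_downsample(points, n_beams=64, keep_ratio=0.5):
    """Toy beam re-sampling: bin points into pseudo-beams by zenith angle, then
    randomly keep a subset of beams so the detector sees varying vertical
    densities during training. points: (N, 3+) array with x, y, z first."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    zenith = np.arctan2(z, np.sqrt(x ** 2 + y ** 2))
    bins = np.linspace(zenith.min(), zenith.max() + 1e-6, n_beams + 1)
    beam_id = np.digitize(zenith, bins) - 1
    kept = np.random.choice(n_beams, size=int(n_beams * keep_ratio), replace=False)
    return points[np.isin(beam_id, kept)]

sparse_cloud = random_beam_downsample(np.random.randn(10000, 4))
```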
1. Introduction 3D object detection is a fundamental task in various real-world scenarios, such as autonomous driving [24, 35] and robot navigation [29], aiming to detect and localize traffic-related objects such as cars, pedestrians, and cyclists in 3D point clouds [16, 25, 26].
Figure 1. (a) The significant difference of beam densities among the Waymo, KITTI, and nuScenes datasets. The beam density D_ζ represents the number of beams per unit zenith angle. The beams are evenly distributed in nuScenes, while the density of beams in Waymo and KITTI increases as the zenith angle ζ increases, with the highest density near the horizontal direction. (b) Compared to previous works (SN [43] and ST3D [51]), our method is more effective in transferring the knowledge from low density to high density or high density to low density (N: nuScenes, K: KITTI, W: Waymo).
With the advent of deep learning, this task has obtained remarkable advances [24, 35–37, 49, 56, 60] in recent years, which however requires costly dense annotations of point clouds. Further, in real-world scenarios, upgrading LiDARs to other product models makes it time-consuming and labor-intensive to collect and annotate massive data for each kind of product, while it is reasonable to reuse labeled data from previous sensors. Also, the number of LiDAR points used in mass-produced robots and vehicles is usually fewer than that in large-scale public datasets [44]. To bridge the domain gap caused by different LiDAR beams, it is essential to develop methods that address these differences. However, the generalization ability of existing methods has been shown to be limited [43] when the 3D models trained on a specific dataset are directly applied to an unknown dataset collected with a different LiDAR, which prevents the wide applicability of 3D object detection in autonomous driving. To reduce the domain gap between different datasets, some works [13, 14, 27, 34, 43, 44, 47, 51, 55, 57] proposed unsupervised domain adaptation (UDA) methods to transfer knowledge from a labeled source domain to an unlabeled target domain. However, most of them focus on reducing
Although few works [44] attempt to downsample point clouds of high density and transfer its knowledge to the low-density domain, they are limited to the model design that cannot realize the knowledge trans-fer from a low-density domain to a high-density domain. Hence, it is demanded to train robust 3D feature representa-tions that can adapt to point cloud data of varying densities. To this end, we make the attempt to propose a novel Density-insensitive Teacher-Student (DTS) framework to address the domain gap induced by varying point densities and distributions. The key idea of DTS is to first pre-train a density-insensitive object detector on the source domain, and then employ a self-training strategy [20, 51, 58] to fine-tune this detector on the unlabeled target domain by itera-tively predicting and updating its pseudo results. However, there still remain two concerns: 1) Previous self-training methods may be prone to its mistake by using single-branch prediction. 2) How to adapt and improve the property of density-insensitivity of the pre-trained 3D detector on the target domain is important. Therefore, we introduce a task-specific teacher-student framework in order to provide more reliable and robust supervision, in which the teacher and student branches are fed with variants of the same sample in different densities. Further, considering the object pre-diction should be invariant in the two branches, we propose to capture their cross-density object-aware consistency for enhancing the density-insensitivity on the target domain. To be specific, we first introduce Random Beam Re-Sampling (RBRS) to train the density-invariant 3D object detector on the labeled source domain, by randomly mask-ing or interpolating the beams of the point clouds. Then, we take this pre-trained 3D detector as the backbone model to build a teacher-student framework to iteratively predict and update the pseudo labels on the unlabeled target do-main. To achieve the goal of density-insensitivity, we feed the student and teacher models with the RBRS-augmented sample and the original sample, respectively. Moreover, in order to enforce the consistency in attributes and relations of detected objects in the teacher and student branches for more reliable supervision, we construct two graphs based on the objects predicted from the teacher and student mod-els, and propose a novel Object Graph Alignment (OGA) to keep consistent cross-density object-attributes (node-level) and object-relations (edge-level) between the two graphs. During the training, the student model is optimized based on the predictions of the teacher while the weights of the teacher model are updated by taking the exponential mov-ing average of the weights of the student model. In this way, our DTS is effective in reducing the density-induced domain gap and achieving state-of-the-art performance on the unknown target data. In summary, our main contributions include • We propose a density-insensitive unsupervised domain adaption framework to alleviate the influence of the domain gap caused by varying density distributions. We develop beam re-sampling to randomize the den-sity of point clouds, which effectively enhances the ro-bustness of 3D object detection to varying densities. • We exploit a task-specific teacher-student framework to fine-tune the pre-trained 3D detector on the tar-get domain. 
To adapt and improve the density-insensitivity on the target domain, we introduce an ob-ject graph alignment module to keep the cross-density object-aware consistency. • Experimental results demonstrate our model signif-icantly outperforms the state-of-the-art methods on three widely adopted 3D object detection datasets in-cluding NuScenes [4], KITTI [11], and Waymo [39].
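For the teacher-student fine-tuning stage referenced in the contributions above, the teacher's weights are typically updated as an exponential moving average of the student, as sketched below; the object graph alignment terms are not shown, and the momentum value is an assumed default rather than the paper's reported setting.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Mean-teacher style update used in self-training:
    teacher <- m * teacher + (1 - m) * student.
    The student is optimized on the teacher's pseudo labels; only the weight
    update rule is shown here."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)
    for b_t, b_s in zip(teacher.buffers(), student.buffers()):
        b_t.copy_(b_s)   # keep BN statistics and counters in sync
```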
Hanspal_Efficient_Verification_of_Neural_Networks_Against_LVM-Based_Specifications_CVPR_2023
Abstract The deployment of perception systems based on neu-ral networks in safety critical applications requires assur-ance on their robustness. Deterministic guarantees on net-work robustness require formal verification. Standard ap-proaches for verifying robustness analyse invariance to an-alytically defined transformations, but not the diverse and ubiquitous changes involving object pose, scene viewpoint, occlusions, etc. To this end, we present an efficient ap-proach for verifying specifications definable using Latent Variable Models that capture such diverse changes. The ap-proach involves adding an invertible encoding head to the network to be verified, enabling the verification of latent space sets with minimal reconstruction overhead. We re-port verification experiments for three classes of proposed latent space specifications, each capturing different types of realistic input variations. Differently from previous work in this area, the proposed approach is relatively independent of input dimensionality and scales to a broad class of deep networks and real-world datasets by mitigating the ineffi-ciency and decoder expressivity dependence in the present state-of-the-art.
1. Introduction The deployment of perception systems based on neural networks in safety-critical applications requires assurance on their performance, notably accuracy and robustness. Formal verification contributes to this requirement by providing provable and deterministic guarantees that a network meets a given specification. Typically, specifications are mathematically expressed constraints on the network's intended input/output and may encode desirable properties, such as robustness to noise (including adversarial attacks) [35], geometric changes [1,2], bias-field changes [14] and beyond. While the above is useful, practical applications require robustness against diverse changes in a scene, including changes in the pose of objects, viewpoints, occlusions, etc. Such changes cannot be efficiently mathematically defined, but may be encoded from data by using generative models. For instance, [11, 12, 28, 34] use generative models to generate novel in-domain images for data augmentation, adversarial training or evaluating network generalisation; [34] additionally derives formal conditions for a latent space set to necessarily contain sufficient perturbations for it to be trusted for adversarial training and robustness checks. All these approaches either provide statistical robustness measures, or generate attacks based on gradient-search, which is not guaranteed to find an attack if one exists. Popular for network robustification and empirical evaluation, latent space sets are seldom used as inputs for verification due to the valid concern over the lack of mathematical guarantees on the completeness of the specifications they encode. Therefore, we reiterate that this work is most useful for changes that are difficult to mathematically define. We additionally argue that formal verification of latent space-based specifications can be more valuable than their empirical evaluation. This is because the latent space is a continuous domain and a countably infinite number of inputs can be mapped to and reconstructed from a latent space set. Therefore, no amount of testing, or search in the latent space, can provide guarantees against all the variations encoded in it. To the best of our knowledge, only [20,27] encode specifications in a latent space and propose architectures to verify them. There are, however, two difficulties with verification in the latent space. The first concerns the scalability of verification methods; the second relates to the quality of reconstructions affecting the verification outcomes. In this paper, we focus on alleviating these two concerns. Specifically, we propose a novel, invertible encoder-based pipeline for verifying latent space sets, that lends two key benefits: • Computational efficiency and relative independence to input dimensionality, • Verification outcomes' independence to reconstructions, and precise counterexamples with high recall. We focus our analysis on pose and attribute variations in vision inference tasks, but the approach is likely extendable to other variations, domains and tasks. Next, we recall key notions for network verification and discuss the existing relevant work, before presenting and validating our method in subsequent sections.
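One way to picture a latent-space specification is to compose a generative decoder with the classifier so that the property becomes an ordinary input-output constraint over a latent box, as in the hypothetical wrapper below. Note that this composition is exactly the decoder-dependent route the paper argues against in favor of an invertible encoding head; the sketch is only meant to make the notion of a latent-space set concrete, and all names are made up.

```python
import torch
import torch.nn as nn

class LatentSpecModel(nn.Module):
    """Wrap decoder + classifier so a latent-space specification
    ('for every z in a box around z0, the prediction stays class c')
    becomes an input-output property of a single network that a
    verifier could, in principle, analyse. Illustrative only."""

    def __init__(self, decoder: nn.Module, classifier: nn.Module):
        super().__init__()
        self.decoder = decoder
        self.classifier = classifier

    def forward(self, z):
        return self.classifier(self.decoder(z))

# Latent box specification around an encoding z0 with radius eps:
# verify that argmax LatentSpecModel(z) == c for all z with ||z - z0||_inf <= eps.
```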
Cui_Feature_Aggregated_Queries_for_Transformer-Based_Video_Object_Detectors_CVPR_2023
Abstract Video object detection must handle feature degradation that rarely occurs in the image domain. One solution is to use temporal information and fuse features from neighboring frames. With Transformer-based object detectors achieving better performance on image-domain tasks, recent works began to extend those methods to video object detection. However, existing Transformer-based video object detectors still follow the same pipeline as classical object detectors, e.g., enhancing the object feature representations by aggregation. In this work, we take a different perspective on video object detection: we improve the quality of the queries of Transformer-based models by aggregation. To achieve this goal, we first propose a vanilla query aggregation module that computes a weighted average of the queries according to the features of the neighboring frames. Then, we extend the vanilla module to a more practical version, which generates and aggregates queries according to the features of the input frames. Extensive experimental results validate the effectiveness of our proposed methods: on the challenging ImageNet VID benchmark, the current state-of-the-art Transformer-based object detectors can be improved by more than 2.4% on mAP and 4.2% on AP50 when integrated with our proposed modules. Code is available at https://github.com/YimingCuiCuiCui/FAQ.
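One plausible reading of the vanilla query aggregation step described above is sketched below: the object queries of neighboring frames are combined by a weighted average whose weights come from the similarity between frame-level features and those of the key frame. The function name, the cosine-similarity weighting and the tensor shapes are our own assumptions for illustration; the paper's module may weight and normalize differently.

```python
import torch
import torch.nn.functional as F

def vanilla_query_aggregation(queries, frame_feats, key_idx=0):
    """
    Hypothetical weighted averaging of per-frame object queries.
    queries:     (T, N, D) object queries for T frames, N queries of dim D each
    frame_feats: (T, C)    pooled frame-level features used to derive weights
    returns:     (N, D)    aggregated queries for the key frame's decoder
    """
    key = frame_feats[key_idx].unsqueeze(0)                  # (1, C)
    sim = F.cosine_similarity(frame_feats, key, dim=1)       # (T,)
    weights = torch.softmax(sim, dim=0)                      # (T,), sums to 1
    return torch.einsum('t,tnd->nd', weights, queries)       # weighted average

# toy usage
queries = torch.randn(5, 100, 256)       # 5 frames, 100 queries each
frame_feats = torch.randn(5, 256)
aggregated = vanilla_query_aggregation(queries, frame_feats)  # (100, 256)
```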
1. Introduction Object detection is an essential yet challenging task that aims to localize and categorize all the objects of interest in a given image [14,50,98]. With the development of deep learning, extraordinary progress has been achieved in static image object detection [3,14,22,42,47,71]. Existing object detectors can be mainly divided into three categories: two-stage [3,29,32,46,65], one-stage [47,52,57,62–64,72,73] and query-based models [4,27,56,66,71,103]. For better performance, two-stage models generate a set of proposals and then refine the prediction results, like the R-CNN families [15,26,32,65]. However, these two-stage object detectors usually suffer from low inference speed. Therefore, one-stage object detectors are introduced to balance efficiency and performance; they directly predict object locations and categories from the input image feature maps, like the YOLO series [62–64,69] and FCOS [72,73]. Recently, query-based object detectors have been introduced, which generate predictions from a series of input queries and do not require complicated post-processing pipelines like NMS [2,55,60]. Typical examples are the DETR series [4,56,66,103] (Figure 1(a)) and the Sparse R-CNN series [24,35,71].

With the existing approaches achieving better performance in the image domain, researchers began to extend these tasks to the video domain [10,41,67,75,83,85]. One of the most challenging issues in video object detection is handling the feature degradation caused by motion, which rarely appears in static images. Since videos provide informative temporal hints, post-processing-based video object detectors have been proposed [1,31,39,40,68]. As shown in Figure 1(c), these methods first apply image object detectors on every individual frame and then associate the prediction results. However, since the image object detectors and the post-processing pipelines are not optimized jointly, these models usually suffer from poor performance.

Besides post-processing methods, feature-aggregation-based models [6,13,30,34,38,82,100,104] have been introduced to improve the feature representations for video object detection. These approaches first compute a weighted average of the features from the neighboring frames and then feed the aggregated features into the task heads for the final prediction, as shown in Figure 1(b). The weighting is usually based on feature similarity [6,79,82,104,105] or learnable networks [13,34,100]. Since Transformer-based models perform better on image object detection, researchers have begun extending them to the video domain [34,76,100]. The TransVOD family [34,100] introduces a temporal Transformer into the original Deformable-DETR [103] to fuse both spatial and temporal information and handle the feature degradation issue. Similarly, PTSEFormer [76] adds progressive feature aggregation modules to current Transformer-based image object detectors to boost performance.

Figure 1. The differences between the existing works and ours. (a) Transformer-based object detectors. (b) Feature-aggregation-based video object detectors. (c) Post-processing-based video object detectors. (d) Ours. Previous works can be divided into feature-aggregation-based (b) and post-processing-based (c) models. For Transformer-based models, these works either enhance the features used for detection or the prediction results of each frame. In contrast, our method (d) aggregates the queries of Transformer-based object detection models to handle the feature degradation issue.
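For concreteness, a generic form of the similarity-based feature aggregation used by the prior models in Figure 1(b) is sketched below: per-position weights are derived from the cosine similarity between embeddings of the key frame's feature map and those of its neighbors, and the feature maps are then weighted-averaged before being passed to the task heads. This is a schematic composite of that line of work under our own naming and shape assumptions, not any single cited method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityFeatureAggregation(nn.Module):
    """Schematic per-pixel feature aggregation across frames (illustrative)."""
    def __init__(self, channels, embed_dim=64):
        super().__init__()
        self.embed = nn.Conv2d(channels, embed_dim, kernel_size=1)

    def forward(self, feats, key_idx=0):
        # feats: (T, C, H, W) feature maps of the key frame and its neighbors
        emb = F.normalize(self.embed(feats), dim=1)        # (T, E, H, W)
        key = emb[key_idx:key_idx + 1]                     # (1, E, H, W)
        sim = (emb * key).sum(dim=1)                       # (T, H, W) cosine similarity
        weights = torch.softmax(sim, dim=0).unsqueeze(1)   # (T, 1, H, W)
        return (weights * feats).sum(dim=0, keepdim=True)  # (1, C, H, W) for the task head

# toy usage: aggregate a key frame with 4 neighbors
agg = SimilarityFeatureAggregation(channels=256)
out = agg(torch.randn(5, 256, 32, 32))                     # (1, 256, 32, 32)
```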
Following the TransVOD series [34,100], we use Transformer-based object detectors as the baseline models in this work. Unlike the existing models, we take a deeper look at Transformer-based object detectors and examine the unique properties of their design. We notice that the queries of Transformer-based object detectors play an essential role in the final prediction performance. Therefore, different from existing works, which apply modules to aggregate features (Figure 1(b)) or the detection results of every single frame (Figure 1(c)), we introduce a module that aggregates the queries of the Transformer decoder, as shown in Figure 1(d). The existing TransVOD family [34,100] initializes the spatial and temporal queries randomly, regardless of the input frames, and aggregates them only after several Transformer layers. In contrast, our models focus on initializing the object queries from the input frames and enhancing their quality, so that Transformer-based approaches achieve better performance. By associating and aggregating the initialization of the queries with the input frames, our models achieve much better performance than the TransVOD family [34,100] and PTSEFormer [76]. Meanwhile, our methods can be integrated into most existing Transformer-based image object detectors to adapt them to video-domain tasks. Our contributions are summarized as follows:
• To the best of our knowledge, we are the first to focus on the initialization of queries and to aggregate them based on the input features for Transformer-based video object detectors, balancing model efficiency and performance.
• We design a vanilla query aggregation (VQA) module, which enhances the query representations of Transformer-based object detectors to improve their performance on video-domain tasks. We then extend it to a dynamic version, which adaptively generates the initialization of the queries and adjusts the weights for query aggregation according to the input frames (a sketch of this idea is given below).
• Our proposed method is a plug-and-play module that can be integrated into most recent state-of-the-art Transformer-based object detectors for video tasks. Evaluated on the ImageNet VID benchmark, video object detection performance improves by at least 2.0% on mAP when integrated with our proposed modules.
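The sketch below illustrates one way the dynamic version mentioned in the second contribution could look: per-frame query initializations are generated from pooled frame features instead of being learned as frame-agnostic embeddings, and the aggregation weights are likewise predicted from the frame features. The module name, the linear generators and the softmax weighting are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DynamicQueryAggregation(nn.Module):
    """Hypothetical dynamic query generation + aggregation (illustrative)."""
    def __init__(self, feat_dim, num_queries=100, query_dim=256):
        super().__init__()
        self.num_queries, self.query_dim = num_queries, query_dim
        self.query_gen = nn.Linear(feat_dim, num_queries * query_dim)   # per-frame query init
        self.weight_gen = nn.Sequential(                                # per-frame weight
            nn.Linear(2 * feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, frame_feats, key_idx=0):
        # frame_feats: (T, F) pooled features of the key frame and its neighbors
        T = frame_feats.shape[0]
        queries = self.query_gen(frame_feats).view(T, self.num_queries, self.query_dim)
        key = frame_feats[key_idx].expand(T, -1)                        # (T, F)
        logits = self.weight_gen(torch.cat([frame_feats, key], dim=1))  # (T, 1)
        weights = torch.softmax(logits, dim=0).unsqueeze(-1)            # (T, 1, 1)
        return (weights * queries).sum(dim=0)   # (num_queries, query_dim) decoder queries

# toy usage
module = DynamicQueryAggregation(feat_dim=512)
decoder_queries = module(torch.randn(5, 512))   # (100, 256)
```

In a full detector, such aggregated queries would take the place of the learned object queries fed to the Transformer decoder; this is a design choice we assume for illustration rather than a detail confirmed by the text above.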