Fields: title (string, 28–135 characters), abstract (string, 0–12k characters), introduction (string, 0–12k characters)
Dong_Fast_Monocular_Scene_Reconstruction_With_Global-Sparse_Local-Dense_Grids_CVPR_2023
Abstract Indoor scene reconstruction from monocular images has long been sought after by augmented reality and robotics developers. Recent advances in neural field representations and monocular priors have led to remarkable results in scene-level surface reconstructions. The reliance on Multilayer Perceptrons (MLPs), however, significantly limits speed in training and rendering. In this work, we propose to directly use a signed distance function (SDF) in sparse voxel block grids for fast and accurate scene reconstruction without MLPs. Our globally sparse and locally dense data structure exploits surfaces' spatial sparsity, enables cache-friendly queries, and allows direct extensions to multi-modal data such as color and semantic labels. To apply this representation to monocular scene reconstruction, we develop a scale calibration algorithm for fast geometric initialization from monocular depth priors. We apply differentiable volume rendering from this initialization to refine details with fast convergence. We also introduce efficient high-dimensional continuous Conditional Random Fields (CRFs) to further exploit the semantic-geometry consistency between scene objects. Experiments show that our approach is 10× faster in training and 100× faster in rendering while achieving comparable accuracy to state-of-the-art neural implicit methods.
1. Introduction
Reconstructing indoor spaces into 3D representations is a key requirement for many real-world applications, including robot navigation, immersive virtual/augmented reality experiences, and architectural design. Particularly useful is reconstruction from monocular cameras, which are the most prevalent and accessible to casual users. While much research has been devoted to this task, several challenges remain. Conventional monocular reconstruction from multi-view RGB images uses patch matching [34], which takes hours to reconstruct even a relatively small scene. Several 3D reconstruction methods [38, 46] have demonstrated fast reconstruction by applying 3D convolutional neural networks to feature volumes, but they have limited resolution and struggle to generalize to larger scenes.
Figure 1. Color and semantic scene reconstruction from our system with monocular images and learned monocular priors (semantic classes include wall, floor, cabinet, chair, sofa, table, door, window, picture, and curtain).
Recently, unified neural radiance fields [22] and neural implicit representations were developed for the purpose of accurate surface reconstruction from images [29, 43, 47]. While this was successfully demonstrated on single objects, the weak photometric constraint leads to poor reconstruction and slow convergence for large-scale scenes. Guo et al. [14] and Yu et al. [49] improved the quality and convergence speed of neural field reconstruction on large-scale scenes by incorporating learned geometric cues such as depth and normal estimates [11, 31]; however, training and evaluation remain inefficient. This is primarily because these approaches rely on MLPs and feature grids [23] that encode the entire scene rather than concentrating around surfaces. In contrast to MLPs, an explicit SDF voxel grid can be adaptively allocated around surfaces and allows fast query and sampling. However, an efficient implementation of differentiable SDF voxel grids without MLPs is missing. Fridovich-Keil and Yu et al. [12] used an explicit density and color grid, but it is limited to rendering small objects. Müller et al. [23] developed a feature grid with spatial hashing for fast neural rendering, but its backbone hash map is not collision-free, causing inevitably slow random access and inaccurate indexing at large scales. Dong et al. [10] proposed a collision-free spatially hashed grid following Nießner et al. [28], but it lacks support for differentiable rendering.
Figure 2. Qualitative reconstruction comparison on ScanNet [7] between (a) COLMAP [34], (b) NeRF [22], (c) VolSDF [47], (d) NeuS [43], (e) ManhattanSDF [14], (f) MonoSDF-MLP [49], (g) MonoSDF-Grid [49], and (h) ours. While being 10× faster in training, we achieve similar reconstruction results to state-of-the-art MonoSDF [49], with fine details (see Fig. 9).
Several practical challenges hinder the implementation of an efficient differentiable data structure: 1. a collision-free spatial hash map on GPU that supports one-to-one indexing from positions to voxels; 2. differentiable trilinear interpolation between spatially hashed voxels; 3. parallel ray marching and uniform sampling from a spatial hash map.
Our approach: we address these challenges using a differentiable globally sparse and locally dense voxel grid. We transform a collision-free GPU hash map [35] into a differentiable tensor indexer [30]. This generates a one-to-one map between positions and globally sparse voxel blocks around approximate surfaces, and enables skipping empty space for efficient ray marching and uniform sampling. We further manage locally dense voxel arrays within the sparse voxel blocks for GPU cache-friendly contiguous data queries via trilinear interpolation (a minimal sketch of this data structure follows the contribution list below). As a result, using explicit SDF grids leads to fast SDF gradient computation in a single forward pass, which can further accelerate differentiable rendering.
This new data structure presents a new challenge: we can only optimize grid parameters if they are allocated around surfaces. To resolve this, we make use of off-the-shelf monocular depth priors [11, 31] and design a novel initialization scheme with global structure-from-motion (SfM) constraints to calibrate these unscaled predicted depths. It results in a consistent geometric initialization via volumetric fusion, ready to be refined through differentiable volume rendering.
We additionally incorporate semantic monocular priors [17] to provide cues for geometric refinement in 3D. For instance, we use colors and semantics to guide the sharpening of normals around object boundaries, which in turn improves the quality of colors and semantics. We enforce these intuitive notions through our novel continuous Conditional Random Field (CRF). We use Monte Carlo samples on the SDF zero-crossings to create continuous CRF nodes and define pairwise energy functions to enforce local consistency of colors, normals, and semantics. Importantly, we define similarity in a high-dimensional space that consists of coordinates, colors, normals, and semantics, to reject spatially close samples with contrasting properties. To make inference tractable, we follow Krähenbühl et al. [16] and use variational inference, leading to a series of convolutions in a high-dimensional space. We implement an efficient permutohedral lattice convolution [1] using the collision-free GPU hash map to power the continuous CRF inference.
The final output of our system is a scene reconstruction with geometry, colors, and semantic labels, as shown in Fig. 1. Experiments show that our method is 10× faster in training, 100× faster in inference, and has comparable accuracy measured by F-scores against state-of-the-art implicit reconstruction systems [14, 49]. In summary, we propose a fast scene reconstruction system for monocular images. Our contributions include:
• A globally sparse, locally dense differentiable volumetric data structure that exploits surface spatial sparsity without an MLP;
• A scale calibration algorithm that produces consistent geometric initialization from unscaled monocular depths;
• A fast monocular scene reconstruction system equipped with volume rendering and high-dimensional continuous CRF optimization.
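To make the data-structure idea above concrete, here is a minimal NumPy sketch of a globally sparse, locally dense SDF grid. It is an illustration only: a Python dict stands in for the collision-free GPU hash map, and the class name `VoxelBlockGrid`, the block resolution, and the voxel size are assumptions for the example, not the authors' implementation or API.

```python
import numpy as np

class VoxelBlockGrid:
    """Globally sparse, locally dense SDF grid (illustrative sketch).

    A hash map (here a Python dict standing in for a collision-free GPU
    hash map) maps integer block coordinates to small dense voxel arrays,
    so memory is only spent near surfaces.
    """

    def __init__(self, voxel_size=0.02, block_res=8):
        self.voxel_size = voxel_size      # metric size of one voxel
        self.block_res = block_res        # voxels per block side (dense interior)
        self.blocks = {}                  # (bx, by, bz) -> (block_res+1)^3 SDF values

    def allocate(self, points):
        """Allocate blocks covering approximate surface points (e.g. fused depth)."""
        block_size = self.voxel_size * self.block_res
        keys = np.unique(np.floor(points / block_size).astype(int), axis=0)
        for k in map(tuple, keys):
            # +1 per side so trilinear interpolation at the block border has 8 neighbors
            self.blocks.setdefault(k, np.ones((self.block_res + 1,) * 3, dtype=np.float32))

    def query_sdf(self, p):
        """Trilinearly interpolated SDF at a 3D point, or None if unallocated."""
        v = p / self.voxel_size                       # voxel coordinates
        block = tuple(np.floor(v / self.block_res).astype(int))
        if block not in self.blocks:
            return None                               # empty space: ray marching can skip it
        local = v - np.array(block) * self.block_res  # position inside the block
        i = np.floor(local).astype(int)
        f = local - i                                 # fractional offsets in [0, 1)
        d = self.blocks[block]
        sdf = 0.0
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((1 - f[0]) if dx == 0 else f[0]) * \
                        ((1 - f[1]) if dy == 0 else f[1]) * \
                        ((1 - f[2]) if dz == 0 else f[2])
                    sdf += w * d[i[0] + dx, i[1] + dy, i[2] + dz]
        return sdf

grid = VoxelBlockGrid()
grid.allocate(np.random.rand(1000, 3))       # pretend these are fused surface points
print(grid.query_sdf(np.array([0.1, 0.2, 0.3])))
```

Unallocated blocks simply return nothing, which is what lets ray marching jump over empty space, and the trilinear weights are what make the stored SDF values differentiable targets for volume rendering.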
Dashpute_Thermal_Spread_Functions_TSF_Physics-Guided_Material_Classification_CVPR_2023
Abstract Robust and non-destructive material classification is a challenging but crucial first step in numerous vision applications. We propose a physics-guided material classification framework that relies on the thermal properties of the object. Our key observation is that the rate of heating and cooling of an object depends on the unique intrinsic properties of the material, namely the emissivity and diffusivity. We leverage this observation by gently heating the objects in the scene with a low-power laser for a fixed duration and then turning it off, while a thermal camera captures measurements during the heating and cooling process. We then take this spatial and temporal "thermal spread function" (TSF) and solve an inverse heat equation using a finite-differences approach, resulting in a spatially varying estimate of diffusivity and emissivity. These tuples are then used to train a classifier that produces a fine-grained material label at each spatial pixel. Our approach is extremely simple, requiring only a small light source (a low-power laser) and a thermal camera, and produces robust classification results with 86% accuracy over 16 classes. Code: https://github.com/aniketdashpute/TSF
1. Introduction
Material classification is an important task pertinent to a diverse set of fields including, but not limited to, medicine and biology [1], chip manufacturing, recycling [2, 3], land and weather monitoring using satellites, and vision and robotics. Robust material classification is particularly critical in separating various parts of an object based on their constituent materials [2, 3]. Common tools for material classification span a large spectrum, from simple tools such as infrared spectroscopy and hyperspectral imaging to more exotic tools such as ultrasound and x-ray fluorescent imagers. Material classification primarily relies on various dimensions of light, including bidirectional reflectance distribution function (BRDF) slices [4], color and NIR images [5], frequency- and depth-dependent ToF distortion [6], spectral imaging methods [7, 8], multi-modal methods [9], and thermal imaging [10].
Figure 1. Material classification with thermal properties. Materials have unique thermodynamics that enable robust classification. We propose a simple setup composed of (a) a 60 mW laser as a heat source and a thermal camera to capture the heat profile (inset). This results in (b) a stack of images we call the Thermal Spread Function (TSF), which encodes the heating and cooling effect around the laser dot. The TSF data is used to (c) estimate the diffusivity map and (d) the external heat source term using an inverse Finite Difference Method, which is then used to (e) classify the material robustly.
Methods based on RGB images are popular due to the availability of RGB cameras and large labeled datasets, but suffer from a lack of robustness. In contrast, spectrum-based imaging methods enable accurate classification but often require complex optical systems such as hyperspectral cameras, and are sensitive to external illumination conditions. Human perception of materials is often multi-modal, relying for example on touch and vision to accurately classify a material. The act of touching further involves a thermodynamic exchange that relies on the material composition of the object. A metallic object results in rapid conduction of heat, whereas non-metallic objects, such as ones made of wood, result in a slower transfer rate. Thus the intrinsic thermal properties provide an insight into material properties that is often missed or confused by vision alone. Previous work on contact-based ways of measuring conductivity, namely haptic sensing [11, 12] and haptic displays [13, 14], leveraged this idea by developing "an artificial fingertip". The drawback of this approach is that it is invasive: it requires touching the scene and can thus interfere with it. Thermal characterization for recycling has also been done using a spectrometer and a fluxmeter [15].
Thermal imaging methods enable contact-free estimation of thermal properties, thus allowing us to classify materials rapidly and in a non-destructive manner. One of the most popular contact-less methods for determining thermal diffusivity is the laser flash method.
A laser is flashed on a thin slice (microns thick) of a material and the temperature change is observed from the other side, providing a quantitative estimate of the thermal diffusivity or conductivity [16, 17]. This is restrictive due to the constrained lab setup and the requirement of thin slices. Thermal imaging has also been used for non-destructive infrastructure inspection, where the difference in thermal behaviour between unaltered and defective zones allows defect detection [18].
We take inspiration from the contact-less methods and develop a non-invasive thermal imaging system for material classification. As opposed to previous methods, our method is robust enough to be used in uncontrolled environments and is not limited to constrained lab setups. We use a visible laser beam as an external heat source that shines on a material, which absorbs a fraction of this beam at the optical wavelength. The absorption of this energy leads to a rise in temperature that shows up in the long-wave infrared (LWIR) domain and is captured by the thermal camera. The thermal camera is used to capture the heating process and, once the heat source is off, the cooling (refer to Fig. 1). We define the temperature transients obtained from this heating-cooling cycle as the Thermal Spread Function (TSF) and use it to robustly classify materials.
A key challenge with using the TSF to classify materials is that a thermal camera requires a known emissivity ε (the ratio of radiated energy of the object to that of a black body) to accurately estimate temperature. To overcome this ambiguity, we leverage a physically accurate heat diffusion equation (see Sec. 2) that carefully models the thermodynamic interactions between the ambient scene and the object. This estimated TSF is then used to train a material classifier, which enables robust material classification.
Figure 2. Heating and capturing process. We use an external heat source to heat the object and a thermal camera to observe the heating and cooling effect. Refer to Sec. 2 for a detailed explanation.
Our approach and main contributions. When objects are heated through radiation on their surface and allowed to cool down, they display characteristic temperature changes. These changes are based on their initial temperature, surface absorption, and heat diffusivity. We inject heat through a small portion of the surface of a material, which diffuses throughout the body over time. If we observe a small patch of material in the vicinity of the injection, we observe the diffusion, both during the injection phase and during the cooling phase when no external heat is supplied. We call this varying 2D temperature profile the Thermal Spread Function (TSF) of the material. We measure the TSF of the material with a long-wave infrared (LWIR) thermal camera. We derive diffusivity and an absorption factor from the TSF to characterize the material, as these properties are independent of the initial temperature of the object. Our main contributions are the following.
• We first derive a physically accurate model that characterizes the Thermal Spread Functions (TSFs) as a function of the initial temperature of the object and the object's thermodynamic properties.
• We then use a Finite Differences (FD) method to solve the inverse heat problem, recovering parameters related to diffusion, absorption, and emission (a toy forward-model sketch follows this list).
• Finally, we design and demonstrate a simple optical setup for non-invasively recovering the thermodynamic properties and using them to classify materials.
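As a rough illustration of the forward model behind the TSF described above, the following sketch integrates a 2D heat equation with an explicit finite-difference scheme: a Gaussian laser spot injects heat for a while and is then switched off, producing a heating-then-cooling image stack. All numbers (grid size, time step, diffusivity, absorption coefficient) and the function name `simulate_tsf` are illustrative assumptions, and the periodic boundaries via `np.roll` are a simplification; the paper's actual model additionally accounts for emissivity and exchange with the ambient scene.

```python
import numpy as np

def simulate_tsf(alpha, absorb, n=64, dx=1e-3, dt=1e-3, t_on=200, t_total=400, T0=295.0):
    """Toy forward model for a Thermal Spread Function.

    Explicit finite-difference solution of the 2D heat equation
        du/dt = alpha * laplacian(u) + absorb * f(x, y, t),
    with a Gaussian laser spot f active for the first `t_on` steps.
    alpha  : thermal diffusivity (m^2/s)
    absorb : effective absorption/source coefficient
    Returns a (t_total, n, n) temperature stack, i.e. the simulated TSF.
    """
    u = np.full((n, n), T0)
    yy, xx = np.mgrid[0:n, 0:n]
    spot = np.exp(-((xx - n / 2) ** 2 + (yy - n / 2) ** 2) / (2.0 * 2.0 ** 2))
    frames = np.empty((t_total, n, n))
    r = alpha * dt / dx ** 2                      # must stay < 0.25 for stability in 2D
    for t in range(t_total):
        # discrete Laplacian; np.roll gives periodic boundaries to keep the sketch short
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        src = absorb * spot if t < t_on else 0.0  # laser on, then off (cooling phase)
        u = u + r * lap + dt * src
        frames[t] = u
    return frames

tsf = simulate_tsf(alpha=1e-4, absorb=50.0)
print(tsf.shape, tsf.max())
```

Fitting a measured TSF then amounts to searching over the (diffusivity, absorption) pair so that the simulated stack matches the observed one, which is the inverse problem the FD contribution above refers to.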
Johari_ESLAM_Efficient_Dense_SLAM_System_Based_on_Hybrid_Representation_of_CVPR_2023
Abstract We present ESLAM, an efficient implicit neural representation method for Simultaneous Localization and Mapping (SLAM). ESLAM reads RGB-D frames with unknown camera poses in a sequential manner and incrementally reconstructs the scene representation while estimating the current camera position in the scene. We incorporate the latest advances in Neural Radiance Fields (NeRF) into a SLAM system, resulting in an efficient and accurate dense visual SLAM method. Our scene representation consists of multi-scale axis-aligned perpendicular feature planes and shallow decoders that, for each point in the continuous space, decode the interpolated features into Truncated Signed Distance Field (TSDF) and RGB values. Our extensive experiments on three standard datasets, Replica, ScanNet, and TUM RGB-D, show that ESLAM improves the accuracy of 3D reconstruction and camera localization of state-of-the-art dense visual SLAM methods by more than 50%, while it runs up to 10× faster and does not require any pre-training. Project page: https://www.idiap.ch/paper/eslam
1. Introduction
Dense visual Simultaneous Localization and Mapping (SLAM) is a fundamental challenge in 3D computer vision with several applications such as autonomous driving, robotics, and virtual/augmented reality. It is defined as constructing a 3D map of an unknown environment while simultaneously approximating the camera pose.
While traditional SLAM systems [16, 41, 45, 55, 76, 77] mostly focus on localization accuracy, recent learning-based dense visual SLAM methods [2, 11, 25, 35, 60, 64, 65, 67, 81, 86] provide meaningful global 3D maps and show reasonable but limited reconstruction accuracy. Following the advent of Neural Radiance Fields (NeRF) [37] and the demonstration of their capacity to reason about the geometry of a large-scale scene [8, 13, 20, 22, 26, 75, 78] and reconstruct 3D surfaces [1, 29, 47, 48, 62, 71, 72, 82, 85], novel NeRF-based dense SLAM methods have been developed. In particular, iMAP [59] and NICE-SLAM [87] utilize neural implicit networks to achieve a consistent geometry representation.
iMAP [59] represents the geometry with a single huge MLP, similar to NeRF [37], and optimizes the camera poses during the rendering process. NICE-SLAM [87] improves iMAP by storing the representation locally on voxel grids to prevent the forgetting problem. Despite promising reconstruction quality, these methods are computationally demanding for real-time applications, and their ability to capture geometry details is limited. In addition, NICE-SLAM [87] uses frozen pre-trained MLPs, which limits its generalizability to novel scenes. We take NICE-SLAM [87] as a baseline and provide the following contributions:
• We leverage an implicit Truncated Signed Distance Field (TSDF) [1] to represent geometry, which converges noticeably faster than common rendering-based representations like volume density [59] or occupancy [87] and results in higher-quality reconstruction.
• Instead of storing features on voxel grids, we propose employing multi-scale axis-aligned feature planes [6], which reduces the memory footprint growth rate w.r.t. scene side length from cubic to quadratic.
• We benchmark our method on three challenging datasets, Replica [57], ScanNet [12], and TUM RGB-D [58], to demonstrate its performance in comparison to existing methods, and provide an extensive ablation study to validate our design choices.
Thanks to the inherent smoothness of representing the scene with feature planes, our method produces higher-quality smooth surfaces without employing explicit smoothness loss functions like [70].
Concurrent with our work, the following also propose radiance-field-based SLAM systems: iDF-SLAM [38] also uses TSDF, but it is substantially slower and less accurate than NICE-SLAM [87]. Orbeez-SLAM [10] operates in real time at the cost of poor 3D reconstruction. Compromising accuracy and quality, MeSLAM [27] introduces a memory-efficient SLAM. MonoNeuralFusion [88] proposes an incremental 3D reconstruction model, assuming that ground-truth camera poses are available. Lastly, NeRF-SLAM [54] presents a monocular SLAM system with hierarchical volumetric Neural Radiance Fields optimized using an uncertainty-based depth loss.
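The feature-plane contribution above can be illustrated with a small PyTorch sketch: a point's features are bilinearly interpolated from three axis-aligned planes and decoded by a shallow MLP into a TSDF value, so storage grows with the square rather than the cube of the scene side length. The resolution, feature width, single-scale setup, and the way the plane features are combined are assumptions for the example; ESLAM's actual multi-scale design and decoders differ in detail.

```python
import torch
import torch.nn.functional as F

class TriPlaneField(torch.nn.Module):
    """Axis-aligned feature planes + shallow decoder (illustrative sketch).

    Each 3D point is projected onto the xy, xz and yz planes, plane features
    are bilinearly interpolated and summed, and a small MLP decodes them to a
    TSDF value.  Storage is 3 * R^2 * C instead of R^3 * C for a dense grid.
    """

    def __init__(self, res=128, feat_dim=16):
        super().__init__()
        self.planes = torch.nn.Parameter(0.01 * torch.randn(3, feat_dim, res, res))
        self.decoder = torch.nn.Sequential(
            torch.nn.Linear(feat_dim, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))

    def forward(self, xyz):                              # xyz: (N, 3) in [-1, 1]^3
        feats = 0.0
        for plane, dims in zip(self.planes, ([0, 1], [0, 2], [1, 2])):
            uv = xyz[:, dims].view(1, -1, 1, 2)          # (1, N, 1, 2) sample coords
            sampled = F.grid_sample(plane[None], uv,     # bilinear lookup on the plane
                                    mode="bilinear", align_corners=True)
            feats = feats + sampled.squeeze(0).squeeze(-1).t()   # (N, feat_dim)
        return self.decoder(feats).squeeze(-1)           # (N,) predicted TSDF

field = TriPlaneField()
points = torch.rand(4096, 3) * 2 - 1
print(field(points).shape)                               # torch.Size([4096])
```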
Gan_CNVid-3.5M_Build_Filter_and_Pre-Train_the_Large-Scale_Public_Chinese_Video-Text_CVPR_2023
Abstract Owing to well-designed large-scale video-text datasets, recent years have witnessed tremendous progress in video-text pre-training. However, existing large-scale video-text datasets are mostly English-only. Though there are certain methods studying Chinese video-text pre-training, they pre-train their models on private datasets whose videos and text are unavailable. This lack of large-scale public datasets and benchmarks in Chinese hampers the research and downstream applications of Chinese video-text pre-training. Towards this end, we release and benchmark CNVid-3.5M, a large-scale public cross-modal dataset containing over 3.5M Chinese video-text pairs. We summarize our contributions with three verbs, i.e., "Build", "Filter", and "Pre-train": 1) To build a public Chinese video-text dataset, we collect over 4.5M videos from Chinese websites. 2) To improve the data quality, we propose a novel method to filter out 1M weakly-paired videos, resulting in the CNVid-3.5M dataset. And 3) we benchmark CNVid-3.5M with three mainstream pixel-level pre-training architectures. At last, we propose the Hard Sample Curriculum Learning strategy to promote the pre-training performance. To the best of our knowledge, CNVid-3.5M is the largest public video-text dataset in Chinese, and we provide the first pixel-level benchmarks for Chinese video-text pre-training. The dataset, codebase, and pre-trained models are available at https://github.com/CNVid/CNVid-3.5M.
1. Introduction
Owing to well-designed large-scale datasets, video-text pre-training [15, 17, 19] has achieved superior performance in various downstream tasks, such as video-text retrieval [4, 10, 36], video question answering [27, 34, 42], and video captioning [1, 22, 30]. However, recent large-scale video-text datasets are mostly English-only (e.g., HowTo100M [25] and WebVid-2.5M [4]). Though some methods [14, 26, 45] turn to studying Chinese video-text pre-training, they pre-train their models on private datasets whose videos and text are unavailable. Therefore, research on Chinese video-text pre-training is still in its infancy due to the lack of large-scale public datasets.
Figure 1. The motivations of this paper, from which we summarize our contributions with three verbs: "Build" (there are no large public Chinese video-text datasets, so we build and release one to fill this blank), "Filter" (some videos have weak vision-text consistency, so we filter the noisy data with a novel method), and "Pre-train" (we pre-train various benchmarks for practical applications to choose from).
Towards this problem, directly translating English text into Chinese is a simple solution. However, it may result in unacceptable performance degradation for two reasons: 1) Translation errors are inevitable. Moreover, since most large-scale video-text datasets employ an Automatic Speech Recognition (ASR) system to generate text, the language translator would amplify the errors from the incomplete and noisy ASR text. And 2) there remains an intrinsic linguistic gap between English and Chinese. Many widely-used English idioms and slang can hardly find their Chinese counterparts, leaving some translated text incomprehensible or even contrary to the original meaning.
In this paper, we aim to release and benchmark a large-scale public Chinese video-text dataset to facilitate future researchers and the community. As illustrated in Figure 1, three verbs summarize our contributions: "Build", "Filter", and "Pre-train".
To build a large-scale Chinese video-text dataset, we collect over 4.5M videos from Chinese websites. All videos are associated with user-uploaded titles and ASR text.
We filter out the weakly-paired data with a novel method to improve the data quality. As some works [25, 26] have pointed out, pre-training performance suffers from noisy ASR text that fails to accurately describe the video content. Unfortunately, the problem has been raised with few practical solutions. Therefore, we employ a well-trained image-text model to evaluate video-text consistency, for three reasons: 1) The text in existing image-text datasets [11] usually consists of manually written titles or captions, whose consistency is guaranteed. 2) Some video-text pre-training architectures [23, 35] are built upon image-text ones. And 3) it is cheap and efficient to "hire" a well-trained model to check millions of videos.
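A minimal sketch of this consistency-based filtering: score each video by how well its sampled frames match its title/ASR text under a pre-trained image-text matching model, then drop the lowest-scoring fraction. The helper `image_text_score`, the frame-sampling scheme, and the keep ratio (roughly 3.5M of 4.5M) are placeholders, not the paper's exact choices.

```python
import numpy as np

def filter_weak_pairs(videos, image_text_score, keep_ratio=0.78):
    """Drop weakly-paired video-text samples (illustrative sketch).

    `videos` is a list of dicts with "frames" (a few sampled RGB frames) and
    "text" (title + ASR text).  `image_text_score(frame, text)` is any
    pre-trained image-text matching model returning a similarity score; the
    concrete model and the keep ratio here are assumptions.  A video's
    consistency is the mean frame-text score, and the lowest-scoring videos
    are removed.
    """
    scores = np.array([
        np.mean([image_text_score(f, v["text"]) for f in v["frames"]])
        for v in videos
    ])
    threshold = np.quantile(scores, 1.0 - keep_ratio)
    return [v for v, s in zip(videos, scores) if s >= threshold]
```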
In this way, we filter out about 1M weakly-paired videos, balancing pre-training performance and efficiency, and derive the proposed CNVid-3.5M dataset.
We pre-train various models to benchmark our CNVid-3.5M dataset. Current video-text pre-training methods can be roughly divided into two categories: 1) feature-level pre-training methods [24, 33, 40] that employ offline video and textual feature extractors, and 2) pixel-level ones [4, 15, 36] that learn cross-modal representations end-to-end from raw videos and text. Since there remain domain gaps between pre-training datasets and frozen feature extractors, pixel-level pre-training methods usually achieve better performance and have been widely employed in recent years. However, existing Chinese video-text pre-training methods [14, 26, 45] are all feature-level ones pre-trained on private datasets, limiting their contribution to the development of Chinese video-text pre-training techniques. Hence, we adopt three mainstream pixel-level pre-training frameworks, which constitute the first pixel-level benchmarks for Chinese video-text pre-training.
Moreover, we propose the novel Hard Sample Curriculum Learning strategy to promote pre-training performance. Since contrastive learning is a significant component of video-text pre-training, some methods [16, 18, 43] employ a hard sample mining [12, 29] strategy to promote cross-modal alignment. However, hard sample mining brings side effects to pre-training when the model is far from convergence: if a model is incapable of discriminating the ground-truth video-text pairs, recklessly introducing hard negatives leads to sub-optimal performance. Inspired by the curriculum learning [32, 37] strategy that "starts small" and gradually "learns hard", we combine these two strategies and propose the novel Hard Sample Curriculum Learning (HSCL). By gradually and smoothly emphasizing hard samples, HSCL effectively improves pre-training performance (a schematic loss sketch follows the contribution list below).
Our contributions are four-fold:
• To fill in the blank of large-scale public Chinese video-text datasets, we collect over 4.5M videos associated with titles and ASR text from the web.
• To improve the data quality, we propose a novel method to filter out 1M weakly-paired videos, resulting in the CNVid-3.5M dataset.
• To promote the pre-training performance, we propose the novel Hard Sample Curriculum Learning strategy for better cross-modal contrastive learning.
• To the best of our knowledge, the constructed CNVid-3.5M is the largest public Chinese video-text dataset. Moreover, we provide the first Chinese pixel-level benchmarks based on CNVid-3.5M. The dataset, codebase, and benchmarks are available at https://github.com/CNVid/CNVid-3.5M.
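One way to read "gradually and smoothly emphasizing hard samples" is as a training-progress-dependent re-weighting of the negatives in a symmetric InfoNCE loss. The sketch below is a guess at such a formulation, not the paper's actual loss: the linear schedule, the `max_boost` factor, and the log-weight trick inside the softmax are all assumptions.

```python
import torch
import torch.nn.functional as F

def hscl_contrastive_loss(video_emb, text_emb, progress, temperature=0.07, max_boost=4.0):
    """InfoNCE-style loss with hard-sample curriculum weighting (illustrative sketch).

    `progress` in [0, 1] is the fraction of training completed.  Early on this
    reduces to the plain symmetric contrastive loss; as training progresses,
    negatives that are most similar to the anchor (hard negatives) are
    up-weighted more and more.
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.t() / temperature                      # (B, B) similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)

    with torch.no_grad():
        sim = (v @ t.t()).clamp(min=0)
        sim.fill_diagonal_(0)                             # positives are not re-weighted
        boost = 1.0 + progress * max_boost * sim          # harder negative -> larger weight
        boost.fill_diagonal_(1.0)

    weighted = logits + torch.log(boost)                  # up-weights negatives in the softmax
    return 0.5 * (F.cross_entropy(weighted, labels) +
                  F.cross_entropy(weighted.t(), labels))
```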
Chen_iQuery_Instruments_As_Queries_for_Audio-Visual_Sound_Separation_CVPR_2023
Abstract Current audio-visual separation methods share a standard architecture design where an audio encoder-decoder network is fused with visual encoding features at the encoder bottleneck. This design confounds the learning of multi-modal feature encoding with robust sound decoding for audio separation. To generalize to a new instrument, one must fine-tune the entire visual and audio network for all musical instruments. We re-formulate the visual-sound separation task and propose Instruments as Queries (iQuery) with a flexible query expansion mechanism. Our approach ensures cross-modal consistency and cross-instrument disentanglement. We utilize "visually named" queries to initiate the learning of audio queries and use cross-modal attention to remove potential sound source interference at the estimated waveforms. To generalize to a new instrument or event class, drawing inspiration from the text-prompt design, we insert additional queries as audio prompts while freezing the attention mechanism. Experimental results on three benchmarks demonstrate that our iQuery improves audio-visual sound source separation performance. Code is available at https://github.com/JiabenChen/iQuery.
1. Introduction
Humans use multi-modal perception to understand complex activities. To mimic this skill, researchers have studied audio-visual learning [3, 17, 33] by exploiting the synchronization and correlation between auditory and visual information. In this paper, we focus on the sound source separation task, where we aim to identify and separate different sound components within a given sound mixture [60, 74]. Following the "Mix-and-Separate" framework [32, 34, 81], we learn to separate sounds by mixing multiple audio signals to generate an artificially complex auditory representation and then using it as a self-supervised task to separate individual sounds from the mixture. The works [26, 53, 89] showed that visually-guided sound separation is achievable by leveraging visual information about the sound source.
Prevalent architectures take the paradigm of a visual-conditioned encoder-decoder architecture [23, 26, 58, 88], where encoded features from the audio and visual modalities are fused at the bottleneck for decoding to yield separated spectrogram masks. However, this design often creates a "muddy" sound and "cross-talk" that leaks from one instrument to another. To create a clean sound separation, one would like the audio-visual encoders to be (1) self-consistent within a musical instrument and (2) contrasting across instruments. One approach [27] added critic functions explicitly to enforce these properties. Another method [99] used a two-step process whose second, motion-conditioned generation step filters out unwanted cross-talk. We call these approaches decoder-centric.
Most recent works focus on addressing the "muddy" and "cross-talk" issue by improving fine details of audio-visual feature extraction: for example, adding human motion encoding as in [23, 88, 99], or cross-modality representations [58] via self-supervised learning. Once the feature representations are learned, the standard encoder-decoder FCN-style segmentation is used as an afterthought. We consider these methods feature-centric. The standard designs have two limitations. First, it is hard to balance decoder-centric and feature-centric approaches that enforce a common goal of cross-modality consistency and cross-instrument contrast. Second, to learn a new musical instrument, one has to retrain the entire network via self-supervision.
To tackle these limitations, we propose a query-based sound separation framework, iQuery. We recast this problem from a query-based transformer segmentation view, where each query learns to segment one instrument, similar to visual segmentation [15, 16, 65, 78]. We treat each audio query as a learnable prototype that parametrically models one sound class. We fuse the visual modality with audio by "visually naming" the audio query: using object detection to assign visual features to the corresponding audio query. Within the transformer decoder, the visually initialized queries interact with the audio features through cross-attention, thus ensuring cross-modality consistency. Self-attention across the audio queries for different instruments implements a soft version of the cross-instrument contrast objective.
Figure 1. Pipeline of iQuery. Our system takes as input an audio mixture and its corresponding video frames, and disentangles separated sound sources for each video. The pipeline consists of two main modules: an Audio-Visual Feature Extraction module, which extracts audio, object, and motion features through three corresponding encoders, and an Audio-Visual Transformer module for sound separation. The query-based sound separation transformer has three key components: 1) "visually-named" audio queries initialized by extracted object features, 2) cross-attention between the audio queries and static image features, dynamic motion features, and audio features, and 3) self-attention between the learned audio queries to ensure cross-instrument contrast.
With this design, we unify the feature-centric with the decoder-centric approach. How do we achieve generalizability? Motivated by recent success in fine-tuning domain transfer with text-prompt [28] and visual-prompt designs [7, 35, 41, 86], we adaptively insert additional queries as audio prompts to accommodate new instruments. With the audio-prompt design, we freeze most of the transformer network parameters and only fine-tune the newly added query embedding layer. We conjecture that the learned prototype queries are instrument-dependent, while the cross/self-attention mechanism in the transformer is instrument-independent. Our main contributions are:
• To the best of our knowledge, we are the first to study the audio-visual sound separation problem from a tunable query view, disentangling different sound sources explicitly through learnable audio prototypes in a mask transformer architecture.
• To generalize to a new sound class, we design an audio prompt for fine-tuning with most of the transformer architecture frozen.
• Extensive experiments and ablations verify the effectiveness of our core designs for disentanglement, demonstrating performance gains for audio-visual sound source separation on three benchmarks.
2. Related work
Audio-Visual Sound Source Separation. Recent years have witnessed promising results in audio-visual multi-modality joint learning [49, 62, 67, 75, 83] in domains like audio-visual sound source localization [4, 5, 14, 36, 55, 61, 63, 93], audio-visual event localization [68, 76, 77, 95], and sound synthesis from videos [25, 52, 54, 80, 97]. Sound source separation, a challenging classical problem, has been researched extensively in the audio signal processing area [11, 22, 37, 40]. A well-known example is the cocktail party problem [31, 48] in the speech domain [1, 21]. Works have been proposed recently for tasks like speech separation [2, 27, 39, 51, 70], active sound separation [45, 46], and on-screen sound separation [25, 53, 71, 72]. Our work focuses on audio-visual sound separation. Recent audio-visual sound separation methods can generally be classified into two categories, feature-centric and decoder-centric, as discussed in Sec. 1.
Feature-centric methods exploit various ways of visual feature extraction and selection to aid this multi-modality task.
Some works consider frame-based appearance features (static frame features [24, 79, 89] or detected object regions [26, 66]) for extracting visual semantic cues (e.g., instrument categories) to guide sound separation. [12, 13] add embeddings from an audio-visual scene graph at the U-Net bottleneck to model the visual context of sound sources.
Figure 2. Qualitative results on the MUSIC test set (Accordion + Violin, Saxophone + Acoustic Guitar). The first column shows the mixed video frames, the second to fourth columns compare our predicted spectrogram masks against masks yielded by the state-of-the-art algorithm [66] and the ground-truth masks, and the fifth to seventh columns visualize the separated spectrograms. [66] produces blurry masks that contain unseparated components from another sound source, while our system generates accurate masks and spectrograms as clean as the ground truth.
Based on the assessment that motion signals could more tightly couple the moving sounding object with the corresponding variations of sounds, recent approaches focus on including motion information in the pipeline (e.g., optical flow [88] and human pose [23, 58]). Building on this, [94] proposes a framework to search for the optimal fusion strategy for multi-modal features. Decoder-centric methods explore the prevention of "cross-talk" between the audio sources at the decoder stage. [99] designs a two-stage pipeline, where the second stage conducts a counterfactual synthesis through motion features to remove potentially leaked sound. The approach of [27] added critic functions explicitly to enforce cross-modal consistency and cross-instrument contrast.
Vision Transformers. Motivated by the transformer's success in natural language processing [73], transformers were first introduced to computer vision for image classification as ViT [20]. Given their superior long-range modeling capacity, many follow-up works [47, 69, 82] have upgraded ViT to achieve higher performance and have widely surpassed convolutional neural networks. Further, transformer-based models have been adopted for various downstream tasks, such as 2D object detection [9, 91, 100], semantic/instance segmentation [65, 78, 92], 3D object detection [50, 85], shape recognition [84, 90], and video understanding [6, 42]. In particular, following the pipeline of DETR [9], MaskFormer [16] and Mask2Former [15] represent each mask candidate as a learnable query and conduct parallel decoding for instance-level segmentation. However, only a few approaches [39, 58, 71, 72, 99] have extended transformers to audio-visual sound separation. [58] adopts a BERT [18] architecture to learn visual, pose, and audio feature representations. [99] designs an audio-motion transformer to refine sound separation results through audio-motion feature fusion. These methods focus mainly on learning better contextualized multi-modality representations through an encoder transformer. In contrast, our mask-transformer-based network addresses the entire visual-audio separation task: we disentangle different sound sources through independent learnable query prototypes and segment each time-frequency region of the spectrogram via mask prediction in an end-to-end fashion.
3. Method
We first describe the formulation of the audio-visual sound separation task and briefly introduce our pipeline, iQuery, in Sec. 3.1.
Then we introduce the networks for learning representations from the visual and audio modalities in Sec. 3.2, and our proposed cross-modality cross-attention transformer architecture for visual sound separation in Sec. 3.3. Finally, we introduce our adaptive query fine-tuning strategy, enabled by the design of flexible tunable queries, in Sec. 3.4.
3.1. Overview
As mentioned before, our goal is to disentangle the audio mixture with respect to its corresponding sound sources by using so-called queries. Following previous works [21, 89], we adopt the commonly used "Mix-and-Separate" self-supervised source separation procedure. Given $K$ video clips with accompanying audio signals $\{(V_k, s_k(t))\}_{k \in [1, K]}$, we create a sound mixture $s_{\mathrm{mix}}(t) = \sum_{k=1}^{K} s_k(t)$ as training data. Our disentanglement goal is to separate sounds
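A small sketch may help make the Mix-and-Separate setup above concrete: single-source clips are summed into a mixture, and per-source spectrogram masks on that mixture serve as training targets for the query-based decoder. The ratio-mask target used here is one common choice, not necessarily iQuery's exact supervision, and the STFT parameters are placeholders.

```python
import torch

def mix_and_separate_batch(waveforms, stft):
    """Build a "Mix-and-Separate" training sample (illustrative sketch).

    `waveforms` is a list of K single-source audio tensors s_k(t); the mixture
    is s_mix(t) = sum_k s_k(t).  The supervision target for source k is the
    ratio mask |S_k| / (sum_j |S_j|) on the mixture spectrogram, which a
    query-based decoder (one learnable query per instrument) is trained to
    predict.
    """
    mixture = torch.stack(waveforms).sum(dim=0)
    spec_mix = stft(mixture).abs()                             # (F, T) mixture magnitude
    specs = torch.stack([stft(s).abs() for s in waveforms])    # (K, F, T)
    masks = specs / specs.sum(dim=0, keepdim=True).clamp_min(1e-8)
    return mixture, spec_mix, masks                            # masks are training targets

stft = lambda x: torch.stft(x, n_fft=1022, hop_length=256,
                            window=torch.hann_window(1022), return_complex=True)
sources = [torch.randn(65536), torch.randn(65536)]             # two stand-in instrument tracks
mixture, spec_mix, masks = mix_and_separate_batch(sources, stft)
print(spec_mix.shape, masks.shape)
```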
Cho_Look_Around_for_Anomalies_Weakly-Supervised_Anomaly_Detection_via_Context-Motion_Relational_CVPR_2023
Abstract Weakly-supervised Video Anomaly Detection is the task of detecting frame-level anomalies using video-level labeled training data. It is difficult to explore class-representative features using the minimal supervision of weak labels with a single backbone branch. Furthermore, in real-world scenarios, the boundary between normal and abnormal is ambiguous and varies depending on the situation. For example, even for the same motion of a running person, the abnormality varies depending on whether the surroundings are a playground or a roadway. Therefore, our aim is to extract discriminative features by widening the relative gap between the classes' features from a single branch. In the proposed Class-Activate Feature Learning (CLAV), features are extracted according to weights that are implicitly activated depending on the class, and the gap is then enlarged through relative distance learning. Furthermore, as the relationship between context and motion is important in order to identify anomalies in complex and diverse scenes, we propose a Context-Motion Interrelation Module (CoMo), which models the relationship between the appearance of the surroundings and motion, rather than utilizing only temporal dependencies or motion information. The proposed method shows SOTA performance on four benchmarks including large-scale real-world datasets, and we demonstrate the importance of relational information by analyzing the qualitative results and generalization ability.
1. Introduction
Video anomaly detection (VAD) in surveillance systems refers to the identification of undefined, unusual, or unseen abnormal events (e.g., traffic accidents, robberies, and other unforeseeable events) from amongst normal situations over temporal intervals. Currently, numerous CCTVs installed in public places such as banks, streets, and buildings record our daily life and play an important role in public safety. However, because it is time-consuming and laborious for humans to pinpoint anomalies in petabytes of surveillance video or to monitor constantly, the VAD task, which provides automatic and instantaneous responses, is a hot topic in the field of deep learning [5, 26].
Figure 1. Concept of the proposed method. We extract discriminative features that (a) are activated according to normal or abnormal classes, and (b) enlarge their gaps using relative distance learning. Furthermore, by projecting features into an interaction space, we (c) explore relationships between the context and motion information of the scene. For detecting anomalies, the proposed method considers not only motion but also its relationship with the context. For example, (d) shows a normal video with physical fighting in a basketball game, while (e) shows an abnormal fighting video. The red highlighted ranges are ground-truth abnormal frames, and ours (red line) accurately detects anomalies without false alarms.
Weakly-supervised VAD (WVAD) utilizes minimal knowledge about abnormal events through video-level labeled training data that only states whether an abnormal event exists in each video clip or not. WVAD faces several challenges. First, it is difficult for the network to learn to classify anomalies at the frame level from weakly labeled training data. Therefore, most WVAD methods [13, 20, 31, 35] learn through a Multiple Instance Learning (MIL)-based approach. When normal and abnormal video clips are divided into multiple snippets and collected into a negative and a positive bag, respectively, there is at least one abnormal snippet in the positive bag. Therefore, the MIL approach assumes that the highest abnormality score in the positive bag derives from the abnormal snippet and forces it to be 1, while the highest score in the negative bag is forced to 0. However, given that 1) the boundary between normal and abnormal is ambiguous in the real world, there is a limit to regression learning that forces the predicted scores of snippets to fixed values. Tian et al. [33] and Wu et al. [37] enforced the gap between classes through feature learning by enlarging the feature magnitude and adjusting the distance of features to a center feature, respectively. However, 2) it is difficult to extract the discrepancy of features from a single-branch model for enlarging the gap (shown in Fig. 7).
Another challenging issue neglected in previous studies is that, in real-world scenarios with complex and diverse scenes, the definition of an 'abnormal event' can differ depending on the relationship between context and motion. Zhu et al. [47] extracted appearance-invariant features by utilizing only optical flow data to focus on moving parts, while [24, 33, 42] focused on temporal dependencies to consider multi-scale temporal information. However, 3) focusing only on motion or temporal information, and even excluding appearance information, leads to an incomplete understanding of complex scenes.
In complex scenes, the boundary between normal and abnormal is ambiguous, and the distinction sometimes differs depending on the situation. That is, rather than having a fixed explicit prior on the abnormal class, it is necessary to implicitly learn class-representative features by relatively comparing the classes. Furthermore, abnormal events occurring in the real world vary depending on the relationship between context and motion. For example, in Fig. 1, (d) a physical skirmish during a basketball game is a normal and acceptable event, but (e) a physical fight on the street is an abnormal event. Thus, the same motion has a different class depending on the relationship between the motion and the surroundings or appearance. Therefore, our motivation is to extract class-activated features by considering the relative boundary between classes and to understand the reciprocal relationship between context and motion information.
To overcome the aforementioned challenges, we propose distance learning that adjusts the interval between normal and abnormal through 1) relative feature distances rather than individual values such as magnitude or score. This adjusts the relative distance between hard-negative normal samples and abnormal samples based on the intra-class variance of normal samples. In addition, 2) Class-Activate Feature Learning (CLAV) is proposed with an add-on Implicit Class-Activate (ICA) module to implicitly activate representative features from a single branch for each class, with a Class-Specific (CS) loss function as an auxiliary task to explore each normal or abnormal pattern. Furthermore, for the first time in WVAD, we address the importance of the relationship between static and dynamic information and propose 3) a Context-Motion Interrelation Module (CoMo) that has a dynamic path and a context path, focusing on motion and appearance respectively, to model the relationship between these two sources of information. Each feature is projected from the temporal space into an interaction space, and correlation propagation is performed by a graph convolution module. As shown in Fig. 1, (a) the CLAV features enlarge the gap via (b) distance learning and explore relational information through (c) CoMo; the method raises no false alarm in (d) the basketball-game scene with physical fighting and shows accurate temporal localization in (e) the abnormal scene with fighting. We evaluate and discuss the effectiveness of the proposed method on four weakly-labeled benchmarks, including the large-scale real-world datasets UCF-Crime [31] and XD-Violence [38], and it shows SOTA results.
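To make the MIL setup and the relative-distance idea above concrete, here is a small PyTorch sketch. The top-k MIL objective is the standard weakly-supervised formulation summarized earlier; the `relative_distance_loss` is only one plausible reading of "adjusting the relative distance between hard-negative normal samples and abnormal samples based on the intra-class variance of normal samples", not the paper's exact loss, and the margin and k are placeholders.

```python
import torch
import torch.nn.functional as F

def mil_ranking_loss(scores_abnormal, scores_normal, k=3):
    """Top-k Multiple Instance Learning loss (illustrative sketch).

    `scores_*` are per-snippet anomaly scores (B, T), assumed to be sigmoid
    outputs in [0, 1], for video-level labeled bags.  The k highest scores in
    an abnormal bag are pushed toward 1, the k highest in a normal bag toward 0.
    """
    top_abn = scores_abnormal.topk(k, dim=1).values
    top_nor = scores_normal.topk(k, dim=1).values
    return F.binary_cross_entropy(top_abn, torch.ones_like(top_abn)) + \
           F.binary_cross_entropy(top_nor, torch.zeros_like(top_nor))

def relative_distance_loss(feat_normal, feat_abnormal, margin=1.0):
    """Relative distance learning (illustrative sketch, not the paper's exact loss).

    Instead of forcing scores to fixed values, the gap between normal and
    abnormal features is made larger than the spread (intra-class variance)
    of the normal features themselves.
    """
    center = feat_normal.mean(dim=0, keepdim=True)
    intra = (feat_normal - center).norm(dim=1).mean()           # normal-class spread
    inter = torch.cdist(feat_abnormal, feat_normal).mean()      # normal-abnormal gap
    return F.relu(intra + margin - inter)
```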
Chang_Depth_Estimation_From_Indoor_Panoramas_With_Neural_Scene_Representation_CVPR_2023
Abstract Depth estimation from indoor panoramas is challenging due to the equirectangular distortions of panoramas and inaccurate matching. In this paper, we propose a practical framework to improve the accuracy and efficiency of depth estimation from multi-view indoor panoramic images with Neural Radiance Field technology. Specifically, we develop two networks to implicitly learn the Signed Distance Function for depth measurements and the radiance field from panoramas. We also introduce a novel spherical position embedding scheme to achieve high accuracy. For better convergence, we propose an initialization method for the network weights based on the Manhattan World Assumption. Furthermore, we devise a geometric consistency loss, leveraging the surface normal, to further refine the depth estimation. The experimental results demonstrate that our proposed method outperforms state-of-the-art works by a large margin in both quantitative and qualitative evaluations. Our source code is available at https://github.com/WJ-Chang-42/IndoorPanoDepth.
1. Introduction
Panoramic imaging has emerged as an attractive imaging technique in many fields, such as computer vision and robotics. Different from traditional imaging devices, panoramic cameras capture a holistic scene and present it as a 2D image with equirectangular projection. Indoor panoramas, captured in interior scenes by panoramic cameras, have been widely used in interior design and decoration. Recovering depth information aligned with RGB panoramic images benefits a line of downstream applications, such as augmented reality and indoor mapping.
Recent works on depth estimation from panoramas employ Convolutional Neural Network (CNN) structures with prior knowledge learned from depth labels and achieve excellent performance. Most of these works adopt a single panoramic image to predict a relative depth map [7, 23, 29, 31, 37, 39]. These methods require a large number of RGB and depth pairs for training and encounter the problem of domain adaptation in practice. A few works attempt to employ multi-view panoramic images in the depth estimation task [32, 38]. They recover depth information by finding correspondences between different views. However, strict vertical or horizontal position relations are required for the input images in these methods.
Panoramas show great distortions when presented as 2D images. Prior works adopt various technologies to overcome this problem, such as processing panoramas with perspective projection [7, 26, 27, 31] and developing special convolution kernels [8, 30, 37]. Recently, the Neural Radiance Field (NeRF) [18], based on volume rendering, has attracted great attention; it aims to synthesize novel views and recover the geometry of a complex scene. It considers image pixels as the rendering results of camera rays cast into the scene and learns geometric information from the correspondence among rays, which eliminates the effects of distortions when processing panoramic images. However, when applied to panoramas, state-of-the-art scene representation methods still require a large number of input images and take a long time to converge. It is a compelling research problem to explore how to leverage the omnidirectional information in panoramas to achieve satisfying depth estimation results with fewer images and faster convergence.
To exploit the holistic spatial information in panoramas, we propose a framework that achieves holistic depth estimation from a few panoramic images. Our framework consists of two main networks with a novel positional embedding scheme for learning a better representation from panoramas. The geometry network estimates the Signed Distance Function (SDF) to represent the 3D information of the scene, and the color network reconstructs the color texture. With the assistance of the rendering equation, the expected color of a pixel in an image is rendered from the radiance values of the sampled 3D coordinates along camera rays. Both networks are optimized by minimizing the difference between the rendered and captured colors.
Inspired by [2], we propose a method to initialize the parameters of the geometry network based on the assumption that floors and ceilings are always perpendicular to the gravity direction in indoor panoramic images, which provides guidance for properly optimizing the geometry network. Experimental results show that the proposed initialization scheme helps the network converge faster and achieve better results. In addition, considering that the geometric information from the depth is supposed to be consistent with the geometry from the surface normal, we devise a geometric consistency loss, which further refines the depth measurements. Moreover, we construct a synthetic dataset that provides RGB-D image pairs from various positions. We evaluate our method on our synthetic dataset and two real-world datasets. The experimental results demonstrate that our method achieves superior performance among state-of-the-art approaches. Even with fewer image views and a short training period, our method works well and outputs promising depth measurements. Our contributions are summarized as follows:
• We propose an unsupervised method for depth estimation from multi-view indoor panoramic images by utilizing a neural network with a specially designed positional embedding scheme to implicitly learn the SDF of the scene represented by panoramas.
• Inspired by the Manhattan World Assumption, we propose an initialization method for the network weights for better convergence.
• We devise a loss term based on the geometric consistency that the geometric information from depth is supposed to be consistent with that from the surface normal.
• We release a synthetic panoramic RGB-D dataset rendered from photorealistic indoor scenes. Experimental results on our synthetic dataset and two realistic datasets demonstrate that our proposed method achieves superior performance in both quantitative and qualitative evaluations.
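As a concrete illustration of the panorama-specific geometry discussed above, the sketch below maps each equirectangular pixel to a unit ray direction (standard equirectangular geometry) and shows one plausible form of a spherical position embedding: a sinusoidal encoding of each sample point's radius, latitude, and longitude. The paper's actual embedding scheme is not spelled out in this excerpt, so the function names, frequency count, and coordinate convention are assumptions.

```python
import numpy as np

def equirect_ray_dirs(height, width):
    """Unit ray directions for every pixel of an equirectangular panorama.

    Row v maps to latitude theta in [-pi/2, pi/2] and column u to longitude
    phi in [-pi, pi]; this part is standard equirectangular geometry.
    """
    v, u = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    theta = (0.5 - (v + 0.5) / height) * np.pi         # latitude
    phi = ((u + 0.5) / width - 0.5) * 2.0 * np.pi      # longitude
    dirs = np.stack([np.cos(theta) * np.sin(phi),
                     np.sin(theta),
                     np.cos(theta) * np.cos(phi)], axis=-1)
    return dirs                                        # (H, W, 3)

def spherical_position_embedding(points, num_freqs=6):
    """Sinusoidal embedding of sample points in spherical coordinates.

    One plausible form of a "spherical position embedding": radius, latitude,
    and longitude of each 3D sample, expanded with sin/cos at several
    frequencies.  The paper's exact scheme may differ.
    """
    x, y, z = points[..., 0], points[..., 1], points[..., 2]
    r = np.linalg.norm(points, axis=-1)
    coords = np.stack([r,
                       np.arcsin(np.clip(y / np.maximum(r, 1e-8), -1, 1)),
                       np.arctan2(x, z)], axis=-1)     # (N, 3): r, theta, phi
    freqs = 2.0 ** np.arange(num_freqs) * np.pi
    angles = coords[..., None] * freqs                 # (N, 3, num_freqs)
    emb = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return emb.reshape(*points.shape[:-1], -1)         # (N, 3 * 2 * num_freqs)

dirs = equirect_ray_dirs(256, 512)
print(dirs.shape, spherical_position_embedding(np.random.rand(10, 3)).shape)
```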
Cai_MARLIN_Masked_Autoencoder_for_Facial_Video_Representation_LearnINg_CVPR_2023
Abstract This paper proposes a self-supervised approach to learn universal facial representations from videos that can transfer across a variety of facial analysis tasks such as Facial Attribute Recognition (FAR), Facial Expression Recognition (FER), DeepFake Detection (DFD), and Lip Synchronization (LS). Our proposed framework, named MARLIN, is a facial video masked autoencoder that learns highly robust and generic facial embeddings from abundantly available non-annotated web-crawled facial videos. As a challenging auxiliary task, MARLIN reconstructs the spatio-temporal details of the face from densely masked facial regions, which mainly include the eyes, nose, mouth, lips, and skin, to capture local and global aspects that in turn help in encoding generic and transferable features. Through a variety of experiments on diverse downstream tasks, we demonstrate MARLIN to be an excellent facial video encoder as well as feature extractor that performs consistently well across a variety of downstream tasks including FAR (1.13% gain over supervised benchmark), FER (2.64% gain over unsupervised benchmark), DFD (1.86% gain over unsupervised benchmark), and LS (29.36% gain in Frechet Inception Distance), and even in the low-data regime. Our code and models are available at https://github.com/ControlNet/MARLIN.
1. Introduction

Facial analysis tasks [34, 43, 70, 85] provide essential cues for human non-verbal behavior analysis, and help unfold meaningful insights regarding social interaction [36], communication [40], and cognition [68], with potential applications in the Human-Computer Interaction (HCI) and Affective Computing domains. Recently, we have witnessed significant progress in deep neural network models to solve facial analysis tasks such as Facial Attribute Recognition (FAR) [34, 85], Facial Expression Recognition (FER) [48], DeepFake Detection (DFD) [70], and Lip Synchronization (LS) [43]. While these deep models can achieve remarkable performance, they often require large-scale annotated datasets, which is not only a resource-expensive and time-consuming process but also infeasible for some applications requiring domain expertise for annotation (e.g. FER).

Figure 1. Overview of the proposed Masked Autoencoder for facial Representation LearnINg aka MARLIN. MARLIN aims to learn a universal facial representation from abundantly available non-annotated facial video data.

To this end, self-supervised pre-training [26, 37, 71] has lately emerged as an effective strategy to address the limitations of fully supervised methods, as it enables generic representation learning from non-annotated data, which can then be transferred across tasks having limited labels. For images of natural scenes and objects, self-supervised learning approaches using self-distillation [14], contrastive learning [18, 19], solving pre-text tasks such as jigsaw puzzles [53], and more recently autoencoding [37, 71] have even outperformed supervised learning approaches.

Despite the promise offered by these self-supervised methods in learning scalable and generic representations for natural scene images and videos, they have not yet been investigated for learning representations from facial video data. Facial representation learning requires tracking of fine-grained face-specific details which might not be perfectly captured by linear tube masking [71]. Until now, most of the existing approaches associated with facial analysis tasks are highly specialized and develop task-specific models trained in a fully supervised manner [46, 54, 63], with very few recent efforts towards learning generic image-based facial encoding [10, 84]. These closely related works [10, 84] either focus on exploring training dataset properties in terms of size and quality [10] or perform pre-training in a visual-linguistic way [84]. These works [10, 84] are hard to scale since they use static image-level facial information, and the image-caption pairs are highly associated with context information rather than the face. In this paper, our goal is to learn universal and task-agnostic representations in a self-supervised manner for face-related downstream tasks (see Fig. 1).
For this purpose, we employ a masked autoencoder [37, 71] with a facial-guided masking strategy that learns to reconstruct spatio-temporal details of a face from densely masked facial regions using non-annotated videos. Unlike existing approaches for natural scene videos [71], where the tube masking is initialized with a static part of the video without any semantic information, our approach dynamically tracks the face and then develops a facial part-guided tube masking strategy using an off-the-shelf face parser, i.e., FaceXZoo [75]. Thus, we pose a more challenging task that encourages the model to learn spatio-temporal representations covering local as well as global information. Inspired by prior works [27, 60] showing high-quality reconstruction results along with rich and generic latent features, we incorporate an adversarial loss on top of masked encoding to enhance reconstruction quality. Our experimental results show that our proposed framework, MARLIN, learns highly generic facial encoding that scales and transfers well across diverse facial analysis tasks such as FER, DFD, FAR, and LS, and achieves favorable performance gains w.r.t. state-of-the-art benchmarks.

In summary, our main contributions are:
• We propose MARLIN, a universal and task-agnostic facial encoder that learns robust and transferable facial representation from abundantly available non-annotated web-crawled facial videos in a self-supervised fashion.
• As a challenging auxiliary task, we propose to reconstruct the spatio-temporal details of the face from the densely masked facial regions. The proposed facial region-guided tube masking (aka Fasking) strategy aims to learn local and global aspects from facial videos which in turn help encode generic and transferable features (see the sketch after Table 1 below).
• Through extensive quantitative and qualitative analysis, we show that MARLIN learns rich, generic, transferable, and robust facial representation that performs consistently well across a variety of downstream tasks including FAR (1.13% gain over supervised benchmark), FER (2.64% gain over unsupervised benchmark), DFD (1.86% gain over unsupervised benchmark), LS (29.36% gain for Frechet Inception Distance), and even in few-shot settings.

Table 1. Facial Analysis Tasks. Overview of different face-related tasks and relevant datasets down the lane.

Datasets             | # Samples | Env. | Fmt. | Task           | Year
---------------------|-----------|------|------|----------------|-----
LFW [39]             | 13,233    | Wild | Img. | Identification | 2008
VGG-FACE [54]        | 2.6M      | Wild | Img. | Identification | 2015
CelebA [50]          | 202,599   | Wild | Img. | Attributes     | 2015
YouTubeFace [78]     | 3,425     | Wild | Vid  | Identification | 2011
LRS2 [22]            | 144,482   | Wild | Vid  | Lip Sync.      | 2017
CelebV [79]          | 5         | Wild | Vid  | Reenact        | 2018
CMU-MOSEI [83]       | 23,453    | Wild | Vid  | Emo, Senti     | 2018
FaceForensics++ [62] | 1,004     | Wild | Vid  | DeepFake       | 2019
VoxCeleb2 [23]       | 150,480   | Wild | Vid  | Speaker        | 2018
CelebV-HQ [85]       | 55,666    | Wild | Vid  | Attribute      | 2022
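To make the facial region-guided tube masking (Fasking) step from the second contribution concrete, the sketch below biases the choice of masked patch tubes toward patches that overlap parsed facial regions. The `region_ids` input stands in for the output of a face parser such as FaceXZoo; the sampling weights and shapes are illustrative assumptions, not MARLIN's exact implementation:

```python
import torch

def fasking(region_ids: torch.Tensor, mask_ratio: float = 0.9) -> torch.Tensor:
    """region_ids: (T, H_p, W_p) per-patch facial-region labels (0 = background).
    Returns a boolean (H_p * W_p,) tube mask shared by all T frames."""
    t, hp, wp = region_ids.shape
    n_patches = hp * wp
    n_mask = int(mask_ratio * n_patches)
    # A patch is "facial" if it overlaps a facial region in any frame of the clip.
    facial = (region_ids > 0).any(dim=0).flatten().float()
    # Sample masked patches with higher probability on facial patches.
    weights = 1.0 + 4.0 * facial                    # facial patches sampled 5x more often (assumed ratio)
    idx = torch.multinomial(weights, n_mask, replacement=False)
    mask = torch.zeros(n_patches, dtype=torch.bool)
    mask[idx] = True                                # True = masked, i.e., to be reconstructed
    return mask

# Usage: the same mask is applied to every frame, forming spatio-temporal tubes.
region_ids = torch.randint(0, 5, (16, 14, 14))      # fake parser output: 16 frames, 14x14 patches
tube_mask = fasking(region_ids)                     # (196,) boolean mask
```

The masked tubes are then reconstructed by the decoder, with the adversarial loss applied to the reconstructed frames as described above.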
Dong_The_Enemy_of_My_Enemy_Is_My_Friend_Exploring_Inverse_CVPR_2023
Abstract

Although current deep learning techniques have yielded superior performance on various computer vision tasks, they are still vulnerable to adversarial examples. Adversarial training and its variants have been shown to be the most effective approaches to defend against adversarial examples. A particular class of these methods regularizes the difference between the output probabilities for an adversarial example and its corresponding natural example. However, this may have a negative impact if the natural example is misclassified. To circumvent this issue, we propose a novel adversarial training scheme that encourages the model to produce similar output probabilities for an adversarial example and its "inverse adversarial" counterpart. Particularly, the counterpart is generated by maximizing the likelihood in the neighborhood of the natural example. Extensive experiments on various vision datasets and architectures demonstrate that our training method achieves state-of-the-art robustness as well as natural accuracy among robust models. Furthermore, using a universal version of inverse adversarial examples, we improve the performance of single-step adversarial training techniques at a low computational cost.
1. Introduction

Deep learning has achieved revolutionary progress in numerous computer vision tasks [24, 40, 55] and has emerged as a promising technique for fundamental research in multiple disciplines [31, 35, 52]. However, a well-established line of study has demonstrated that Deep Neural Networks (DNNs) are extremely vulnerable to adversarial examples [42], which are indistinguishable from natural examples in human vision. In other words, a visually undetectable perturbation to the original example can lead to a significant disruption of the inference result of DNNs. The imperceptibility of these tailored examples also makes them easy to bypass manual verification [3, 15], posing a potential security threat to the safety of deep learning-based applications.

Figure 1. Average accuracy under different attack strengths for two networks trained on natural and adversarial samples ((a) Natural Training, (b) Adversarial Training [25]). We rank test examples based on the cross-entropy loss value in increasing order and divide them into two equal halves (Top 50% and Bottom 50%). Note that a negative ϵ denotes the strength of the inverse adversarial perturbation. (a) Naturally trained models are extremely susceptible to perturbations. (b) For adversarially trained models, the adversarial effect is exacerbated on examples that are more likely to be misclassified. The green line corresponds to natural examples.

Various defense methods have been proposed to improve the adversarial robustness of DNNs [21, 46, 48]. As the primary defense method, adversarial training [10, 25, 42] improves intrinsic network robustness by adaptively augmenting the training examples with adversarial examples. State-of-the-art adversarial training methods mainly focus on the distribution alignment between natural and adversarial examples to preserve the consistency of the DNN prediction [7, 44, 53]. However, there still exists an undesirable decrease in the standard accuracy of adversarially trained models due to limited data and restricted model capacity. The misclassification of natural examples can further undermine the distribution alignment during adversarial training.

The natural intuition is that adversarial examples corresponding to misclassified natural examples are more likely to be misclassified. In other words, adversarial examples exhibit higher loss values compared to their corresponding natural examples. Contrary to adversaries that are harmful to DNNs, we introduce inverse adversarial examples1 that are created by minimizing the objective function as an inverse procedure of adversary generation. Specifically, inverse adversarial examples are beneficial to DNNs and are more likely to be correctly classified. To support this claim, we study the accuracy of trained classification models on two groups of samples (see Figure 1). We present the accuracy of adversarial examples and their inverse counterparts under different attack strengths. Even a small adversarial perturbation can induce a drastic accuracy decrease for the naturally trained model.
For the ad-versarially trained model, the robust accuracy of examples with higher loss values (Bottom 50%) suffers from a heavier drop than that of examples with lower loss values (Top 50%) under larger attack strengths. This indicates that the adver-sarial counterparts of low-confidence or even misclassified examples are also misclassified. Therefore, the distribution alignment [7, 44, 53] between two misclassified examples might have an unnecessary or even harmful effect on the adversarial robustness establishment. In this paper, to mitigate the unnecessary or even harm-ful matching manner between misclassified examples, we propose a novel adversarial training framework based on an inverse version of adversarial examples, dubbed Inverse Ad-versarial Training (IAT), which implicitly bridges the dis-tribution gap between adversarial examples and the high-likelihood region of their belonging classes. Adversarial ex-amples of a certain category can thus be pulled closer to the high-likelihood region instead of their original examples. Specifically, we propose an inverse procedure of the stan-dard adversary generation to reach the high-likelihood re-gion. The generated inverse adversaries can also be viewed as the rectification of original examples for reducing pre-diction errors. Considering the multi-class decision surface and computational cost, we further design a class-specific inverse adversary generation paradigm as opposed to the instance-wise version. Furthermore, we establish a momen-tum mechanism for the prediction of inverse adversaries to stabilize the training process. A one-off version of our in-verse adversarial training is also proposed for improving time efficiency. Comprehensive experiments demonstrate the superiority of our method in comparison with state-of-the-art adversar-ial training approaches. We also show that our method can be adapted to larger models with extra generated data for robustness enhancement. Besides, the robustness of single-step adversarial training methods can be further improved at a low cost by incorporating our method. 1The formal definition will be given in the following sections.The main contribution of this paper can be summarized as follows: • By analyzing the unnecessary, or even harmful, align-ment between misclassified examples, we propose a novel adversarial training framework based on the in-verse version of adversarial examples, which promotes the aggregation of adversarial examples to the high-likelihood region of their belonging classes. • Based on our Inverse Adversarial Training (IAT) paradigm, we design a class-specific universal inverse adversary generation method to mitigate the individ-ual bias of different examples with high efficiency. We also propose a one-off strategy to reduce compu-tational costs with a negligible performance loss. • Extensive experiments demonstrate the effectiveness and generalizability of our method compared to state-of-the-art adversarial training methods. Our method can also be combined with single-step adversarial training methods as a plug-and-play component for boosting robustness at a low cost. Related works. The lethal vulnerabilities of deep neural networks against adversarial examples have been witnessed in [4, 10, 28, 42]. A myriad of attempts have been made to defend against these tailored examples, including adversar-ial training [17,25,44,53], adversarial detection [14,43], and input transformation-based methods [37, 48, 49]. 
Among them, adversarial training consistently remains the most effective method [2] to improve intrinsic network robustness by augmenting the training data with adversarial examples. In addition, most existing works incorporate a regularization term to narrow the distribution difference between natural examples and their adversarial counterparts [7, 44, 53], which has been demonstrated to be beneficial for robustness enhancement. This matching manner seems natural but might be misguided by misclassified natural examples, as we showed in Figure 1. Several efforts have been devoted to resolving this issue by weighting losses according to the intensity of adversarial examples [23, 54]. However, they mainly concentrate on mitigating the imbalance of the disturbance effect among adversarial examples, while our primary focus is to alleviate the harmful alignment between misclassified examples by incorporating inverse adversarial examples.

Inverse adversarial examples were first formally described in [36], where Salman et al. studied them in vision systems to enhance in-distribution performance against new corruptions. In comparison, we investigate the rectification effect of inverse adversarial examples on the distribution alignment during adversarial training for robustness enhancement. A concurrent work [22] also exploits the inverse version of adversarial examples for adversarial robustness by incorporating different distance metrics. However, we build on class-specific universal inverse adversaries for adversarial training with more efficiency and robustness. Furthermore, we show how our method can be combined with single-step adversarial training techniques to improve both natural performance and robustness.
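Since an inverse adversarial example is described above as the result of minimizing (rather than maximizing) the objective in the neighborhood of a natural example, one way to generate it is a PGD-style descent projected onto an L-infinity ball. The step sizes and iteration count below are assumptions, and the class-specific universal variant used by IAT is not shown:

```python
import torch
import torch.nn.functional as F

def inverse_adversarial_example(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Perturb x within an L-inf eps-ball so that the cross-entropy loss is
    *minimized* -- the inverse of standard adversary generation."""
    x_inv = x.clone().detach()
    for _ in range(steps):
        x_inv.requires_grad_(True)
        loss = F.cross_entropy(model(x_inv), y)
        grad, = torch.autograd.grad(loss, x_inv)
        with torch.no_grad():
            x_inv = x_inv - alpha * grad.sign()          # descend, not ascend
            x_inv = x + (x_inv - x).clamp(-eps, eps)     # project back to the eps-ball
            x_inv = x_inv.clamp(0.0, 1.0)                # keep a valid image
    return x_inv.detach()
```

During IAT, the training objective then encourages the model to produce similar output distributions for an adversarial example and this inverse counterpart, rather than for the (possibly misclassified) natural example.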
Chen_Boundary_Unlearning_Rapid_Forgetting_of_Deep_Networks_via_Shifting_the_CVPR_2023
Abstract

The practical needs of the “right to be forgotten” and poisoned data removal call for efficient machine unlearning techniques, which enable machine learning models to unlearn, or to forget, a fraction of training data and its lineage. Recent studies on machine unlearning for deep neural networks (DNNs) attempt to destroy the influence of the forgetting data by scrubbing the model parameters. However, this is prohibitively expensive due to the large dimension of the parameter space. In this paper, we refocus our attention from the parameter space to the decision space of the DNN model, and propose Boundary Unlearning, a rapid yet effective way to unlearn an entire class from a trained DNN model. The key idea is to shift the decision boundary of the original DNN model to imitate the decision behavior of the model retrained from scratch. We develop two novel boundary shift methods, namely Boundary Shrink and Boundary Expanding, both of which can rapidly achieve the utility and privacy guarantees. We extensively evaluate Boundary Unlearning on the CIFAR-10 and Vggface2 datasets, and the results show that Boundary Unlearning can effectively forget the forgetting class on image classification and face recognition tasks, with an expected speed-up of 17× and 19×, respectively, compared with retraining from scratch.
1. Introduction

Suppose a company trains a face recognition model with your photos and deploys it as an open API. Your photos could be stolen or inferred by attackers via model inversion attacks [6, 18]. With the increasing awareness of protecting users' privacy, many privacy regulations have taken effect to provide you control over your personal data. For example, the General Data Protection Regulation (GDPR) established by the European Union gives individuals “the right to be forgotten” and mandates that companies erase personal data once it is requested [35].

(This work was supported in part by the National Natural Science Foundation of China under Grants 62272183 and 62171189; by the Key R&D Program of Hubei Province under Grant 2021BAA026; and by the special fund for Wuhan Yellow Crane Talents (Excellent Young Scholar). The corresponding author of this paper is Chen Wang.)

Beyond the “right to be forgotten”, data forgetting from machine learning (ML) models is also beneficial when certain training data is no longer valid, e.g., when some training data is manipulated by data poisoning attacks [10, 26], becomes outdated over time, or is even identified to be mistaken after training. These practical needs call for efficient machine unlearning techniques, which enable ML models to unlearn, or to forget, a fraction of training data and its lineage.

In this paper, we focus on unlearning an entire class from deep neural networks (DNNs), which is useful in realistic scenarios like face recognition: unlearning one's data requires forgetting the entire class of one's face images. As the DNN model retrained from scratch is the optimal unlearned model, early studies try to accelerate the retraining process of deep networks [1, 11, 38], but have to intervene in the original training process, which degrades the model utility and increases the training time. A branch of recent research [8, 9, 23, 27] attempts to destroy the influence of the forgetting data by scrubbing the model parameters. For example, the Fisher Information Matrix (FIM) is used to locate the influence of the forgetting data in the parameter space [8, 9]. However, this is prohibitively expensive due to the large dimension of the parameter space.

In order to find an efficient unlearning approach to forget an entire class, we visualize the decision space of the retrained DNN model and make two key observations (cf. Figure 1). First, the forgetting samples spread around the decision space of the retrained DNN model, indicating that the decision boundary of the forgetting samples has been broken. Second, most of the forgetting samples move to the border of other clusters; this recalls the closest-to-boundary criterion [24] that samples at the border of a cluster in the decision space will probably be predicted with large uncertainty.

Figure 1. Key observations from the decision space of the retrained DNN model. The solid dots in different colors represent the remaining samples belonging to different classes and the hollow circles in different colors stand for the forgetting samples predicted as corresponding classes.
It can be observed that (1) the forgetting samples spread around the feature space of the retrained DNN model, and (2) most of the forgetting samples move to the borders of other clusters. These two observations naturally match the two critical goals of machine unlearning: utility and privacy guarantees. Utility guarantee ensures that the unlearned model should generalize badly on the forgetting data while the prediction performance on the remaining data is maintained. Privacy guarantee means that the unlearned model should not leak any information of the forgetting data. Based on our key ob-servations, the utility guarantee can be achieved by only de-stroying the boundary of the forgetting class but maintain-ing the boundary of the remain classes, while the privacy guarantee can be accomplished by pushing the forgetting data to the border of other clusters. In light of the above ideas, we refocus our attention from the parameter space to the decision space of the DNN model1, and propose Boundary Unlearning , a rapid yet ef-fective way to unlearn the forgetting class from a trained DNN model. Boundary Unlearning tries to shift the de-cision boundary of the original DNN model to imitate the decision behavior of the retrained model. To achieve the critical goals, we further introduce two novel boundary shift methods: Boundary Shrink andBoundary Expanding . The former breaks the decision boundary of the forgetting class by splitting the forgetting feature space into other classes, while the latter disperses the activation about the forgetting class by remapping and pruning an extra shadow class as-signed to the forgetting data. 1Previous unlearning approaches try to destroy the information of the forgetting data by locating the influential parameters directly, while we find that unlearning can be accomplished by manipulating the parameters with the guidance of the decision behaviors of the retrained model.We summarize our major contributions as follows: • We propose Boundary Unlearning, the first work to un-learn an entire class from a trained DNN model by shifting the decision boundary. Compared with ex-isting studies, Boundary Unlearning neither costs too much computational resource nor intervenes the origi-nal training pipeline. • We propose two novel methods, namely, Boundary Shrink and Boundary Expanding, to shift the decision boundary of the forgetting class. Both methods can rapidly achieve the utility and privacy guarantees with only a few epochs of boundary adjusting. • We conduct extensive experiments to evaluate Bound-ary Unlearning on image classification and face recog-nition tasks. The results show that Boundary Unlearn-ing can rapidly and effectively forget the forgetting class, and outperforms four state-of-the-art techniques. The code has been released for reproducibility2.
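Of the two boundary shift methods, Boundary Expanding is the easier one to sketch from the description above: widen the classifier with one extra "shadow" class, push the forgetting samples toward it for a few epochs, and then prune the shadow neuron. The layer handling, learning rate, and schedule below are illustrative assumptions rather than the authors' exact recipe:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def expand_classifier(fc: nn.Linear) -> nn.Linear:
    """Copy the final linear layer and add one extra output: the shadow class."""
    new_fc = nn.Linear(fc.in_features, fc.out_features + 1)
    with torch.no_grad():
        new_fc.weight[:-1].copy_(fc.weight)
        new_fc.bias[:-1].copy_(fc.bias)
    return new_fc

def boundary_expanding(backbone, fc, forget_loader, epochs=3, lr=1e-3):
    shadow_fc = expand_classifier(fc)
    shadow_label = fc.out_features                     # index of the new shadow class
    opt = torch.optim.SGD(list(backbone.parameters()) + list(shadow_fc.parameters()), lr=lr)
    for _ in range(epochs):
        for x, _ in forget_loader:                     # only the forgetting class is needed
            logits = shadow_fc(backbone(x))
            loss = F.cross_entropy(logits, torch.full((x.size(0),), shadow_label))
            opt.zero_grad()
            loss.backward()
            opt.step()
    # Prune the shadow neuron: keep only the original class outputs.
    pruned = nn.Linear(fc.in_features, fc.out_features)
    with torch.no_grad():
        pruned.weight.copy_(shadow_fc.weight[:-1])
        pruned.bias.copy_(shadow_fc.bias[:-1])
    return backbone, pruned
```

Boundary Shrink follows the complementary route: it reassigns each forgetting sample to its nearest incorrect class and fine-tunes on those labels, thereby splitting the forgetting region among the remaining classes.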
Hui_Bridging_Search_Region_Interaction_With_Template_for_RGB-T_Tracking_CVPR_2023
Abstract RGB-T tracking aims to leverage the mutual enhance-ment and complement ability of RGB and TIR modalities for improving the tracking process in various scenarios, where cross-modal interaction is the key component. Some previ-ous methods concatenate the RGB and TIR search region features directly to perform a coarse interaction process with redundant background noises introduced. Many other methods sample candidate boxes from search frames and conduct various fusion approaches on isolated pairs of RGB and TIR boxes, which limits the cross-modal interaction within local regions and brings about inadequate context modeling. To alleviate these limitations, we propose a novel Template-Bridged Search region Interaction (TBSI) module which exploits templates as the medium to bridge the cross-modal interaction between RGB and TIR search regions by gathering and distributing target-relevant object and envi-ronment contexts. Original templates are also updated with enriched multimodal contexts from the template medium. Our TBSI module is inserted into a ViT backbone for joint feature extraction, search-template matching, and cross-modal interaction. Extensive experiments on three popu-lar RGB-T tracking benchmarks demonstrate our method achieves new state-of-the-art performances. Code is avail-able at https://github.com/RyanHTR/TBSI .
1. Introduction

Given the initial state of a single target object in the first frame, the goal of single object tracking (SOT) is to localize the target object in successive frames. As a fundamental task in the computer vision community, SOT has drawn great attention from researchers. However, current SOT methods built on only visible light (RGB) data become vulnerable under extreme imaging conditions (e.g., low illumination and adverse weather), which motivates the incorporation of thermal infrared (TIR or T) data for mutual enhancement and complement. Benefiting from the strong nocturnal photosensitivity and penetration ability of thermal infrared data, RGB-T tracking enjoys wide potential applications such as video surveillance processing [1], intelligent robotics [5], and autonomous driving [8].

Figure 1. Comparison between our cross-modal interaction approach and previous ones. (a) Features of RGB and TIR search frames are directly concatenated. (b) Candidate boxes (RoIs) are sampled from RGB and TIR search frames and fused in pairs with gating or attention mechanisms. (c) Our approach exploits template tokens as the medium to bridge the cross-modal interaction between RGB and TIR search region tokens.

As a multimodal vision task, the key to RGB-T tracking is how to perform effective cross-modal interaction. Since the tracking process occurs in successive frames guided by the annotated initial frame, cross-modal interaction between the search frames of the RGB and TIR modalities becomes the main focus. As illustrated in Figure 1 (a), some previous methods [16, 44] directly concatenate features of the whole RGB and TIR search frames from the encoders of strong base trackers [4, 40]. This simple manner tends to introduce redundant background noise, making cross-modal interaction too coarse and hence harming the model's discriminative ability. In addition, there are many other methods [14, 27, 28, 37, 39, 49] which sample candidate boxes (RoIs) from a Gaussian distribution in the search frames and apply various fusion operators based on attention, gating mechanisms, or dataset attributes, etc., to fuse each pair of RoI features of the RGB and TIR modalities, as shown in Figure 1 (b). Then, the fused RoI features are separately fed into a binary classifier to distinguish the target object. However, each pair of RoIs merely crops a small portion of local features from the search frames, containing limited foreground and background information. Thus, cross-modal interaction between each isolated pair of RoIs may bring about inadequate modeling of the global environment context in the search frame and restrict the mutual enhancement and complement effect of the two modalities.

Given the above discussion, we argue that direct cross-modal interaction between RGB and TIR search frames or candidate RoIs still has limitations in comprehensively leveraging complementary multimodal clues to facilitate the tracking process.
Therefore, we propose a novel scheme which exploits the target templates as the medium to bridge the cross-modal interaction between RGB and TIR search regions , as illustrated in Figure 1 (c). The major superior-ity motivating our advocate of this scheme is that the tem-plates contain original multimodal information of the target object, which can serve as strong guidance to extract target-relevant object and environment contexts from search re-gions for adaptive and precise information enhancement and complement. The background noises of other distrac-tors in search regions can also be reduced by template bridg-ing during the cross-modal interaction process. In order to implement the above scheme, we design a Template-Bridged Search region Interaction (TBSI) mod-ule. Concretely, our TBSI module first fuses features of RGB and TIR templates to obtain the multimodal context medium. Since the cross-attention mechanism [36] is an effective and widely-adopted practice for context aggrega-tion, our TBSI also utilizes it with the fused template as query and TIR search region feature as key and value to gather target-relevant TIR context information into the tem-plate medium. Then, the RGB search region feature serves as query and the fused template serves as key and value to distribute target-relevant TIR context from the medium to the RGB search region. Similarly, target-relevant RGB context is also gathered and distributed to the TIR search region through the template medium in a reverse direction.Finally, comprehensive multimodal information aggregated in the fused template is transferred back to the original RGB and TIR templates to update them with the enriched multi-modal contexts gathered from search regions. In addition, most existing RGB-T tracking methods [14, 27,28,37,39,49] employ MDNet [32] with VGG-M [34] as the base tracker, whose number of classification branches equals the number of training sequences, which largely lim-its their capacity and scalability. Inspired by the powerful ability of Vision Transformer (ViT) [12] to capture long-range dependencies and its recent success on SOT [7, 24, 42], we also extend ViT to RGB-T tracking for joint fea-ture extraction, search-template matching, and cross-modal interaction. Our TBSI module is inserted into the ViT base tracker to bridge the intra-modal information flow within the Transformer layers for effective RGB-T tracking. Our contributions are summarized as follows: (1) We propose a novel Template-Bridged Search region Interac-tion (TBSI) module which exploits the fused target tem-plate as the medium to bridge the cross-modal interaction between RGB and TIR search regions and update original templates as well, forming adaptive and precise information enhancement. (2) We extend the ViT architecture with the proposed TBSI module to RGB-T tracking for joint feature extraction, search-template matching, and cross-modal in-teraction, which has not been explored by previous methods to our best knowledge. (3) Extensive experiments demon-strate that our method achieves new state-of-the-art perfor-mances on three popular RGB-T tracking benchmarks.
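The gather-and-distribute interaction described above maps naturally onto two cross-attention calls, with the fused template acting as the medium. The sketch below shows that core data flow; the dimensions, head count, and omitted layer norms/MLPs are simplifications rather than the full TBSI module:

```python
import torch
import torch.nn as nn

class TemplateBridge(nn.Module):
    """Gather target-relevant context from one modality's search tokens into the
    fused template, then distribute it to the other modality's search tokens."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.gather = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.distribute = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, fused_template, search_src, search_dst):
        # Gather: the template queries the source-modality search tokens.
        ctx, _ = self.gather(query=fused_template, key=search_src, value=search_src)
        template = fused_template + ctx
        # Distribute: destination-modality search tokens query the enriched template.
        out, _ = self.distribute(query=search_dst, key=template, value=template)
        return search_dst + out, template

# Usage: bridge TIR context into the RGB search region (swap arguments for the reverse direction).
bridge = TemplateBridge()
fused_template = torch.randn(1, 64, 256)    # fused RGB+TIR template tokens
rgb_search = torch.randn(1, 256, 256)
tir_search = torch.randn(1, 256, 256)
rgb_enhanced, template = bridge(fused_template, search_src=tir_search, search_dst=rgb_search)
```

In the full method, the enriched template is finally used to update the original RGB and TIR templates, and the module is inserted between ViT layers of the base tracker.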
Girdhar_ImageBind_One_Embedding_Space_To_Bind_Them_All_CVPR_2023
Abstract

We present IMAGE BIND, an approach to learn a joint embedding across six different modalities - images, text, audio, depth, thermal, and IMU data. We show that not all combinations of paired data are necessary to train such a joint embedding; only image-paired data is sufficient to bind the modalities together. IMAGE BIND can leverage recent large-scale vision-language models, and extends their zero-shot capabilities to new modalities just by using their natural pairing with images. It enables novel emergent applications ‘out-of-the-box’ including cross-modal retrieval, composing modalities with arithmetic, cross-modal detection, and generation. The emergent capabilities improve with the strength of the image encoder, and we set a new state of the art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Finally, we show strong few-shot recognition results outperforming prior work, and that IMAGE BIND serves as a new way to evaluate vision models for visual and non-visual tasks.

1. Introduction

A single image can bind together many experiences – an image of a beach can remind us of the sound of waves, the texture of the sand, a breeze, or even inspire a poem. This ‘binding’ property of images offers many sources of supervision to learn visual features, by aligning them with any of the sensory experiences associated with images. Ideally, for a single joint embedding space, visual features should be learned by aligning to all of these sensors. However, this requires acquiring all types and combinations of paired data with the same set of images, which is infeasible.

Recently, many methods learn image features aligned with text [1, 30, 45, 59, 63, 80, 81], audio [3, 4, 49, 54, 55, 68], etc. These methods use a single pair of modalities or, at best, a few visual modalities. However, the final embeddings are limited to the pairs of modalities used for training. Thus, video-audio embeddings cannot directly be used for image-text tasks and vice versa. A major obstacle in learning a true joint embedding is the absence of large quantities of multimodal data where all modalities are present together.

In this paper, we present IMAGE BIND, which learns a single shared representation space by leveraging multiple types of image-paired data. It does not need datasets where all modalities co-occur with each other. Instead, we leverage the binding property of images and show that just aligning each modality's embedding to image embeddings leads to an emergent alignment across all of the modalities. In practice, IMAGE BIND leverages web-scale (image, text) paired data and combines it with naturally occurring paired data such as (video, audio), (image, depth), etc., to learn a single joint embedding space. This allows IMAGE BIND to implicitly align the text embeddings to other modalities such as audio and depth, enabling zero-shot recognition capabilities on those modalities without explicit semantic or textual pairing. Moreover, we show that it can be initialized with large-scale vision-language models such as CLIP [59], thereby leveraging the rich image and text representations of these models.
Thus, I MAGE BIND can be applied to a variety of different modalities and tasks with little training. We use large-scale image-text paired data along with nat-urally paired ‘self-supervised’ data across four new modal-ities -audio, depth, thermal, and Inertial Measurement Unit (IMU) readings – and show strong emergent zero-shot clas-sification and retrieval performance on tasks for each of these modalities. These emergent properties improve as the underlying image representation is made stronger. On au-dio classification and retrieval benchmarks, I MAGE BIND’s emergent zero-shot classification matches or outperforms specialist models trained with direct audio-text supervision on benchmarks like ESC, Clotho, AudioCaps. I MAGE BIND representations also outperform specialist supervised mod-els on few-shot evaluation benchmarks. Finally, we show that I MAGE BIND’s joint embeddings can be used for a wide variety of compositional tasks as illustrated in Figure 1, in-cluding cross-modal retrieval, combining embeddings via arithmetic, detecting audio sources in images, and generat-ing images given audio input. 2. Related Work IMAGE BIND builds upon several advances in vision-language, multimodal, and self-supervised research. Language Image Pre-training. Training images jointly with linguistic signals like words or sentences has been shown to be an effective method for zero-shot, open-vocabulary recognition and text to image retrieval [13, 17, 37, 66]. Language as supervision can further be used for learning strong video representations [2, 46, 47]. Joulin et al. [33] show that using large-scale image dataset with noisy captions yields strong visual features. Recently, CLIP [59], ALIGN [30] and Florence [81] collect large collections of image and text pairs and train models to embed image and language inputs in a joint space using contrastive learning, exhibiting impressive zero-shot performance. CoCa [80]adds an image captioning objective on top of the contrastive loss for improved performance. Flamingo [1] handles arbi-trarily interleaved images and texts, and achieves state of the art on many few-shot learning benchmarks. LiT [82] adopts contrastive training for fine-tuning and observes freezing image encoders works the best. This prior line of works mostly considers image and text, while our work enables zero-shot recognition on multiple modalities. Multi-Modal Learning. Our work binds multiple modal-ity representations in a joint embedding space. Prior works explored joint training of multiple modalities in a super-vised [20, 41] or self-supervised contexts [3, 19, 49, 68, 72]. The success of image and language pre-training methods such as CLIP has inspired approaches that revisits learn-ing deep semantic representations through matching other modalities with linguistic inputs. Various methods adapt CLIP to extract semantically strong video representations [14, 42, 44, 77]. Most related to our method, Nagrani et al. [50] create a weakly-labeled dataset for paired video-audio and captions that allows for training multi-modal video-audio encoder to match textual features resulting in strong audio and video retrieval and captioning perfor-mance. AudioCLIP [26] adds audio as an additional modal-ity into a CLIP framework, enabling zero-shot audio classi-fication. In contrast, I MAGE BINDdoes not require explicit paired data between all modalities and instead leverages im-age as a natural weak supervision for unifying modalities. 
Feature Alignment. Pre-trained CLIP models have been utilized as teachers to supervise other models due to the strength of their visual representations [43, 57, 73]. Moreover, the CLIP joint image and text embedding space has also been leveraged for a variety of zero-shot tasks like detection [23, 86], segmentation [40], mesh animation [79], etc., showing the power of joint embedding spaces. PointCLIP [83] finds that a pre-trained CLIP encoder can be used for 3D recognition by projecting a point cloud to a number of 2D depth map views, which in turn are encoded using the CLIP visual encoder. In multilingual neural machine translation, a phenomenon similar to the emergent behavior of IMAGE BIND is commonly observed and utilized: if languages are trained in the same latent space through learned implicit bridging, translation can be done between language pairs for which no paired data is provided [32, 39].

3. Method

Our goal is to learn a single joint embedding space for all modalities by using images to bind them together. We align each modality's embedding to image embeddings, such as text to image using web data and IMU to video using video data captured from egocentric cameras with IMU. We show that the resulting embedding space has a powerful emergent zero-shot behavior that automatically associates pairs of modalities without seeing any training data for that specific pair. We illustrate our approach in Figure 2.

Figure 2. IMAGE BIND overview. Different modalities occur naturally aligned in different data sources, for instance images+text and video+audio in web data, depth or thermal information with images, IMU data in videos captured with egocentric cameras, etc. IMAGE BIND links all these modalities in a common embedding space, enabling new emergent alignments and capabilities.

3.1. Preliminaries

Aligning specific pairs of modalities. Contrastive learning [27] is a general technique for learning an embedding space by using pairs of related examples (positives) and unrelated examples (negatives). Using pairs of aligned observations, contrastive learning can align pairs of modalities such as (image, text) [59], (audio, text) [26], (image, depth) [68], (video, audio) [49], etc. However, in each case, the joint embeddings are trained and evaluated using the same pairs of modalities. Thus, (video, audio) embeddings
are not directly applicable for text-based tasks while (image, text) embeddings cannot be applied for audio tasks.

Zero-shot image classification using text prompts. CLIP [59] popularized a ‘zero-shot’ classification task based on an aligned (image, text) embedding space. This involves constructing a list of text descriptions that describe the classes in a dataset. An input image is classified based on its similarity to the text descriptions in the embedding space. Unlocking such zero-shot classification for other modalities requires specifically training using paired text data, e.g., (audio, text) [26] or (point-clouds, text) [83]. In contrast, IMAGE BIND unlocks zero-shot classification for modalities without paired text data.

3.2. Binding modalities with images

IMAGE BIND uses pairs of modalities (I, M), where I represents images and M is another modality, to learn a single joint embedding. We use large-scale web datasets with (image, text) pairings that span a wide range of semantic concepts. Additionally, we use the natural, self-supervised pairing of other modalities - audio, depth, thermal, and Inertial Measurement Unit (IMU) - with images.

Consider the pair of modalities (I, M) with aligned observations. Given an image I_i and its corresponding observation in the other modality M_i, we encode them into normalized embeddings: q_i = f(I_i) and k_i = g(M_i), where f, g are deep networks. The embeddings and the encoders are optimized using an InfoNCE [53] loss:

$\mathcal{L}_{I,M} = -\log \frac{\exp(\mathbf{q}_i^{\top}\mathbf{k}_i/\tau)}{\exp(\mathbf{q}_i^{\top}\mathbf{k}_i/\tau) + \sum_{j \neq i}\exp(\mathbf{q}_i^{\top}\mathbf{k}_j/\tau)}$,   (1)

where τ is a scalar temperature that controls the smoothness of the softmax distribution and j denotes unrelated observations, also called ‘negatives’. We follow [74] and consider every example j ≠ i in the mini-batch to be a negative. The loss makes the embeddings q_i and k_i closer in the joint embedding space, and thus aligns I and M. In practice, we use a symmetric loss L_{I,M} + L_{M,I}.

Emergent alignment of unseen pairs of modalities. IMAGE BIND uses modalities paired with images, i.e., pairs of the form (I, M), to align the embeddings from each modality M to those from images. We observe an emergent behavior in the embedding space that aligns two pairs of modalities (M1, M2) even though we only train using the pairs (I, M1) and (I, M2). This behavior allows us to perform a wide variety of zero-shot and cross-modal retrieval tasks without training for them. We achieve state-of-the-art zero-shot text-audio classification results without observing a single sample of paired (audio, text).
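A minimal sketch of the symmetric objective in Eq. (1), using every other example in the mini-batch as a negative. Batch construction and temperature handling are simplified here:

```python
import torch
import torch.nn.functional as F

def info_nce(q, k, tau=0.07):
    """Eq. (1) with in-batch negatives: q, k are (B, d) L2-normalized embeddings,
    row i of q paired with row i of k."""
    logits = q @ k.t() / tau                       # (B, B) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)        # -log softmax of the positive pair

def imagebind_loss(image_emb, other_emb, tau=0.07):
    """Symmetric loss L_{I,M} + L_{M,I} used in practice."""
    q = F.normalize(image_emb, dim=-1)
    k = F.normalize(other_emb, dim=-1)
    return info_nce(q, k, tau) + info_nce(k, q, tau)

# Example: a batch of 8 paired (image, audio) embeddings of dimension 1024.
loss = imagebind_loss(torch.randn(8, 1024), torch.randn(8, 1024))
```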
3.3. Implementation Details

IMAGE BIND is conceptually simple and can be implemented in many different ways. We deliberately choose a vanilla implementation that is flexible and allows for an effective study and easy adoption. In § 5, we present design decisions that are critical for good emergent ‘binding’.

Encoding modalities. We use a Transformer architecture [71] for all the modality encoders. We use the Vision Transformer (ViT) [12] for images. Following [19], we use the same encoder for images and videos. We temporally inflate [7] the patch projection layer of the ViT and use 2-frame video clips sampled from 2 seconds. We follow [21] for encoding audio and convert 2 seconds of audio sampled at 16kHz into spectrograms using 128 mel-spectrogram bins. As the spectrogram is also a 2D signal like an image, we use a ViT with a patch size of 16 and stride 10. We treat thermal images and depth images as one-channel images and also use a ViT to encode them. We follow [20] to convert depth into disparity maps for scale invariance. We extract the IMU signal consisting of accelerometer and gyroscope measurements across the X, Y, and Z axes. We use 5-second clips resulting in 2K time step IMU readings, which are projected using a 1D convolution with a kernel size of 8. The resulting sequence is encoded using a Transformer. Finally, we follow the text encoder design from CLIP [59].

We use separate encoders for images, text, audio, thermal images, depth images, and IMU. We add a modality-specific linear projection head on each encoder to obtain a fixed-size d-dimensional embedding, which is normalized and used in the InfoNCE loss from Eq. 1. In addition to ease of learning, this setup allows us to also initialize a subset of the encoders using pretrained models, e.g., the image and text encoder using CLIP [59] or OpenCLIP [29].

Table 1. Emergent zero-shot classification datasets for audio, depth, thermal, and Inertial Measurement Unit (IMU) modalities. We evaluate IMAGE BIND without training for any of these tasks and without training on paired text data for these modalities. For each dataset, we report the task (classification or retrieval), number of classes (#cls), metric for evaluation (Accuracy or mean Average Precision), and the number of test samples (#test).

Dataset                         | Task          | #cls | Metric | #test
--------------------------------|---------------|------|--------|------
Audioset Audio-only (AS-A) [18] | Audio cls.    | 527  | mAP    | 19048
ESC 5-folds (ESC) [58]          | Audio cls.    | 50   | Acc    | 400
Clotho (Clotho) [16]            | Retrieval     | -    | Recall | 1045
AudioCaps (AudioCaps) [36]      | Retrieval     | -    | Recall | 796
VGGSound (VGGS) [8]             | Audio cls.    | 309  | Acc    | 14073
SUN Depth-only (SUN-D) [67]     | Scene cls.    | 19   | Acc    | 4660
NYU-v2 Depth-only (NYU-D) [64]  | Scene cls.    | 10   | Acc    | 653
LLVIP (LLVIP) [31]              | Person cls.   | 2    | Acc    | 15809
Ego4D (Ego4D) [22]              | Scenario cls. | 108  | Acc    | 68865

4. Experiments

We first describe the main experimental setup and provide full details in the supplement.

Naturally paired modalities and datasets. We use IMAGE BIND on six modalities - image/video, text, audio, depth, thermal images, and IMU. As described in § 3.3, we treat videos as 2-frame images and process them the same as images. For the naturally available paired data, we use the (video, audio) pairs from the Audioset dataset [18], (image, depth) pairs from the SUN RGB-D dataset [67], (image, thermal) pairs from the LLVIP dataset [31], and (video, IMU) pairs from the Ego4D dataset [22]. For these pairs of modalities, we do not use any extra supervision like class labels, text, etc. Since SUN RGB-D and LLVIP are relatively small, we follow [20] and replicate them 50× for training.

Large scale image-text pairs. We leverage image-text supervision from large-scale web data [59]. For ease of experimentation, we use pretrained models that are trained on billions of (image, text) pairs. Specifically, we use the pretrained vision (ViT-H, 630M params) and text encoders (302M params) from OpenCLIP [29] in our experiments.

Encoders for each modality. We convert audio into 2D mel-spectrograms [21], and thermal and depth modalities into 1-channel images, and use ViT-B and ViT-S encoders respectively. The image and text encoders are kept frozen during the IMAGE BIND training and the audio, depth, thermal, and IMU encoders are updated.

Emergent zero-shot vs. zero-shot. Methods such as CLIP [59], AudioCLIP [26], etc. train with modality pairs, (image, text) and (audio, text), to demonstrate zero-shot classification using text prompts for the same modality. In contrast, IMAGE BIND binds modalities together using only image-paired data.
Thus, just by training on (image, text) and (image, audio), IMAGE BIND can perform zero-shot classification of audio using text prompts. As we do not directly train for this ability, we term it emergent zero-shot classification to distinguish it from methods that specifically train using paired text supervision for all modalities.

Evaluation on downstream tasks. We comprehensively evaluate IMAGE BIND on many different downstream tasks using different protocols. We summarize the main datasets used for evaluation in Table 1.

4.1. Emergent zero-shot classification

We evaluate IMAGE BIND on emergent zero-shot classification and use the text prompt templates from [59] (full details in Appendix B). We report the results in Table 2. Each task measures IMAGE BIND's ability to associate text embeddings with the other modalities without observing them together during training. Given the novelty of our problem setting, there are no “fair” baselines to compare IMAGE BIND with. Nevertheless, we compare to prior work that uses text paired with certain modalities (e.g. audio [26, 50]), and for certain “visual-like” modalities such as depth and thermal, we use the CLIP model directly. We also report the best reported supervised upper bound per benchmark.

IMAGE BIND achieves a high emergent zero-shot classification performance. On each benchmark, IMAGE BIND achieves strong gains and even compares favorably to supervised specialist models trained for the specific modality and task. These results demonstrate that IMAGE BIND aligns the modalities and implicitly transfers the text supervision associated with images to other modalities like audio. In particular, IMAGE BIND shows strong alignment for non-visual modalities like audio and IMU, suggesting that their naturally available pairing with images is a powerful source of supervision. For completeness, we also report the standard zero-shot image (ImageNet [62] - IN1K, Places-365 [85] - P365) and video (Kinetics400 [34] - K400, MSR-VTT 1k-A [76] - MSR-VTT) tasks. As the image & text encoders are initialized (and frozen) using OpenCLIP, these results match those of OpenCLIP.

              | IN1K | P365 | K400 | MSR-VTT | NYU-D | SUN-D | AS-A      | VGGS | ESC       | LLVIP | Ego4D
Random        | 0.1  | 0.27 | 0.25 | 0.1     | 10.0  | 5.26  | 0.62      | 0.32 | 2.75      | 50.0  | 0.9
IMAGE BIND    | 77.7 | 45.4 | 50.0 | 36.1    | 54.0  | 35.1  | 17.6      | 27.8 | 66.9      | 63.4  | 25.0
Text Paired   | -    | -    | -    | -       | 41.9∗ | 25.4∗ | 28.4†[26] | -    | 68.6†[26] | -     | -
Absolute SOTA | 91.0 [80] | 60.7 [6
Cai_Orthogonal_Annotation_Benefits_Barely-Supervised_Medical_Image_Segmentation_CVPR_2023
Abstract Recent trends in semi-supervised learning have signifi-cantly boosted the performance of 3D semi-supervised med-ical image segmentation. Compared with 2D images, 3D medical volumes involve information from different direc-tions, e.g., transverse, sagittal, and coronal planes, so as to naturally provide complementary views. These com-plementary views and the intrinsic similarity among ad-jacent 3D slices inspire us to develop a novel annotation way and its corresponding semi-supervised model for effec-tive segmentation. Specifically, we firstly propose the or-thogonal annotation by only labeling two orthogonal slices in a labeled volume, which significantly relieves the bur-den of annotation. Then, we perform registration to ob-tain the initial pseudo labels for sparsely labeled volumes. Subsequently, by introducing unlabeled volumes, we pro-pose a dual-network paradigm named Dense-Sparse Co-training (DeSCO) that exploits dense pseudo labels in early stage and sparse labels in later stage and meanwhile forces consistent output of two networks. Experimental results on three benchmark datasets validated our effectiveness in performance and efficiency in annotation. For example, with only 10 annotated slices, our method reaches a Dice up to 86.93% on KiTS19 dataset. Our code and models are available at https://github.com/HengCai-NJU/DeSCO .
1. Introduction

Medical image segmentation is one of the most critical vision tasks in the medical image analysis field. Thanks to the development of deep learning-based methods [8, 11, 28, 32], segmentation performance has been substantially improved. However, the current promising performance comes at the cost of large-scale, manually and precisely labeled datasets, which are prohibitively expensive and laborious to obtain. What's worse, different radiologists might provide different annotations even for the same image. Therefore, exploring ways to alleviate the requirement on the quantity or quality of manual annotation is highly demanded. Mainstream methods typically follow two paradigms: 1) degrade annotation quality, i.e., weakly-supervised segmentation, and 2) reduce annotation quantity, i.e., semi-supervised segmentation.

(Corresponding author: Yinghuan Shi. Heng Cai, Shumeng Li, Yinghuan Shi and Yang Gao are with the State Key Laboratory for Novel Software Technology and National Institute of Healthcare Data Science, Nanjing University, China. This work is supported by the NSFC Program (62222604, 62206052, 62192783), CAAI-Huawei MindSpore (CAAIXSJLJJ-2021-042A), China Postdoctoral Science Foundation Project (2021M690609), Jiangsu Natural Science Foundation Project (BK20210224), and the CCF-Lenovo Blue Ocean Research Fund.)

Figure 1. The upper figure illustrates our annotation method: each volume with annotations is labeled with only two orthogonal slices. The lower figure compares the efficiency and effectiveness of our orthogonal annotation and other manners, including conventional dense annotation and previous sparse annotation which labels slices in one plane, all trained on the LA [42] dataset in a supervised setting. For sparse annotation and our orthogonal annotation, we train the models only on labeled voxels through partial cross-entropy and partial Dice loss.

Weakly-supervised segmentation methods usually utilize weak annotations, e.g., image-level labels [16, 17], scribbles [20, 21], points [3], or partial slices [5, 18]. Unfortunately, most of them either have difficulty distinguishing fuzzy boundaries or come with a large additional computational burden [15]. What's more, the weakly-supervised setting usually requires coarse annotation for every single image, which is still a heavy burden for radiologists. Besides, most current methods originally developed for 2D segmentation cannot directly utilize 3D volumetric information [9].

Different from these weakly-supervised methods, semi-supervised methods train segmentation models with a small amount of manually labeled data and a large amount of unlabeled data, and have achieved remarkable performance with an impressive reduction in the demand for annotation [6, 19]. Despite their success, we notice that most current semi-supervised segmentation methods still require full 3D annotation for each labeled volume. In fact, segmentation targets in adjacent slices of a 3D volume are highly similar in both appearance and location, making it redundant to label every slice.
Although the sparse annotation is discussed in recent work [18], we notice these conventional methods still neglect the complementary views between different di-rections in 3D volume. It is known that 3D medical volumes naturally contains different directions ( e.g., transverse, coronal planes) which provide complementary information from different views. And recent trends in semi-supervised learning [7, 40] have revealed that learning from complementary view is indeed beneficial. Thus, we wonder whether a novel annotation method coupled with its corresponding model could be in-vestigated by introducing this complementary relation into 3D semi-supervised medical image segmentation . In this paper, for labeled volume, we innovatively inves-tigate a novel sparse annotation way— orthogonal annota-tion,i.e., only to label two slices in its orthogonal direction (e.g., transverse and coronal direction in Figure 1). We be-lieve our annotation way has two merits: 1) it could largely force the model to learn from complementary views with two diversely initialized labeled slices, 2) it helps greatly re-duce the label costs with fully utilizing the inter-slice simi-larity. Following very recent work [18], we name the setting as Barely-supervised Segmentation. To incorporate our orthogonal annotation, the most in-tuitive thought about training strategy of a segmentation model is that only the voxels on the labeled slices contribute to the training. However, directly learning from this sparse annotation is unstable and the training is apt to collapse (shown in Sec. 4). Thus, we apply registration to spread supervision signals from slice to volume, where the result of label propagation can serve as the dense pseudo label for training. By performing registration, we obtain two sets of pseudo labels for volumes from orthogonal directions. Yet, the obtained pseudo labels are not promising enough to directly train a segmentation model using current exist-ing semi-supervised methods, which is mainly due to the accumulation of error in the registration process. Therefore, to leverage 1) the volumes with inaccurate pseudo labels and 2) the rest unlabeled volumes, we propose a simple yet effective end-to-end framework namely Dense-Sparse Co-training (DeSCO), which consists two segmen-tation models of a same structure. At the beginning of training, the models mainly learn from dense pseudo labels with a learning preference on voxels with more confident pseudo labels, i.e., voxels near to registration source slice, and exploit unlabeled volumes through cross-supervision. After the models have been improved through training, wegradually get rid of pseudo label until the supervised loss solely comes from sparse annotation. Meanwhile, the role of cross-supervision is gradually emphasized correspond-ingly. Because in the process of reaching consensus through cross-supervision, the mistake introduced by previous train-ing on inaccurate pseudo labels could be revised. Overall, our contributions are three folds: • A new annotation way that only labels two orthogonal slices for a labeled 3D volume, which greatly reduces the annotation burden. • A novel barely-supervised 3D medical image segmen-tation framework to steadily utilize our high-efficient sparse annotation with coupled segmentation method. • A dense-sparse co-training paradigm to learn from dense pseudo label and sparse label while leveraging unlabeled volumes to reduce noise by reaching con-sensus through cross-supervision. 
Extensive experiments on three public datasets validate that our barely-supervised method is close to or even better than its upper bound, i.e., semi-supervised methods with fully annotated labeled volumes. For example, on KiTS19, compared to Mean Teacher [36], which uses 320 labeled slices to reach a Dice of 84.98%, we use only 10 labeled slices yet obtain a Dice of 86.93%.
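As a concrete illustration of the sparse supervision used for labeled slices (the partial cross-entropy and partial Dice losses mentioned in the Figure 1 caption), the following is a minimal sketch that restricts the losses to annotated voxels. It is an illustrative approximation, not the authors' released implementation; the tensor layout and the binary foreground assumption in the Dice term are our own choices.

```python
import torch
import torch.nn.functional as F

def partial_ce_loss(logits, labels, annotated_mask, eps=1e-8):
    """Cross-entropy computed only on annotated voxels.

    logits:         (B, C, D, H, W) raw network outputs
    labels:         (B, D, H, W)    integer class labels (arbitrary on unlabeled voxels)
    annotated_mask: (B, D, H, W)    1 on voxels lying on a labeled slice, else 0
    """
    per_voxel = F.cross_entropy(logits, labels, reduction="none")  # (B, D, H, W)
    masked = per_voxel * annotated_mask
    return masked.sum() / (annotated_mask.sum() + eps)

def partial_dice_loss(logits, labels, annotated_mask, eps=1e-5):
    """Soft Dice restricted to annotated voxels (binary foreground/background case)."""
    probs = torch.softmax(logits, dim=1)[:, 1]   # foreground probability
    target = (labels == 1).float()
    m = annotated_mask
    inter = (probs * target * m).sum()
    denom = (probs * m).sum() + (target * m).sum()
    return 1.0 - (2 * inter + eps) / (denom + eps)
```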
Guo_Knowledge_Distillation_for_6D_Pose_Estimation_by_Aligning_Distributions_of_CVPR_2023
Abstract Knowledge distillation facilitates the training of a com-pact student network by using a deep teacher one. While this has achieved great success in many tasks, it remains completely unstudied for image-based 6D object pose esti-mation. In this work, we introduce the first knowledge dis-tillation method driven by the 6D pose estimation task. To this end, we observe that most modern 6D pose estimation frameworks output local predictions, such as sparse 2D key-points or dense representations, and that the compact stu-dent network typically struggles to predict such local quan-tities precisely. Therefore, instead of imposing prediction-to-prediction supervision from the teacher to the student, we propose to distill the teacher’s distribution of local pre-dictions into the student network, facilitating its training. Our experiments on several benchmarks show that our dis-tillation method yields state-of-the-art results with different compact student models and for both keypoint-based and dense prediction-based architectures.
1. Introduction Estimating the 3D position and 3D orientation, a.k.a. 6D pose, of an object relative to the camera from a single 2D image has a longstanding history in computer vision, with many real-world applications, such as robotics, autonomous navigation, and virtual and augmented reality. Modern methods that tackle this task [7, 20, 21, 25, 28, 33, 40, 45, 47] all rely on deep neural networks. The vast majority of them draw their inspiration from the traditional approach, which consists of establishing correspondences between the object's 3D model and the input image and computing the 6D pose from these correspondences using a Perspective-n-Point (PnP) algorithm [2, 23, 27, 42] or a learnable PnP network. Their main differences then lie in the way they extract correspondences. While some methods predict the 2D image locations of sparse 3D object keypoints, such as the 8 3D bounding box corners [19–21] or points on the object surface [33], others produce dense representations, such as 3D locations [7, 45] or binary codes [40], from which the pose can be obtained.

Figure 1. Student vs. teacher keypoint predictions: (a) student, (b) our distilled student, (c) teacher, compared against the ground truth. The large backbone of the teacher allows it to produce accurate keypoints, indicated by tight clusters. By contrast, because of its more compact backbone, the student struggles to predict accurate keypoints when trained with keypoint-to-keypoint supervision. We therefore propose to align the student's and teacher's keypoint distributions.

In any event, these methods rely on large models, which, while achieving impressive accuracy, are impractical to deploy on embedded platforms and edge devices. As, to the best of our knowledge, no compact and efficient 6D pose estimation models have yet been proposed, a simple way to reduce the size of these networks consists of replacing their large backbones with much smaller ones. Unfortunately, this typically comes with a significant accuracy drop. In this paper, we address this by introducing a knowledge distillation strategy for 6D pose estimation networks. Knowledge distillation aims to transfer information from a deep teacher network to a compact student one. The research on this topic has tackled diverse tasks, such as image classification [17, 37, 48], object detection [10, 11, 49] and semantic segmentation [14, 30]. While some techniques, such as feature distillation [15, 37, 48, 49], can in principle generalize to other tasks, no prior work has studied knowledge distillation in the context of 6D pose estimation. In this paper, we introduce a knowledge distillation method for 6D pose estimation motivated by the following observations. In essence, whether outputting sparse 2D locations or dense representations, the methods discussed above all produce multiple local predictions. We then argue that the main difference between the local predictions made by a deep teacher network and a compact student one consists in the accuracy of these individual predictions. Figure 1 showcases this for sparse keypoint predictions, evidencing that predicting accurate keypoint locations with keypoint-to-keypoint supervision is much harder for the student than for the teacher.
We therefore argue that knowledge distillation for 6D pose estimation should be performed not by matching the individual local predictions of the student and teacher but instead by encouraging the student and teacher distributions of local predictions to become similar. This leaves more flexibility to the student and thus facilitates its training. To achieve this, we follow an Optimal Transport (OT) formalism [44], which lets us measure the distance between the two sets of local predictions. We express this as a loss function that can be minimized using a weight-based variant of Sinkhorn's algorithm [6], which further allows us to exploit predicted object segmentation scores in the distillation process. Our strategy is invariant to the order and the number of local predictions, making it applicable to unbalanced teacher and student predictions that are not in one-to-one correspondence. We validate the effectiveness of our approach by conducting extensive experiments on the popular LINEMOD [16], Occluded-LINEMOD [3] and YCB-V [47] datasets with the SOTA keypoint-based approach WDRNet+. Our prediction distribution alignment strategy consistently outperforms both a prediction-to-prediction distillation baseline and the state-of-the-art feature distillation method [49] using diverse lightweight backbones and architecture variations. Interestingly, our approach is orthogonal to feature distillation, and we show that combining it with the state-of-the-art approach of [49] further boosts the performance of the student network. To show the generality of our approach beyond keypoint prediction, we then apply it to the SOTA dense prediction-based method, ZebraPose [40], to align the distributions of dense binary code probabilities. Our experiments evidence that this outperforms training a compact ZebraPose in a standard prediction-to-prediction knowledge distillation fashion. Our main contributions can be summarized as follows. (i) We investigate for the first time knowledge distillation in the context of 6D pose estimation. (ii) We introduce an approach that aligns the teacher and student distributions of local predictions together with their predicted object segmentation scores. (iii) Our method generalizes to both sparse-keypoint and dense-prediction 6D pose estimation frameworks. (iv) Our approach can be used in conjunction with feature distillation to further boost the student's performance. Our code is available at https://github.com/GUOShuxuan/kd-6d-pose-adlp .
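To make the idea of aligning prediction distributions more concrete, here is an illustrative sketch of an entropic optimal transport loss between student and teacher keypoint sets, solved with a few Sinkhorn iterations and weighted by segmentation scores. This is a generic OT sketch under our own assumptions (squared-distance cost, entropic regularization strength, fixed iteration count), not the paper's exact weight-based Sinkhorn variant.

```python
import torch

def sinkhorn(cost, a, b, eps=0.05, n_iters=50):
    """Entropic-OT transport plan between two weighted point sets.

    cost: (N, M) pairwise cost matrix (e.g., squared distances between
          student and teacher keypoint predictions)
    a:    (N,) student marginal weights (e.g., normalized segmentation scores)
    b:    (M,) teacher marginal weights
    """
    K = torch.exp(-cost / eps)                   # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.t() @ u + 1e-9)
        u = a / (K @ v + 1e-9)
    return u[:, None] * K * v[None, :]           # transport plan P

def ot_distillation_loss(student_pts, teacher_pts, student_w, teacher_w):
    """OT distance between student and teacher 2D keypoint distributions."""
    cost = torch.cdist(student_pts, teacher_pts) ** 2       # (N, M)
    a = student_w / student_w.sum()
    b = teacher_w / teacher_w.sum()
    P = sinkhorn(cost, a, b)
    return (P * cost).sum()
```

Note that the two point sets may have different sizes N and M, which mirrors the order- and cardinality-invariance the paper argues for.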
Cao_Three_Guidelines_You_Should_Know_for_Universally_Slimmable_Self-Supervised_Learning_CVPR_2023
Abstract We propose universally slimmable self-supervised learn-ing (dubbed as US3L) to achieve better accuracy-efficiency trade-offs for deploying self-supervised models across dif-ferent devices. We observe that direct adaptation of self-supervised learning (SSL) to universally slimmable networks misbehaves as the training process frequently collapses. We then discover that temporal consistent guidance is the key to the success of SSL for universally slimmable networks, and we propose three guidelines for the loss design to ensure this temporal consistency from a unified gradient perspec-tive. Moreover, we propose dynamic sampling and group regularization strategies to simultaneously improve training efficiency and accuracy. Our US3L method has been empiri-cally validated on both convolutional neural networks and vision transformers. With only once training and one copy of weights, our method outperforms various state-of-the-art methods (individually trained or not) on benchmarks includ-ing recognition, object detection and instance segmentation.
1. Introduction Deep supervised learning has achieved great success in the last decade, but the drawback is that it relies heavily on a large set of annotated training data. Self-supervised learning (SSL) has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. Since the emergence of contrastive learning [7], SSL has clearly gained momentum and several recent works [8, 14] have achieved comparable or even better performance than supervised pretraining when transferring to downstream tasks. However, it remains challenging to deploy trained models for edge computing purposes, due to the limited memory, computation and storage capabilities of such devices.

Table 1. Comparisons between supervised classification and SimSiam under S-Net on CIFAR-100. The accuracy for SimSiam is under linear evaluation. '-' denotes that the model collapses.

Type          Method               Accuracy (%)
                                   1.0x   0.75x   0.5x   0.25x
Supervised    Individual           73.8   72.8    71.4   67.3
              S-Net [32]           71.9   71.7    70.8   66.2
              S-Net+Distill [31]   73.1   71.9    70.5   67.2
SimSiam [9]   Individual           65.2   64.0    60.6   51.2
              S-Net [32]           -      -       -      -
              S-Net+Distill [31]   46.9   46.9    46.7   45.3
              Ours                 65.5   65.3    63.2   59.7

To facilitate deployment, several model compression techniques have been proposed, including lightweight architecture design [29], knowledge distillation [20], network pruning [15], and quantization [33]. Among them, structured network pruning [25] is directly supported and accelerated by most current hardware and is therefore the most studied. However, most structured pruning methods require fine-tuning to obtain a sub-network with a specific sparsity, and a single trained model cannot achieve instant and adaptive accuracy-efficiency trade-offs across different devices. To address this problem in the context of supervised learning, the family of slimmable networks (S-Net) and universally slimmable networks (US-Net) [2, 22, 31, 32] were proposed, which can switch freely among different widths by training only once. Driven by the success of slimmable networks, a question arises: Can we train a self-supervised model that can run at arbitrary width? A naïve solution is to replace the supervised loss with a self-supervised loss based on the US-Net framework. However, we find that this solution does not work directly after empirical studies. Table 1 shows that the phenomenon in self-supervised scenarios is very different. The model directly collapses after applying the popular SSL method SimSiam [9] to slimmable networks [32]. Although using inplace distillation [31] for sub-networks prevents the model from collapsing, there is still a big gap between the results of S-Net+Distill and training each model individually for SimSiam. So why is the situation so different in SSL, and how can we further improve the performance (i.e., close the gap)? In this paper, we present a unified perspective to explain the differences and propose corresponding measures to bridge the gap. From a unified gradient perspective, we find that the key is that the guidance to sub-networks should be consistent between iterations, and we analyze which components of SSL incur the temporal inconsistency problem and why US-Net works in supervised learning.
Based on these theoretical analyses, we propose three guidelines for the loss design of US-Net training to ensure temporal consistency. As long as one of them is satisfied, US-Net can work well in both supervised and self-supervised scenarios. Moreover, considering the characteristics of SSL and the deficiencies of US-Net, we propose dynamic sampling and group regularization to reduce the training overhead while improving accuracy. Our main contributions are:
• We discover significant differences between supervised and self-supervised learning when training US-Net. Based on these observations, we analyze and summarize three guidelines for the loss design of US-Net to ensure temporal consistency from a unified gradient perspective.
• We propose a dynamic sampling strategy to reduce the training cost without sacrificing accuracy, which eases coping with the large data volumes in SSL.
• We analyze how the training scheme of US-Net limits the model capacity and propose group regularization as a solution by giving different freedoms to different channels.
• We validate the effectiveness of our method on both CNNs and Vision Transformers (ViTs). Our method requires training only once and keeps a single model, yet it can exceed the results of training each model individually and is comparable to knowledge distillation from pretrained teachers.
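For readers unfamiliar with slimmable networks, the toy sketch below shows the core mechanism they rely on: a layer whose active width is selected at run time by slicing its weights, so one set of parameters serves every width. It is a simplified illustration under our own assumptions (a linear layer and the four widths from Table 1); it is not the US3L training recipe itself.

```python
import torch
import torch.nn as nn

class SlimmableLinear(nn.Linear):
    """Linear layer whose active width is switched at run time by slicing
    the weight matrix; a toy stand-in for slimmable conv layers."""

    def forward(self, x, width_mult=1.0):
        out_features = max(1, int(self.out_features * width_mult))
        in_features = x.shape[-1]
        weight = self.weight[:out_features, :in_features]
        bias = self.bias[:out_features] if self.bias is not None else None
        return nn.functional.linear(x, weight, bias)

# One training step touches several widths with the same parameters.
layer = SlimmableLinear(128, 64)
x = torch.randn(8, 128)
for w in (1.0, 0.75, 0.5, 0.25):
    y = layer(x[:, : int(128 * w)], width_mult=w)  # narrower input and output
    # ... compute the (self-)supervised loss on y for this width and accumulate gradients
```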
Fan_PointListNet_Deep_Learning_on_3D_Point_Lists_CVPR_2023
Abstract Deep neural networks on regular 1D lists ( e.g., natural languages) and irregular 3D sets ( e.g., point clouds) have made tremendous achievements. The key to natural lan-guage processing is to model words and their regular or-der dependency in texts. For point cloud understanding, the challenge is to understand the geometry via irregular point coordinates, in which point-feeding orders do not matter. However, there are a few kinds of data that exhibit both reg-ular 1D list and irregular 3D set structures, such as proteins and non-coding RNAs. In this paper, we refer to them as 3D point lists and propose a Transformer-style PointListNet to model them. First, PointListNet employs non-parametric distance-based attention because we find sometimes it is the distance, instead of the feature or type, that mainly deter-mines how much two points, e.g., amino acids, are corre-lated in the micro world. Second, different from the vanilla Transformer that directly performs a simple linear transfor-mation on inputs to generate values and does not explicitly model relative relations, our PointListNet integrates the 1D order and 3D Euclidean displacements into values. We con-duct experiments on protein fold classification and enzyme reaction classification. Experimental results show the effec-tiveness of the proposed PointListNet.
1. Introduction The essence of deep learning is to capture the structure of a certain kind of data via artificial neural networks. Usually, an element of data includes a position part and a feature part. According to the type of element position, data exhibit different structures. Various deep neural networks have been proposed to model those structures and have made tremendous achievements. For example, texts are 1D lists of words, as shown in Fig. 1(a). The position of a word is its order in the text and the feature is the word itself. To capture the structure of texts or the dependency of words, 1D convolutional neural networks (CNNs) [3, 30, 58], recurrent neural networks (RNNs) [9, 26, 39] and Transformers [13, 49] are widely used. A digital image can be seen as a 2D rectangular grid or matrix of pixels, as shown in Fig. 1(b). Each pixel has a 2D position and is associated with a feature of color or other attributes. In this case, 2D CNNs are usually used to model image structure [23, 33, 46]. Recently, Transformers have also been employed for image understanding [15]. Recently, 3D point cloud/set processing has been attracting more and more attention from the deep learning community. Different from texts or images, in which the orders of words or the positions of pixels are regular (words or pixels are distributed uniformly in texts or images), the 3D coordinates of points are irregular (points are distributed unevenly in 3D Euclidean space), as shown in Fig. 1(c). To capture the irregular structure of point clouds, deep neural networks, such as multilayer perceptrons (MLPs) [42, 43, 45], convolutions [48, 56] and Transformers [22, 62], need to not only effectively exploit 3D coordinates for geometry understanding but also be invariant to permutations of the input set in point-feeding order. Besides regular 1D lists of words, 2D grids of pixels and irregular 3D point sets, data may exhibit hybrid structures. For example, proteins are made up of amino acids. As shown in Fig. 1(d), those amino acids are linked by peptide bonds and form a chain. Therefore, proteins include a 1D list data structure. Because amino acids are arranged uniformly in the chains, the list structure is regular. In addition to the 1D sequential order in the peptide chain, each amino acid has a 3D coordinate, which specifies its spatial position in the protein. Those 3D coordinates describe a geometry structure. Similar to point clouds, the geometry structure of proteins exhibits irregularity. Therefore, the data structure of proteins involves a regular 1D list and an irregular 3D set. In this paper, we refer to this data structure as a 3D point list. Point lists also exist in other polymers, such as non-coding RNAs. Because the function of proteins and non-coding RNAs is based on their structures, modeling 3D point lists can facilitate a mechanistic understanding of their function in life. In this paper, we propose a Transformer-style network, named PointListNet, to capture the structure of 3D point lists.
First, different from the vanilla Transformer [15, 49], which calculates self-attention by performing computationally expensive matrix multiplication on inputs, our PointListNet employs a simple non-parametric distance-based attention mechanism, because we find that sometimes it is mainly the distance, instead of the feature or type, that determines how much two elements, e.g., amino acids, are correlated in the micro world. Second, because structure is relative, i.e., independent of the absolute sequential order or the absolute Euclidean coordinates, our PointListNet integrates the 1D order and 3D Euclidean displacements into the values. This is substantially different from the vanilla Transformer, which directly performs a simple linear transformation on absolute positional embeddings and input features to generate values and does not explicitly model relative distance or direction. To evaluate PointListNet, we conduct experiments on protein fold classification and enzyme reaction classification and achieve new state-of-the-art accuracy. The contributions of this paper are fivefold:
• Among the early efforts, we investigate a range of point cloud methods for protein modeling.
• We propose a Transformer-style network, i.e., PointListNet, for 3D point list modeling.
• We replace self-attention with non-parametric distance-based attention, which is more efficient and, in some cases, more effective at capturing the correlation among microparticles.
• We integrate relative structure modeling into the Transformer and employ regular and irregular methods to capture the sequence and geometry structures, respectively.
• We conduct extensive experiments on two protein tasks, and the proposed method significantly outperforms existing methods.
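The sketch below illustrates the flavor of the two design choices described above: attention weights computed purely from pairwise 3D distances, and values that carry the relative 1D order and 3D displacement alongside the features. The Gaussian kernel, the normalization, and the way the relative terms are concatenated into the values are our assumptions for illustration, not the paper's specification.

```python
import torch

def distance_based_attention(coords, orders, feats, sigma=1.0):
    """Non-parametric distance-based attention over a 3D point list.

    coords: (N, 3) 3D coordinates (e.g., amino-acid positions)
    orders: (N,)   1D sequential positions along the chain
    feats:  (N, C) per-element features
    Attention weights depend only on pairwise Euclidean distance; the values
    carry the features together with relative order and 3D displacement.
    """
    dists = torch.cdist(coords, coords)                        # (N, N)
    attn = torch.softmax(-dists**2 / (2 * sigma**2), dim=-1)   # closer => larger weight

    rel_order = (orders[None, :] - orders[:, None]).float().unsqueeze(-1)  # (N, N, 1)
    rel_disp = coords[None, :, :] - coords[:, None, :]                     # (N, N, 3)
    values = torch.cat(
        [feats[None, :, :].expand(len(feats), -1, -1), rel_order, rel_disp], dim=-1
    )                                                           # (N, N, C + 4)
    return torch.einsum("ij,ijc->ic", attn, values)             # (N, C + 4)
```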
Choi_Balanced_Energy_Regularization_Loss_for_Out-of-Distribution_Detection_CVPR_2023
Abstract In the field of out-of-distribution (OOD) detection, a pre-vious method that use auxiliary data as OOD data has shown promising performance. However, the method pro-vides an equal loss to all auxiliary data to differentiate them from inliers. However, based on our observation, in various tasks, there is a general imbalance in the distribution of the auxiliary OOD data across classes. We propose a balanced energy regularization loss that is simple but generally ef-fective for a variety of tasks. Our balanced energy regular-ization loss utilizes class-wise different prior probabilities for auxiliary data to address the class imbalance in OOD data. The main concept is to regularize auxiliary samples from majority classes, more heavily than those from minor-ity classes. Our approach performs better for OOD detec-tion in semantic segmentation, long-tailed image classifica-tion, and image classification than the prior energy regular-ization loss. Furthermore, our approach achieves state-of-the-art performance in two tasks: OOD detection in seman-tic segmentation and long-tailed image classification.
1. Introduction Deep neural networks are used in a variety of fields such as image classification [22] and semantic segmentation [11]. However, there is a challenge in the practical use of deep neural networks in areas where safety is crucial, such as autonomous driving and medical diagnosis [20, 25]. In particular, deep neural networks have the issue of assigning high confidence to out-of-distribution (OOD) samples that are not used for training [15]. As a result, the maximum softmax probability (MSP) score has been proposed to identify these OOD samples [17]. Based on the score, OOD detection performance is evaluated by metrics (e.g., AUROC, FPR). Both in image classification [18, 24, 26, 29, 30, 38, 40, 43, 46] (including long-tailed image classification [43]) and semantic segmentation [1–3, 5, 10, 12, 16, 19, 28, 33, 36, 41], different approaches have been suggested to enhance OOD detection performance. Among them, we concentrate on the methods that use auxiliary data as OOD data, which show superior OOD detection performance compared to previous methods that only use in-distribution samples. Outlier Exposure (OE) utilizes an auxiliary dataset of outliers to improve OOD detection performance [18]. The auxiliary data consists of classes that do not overlap with the in-distribution data and the test OOD data. OE leverages the cross-entropy loss for the existing training data and a regularization loss for the auxiliary data; the regularization loss of OE is the cross-entropy loss that results from giving the auxiliary data a uniform label. Meanwhile, a new energy score has been introduced in Energy-based OOD detection (EnergyOE), which replaces the MSP score [29]. Furthermore, EnergyOE suggests an energy regularization loss that differs from that of OE to enhance performance. The energy regularization loss is formed by adding a squared hinge loss on the energy of every existing (in-distribution) sample and every auxiliary (OOD) sample. Similarly, in semantic segmentation, OOD detection performance is enhanced by using an auxiliary dataset of outliers. Meta-OOD [5] organizes the auxiliary outlier dataset from scenes of the COCO dataset [27]. Although the process of creating the auxiliary data differs from image classification, the training loss is comparable. Meta-OOD adopts the regularization loss proposed by OE. Recently, PEBAL [41] also adopts the energy regularization loss proposed by EnergyOE. However, when regularizing auxiliary data, the existing methods for OOD detection do not take into account variations between auxiliary data samples. These variations are especially severe on real data such as semantic segmentation for autonomous driving. As seen in Figure 1a, for the pretrained model, the class distribution of the auxiliary OOD data is not uniform across classes, i.e., imbalanced. To address this imbalance problem, we regularize the auxiliary data differently for each sample. To achieve this, we propose a balanced energy regularization loss that applies higher regularization to majority classes than to minority classes in the auxiliary data.
Figure 1. Overview of our approach in the semantic segmentation task. (a): Class distribution of cut-pasted OOD pixels collected from 10,000 synthesized scene images; (b): OOD detection results (input image, initial prediction, OOD detection with PEBAL, OOD detection with ours, and final prediction with ours) on the Fishyscapes validation sets. Balanced energy PEBAL (Ours) is the method that substitutes the energy regularization loss in PEBAL [41] with our balanced energy regularization loss.

In other words, auxiliary samples of majority classes receive a larger energy constraint than samples of minority classes. We introduce the term Z, which indicates whether a sample belongs to a majority or minority class. Z is the weighted sum of the softmax outputs of the classification model for a sample (i.e., the posterior probabilities of the classes for a given sample), where the weights are the prior probabilities of the classes. Unlike the existing energy regularization loss, our balanced energy regularization loss adapts to the value of Z for each auxiliary data sample. Two adaptive components make up our loss: a loss margin and a loss weight. The adaptive loss margin provides an additional Z-proportional margin in the squared hinge loss for auxiliary data. The adaptive loss weight gives a weight proportional to Z to the squared hinge loss. We validate our novel loss on three tasks: semantic segmentation, long-tailed image classification, and image classification. The proposed loss is simple but generally effective for various tasks. Figure 1b illustrates how our method outperforms the previous state-of-the-art (SOTA) algorithm PEBAL in the semantic segmentation task by replacing the energy regularization loss with our loss. OOD detection performance is also enhanced when using our loss compared to the baseline (EnergyOE), which uses only the energy regularization loss. In all image classification tasks, we evaluate our method on the semantically coherent OOD detection (SC-OOD) benchmark [46]. In the long-tailed image classification task, our approach shows superior OOD performance compared to both the OE and EnergyOE methods, which use auxiliary data; in addition, our approach outperforms the previous SOTA method PASCL [43]. Similarly, in the image classification task, we demonstrate the superiority of our loss by outperforming both OE and EnergyOE, which make use of auxiliary data. The contributions are summarized as:
• By making inferences based on previously trained models, we explain the imbalanced distribution of auxiliary OOD data.
• We suggest a novel balanced energy regularization loss to address the class imbalance in auxiliary OOD data.
• The proposed balanced loss performs better for OOD detection than the previous energy regularization loss.
• The SOTA performance for OOD detection in two tasks is achieved by our OOD detection method.
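To make the description of Z and the two adaptive components more concrete, here is a minimal sketch of a balanced energy regularization term on auxiliary OOD samples. The energy definition and the squared hinge follow the standard EnergyOE-style formulation; the specific way the Z-proportional margin and weight enter the loss (the alpha and gamma factors and the base margin m_out) are our own assumptions, not the paper's exact loss.

```python
import torch

def balanced_energy_reg_loss(logits_ood, class_priors, m_out=-5.0,
                             alpha=1.0, gamma=1.0, T=1.0):
    """Sketch of a balanced energy regularization on auxiliary OOD samples.

    logits_ood:   (N, C) classifier logits for auxiliary (OOD) samples
    class_priors: (C,)   class-prior probabilities estimated for the OOD data
    m_out:        base energy margin for OOD samples (assumed value)
    alpha, gamma: assumed scales of the Z-proportional margin and weight
    """
    energy = -T * torch.logsumexp(logits_ood / T, dim=1)   # (N,) free energy
    posterior = torch.softmax(logits_ood, dim=1)           # (N, C)
    Z = posterior @ class_priors                            # (N,) "majority-ness" score

    margin = m_out + alpha * Z        # majority-class samples face a stricter margin
    weight = 1.0 + gamma * Z          # ... and a larger loss weight
    hinge = torch.relu(margin - energy) ** 2   # push OOD energy above the margin
    return (weight * hinge).mean()
```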
Deitke_Phone2Proc_Bringing_Robust_Robots_Into_Our_Chaotic_World_CVPR_2023
Abstract Training embodied agents in simulation has become mainstream for the embodied AI community. However, these agents often struggle when deployed in the physical world due to their inability to generalize to real-world environments. In this paper, we present Phone2Proc, a method that uses a 10-minute phone scan and conditional procedural generation to create a distribution of training scenes that are semantically similar to the target environment. The generated scenes are conditioned on the wall layout and arrangement of large objects from the scan, while also sampling lighting, clutter, surface textures, and instances of smaller objects with randomized placement and materials. Leveraging just a simple RGB camera, training with Phone2Proc shows massive improvements from 34.7% to 70.7% success rate in sim-to-real ObjectNav performance across a test suite of over 200 trials in diverse real-world environments, including homes, offices, and RoboTHOR. Furthermore, Phone2Proc's diverse distribution of generated scenes makes agents remarkably robust to changes in the real world, such as human movement, object rearrangement, lighting changes, or clutter.
1. Introduction The embodied AI research community has increasingly relied on visual simulators [30, 49, 61] to train embodied agents, with the expectation that the resulting policies can be transferred onto robots in the physical world. While agents trained within simulated environments have shown increased capabilities, progress in successfully deploying these policies onto physical robots has been limited. Robots trained in simulation must overcome daunting challenges if they are to work effectively in a real space such as our home. First, they must overcome the generalization gap between the limited set of simulated environments they are trained on and the test scene of interest. In practice, policies trained to perform complex visual tasks with reinforcement learning struggle to perform well in novel scenes with novel layouts and object instances. Second, they must work in realistic environments where we live and work, which are often full of clutter, with objects that keep being moved around, with people in and out of the scene, and with lighting changes. In short, we expect our agents to learn from a small set of training data points and generalize not just to a single test data point, but to a distribution of test data that is often semantically distant from the training data. Today's methods are still far from delivering such performant, robust, and resilient robots [9, 12]. In this work, we present Phone2Proc, which represents a significant advancement towards the goal of creating performant, robust, and resilient robots. Instead of training policies in simulated environments that may be semantically distant from the target physical scene, Phone2Proc efficiently generates a distribution of training environments that are semantically similar to the target environment. This significantly reduces the generalization gap between the training and target distributions, resulting in more capable robots. Phone2Proc utilizes a freely available mobile application to quickly scan a target environment and create a template of the surroundings, including the scene layout and 3D placements of large furniture. This template is then used to conditionally generate a fully interactive simulated world using ProcTHOR [13], closely mirroring the real-world space. Importantly, this single simulated environment is then transformed into a distribution of simulated worlds by randomizing objects, their placements, materials, textures, scene lighting, and clutter. This allows for the creation of arbitrarily large training datasets that are semantically similar to the desired real-world scene. We produce policies for object goal navigation using Phone2Proc and deploy them onto a LoCoBot robot in the physical world. We conduct extensive evaluations with 234 episodes in five diverse physical environments: a 3-room and a 6-room apartment, a test scene from RoboTHOR-real, a conference room, and a cafeteria. This represents one of the largest and most diverse studies of sim-to-real indoor navigation agents to date. Across all environments, Phone2Proc significantly outperforms the state-of-the-art embodied AI model built with ProcTHOR, with an average improvement in success rate from 34.7% to 70.7%.
Our robot is able to explore the scene efficiently and effectively navigate to objects of interest, even in the presence of clutter, lighting changes, shifts in furniture, and human movement. These strong navigation results are achieved using an RGB-only camera, with no depth sensors, no localization sensors, and no explicit mapping components. In summary, we present: (1) Phone2Proc, a simple and highly effective method for reducing the generalization gap between datasets of simulated environments and a target environment in the real world, (2) large-scale real-world robotics experiments with 234 trials showing significant improvements for Phone2Proc compared to state-of-the-art models, and (3) experiments demonstrating the robustness of Phone2Proc in the face of variations such as changes in lighting, clutter, and human presence.
Girase_Latency_Matters_Real-Time_Action_Forecasting_Transformer_CVPR_2023
Abstract We present RAFTformer, a real-time action forecasting transformer for latency-aware real-world action forecasting. RAFTformer is a two-stage, fully transformer-based architecture comprising a video transformer backbone that operates on high-resolution, short-range clips, and a head transformer encoder that temporally aggregates information from multiple short-range clips to span a long-term horizon. Additionally, we propose a novel self-supervised shuffled causal masking scheme as a model-level augmentation to improve forecasting fidelity. Finally, we also propose a novel real-time evaluation setting for action forecasting that directly couples model inference latency to overall forecasting performance and brings forth a hitherto overlooked trade-off between latency and action forecasting performance. Our parsimonious network design allows RAFTformer's inference latency to be 9x smaller than that of prior works at the same forecasting accuracy. Owing to its two-staged design, RAFTformer uses 94% less training compute and 90% fewer training parameters to outperform prior state-of-the-art baselines by 4.9 points on EGTEA Gaze+ and by 1.4 points on the EPIC-Kitchens-100 validation set, as measured by Top-5 recall (T5R) in the offline setting. In the real-time setting, RAFTformer outperforms prior works by an even greater margin of up to 4.4 T5R points on the EPIC-Kitchens-100 dataset. Project Webpage: https://karttikeya.github.io/publication/RAFTformer/
1. Introduction Latency matters. It is a crucial system design consideration for countless applications that operate in real time, from hardware design [65], network engineering [63], and satellite communications [30] to capital trading [32], human vision [59] and COVID transmission patterns [54]. However, it has not been a center-stage design consideration in modern computer vision systems of the past decade [11, 45]. Modern vision system design has largely focused on the correctness of systems rather than the latency of the predictions.

Figure 1. Action forecasting is the task of predicting actions that will happen after a pre-determined time span, say tf seconds, into the future. Prior works consider an offline evaluation setting that ignores the model inference latency. We propose a latency-aware real-time evaluation setting where the model is required to finish forecasting tf seconds before the target time. We present RAFTformer, a fast action anticipation transformer that outperforms prior works in both the offline and real-time settings while forecasting actions in real time (>=25 FPS).

While vision-based forecasting systems are often meant for embodied real-time deployment on autonomous agents like self-driving cars and robots, they are evaluated in an offline setting where inference latency is neglected (Figure 1). Interestingly, recent neural network architectures have adopted FLOPs as a proxy for latency as a second axis for model design. While that is a sufficient fidelity metric for offline, after-the-fact applications like automatic content recognition, latency often comes second to correctness even for real-time systems such as forecasting models. Forecasting empowers reactive planning [17]. An autonomous system present in rich human environments inevitably needs to understand the human actions around it for smooth task planning and execution. Autonomous agent planning critically depends on anticipating the future of the scene in various forms, such as trajectory prediction [22, 23, 57, 58], action forecasting [19, 25, 80] or future scene segmentation [8], and anticipating the future is an activity humans subconsciously perform for day-to-day tasks [60]. Yet, as noted above, such forecasting systems are still evaluated in an offline setting that neglects inference latency (Figure 1). In this work, we propose a real-time evaluation setting (Figure 1) that closely mimics the real-world deployment of a forecasting system. Suppose that, in a real-time system, the design specifications require the forecasting system to produce its outputs tf seconds in advance of the event so that the forecasts can be used for planning effectively.
In current offline settings, the forecasting system begins inference tf seconds in advance of the event ('Present' in Figure 1) and the model latency is ignored (or assumed to be 0), such that the predictions are available instantly. However, in our proposed real-time setting, the model is required to start inference in advance of 'Present' so that the outputs are available with a horizon of tf seconds, meeting the design specification. We observe that, in the real-time setting, prior works fare quite poorly because of their slow model inference latency (Table 3). A large latency implies that the model has to start inference further in the past and has to rely on older video data to make forecasts, albeit with the benefit of a more expressive model (Figure 2). A smaller latency means the model can use more recent video data but has limited capacity. Simply said, models that are only evaluated in the offline setting may fare poorly in the real-time deployment setting due to their latency-agnostic design (Figure 2). We present RAFTformer, a real-time action forecasting transformer that uses a two-stage transformer encoder-based network for lightning-fast forecasts at inference. RAFTformer uses a shuffled-causal-masking-based feature prediction loss for learning strong temporal cues that transfer to feature prediction. Further, RAFTformer uses specialized anticipation tokens for learning to predict actions at multiple temporal horizons, which improves the model's reasoning capability for short-term action forecasting as well. Finally, the model is explicitly designed for real-time embodied deployment, allowing inference up to an order of magnitude faster than prior state-of-the-art methods. In summary, our contributions are three-fold. First, we propose the Real-time Action Forecasting Transformer (RAFTformer), a real-time action forecasting transformer with latency at least 9x smaller than prior state-of-the-art action forecasting methods. RAFTformer uses specialized anticipation tokens and a novel shuffled-causal-masking-based self-supervision loss that allow it to outperform prior work while maintaining low latency, with a reduction of 94% in GPU training time and 90% in the number of trainable parameters compared to prior works. To the best of our knowledge, our work is the first to achieve action anticipation in real time (i.e., 25 fps).

Figure 2. Evaluation performance vs. latency. Bigger models perform better in the latency-agnostic offline setting. In the real-time evaluation setting, we observe that, beyond a limit, bigger models with higher latency cause a drop in forecasting performance. In practical deployment, there exists a trade-off between latency and high-fidelity forecasts. See §4.3.1 for details.

Second, we propose a latency-aware real-time evaluation setting (Figure 1) that better mimics practical deployment settings for embodied forecasting systems. Real-time evaluation demonstrates a clear trade-off between inference latency and model forecasting fidelity, paving the path for the development of latency-aware forecasting models in the future (also see [20]).
Third, through extensive experiments, we show that RAFTformer outperforms prior state-of-the-art methods by 4.9 points on the EGTEA Gaze+ dataset and by 1.4 points on the EPIC-Kitchens-100 dataset according to the Top-5 Recall metric, and by a relative margin of 5.3% on the top-1 accuracy metric on the EPIC-Kitchens-55 dataset.
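To make the real-time constraint described above concrete, the toy calculation below shows how inference latency shrinks the video a model is allowed to observe. The function name and the 194 ms example latency are ours, chosen only for illustration.

```python
def observation_cutoff(target_time, horizon, latency):
    """Latest video timestamp a model may observe.

    In the offline setting latency is ignored (latency = 0), so the model
    observes frames up to target_time - horizon. In the real-time setting the
    model must also finish inference before that point, so its observation
    window ends `latency` seconds earlier.
    """
    return target_time - horizon - latency

# Example: a 1.0 s forecasting horizon before an action at t = 30.0 s
print(observation_cutoff(30.0, 1.0, 0.0))    # offline:  can observe up to 29.0 s
print(observation_cutoff(30.0, 1.0, 0.194))  # 194 ms model: only up to 28.806 s
```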
Ashutosh_HierVL_Learning_Hierarchical_Video-Language_Embeddings_CVPR_2023
Abstract Video-language embeddings are a promising avenue for injecting semantics into visual representations, but exist-ing methods capture only short-term associations between seconds-long video clips and their accompanying text. We propose HierVL, a novel hierarchical video-language em-bedding that simultaneously accounts for both long-term and short-term associations. As training data, we take videos accompanied by timestamped text descriptions of human actions, together with a high-level text summary of the activity throughout the long video (as are available in Ego4D). We introduce a hierarchical contrastive train-ing objective that encourages text-visual alignment at both the clip level and video level. While the clip-level con-straints use the step-by-step descriptions to capture what is happening in that instant, the video-level constraints use the summary text to capture why it is happening, i.e., the broader context for the activity and the intent of the ac-tor. Our hierarchical scheme yields a clip representation that outperforms its single-level counterpart as well as a long-term video representation that achieves SotA results on tasks requiring long-term video modeling. HierVL success-fully transfers to multiple challenging downstream tasks (in EPIC-KITCHENS-100, Charades-Ego, HowTo100M) in both zero-shot and fine-tuned settings.
1. Introduction Understanding human activity in video is a fundamental vision problem with abundant applications in augmented reality, robotics, and information retrieval. The field has made exciting advances, from new models for recognition [24, 53, 86] and self-supervised representations [55, 58, 61, 90] to major datasets [16, 34, 63, 74, 106]. Nonetheless, activity understanding in video lags noticeably behind object understanding in images, where today's AI models compete well with people. One key reason for this discrepancy is the fact that whereas objects present themselves directly in the pixels—no subtext required—activity naturally has broad temporal context rooted in the human actor's (latent) intentions. Not only does an activity stretch across video frames, but also its interpretation relies on the larger context of what the person is trying to accomplish. Thus, there is a natural hierarchy of information in video, starting with the short-term "what the person is literally doing right now" (e.g., reaching for the stove) and going all the way to the long-term "what the person aims to do" (e.g., cook dinner).

Website: https://vision.cs.utexas.edu/projects/hiervl/

Figure 1. Conventional video-language embeddings are trained to match short-term clips with their corresponding descriptions, e.g., open tap (in orange boxes), thus capturing what is happening. Our hierarchical video-language embedding (in dotted blue box) learns both short-term and long-term visual-text relations, thereby capturing why it is happening (e.g., making salad dressing). Long-term intent is conveyed by textual summaries (blue) that give an abstractive summary of the whole video and complement the more literal step-by-step narrations (green).

As a step towards capturing this hierarchy, we explore video-language representation learning. Video often has accompanying timestamped text, whether from spoken narrations in a how-to video [63, 75, 106], closed caption text and scripts [9, 76], or deliberate text annotations [16, 34, 91]. Existing video-language models learn a correspondence between the two modalities by matching short video segments with their text counterpart, typically with a learned embedding [3, 55, 61, 90] that produces a language-enriched video clip encoder. However, this standard approach risks capturing only the short-term actions. Granular comments such as "now I pour milk in the pan" or "he picked up a water hose" fail to capture the overall goal of the activity, like making a coffee or cleaning a car. As a result, at inference time their encodings for unseen videos can be myopic and miss sequential dependencies between observed events. To tackle this problem, we introduce HierVL: a novel hierarchical video-language model that captures both short-term actions and long-term intents in video. Unlike standard video-language embeddings, our method aims to simultaneously capture the immediate observed actions as well as their contribution to the longer-term goal.
To that end, given training video accompanied by timestamped clip-level text descriptions as well as global (video-level) text summaries, HierVL learns a video-text embedding for hierarchical temporal understanding using two layers of contrastive learning. The top (parent) layer encourages the aggregated video clips to be close to the overarching textual summary (e.g., he makes spaghetti dinner), while the bottom (child) layer trains individual clips to be similar to their respective descriptions (e.g., he turns on the cooker). See Fig. 1. To our knowledge, ours is the first work to create a hierarchical video-language embedding. Our idea to blend abstract textual summaries with literal text descriptions is new. Furthermore, our model design addresses constituent technical challenges—namely, we circumvent the typical expense of long-term feature learning [4, 43, 86] by using aggregation of short-term features, and we show how to jointly train with two levels of annotation in a way that staves off catastrophic forgetting of either layer. This hierarchical training yields not only global video-level representations that capture long-term information (e.g., intent and temporal dependencies), but also clip-level video features that are more expressive than those traditionally learned via single-level schemes. This happens by means of our parent-child learning framework, which requires the aggregation of clip features within a video to match the long-term context captured by the summary. We demonstrate our model by training with the narrations and summaries in the 3,670-hour egocentric video dataset Ego4D [13, 34]. We show that HierVL outperforms strong baselines and state-of-the-art methods for multiple video benchmarks, successfully transferring its pretrained representation for inference on Charades-Ego [74], EPIC-KITCHENS [16], and HowTo100M [63] (note that we do not need any text or summary annotations for these downstream datasets and tasks). We evaluate our representations on both hierarchy levels. In particular, at the time of submission, HierVL achieves state-of-the-art performance on Ego4D Long Term Anticipation (LTA), Charades-Ego Action Recognition, EPIC-KITCHENS-100 Multi-Instance Retrieval (zero-shot and fine-tuned settings), and HowTo100M Long Video Classification.
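A minimal sketch of what such a two-level contrastive objective can look like is given below: a standard symmetric InfoNCE applied once between clips and their narrations (child level) and once between an aggregated video embedding and its summary (parent level). The mean-pooling aggregation, the loss weighting, and all function names are our illustrative assumptions rather than HierVL's actual architecture.

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of matched embeddings."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(len(a), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def hierarchical_loss(clip_emb, narration_emb, summary_emb, parent_weight=1.0):
    """Two-level objective: clip<->narration (child) and video<->summary (parent).

    clip_emb:      (B, K, D) K clip embeddings per video
    narration_emb: (B, K, D) matching clip-level text embeddings
    summary_emb:   (B, D)    video-level summary text embeddings
    """
    B, K, D = clip_emb.shape
    child = info_nce(clip_emb.reshape(B * K, D), narration_emb.reshape(B * K, D))
    video_emb = clip_emb.mean(dim=1)          # simple aggregation of short-term features
    parent = info_nce(video_emb, summary_emb)
    return child + parent_weight * parent
```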
Gou_Rethinking_Image_Super_Resolution_From_Long-Tailed_Distribution_Learning_Perspective_CVPR_2023
Abstract Existing studies have empirically observed that the reso-lution of the low-frequency region is easier to enhance than that of the high-frequency one. Although plentiful works have been devoted to alleviating this problem, little under-standing is given to explain it. In this paper, we try to give a feasible answer from a machine learning perspective, i.e., the twin fitting problem caused by the long-tailed pixel dis-tribution in natural images. With this explanation, we refor-mulate image super resolution (SR) as a long-tailed distri-bution learning problem and solve it by bridging the gaps of the problem between in low-and high-level vision tasks. As a result, we design a long-tailed distribution learning so-lution, that rebalances the gradients from the pixels in the low-and high-frequency region, by introducing a static and a learnable structure prior. The learned SR model achieves better balance on the fitting of the low-and high-frequency region so that the overall performance is improved. In the experiments, we evaluate the solution on four CNN-and one Transformer-based SR models w.r.t. six datasets and three tasks, and experimental results demonstrate its superiority.
1. Introduction Image super resolution aims to restore a high-resolution (HR) image from a low-resolution (LR) one, which is an important technique in image processing [13, 26, 27, 52] and computer vision [7, 14, 18, 45, 51]. In the past decades, plentiful SR methods have been proposed [19, 53] and applied to a wide range of real-world applications [21, 47, 49, 54]. Among existing studies, the learning-based methods that learn a mapping between the LR and HR image spaces have achieved state-of-the-art performance [17, 39, 43, 58, 59]. Nonetheless, these studies have empirically observed that the high-frequency regions are harder to super-resolve than the low-frequency ones in natural images.

Figure 1. The long-tailed pixel distribution in a natural image. Given an HR image I^HR, we take its x4 LR version I^LR as a showcase, and utilize Bicubic Interpolation (BI) and MSRResNet [25] (MSRRN) to super-resolve it, obtaining I^SR_BI and I^SR_MSRRN, respectively. The top row shows the absolute difference (AD) in the luminance channel (roughly 20%, 17%, and 4% of pixels have an AD above 0.1 in the respective panels), and the bottom row shows the pixel count at different AD intervals. From the top row, one could observe that i) both BI and MSRRN achieve better results in the low-frequency regions than in the high-frequency ones; ii) MSRRN performs significantly better than BI in the high-frequency regions while only slightly better in the low-frequency ones. From the bottom row, one could see that iii) the pixel distribution w.r.t. the low- and high-frequency regions is long-tailed, i.e., the number of pixels in the low-frequency regions is far larger than that in the high-frequency ones. Clearly, such an imbalanced pixel distribution necessarily results in the twin fitting problem, i.e., overfitting the majority pixels in the low-frequency region while underfitting the minority pixels in the high-frequency one.

To alleviate that, various SR methods have been proposed following two paradigms, i.e., developing generalized models with larger capacities [31, 36] or specific models with high-frequency enhancements [37, 48]. The former obtains better results in both the high- and low-frequency regions by constantly enlarging the capacities, while the latter enhances the high-frequency regions through specific auxiliary sub-networks, loss functions, training strategies, etc. Although promising results have been obtained, these approaches involve the following three limitations. First, the large-capacity models take a lot of time and computation in training and inference, which makes them unsuitable for mobile scenarios. Second, the specific models need ingenious designs of the architecture and training strategy, are difficult to train, and are prone to artifacts. Third, they do not dive into the problem and give a reasonable explanation, and thus do not alleviate the problem in the most cost-effective way. In this paper, we dive into the problem and explain it from a machine learning perspective, i.e., the twin fitting problem caused by the long-tailed pixel distribution in natural images. Taking Fig. 1 as an example, the number of pixels in the low-frequency region is far larger than that in the high-frequency one, i.e., the long-tailed pixel distribution.
Since the majority pixels in the low-frequency region dominate the minority pixels in the high-frequency one, the gradients of the SR model come mainly from the former instead of the latter. As a result, the SR model is optimized to mainly fit the pixels in the low-frequency region, and thus overfits them while underfitting those in the high-frequency region, i.e., the twin fitting problem. Motivated by the above explanation, we reformulate SR as a long-tailed distribution learning problem. With this reformulation, the twin fitting problem can be alleviated during training in a model-agnostic way, and the approach is thus applicable to different SR models. However, although the long-tailed distribution learning problem has been extensively studied in high-level vision tasks, there are few works on it in low-level ones. Therefore, we bridge the gaps of the problem between low- and high-level vision tasks, and design a simple and effective solution to verify the feasibility of our reformulation. To be specific, we design a novel long-tailed distribution learning method for SR, termed Focal Pixel Learning (FPL), which adaptively re-weights the loss contribution of pixels by combining two complementary structure priors. In this way, the gradients of the SR model can be rebalanced, leading it to achieve a better balance between fitting the high- and low-frequency regions. The contributions of this work are summarized below.
• For the first time, this work dives into the observation that the high-frequency regions are harder to super-resolve than the low-frequency ones, and gives a reasonable explanation, i.e., the long-tailed pixel distribution and the twin fitting problem it causes.
• With our explanation, this work reformulates SR as a long-tailed distribution learning problem and designs a novel solution to verify its feasibility, which, as far as we know, could be the first long-tailed distribution learning solution for SR.
• Extensive analyses and experiments are conducted to demonstrate the explanation, verify the reformulation, and validate the solution. The results demonstrate that our work can consistently improve the performance of SR models with different complexities.

2. Related Works Here, we briefly review the related works on image super resolution and long-tailed distribution learning.
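As a rough illustration of the pixel re-weighting idea behind FPL described in the introduction above, the sketch below up-weights the reconstruction loss on high-frequency (edge and texture) pixels using a static gradient-magnitude prior. The paper combines a static and a learnable structure prior; the single prior used here, the L1 base loss, and the alpha scale are simplifying assumptions of ours, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def high_frequency_prior(hr, eps=1e-6):
    """Static structure prior: normalized gradient magnitude of the HR image (B, C, H, W)."""
    gx = hr[..., :, 1:] - hr[..., :, :-1]
    gy = hr[..., 1:, :] - hr[..., :-1, :]
    mag = F.pad(gx.abs(), (0, 1)) + F.pad(gy.abs(), (0, 0, 0, 1))
    return mag / (mag.amax(dim=(-2, -1), keepdim=True) + eps)

def focal_pixel_l1(sr, hr, alpha=1.0):
    """Pixel-wise L1 loss re-weighted so high-frequency pixels contribute more."""
    weight = 1.0 + alpha * high_frequency_prior(hr)   # up-weight tail (high-frequency) pixels
    return (weight * (sr - hr).abs()).mean()
```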
Hong_Watch_or_Listen_Robust_Audio-Visual_Speech_Recognition_With_Visual_Corruption_CVPR_2023
Abstract This paper deals with Audio-Visual Speech Recognition (AVSR) under multimodal input corruption situations where audio inputs and visual inputs are both corrupted, which is not well addressed in previous research directions. Previ-ous studies have focused on how to complement the cor-rupted audio inputs with the clean visual inputs with the assumption of the availability of clean visual inputs. How-ever, in real life, clean visual inputs are not always acces-sible and can even be corrupted by occluded lip regions or noises. Thus, we firstly analyze that the previous AVSR mod-els are not indeed robust to the corruption of multimodal input streams, the audio and the visual inputs, compared to uni-modal models. Then, we design multimodal input cor-ruption modeling to develop robust AVSR models. Lastly, we propose a novel AVSR framework, namely Audio-Visual Reliability Scoring module (AV-RelScore), that is robust to the corrupted multimodal inputs. The AV-RelScore can de-termine which input modal stream is reliable or not for the prediction and also can exploit the more reliable streams in prediction. The effectiveness of the proposed method is evaluated with comprehensive experiments on popular benchmark databases, LRS2 and LRS3. We also show that the reliability scores obtained by AV-RelScore well reflect the degree of corruption and make the proposed model fo-cus on the reliable multimodal representations.
1. Introduction Imagine you are watching the news on YouTube. Whether the recording microphone is faulty or the video encoding is wrong, the anchor's voice keeps breaking off, so you cannot hear well. You try to understand her from her lip motions, but, making matters worse, the microphone keeps covering her mouth, so the news is hardly recognizable. These days, people often face such situations, even in video conferences or interviews where the internet connection cuts in and out. As understanding speech is at the core of human communication, there have been numerous works on speech recognition [1,2], especially based on deep learning. These works have tried to enhance audio representations for recognizing speech in noisy situations [3–6] or to utilize additional visual information for complementary effects [7–12]. Recently, technologies that comprehend speech from visual information alone have also been developed [13–21]. Through these research efforts, automatic speech recognition technologies, including Audio Speech Recognition (ASR), Visual Speech Recognition (VSR), and Audio-Visual Speech Recognition (AVSR), have achieved outstanding performance [22–24]. With the advantage of multimodal inputs, audio and visual, AVSR that can robustly recognize speech even in a noisy environment, such as a crowded restaurant, is emerging as the future of speech recognition technology. However, previous studies have mostly considered the case where the audio inputs are corrupted and the additional clean visual inputs are used to complement the corrupted audio information. This raises an important question: what if both the visual and audio information are corrupted, even simultaneously? In real life, as in the aforementioned news situation, cases where both visual and audio inputs are corrupted alternately or even simultaneously happen frequently. To address this question, we first analyze the robustness of previous ASR, VSR, and AVSR models under three input corruption situations: 1) audio input corruption, 2) visual input corruption, and 3) audio-visual input corruption. We show that previous AVSR models are in fact not robust to audio-visual input corruption and perform even worse than uni-modal models, which ultimately loses the benefit of utilizing multimodal inputs. To maximize the superiority of multimodal systems over uni-modal ones, in this paper we propose a novel multimodal corruption modeling method and show its importance for developing robust AVSR technologies under diverse input corruption situations, including audio-visual corruption. To this end, we model visual corruption with lip occlusion and noise composed of blurry frames and additive noise perturbation, along with audio corruption modeling. We then propose a novel AVSR framework, the Audio-Visual Reliability Scoring module (AV-RelScore), which evaluates which modality of the current input representations is more reliable than the others.
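The exact occluders, blur schedule, and noise types used in the audio-visual corruption modeling are not specified in this excerpt. The sketch below illustrates only the general recipe, random occlusion of lip-region frames plus additive perturbation on video and noise mixing on audio; the occluder patch, probabilities, and SNR are chosen purely for illustration.

```python
import torch

def corrupt_video(frames, occluder, p_occ=0.5, noise_std=0.1):
    """frames: (T, C, H, W) lip-region crops in [0, 1]; occluder: (C, h, w) patch
    (e.g., a hand or object image, assumed smaller than the frame) used to cover the mouth.
    Each frame is independently occluded and/or perturbed with additive noise."""
    T, C, H, W = frames.shape
    out = frames.clone()
    h, w = occluder.shape[-2:]
    for t in range(T):
        if torch.rand(1) < p_occ:
            y = torch.randint(0, H - h + 1, (1,)).item()
            x = torch.randint(0, W - w + 1, (1,)).item()
            out[t, :, y:y + h, x:x + w] = occluder
        out[t] = (out[t] + noise_std * torch.randn_like(out[t])).clamp(0, 1)
    return out

def corrupt_audio(wav, noise, snr_db=0.0):
    """Mix a noise clip (assumed at least as long as wav) into the waveform at a target SNR."""
    noise = noise[: wav.numel()]
    p_sig = wav.pow(2).mean()
    p_noise = noise.pow(2).mean() + 1e-12
    scale = torch.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return wav + scale * noise
```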
The proposed AV-RelScore produces reliability scores for each time step, representing how helpful the current audio and visual features are for recognizing speech. With the reliability scores, meaningful speech representations can be emphasized in each modal stream. Then, through a multimodal attentive encoder, the emphasized multimodal representations are fused while considering inter-modal relationships. Therefore, with AV-RelScore, the AVSR model can refer to the audio stream when the given visual stream is determined to be less reliable (i.e., corrupted), and vice versa. We release the audio-visual corruption modeling for reproducibility and future research. Our key contributions are as follows: • To the best of our knowledge, this is the first attempt to analyze the robustness of deep learning-based AVSR under the corruption of multimodal inputs, including lip occlusions. • We propose an audio-visual corruption modeling method and show that it is key to developing robust AVSR technologies in diverse environments. • We propose the Audio-Visual Reliability Scoring module (AV-RelScore) to determine whether the current input modality is reliable, so as to robustly recognize the input speech even if one modality, or both, is corrupted. • We conduct comprehensive experiments with ASR, VSR, and AVSR models to validate the effectiveness of the proposed audio-visual corruption modeling and AV-RelScore on LRS2 [25] and LRS3 [26], the largest audio-visual datasets collected in the wild.
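The architecture of AV-RelScore and the multimodal attentive encoder is not detailed in this excerpt. The PyTorch sketch below captures only the overall pattern, per-time-step reliability scores that gate each stream before an attentive fusion; the scorer layout, the shared cross-attention, and the dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReliabilityFusion(nn.Module):
    """Per-time-step reliability gating of audio/visual streams followed by a simple
    attentive fusion. All design choices here are placeholders, not the paper's modules."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.score_a = nn.Sequential(nn.Linear(d_model, d_model // 2), nn.ReLU(),
                                     nn.Linear(d_model // 2, 1), nn.Sigmoid())
        self.score_v = nn.Sequential(nn.Linear(d_model, d_model // 2), nn.ReLU(),
                                     nn.Linear(d_model // 2, 1), nn.Sigmoid())
        # One shared cross-attention block, used in both directions for brevity.
        self.cross = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, feat_a, feat_v):
        # feat_a, feat_v: (B, T, d_model), temporally aligned audio/visual streams.
        rel_a = self.score_a(feat_a)          # (B, T, 1) reliability in (0, 1)
        rel_v = self.score_v(feat_v)          # (B, T, 1)
        a = rel_a * feat_a                    # suppress unreliable time steps
        v = rel_v * feat_v
        # Let each gated stream attend to the other before concatenation.
        a2, _ = self.cross(a, v, v)
        v2, _ = self.cross(v, a, a)
        fused = self.proj(torch.cat([a + a2, v + v2], dim=-1))
        return fused, rel_a, rel_v
```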
Gao_VisFusion_Visibility-Aware_Online_3D_Scene_Reconstruction_From_Videos_CVPR_2023
Abstract We propose VisFusion, a visibility-aware online 3D scene reconstruction approach from posed monocular videos. In particular, we aim to reconstruct the scene from volumetric features. Unlike previous reconstruction methods which aggregate features for each voxel from input views without considering its visibility, we aim to improve the feature fusion by explicitly inferring its visibility from a similarity matrix, computed from its projected features in each image pair. Following previous works, our model is a coarse-to-fine pipeline including a volume sparsification process. Different from their works, which sparsify voxels globally with a fixed occupancy threshold, we perform the sparsification on a local feature volume along each visual ray to preserve at least one voxel per ray for finer details. The sparse local volume is then fused with a global one for online reconstruction. We further propose to predict the TSDF in a coarse-to-fine manner by learning its residuals across scales, leading to better TSDF predictions. Experimental results on benchmarks show that our method achieves superior performance with more scene details. Code is available at: https://github.com/huiyu-gao/VisFusion
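As a rough illustration of the visibility-aware fusion described above, the following PyTorch sketch predicts per-view visibility weights for each voxel from the pairwise cosine similarities of its projected features and uses them to average the views. The small MLP head, the sigmoid weighting, and the masking details are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class VisibilityFusion(nn.Module):
    """Predict a per-view visibility weight for each voxel from the pairwise
    similarity of its projected 2-D features, then fuse the views."""
    def __init__(self, n_views=9):
        super().__init__()
        # Each view's similarity row against all views (length n_views) is the input.
        self.head = nn.Sequential(nn.Linear(n_views, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, view_feats, valid):
        # view_feats: (N_voxels, V, C) features gathered by projecting each voxel into
        # the V views; valid: (N_voxels, V) bool mask (inside the view frustum or not).
        f = nn.functional.normalize(view_feats, dim=-1)
        sim = torch.bmm(f, f.transpose(1, 2))              # (N, V, V) cosine similarities
        logits = self.head(sim).squeeze(-1)                # (N, V)
        logits = logits.masked_fill(~valid, float('-inf'))
        # Sigmoid (not softmax) so a voxel can be invisible in every view.
        w = torch.sigmoid(logits) * valid.float()
        denom = w.sum(dim=1, keepdim=True).clamp(min=1e-6)
        fused = (w.unsqueeze(-1) * view_feats).sum(dim=1) / denom   # (N, C)
        return fused, w
```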
1. Introduction 3D scene reconstruction from RGB videos is a critical task in 3D computer vision, with broad applications in augmented reality (AR), robot navigation and human-robot interaction. These applications require accurate, complete and real-time 3D reconstruction of scenes. While state-of-the-art SLAM systems [3, 31] can track the camera motion accurately by leveraging both visual and inertial measurements in an unknown environment, the reconstructed map from a SLAM system contains only sparse point clouds, so dense reconstruction from monocular videos remains a challenging problem. Many previous methods [1, 18] assume observation of the whole video sequence for the reconstruction, which is not practical for online applications like VR games. In this paper, we follow [26] and propose an online 3D reconstruction method. Given input images, most earlier 3D reconstruction methods [23, 35] adopt a two-stage pipeline, which first estimates the depth map for each keyframe based on multi-view stereo (MVS) algorithms [11, 14, 29, 32] and then fuses the estimated depth maps into a Truncated Signed Distance Function (TSDF) volume [19]. The Marching Cubes algorithm [16] is then used to extract the 3D mesh. However, those two-stage pipelines struggle to produce globally coherent reconstructions since each depth map is estimated separately [26], especially for low-texture regions like walls whose depth values are extremely hard to estimate with only a few local views. To address this, more recent works [2, 26, 33] propose to fuse image features into a global 3D volume and directly regress the TSDF [26, 33] or occupancy [2] from the feature volume. Such a strategy allows for end-to-end global surface reconstruction. The problem of occlusion naturally arises for global feature fusion. Previous methods [2, 26] either completely ignore it by simply averaging the multi-view features [26] for each voxel or implicitly model the visibility via the attention mechanism [2]. However, without explicit supervision, such attention cannot be guaranteed to encode the correct visibility. In this paper, we thus propose to explicitly predict the visibility weights of all views for each voxel with ground-truth supervision. In addition, voxels are considered visible in at least one view in [2] due to the normalization of the attention mechanism, while in our method, empty voxels and fully occluded voxels are invisible in every view, avoiding the introduction of noise. Specifically, given a fragment of a video sequence observing the same 3D region, we first project each 3D voxel onto the different view images to obtain 2D features. We then compute the pair-wise similarities of these features. Since features of the same occupied voxel are often similar across views, such a similarity map naturally encodes whether a 3D voxel is visible in a particular camera view (see Fig. 4). We thus use this similarity map to predict visibility weights. For volumetric-based methods, it is common practice to adopt a coarse-to-fine pipeline [2, 18, 25, 26]. One of its key steps is voxel sparsification, which eliminates empty voxels at the coarse level for better performance and smaller memory consumption.
To the best of our knowledge, previous methods [2, 18, 25, 26] globally sparsify the volume by removing voxels whose occupancy probabilities are lower than a predefined threshold. However, such a fixed threshold tends to sparsify more voxels than necessary, in particular removing voxels covering thin structures such as chair legs. At the coarse level, where a thin structure occupies only a small portion of a voxel, the features of the thin structure are likely ignored, leading to a low predicted occupancy probability and thus the removal of that voxel. However, such a voxel should rank highly, by occupancy probability, among the voxels along the visual ray defined by the pixel observing the thin structure. Inspired by this, we introduce a novel ray-based sparsification process. In particular, for any image, we first cast a ray from every pixel to obtain the voxels the ray passes through. For each ray, we then keep the voxels with the top occupancy scores for the next level. Unlike previous works [2, 18, 25, 26] that sparsify the global volume, our ray-based sparsification is performed on the local 3D volume. This strategy allows us to retain more surface voxels for the next level, leading to a more complete reconstruction. Furthermore, previous coarse-to-fine methods [2, 18, 25, 26] directly regress the TSDF at each level, discarding the relationship between the TSDF predicted at the coarse level and that at the fine level. In our method, at each fine level, we predict a residual between the TSDF volume upsampled from the coarser level and that of the fine level, which is shown to yield more accurate TSDF estimates. In summary, our contributions are (i) a visibility-aware feature fusion module which explicitly predicts visibility weights used for aggregating features for voxels; (ii) a ray-based voxel sparsifying algorithm which leads to the reconstruction of more scene structure details; and (iii) an easier formulation of TSDF regression that learns the residual to the upsampled coarse TSDF volume for improved TSDF estimation. Our model outperforms existing online feature-fusion-based methods.
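A minimal sketch of the ray-based sparsification idea follows: instead of thresholding occupancy globally, each ray keeps its top-k voxels by predicted occupancy for the next level. The padding convention, the value of k, and the data layout are illustrative assumptions.

```python
import torch

def ray_based_sparsify(occupancy, ray_voxel_idx, k=2):
    """occupancy: (N_voxels,) predicted occupancy probabilities at the coarse level.
    ray_voxel_idx: (N_rays, M) indices of the voxels each ray passes through, padded with -1.
    Returns a boolean keep-mask over the voxels: for every ray, the k voxels with the
    highest occupancy survive to the finer level."""
    keep = torch.zeros_like(occupancy, dtype=torch.bool)
    valid = ray_voxel_idx >= 0                                    # (N_rays, M)
    safe_idx = ray_voxel_idx.clamp(min=0)
    scores = occupancy[safe_idx].masked_fill(~valid, float('-inf'))
    topk = scores.topk(k=min(k, scores.shape[1]), dim=1).indices  # (N_rays, k)
    chosen = torch.gather(safe_idx, 1, topk)                      # voxel ids, (N_rays, k)
    chosen_valid = torch.gather(valid, 1, topk)                   # drop padded picks
    keep[chosen[chosen_valid]] = True
    return keep
```

A voxel survives if any ray selects it, so a thin structure observed by even a single pixel keeps at least one voxel along that pixel's ray.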
Huang_Feature_Shrinkage_Pyramid_for_Camouflaged_Object_Detection_With_Transformers_CVPR_2023
Abstract Vision transformers have recently shown strong global context modeling capabilities in camouflaged object detection. However, they suffer from two major limitations: less effective locality modeling and insufficient feature aggregation in decoders, which are not conducive to camouflaged object detection that explores subtle cues from indistinguishable backgrounds. To address these issues, in this paper, we propose a novel transformer-based Feature Shrinkage Pyramid Network (FSPNet), which aims to hierarchically decode locality-enhanced neighboring transformer features through progressive shrinking for camouflaged object detection. Specifically, we propose a non-local token enhancement module (NL-TEM) that employs the non-local mechanism to interact neighboring tokens and explore graph-based high-order relations within tokens to enhance local representations of transformers. Moreover, we design a feature shrinkage decoder (FSD) with adjacent interaction modules (AIM), which progressively aggregates adjacent transformer features through a layer-by-layer shrinkage pyramid to accumulate imperceptible but effective cues as much as possible for object information decoding. Extensive quantitative and qualitative experiments demonstrate that the proposed model significantly outperforms the existing 24 competitors on three challenging COD benchmark datasets under six widely-used evaluation metrics. Our code is publicly available at https://github.com/ZhouHuang23/FSPNet.
1. Introduction Camouflage is a common defense or tactic of organisms that "perfectly" blend in with their surroundings to deceive predators (prey) or sneak up on prey (hunters). Camouflaged object detection (COD) [11] aims to segment camouflaged objects in the scene and has been widely applied in species conservation [29], medical image segmentation [5, 20], industrial defect detection [3], etc. Figure 1. Visual comparison of COD in different challenging scenarios, including small, large, multiple, occluded and boundary-uncertain camouflaged objects. Compared with the recently proposed ZoomNet [30] and SINet-v2 [10], our method provides superior performance with more accurate object localization and more complete object segmentation, mainly due to the proposed locality-enhanced global context exploration and progressive shrinkage decoder. Due to the high similarity between camouflaged objects and their backgrounds, camouflaged objects are usually inconspicuous and indistinguishable, which poses great challenges to accurate detection. Recently, the development of deep learning and the availability of large-scale COD datasets (e.g., COD10K [11]) have significantly advanced camouflaged object detection. Numerous deep learning-based methods have been proposed, which can be roughly divided into three categories: targeted design of feature exploration modules, multi-task joint learning frameworks, and bio-inspired methods. Although these methods have made remarkable progress, they rely heavily on convolutional neural networks (CNNs), which cannot capture long-range dependencies due to their limited receptive fields, resulting in inferior performance for COD. As shown in Fig. 1, recently proposed state-of-the-art CNN-based methods (e.g., ZoomNet [30] and SINet-v2 [10]) fail to explore global feature relations and thus often predict incomplete object regions, especially for multiple objects, large objects and occlusion cases. Although larger convolution kernels or simply stacking multiple convolution layers with small kernels can enlarge receptive fields and alleviate this issue to some extent, doing so dramatically increases the computational cost and the number of network parameters. Furthermore, studies [34] have shown that simply deepening the network is ineffective for long-range dependency modeling. Compared to CNNs, vision transformers (ViT) [7], which have recently been introduced into computer vision and demonstrated significant breakthroughs in various vision applications [17], can efficiently model long-range dependencies with self-attention operations and thus overcome the above drawbacks of CNN-based models. Recently, the works of [47] and [24] have attempted to adapt transformers for COD and shown promising performance. These methods either employ a transformer as a network component for feature decoding or utilize off-the-shelf vision transformers as backbones for feature encoding.
Through a thorough analysis of these methods for COD, we observe two major issues in existing techniques: 1) Less effective local feature modeling for transformer backbones. We argue that both global context and local features play essential roles in COD tasks. However, most transformer-based methods lack a locality mechanism for information exchange within local regions. 2) Limitations of feature aggregation in decoders. Existing decoders (shown in Fig. 2 (a)-(d)) usually directly aggregate features with significant information differences (e.g., low-level features with rich details and high-level features with semantics), which tends to discard some inconspicuous but valuable cues or introduce noise, resulting in inaccurate predictions. This is a serious drawback for the task of identifying camouflaged objects from faint clues. To this end, in this paper, we propose a novel transformer-based Feature Shrinkage Pyramid Network, named FSPNet, which hierarchically decodes neighboring transformer features, i.e., locality-enhanced global representations of camouflaged objects, through progressive shrinking, thereby excavating and accumulating rich local cues and global context of camouflaged objects in our encoder and decoder for accurate and complete camouflaged object segmentation. Specifically, to complement local feature modeling in the transformer encoder, we propose a non-local token enhancement module (NL-TEM) which employs the non-local mechanism to interact neighboring similar tokens and explore graph-based high-level relations within tokens to enhance local representations. Furthermore, we design a feature shrinkage decoder (FSD) with adjacent interaction modules (AIMs) which progressively aggregates adjacent transformer features in pairs through a layer-by-layer shrinkage pyramid architecture to accumulate subtle but effective details and semantics as much as possible for object information decoding. Owing to the global context modeling of transformers, the locality exploration within tokens and the progressive feature shrinkage decoder, our proposed model achieves state-of-the-art performance and provides accurate and complete camouflaged object segmentation. Our main contributions are summarized as follows: • We propose a non-local token enhancement module (NL-TEM) for feature interaction and exploration between and within tokens to compensate for the locality modeling of transformers. • We design a feature shrinkage decoder (FSD) with adjacent interaction modules (AIM) to better aggregate camouflaged object cues between neighboring transformer features through progressive shrinking for camouflaged object prediction. • Comprehensive experiments show that our proposed FSPNet achieves superior performance on three widely-used COD benchmark datasets compared to 24 existing state-of-the-art methods.
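The AIM and NL-TEM internals are not reproduced in this excerpt. The PyTorch sketch below captures only the shrinkage-pyramid pattern of the decoder, merging every pair of neighboring features so the number of maps shrinks by one per stage until a single map remains; the convolutional merge block, channel width, and number of input features are assumptions for illustration, and it assumes all feature maps share the same spatial size (as when reshaped from ViT tokens of a single resolution).

```python
import torch
import torch.nn as nn

class AdjacentMerge(nn.Module):
    """Merge two neighboring transformer features into one (a stand-in for AIM)."""
    def __init__(self, dim):
        super().__init__()
        self.fuse = nn.Sequential(nn.Conv2d(2 * dim, dim, 3, padding=1),
                                  nn.BatchNorm2d(dim), nn.ReLU(inplace=True))

    def forward(self, a, b):
        return self.fuse(torch.cat([a, b], dim=1))

class ShrinkageDecoder(nn.Module):
    """Pyramid that reduces the number of feature maps by one at every stage:
    L features -> L-1 -> ... -> 1, then a 1x1 head predicts the camouflage mask."""
    def __init__(self, dim=64, n_feats=12):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.ModuleList([AdjacentMerge(dim) for _ in range(n - 1)])
            for n in range(n_feats, 1, -1)
        ])
        self.head = nn.Conv2d(dim, 1, 1)

    def forward(self, feats):
        # feats: list of n_feats tensors of shape (B, dim, H, W) reshaped from ViT tokens.
        for stage in self.stages:
            feats = [m(feats[i], feats[i + 1]) for i, m in enumerate(stage)]
        return self.head(feats[0])            # (B, 1, H, W) mask logits
```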
