bibtex_url (null) | proceedings (string) | bibtext (string) | abstract (string) | title (string) | authors (sequence) | id (string) | type (string) | arxiv_id (string) | GitHub (sequence) | paper_page (string) | n_linked_authors (int64) | upvotes (int64) | num_comments (int64) | n_authors (int64) | Models (sequence) | Datasets (sequence) | Spaces (sequence) | old_Models (sequence) | old_Datasets (sequence) | old_Spaces (sequence) | paper_page_exists_pre_conf (int64)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=2x87SGUQ3o | @inproceedings{
xiu2024hierarchical,
title={Hierarchical Debiasing and Noisy Correction for Cross-domain Video Tube Retrieval},
author={Jingqiao Xiu and Mengze Li and Wei Ji and Jingyuan Chen and Hanbin Zhao and Shin'ichi Satoh and Roger Zimmermann},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2x87SGUQ3o}
} | Video Tube Retrieval (VTR) has attracted wide attention in the multi-modal domain, aiming to accurately localize the spatial-temporal tube in videos based on the natural language description. Despite the remarkable progress, existing VTR models trained on a specific domain (source domain) often perform unsatisfactorily in another domain (target domain) due to the domain gap. To address this issue, we introduce the learning strategy of Unsupervised Domain Adaptation into the VTR task (UDA-VTR), which enables knowledge transfer from the labeled source domain to the unlabeled target domain without additional manual annotations. An intuitive solution is to generate pseudo labels for the target-domain samples with the fully trained source model and fine-tune the source model on the target domain with these pseudo labels. However, the existing domain gap gives rise to two problems for this process: (1) The transfer of model parameters across domains may introduce source-domain bias into target-domain features, significantly impacting the feature-based prediction for target-domain samples. (2) The pseudo labels often tend to identify video tubes that are widely present in the source domain, rather than accurately localizing the correct video tubes specific to the target-domain samples. To address the above issues, we propose the unsupervised domain adaptation model via Hierarchical dEbiAsing and noisy correction for cRoss-domain video Tube retrieval (HEART), which contains two characteristic modules: Layered Feature Debiasing (including adversarial feature alignment and graph-based alignment) and Pseudo Label Refinement. Extensive experiments prove the effectiveness of our HEART model, which significantly surpasses state-of-the-art methods. The code is available (https://anonymous.4open.science/r/HEART). | Hierarchical Debiasing and Noisy Correction for Cross-domain Video Tube Retrieval | [
"Jingqiao Xiu",
"Mengze Li",
"Wei Ji",
"Jingyuan Chen",
"Hanbin Zhao",
"Shin'ichi Satoh",
"Roger Zimmermann"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=2v9hATJSv4 | @inproceedings{
chen2024vrdiagnet,
title={{VR}-DiagNet: Medical Volumetric and Radiomic Diagnosis Networks with Interpretable Clinician-like Optimizing Visual Inspection},
author={Shouyu Chen and Tangwei Ye and Lai Zhong Yuan and Qi Zhang and KE LIU and Usman Naseem and Ke Sun and Nengjun Zhu and Liang Hu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2v9hATJSv4}
} | Interpretable and robust medical diagnoses are essential traits for practicing clinicians. Most computer-augmented diagnostic systems suffer from three major problems: non-interpretability, limited modality analysis, and narrow focus. Existing frameworks either deal with multimodality to some extent but suffer from non-interpretability, or are partially interpretable but provide only limited modality and multifaceted capabilities. Our work aims to integrate all these aspects in one complete framework to utilize the full spectrum of information offered by multiple modalities and facets. We propose our solution via our novel architecture VR-DiagNet, consisting of a planner and a classifier, optimized iteratively and cohesively. VR-DiagNet simulates the perceptual process of clinicians via the use of volumetric imaging information integrated with the radiomic features modality; at the same time, it recreates human thought processes via a customized Monte Carlo Tree Search (MCTS), which constructs a volume-tailored experience tree to identify slices of interest (SoIs) in our multi-slice perception space. We conducted extensive experiments across two diagnostic tasks comprising six public medical volumetric benchmark datasets. Our findings showcase superior performance, as evidenced by heightened accuracy and area under the curve (AUC) metrics, reduced computational overhead, and expedited convergence, while conclusively illustrating the immense value of integrating volumetric and radiomic modalities for our current problem setup. | VR-DiagNet: Medical Volumetric and Radiomic Diagnosis Networks with Interpretable Clinician-like Optimizing Visual Inspection | [
"Shouyu Chen",
"Tangwei Ye",
"Lai Zhong Yuan",
"Qi Zhang",
"KE LIU",
"Usman Naseem",
"Ke Sun",
"Nengjun Zhu",
"Liang Hu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=2plaFtU3Xz | @inproceedings{
lyu2024rainyscape,
title={RainyScape: Unsupervised Rainy Scene Reconstruction using Decoupled Neural Rendering},
author={Xianqiang Lyu and Hui LIU and Junhui Hou},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2plaFtU3Xz}
} | We propose RainyScape, an unsupervised framework for reconstructing clean scenes from a collection of multi-view rainy images. RainyScape consists of two main modules: a neural rendering module and a rain-prediction module that incorporates a predictor network and a learnable latent embedding that captures the rain characteristics of the scene. Specifically, based on the spectral bias property of neural networks, we first optimize the neural rendering pipeline to obtain a low-frequency scene representation. Subsequently, we jointly optimize the two modules, driven by the proposed adaptive direction-sensitive gradient-based reconstruction loss, which encourages the network to distinguish between scene details and rain streaks, facilitating the propagation of gradients to the relevant components. Extensive experiments on both the classic neural radiance field and the recently proposed 3D Gaussian splatting demonstrate the superiority of our method in effectively eliminating rain streaks and rendering clean images, achieving state-of-the-art performance. The constructed high-quality dataset and source code will be publicly available. | RainyScape: Unsupervised Rainy Scene Reconstruction using Decoupled Neural Rendering | [
"Xianqiang Lyu",
"Hui LIU",
"Junhui Hou"
] | Conference | poster | 2404.11401 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=2o1TIiBXPF | @inproceedings{
yin2024diverse,
title={Diverse consensuses paired with motion estimation-based multi-model fitting},
author={Wenyu Yin and Shuyuan Lin and Yang Lu and Hanzi Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2o1TIiBXPF}
} | Multi-model fitting aims to robustly estimate the parameters of various model instances in data contaminated by noise and outliers. Most previous works employ only a single type of consensus or implicit fusion model to represent the correlation between data points and model hypotheses. This approach often results in unrealistic and incorrect model fitting in the presence of noise and uncertainty. In this paper, we propose a novel method of diverse Consensuses paired with Motion estimation-based multi-Model Fitting (CMMF), which leverages three types of diverse consensuses along with inter-model collaboration to enhance the effectiveness of multi-model fusion. We design a Tangent Consensus Residual Reconstruction (TCRR) module to capture motion structure information of two points at the pixel level. Additionally, we introduce a Cross Consensus Affinity (CCA) framework to strengthen the correlation between data points and model hypotheses. To address the challenge of multi-body motion estimation, we propose a Nested Consensus Clustering (NCC) strategy, which formulates multi-model fitting as a motion estimation problem. It explicitly establishes motion collaboration between models and ensures that multiple models are well-fitted. Extensive quantitative and qualitative experiments are conducted on four public datasets (i.e., AdelaideRMF-F, Hopkins155, KITTI, MTPV62), and the results demonstrate that our proposed method outperforms several state-of-the-art methods. | Diverse consensuses paired with motion estimation-based multi-model fitting | [
"Wenyu Yin",
"Shuyuan Lin",
"Yang Lu",
"Hanzi Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=2jzyYyRqX0 | @inproceedings{
lu2024breaking,
title={Breaking Modality Gap in {RGBT} Tracking: Coupled Knowledge Distillation},
author={Andong Lu and Jiacong Zhao and Chenglong Li and Yun Xiao and Bin Luo},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2jzyYyRqX0}
} | The modality gap between RGB and thermal infrared (TIR) images is a crucial but often overlooked issue in existing RGBT tracking methods. It can be observed that the modality gap mainly lies in the image style difference. In this work, we propose a novel Coupled Knowledge Distillation framework called CKD, which pursues common styles of different modalities to break the modality gap for high-performance RGBT tracking. In particular, we introduce two student networks and employ the style distillation loss to make their style features as consistent as possible. By alleviating the style difference between the two student networks, we can effectively break the modality gap between modalities.
However, the distillation of style features might harm the content representations of the two modalities in the student networks. To handle this issue, we take the original RGB and TIR networks as teachers, and distill their content knowledge into the two student networks respectively through a style-content orthogonal feature decoupling scheme. We couple the above two distillation processes in an online optimization framework to form new feature representations of the RGB and thermal modalities without a modality gap. In addition, we incorporate a masked modeling strategy and a multi-modal candidate token elimination strategy into CKD to improve tracking robustness and efficiency, respectively. Extensive experiments on five standard RGBT tracking datasets validate the effectiveness of the proposed method against state-of-the-art methods while achieving the fastest tracking speed of 96.4 FPS. | Breaking Modality Gap in RGBT Tracking: Coupled Knowledge Distillation | [
"Andong Lu",
"Jiacong Zhao",
"Chenglong Li",
"Yun Xiao",
"Bin Luo"
] | Conference | poster | 2410.11586 | [
"https://github.com/multi-modality-tracking/ckd"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=2jMxi682Wt | @inproceedings{
yu2024ecavatar,
title={{ECA}vatar: 3D Avatar Facial Animation with Controllable Identity and Emotion},
author={Minjing Yu and Delong Pang and Ziwen Kang and Zhiyao Sun and Tian Lv and Jenny Sheng and Ran Yi and Yu-Hui Wen and Yong-jin Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2jMxi682Wt}
} | Speech-driven 3D facial animation has attracted considerable attention due to its extensive applicability across diverse domains. The majority of existing 3D facial animation methods ignore the avatar's expression, while emotion-controllable methods struggle with specifying the avatar's identity and portraying various emotional intensities, resulting in a lack of naturalness and realism in the animation. To address this issue, we first present an Emolib dataset containing 10,736 expression images with eight emotion categories, i.e., neutral, happy, angry, sad, fear, surprise, disgust, and contempt, where each image is accompanied by a corresponding emotion label and a 3D model with expression. Additionally, we present a novel 3D facial animation framework that operates with unpaired training data. This framework produces emotional facial animations aligned with the input face image, effectively conveying diverse emotional expressions and intensities. Our framework initially generates lip-synchronized and expression models separately. These models are then combined using a fusion network to generate face models that effectively synchronize with speech while conveying emotions. Moreover, the mouth structure is incorporated to create a comprehensive face model. This model is then fed into our skin-realistic renderer, resulting in a highly realistic animation. Experimental results demonstrate that our approach outperforms state-of-the-art 3D facial animation methods in terms of realism and emotional expressiveness while also maintaining precise lip synchronization. | ECAvatar: 3D Avatar Facial Animation with Controllable Identity and Emotion | [
"Minjing Yu",
"Delong Pang",
"Ziwen Kang",
"Zhiyao Sun",
"Tian Lv",
"Jenny Sheng",
"Ran Yi",
"Yu-Hui Wen",
"Yong-jin Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=2fdf7KYeTE | @inproceedings{
bao2024d,
title={3D Reconstruction and Novel View Synthesis of Indoor Environments based on a Dual Neural Radiance Field},
author={Zhenyu Bao and Guibiao Liao and Zhongyuan Zhao and KANGLIN LIU and Qing Li and Guoping Qiu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2fdf7KYeTE}
} | Simultaneously achieving 3D reconstruction and novel view synthesis for indoor environments has widespread applications but is technically very challenging. State-of-the-art methods based on implicit neural functions can achieve excellent 3D reconstruction results, but their performance on novel view synthesis can be unsatisfactory. The exciting development of the neural radiance field (NeRF) has revolutionized novel view synthesis; however, NeRF-based models can fail to reconstruct clean geometric surfaces.
We have developed a dual neural radiance field (Du-NeRF) to simultaneously achieve high-quality geometry reconstruction and view rendering. Du-NeRF contains two geometric fields, one derived from the SDF field to facilitate geometric reconstruction and the other derived from the density field to boost new view synthesis. One of the innovative features of Du-NeRF is that it decouples a view-independent component from the density field and uses it as a label to supervise the learning process of the SDF field. This reduces shape-radiance ambiguity and enables geometry and color to benefit from each other during the learning process. Extensive experiments demonstrate that Du-NeRF can significantly improve the performance of novel view synthesis and 3D reconstruction for indoor environments and it is particularly effective in constructing areas containing fine geometries that do not obey multi-view color consistency. | 3D Reconstruction and Novel View Synthesis of Indoor Environments based on a Dual Neural Radiance Field | [
"Zhenyu Bao",
"Guibiao Liao",
"Zhongyuan Zhao",
"KANGLIN LIU",
"Qing Li",
"Guoping Qiu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=2es1ojI14x | @inproceedings{
wu2024weakly,
title={Weakly Supervised Video Anomaly Detection and Localization with Spatio-Temporal Prompts},
author={Peng Wu and Xuerong Zhou and Guansong Pang and Zhiwei Yang and Qingsen Yan and PENG WANG and Yanning Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2es1ojI14x}
} | The current weakly supervised video anomaly detection (WSVAD) task aims to achieve frame-level anomalous event detection with only coarse video-level annotations available. Existing works typically involve extracting global features from full-resolution video frames and training frame-level classifiers to detect anomalies in the temporal dimension. However, most anomalous events tend to occur in localized spatial regions rather than the entire video frames, which implies existing frame-level feature-based works may be misled by the dominant background information and lack the interpretation of the detected anomalies. To address this dilemma, this paper introduces a novel method called STPrompt that learns spatio-temporal prompt embeddings for weakly supervised video anomaly detection and localization (WSVADL) based on pre-trained vision-language models (VLMs). Our proposed method employs a two-stream network structure, with one stream focusing on the temporal dimension and the other primarily on the spatial dimension. By leveraging the learned knowledge from pre-trained VLMs and incorporating natural motion priors from raw videos, our model learns prompt embeddings that are aligned with spatio-temporal regions of videos (e.g., patches of individual frames) to identify specific local regions of anomalies, enabling accurate video anomaly detection while mitigating the influence of background information. Without relying on detailed spatio-temporal annotations or auxiliary object detection/tracking, our method achieves state-of-the-art performance on three public benchmarks for the WSVADL task. | Weakly Supervised Video Anomaly Detection and Localization with Spatio-Temporal Prompts | [
"Peng Wu",
"Xuerong Zhou",
"Guansong Pang",
"Zhiwei Yang",
"Qingsen Yan",
"PENG WANG",
"Yanning Zhang"
] | Conference | poster | 2408.05905 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=2dTHzzfoIp | @inproceedings{
luo2024bridging,
title={Bridging Gaps in Content and Knowledge for Multimodal Entity Linking},
author={Pengfei Luo and Tong Xu and Che Liu and Suojuan Zhang and Linli Xu and Minglei Li and Enhong Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2dTHzzfoIp}
} | Multimodal Entity Linking (MEL) aims to address the ambiguity in multimodal mentions and associate them with Multimodal Knowledge Graphs (MMKGs). Existing works primarily focus on designing multimodal interaction and fusion mechanisms to enhance the performance of MEL. However, these methods still overlook two crucial gaps within the MEL task. One is the content discrepancy between mentions and entities, manifested as uneven information density. The other is the knowledge gap, indicating insufficient knowledge extraction and reasoning during the linking process. To bridge these gaps, we propose a novel framework FissFuse, as well as a plug-and-play knowledge-aware re-ranking method KAR. Specifically, FissFuse collaborates with the Fission and Fusion branches, establishing dynamic features for each mention-entity pair and adaptively learning multimodal interactions to alleviate content discrepancy. Meanwhile, KAR is endowed with carefully crafted instruction for intricate knowledge reasoning, serving as re-ranking agents empowered by Large Language Models (LLMs). Extensive experiments on two well-constructed MEL datasets demonstrate outstanding performance of FissFuse compared with various baselines. Comprehensive evaluations and ablation experiments validate the effectiveness and generality of KAR. | Bridging Gaps in Content and Knowledge for Multimodal Entity Linking | [
"Pengfei Luo",
"Tong Xu",
"Che Liu",
"Suojuan Zhang",
"Linli Xu",
"Minglei Li",
"Enhong Chen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=2c9DH53RB6 | @inproceedings{
lin2024difftv,
title={Diff{TV}: Identity-Preserved Thermal-to-Visible Face Translation via Feature Alignment and Dual-Stage Conditions},
author={Jingyu Lin and Guiqin Zhao and Jing Xu and Guoli Wang and Zejin Wang and Antitza Dantcheva and Lan Du and Cunjian Chen},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2c9DH53RB6}
} | The thermal-to-visible (T2V) face translation task is essential for enabling face verification in low-light or dark conditions by converting thermal infrared faces into their visible counterparts. However, this task faces two primary challenges. First, the inherent differences between the modalities hinder the effective use of thermal information to guide RGB face reconstruction. Second, translated RGB faces often lack the identity details of the corresponding visible faces, such as skin color. To tackle these challenges, we introduce DiffTV, the first Latent Diffusion Model (LDM) specifically designed for T2V facial image translation with a focus on preserving identity. Our approach proposes a novel heterogeneous feature alignment strategy that bridges the modal gap and extracts both coarse- and fine-grained identity features consistent with visible images. Furthermore, a dual-stage condition injection strategy introduces control information to guide identity-preserved translation. Experimental results demonstrate the superior performance of DiffTV, particularly in scenarios where maintaining identity integrity is critical. | DiffTV: Identity-Preserved Thermal-to-Visible Face Translation via Feature Alignment and Dual-Stage Conditions | [
"Jingyu Lin",
"Guiqin Zhao",
"Jing Xu",
"Guoli Wang",
"Zejin Wang",
"Antitza Dantcheva",
"Lan Du",
"Cunjian Chen"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=2bTqjmcQjT | @inproceedings{
fu2024boosting,
title={Boosting Speech Recognition Robustness to Modality-Distortion with Contrast-Augmented Prompts},
author={Dongjie Fu and Xize Cheng and Xiaoda Yang and Hanting Wang and Zhou Zhao and Tao Jin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2bTqjmcQjT}
} | In the burgeoning field of Audio-Visual Speech Recognition (AVSR), extant research has predominantly concentrated on training paradigms tailored for high-quality resources. However, owing to the challenges inherent in real-world data collection, audio-visual data are frequently affected by modality-distortion, which encompasses audio-visual asynchrony, video noise, and audio noise. The recognition accuracy of existing AVSR methods is significantly compromised when multiple modality-distortions coexist in low-resource data. In light of the above challenges, we propose PCD: Cluster-Prompt with Contrastive Decomposition, a robust framework for modality-distortion speech recognition, specifically devised to transpose the pre-trained knowledge from the high-resource domain to the target domain by leveraging contrast-augmented prompts. In contrast to previous studies, we take into consideration the possibility of various types of distortion in both the audio and visual modalities. Concretely, we design bespoke prompts to delineate each modality-distortion, guiding the model to achieve speech recognition applicable to various distortion scenarios with very few learnable parameters. To materialize the prompt mechanism, we employ multiple cluster-based strategies that better suit the pre-trained audio-visual model. Additionally, we design a contrastive decomposition mechanism to restrict the explicit relationships among various modality conditions, given their shared task knowledge and disparate modality priors. Extensive results on the LRS2 dataset demonstrate that PCD achieves state-of-the-art performance for audio-visual speech recognition under the constraints of distorted resources. | Boosting Speech Recognition Robustness to Modality-Distortion with Contrast-Augmented Prompts | [
"Dongjie Fu",
"Xize Cheng",
"Xiaoda Yang",
"Hanting Wang",
"Zhou Zhao",
"Tao Jin"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=2SqVOQWhT8 | @inproceedings{
liu2024cotuning,
title={CoTuning: A Large-Small Model Collaborating Distillation Framework for Better Model Generalization},
author={Zimo Liu and Kangjun Liu and Mingyue Guo and Shiliang Zhang and Yaowei Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2SqVOQWhT8}
} | Model compression and distillation techniques have become essential for deploying deep learning models efficiently. However, existing methods often encounter challenges related to model generalization and scalability for harnessing the expertise of pre-trained large models. This paper introduces CoTuning, a novel framework designed to enhance the generalization ability of neural networks by leveraging collaborative learning between large and small models. CoTuning overcomes the limitations of traditional compression and distillation techniques by introducing strategies for knowledge exchange and simultaneous optimization. Our framework comprises an adapter-based co-tuning mechanism between cloud and edge models, a scale-shift projection for feature alignment, and a novel collaborative knowledge distillation mechanism for domain-agnostic tasks. Extensive experiments conducted on various benchmark datasets demonstrate the effectiveness of CoTuning in improving model generalization while maintaining computational efficiency and scalability. The proposed framework exhibits a significant advancement in model compression and distillation, with broad implications for research in the collaborative evolution of large-small models. | CoTuning: A Large-Small Model Collaborating Distillation Framework for Better Model Generalization | [
"Zimo Liu",
"Kangjun Liu",
"Mingyue Guo",
"Shiliang Zhang",
"Yaowei Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=2KjnPzj8gf | @inproceedings{
dan2024hogda,
title={{HOGDA}: Boosting Semi-supervised Graph Domain Adaptation via High-Order Structure-Guided Adaptive Feature Alignmen},
author={Jun Dan and Weiming Liu and Mushui Liu and Chunfeng Xie and Shunjie Dong and Guofang Ma and Yanchao Tan and Jiazheng Xing},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2KjnPzj8gf}
} | Semi-supervised graph domain adaptation, as a subfield of graph transfer learning, seeks to precisely annotate unlabeled target graph nodes by leveraging transferable features acquired from the limited labeled source nodes. However, most existing studies often directly utilize GCN-based feature extractors to capture domain-invariant node features, while neglecting the issue that GCNs are insufficient in collecting complex structure information in graphs. Considering the importance of graph structure information in encoding the complex relationships among nodes and edges, this paper aims to utilize such powerful information to assist graph transfer learning. To achieve this goal, we develop a novel framework called HOGDA. Concretely, HOGDA introduces a high-order structure information mixing module to effectively assist the feature extractor in capturing transferable node features. Moreover, to achieve fine-grained feature distribution alignment, the AWDA strategy is proposed to dynamically adjust the node weight during the adversarial domain adaptation process, effectively boosting the model's transfer ability.
Furthermore, to mitigate the overfitting phenomenon caused by limited source labeled nodes, we also design a TNC strategy to guide the unlabeled nodes to achieve discriminative clustering. Extensive experimental results show that our HOGDA outperforms the state-of-the-art methods on various transfer tasks. | HOGDA: Boosting Semi-supervised Graph Domain Adaptation via High-Order Structure-Guided Adaptive Feature Alignmen | [
"Jun Dan",
"Weiming Liu",
"Mushui Liu",
"Chunfeng Xie",
"Shunjie Dong",
"Guofang Ma",
"Yanchao Tan",
"Jiazheng Xing"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=2CUpjuBbtX | @inproceedings{
deng2024pimt,
title={{PIMT}: Physics-Based Interactive Motion Transition for Hybrid Character Animation},
author={Yanbin Deng and Zheng Li and Ning Xie and Wei Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2CUpjuBbtX}
} | Motion transitions, which serve as bridges between two sequences of character animation, play a crucial role in creating long variable animation for real-time 3D interactive applications. In this paper, we present a framework to produce hybrid character animation, which combines motion capture animation and physical simulation animation that seamlessly connects the front and back motion clips. In contrast to previous works using interpolation for transition, our physics-based approach inherently ensures physical validity, and both the transition moment of the source motion clip and the horizontal rotation of the target motion clip can be specified arbitrarily within a certain range, which achieves high responsiveness and wide latitude for user control. The control policy of character can be trained automatically using only the motion capture data that requires transition, and is enhanced by our proposed Self-Behavior Cloning (SBC), an approach to improve the unsupervised reinforcement learning of motion transition. We show that our framework can accomplish the interactive transition tasks from a fully-connected state machine constructed from nine motion clips with high accuracy and naturalness. | PIMT: Physics-Based Interactive Motion Transition for Hybrid Character Animation | [
"Yanbin Deng",
"Zheng Li",
"Ning Xie",
"Wei Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=2AnSGzLiGs | @inproceedings{
lu2024towards,
title={Towards Multi-view Consistent Graph Diffusion},
author={Jielong Lu and Zhihao Wu and Zhaoliang Chen and Zhiling Cai and Shiping Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=2AnSGzLiGs}
} | Facing the increasing heterogeneity of data in the real world, multi-view learning has become a crucial area of research. Many researchers favor using graph convolutional networks for their adeptness at modeling both the topology and the attributes.
However, these approaches typically only consider the construction of static topologies within individual views, overlooking the potential relationships between views in multi-view data. Furthermore, there is a glaring absence of theoretical guidance for constructing topologies of multi-view data, leaving uncertainties about whether graph embeddings are progressing toward the desired state.
To tackle these challenges, we introduce a framework named energy-constrained multi-view graph diffusion.
This approach establishes a mathematical correspondence between multi-view data and graph convolution via graph diffusion. It derives a feature propagation process with inter-view perception by considering both inter- and intra-view feature flows across the entire system, treating multi-view data as a holistic entity. Additionally, an energy function is introduced to guide the inter- and intra-view diffusion functions, ensuring that the representations converge towards global consistency.
The empirical research on several benchmark datasets substantiates the benefits of the proposed method and demonstrates its significant performance improvement. | Towards Multi-view Consistent Graph Diffusion | [
"Jielong Lu",
"Zhihao Wu",
"Zhaoliang Chen",
"Zhiling Cai",
"Shiping Wang"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=25RJsYfLIu | @inproceedings{
tang2024lovd,
title={{LOVD}: Large Open Vocabulary Object detection},
author={Shiyu Tang and Zhaofan Luo and Yifan Wang and Lijun Wang and Huchuan Lu and Weibo Su and Libo Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=25RJsYfLIu}
} | Existing open-vocabulary object detectors require an accurate and compact vocabulary to be pre-defined during inference. Their performance is largely degraded in real scenarios where the underlying vocabulary may be indeterminate and often exponentially large. To have a more comprehensive understanding of this phenomenon, we propose a new setting called Large-and-Open Vocabulary object Detection, which simulates real scenarios by testing detectors with large vocabularies containing thousands of unseen categories. The vast unseen categories inevitably lead to an increase in category distractors, severely impeding the recognition process and leading to unsatisfactory detection results. To address this challenge, we propose a Large and Open Vocabulary Detector (LOVD) with two core components, termed the Image-to-Region Filtering (IRF) module and the Cross-View Verification (CV$^2$) scheme. To relieve the category distractors of the given large vocabularies, IRF performs image-level recognition to build a compact vocabulary relevant to the image scene out of the large input vocabulary, followed by region-level classification upon the compact vocabulary. CV$^2$ further enhances the IRF by conducting image-to-region filtering in both global and local views and produces the final detection categories through a multi-branch voting mechanism. Compared to prior works, our LOVD is more scalable and robust to large input vocabularies, and can be seamlessly integrated with predominant detection methods to improve their open-vocabulary performance. Source code will be made publicly available. | LOVD: Large Open Vocabulary Object detection | [
"Shiyu Tang",
"Zhaofan Luo",
"Yifan Wang",
"Lijun Wang",
"Huchuan Lu",
"Weibo Su",
"Libo Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=25JCtPdJyk | @inproceedings{
nguyen2024adai,
title={Ada2I: Enhancing Modality Balance for Multimodal Conversational Emotion Recognition},
author={Cam Van Thi Nguyen and Son Le The and Tuan Anh Mai and Duc-Trong Le},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=25JCtPdJyk}
} | Multimodal Emotion Recognition in Conversations (ERC) is a typical multimodal learning task in exploiting various data modalities concurrently. Prior studies on effective multimodal ERC encounter challenges in addressing modality imbalances and optimizing learning across modalities. Dealing with these problems, we present a novel framework named Ada2I, which consists of two inseparable modules namely Adaptive Feature Weighting (AFW) and Adaptive Modality Weighting (AMW) for feature-level and modality-level balancing respectively via leveraging both Inter- and Intra-modal interactions. Additionally, we introduce a refined disparity ratio as part of our training optimization strategy, a simple yet effective measure to assess the overall discrepancy of the model's learning process when handling multiple modalities simultaneously. Experimental results validate the effectiveness of Ada2I with state-of-the-art performance compared against baselines on three benchmark datasets including IEMOCAP, MELD, and CMU-MOSEI, particularly in addressing modality imbalances. | Ada2I: Enhancing Modality Balance for Multimodal Conversational Emotion Recognition | [
"Cam Van Thi Nguyen",
"Son Le The",
"Tuan Anh Mai",
"Duc-Trong Le"
] | Conference | poster | 2408.12895 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=1t7RW2Ixps | @inproceedings{
yang2024accurate,
title={Accurate and Lightweight Learning for Specific Domain Image-Text Retrieval},
author={Rui Yang and Shuang Wang and Jianwei Tao and Yingping Han and Qiaoling Lin and Yanhe Guo and Biao Hou and Licheng Jiao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=1t7RW2Ixps}
} | Recent advances in vision-language pre-trained models like CLIP have greatly enhanced general domain image-text retrieval performance. This success has led scholars to develop methods for applying CLIP to Specific Domain Image-Text Retrieval (SDITR) tasks such as Remote Sensing Image-Text Retrieval (RSITR) and Text-Image Person Re-identification (TIReID). However, these methods for SDITR often neglect two critical aspects: the enhancement of modal-level distribution consistency within the retrieval space and the reduction of CLIP's computational cost during inference, resulting in suboptimal retrieval spaces and unnecessarily high inference computational loads.
To address these issues, this paper presents a novel framework, Accurate and lightweight learning for specific domain Image-text Retrieval (AIR), based on the CLIP architecture. AIR incorporates a Modal-Level distribution Consistency Enhancement regularization (MLCE) loss and a Self-Pruning Distillation Strategy (SPDS) to improve retrieval precision and computational efficiency. The MLCE loss harmonizes the sample distance distributions within image and text modalities, fostering a retrieval space closer to the ideal state. Meanwhile, SPDS employs a strategic knowledge distillation process to transfer deep multimodal insights from CLIP to a shallower level, maintaining only the essential layers for inference, thus achieving model light-weighting.
Comprehensive experiments across various datasets in RSITR and TIReID demonstrate the effectiveness of both MLCE loss and SPDS. The study also explores the limits of SPDS's performance and compares it with conventional teacher-student distillation methods. The findings reveal that MLCE loss secures optimal retrieval on several datasets, while SPDS achieves a favorable balance between accuracy and computational demand during testing. | Accurate and Lightweight Learning for Specific Domain Image-Text Retrieval | [
"Rui Yang",
"Shuang Wang",
"Jianwei Tao",
"Yingping Han",
"Qiaoling Lin",
"Yanhe Guo",
"Biao Hou",
"Licheng Jiao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=1rvrYgWeEG | @inproceedings{
li2024two,
title={Two in One Go: Single-stage Emotion Recognition with Decoupled Subject-context Transformer},
author={Xinpeng Li and Teng Wang and Shuyi Mao and Jinbao Wang and Jian Zhao and Xiaojiang Peng and Feng Zheng and Xuelong Li},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=1rvrYgWeEG}
} | Emotion recognition aims to discern the emotional state of subjects within an image, relying on subject-centric and contextual visual cues. Current approaches typically follow a two-stage pipeline: first localize subjects by off-the-shelf detectors, then perform emotion classification through the late fusion of subject and context features. However, the complicated paradigm suffers from disjoint training stages and limited interaction between fine-grained subject-context elements. To address the challenge, we present a single-stage emotion recognition approach, employing a Decoupled Subject-Context Transformer (DSCT), for simultaneous subject localization and emotion classification. Rather than compartmentalizing training stages, we jointly leverage box and emotion signals as supervision to enrich subject-centric feature learning. Furthermore, we introduce DSCT to facilitate interactions between fine-grained subject-context cues in a ``decouple-then-fuse'' manner. The decoupled query tokens—subject queries and context queries—gradually intertwine across layers within DSCT, during which spatial and semantic relations are exploited and aggregated. We evaluate our single-stage framework on two widely used context-aware emotion recognition datasets, CAER-S and EMOTIC. Our approach surpasses two-stage alternatives with fewer parameter numbers, achieving a 3.39% accuracy improvement and a 6.46% average precision gain on CAER-S and EMOTIC datasets, respectively. | Two in One Go: Single-stage Emotion Recognition with Decoupled Subject-context Transformer | [
"Xinpeng Li",
"Teng Wang",
"Shuyi Mao",
"Jinbao Wang",
"Jian Zhao",
"Xiaojiang Peng",
"Feng Zheng",
"Xuelong Li"
] | Conference | poster | 2404.17205 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=1quLRqyqVm | @inproceedings{
huang2024efficient,
title={Efficient Perceiving Local Details via Adaptive Spatial-Frequency Information Integration for Multi-focus Image Fusion},
author={Jingjia Huang and Jingyan Tu and Ge Meng and Yingying Wang and Yuhang Dong and Xiaotong Tu and Xinghao Ding and Yue Huang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=1quLRqyqVm}
} | Multi-focus image fusion (MFIF) aims to combine multiple images with different focused regions into a single all-in-focus image. Existing unsupervised deep learning-based methods only fuse structural information of images in the spatial domain, neglecting potential solutions from the frequency domain exploration. In this paper, we make the first attempt to integrate spatial-frequency information to achieve high-quality MFIF. We propose a novel unsupervised spatial-frequency interaction MFIF network named SFIMFN, which consists of three key components: Adaptive Frequency Domain Information Interaction Module (AFIM), Ret-Attention-Based Spatial Information Extraction Module (RASEM), and Invertible Dual-domain Feature Fusion Module (IDFM). Specifically, in AFIM, we interactively explore global contextual information by combining the amplitude and phase information of multiple images separately. In RASEM, we design a customized transformer to encourage the network to capture important local high-frequency information by redesigning the self-attention mechanism with a bidirectional, two-dimensional form of explicit decay. Finally, we employ IDFM to fuse spatial-frequency information without information loss to generate the desired all-in-focus image. Extensive experiments on different datasets demonstrate that our method significantly outperforms state-of-the-art unsupervised methods in terms of qualitative and quantitative metrics as well as the generalization ability. | Efficient Perceiving Local Details via Adaptive Spatial-Frequency Information Integration for Multi-focus Image Fusion | [
"Jingjia Huang",
"Jingyan Tu",
"Ge Meng",
"Yingying Wang",
"Yuhang Dong",
"Xiaotong Tu",
"Xinghao Ding",
"Yue Huang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=1fyMDAzjNw | @inproceedings{
cho2024training,
title={Training Spatial-Frequency Visual Prompts and Probabilistic Clusters for Accurate Black-Box Transfer Learning},
author={Wonwoo Cho and Kangyeol Kim and Saemee Choi and Jaegul Choo},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=1fyMDAzjNw}
} | Despite the growing prevalence of black-box pre-trained models (PTMs) such as prediction API services and proprietary software, there remains a significant challenge in directly applying general models to real-world scenarios due to the data distribution gap. Considering a data deficiency and constrained computational resource scenario, this paper proposes a novel parameter-efficient transfer learning framework for vision recognition models in the black-box setting. Our framework incorporates two novel training techniques. First, we align the input space (i.e., image) of PTMs to the target data distribution by generating visual prompts of spatial and frequency domain. Along with the novel spatial-frequency hybrid visual prompter, we design a novel training technique based on probabilistic clusters, which can enhance class separation in the output space (i.e., prediction probabilities). In experiments, our model demonstrates superior performance in a few-shot transfer learning setting across extensive visual recognition datasets, surpassing state-of-the-art baselines. Additionally, the proposed method efficiently reduces computational costs for training and inference phases. | Training Spatial-Frequency Visual Prompts and Probabilistic Clusters for Accurate Black-Box Transfer Learning | [
"Wonwoo Cho",
"Kangyeol Kim",
"Saemee Choi",
"Jaegul Choo"
] | Conference | poster | 2408.07944 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=1Y3qb4987z | @inproceedings{
shen2024deitalk,
title={{DEIT}alk: Speech-Driven 3D Facial Animation with Dynamic Emotional Intensity Modeling},
author={Kang Shen and Haifeng Xia and Guangxing Geng and GuangYue Geng and Siyu Xia and Zhengming Ding},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=1Y3qb4987z}
} | Speech-driven 3D facial animation aims to synthesize 3D talking head animations with precise lip movements and rich stylistic expressions. However, existing methods exhibit two limitations: 1) they mostly focused on emotionless facial animation modeling, neglecting the importance of emotional expression, due to the lack of high-quality 3D emotional talking head datasets, and 2) several latest works treated emotional intensity as a global controllable parameter, akin to emotional or speaker style, leading to over-smoothed emotional expressions in their outcomes. To address these challenges, we first collect a 3D talking head dataset comprising five emotional styles with a set of coefficients based on the MetaHuman character model and then propose an end-to-end deep neural network, DEITalk, which conditions on speech and emotional style labels to generate realistic facial animation with dynamic expressions. To model emotional saliency variations in long-term audio contexts, we design a dynamic emotional intensity (DEI) modeling module and a dynamic positional encoding (DPE) strategy. The former extracts implicit representations of emotional intensity from speech features and utilizes them as local (high temporal frequency) emotional supervision, whereas the latter offers abilities to generalize to longer speech sequences. Moreover, we introduce an emotion-guided feature fusion decoder and a four-way loss function to generate emotion-enhanced 3D facial animation with controllable emotional styles. Extensive experimental results demonstrate that our method outperforms existing state-of-the-art methods. We recommend watching the video demo provided in our supplementary material for detailed results. | DEITalk: Speech-Driven 3D Facial Animation with Dynamic Emotional Intensity Modeling | [
"Kang Shen",
"Haifeng Xia",
"Guangxing Geng",
"GuangYue Geng",
"Siyu Xia",
"Zhengming Ding"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=1WoK45MiYK | @inproceedings{
li2024cocolc,
title={{COCO}-{LC}: Colorfulness Controllable Language-based Colorization},
author={Yifan Li and Yuhang Bai and Shuai Yang and Jiaying Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=1WoK45MiYK}
} | Language-based image colorization aims to convert grayscale images to plausible and visually pleasing color images with language guidance, enjoying wide applications in historical photo restoration and film industry. Existing methods mainly leverage large language models and diffusion models to incorporate language guidance into the colorization process. However, it is still a great challenge to build accurate correspondence between the gray image and the semantic instructions, leading to mismatched, overflowing and under-saturated colors. In this paper, we introduce a novel coarse-to-fine framework, COlorfulness COntrollable Language-based Colorization (COCO-LC), that effectively reinforces the image-text correspondence with a coarsely colorized results. In addition, a multi-level condition that leverages both low-level and high-level cues of the gray image is introduced to realize accurate semantic-aware colorization without color overflows. Furthermore, we condition COCO-LC with a scale factor to determine the colorfulness of the output, flexibly meeting the different needs of users. We validate the superiority of COCO-LC over state-of-the-art image colorization methods in accurate, realistic and controllable colorization through extensive experiments. | COCO-LC: Colorfulness Controllable Language-based Colorization | [
"Yifan Li",
"Yuhang Bai",
"Shuai Yang",
"Jiaying Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=1GYxguqqLy | @inproceedings{
xu2024crossmodal,
title={Cross-Modal Coherence-Enhanced Feedback Prompting for News Captioning},
author={Ning Xu and Yifei Gao and Tingting Zhang and Hongshuo Tian and Anan Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=1GYxguqqLy}
} | News Captioning involves generating descriptions for news images based on the detailed content of related news articles. Given that these articles often contain extensive information not directly related to the image, captions may end up misaligned with the visual content. To mitigate this issue, we propose a novel cross-modal coherence-enhanced feedback prompting method to clarify the crucial elements that align closely with the visual content for news captioning. Specifically, we first adapt CLIP to develop a news-specific image-text matching module, enriched with insights from the language model MPNet using a matching-score comparative loss, which facilitates effective cross-modal knowledge distillation. This module enhances the coherence between images and each news sentence via rating confidence. Then, we design confidence-aware prompts to fine-tune the LLaVA model with the LoRA strategy, focusing on essential details in extensive articles. Lastly, we evaluate the generated news caption with the refined CLIP, constructing confidence-feedback prompts to further enhance LLaVA through feedback learning, which iteratively refines captions to improve their accuracy. Extensive experiments conducted on two public datasets, GoodNews and NYTimes800k, validate the effectiveness of our method. | Cross-Modal Coherence-Enhanced Feedback Prompting for News Captioning | [
"Ning Xu",
"Yifei Gao",
"Tingting Zhang",
"Hongshuo Tian",
"Anan Liu"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=1D984UTCxU | @inproceedings{
bao2024boundaryaware,
title={Boundary-Aware Periodicity-based Sparsification Strategy for Ultra-Long Time Series Forecasting},
author={Yiying Bao and Hao Zhou and Chao Peng and Chenyang Xu and Shuo Shi and Kecheng Cai},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=1D984UTCxU}
} | In various domains such as transportation, resource management, and weather forecasting, there is an urgent need for methods that can provide predictions over a sufficiently long time horizon to encompass the period required for decision-making and implementation. Compared to traditional time series forecasting, ultra-long time series forecasting requires enhancing the model’s ability to infer long-term series, while maintaining inference costs within an acceptable range. To address this challenge, we propose the Boundary-Aware Periodicity-based sparsification strategy for Ultra-Long time series forecasting (BAP-UL). The periodicity-based sparsification strategy is a general lightweight data sparsification framework that captures periodic features in time series and reorganizes inputs and outputs into shorter sub-sequences for model prediction. The boundary-aware method, combined with the bounded nature of time series, improves the model’s capability to predict extreme peaks and irregular time series by adjusting the prediction results. We conducted extensive experiments on benchmark datasets, and the BAP-UL model achieved nearly 90% state-of-the-art results under various experimental conditions. Moreover, the data sparsification method based on the periodicity, proposed in this paper, exhibits broad applicability. It enhances the upper limit of sequence length for mainstream time series forecasting models and achieves the state-of-the-art prediction results. | Boundary-Aware Periodicity-based Sparsification Strategy for Ultra-Long Time Series Forecasting | [
"Yiying Bao",
"Hao Zhou",
"Chao Peng",
"Chenyang Xu",
"Shuo Shi",
"Kecheng Cai"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=17o7sAwnb6 | @inproceedings{
li2024grformer,
title={{GRF}ormer: Grouped Residual Self-Attention for Lightweight Single Image Super-Resolution},
author={Yuzhen Li and Zehang Deng and Yuxin Cao and Lihua Liu},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=17o7sAwnb6}
} | Previous works have shown that reducing parameter overhead and computations for transformer-based single image super-resolution (SISR) models (e.g., SwinIR) usually leads to a reduction of performance. In this paper, we present GRFormer, an efficient and lightweight method, which not only reduces the parameter overhead and computations, but also greatly improves performance. The core of GRFormer is Grouped Residual Self-Attention (GRSA), which is specifically oriented towards two fundamental components. Firstly, it introduces a novel grouped residual layer (GRL) to replace the QKV linear layer in self-attention, aimed at efficiently reducing parameter overhead, computations, and performance loss at the same time. Secondly, it integrates a compact Exponential-Space Relative Position Bias (ES-RPB) as a substitute for the original relative position bias to improve the ability to represent position information while further minimizing the parameter count.
Extensive experimental results demonstrate that GRFormer outperforms state-of-the-art transformer-based methods for x2, x3 and x4 SISR tasks, notably outperforming the SOTA by a maximum PSNR gain of 0.23 dB when trained on the DIV2K dataset, while reducing the number of parameters and MACs by about 60% and 49%, respectively, in the self-attention module alone. We hope that our simple and effective method, which can be easily applied to SR models based on window-division self-attention, can serve as a useful tool for further research in image super-resolution. The code is available at https://github.com/sisrformer/GRFormer. | GRFormer: Grouped Residual Self-Attention for Lightweight Single Image Super-Resolution | [
"Yuzhen Li",
"Zehang Deng",
"Yuxin Cao",
"Lihua Liu"
] | Conference | poster | 2408.07484 | [
"https://github.com/sisrformer/grformer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=15ZqYh1h4I | @inproceedings{
zhao2024learning,
title={Learning in Order! A Sequential Strategy to Learn Invariant Features for Multimodal Sentiment Analysis},
author={Xianbing Zhao and Lizhen Qu and Tao Feng and Jianfei Cai and Buzhou Tang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=15ZqYh1h4I}
} | This work proposes a novel and simple sequential learning strategy to train models on videos and texts for multimodal sentiment analysis. To estimate sentiment polarities on unseen out-of-distribution data, we introduce a multimodal model that is trained either in a single source domain or multiple source domains using our learning strategy. This strategy starts with learning domain invariant features in text, followed by learning sparse domain-agnostic features in videos, assisted by the selected features learned in text. Our experimental results demonstrate that our model achieves significantly superior performance than the state-of-the-art approaches in both single-source and multi-source settings. Our feature selection procedure favors the features that are independent to each other and are strongly correlated with their polarity labels. To facilitate research on this topic, the source code of this work will be publicly available upon acceptance. | Learning in Order! A Sequential Strategy to Learn Invariant Features for Multimodal Sentiment Analysis | [
"Xianbing Zhao",
"Lizhen Qu",
"Tao Feng",
"Jianfei Cai",
"Buzhou Tang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=12hscviLrW | @inproceedings{
pu2024siformer,
title={Siformer: Feature-isolated Transformer for Efficient Skeleton-based Sign Language Recognition},
author={Muxin Pu and Mei Kuan Lim and Chun Yong Chong},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=12hscviLrW}
} | Sign language recognition (SLR) refers to interpreting sign language glosses from given videos automatically. This research area presents a complex challenge in computer vision because of the rapid and intricate movements inherent in sign languages, which encompass hand gestures, body postures, and even facial expressions. Recently, skeleton-based action recognition has attracted increasing attention due to its ability to handle variations in subjects and backgrounds independently. However, current skeleton-based SLR methods exhibit three limitations: 1) they often neglect the importance of realistic hand poses, with most studies training SLR models on non-realistic skeletal representations; 2) they tend to assume complete data availability in both the training and inference phases, and capture intricate relationships among different body parts collectively; 3) these methods treat all sign glosses uniformly, failing to account for differences in complexity levels regarding skeletal representations. To enhance the realism of hand skeletal representations, we present a kinematic hand pose rectification method for enforcing constraints. To mitigate the impact of missing data, we propose a feature-isolated mechanism to focus on capturing local spatial-temporal context. This method captures the context concurrently and independently from individual features, thus enhancing the robustness of the SLR model. Additionally, to adapt to varying complexity levels of sign glosses, we develop an input-adaptive inference approach to optimise computational efficiency and accuracy. Experimental results demonstrate the effectiveness of our approach, as evidenced by achieving a new state-of-the-art (SOTA) performance on WLASL100 and LSA64. For WLASL100, we achieve a top-1 accuracy of 86.50\%, marking a relative improvement of 2.39\% over the previous SOTA. For LSA64, we achieve a top-1 accuracy of 99.84\%. The artefacts and code related to this study are made publicly available online (https://github.com/mpuu00001/Siformer.git). | Siformer: Feature-isolated Transformer for Efficient Skeleton-based Sign Language Recognition | [
"Muxin Pu",
"Mei Kuan Lim",
"Chun Yong Chong"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=0yKqeOJYXs | @inproceedings{
li2024materialsegd,
title={MaterialSeg3D: Segmenting Dense Materials from 2D Priors for 3D Assets},
author={Zeyu Li and Ruitong Gan and Chuanchen Luo and Yuxi Wang and Jiaheng Liu and Ziwei Zhu and Man Zhang and Qing Li and Zhaoxiang Zhang and Junran Peng and Xu-Cheng Yin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0yKqeOJYXs}
} | Driven by powerful image diffusion models, recent research has achieved the automatic creation of 3D objects from textual or visual guidance. By performing score distillation sampling (SDS) iteratively across different views, these methods succeed in lifting 2D generative prior to the 3D space.
However, such a 2D generative image prior bakes the effect of illumination and shadow into the texture.
As a result, material maps optimized by SDS inevitably involve spuriously correlated components.
The absence of precise material definition makes it infeasible to relight the generated assets reasonably in novel scenes, which limits their application in downstream scenarios. In contrast, humans can effortlessly circumvent this ambiguity by deducing the material of the object from its appearance and semantics.
Motivated by this insight, we propose MaterialSeg3D, a 3D asset material generation framework to infer underlying material from the 2D semantic prior.
Based on such a prior model, we devise a mechanism to parse material in 3D space.
We maintain a UV stack, each map of which is unprojected from a specific viewpoint.
After traversing all viewpoints, we fuse the stack through a weighted voting scheme and then employ region unification to ensure the coherence of the object parts.
To fuel the learning of the semantic prior, we collect a material dataset, named Materialized Individual Objects (MIO), which features abundant images, diverse categories, and accurate annotations.
Extensive quantitative and qualitative experiments demonstrate the effectiveness of our method. | MaterialSeg3D: Segmenting Dense Materials from 2D Priors for 3D Assets | [
"Zeyu Li",
"Ruitong Gan",
"Chuanchen Luo",
"Yuxi Wang",
"Jiaheng Liu",
"Ziwei Zhu",
"Man Zhang",
"Qing Li",
"Zhaoxiang Zhang",
"Junran Peng",
"Xu-Cheng Yin"
] | Conference | oral | 2404.13923 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
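The weighted voting over the UV stack in the MaterialSeg3D entry above can be illustrated with a toy sketch: per-view material labels unprojected onto the UV plane are fused by confidence-weighted voting. The array shapes, the "-1 means not visible" convention, and the uniform weights are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np

def fuse_uv_stack(uv_stack, view_weights, num_materials):
    """Fuse per-view material labels on the UV plane by weighted voting.

    uv_stack     : (V, H, W) integer labels unprojected from V viewpoints;
                   -1 marks texels not visible from that view.
    view_weights : (V,) per-view confidence weights.
    Returns a (H, W) fused material label map.
    """
    V, H, W = uv_stack.shape
    votes = np.zeros((num_materials, H, W))
    for v in range(V):
        labels = uv_stack[v]
        ys, xs = np.nonzero(labels >= 0)                 # visible texels only
        votes[labels[ys, xs], ys, xs] += view_weights[v]
    return votes.argmax(axis=0)

stack = np.random.randint(-1, 3, size=(4, 16, 16))       # 4 views, 3 materials
fused = fuse_uv_stack(stack, view_weights=np.ones(4), num_materials=3)
print(fused.shape)                                       # (16, 16)
```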
null | https://openreview.net/forum?id=0xbAgqSzPf | @inproceedings{
duan2024pc,
title={{PC}\${\textasciicircum}2\$: Pseudo-Classification Based Pseudo-Captioning for Noisy Correspondence Learning in Cross-Modal Retrieval},
author={Yue Duan and Zhangxuan Gu and Zhenzhe Ying and Lei Qi and Changhua Meng and Yinghuan Shi},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0xbAgqSzPf}
} | In the realm of cross-modal retrieval, seamlessly integrating diverse modalities within multimedia remains a formidable challenge, especially given the complexities introduced by noisy correspondence learning (NCL). Such noise often stems from mismatched data pairs, which is a significant obstacle distinct from traditional noisy labels. This paper introduces the Pseudo-Classification based Pseudo-Captioning (PC$^2$) framework to address this challenge. PC$^2$ offers a threefold strategy: firstly, it establishes an auxiliary "pseudo-classification" task that interprets captions as categorical labels, steering the model to learn image-text semantic similarity through a non-contrastive mechanism. Secondly, unlike prevailing margin-based techniques, capitalizing on PC$^2$'s pseudo-classification capability, we generate pseudo-captions to provide more informative and tangible supervision for each mismatched pair. Thirdly, the oscillation of pseudo-classification is leveraged to assist the correction of correspondence. In addition to technical contributions, we develop a realistic NCL dataset called Noise of Web (NoW), which could be a new powerful NCL benchmark where noise exists naturally. Empirical evaluations of PC$^2$ showcase marked improvements over existing state-of-the-art robust cross-modal retrieval techniques on both simulated and realistic datasets with various NCL settings. The contributed dataset and source code are released at https://github.com/alipay/PC2-NoiseofWeb. | PC^2: Pseudo-Classification Based Pseudo-Captioning for Noisy Correspondence Learning in Cross-Modal Retrieval | [
"Yue Duan",
"Zhangxuan Gu",
"Zhenzhe Ying",
"Lei Qi",
"Changhua Meng",
"Yinghuan Shi"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=0nEEsPOT0r | @inproceedings{
dong2024decoderonly,
title={Decoder-Only {LLM}s are Better Controllers for Diffusion Models},
author={ZiYi Dong and Yao Xiao and Pengxu Wei and Liang Lin},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0nEEsPOT0r}
} | Groundbreaking advancements in text-to-image generation have recently been achieved with the emergence of diffusion models. These models exhibit a remarkable ability to generate highly artistic and intricately detailed images based on textual prompts. However, obtaining desired generation outcomes often necessitates repetitive trials of manipulating text prompts, just like casting spells on a magic mirror, and the reason behind this is the limited semantic understanding capability inherent in current image generation models. Specifically, existing diffusion models encode the input text prompt with a pre-trained encoder structure, which is usually trained on a limited number of image-caption pairs. State-of-the-art large language models (LLMs) based on the decoder-only structure have shown very powerful semantic understanding capability, as their architectures are more suitable for training on very large-scale unlabeled data. In this work, we propose to enhance text-to-image diffusion models by borrowing the strength of semantic understanding from large language models (LLMs), resulting in a simple yet effective adapter that allows the diffusion models to be compatible with the decoder-only structure. In the evaluation, we provide not only extensive empirical results but also supporting theoretical analysis with various architectures (e.g., encoder-only, encoder-decoder, and decoder-only). The experimental results show that the enhanced models with our adapter module are superior to the state-of-the-art models in terms of text-to-image generation quality and reliability. | Decoder-Only LLMs are Better Controllers for Diffusion Models | [
"ZiYi Dong",
"Yao Xiao",
"Pengxu Wei",
"Liang Lin"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
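The adapter described in the entry above bridges decoder-only LLM hidden states and the text-conditioning input of a diffusion U-Net. A hedged sketch of one such bridge — a linear projection followed by a small Transformer refinement layer — is given below; the dimensions (4096 to 768), module layout, and class name are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LLMToDiffusionAdapter(nn.Module):
    """Hypothetical adapter: maps decoder-only LLM hidden states (e.g. 4096-d)
    into the text-conditioning space a diffusion U-Net's cross-attention
    expects (e.g. 768-d)."""
    def __init__(self, llm_dim=4096, cond_dim=768, num_heads=8):
        super().__init__()
        self.proj = nn.Linear(llm_dim, cond_dim)
        self.refine = nn.TransformerEncoderLayer(d_model=cond_dim, nhead=num_heads,
                                                 batch_first=True)
        self.norm = nn.LayerNorm(cond_dim)

    def forward(self, llm_hidden):
        # llm_hidden: (batch, tokens, llm_dim) from the last LLM layer
        return self.norm(self.refine(self.proj(llm_hidden)))

ctx = LLMToDiffusionAdapter()(torch.randn(1, 77, 4096))
print(ctx.shape)  # torch.Size([1, 77, 768]) -> would be fed to U-Net cross-attention
```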
null | https://openreview.net/forum?id=0mxBMxL9iM | @inproceedings{
dai2024oneshot,
title={One-shot In-context Part Segmentation},
author={Zhenqi Dai and Ting Liu and Xingxing Zhang and Yunchao Wei and Yanning Zhang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0mxBMxL9iM}
} | In this paper, we present the One-shot In-context Part Segmentation (OIParts) framework, designed to tackle the challenges of part segmentation by leveraging visual foundation models (VFMs). Existing training-based one-shot part segmentation methods that utilize VFMs encounter difficulties when faced with scenarios where the one-shot image and test image exhibit significant variance in appearance and perspective, or when the object in the test image is partially visible. We argue that training on the one-shot example often leads to overfitting, thereby compromising the model's generalization capability. Our framework offers a novel approach to part segmentation that is training-free, flexible, and data-efficient, requiring only a single in-context example for precise segmentation with superior generalization ability. By thoroughly exploring the complementary strengths of VFMs, specifically DINOv2 and Stable Diffusion, we introduce an adaptive channel selection approach by minimizing the intra-class distance for better exploiting these two features, thereby enhancing the discriminatory power of the extracted features for the fine-grained parts. We have achieved remarkable segmentation performance across diverse object categories. The OIParts framework not only eliminates the need for extensive labeled data but also demonstrates superior generalization ability. Through comprehensive experimentation on three benchmark datasets, we have demonstrated the superiority of our proposed method over existing part segmentation approaches in one-shot settings. | One-shot In-context Part Segmentation | [
"Zhenqi Dai",
"Ting Liu",
"Xingxing Zhang",
"Yunchao Wei",
"Yanning Zhang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
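The adaptive channel selection by minimizing intra-class distance, mentioned in the one-shot part segmentation entry above, can be sketched as ranking feature channels by their variance inside the annotated part of the support image. The keep ratio, array shapes, and function name below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def select_channels(features, part_mask, keep_ratio=0.25):
    """Rank feature channels by intra-class (within-part) variance on the
    support image and keep the most compact ones.

    features  : (C, H, W) dense features from a frozen backbone.
    part_mask : (H, W) boolean mask of one annotated part.
    Returns indices of the kept channels.
    """
    C = features.shape[0]
    part_feats = features[:, part_mask]          # (C, N) features inside the part
    intra_var = part_feats.var(axis=1)           # per-channel spread within the part
    k = max(1, int(keep_ratio * C))
    return np.argsort(intra_var)[:k]             # smallest intra-class distance first

feats = np.random.rand(256, 32, 32)
mask = np.zeros((32, 32), dtype=bool)
mask[8:20, 8:20] = True
print(select_channels(feats, mask).shape)        # (64,)
```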
null | https://openreview.net/forum?id=0j60hdbzln | @inproceedings{
yuan2024customnet,
title={CustomNet: Object Customization with Variable-Viewpoints in Text-to-Image Diffusion Models},
author={Ziyang Yuan and Mingdeng Cao and Xintao Wang and Zhongang Qi and Chun Yuan and Ying Shan},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0j60hdbzln}
} | Incorporating a customized object into image generation presents an attractive
feature in text-to-image (T2I) generation.
Some methods finetune T2I models for each object individually at test time, which tends to overfit and is time-consuming.
Others train an extra encoder to extract object visual information for customization efficiently but struggle to preserve the object's identity.
To address these limitations, we present CustomNet, a unified encoder-based object customization framework that explicitly incorporates 3D novel view synthesis capabilities into the customization process.
This integration facilitates the adjustment of spatial positions and viewpoints, producing diverse outputs while effectively preserving the object's identity.
To train our model effectively, we propose a dataset construction pipeline to better handle real-world objects and complex backgrounds.
Additionally, we introduce delicate designs that enable location control and flexible background
control through textual descriptions or user-defined backgrounds.
Our method allows for object customization without the need for test-time optimization, providing simultaneous control over viewpoints, location, and text. Experimental results show that our method outperforms other customization methods regarding identity preservation, diversity, and harmony. | CustomNet: Object Customization with Variable-Viewpoints in Text-to-Image Diffusion Models | [
"Ziyang Yuan",
"Mingdeng Cao",
"Xintao Wang",
"Zhongang Qi",
"Chun Yuan",
"Ying Shan"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=0hN4UZq48C | @inproceedings{
cho2024gaussiantalker,
title={GaussianTalker: Real-Time Talking Head Synthesis with 3D Gaussian Splatting},
author={Kyusun Cho and JoungBin Lee and Heeji Yoon and Yeobin Hong and Jaehoon Ko and Sangjun Ahn and Seungryong Kim},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0hN4UZq48C}
} | This paper proposes GaussianTalker, a novel framework for real-time generation of pose-controllable talking heads. It leverages the fast rendering capabilities of 3D Gaussian Splatting (3DGS) while addressing the challenges of directly controlling 3DGS with speech audio. GaussianTalker constructs a single 3DGS representation of the head and deforms it in sync with the audio. A key insight is to encode the 3D Gaussian attributes into a shared implicit feature representation, where it is merged with audio features to manipulate each Gaussian attribute. This design exploits the spatial information of the head and enforces interactions between neighboring points. The feature embeddings are then fed to a spatial-audio attention module, which predicts frame-wise offsets for the attributes of each Gaussian. This method is more stable than previous concatenation or multiplication approaches for manipulating the numerous Gaussians and their intricate parameters. Overall, GaussianTalker offers a promising approach for real-time generation of high-quality pose-controllable talking heads. | GaussianTalker: Real-Time Talking Head Synthesis with 3D Gaussian Splatting | [
"Kyusun Cho",
"JoungBin Lee",
"Heeji Yoon",
"Yeobin Hong",
"Jaehoon Ko",
"Sangjun Ahn",
"Seungryong Kim"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=0ffdnLFc3M | @inproceedings{
feng2024federated,
title={Federated Fuzzy C-means with Schatten-p Norm Minimization},
author={Wei Feng and Zhenwei Wu and Qianqian Wang and Bo Dong and Quanxue Gao},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0ffdnLFc3M}
} | Federated multi-view clustering aims to provide a feasible and effective solution for handling unlabeled data owned by multiple clients. There are two main challenges: 1) The local data is always sensitive, so any inadvertent data leakage to the server or other clients must be prevented. 2) Multi-view data contain both consistency and complementarity information, necessitating thorough exploration and utilization of these aspects to achieve enhanced clustering performance. Fully considering the above challenges, in this paper, we propose a novel federated multi-view method named Federated Fuzzy C-Means with Schatten-p Norm Minimization (FFCMSP), which is based on Fuzzy C-Means and the Schatten p-norm. Specifically, we utilize membership degrees to replace the conventional hard clustering assignment in K-means, enabling improved uncertainty handling and less information loss. Moreover, we introduce a Schatten p-norm-based regularizer to fully explore the inter-view complementary information and global spatial structure. We also develop a federated optimization algorithm enabling clients to collaboratively learn the clustering results. Extensive experiments on several datasets demonstrate that our proposed method exhibits superior performance in federated multi-view clustering. | Federated Fuzzy C-means with Schatten-p Norm Minimization | [
"Wei Feng",
"Zhenwei Wu",
"Qianqian Wang",
"Bo Dong",
"Quanxue Gao"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
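The Schatten p-norm regularizer named in the FFCMSP entry above has a standard definition as the l_p norm of a matrix's singular values; a short numpy illustration follows. Which matrix the regularizer is applied to and the value of p are the paper's design choices and are only assumed here.

```python
import numpy as np

def schatten_p_norm(M, p):
    """Schatten p-norm: the l_p norm of the singular values of M.
    p = 1 gives the nuclear norm; p < 1 gives a tighter (non-convex)
    low-rank surrogate that pushes small singular values toward zero."""
    s = np.linalg.svd(M, compute_uv=False)
    return float((s ** p).sum() ** (1.0 / p))

M = np.random.rand(50, 5) @ np.random.rand(5, 40)   # a rank-5 matrix
print(schatten_p_norm(M, p=1.0))                    # nuclear norm
print(schatten_p_norm(M, p=0.5))                    # non-convex low-rank surrogate
```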
null | https://openreview.net/forum?id=0fSv58lTEq | @inproceedings{
wan2024tracing,
title={Tracing Training Progress: Dynamic Influence Based Selection for Active Learning},
author={Tianjiao Wan and Kele Xu and Long Lan and Zijian Gao and Feng Dawei and Bo Ding and Huaimin Wang},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0fSv58lTEq}
} | Active learning (AL) aims to select highly informative data points from an unlabeled dataset for annotation, mitigating the need for extensive human labeling effort. However, classical AL methods heavily rely on human expertise to design the sampling strategy, inducing limited scalability and generalizability. Many efforts have sought to address this limitation by directly connecting sample selection with model performance improvement, typically through the influence function. Nevertheless, these approaches often ignore the dynamic nature of model behavior during training optimization, even though empirical evidence highlights the importance of dynamic influence in tracking sample contributions. This oversight can lead to suboptimal selection, hindering the generalizability of the model. In this study, we explore a dynamic-influence-based data selection strategy by tracing the impact of unlabeled instances on model performance throughout the training process. Our theoretical analyses suggest that selecting samples with higher projected gradients along the accumulated optimization direction at each checkpoint leads to improved performance. Furthermore, to capture a wider range of training dynamics without incurring excessive computational or memory costs, we introduce an additional dynamic loss term designed to encapsulate more generalized training progress information. These insights are integrated into a universal and task-agnostic AL framework termed Dynamic Influence Scoring for Active Learning (DISAL). Comprehensive experiments across various tasks have demonstrated that DISAL significantly surpasses existing state-of-the-art AL methods, demonstrating its ability to facilitate more efficient and effective learning in different domains. | Tracing Training Progress: Dynamic Influence Based Selection for Active Learning | [
"Tianjiao Wan",
"Kele Xu",
"Long Lan",
"Zijian Gao",
"Feng Dawei",
"Bo Ding",
"Huaimin Wang"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
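The selection rule in the DISAL entry above — preferring samples whose gradients project strongly onto the accumulated optimization direction — can be sketched as a simple dot-product score. The pseudo-labeling used to obtain per-sample gradients and the choice of checkpoints are assumptions of this toy version, not the paper's full scheme.

```python
import torch

def influence_scores(per_sample_grads, accumulated_update):
    """Score unlabeled candidates by projecting their (pseudo-labeled) loss
    gradients onto the accumulated optimization direction.

    per_sample_grads   : (N, P) flattened gradient for each candidate.
    accumulated_update : (P,)   e.g. theta_final - theta_initial over checkpoints.
    A higher score means the candidate's gradient is better aligned with how
    training actually moved the parameters.
    """
    direction = accumulated_update / accumulated_update.norm()
    return per_sample_grads @ direction

grads = torch.randn(1000, 4096)       # toy per-sample gradients
update = torch.randn(4096)            # toy accumulated parameter update
budget = influence_scores(grads, update).topk(16).indices  # 16 samples to annotate
print(budget.shape)                   # torch.Size([16])
```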
null | https://openreview.net/forum?id=0aijVq844S | @inproceedings{
chu2024qncd,
title={{QNCD}: Quantization Noise Correction for Diffusion Models},
author={Huanpeng Chu and Wei Wu and Chengjie Zang and Kun Yuan},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0aijVq844S}
} | Diffusion models have revolutionized image synthesis, setting new benchmarks in quality and creativity. However, their widespread adoption is hindered by the intensive computation required during the iterative denoising process. Post-training quantization (PTQ) presents a solution to accelerate sampling, albeit at the expense of sample quality, especially in low-bit settings. Addressing this, our study introduces a unified Quantization Noise Correction Scheme (QNCD), aimed at minimizing quantization noise throughout the sampling process. We identify two primary quantization challenges: intra and inter quantization noise. Intra quantization noise, mainly exacerbated by embeddings in the resblock module, extends activation quantization ranges, increasing disturbances in each single denoising step. Besides, inter quantization noise stems from cumulative quantization deviations across the entire denoising process, altering data distributions step-by-step. QNCD combats these through embedding-derived feature smoothing for eliminating intra quantization noise and an effective runtime noise estimation module for dynamically filtering inter quantization noise. Extensive experiments demonstrate that our method outperforms previous quantization methods for diffusion models, achieving lossless results in W4A8 and W8A8 quantization settings on ImageNet (LDM-4). | QNCD: Quantization Noise Correction for Diffusion Models | [
"Huanpeng Chu",
"Wei Wu",
"Chengjie Zang",
"Kun Yuan"
] | Conference | poster | 2403.19140 | [
"https://github.com/huanpengchu/qncd"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=0ZN9KinRYD | @inproceedings{
li2024zeroshot,
title={Zero-Shot Character Identification and Speaker Prediction in Comics via Iterative Multimodal Fusion},
author={Yingxuan Li and Ryota Hinami and Kiyoharu Aizawa and Yusuke Matsui},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0ZN9KinRYD}
} | Recognizing characters and predicting speakers of dialogue are critical for comic processing tasks, such as voice generation or translation. However, because characters vary by comic title, supervised learning approaches like training character classifiers which require specific annotations for each comic title are infeasible.
This motivates us to propose a novel zero-shot approach, allowing machines to identify characters and predict speaker names based solely on unannotated comic images.
In spite of their importance in real-world applications, these tasks have largely remained unexplored due to challenges in story comprehension and multimodal integration.
Recent large language models (LLMs) have shown great capability for text understanding and reasoning, while their application to multimodal content analysis is still an open problem.
To address this problem, we propose an iterative multimodal framework, the first to employ multimodal information for both character identification and speaker prediction tasks.
Our experiments demonstrate the effectiveness of the proposed framework, establishing a robust baseline for these tasks.
Furthermore, since our method requires no training data or annotations, it can be used as-is on any comic series. | Zero-Shot Character Identification and Speaker Prediction in Comics via Iterative Multimodal Fusion | [
"Yingxuan Li",
"Ryota Hinami",
"Kiyoharu Aizawa",
"Yusuke Matsui"
] | Conference | oral | 2404.13993 | [
"https://github.com/liyingxuan1012/zeroshot-speaker-prediction"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=0WK6OKrhXT | @inproceedings{
wang2024innerf,
title={InNe{RF}: Learning Interpretable Radiance Fields for Generalizable 3D Scene Representation and Rendering},
author={Dan Wang and Xinrui Cui},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0WK6OKrhXT}
} | We propose Interpretable Neural Radiance Fields (InNeRF) for generalizable 3D scene representation and rendering. In contrast to previous image-based rendering, which used two independent working processes of pooling-based fusion and MLP-based rendering, our framework unifies source-view fusion and target-view rendering processes via an end-to-end interpretable Transformer-based network. InNeRF enables the investigation of deep relationships between the target-rendering view and source views that were previously neglected by pooling-based fusion and fragmented rendering procedures. As a result, InNeRF improves model interpretability by enhancing the shape and appearance consistency of a 3D scene in both the surrounding view space and the ray-cast space. For a query rendering 3D point, InNeRF integrates both its projected 2D pixels from the surrounding source views and its adjacent 3D points along the query ray and simultaneously decodes this information into the query 3D point representation. Experiments show that InNeRF outperforms state-of-the-art image-based neural rendering methods in both scene-agnostic and per-scene finetuning scenarios, especially when there is a considerable disparity between source views and rendering views. The interpretation experiment shows that InNeRF can explain a query rendering process. | InNeRF: Learning Interpretable Radiance Fields for Generalizable 3D Scene Representation and Rendering | [
"Dan Wang",
"Xinrui Cui"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=0Q9zTGHOda | @inproceedings{
guo2024instancelevel,
title={Instance-Level Panoramic Audio-Visual Saliency Detection and Ranking},
author={Ruohao Guo and Dantong Niu and Liao Qu and Yanyu Qi and Ji Shi and Wenzhen Yue and Bowei Xing and Taiyan Chen and Xianghua Ying},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0Q9zTGHOda}
} | Panoramic audio-visual saliency detection aims to segment the most attention-attractive regions in 360° panoramic videos with sound. To meticulously delineate the detected salient regions and effectively model human attention shift, we extend this task to more fine-grained instance scenarios: identifying salient object instances and inferring their saliency ranks. In this paper, we propose the first instance-level framework that can simultaneously be applied to segmentation and ranking of multiple salient objects in panoramic videos. Specifically, it consists of a distortion-aware pixel decoder to overcome panoramic distortions, a sequential audio-visual fusion module to integrate audio-visual information, and a spatio-temporal object decoder to separate individual instances and predict their saliency scores. Moreover, owing to the absence of such annotations, we create the ground-truth saliency ranks for the PAVS10K benchmark. Extensive experiments demonstrate that our model is capable of achieving state-of-the-art performance on PAVS10K for both saliency detection and ranking tasks. The code and dataset will be released soon. | Instance-Level Panoramic Audio-Visual Saliency Detection and Ranking | [
"Ruohao Guo",
"Dantong Niu",
"Liao Qu",
"Yanyu Qi",
"Ji Shi",
"Wenzhen Yue",
"Bowei Xing",
"Taiyan Chen",
"Xianghua Ying"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=0MiogCwsVo | @inproceedings{
yin2024embracing,
title={Embracing Adaptation: An Effective Dynamic Defense Strategy Against Adversarial Examples},
author={Shenglin Yin and kelu Yao and Zhen Xiao and Jieyi Long},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0MiogCwsVo}
} | Existing defense methods against adversarial examples are static, meaning that they remain unchanged once trained, regardless of changes in the attack. Consequently, static defense methods are highly vulnerable to adaptive attacks. We contend that in order to defend against more powerful attacks, the model should continuously adapt to cope with various attack methods. We propose a novel dynamic defense approach that optimizes the input by generating pseudo-labels. Subsequently, it utilizes information maximization and enhanced average prediction as optimization objectives, followed by hierarchical optimization methods to effectively counteract adversarial examples through model parameter optimization. Importantly, our approach is implemented during the inference phase and does not necessitate model retraining. It can be readily applied to existing adversarially trained models, significantly enhancing the robustness of various models against white-box, black-box, and adaptive attacks across diverse datasets. We have conducted extensive experiments to validate the state-of-the-art performance of our proposed method. | Embracing Adaptation: An Effective Dynamic Defense Strategy Against Adversarial Examples | [
"Shenglin Yin",
"kelu Yao",
"Zhen Xiao",
"Jieyi Long"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
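The information-maximization objective mentioned in the dynamic-defense entry above is commonly written as per-sample entropy minimization plus batch-marginal entropy maximization. Below is a generic sketch of that textbook form; it is not the paper's exact objective, which also includes an enhanced average-prediction term and a hierarchical optimization scheme.

```python
import torch

def information_maximization_loss(logits):
    """Textbook IM objective for test-time adaptation: make each prediction
    confident (low per-sample entropy) while keeping predictions diverse
    across the batch (high entropy of the mean prediction)."""
    probs = logits.softmax(dim=1)
    per_sample_entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    mean_probs = probs.mean(dim=0)
    marginal_entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum()
    return per_sample_entropy - marginal_entropy  # minimize this at inference

logits = torch.randn(32, 10, requires_grad=True)  # stand-in for model outputs
loss = information_maximization_loss(logits)
loss.backward()                                   # gradients would update model parameters
print(float(loss))
```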
null | https://openreview.net/forum?id=0KrM2KwV7v | @inproceedings{
huang2024class,
title={Class Balance Matters to Active Class-Incremental Learning},
author={Zitong Huang and Ze Chen and Yuanze Li and Bowen Dong and Erjin Zhou and Yong Liu and Rick Siow Mong Goh and Chun-Mei Feng and Wangmeng Zuo},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0KrM2KwV7v}
} | Few-Shot Class-Incremental Learning has shown remarkable efficacy in efficiently learning new concepts with limited annotations. Nevertheless, the heuristic few-shot annotations may not always cover the most informative samples, which largely restricts the capability of the incremental learner. We aim to start from a pool of large-scale unlabeled data and then annotate the most informative samples for incremental learning. To this end, this paper introduces Active Class-Incremental Learning (ACIL). The objective of ACIL is to select the most informative samples from the unlabeled pool to effectively train an incremental learner, aiming to maximize the performance of the resulting model. Note that vanilla active learning algorithms suffer from a class-imbalanced distribution among annotated samples, which restricts the ability of incremental learning. To achieve both class balance and informativeness in chosen samples, we propose the $\textbf{C}$lass-$\textbf{B}$alanced $\textbf{S}$election ($\textbf{CBS}$) strategy. Specifically, we first cluster the features of all unlabeled images into multiple groups. Then for each cluster, we employ a greedy selection strategy to ensure that the Gaussian distribution of the sampled features closely matches the Gaussian distribution of all unlabeled features within the cluster. Our CBS can be plugged into CIL methods that are based on pretrained models with prompt tuning techniques. Extensive experiments under the ACIL protocol across five diverse datasets demonstrate that CBS outperforms both random selection and other SOTA active learning approaches. | Class Balance Matters to Active Class-Incremental Learning | [
"Zitong Huang",
"Ze Chen",
"Yuanze Li",
"Bowen Dong",
"Erjin Zhou",
"Yong Liu",
"Rick Siow Mong Goh",
"Chun-Mei Feng",
"Wangmeng Zuo"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
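The cluster-then-greedy selection in the CBS entry above can be approximated with a toy version that matches only the cluster mean rather than the full Gaussian. The cluster count, per-cluster budget, and the use of scikit-learn KMeans are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def class_balanced_select(feats, n_clusters, per_cluster):
    """Toy cluster-then-greedy selection: inside each cluster, repeatedly add
    the sample that keeps the mean of the selected set closest to the cluster
    mean (a first-moment stand-in for matching the full Gaussian)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
    chosen = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        target = feats[idx].mean(axis=0)
        picked = []
        for _ in range(min(per_cluster, len(idx))):
            remaining = [i for i in idx if i not in picked]
            best = min(remaining,
                       key=lambda i: np.linalg.norm(feats[picked + [i]].mean(axis=0) - target))
            picked.append(best)
        chosen.extend(picked)
    return chosen

feats = np.random.rand(500, 64)
print(len(class_balanced_select(feats, n_clusters=10, per_cluster=3)))  # 30 balanced picks
```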
null | https://openreview.net/forum?id=0JO83loPPV | @inproceedings{
ma2024cognitionsupervised,
title={Cognition-Supervised Saliency Detection: Contrasting {EEG} Signals and Visual Stimuli},
author={Jun Ma and Tuukka Ruotsalo},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0JO83loPPV}
} | Understanding human assessment of semantically salient parts of multimedia content is crucial for developing human-centric applications, such as annotation tools, search and recommender systems, and systems able to generate new media matching human interests. However, the challenge of acquiring suitable supervision signals to detect semantic saliency without extensive manual annotation remains significant. Here, we explore a novel method that utilizes signals measured directly from human cognition via electroencephalogram (EEG) in response to natural visual perception. These signals are used for supervising representation learning to capture semantic saliency. Through a contrastive learning framework, our method aligns EEG data with visual stimuli, capturing human cognitive responses without the need for any manual annotation. Our approach demonstrates that the learned representations closely align with human-centric notions of visual saliency and achieve competitive performance in several downstream tasks, such as image classification and generation. As a contribution, we introduce an open EEG/image dataset from 30 participants, to facilitate further research in utilizing cognitive signals for multimodal data analysis, studying perception, and developing models for cross-modal representation learning. | Cognition-Supervised Saliency Detection: Contrasting EEG Signals and Visual Stimuli | [
"Jun Ma",
"Tuukka Ruotsalo"
] | Conference | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
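The contrastive alignment of EEG and visual stimuli described above is compatible with a CLIP-style symmetric InfoNCE loss. A sketch of that standard form is given below, with the encoders omitted and the temperature chosen arbitrarily; the paper's actual loss and architecture may differ.

```python
import torch
import torch.nn.functional as F

def eeg_image_contrastive_loss(eeg_emb, img_emb, temperature=0.07):
    """Symmetric InfoNCE: the EEG recorded while viewing image i should be
    closer to image i's embedding than to any other image in the batch."""
    eeg = F.normalize(eeg_emb, dim=-1)
    img = F.normalize(img_emb, dim=-1)
    logits = eeg @ img.t() / temperature               # (B, B) cosine similarities
    targets = torch.arange(eeg.size(0), device=eeg.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = eeg_image_contrastive_loss(torch.randn(16, 512), torch.randn(16, 512))
print(float(loss))
```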
null | https://openreview.net/forum?id=0EzCOFDueS | @inproceedings{
zhang2024rca,
title={{RCA}: Region Conditioned Adaptation for Visual Abductive Reasoning},
author={Hao Zhang and Ee Yeo Keat and Basura Fernando},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0EzCOFDueS}
} | Vision foundational models (e.g., CLIP) show strong generalization on various downstream visual perception tasks.
However, their ability to reason beyond mere perception is limited, as they are only pre-trained on image-text pairs that hold semantically equivalent meanings.
To tackle this, we propose a simple yet effective \textit{Region Conditioned Adaptation} (RCA), a hybrid parameter-efficient fine-tuning method that equips the frozen CLIP with the ability to infer hypotheses from local visual cues.
Specifically, the RCA contains two novel modules: regional prompt generator and Adapter$^\textbf{+}$.
The former encodes "local hints" and "global contexts" into visual prompts separately, at fine- and coarse-grained levels.
The latter enhances the vanilla adapters with a newly designed Map Adapter that directly steers the focus of the attention map with trainable query and key projections. Finally, we train the RCA with a new Dual-Contrastive Loss to regress the visual feature simultaneously toward features of the literal description (a.k.a. clue text) and the plausible hypothesis (abductive inference text). The loss enables CLIP to maintain both perception and reasoning abilities. Experiments on the Sherlock visual abductive reasoning benchmark show that the RCA significantly outperforms previous SOTAs, ranking 1st on the leaderboards (e.g., Human Acc: RCA 31.74 vs. CPT-CLIP 29.58; higher is better). We also validate that the RCA is generalizable to local perception benchmarks like RefCOCO. We will open-source our code for future research. | RCA: Region Conditioned Adaptation for Visual Abductive Reasoning | [
"Hao Zhang",
"Ee Yeo Keat",
"Basura Fernando"
] | Conference | poster | 2303.10428 | [
"https://github.com/lunaproject22/rpa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=0CwtY7FtYa | @inproceedings{
jiang-lin2024record,
title={ReCorD: Reasoning and Correcting Diffusion for {HOI} Generation},
author={Jian-Yu Jiang-Lin and Kang-Yang Huang and Ling Lo and Yi-Ning Huang and Terence Lin and Jhih-Ciang Wu and Hong-Han Shuai and Wen-Huang Cheng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=0CwtY7FtYa}
} | Diffusion models revolutionize image generation by leveraging natural language to guide the creation of multimedia content. Despite significant advancements in such generative models, challenges persist in depicting detailed human-object interactions, especially regarding pose and object placement accuracy. We introduce a training-free method named Reasoning and Correcting Diffusion (ReCorD) to address these challenges. Our model couples Latent Diffusion Models with Visual Language Models to refine the generation process, ensuring precise depictions of HOIs. We propose an interaction-aware reasoning module to improve the interpretation of the interaction, along with an interaction correcting module to refine the output image for more precise HOI generation delicately. Through a meticulous process of pose selection and object positioning, ReCorD achieves superior fidelity in generated images while efficiently reducing computational requirements. We conduct comprehensive experiments on three benchmarks to demonstrate the significant progress in solving text-to-image generation tasks, showcasing ReCorD's ability to render complex interactions accurately by outperforming existing methods in HOI classification score, as well as FID and Verb CLIP-Score. | ReCorD: Reasoning and Correcting Diffusion for HOI Generation | [
"Jian-Yu Jiang-Lin",
"Kang-Yang Huang",
"Ling Lo",
"Yi-Ning Huang",
"Terence Lin",
"Jhih-Ciang Wu",
"Hong-Han Shuai",
"Wen-Huang Cheng"
] | Conference | poster | 2407.17911 | [
"https://github.com/j1anglin/ReCorD"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=07Skvbp3aQ | @inproceedings{
wang2024an,
title={An Inverse Partial Optimal Transport Framework for Music-guided Trailer Generation},
author={Yutong Wang and Sidan Zhu and Hongteng Xu and Dixin Luo},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=07Skvbp3aQ}
} | Trailer generation is a challenging video clipping task that aims to select highlighting shots from long videos like movies and re-organize them in an attractive way.
In this study, we propose an inverse partial optimal transport (IPOT) framework to achieve music-guided movie trailer generation.
In particular, we formulate the trailer generation task as selecting and sorting key movie shots based on audio shots, which involves matching the latent representations across visual and acoustic modalities.
We learn a multi-modal latent representation model in the proposed IPOT framework to achieve this aim.
In this framework, a two-tower encoder derives the latent representations of movie and music shots, respectively, and an attention-assisted Sinkhorn matching network parameterizes the grounding distance between the shots' latent representations and the distribution of the movie shots.
Taking the correspondence between the movie shots and its trailer music shots as the observed optimal transport plan defined on the grounding distances, we learn the model by solving an inverse partial optimal transport problem, leading to a bi-level optimization strategy.
We collect real-world movies and their trailers to construct a dataset with abundant label information called CMTD and, accordingly, train and evaluate various automatic trailer generators.
Compared with state-of-the-art methods, our IPOT method consistently shows superiority in subjective visual effects and objective quantitative measurements. | An Inverse Partial Optimal Transport Framework for Music-guided Trailer Generation | [
"Yutong Wang",
"Sidan Zhu",
"Hongteng Xu",
"Dixin Luo"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
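The Sinkhorn matching used inside the IPOT framework above builds on standard entropy-regularized optimal transport. A sketch of the classic balanced Sinkhorn iteration is shown below; the paper solves an inverse *partial* OT problem on learned grounding distances, which this toy version does not attempt, and the shot counts and regularization strength are illustrative.

```python
import numpy as np

def sinkhorn_plan(cost, epsilon=0.05, n_iters=200):
    """Entropy-regularized OT plan between uniform marginals via Sinkhorn
    iterations (balanced case)."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / epsilon)
    u = np.ones(n)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

cost = np.random.rand(12, 8)          # toy: 12 candidate movie shots vs 8 music shots
plan = sinkhorn_plan(cost)
print(plan.sum(axis=1)[:3])           # each row sums to ~1/12
```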
null | https://openreview.net/forum?id=06c7e989wH | @inproceedings{
fan2024dreambooth,
title={DreamBooth++: Boosting Subject-Driven Generation via Region-Level References Packing},
author={Zhongyi Fan and Zixin Yin and Gang Li and Yibing Zhan and Heliang Zheng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=06c7e989wH}
} | DreamBooth has demonstrated significant potential in subject-driven text-to-image generation, especially in scenarios requiring precise preservation of a subject's appearance. However, it still suffers from inefficiency and requires extensive iterative training to customize concepts using a small set of reference images. To address these issues, we introduce DreamBooth++, a region-level training strategy designed to significantly improve the efficiency and effectiveness of learning specific subjects. In particular, our approach employs a region-level data re-formulation technique that packs a set of reference images into a single sample, significantly reducing computational costs. Moreover, we adapt convolution and self-attention layers to ensure their processings are restricted within individual regions. Thus their operational scope (i.e., receptive field) can be preserved within a single subject, avoiding generating multiple sub-images within a single image. Last but not least, we design a text-guided prior regularization between our model and the pretrained one to preserve the original semantic generation ability. Comprehensive experiments demonstrate that our training strategy not only accelerates the subject-learning process but also significantly boosts fidelity to both subject and prompts in subject-driven generation. | DreamBooth++: Boosting Subject-Driven Generation via Region-Level References Packing | [
"Zhongyi Fan",
"Zixin Yin",
"Gang Li",
"Yibing Zhan",
"Heliang Zheng"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=05PCDMN4Dk | @inproceedings{
zhang2024deeppointmap,
title={DeepPointMap2: Accurate and Robust Li{DAR}-Visual {SLAM} with Neural Descriptors},
author={Xiaze Zhang and Ziheng Ding and Qi Jing and Ying Cheng and Wenchao Ding and Rui Feng},
booktitle={ACM Multimedia 2024},
year={2024},
url={https://openreview.net/forum?id=05PCDMN4Dk}
} | Simultaneous Localization and Mapping (SLAM) plays a pivotal role in autonomous driving and robotics. Given the complexity of road environments, there is a growing research emphasis on developing robust and accurate multi-modal SLAM systems. Existing methods often rely on hand-crafted feature extraction and cross-modal fusion techniques, resulting in limited feature representation capability and reduced flexibility and robustness. To address this challenge, we introduce DeepPointMap2, a novel learning-based LiDAR-Visual SLAM architecture that leverages neural descriptors to tackle multiple SLAM subtasks in a unified manner. Our approach employs neural networks to extract multi-modal feature tokens, which are then adaptively fused by the Visual-Point Fusion Module to generate sparse neural 3D descriptors, ensuring precise localization and robust performance. As a pioneering work, our method achieves state-of-the-art localization performance among various Visual-based, LiDAR-based, and Visual-LiDAR-based methods on widely used benchmarks, as shown in the experimental results. Furthermore, the approach proves to be robust in scenarios involving camera failure and LiDAR obstruction. | DeepPointMap2: Accurate and Robust LiDAR-Visual SLAM with Neural Descriptors | [
"Xiaze Zhang",
"Ziheng Ding",
"Qi Jing",
"Ying Cheng",
"Wenchao Ding",
"Rui Feng"
] | Conference | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |