arXiv:2111.15454

Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup

Published on Nov 30, 2021

Abstract

Mixup is a well-known data-dependent augmentation technique for DNNs, consisting of two sub-tasks: mixup generation and classification. However, the recently dominant online training method confines mixup to supervised learning (SL), and the objective of the generation sub-task is limited to selected sample pairs rather than the whole data manifold, which can lead to trivial solutions. To overcome these limitations, we comprehensively study the objective of mixup generation and propose Scenario-Agnostic Mixup (SAMix) for both SL and self-supervised learning (SSL) scenarios. Specifically, we hypothesize and verify the objective function of mixup generation as optimizing local smoothness between two mixed classes subject to global discrimination from other classes. Accordingly, we propose an eta-balanced mixup loss for complementary learning of the two sub-objectives. Meanwhile, a label-free generation sub-network is designed, which effectively provides non-trivial mixup samples and improves transferability. Moreover, to reduce the computational cost of online training, we further introduce a pre-trained version, SAMix^P, achieving more favorable efficiency and generalizability. Extensive experiments on nine SL and SSL benchmarks demonstrate the consistent superiority and versatility of SAMix compared with existing methods.
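To make the eta-balanced idea concrete, the sketch below shows one plausible way to combine the two sub-objectives the abstract describes: a *local smoothness* term restricted to the two mixed classes, and a *global discrimination* term over all classes, blended by a coefficient eta. This is a hedged illustration, not the paper's exact formulation; the function name `eta_balanced_mixup_loss` and the precise form of each term are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F


def eta_balanced_mixup_loss(logits, y_a, y_b, lam, eta=0.5):
    """Hypothetical sketch of an eta-balanced mixup classification loss.

    logits: (B, C) model outputs for mixed inputs x_mix = lam*x_a + (1-lam)*x_b
    y_a, y_b: (B,) class labels of the two mixed samples
    lam: mixing ratio in [0, 1]
    eta: balance between the local and global sub-objectives (assumed form)
    """
    # Local smoothness: a binary sub-problem restricted to the two mixed
    # classes, with targets interpolated by the mixing ratio lam.
    pair_logits = torch.stack(
        [
            logits.gather(1, y_a[:, None]).squeeze(1),  # score of class y_a
            logits.gather(1, y_b[:, None]).squeeze(1),  # score of class y_b
        ],
        dim=1,
    )
    local = lam * F.cross_entropy(pair_logits, torch.zeros_like(y_a)) + (
        1.0 - lam
    ) * F.cross_entropy(pair_logits, torch.ones_like(y_b))

    # Global discrimination: standard mixup cross-entropy over all classes,
    # pushing the mixed sample away from the remaining classes.
    global_ = lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(
        logits, y_b
    )

    # Eta-balanced combination of the two complementary sub-objectives.
    return eta * local + (1.0 - eta) * global_
```

Under this reading, eta trades off smoothing the decision boundary between the two mixed classes against keeping the mixed sample discriminable from all other classes.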

