arxiv:2007.08702

DACS: Domain Adaptation via Cross-domain Mixed Sampling

Published on Jul 17, 2020
Abstract

Semantic segmentation models based on convolutional neural networks have recently displayed remarkable performance for a multitude of applications. However, these models typically do not generalize well when applied on new domains, especially when going from synthetic to real data. In this paper we address the problem of unsupervised domain adaptation (UDA), which attempts to train on labelled data from one domain (source domain), and simultaneously learn from unlabelled data in the domain of interest (target domain). Existing methods have seen success by training on pseudo-labels for these unlabelled images. Multiple techniques have been proposed to mitigate low-quality pseudo-labels arising from the domain shift, with varying degrees of success. We propose DACS: Domain Adaptation via Cross-domain mixed Sampling, which mixes images from the two domains along with the corresponding labels and pseudo-labels. These mixed samples are then trained on, in addition to the labelled data itself. We demonstrate the effectiveness of our solution by achieving state-of-the-art results for GTA5 to Cityscapes, a common synthetic-to-real semantic segmentation benchmark for UDA.
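
The mixing step the abstract describes can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical illustration of cross-domain class mixing, not the authors' code: it assumes images of shape (C, H, W) and integer label maps of shape (H, W), selects half of the classes present in the source label, and pastes the corresponding source pixels and labels onto the target image and its pseudo-labels.

```python
import torch


def dacs_mix(src_img, src_lbl, tgt_img, tgt_pseudo_lbl):
    """Sketch of cross-domain class mixing in the spirit of DACS.

    Assumptions (ours, for illustration): images are (C, H, W) tensors,
    label maps are (H, W) tensors of integer class ids, and the target
    pseudo-labels were produced by the segmentation model beforehand.
    """
    # Classes that appear in the source label map.
    classes = torch.unique(src_lbl)

    # Randomly select half of those classes to paste onto the target.
    chosen = classes[torch.randperm(len(classes))[: max(1, len(classes) // 2)]]

    # Binary mask marking source pixels belonging to the chosen classes.
    mask = torch.isin(src_lbl, chosen)  # (H, W), bool

    # Paste source pixels over the target image, and source labels over
    # the target pseudo-labels, using the same mask.
    mixed_img = torch.where(mask.unsqueeze(0), src_img, tgt_img)
    mixed_lbl = torch.where(mask, src_lbl, tgt_pseudo_lbl)
    return mixed_img, mixed_lbl
```

In training, the pseudo-label would typically be the argmax of the model's prediction on the unlabelled target image, and the mixed sample would contribute an additional cross-entropy term alongside the supervised loss on source data.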
