arXiv:2312.06591

Concurrent Density Estimation with Wasserstein Autoencoders: Some Statistical Insights

Published on Dec 11, 2023

Abstract

Variational Autoencoders (VAEs) have been a pioneering force in the realm of deep generative models. Among their many descendants, Wasserstein Autoencoders (WAEs) stand out in particular, offering both heightened generative quality and a strong theoretical backbone. A WAE consists of an encoding and a decoding network forming a bottleneck, with the prime objective of generating new samples that resemble those it was trained on. In the process, it aims to achieve a target latent representation of the encoded data. Our work is an attempt to offer a theoretical understanding of the machinery behind WAEs. From a statistical viewpoint, we pose the problem as concurrent density estimation tasks based on neural-network-induced transformations. This allows us to establish deterministic upper bounds on the realized errors WAEs commit. We also analyze the propagation of these stochastic errors in the presence of adversaries. As a result, we explore both the large-sample properties of the reconstructed distribution and the resilience of WAE models.
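To fix ideas, the sketch below is a minimal WAE in PyTorch, assuming the standard WAE-MMD objective of Tolstikhin et al. (2017) that this line of work builds on: a reconstruction cost E[c(X, G(Z))] plus a penalty λ·MMD(Q_Z, P_Z) pushing the encoded (aggregate posterior) distribution toward a fixed latent prior. It illustrates the encoder-decoder bottleneck described in the abstract and is not the paper's own code; all sizes and hyperparameters (LATENT_DIM, DATA_DIM, lam, the kernel scale) are hypothetical.

```python
# Minimal WAE-MMD sketch (illustrative only; not the paper's code).
# Generic WAE objective: reconstruction cost plus a divergence penalty
# pushing the encoded distribution toward a fixed latent prior.
import torch
import torch.nn as nn

LATENT_DIM = 8    # hypothetical sizes, for illustration only
DATA_DIM = 784

encoder = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU(), nn.Linear(128, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, DATA_DIM))

def imq_kernel(a, b, scale=1.0):
    # Inverse multiquadratic kernel, a common choice for the MMD penalty.
    c = 2.0 * LATENT_DIM * scale
    return c / (c + torch.cdist(a, b) ** 2)

def mmd(z_q, z_p):
    # Empirical MMD^2 between encoded codes z_q and prior samples z_p,
    # with diagonal terms dropped from the within-sample averages.
    n = z_q.size(0)
    off_diag = 1.0 - torch.eye(n, device=z_q.device)
    k_qq = (imq_kernel(z_q, z_q) * off_diag).sum() / (n * (n - 1))
    k_pp = (imq_kernel(z_p, z_p) * off_diag).sum() / (n * (n - 1))
    k_qp = imq_kernel(z_q, z_p).mean()
    return k_qq + k_pp - 2.0 * k_qp

def wae_loss(x, lam=10.0):
    z = encoder(x)                    # encode: data -> latent bottleneck
    x_hat = decoder(z)                # decode: latent -> reconstruction
    z_prior = torch.randn_like(z)     # samples from the target latent prior P_Z
    recon = ((x - x_hat) ** 2).sum(dim=1).mean()   # reconstruction cost c(X, G(Z))
    return recon + lam * mmd(z, z_prior)           # penalize mismatch with the prior

# One hypothetical optimization step on random stand-in data:
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
x = torch.randn(64, DATA_DIM)
loss = wae_loss(x)
opt.zero_grad()
loss.backward()
opt.step()
```

The IMQ kernel follows the choice made in the original WAE paper; any characteristic kernel would serve for the MMD penalty. Roughly speaking, the two loss terms mirror the concurrent density estimation tasks the abstract refers to: how faithfully the decoder reproduces the data distribution, and how closely the encoded law matches the target prior.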
