Abstract
For convolutional neural network models that optimize an image embedding, we propose a method to highlight the regions of images that contribute most to pairwise similarity. This work is a corollary to the visualization tools developed for classification networks, but applicable to problem domains better suited to similarity learning. The visualization shows how fine-tuned similarity networks learn to focus on different features. We also generalize our approach to embedding networks that use different pooling strategies and provide a simple mechanism to support image similarity searches on objects or sub-regions in the query image.
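To make the core idea concrete: when an embedding is produced by global average pooling over a convolutional feature map (followed by L2 normalization), the similarity between two embeddings decomposes into a sum over pairs of spatial locations, so each location's share of the score can be visualized as a heatmap. The sketch below illustrates this decomposition in NumPy; the function name, shapes, and the average-pooling assumption are illustrative choices here, not an implementation taken from the paper.

```python
import numpy as np

def similarity_heatmaps(feat_a, feat_b):
    """Decompose pooled-embedding similarity into per-location contributions.

    feat_a, feat_b: conv feature maps of shape (h, w, d) for two images.
    Assumes the embedding is global average pooling over spatial locations,
    followed by L2 normalization, so the cosine similarity of the two
    embeddings is a scaled sum of location-pair dot products.
    """
    h, w, d = feat_a.shape
    a = feat_a.reshape(-1, d)  # (h*w, d) spatial descriptors for image A
    b = feat_b.reshape(-1, d)  # (h*w, d) spatial descriptors for image B

    # Pooled embeddings; their cosine similarity is what we decompose.
    ea = a.mean(axis=0)
    eb = b.mean(axis=0)
    scale = 1.0 / (a.shape[0] * b.shape[0]
                   * np.linalg.norm(ea) * np.linalg.norm(eb))

    # Entry (i, j) is how much locations i (in A) and j (in B)
    # jointly contribute to the overall similarity score.
    pair = (a @ b.T) * scale

    heat_a = pair.sum(axis=1).reshape(h, w)  # per-location contribution in A
    heat_b = pair.sum(axis=0).reshape(h, w)  # per-location contribution in B
    sim = float(pair.sum())  # equals cosine similarity of ea and eb
    return sim, heat_a, heat_b
```

The heatmaps sum exactly to the similarity score, which is what lets them be overlaid on the image pair as an explanation; for other pooling strategies (e.g. max pooling), the decomposition has to be adapted, as the abstract notes.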