arXiv:2211.06088

RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization

Published on Nov 11, 2022

Abstract

Feature reuse has been a key technique in the design of lightweight convolutional neural networks (CNNs). Current methods usually use a concatenation operator to cheaply maintain large channel numbers (and thus large network capacity) by reusing feature maps from other layers. Although concatenation is parameter- and FLOP-free, its computational cost on hardware devices is non-negligible. To address this, this paper offers a new perspective on realizing feature reuse, via a structural re-parameterization technique. A novel hardware-efficient RepGhost module is proposed for implicit feature reuse via re-parameterization, instead of the concatenation operator. Based on the RepGhost module, we develop the efficient RepGhost bottleneck and RepGhostNet. Experiments on the ImageNet and COCO benchmarks demonstrate that the proposed RepGhostNet is considerably more effective and efficient than GhostNet and MobileNetV3 on mobile devices. In particular, our RepGhostNet surpasses GhostNet 0.5x by 2.5% Top-1 accuracy on the ImageNet dataset with fewer parameters and comparable latency on an ARM-based mobile phone.
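
The key idea in the abstract is to move feature reuse out of the data path (concatenation) and into the weight space: during training the module keeps a parallel branch, and before deployment the branches are folded into a single convolution, so the inference graph carries no extra operator. The sketch below is a minimal illustration of this structural re-parameterization idea in PyTorch, assuming a 3x3 depthwise convolution in parallel with an identity BatchNorm branch; it is not the authors' released RepGhost implementation, and the names `RepBranch` and `fuse` are hypothetical.

```python
import torch
import torch.nn as nn


class RepBranch(nn.Module):
    """Training-time block: a 3x3 depthwise conv + BN in parallel with an
    identity BN branch. The branch outputs are summed, so the whole block
    can later be fused into a single 3x3 depthwise conv for inference."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1,
                              groups=channels, bias=False)
        self.conv_bn = nn.BatchNorm2d(channels)
        self.id_bn = nn.BatchNorm2d(channels)  # identity branch (feature reuse)

    def forward(self, x):
        return self.conv_bn(self.conv(x)) + self.id_bn(x)

    @torch.no_grad()
    def fuse(self) -> nn.Conv2d:
        """Fold both branches into one 3x3 depthwise conv (inference-time)."""

        def fuse_conv_bn(weight, bn):
            # Fold BN statistics into the preceding conv's weight and bias:
            # BN(W*x) = W * (gamma/std) * x + (beta - mean * gamma/std)
            std = (bn.running_var + bn.eps).sqrt()
            w = weight * (bn.weight / std).reshape(-1, 1, 1, 1)
            b = bn.bias - bn.running_mean * bn.weight / std
            return w, b

        # Fuse the conv+BN branch.
        w_conv, b_conv = fuse_conv_bn(self.conv.weight, self.conv_bn)

        # Express the identity-BN branch as an equivalent 3x3 depthwise conv
        # (a kernel with 1 at the center), then fold its BN the same way.
        c = self.id_bn.num_features
        id_weight = torch.zeros(c, 1, 3, 3, device=self.conv.weight.device)
        id_weight[:, 0, 1, 1] = 1.0
        w_id, b_id = fuse_conv_bn(id_weight, self.id_bn)

        # Both branches are now 3x3 depthwise convs, so they sum elementwise.
        fused = nn.Conv2d(c, c, 3, padding=1, groups=c, bias=True)
        fused.weight.copy_(w_conv + w_id)
        fused.bias.copy_(b_conv + b_id)
        return fused
```

After training, calling `fuse()` yields a plain convolution whose output matches the two-branch module in eval mode, which is what makes the reuse "implicit" at inference:

```python
block = RepBranch(16).eval()  # eval mode so BN uses running statistics
x = torch.randn(1, 16, 32, 32)
fused = block.fuse()
assert torch.allclose(block(x), fused(x), atol=1e-5)
```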
