RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization
Abstract
Feature reuse has been a key technique in lightweight convolutional neural network (CNN) design. Current methods usually utilize a concatenation operator to keep large channel numbers cheaply (and thus large network capacity) by reusing feature maps from other layers. Although concatenation is parameter- and FLOPs-free, its computational cost on hardware devices is non-negligible. To address this, this paper provides a new perspective on realizing feature reuse via the structural re-parameterization technique. A novel hardware-efficient RepGhost module is proposed for implicit feature reuse via re-parameterization, instead of using the concatenation operator. Based on the RepGhost module, we develop our efficient RepGhost bottleneck and RepGhostNet. Experiments on ImageNet and COCO benchmarks demonstrate that the proposed RepGhostNet is much more effective and efficient than GhostNet and MobileNetV3 on mobile devices. Specifically, our RepGhostNet surpasses GhostNet 0.5x by 2.5% Top-1 accuracy on the ImageNet dataset with fewer parameters and comparable latency on an ARM-based mobile phone.
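The central mechanism here, replacing an explicit concatenation with branches that are fused into a single convolution at inference, can be illustrated with a minimal structural re-parameterization sketch. The PyTorch code below is an assumption-laden illustration, not the authors' released implementation: the `RepBlock` name, the depthwise 3x3 convolution, and the BN-only identity branch are hypothetical choices used to show how a training-time add of two branches collapses into one conv with no extra runtime operator.

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv_w, bn):
    # Fold BatchNorm statistics into the preceding conv's weight and bias.
    std = (bn.running_var + bn.eps).sqrt()
    w = conv_w * (bn.weight / std).reshape(-1, 1, 1, 1)
    b = bn.bias - bn.running_mean * bn.weight / std
    return w, b

class RepBlock(nn.Module):
    """Illustrative re-parameterizable block: conv-BN branch + BN-only
    identity branch during training, a single fused conv at inference."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Depthwise conv branch (groups=channels), no bias so BN folds cleanly.
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=pad, groups=channels, bias=False)
        self.bn_conv = nn.BatchNorm2d(channels)
        # Identity branch: BN only, standing in for "reused" feature maps.
        self.bn_id = nn.BatchNorm2d(channels)
        self.channels = channels
        self.kernel_size = kernel_size

    def forward(self, x):
        # Training-time feature reuse as an addition, not a concatenation.
        return self.bn_conv(self.conv(x)) + self.bn_id(x)

    @torch.no_grad()
    def reparameterize(self):
        # Fuse both branches into one depthwise conv; valid because
        # (W1*x + b1) + (W2*x + b2) == (W1+W2)*x + (b1+b2) for linear ops.
        w_conv, b_conv = fuse_conv_bn(self.conv.weight, self.bn_conv)
        # Express the identity branch as a depthwise kernel with 1 at center.
        w_id = torch.zeros_like(self.conv.weight)
        c = self.kernel_size // 2
        w_id[:, 0, c, c] = 1.0
        w_id, b_id = fuse_conv_bn(w_id, self.bn_id)
        fused = nn.Conv2d(self.channels, self.channels, self.kernel_size,
                          padding=c, groups=self.channels, bias=True)
        fused.weight.copy_(w_conv + w_id)
        fused.bias.copy_(b_conv + b_id)
        return fused

# The fused conv matches the two-branch module in eval mode:
m = RepBlock(16).eval()
x = torch.randn(1, 16, 32, 32)
assert torch.allclose(m(x), m.reparameterize()(x), atol=1e-4)
```

The equivalence check at the end is what makes the trick "free" at deployment: the training graph has two branches and an add, while the inference graph is a single convolution with no concatenation, addition, or extra memory traffic.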