arxiv:2103.06495

Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition

Published on Mar 11, 2021

Abstract

Linguistic knowledge is of great benefit to scene text recognition. However, how to effectively model linguistic rules in end-to-end deep networks remains a research challenge. In this paper, we argue that the limited capacity of language models comes from: 1) implicit language modeling; 2) unidirectional feature representation; and 3) a language model with noisy input. Correspondingly, we propose an autonomous, bidirectional and iterative ABINet for scene text recognition. Firstly, the autonomous principle suggests blocking gradient flow between the vision and language models to enforce explicit language modeling. Secondly, a novel bidirectional cloze network (BCN) is proposed as the language model, based on bidirectional feature representation. Thirdly, we propose an iterative-correction execution manner for the language model, which can effectively alleviate the impact of noisy input. Additionally, based on the ensemble of iterative predictions, we propose a self-training method which can learn from unlabeled images effectively. Extensive experiments indicate that ABINet is superior on low-quality images and achieves state-of-the-art results on several mainstream benchmarks. Besides, ABINet trained with ensemble self-training shows promising improvement toward human-level recognition. Code is available at https://github.com/FangShancheng/ABINet.
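The sketch below illustrates two of the ideas from the abstract, gradient blocking between the vision and language models ("autonomous") and iterative correction, in PyTorch-style pseudocode. The `VisionModel` and `LanguageModel` classes are hypothetical placeholders, and the fusion step of the real model is omitted; this is a minimal sketch of the training/decoding interaction, not the official implementation (see the linked repository for that).

```python
# Minimal sketch of the "autonomous" and "iterative" ideas described in the
# abstract. VisionModel / LanguageModel are hypothetical stand-ins, not the
# official ABINet modules (https://github.com/FangShancheng/ABINet).
import torch
import torch.nn as nn

class VisionModel(nn.Module):
    """Placeholder vision model: image features -> per-character logits."""
    def __init__(self, feat_dim=512, num_classes=37):
        super().__init__()
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, feats):               # feats: (B, T, feat_dim)
        return self.head(feats)              # (B, T, num_classes)

class LanguageModel(nn.Module):
    """Placeholder bidirectional language model: noisy character
    probabilities in -> refined character logits out."""
    def __init__(self, num_classes=37, hidden=256):
        super().__init__()
        self.proj = nn.Linear(num_classes, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, probs):                # probs: (B, T, num_classes)
        h, _ = self.rnn(self.proj(probs))
        return self.head(h)

def recognize(vision, language, feats, num_iters=3):
    """Vision prediction followed by iterative language-model correction."""
    logits = vision(feats)
    for _ in range(num_iters):                # iterative correction
        probs = logits.softmax(dim=-1)
        # "Autonomous": block the gradient between vision and language
        # models so the language model sees explicit text-like input.
        logits = language(probs.detach())
    return logits

# Usage with random features standing in for a backbone's output.
vision, language = VisionModel(), LanguageModel()
feats = torch.randn(2, 25, 512)
print(recognize(vision, language, feats).shape)   # torch.Size([2, 25, 37])
```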
