  • Watch, attend and parse: An end-to-end neural network based approach to handwritten mathematical expression recognition
    Zhang, Jianshu; Du, Jun; Zhang, Shiliang; Liu, Dan; Hu, Yulong; Hu, Jinshui; Wei, Si; Dai, Lirong

    Pattern Recognition, Volume 71, November 2017
    Journal Article

    Highlights:
    • A novel neural-network-based approach to handwritten mathematical expression recognition.
    • An end-to-end encoder-decoder framework that avoids explicit symbol segmentation and the computational demands of employing a mathematical expression grammar.
    • A deep fully convolutional neural network as the encoder.
    • A coverage-based attention model that incorporates the attention history.
    • Attention visualization showing the link between the input image and the output symbol sequence in LaTeX format.
    • To the best of our knowledge, the best published expression recognition accuracy on the CROHME 2014 competition set using only the official training data.

    Machine recognition of a handwritten mathematical expression (HME) is challenging due to the ambiguities of handwritten symbols and the two-dimensional structure of mathematical expressions. Inspired by recent work in deep learning, we present Watch, Attend and Parse (WAP), a novel end-to-end neural-network-based approach that learns to recognize HMEs in a two-dimensional layout and outputs them as one-dimensional character sequences in LaTeX format. Unlike traditional methods, the proposed model avoids the problems that stem from explicit symbol segmentation, and it does not require a predefined expression grammar. The problems of symbol recognition and structural analysis are handled, respectively, by a watcher and a parser: as the watcher, we employ a convolutional neural network encoder that takes HME images as input; as the parser, we employ a recurrent neural network decoder equipped with an attention mechanism to generate LaTeX sequences. The correspondence between the input expressions and the output LaTeX sequences is learned automatically by the attention mechanism. We validate the proposed approach on a benchmark published by the CROHME international competition. Using only the official training dataset, WAP significantly outperformed the state-of-the-art method, with expression recognition accuracies of 46.55% on CROHME 2014 and 44.55% on CROHME 2016.
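    To make the watcher-parser design concrete, below is a minimal PyTorch sketch of one decoding step with coverage-based attention over CNN feature maps. This is an illustration under assumptions, not the authors' implementation: the class name, dimensions, and token ids are hypothetical, the CNN watcher is stood in for by a pre-flattened feature map, and the coverage term is reduced to a simple linear projection of the accumulated attention (the paper's coverage model is richer).

    ```python
    # Hypothetical sketch of a WAP-style decoding step: a GRU "parser" attends
    # over flattened CNN "watcher" features; a coverage vector accumulates past
    # attention so the model is discouraged from re-attending to parsed regions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CoverageAttentionDecoder(nn.Module):
        def __init__(self, vocab_size, feat_dim=128, hid_dim=256, emb_dim=64, attn_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.gru = nn.GRUCell(emb_dim + feat_dim, hid_dim)
            # Features, decoder state, and coverage are projected into a shared
            # space, summed, and reduced to one attention energy per location.
            self.W_feat = nn.Linear(feat_dim, attn_dim, bias=False)
            self.W_hid = nn.Linear(hid_dim, attn_dim, bias=False)
            self.W_cov = nn.Linear(1, attn_dim, bias=False)  # simplified coverage term
            self.v = nn.Linear(attn_dim, 1, bias=False)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, feats, prev_tok, hidden, coverage):
            # feats: (B, L, feat_dim) flattened CNN feature map; coverage: (B, L)
            energy = self.v(torch.tanh(
                self.W_feat(feats) + self.W_hid(hidden).unsqueeze(1)
                + self.W_cov(coverage.unsqueeze(-1)))).squeeze(-1)   # (B, L)
            alpha = F.softmax(energy, dim=1)                         # attention weights
            context = (alpha.unsqueeze(-1) * feats).sum(dim=1)       # (B, feat_dim)
            hidden = self.gru(torch.cat([self.embed(prev_tok), context], dim=1), hidden)
            logits = self.out(hidden)                                # next LaTeX token scores
            return logits, hidden, coverage + alpha                  # accumulate coverage

    # Toy usage: batch of 2 images whose encoder produced 48 spatial locations.
    dec = CoverageAttentionDecoder(vocab_size=100)
    feats = torch.randn(2, 48, 128)
    hidden = torch.zeros(2, 256)
    coverage = torch.zeros(2, 48)
    tok = torch.zeros(2, dtype=torch.long)            # hypothetical <sos> token id
    for _ in range(5):                                # greedy decoding, 5 steps
        logits, hidden, coverage = dec(feats, tok, hidden, coverage)
        tok = logits.argmax(dim=1)
    ```

    The coverage sum is the key design point the highlights emphasize: because alpha from every previous step is folded into the coverage input, the attention scorer can learn to down-weight image regions whose symbols have already been emitted, which helps with over- and under-parsing of long expressions.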