  • Deep residual pooling network for texture recognition
    Mao, Shangbo; Rajan, Deepu; Chia, Liang Tien

    Pattern Recognition, April 2021, Volume 112
    Journal Article

    Highlights:
    • We propose a learnable residual pooling layer comprising a residual encoding module and an aggregation module, which retains spatial information and aggregates it into a lower-dimensional feature.
    • We propose an end-to-end learning framework that integrates the residual pooling layer into any pre-trained CNN model for efficient feature transfer for texture recognition.
    • We compare the proposed pooling layer with other residual encoding schemes, demonstrating state-of-the-art performance on benchmark texture datasets, an industry dataset, and a scene recognition dataset.

    Abstract:
    Current deep learning-based texture recognition methods extract spatially orderless features from deep models pre-trained on large-scale image datasets. These methods either produce high-dimensional features or require multiple steps such as dictionary learning, feature encoding, and dimension reduction. In this paper, we propose a novel end-to-end learning framework that not only overcomes these limitations but also demonstrates faster learning. The proposed framework incorporates a residual pooling layer consisting of a residual encoding module and an aggregation module. The residual encoder preserves spatial information for improved feature learning, and the aggregation module generates an orderless feature for classification through simple averaging. The resulting feature has the lowest dimension among previous deep texture recognition approaches, yet it achieves state-of-the-art performance on benchmark texture recognition datasets such as FMD, DTD, and 4D Light, and on an industry dataset used for metal surface anomaly detection. Additionally, the proposed method obtains comparable results on the MIT-Indoor scene recognition dataset. Our code is available at https://github.com/maoshangbo/DRP-Texture-Recognition.
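    The two-module design described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the authors' implementation: it assumes a single learned codeword per layer and a ReLU on the residuals, and it omits the learnable parameters and CNN backbone; the actual details are in the paper and repository.

    ```python
    import numpy as np

    def residual_pooling(feature_map, codeword):
        """Sketch of a residual pooling layer: a residual encoding step
        followed by an aggregation step (simple spatial averaging)."""
        # feature_map: (H, W, C) activations from a pre-trained CNN backbone
        # codeword: (C,) vector standing in for a learnable codeword
        # Residual encoding: subtract the codeword at every spatial location,
        # keeping the (H, W) layout so spatial information is preserved.
        residuals = np.maximum(feature_map - codeword, 0.0)  # ReLU assumed
        # Aggregation: average over the spatial dimensions to produce an
        # orderless, low-dimensional (C,) descriptor for classification.
        return residuals.mean(axis=(0, 1))

    # Example: a 4x4 spatial map with 8 channels
    fmap = np.random.rand(4, 4, 8)
    code = np.zeros(8)
    descriptor = residual_pooling(fmap, code)
    print(descriptor.shape)  # (8,)
    ```

    Note that the output dimension equals the backbone's channel count C, which is how the pooled feature stays low-dimensional compared with dictionary-based encoders whose output grows with the number of codewords.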