Humans can naturally and effectively find salient regions in complex scenes. Motivated by this observation, attention mechanisms were introduced into computer vision with the aim of imitating this aspect of the human visual system. Such an attention mechanism can be regarded as a dynamic weight adjustment process based on features of the input image. Attention mechanisms have achieved great success in many visual tasks, including image classification, object detection, semantic segmentation, video understanding, image generation, 3D vision, multimodal tasks, and self-supervised learning. In this survey, we provide a comprehensive review of various attention mechanisms in computer vision and categorize them according to approach, such as channel attention, spatial attention, temporal attention, and branch attention; a related repository
https://github.com/MenghaoGuo/Awesome-Vision-Attentions
is dedicated to collecting related work. We also suggest future directions for attention mechanism research.
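The "dynamic weight adjustment" view above can be made concrete with channel attention, one of the categories the survey covers. Below is a minimal NumPy sketch of a squeeze-and-excitation style channel gate; the function name and the two-layer ReLU/sigmoid bottleneck are illustrative assumptions, not an implementation taken from the survey.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    x  : feature map of shape (C, H, W)
    w1 : (C, C // r) reduction weights;  w2 : (C // r, C) expansion weights
    Returns the feature map with each channel rescaled by a learned gate.
    """
    squeeze = x.mean(axis=(1, 2))                     # global average pool -> (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)            # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))       # sigmoid weights in (0, 1)
    return x * gate[:, None, None]                    # per-channel reweighting
```

Because the gate lies in (0, 1), each channel is attenuated rather than amplified; spatial attention would instead produce an (H, W) map multiplying all channels.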
The irregular domain and lack of ordering make it challenging to design deep neural networks for point cloud processing. This paper presents a novel framework named Point Cloud Transformer (PCT) for point cloud learning. PCT is based on the Transformer, which has achieved huge success in natural language processing and displays great potential in image processing. It is inherently permutation invariant when processing a sequence of points, making it well-suited for point cloud learning. To better capture local context within the point cloud, we enhance input embedding with the support of farthest point sampling and nearest neighbor search. Extensive experiments demonstrate that PCT achieves state-of-the-art performance on shape classification, part segmentation, semantic segmentation, and normal estimation tasks.
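The input-embedding step above relies on two classic point cloud primitives, farthest point sampling and nearest neighbor search. A hedged NumPy sketch of both (function names and the greedy formulation are our own; PCT's actual neighborhood grouping differs in detail):

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily pick k point indices, each farthest from those already chosen."""
    chosen = [seed]
    dist = np.linalg.norm(points - points[seed], axis=1)  # distance to chosen set
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                        # farthest remaining point
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)

def knn(points, queries, k):
    """Indices of the k nearest neighbors (in points) of each query point."""
    d = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=2)
    return np.argsort(d, axis=1)[:, :k]
```

Farthest point sampling gives well-spread anchor points; kNN then gathers a local neighborhood around each anchor for the embedding.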
Attention mechanisms, especially self-attention, have played an increasingly important role in deep feature representation for visual tasks. Self-attention updates the feature at each position by computing a weighted sum of features using pair-wise affinities across all positions to capture the long-range dependency within a single sample. However, self-attention has quadratic complexity and ignores potential correlation between different samples. This article proposes a novel attention mechanism, which we call external attention, based on two external, small, learnable, shared memories. It can be implemented easily using two cascaded linear layers and two normalization layers, and conveniently replaces self-attention in existing popular architectures. External attention has linear complexity and implicitly considers the correlations between all data samples. We further incorporate the multi-head mechanism into external attention to provide an all-MLP architecture, external attention MLP (EAMLP), for image classification. Extensive experiments on image classification, object detection, semantic segmentation, instance segmentation, image generation, and point cloud analysis reveal that our method provides results comparable or superior to the self-attention mechanism and some of its variants, at much lower computational and memory cost.
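The two-memory construction can be sketched in a few lines of NumPy. This is an illustrative reading of the mechanism, not the authors' code: `mk` and `mv` stand in for the two shared memories (the cascaded linear layers), and the double normalization here (softmax over memory slots, then an L1 normalization across positions) is one plausible rendering of the normalization described in the abstract.

```python
import numpy as np

def external_attention(x, mk, mv):
    """External attention sketch with two shared memories.

    x  : (n, d) input features for one sample
    mk : (d, s) key memory;  mv : (s, d) value memory  (s fixed, so cost is O(n))
    """
    attn = x @ mk                                        # (n, s) affinities
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)              # softmax over memory slots
    attn /= attn.sum(axis=0, keepdims=True) + 1e-9       # L1 norm across positions
    return attn @ mv                                     # (n, d) output
```

Because the memories are shared across the whole dataset rather than computed per sample, the cost is linear in n and correlations between samples are captured implicitly.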
Video stabilization is necessary for many hand-held shot videos. Although various video stabilization methods based on smoothing of 2D, 2.5D, or 3D camera paths have been proposed over the past decades, hardly any deep learning methods have addressed this problem. Instead of explicitly estimating and smoothing the camera path, we present a novel online deep learning framework that learns the stabilization transformation for each unsteady frame, given historical steady frames. Our network is composed of a generative network with spatial transformer networks embedded in different layers, and generates a stable frame for the incoming unstable frame by computing an appropriate affine transformation. We also introduce an adversarial network to determine the stability of a piece of video. The network is trained directly using pairs of steady and unsteady videos. Experiments show that our method produces results similar to those of traditional methods; moreover, it can handle challenging unsteady videos of low quality where traditional methods fail, such as videos with heavy noise or multiple exposures. Our method runs in real time, much faster than traditional methods.
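Since stabilization here amounts to applying a per-frame affine transformation, the warping step itself is easy to illustrate. A minimal NumPy sketch using inverse mapping with nearest-neighbor sampling (in the paper the affine matrix is predicted by the network; here it is simply given):

```python
import numpy as np

def affine_warp(frame, A):
    """Warp a grayscale frame by a 2x3 affine matrix.

    Uses inverse mapping with nearest-neighbor sampling; pixels that map
    outside the source frame are filled with zeros.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    # For each output pixel, find its source location under A's inverse.
    Ainv = np.linalg.inv(np.vstack([A, [0.0, 0.0, 1.0]]))[:2]
    sx, sy = np.rint(Ainv @ coords).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros(h * w, dtype=frame.dtype)
    out[valid] = frame[sy[valid], sx[valid]]
    return out.reshape(h, w)
```

Inverse mapping (sampling the source for every output pixel) avoids the holes that forward mapping would leave; a spatial transformer network performs the same operation differentiably with bilinear sampling.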
A Large Chinese Text Dataset in the Wild
Yuan, Tai-Ling; Zhu, Zhe; Xu, Kun ...
Journal of Computer Science and Technology, 05/2019, Volume 34, Number 3
Journal Article
Peer-reviewed
In this paper, we introduce a very large Chinese text dataset in the wild. While optical character recognition (OCR) in document images is well studied and many commercial tools are available, the detection and recognition of text in natural images is still a challenging problem, especially for more complicated character sets such as Chinese text. Lack of training data has always been a problem, especially for deep learning methods which require massive training data. In this paper, we provide details of a newly created dataset of Chinese text with about 1 million Chinese characters from 3,850 unique ones, annotated by experts in over 30,000 street view images. This is a challenging dataset with good diversity, containing planar text, raised text, text under poor illumination, distant text, partially occluded text, etc. For each character, the annotation includes its underlying character, bounding box, and six attributes. The attributes indicate the character's background complexity, appearance, style, etc. Besides the dataset, we give baseline results using state-of-the-art methods for three tasks: character recognition (top-1 accuracy of 80.5%), character detection (AP of 70.9%), and text line detection (AED of 22.1). The dataset, source code, and trained models are publicly available.
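The AED metric for text line detection is an average edit distance between predicted and ground-truth text lines. As a sketch, assuming AED is built on the classic Levenshtein distance (the dataset's actual evaluation code may differ in matching and normalization details):

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b by dynamic programming."""
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]                           # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,     # delete ca
                           cur[j - 1] + 1,  # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute or match
        prev = cur
    return prev[-1]

def average_edit_distance(preds, gts):
    """Mean edit distance over paired predicted / ground-truth text lines."""
    return sum(edit_distance(p, g) for p, g in zip(preds, gts)) / len(gts)
```

Lower is better: an AED of 22.1 means the predicted text lines differ from the ground truth by roughly 22 character edits on average, under this reading of the metric.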
We present Skeleton-CutMix, a simple and effective skeleton augmentation framework for supervised domain adaptation, and show its advantage in skeleton-based action recognition tasks. Existing approaches usually perform domain adaptation for action recognition with elaborate loss functions that aim to achieve domain alignment. However, they fail to capture the intrinsic characteristics of skeleton representation. Benefiting from the well-defined correspondence between bones of a pair of skeletons, we instead mitigate domain shift by fabricating skeleton data in a mixed domain, which mixes up bones from the source domain and the target domain. The fabricated skeletons in the mixed domain can be used to augment training data and train a more general and robust model for action recognition. Specifically, we hallucinate new skeletons by using pairs of skeletons from the source and target domains; a new skeleton is generated by exchanging some bones from the skeleton in the source domain with corresponding bones from the skeleton in the target domain, which resembles a cut-and-mix operation. When exchanging bones from different domains, we introduce a class-specific bone sampling strategy so that bones that are more important for an action class are exchanged with higher probability when generating augmentation samples for that class. We show experimentally that the simple bone exchange strategy for augmentation is efficient and effective, and that distinctive motion features are preserved while mixing both action and style across domains. We validate our method in cross-dataset and cross-age settings on the NTU-60 and ETRI-Activity3D datasets with an average gain of over 3% in action recognition accuracy, and demonstrate its superior performance over previous domain adaptation approaches as well as other skeleton augmentation strategies.
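The bone-exchange operation can be sketched directly. In this hypothetical NumPy fragment, each bone is swapped in from the target domain with a class-specific probability, mirroring the sampling strategy described above; the flat per-bone feature representation and the given probability vector are simplifying assumptions:

```python
import numpy as np

def skeleton_cutmix(src_bones, tgt_bones, probs, rng):
    """Mix two corresponding skeletons bone by bone.

    src_bones, tgt_bones : (n_bones, d) bone features from source / target domain
    probs : (n_bones,) class-specific probability of taking each bone
            from the target domain (higher for bones important to the class)
    Returns the mixed skeleton and the boolean exchange mask.
    """
    take_tgt = rng.random(len(probs)) < probs            # sample which bones to swap
    mixed = np.where(take_tgt[:, None], tgt_bones, src_bones)
    return mixed, take_tgt
```

The well-defined bone correspondence is what makes this cut-and-mix well posed: bone i of the source always maps to bone i of the target.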
We present ClusterVO, a stereo visual odometry system which simultaneously clusters and estimates the motion of both ego and surrounding rigid clusters/objects. Unlike previous solutions relying on batch input or imposing priors on scene structure or dynamic object models, ClusterVO is online and general, and thus can be used in various scenarios including indoor scene understanding and autonomous driving. At the core of our system lies a multi-level probabilistic association mechanism and a heterogeneous Conditional Random Field (CRF) clustering approach combining semantic, spatial, and motion information to jointly infer cluster segmentations online for every frame. The poses of the camera and dynamic objects are instantly solved through a sliding-window optimization. Our system is evaluated on the Oxford Multimotion and KITTI datasets both quantitatively and qualitatively, reaching results comparable to state-of-the-art solutions on both odometry and dynamic trajectory recovery.
This paper presents a Semantic Positioning System (SPS) to enhance the accuracy of mobile device geo-localization in outdoor urban environments. Although the traditional Global Positioning System (GPS) can offer a rough localization, it lacks the necessary accuracy for applications such as Augmented Reality (AR). Our SPS integrates Geographic Information System (GIS) data, GPS signals, and visual image information to estimate the 6 Degree-of-Freedom (DoF) pose through cross-view semantic matching. This approach has excellent scalability to support GIS context with Levels of Detail (LOD). The map data representation is the Digital Elevation Model (DEM), a cost-effective aerial map that allows fast deployment for large-scale areas. However, the DEM lacks geometric and texture details, making it challenging for traditional visual feature extraction to establish pixel/voxel-level cross-view correspondences. To address this, we sample observation pixels from the query ground-view image using predicted semantic labels. We then propose an iterative homography estimation method with semantic correspondences. To improve the efficiency of the overall system, we further employ a heuristic search to speed up the matching process. The proposed method is robust, real-time, and automatic. Quantitative experiments on the challenging Bund dataset show that we achieve a positioning accuracy of 73.24%, surpassing the baseline skyline-based method by 20%. Compared with the state-of-the-art semantic-based approach on the KITTI dataset, we improve the positioning accuracy by an average of 5%.
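The homography estimation at the heart of the cross-view matching can be illustrated with the classic direct linear transform (DLT). The paper wraps this in an iterative, semantics-driven loop; the sketch below shows only a single least-squares estimate from given point correspondences:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src (homogeneous).

    src, dst : (n, 2) arrays of corresponding points, n >= 4.
    Solves A h = 0 via SVD and returns H normalized so H[2, 2] == 1.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)          # right singular vector of smallest sigma
    return H / H[2, 2]
```

In the paper the correspondences come from predicted semantic labels rather than texture features, and the estimate is refined iteratively; with noisy semantic matches one would normalize the points and add a robust loop around this core.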
Accurate Dynamic SLAM Using CRF-Based Long-Term Consistency
Du, Zheng-Jun; Huang, Shi-Sheng; Mu, Tai-Jiang ...
IEEE Transactions on Visualization and Computer Graphics, 2022-04-01, Volume 28, Number 4
Journal Article
Peer-reviewed
Accurate camera pose estimation is essential and challenging for real-world dynamic 3D reconstruction and augmented reality applications. In this article, we present a novel RGB-D SLAM approach for accurate camera pose tracking in dynamic environments. Previous methods detect dynamic components only across a short time-span of consecutive frames. Instead, we provide a more accurate dynamic 3D landmark detection method, followed by the use of long-term consistency via conditional random fields, which leverages long-term observations from multiple frames. Specifically, we first introduce an efficient initial camera pose estimation method based on distinguishing dynamic from static points using graph-cut RANSAC. These static/dynamic labels are used as priors for the unary potential in the conditional random fields, which further improves the accuracy of dynamic 3D landmark detection. Evaluation using the TUM and Bonn RGB-D dynamic datasets shows that our approach significantly outperforms state-of-the-art methods, providing much more accurate camera trajectory estimation in a variety of highly dynamic environments. We also show that dynamic 3D reconstruction can benefit from the camera poses estimated by our RGB-D SLAM approach.
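The long-term consistency idea can be caricatured in a few lines: aggregate static/dynamic votes for each 3D landmark over many frames instead of deciding from a short time-span. The majority vote below is a deliberately crude stand-in for the paper's CRF inference, which additionally couples landmarks through pairwise potentials:

```python
import numpy as np

def longterm_dynamic_labels(observations):
    """Label each 3D landmark as dynamic from long-term per-frame votes.

    observations : (n_landmarks, n_frames) array with 1 where a frame
    observed the landmark as dynamic (e.g. from graph-cut RANSAC priors)
    and 0 where it looked static. A landmark is marked dynamic when the
    majority of its long-term observations agree.
    """
    dynamic_ratio = observations.mean(axis=1)   # fraction of "dynamic" votes
    return dynamic_ratio > 0.5
```

A single spurious per-frame label (motion blur, occlusion) is outvoted by the long-term evidence, which is the intuition behind using multi-frame observations rather than consecutive-frame checks.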