The parking problem, caused by a low parking space utilization ratio, has long plagued drivers. In this work, we propose an intelligent detection method based on deep learning technology. First, we constructed a TensorFlow deep learning platform for detecting vehicles. Second, the optimal time interval for extracting video stream images was determined in accordance with the judgment time for finding a parking space and the length of time taken by a vehicle from arrival to departure. Finally, the parking space order and number were obtained using the data layering method and the TimSort algorithm, and parking space vacancy was judged via the indirect Monte Carlo method. To improve the detection accuracy between vehicles and parking spaces, the vehicles in the training dataset were spaced farther apart than those observed during detection. A case study verified the reliability of the parking space order and number and of the judgment of parking space vacancies.
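The indirect Monte Carlo vacancy judgment can be illustrated with a minimal sketch: sample random points inside a parking-space rectangle and estimate the fraction covered by detected vehicle boxes. The function names, box format `(x0, y0, x1, y1)`, and the 0.3 occupancy threshold are illustrative assumptions, not the paper's actual implementation.

```python
import random

def mc_occupancy_ratio(space_box, vehicle_boxes, n_samples=2000, seed=0):
    """Estimate the fraction of a parking space covered by detected vehicles
    via Monte Carlo sampling (hypothetical sketch of the abstract's idea)."""
    x0, y0, x1, y1 = space_box
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        px = rng.uniform(x0, x1)
        py = rng.uniform(y0, y1)
        # a sample "hits" if any detected vehicle box contains it
        if any(bx0 <= px <= bx1 and by0 <= py <= by1
               for bx0, by0, bx1, by1 in vehicle_boxes):
            hits += 1
    return hits / n_samples

def is_vacant(space_box, vehicle_boxes, threshold=0.3):
    # a space is judged vacant when estimated coverage is below a threshold
    return mc_occupancy_ratio(space_box, vehicle_boxes) < threshold
```

For example, a vehicle box covering half of a space yields an occupancy ratio near 0.5, so the space is judged occupied.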
Terrestrial laser scanning (TLS) is a widely used remote sensing technique that can produce dense point cloud data rapidly and is particularly suited for surface deformation monitoring. Deformation magnitude is typically estimated by comparing TLS scans of the same area acquired at different time epochs. However, with this approach it is not clear whether the difference between two successive surveys results from actual surface deformation or from measurement error. Hence, it is vital to determine the minimum detectable deformation (MDD) of a TLS device for a given registration and point cloud error level. In this paper, the MDD is determined based on the computation of the point cloud error entropy. The performance of the proposed method is extensively evaluated numerically using simulated plane board deformation point clouds under a range of distances and incidence angles. The proposed method was also successfully applied to deformation monitoring of a landslide test site located at the Wuhan University of Technology. The experimental results demonstrate that the theoretical MDD matches the actual deformation well, and that deformation greater than the MDD can be accurately detected by the TLS device.
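The underlying idea, that a deformation is only detectable once it exceeds the combined error level, can be sketched as follows. The Gaussian error model, the quadrature combination of registration and point cloud errors, and the 1.96 confidence factor are assumptions for illustration; the paper's entropy-based derivation is not reproduced here.

```python
import math

def gaussian_entropy(sigma):
    # differential entropy of a zero-mean Gaussian N(0, sigma^2), in nats
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

def minimum_detectable_deformation(sigma_reg, sigma_pt, k=1.96):
    """Combine independent registration and point-cloud error levels and
    scale by a confidence factor k (95% for k = 1.96, assumed here)."""
    sigma = math.sqrt(sigma_reg ** 2 + sigma_pt ** 2)
    return k * sigma, gaussian_entropy(sigma)
```

For instance, registration and point errors of 3 mm and 4 mm combine to a 5 mm error level, giving an MDD of roughly 9.8 mm at 95% confidence under these assumptions.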
Accurate fire identification can help to control fires. Traditional fire detection methods are mainly based on temperature or smoke detectors, which are susceptible to damage or interference from the outside environment. Meanwhile, most current deep learning methods are less discriminative with respect to dynamic fire and have lower detection precision when a fire changes. Therefore, we propose a dynamic convolution YOLOv5 fire detection method using a video sequence. Our method first uses the K-means++ algorithm to optimize anchor box clustering, which significantly reduces the rate of classification error. Then, dynamic convolution is introduced into the convolution layer of YOLOv5. Finally, the network heads of YOLOv5's neck and head are pruned to improve the detection speed. Experimental results verify that the proposed dynamic convolution YOLOv5 fire detection method outperforms the YOLOv5 method in recall, precision and F1-score. In particular, compared with three other deep learning methods, the precision of the proposed algorithm is improved by 13.7%, 10.8% and 6.1%, respectively, while the F1-score is improved by 15.8%, 12% and 3.8%, respectively. The method described in this paper is applicable not only to short-range indoor fire identification but also to long-range outdoor fire detection.
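K-means++ anchor clustering for YOLO-style detectors is conventionally done on box widths and heights with a 1 − IoU distance rather than Euclidean distance. The sketch below follows that convention; the function names and the assumption that boxes share a top-left corner for IoU purposes are standard in YOLO anchor clustering but are not taken from the paper.

```python
import numpy as np

def iou_wh(wh, centers):
    """IoU between boxes given only (w, h), assuming a shared top-left corner."""
    inter = np.minimum(wh[:, None, 0], centers[None, :, 0]) * \
            np.minimum(wh[:, None, 1], centers[None, :, 1])
    union = wh[:, 0] * wh[:, 1]
    union = union[:, None] + (centers[:, 0] * centers[:, 1])[None, :] - inter
    return inter / union

def kmeans_pp_anchors(wh, k=9, iters=50, seed=0):
    """K-means++ seeding and Lloyd iterations with d = 1 - IoU."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.integers(len(wh), size=1)]
    while len(centers) < k:
        # pick the next seed with probability proportional to its distance
        d = (1.0 - iou_wh(wh, centers)).min(axis=1)
        centers = np.vstack([centers, wh[rng.choice(len(wh), p=d / d.sum())]])
    for _ in range(iters):
        assign = (1.0 - iou_wh(wh, centers)).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = wh[assign == j].mean(axis=0)
    return centers[np.argsort(centers.prod(axis=1))]  # sort anchors by area
```

The anchors come back sorted by area, matching the small-to-large anchor assignment across YOLO detection heads.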
The traditional image stitching method has shortcomings such as double shadow (ghosting), chromatic aberration, and visible stitching seams. In view of this, this paper proposes a power function-weighted image stitching method that combines SURF optimization and improved cell acceleration. First, the method uses cosine similarity to preliminarily judge the similarity of the feature points and then uses two-way consistency mutual selection to filter the feature point pairs again; incorrect matching points found in the reverse matching are eliminated at the same time. The MSAC algorithm is then used to perform fine matching. Next, the power function-weighted fusion algorithm is used to calculate the weight of the center point, and the power function weight of the accelerated cell is used to perform the final image fusion. The experimental results show that the matching accuracy of the proposed method is about 11 percentage points higher than that of the traditional SURF algorithm, and the running time is reduced by about 1.6 s. In the image fusion stage, images with different brightness levels, angles, resolutions, and scales were first selected to verify the effectiveness of the proposed method. The results show that the proposed method effectively eliminates ghosting and stitching seams. Compared with the traditional fusion algorithm, the time consumption is reduced by at least 2 s, the mean square error is reduced by about 1.32%∼1.48%, and the information entropy is improved by about 0.98%∼1.70%. The proposed method performs better in matching accuracy and fusion effect and yields better stitching quality.
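The cosine-similarity pre-filter with two-way (mutual) consistency can be sketched as below: a pair of descriptors is kept only if each is the other's best match and their similarity clears a threshold. The function name and the 0.8 threshold are illustrative assumptions; the paper's exact thresholds and MSAC refinement are not reproduced.

```python
import numpy as np

def mutual_cosine_matches(desc_a, desc_b, min_sim=0.8):
    """Two-way consistency filter on cosine similarity of feature descriptors."""
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T                       # cosine similarity matrix
    fwd = sim.argmax(axis=1)            # best match in B for each point of A
    bwd = sim.argmax(axis=0)            # best match in A for each point of B
    # keep a pair only when the forward and reverse matches agree
    return [(i, j) for i, j in enumerate(fwd)
            if bwd[j] == i and sim[i, j] >= min_sim]
```

The surviving pairs would then be passed to a robust estimator such as MSAC for fine matching.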
Concrete wall surfaces are prone to cracking over time, which affects the stability of concrete structures and may even lead to collapse accidents. In view of this, it is necessary to recognize and distinguish concrete cracks so that the stability of the concrete can be assessed. In this paper, we propose a novel approach that fuses the fractal dimension with the UHK-Net deep learning network to conduct semantic recognition of concrete cracks. We first use local fractal dimensions to study concrete cracking and roughly determine the locations of cracks. Then, we use the U-Net Haar-like (UHK-Net) network to construct the crack segmentation network. Ultimately, different types of concrete crack images are used to verify the advantage of the proposed method by comparison with the FCN, U-Net, and YOLOv5 networks. Results show that the proposed method can not only characterize dark crack images but also distinguish small and fine cracks. The pixel accuracy (PA), mean pixel accuracy (MPA), and mean intersection over union (MIoU) of crack segmentation achieved by the proposed method are all greater than 90%.
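A common way to compute a local fractal dimension of a binary crack mask is box counting: cover the patch with boxes of decreasing size and fit the slope of log(count) against log(1/size). The sketch below is a generic box-counting estimator under that assumption, not the paper's specific local-fractal formulation; the box sizes are illustrative.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a square binary crack mask by box counting."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # count boxes of side s containing at least one crack pixel
        boxes = mask[:n - n % s, :n - n % s] \
            .reshape(n // s, s, -1, s).any(axis=(1, 3)).sum()
        counts.append(boxes)
    # slope of log(count) vs log(1/size) is the box-counting dimension
    return np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)[0]
```

A filled region yields a dimension near 2 while a thin crack-like curve yields a value near 1, which is what makes the measure useful for coarse crack localization.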
Falls occur easily in crowded stairways, subway stations, bus stations, and factories, and real-time detection of human falls enables timely assistance. In this paper, we propose ED-YOLO, an efficient real-time detection network for human fall detection. First, a re-parameterization backbone is proposed. The shallow convolution (conv) modules in the backbone are replaced by DBBConv and DBBC3 modules, in which 1×3 and 3×1 convolutions replace pooling in DBBConv, and DBBConv replaces the normal convolution in DBBC3. The deep conv modules in the backbone are replaced by E-DBBConv and E-DBBC3 modules, which improve the ability to extract detailed features. Then, a novel feature enhancement module (FEM) is proposed to enhance the feature representation of the region of interest and the fusion of features; the FEM is added to the feature pyramid network (FPN) to improve detection accuracy. Finally, the CIoU loss is replaced by the Gradient Smoothing-SIoU loss (GS-SIoU loss), introducing gradient smoothing to improve the regression speed and accuracy of the prediction box. To further reduce the inference overhead of the model, the proposed network is pruned. The mAP of the proposed network reaches 96.25%, while the model has only 6.34M parameters, and detection reaches 31 FPS on an RTX 2080 Ti. The proposed network and other mainstream lightweight networks were tested on the test set, and the experimental results show that the human fall detection performance of the proposed network is superior. In particular, the mAP is 2.42% higher than that of YOLOv5s, and the detection speed is 14.8% faster.
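For context on the loss being replaced, the standard CIoU metric combines box overlap with a center-distance penalty and an aspect-ratio consistency term. The sketch below implements that baseline CIoU only; the paper's GS-SIoU loss is its own contribution and is not reproduced here.

```python
import math

def ciou(box1, box2):
    """Complete IoU between two (x0, y0, x1, y1) boxes (baseline metric;
    the paper replaces the loss built on this with GS-SIoU)."""
    ax0, ay0, ax1, ay1 = box1
    bx0, by0, bx1, by1 = box2
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    area_a = (ax1 - ax0) * (ay1 - ay0)
    area_b = (bx1 - bx0) * (by1 - by0)
    iou = inter / (area_a + area_b - inter)
    # squared center distance over squared enclosing-box diagonal
    rho2 = ((ax0 + ax1 - bx0 - bx1) ** 2 + (ay0 + ay1 - by0 - by1) ** 2) / 4.0
    c2 = (max(ax1, bx1) - min(ax0, bx0)) ** 2 + (max(ay1, by1) - min(ay0, by0)) ** 2
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((ax1 - ax0) / (ay1 - ay0))
                              - math.atan((bx1 - bx0) / (by1 - by0))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return iou - rho2 / c2 - alpha * v
```

The training loss is then 1 − CIoU, so perfectly aligned boxes incur zero loss while distant boxes are penalized beyond their zero overlap.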
The contour feature points of an object's point cloud are the main features of human perception of a target and play an important role in many fields, such as indoor model reconstruction and object detection and localization. In this paper, we present a new method to extract the contour feature points of a point cloud, which comprises two main steps: (1) conspicuous and inconspicuous boundary points are extracted according to the distribution characteristics of the azimuth between adjacent vectors in the two-dimensional view; (2) according to the direction of the main feature vector, a two-dimensional projection plane of adjacent points in the bounding sphere is constructed, and crease points are extracted according to the constraint parameter model of the distribution mechanism of adjacent points in the two-dimensional view. We evaluate the performance of the proposed method using objects of different sizes in real-world scenarios. The extraction of contour feature points is also compared with other methods, and the results show that the extraction and anti-noise performance of the proposed method is superior. Moreover, the method is suitable not only for regular flat-shaped buildings but also for objects with irregular curvilinear architecture. Finally, the proposed method involves only one parameter that needs to be tuned, and this parameter can be quickly obtained from the distance resolution.
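The azimuth-distribution criterion in step (1) can be sketched in 2-D: compute the azimuth of the vector from a point to each of its projected neighbors, and flag the point as a boundary point when the sorted azimuths leave a large angular gap (interior points are surrounded on all sides, boundary points are not). The function name and the 90° gap threshold are illustrative assumptions.

```python
import numpy as np

def is_boundary_point(point, neighbors, gap_threshold=np.pi / 2):
    """Flag a point as a boundary point when the azimuths of the vectors to
    its projected neighbors leave a large angular gap (2-D sketch)."""
    vecs = np.asarray(neighbors)[:, :2] - np.asarray(point)[:2]
    angles = np.sort(np.arctan2(vecs[:, 1], vecs[:, 0]))
    # circular gaps between consecutive azimuths, including the wrap-around
    gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
    return gaps.max() > gap_threshold
```

A point with neighbors in every direction has no gap wider than the threshold, while a point on the edge of the cloud shows a gap of roughly 180° or more.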
Point cloud deep learning networks have been widely applied in point cloud classification, part segmentation and semantic segmentation. However, current point cloud deep learning networks are insufficient in local feature extraction, which affects the accuracy of point cloud classification and segmentation. To address this issue, this paper proposes a local-domain multi-level feature fusion point cloud deep learning network. First, a dynamic graph convolution operation is used to obtain the local neighborhood features of the point cloud. Then, relation-shape convolution is used to extract deeper-level edge features, and max pooling is adopted to aggregate the edge features. Finally, point cloud classification and segmentation are realized based on both global and local features. We conduct comparison experiments on the ModelNet40 and ShapeNet datasets, a large-scale 3D CAD model dataset and a richly annotated, large-scale dataset of 3D shapes, respectively. For ModelNet40, the overall accuracy (OA) of the proposed method is similar to that of DGCNN, RS-CNN, PointConv and GAPNet, all exceeding 92%. Compared with PointNet, PointNet++, SO-Net and MSHANet, the OA of the proposed method is improved by 5%, 2%, 3% and 2.6%, respectively. For the ShapeNet dataset, the mean intersection over union (mIoU) of part segmentation achieved by the proposed method is 86.3%, which is 2.9%, 1.4%, 1.7%, 1.7%, 1.2%, 0.1% and 1.0% higher than that of PointNet, RS-Net, SCN, SPLATNet, DGCNN, RS-CNN and LRC-NET, respectively.
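The dynamic graph convolution step builds, for each point, edge features of the form [x_i, x_j − x_i] over a k-nearest-neighbor graph and aggregates them with max pooling, as in DGCNN's EdgeConv. The sketch below shows only that feature construction in plain numpy, with no learned weights; the function name and k value are illustrative.

```python
import numpy as np

def knn_edge_features(points, k=3):
    """Build DGCNN-style edge features [x_i, x_j - x_i] over a kNN graph
    and max-pool them per point (a minimal sketch, no learned MLP)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # exclude each point itself
    idx = np.argsort(d2, axis=1)[:, :k]          # k nearest neighbors
    edge = np.concatenate(
        [np.repeat(points[:, None, :], k, axis=1),  # x_i, repeated per edge
         points[idx] - points[:, None, :]],         # x_j - x_i, the edge offset
        axis=-1)                                  # shape (N, k, 2*D)
    return edge.max(axis=1)                       # max pooling over neighbors
```

In the full network, a shared MLP would transform each edge feature before the max pooling; recomputing the kNN graph in feature space at every layer is what makes the graph "dynamic".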
Registration of point clouds is vital in point cloud data processing: by registering, point cloud data from different views are transformed into a common coordinate system. In the iterative closest projected point (ICPP) method, the three nearest points are used to form a patch or plane, so its performance is greatly affected by noise. More points may be used to construct a plane to reduce the effect of noise, but such a technique may not be suited to scenarios where the physical surface is not a 2-D plane but a curved surface. In this paper, the iterative closest optimal plane (ICOPlane) method is developed: we propose searching for the optimal plane over a possibly curved surface for the registration of point clouds. In addition, to account for the errors of all variables, a constrained weighted total least squares algorithm is derived to estimate the plane parameters and transformation parameters. Both simulated and real experiments are carried out to examine the performance of the developed method, and the experimental results demonstrate that it produces more accurate transformation parameters than the ICPP method.
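The basic building block of plane-based registration, fitting a local plane to a neighborhood and measuring point-to-plane distance, can be sketched as follows. This is ordinary least-squares plane fitting via the covariance eigendecomposition, not the paper's constrained weighted total least squares estimator, and the function names are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set: returns (unit normal, centroid).
    The normal is the eigenvector of the smallest covariance eigenvalue."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    w, v = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return v[:, 0], centroid

def point_to_plane_distance(p, normal, centroid):
    # orthogonal distance from p to the fitted plane
    return abs(np.dot(p - centroid, normal))
```

A total least squares variant would additionally weight and perturb the coordinates of the neighborhood points themselves, which is the refinement the paper derives.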
The shape of an object is mainly described by feature points and lines. Since a feature point can be described by the intersection of two feature lines, feature lines are the key to determining the contour of the object. In this article, a novel method for the generation and regularization of point cloud feature lines is presented, which consists of two main steps: first, outline points are extracted according to the distribution and clustering properties of vectors; second, feature points are sorted according to the vector deflection angle and distance and are fitted using an improved cubic B-spline curve fitting algorithm. The performance of the proposed method is evaluated on both large and small point clouds acquired by terrestrial laser scanning devices in real-world scenes. The results show that the proposed method and the analysis of geometrical properties of neighborhoods (AGPN) method achieve very similar performance on planar objects, accurately extracting the outline points. In the presence of curved surfaces, however, the proposed method significantly outperforms existing methods in detecting outline points. The outlines are regularized by the improved cubic B-spline, which is superior to the traditional cubic B-spline curve fitting algorithm.
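The traditional cubic B-spline that the improved fitting builds on evaluates each curve segment as a fixed blend of four consecutive control points. The sketch below shows the standard uniform cubic B-spline basis only; the paper's improvements to the fitting are not reproduced, and the function name is illustrative.

```python
import numpy as np

def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one uniform cubic B-spline segment at t in [0, 1]
    from four consecutive control points."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0                 # the four basis weights sum to 1
    return (b0 * np.asarray(p0) + b1 * np.asarray(p1)
            + b2 * np.asarray(p2) + b3 * np.asarray(p3))
```

Because the basis weights sum to 1, collinear control points produce points on the same line, which is why B-spline regularization preserves straight outline segments while smoothing noisy ones.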