The existing buffer algorithms cannot effectively meet the demands for high-accuracy buffer analysis in practice, although many efforts have been made over the past 60 years. A generalized buffering algorithm (GBA) is presented, which considers both the geometric distance and the attribute characteristics of all instances within the buffer zone. The proposed algorithm comprises three major steps: (1) select and initialize the target instance; (2) determine buffer boundary points by mining homogeneous patterns; (3) "smoothly" connect the buffer boundary points to generate the generalized buffer zone. The details of generating the generalized point buffer (GPIB) zone, the generalized line buffer (GLB) zone, and the generalized polygon buffer (GPLB) zone are discussed. Two datasets are used to validate the performance of the proposed GBA, and six parameters are applied as indexes to evaluate it. The experimental results show that (1) the GBA converges to the traditional buffering algorithm (TBA) when the angle increment (Δφ) in GPIB, the line increment (ΔL) in GLB, and the arc length increment (ΔS) in GPLB approach zero, respectively; and (2) the proposed GBA can accurately reflect the real situation of the buffer zone, overcoming the deficiencies of the TBA and improving its accuracy in real applications.
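As an illustrative sketch of the geometric part of step (2) for the point case (not the authors' implementation, which additionally mines attribute homogeneity), candidate boundary points of a point buffer can be sampled by stepping the angle in increments of Δφ; the function and parameter names below are our own assumptions.

```python
import math

def point_buffer_boundary(cx, cy, radius, dphi_deg):
    """Sample candidate buffer boundary points around (cx, cy)
    at angle increments of dphi_deg degrees (geometry only)."""
    steps = int(round(360.0 / dphi_deg))
    points = []
    for k in range(steps):
        phi = math.radians(k * dphi_deg)
        points.append((cx + radius * math.cos(phi),
                       cy + radius * math.sin(phi)))
    return points
```

As Δφ shrinks, the sampled polygon approaches the circular TBA buffer, which is consistent with experimental finding (1) above.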
A textured urban 3D mesh is an important part of 3D real-scene technology, and semantically segmenting an urban 3D mesh is a key task in the photogrammetry and remote sensing field. However, due to the irregular structure of a 3D mesh and its redundant texture information, obtaining accurate and robust semantic segmentation results for an urban 3D mesh is challenging. To address this issue, we propose a semantic urban 3D mesh segmentation network (MeshNet) with a sparse prior (SP), named MeshNet-SP. MeshNet-SP consists of a differentiable sparse coding (DSC) subnetwork and a semantic feature extraction (SFE) subnetwork. The DSC subnetwork learns low-intrinsic-dimensional features from raw texture information, which increases the effectiveness and robustness of semantic urban 3D mesh segmentation. The SFE subnetwork produces high-level semantic features by combining the geometric features of the mesh with the low-intrinsic-dimensional texture features. The proposed method is evaluated on the SUM dataset. The ablation results demonstrate that the low-intrinsic-dimensional feature is the key to achieving accurate and robust semantic segmentation, and the comparison results show that the proposed method achieves competitive accuracies, with maximum increases of 34.5%, 35.4%, and 31.8% in mR, mF1, and mIoU, respectively.
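The sparse-coding objective underlying a DSC-style subnetwork can be illustrated, under standard assumptions that are not the paper's actual network, by the classic ISTA iteration for min (1/2)‖x − Dz‖² + λ‖z‖₁; all names below are ours.

```python
import math

def matvec(D, z):
    """Dense matrix-vector product D @ z for row-major lists."""
    return [sum(D[i][j] * z[j] for j in range(len(z))) for i in range(len(D))]

def matTvec(D, r):
    """Transposed product D^T @ r."""
    n = len(D[0])
    return [sum(D[i][j] * r[i] for i in range(len(D))) for j in range(n)]

def soft(v, t):
    """Soft-thresholding operator, the proximal map of the L1 norm."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def ista(x, D, lam=0.1, step=0.1, iters=500):
    """Sparse-code one sample x against dictionary D by proximal gradient."""
    z = [0.0] * len(D[0])
    for _ in range(iters):
        residual = [xi - di for xi, di in zip(x, matvec(D, z))]
        grad = matTvec(D, residual)
        z = [soft(zj + step * gj, step * lam) for zj, gj in zip(z, grad)]
    return z
```

With an identity dictionary the solution is simply the soft-thresholded input, which makes the sparsifying effect of the L1 penalty easy to verify.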
Although some researchers have proposed Field Programmable Gate Array (FPGA) architectures for the Features from Accelerated Segment Test (FAST) and Binary Robust Independent Elementary Features (BRIEF) algorithms, these traditional architectures do not consider image data storage, so no image data can be reused by follow-up algorithms. This paper proposes a new FPGA architecture that considers the reuse of sub-image data. In the proposed architecture, a remainder-based method is first designed for reading the sub-image, and a FAST detector and a BRIEF descriptor are combined for corner detection and matching. Six pairs of satellite images with different textures, located in the Mentougou district, Beijing, China, are used to evaluate the performance of the proposed architecture. The Modelsim simulation results show that: (i) the proposed architecture effectively reads sub-images from DDR3 at minimum cost; (ii) the FPGA implementation is correct and efficient for corner detection and matching: the average matching rates for natural and artificial areas are approximately 67% and 83%, respectively, which are close to those of the PC, and the FPGA processing speed is approximately 31 and 2.5 times faster than PC and GPU processing, respectively.
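The paper's remainder-based sub-image reading scheme is not detailed in the abstract; as a hedged illustration of the general idea, burst-oriented memories such as DDR3 are typically addressed by splitting a pixel's linear address into an aligned burst start plus an in-burst remainder. The mapping and all names below are our assumptions, not the paper's design.

```python
def burst_aligned_read(row, col, stride, burst_len):
    """Map a sub-image pixel (row, col) to a burst-aligned start address
    plus an in-burst offset (the 'remainder'). Illustrative only."""
    addr = row * stride + col           # linear address of the pixel
    start = addr - (addr % burst_len)   # aligned address where the burst begins
    offset = addr % burst_len           # remainder: position inside the burst
    return start, offset
```

Reading whole aligned bursts and discarding only the leading remainder is what keeps the memory cost of a sub-image fetch minimal.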
The traditional ortho-rectification technique for remotely sensed (RS) images, which is performed on a ground image processing platform, cannot meet real-time or near real-time requirements. To solve this problem, this paper presents an ortho-rectification technique based on a field programmable gate array (FPGA) platform that can be implemented on board spacecraft for (near) real-time processing. The proposed FPGA-based ortho-rectification method contains three modules: a memory module, a coordinate transformation module (covering the transformations from geodetic coordinates to photo coordinates and from photo coordinates to scanning coordinates), and an interpolation module. Two datasets, aerial images located in central Denver, Colorado, USA, and an aerial image from the example dataset of ERDAS IMAGINE 9.2, are used to validate the processing speed and accuracy. The throughputs of the proposed FPGA-based platform and a personal computer (PC)-based platform are 11,182.3 kilopixels per second and 2,582.9 kilopixels per second, respectively, meaning the FPGA-based platform is 4.3 times faster than the PC-based platform when processing the same RS images. In addition, the root-mean-square errors of the planimetric coordinates φX and φY and the distance φS are 1.09 m, 1.61 m, and 1.93 m, respectively, which meets the correction accuracy requirements in practice.
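Of the three modules, the interpolation module is the most self-contained; the standard bilinear resampling it would perform can be sketched as follows (our own formulation, not the paper's FPGA datapath).

```python
def bilinear(img, x, y):
    """Bilinearly interpolate the image value at fractional scan
    coordinates (x, y). img is a row-major 2D list; the caller must
    keep x and y inside the image bounds."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return (img[y0][x0]         * (1 - dx) * (1 - dy) +
            img[y0][x0 + 1]     * dx       * (1 - dy) +
            img[y0 + 1][x0]     * (1 - dx) * dy +
            img[y0 + 1][x0 + 1] * dx       * dy)
```

Each output pixel of the ortho-image is produced by transforming its coordinates back into the scan image and interpolating there.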
Automatic registration of unordered point clouds is a prerequisite for urban reconstruction. However, most existing technologies still suffer from limitations. On one hand, most are sensitive to noise and repetitive structures, which makes them infeasible for registering large-scale point clouds. On the other hand, most cannot handle point clouds with limited overlap and unpredictable relative locations. These problems make it difficult for registration technology to obtain qualified results on outdoor point clouds. To overcome these limitations, this paper presents a grid graph-based point cloud registration (GGR) algorithm to align pairwise scans. First, the point cloud is divided into a set of 3D grids, and a voting strategy is proposed to measure the similarity between two grids based on feature descriptors, transforming the surface correspondence into a 3D grid expression. Next, graph matching is applied to capture spatial consistency among putative correspondences and to hierarchically refine the corresponding grids until point-to-point correspondences are obtained. Comprehensive experiments demonstrated that the proposed algorithm performs well in terms of successful registration rate, rotation error, and translation error, and outperforms state-of-the-art approaches.
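The first step, partitioning a cloud into 3D grids, can be sketched as a simple voxel hash; the cell size and function names are our assumptions, and the descriptor-based voting between cells is omitted.

```python
import math

def voxelize(points, cell):
    """Group 3D points into a dict keyed by integer grid indices,
    so per-grid descriptors can later be compared by voting."""
    grid = {}
    for x, y, z in points:
        key = (math.floor(x / cell), math.floor(y / cell), math.floor(z / cell))
        grid.setdefault(key, []).append((x, y, z))
    return grid
```

Working at the grid level first keeps the putative correspondence set small, which is what makes the subsequent graph matching tractable on large scans.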
The triglyceride-glucose (TyG) index is a reliable alternative biomarker of insulin resistance (IR). However, whether the TyG index has prognostic value in critically ill patients with coronary heart ...disease (CHD) remains unclear.
Participants from the Medical Information Mart for Intensive Care III (MIMIC-III) were grouped into quartiles according to the TyG index. The primary outcome was in-hospital all-cause mortality. Cox proportional hazards models were constructed to examine the association between the TyG index and all-cause mortality in critically ill patients with CHD. A restricted cubic splines model was used to examine the associations between the TyG index and outcomes.
A total of 1,618 patients (65.14% men) were included. The hospital mortality and intensive care unit (ICU) mortality rates were 9.64% and 7.60%, respectively. Multivariable Cox proportional hazards analyses indicated that the TyG index was independently associated with an elevated risk of hospital mortality (HR 1.71, 95% CI 1.25-2.33, P = 0.001) and ICU mortality (HR 1.50, 95% CI 1.07-2.10, P = 0.019). The restricted cubic splines regression model revealed that the risk of hospital mortality and ICU mortality increased linearly with increasing TyG index (P for non-linearity = 0.467 and P for non-linearity = 0.764, respectively).
The TyG index was a strong independent predictor of greater mortality in critically ill patients with CHD. Larger prospective studies are required to confirm these findings.
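The TyG index is conventionally computed as ln[fasting triglycerides (mg/dL) × fasting glucose (mg/dL) / 2]; a sketch of that formula and of the quartile grouping used in this study follows (function names are ours, and the quartile cut method is a simple illustrative choice).

```python
import math

def tyg_index(tg_mgdl, glucose_mgdl):
    """TyG index: ln(fasting TG [mg/dL] * fasting glucose [mg/dL] / 2)."""
    return math.log(tg_mgdl * glucose_mgdl / 2.0)

def quartile_of(value, sorted_values):
    """Assign a value to quartile 1-4 given the cohort's sorted TyG values."""
    n = len(sorted_values)
    q1 = sorted_values[n // 4]
    q2 = sorted_values[n // 2]
    q3 = sorted_values[3 * n // 4]
    if value < q1:
        return 1
    if value < q2:
        return 2
    if value < q3:
        return 3
    return 4
```

The Cox models above then compare mortality hazards across these quartiles (and per unit of the continuous index).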
Conventional rational polynomial coefficients (RPC)-based orthorectification methods cannot satisfy the demands of timely responses to terrorist attacks and disaster rescue. To accelerate orthorectification, we propose an on-board orthorectification method, i.e., a field-programmable gate array (FPGA)-based fixed-point (FP)-RPC orthorectification method. The RPC algorithm is first modified using fixed-point arithmetic, and the FP-RPC algorithm is then implemented on an FPGA chip. The proposed method is divided into three main modules: a parameter-reading module, a coordinate transformation module, and an interpolation module. Two datasets are applied to validate the achievable processing speed and accuracy. The throughputs of the proposed method and an RPC method implemented in Matlab on a personal computer are 675.67 Mpixels/s and 61,070.24 pixels/s, respectively; that is, the proposed method is approximately 11,000 times faster than the Matlab-based RPC method when processing the same satellite images. Moreover, the root-mean-square errors (RMSEs) of the row coordinate, column coordinate, and distance are 0.35 pixels, 0.30 pixels, and 0.46 pixels, respectively, for the first study area, and 0.27 pixels, 0.36 pixels, and 0.44 pixels, respectively, for the second study area, which satisfies the correction accuracy requirements in practice.
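Fixed-point arithmetic replaces floating-point operations with scaled integers, which is what makes the RPC evaluation cheap on an FPGA; a minimal Q-format sketch of the idea follows (the bit width and names are our assumptions, not the paper's chosen format).

```python
FRAC_BITS = 20          # assumed fractional bit width (Q-format)
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Encode a real number as a scaled integer."""
    return int(round(x * SCALE))

def fixed_mul(a, b):
    """Multiply two fixed-point numbers, rescaling the
    double-width product back to the working format."""
    return (a * b) >> FRAC_BITS

def to_float(a):
    """Decode a fixed-point value back to a float for comparison."""
    return a / SCALE
```

The rational polynomials of the RPC model are then evaluated entirely with such integer multiply-accumulate operations, trading a bounded quantization error (on the order of 2^-FRAC_BITS) for hardware speed.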
After a large karst sinkhole occurred in Jili Village of Guangxi, China, the local government was eager to quantitatively analyze and map areas susceptible to potential second-time karst sinkholes in order to decide in a timely manner whether residents living in the first-time sinkhole areas should move. For this reason, karst sinkhole susceptibility geospatial analysis is investigated using multivariate spatial data, a logistic regression model (LRM), and a Geographical Information System (GIS). Ten major sinkhole-related factors, namely (1) formation lithology, (2) soil structure, (3) profile curvature, (4) groundwater depth, (5) fluctuation of groundwater level, (6) percolation rate of soil, (7) degree of karst development, (8) distance from fault, (9) distance from traffic route, and (10) overburden thickness, were selected, and each factor was classified and quantified into three or four levels. The LRM was applied to evaluate which factors contribute significantly to sinkhole occurrence. The results demonstrated that formation lithology, soil structure, profile curvature, groundwater depth, fluctuation of groundwater level, percolation rate of soil, degree of karst development, distance from fault, and overburden thickness have positive coefficients, while the distance from traffic routes has a negative coefficient and was therefore deleted from the LRM. The susceptibility of potential sinkholes in the study area is estimated and mapped using the solved impact factors, and the susceptible degrees of the study area are classified into five levels: very high, high, moderate, low, and negligible susceptibility. Both the very high and high susceptibility areas are found along Datou Hill and the foothills of the study area, a finding verified by field observations.
With the investigations conducted in this paper, it can be concluded that the produced susceptibility maps are reliable and accurate, and serve as a useful reference for local governments when deciding whether or not residents living within sinkhole areas should move.
•Karst sinkhole susceptibility geospatial analysis is investigated.
•Ten major karst sinkhole factors were quantified.
•A logistic model is used for susceptible sinkhole analysis.
•A 5-level susceptibility map was created for the local government.
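The LRM maps the quantified factor levels to a susceptibility probability through the logistic function; the sketch below uses hypothetical coefficients and class thresholds, since the fitted values are not reproduced in this summary.

```python
import math

def susceptibility(levels, coeffs, intercept):
    """Logistic-regression susceptibility: p = 1 / (1 + e^-(b0 + sum(bi * xi)))."""
    z = intercept + sum(b * x for b, x in zip(coeffs, levels))
    return 1.0 / (1.0 + math.exp(-z))

def level(p):
    """Map a probability to five susceptibility classes (thresholds assumed)."""
    for bound, name in [(0.2, "negligible"), (0.4, "low"),
                        (0.6, "moderate"), (0.8, "high")]:
        if p < bound:
            return name
    return "very high"
```

Applying this cell by cell over the rasterized factor layers yields the 5-level susceptibility map described above.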
With increasing demand for real-time or near real-time remotely sensed imagery in applications such as military deployment, quick response to terrorist attacks, and disaster rescue, the on-board geometric calibration problem has attracted the attention of many scientists in recent years. This paper presents an on-board geometric calibration method for linear CCD sensor arrays using FPGA chips. The proposed method mainly consists of four modules, Input Data, Coefficient Calculation, Adjustment Computation, and Comparison, in which the parallel computations for building the observation equations and for least squares adjustment are implemented on FPGA chips; a decomposed matrix inversion method is presented for the latter. A Xilinx Virtex-7 FPGA VC707 chip is selected, and the MOMS-2P data used for inflight geometric calibration from DLR (Köln, Germany) are employed for validation and analysis. The experimental results demonstrated that: (1) as the floating-point data width grows from 44 bits to 64 bits, FPGA resource utilization, including FF, LUT, memory LUT, I/O, and DSP48, increases rapidly; a 50-bit data width is therefore recommended for FPGA-based geometric calibration. (2) Increasing the number of ground control points (GCPs) does not significantly increase FPGA resource consumption; six GCPs are therefore recommended for geometric calibration. (3) The FPGA-based geometric calibration runs approximately 24 times faster than the PC-based one. (4) The accuracy of the proposed FPGA-based method is nearly identical to that of the inflight calibration when the calibration model and the number of GCPs are the same.
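The Adjustment Computation step solves a least-squares problem; as a reference sketch (the paper's decomposed matrix inversion on the FPGA is not reproduced here), the normal-equations route AᵀA x = Aᵀl with Gaussian elimination looks like this.

```python
def lsq(A, l):
    """Solve min ||A x - l||^2 via normal equations and
    Gaussian elimination with partial pivoting."""
    m, n = len(A), len(A[0])
    N = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]                       # N = A^T A
    b = [sum(A[k][i] * l[k] for k in range(m)) for i in range(n)]  # b = A^T l
    for i in range(n):                            # forward elimination
        p = max(range(i, n), key=lambda r: abs(N[r][i]))
        N[i], N[p] = N[p], N[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = N[r][i] / N[i][i]
            for c in range(i, n):
                N[r][c] -= f * N[i][c]
            b[r] -= f * b[i]
    x = [0.0] * n                                 # back substitution
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(N[i][j] * x[j] for j in range(i + 1, n))) / N[i][i]
    return x
```

With six GCPs, each contributing observation equations, the adjustment estimates the small set of calibration parameters in exactly this overdetermined form.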
In the field of remote sensing, most feature indexes are obtained from expert knowledge or domain analysis. With the rapid development of machine learning and artificial intelligence, this approach is time-consuming, lacks flexibility, and yields indexes that cannot be applied to all areas. To avoid relying on expert knowledge and to automatically find effective feature indexes for a given material, this paper proposes a data-driven method that learns interactive features for hyperspectral remotely sensed data based on a sparse multiclass logistic regression model. The key idea is to explicitly express the interaction relationships between original features as new features, formed by multiplication or division operations, within the logistic regression. Through the strong constraint of the L1 norm, the learned features are sparse; after sparsification, the coefficient values of the corresponding features indicate feature importance and identify the optimal interactive features among the original features. This expression is inspired by the observation that well-known remote sensing indexes such as NDVI and NDWI are ratios between different spectral bands, and that in statistical regression the relationship between features is commonly captured by feature multiplication. Experiments were conducted on three hyperspectral data sets: Pavia Center, Washington DC Mall, and Pavia University. The binary classification results show that the method can extract NDVI and NDWI autonomously, and a new metal index is proposed on the Pavia University data set. This framework is more flexible and creative than the traditional laboratory-based approach to obtaining key features and feature interaction indexes for hyperspectral remotely sensed data.
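The interaction-feature construction can be sketched as generating pairwise ratios and normalized differences of bands, after which an L1-penalized model selects among them; only the feature construction is shown below (names are ours), with NDVI corresponding to the (nir − red)/(nir + red) pair.

```python
def ratio_features(bands):
    """Build pairwise interaction features b_i / b_j and normalized
    differences (b_i - b_j) / (b_i + b_j) from one spectral vector.
    Assumes all band values are strictly positive reflectances."""
    feats = {}
    n = len(bands)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            feats[f"ratio_{i}_{j}"] = bands[i] / bands[j]
            if i < j:
                feats[f"nd_{i}_{j}"] = (bands[i] - bands[j]) / (bands[i] + bands[j])
    return feats
```

Feeding these expanded features into an L1-regularized logistic regression drives most coefficients to zero, so the surviving ratio or difference features are exactly the learned indexes.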