•Dimension is reduced by generalized sliced inverse regression – a nonlinear dimension reduction method.
•The curse of dimensionality is mitigated by constructing a Gaussian process model in the low-dimensional space.
•The new method can handle nonlinear functions and small probabilities of failure.
It is computationally expensive to predict reliability using physical models at the design stage if many random input variables exist. This work introduces a dimension reduction technique based on generalized sliced inverse regression (GSIR) to mitigate the curse of dimensionality. The proposed high-dimensional reliability method uses active learning to integrate GSIR, Gaussian process (GP) modeling, and importance sampling (IS), resulting in accurate reliability predictions at a reduced computational cost. The new method consists of three core steps: 1) identification of the importance sampling region, 2) dimension reduction by GSIR to produce a sufficient predictor, and 3) construction of a GP model for the true response with respect to the sufficient predictor in the reduced-dimension space. High accuracy and efficiency are achieved by iterating these three steps within an active-learning loop that adds new training points, one at a time, in the region with a high chance of failure.
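The loop below is a minimal sketch of the three steps under stated assumptions: plain linear sliced inverse regression stands in for GSIR, g() is a hypothetical limit-state function, the importance-sampling density is a standard normal shifted to a point near the limit state, and the U-learning acquisition criterion is one common choice; none of these settings are taken from the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
dim = 20

def g(u):
    """Hypothetical high-dimensional limit state; failure when g < 0."""
    return 3.0 - u @ np.linspace(1, 2, dim) / np.sqrt(dim) - 0.05 * u[:, 0] ** 2

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Plain sliced inverse regression: slice on y, average X within each
    slice, and eigen-decompose the between-slice covariance."""
    order = np.argsort(y)
    Xc = X - X.mean(axis=0)
    M = np.zeros((X.shape[1], X.shape[1]))
    for s in np.array_split(order, n_slices):
        m = Xc[s].mean(axis=0)
        M += len(s) / len(y) * np.outer(m, m)
    return np.linalg.eigh(M)[1][:, -n_dirs:]      # leading directions

# Step 1: locate the importance-sampling region from an initial design.
X = rng.standard_normal((200, dim))
y = g(X)
center = X[np.argmin(np.abs(y))]                  # a point near g = 0

for _ in range(5):
    # Step 2: dimension reduction to a low-dimensional sufficient predictor.
    B = sir_directions(X, y)
    # Step 3: GP model of the true response in the reduced space.
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X @ B, y)
    # Importance sampling around the current center; add the candidate with
    # the highest misclassification risk (U-learning) as a new training point.
    U = center + rng.standard_normal((5000, dim))
    mu, sd = gp.predict(U @ B, return_std=True)
    new = U[np.argmin(np.abs(mu) / np.maximum(sd, 1e-12))]
    X, y = np.vstack([X, new]), np.append(y, g(new[None])[0])

# Final IS estimate: weights correct the shifted sampling density back to N(0, I).
w = np.exp(norm.logpdf(U).sum(1) - norm.logpdf(U - center).sum(1))
print(f"estimated probability of failure ~ {np.mean((mu < 0) * w):.2e}")
```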
The rapid growth of remote sensing big data (RSBD) has attracted considerable attention from both academia and industry. Despite the progress of computer technologies, conventional computing implementations have become technically inefficient for processing RSBD. Cloud computing is effective in activating and mining large-scale heterogeneous data and has been widely applied to RSBD in recent years. This study presents a technical review of cloud-based RSBD storage and computing from an interdisciplinary viewpoint spanning remote sensing and computer science. First, we elaborate on four critical technical challenges resulting from the scale expansion of RSBD applications: raster storage, metadata management, data homogeneity, and computing paradigms. Second, we introduce state-of-the-art cloud-based data management technologies for RSBD storage. The unit for manipulating remote sensing data has evolved with the scale expansion and the adoption of novel technologies, which we name the RSBD data model; four data models are suggested, i.e. scenes, ARD, data cubes, and composite layers. Third, we summarize recent research on applying various cloud-based parallel computing technologies to RSBD computing implementations. Finally, we categorize the architectures of mainstream RSBD platforms. This research provides a comprehensive review of the fundamental issues of RSBD for computing experts and remote sensing researchers.
Understanding the effects of polar nanoregion (PNR) dynamics on dielectric properties is a complex question of essential importance both for fundamental studies of relaxor ferroelectrics and for their applications in electro-optic devices. The frequency dependence of the dielectric response to a bias electric field opens a brand-new window on this problem. A mesoscopic-to-macroscopic model relating the dielectric permittivity to the applied electric field, temperature, and PNRs was established based on the mean-field approximation and the theory of continuum percolation. The model not only validates field-induced percolation and the divergence of the relaxation time at the freezing temperature, but also predicts the frequency dependence of the dielectric response. Unexpectedly, the model reveals that the field-enhanced correlation length produces the nonmonotonic behavior of the dielectric response, and it implies that the increased orientation consistency of dipolar clusters, together with coercive fields originating from inherent inhomogeneity, slows the reorientation of PNRs. Considering the multi-scale heterogeneity of PNRs in relaxors, we found that an increased degree of heterogeneity reduces the dielectric permittivity but changes the slope of the dielectric response to the bias electric field.
Severe drought and wetness can have serious impacts on human society and the natural environment. Discovering the implicit relationships among severely dry/wet conditions is of great significance for meteorological disaster early warning and for formulating risk management strategies. In this study, we propose an improved spatial-temporal association rule mining algorithm to mine global dry/wet associations. A modified time-lagged association rule mining algorithm is first developed to mine unit-to-unit dry/wet association rules. Then, a density-based clustering algorithm is employed to identify associated dry/wet zones. The gridded Standardized Precipitation Evapotranspiration Index (SPEI) data sets at 12- and 3-month time scales were used to characterize annual and seasonal dry/wet conditions. The results show strong dry/wet associations between many regions. Some of the discovered association rules mirror known phenomena, indicating the effectiveness and feasibility of the method; others were previously unknown and provide new knowledge for this research field. Several predictions follow from the discovered rules: western Oman and southeastern Saudi Arabia would suffer severe or extreme drought in 2021, and northwestern China would suffer severe or extreme drought in 2025, with probabilities exceeding 77%; northwestern China, northern Argentina, and western Australia would experience severe or extreme wetness in 2021, 2022, and 2023, respectively, with probabilities exceeding 85%. The algorithm provides a new perspective for analyzing dry/wet associations, and the discovered associations can serve as theoretical bases for local governments to take precautions against the potential impacts of severe drought or wetness.
Plain Language Summary
Severe dry or wet conditions can have serious impacts on human society and the ecological environment. Discovering the implicit relationships among severely dry/wet conditions is of great significance for meteorological disaster early warning and for reducing social and ecological losses. In this study, an improved spatial-temporal association rule mining algorithm was proposed to mine global dry/wet associations, finding associations of the form “With a certain degree of confidence, if area A experiences severe wetness, area B will suffer severe drought after a period of time,” with no need to presuppose that areas A and B might be associated. A modified time-lagged association rule mining algorithm is first developed to mine unit-to-unit dry/wet association rules. Then, a density-based clustering algorithm is employed to identify associated dry/wet zones. The results show strong annual and seasonal dry/wet associations between many regions. Some of the discovered association rules mirror known phenomena; others were previously unknown and provide new knowledge for this research field. The dry/wet conditions of five associated regions were predicted with probabilities exceeding 77%, which can help local governments take precautions to mitigate the potential impacts of severe drought or wetness.
Key Points
An improved spatial‐temporal association rule mining algorithm was proposed to mine global dry/wet association rules
Both annual and seasonal dry/wet associations exist in many regions of the world
The dry/wet conditions of five associated regions were predicted, with probabilities exceeding 77%
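As a concrete illustration of the two mining steps described above, the sketch below pairs a brute-force time-lagged support/confidence scan with DBSCAN zone clustering. Assumptions not taken from the paper: SPEI thresholds of ±1.5 for severe/extreme conditions, illustrative support/confidence cutoffs, and synthetic stand-in data.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
n_cells, n_years = 50, 60
spei = rng.normal(size=(n_cells, n_years))        # stand-in for gridded SPEI-12
coords = rng.uniform(-60, 60, size=(n_cells, 2))  # (lat, lon) of each grid cell

dry = spei <= -1.5                                # severe/extreme drought years
wet = spei >= 1.5                                 # severe/extreme wetness years

def mine_rules(ante, cons, max_lag=5, min_support=0.05, min_conf=0.6):
    """Unit-to-unit time-lagged rules 'ante at cell i => cons at cell j after lag years'."""
    rules = []
    for lag in range(1, max_lag + 1):
        a, c = ante[:, :-lag], cons[:, lag:]      # align the series with a time lag
        for i in range(n_cells):
            if not a[i].any():
                continue
            for j in range(n_cells):
                support = np.logical_and(a[i], c[j]).mean()
                confidence = support / a[i].mean()
                if support >= min_support and confidence >= min_conf:
                    rules.append((i, j, lag, confidence))
    return rules

rules = mine_rules(wet, dry)                      # "severe wetness at A => severe drought at B"
if rules:
    # Density-based clustering groups the antecedent cells of strong rules
    # into spatially contiguous associated zones.
    ante_coords = np.array([coords[i] for i, _, _, _ in rules])
    zones = DBSCAN(eps=5.0, min_samples=3).fit_predict(ante_coords)
    print(len(rules), "rules,", len(set(zones) - {-1}), "antecedent zones")
```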
Heavy industrial burning contributes significantly to greenhouse gas (GHG) emissions. It is responsible for almost one-quarter of global energy-related CO2 emissions, and its share continues to grow. Most of these industrial emissions are accompanied by a great deal of high-temperature heat released by the combustion of carbon-based fuels at steel, petrochemical, or cement plants. Fortunately, such industrial heat emission sources appear as thermal anomalies that satellite-borne sensors can detect quantitatively. However, most of the dominant remote sensing-based fire detection methods do not work well for discerning heavy industrial heat sources. Although the object-oriented approach, especially the data clustering-based approach, has guided a novel detection method, it is still limited by costly computation and storage requirements. Furthermore, when scaled to national, or even global, long time-series detection, it is greatly challenged by the tremendous computation introduced by clustering tens of millions of high-dimensional fire data points. Therefore, we propose an improved parallel identification method with geocoded, task-tree-based, large-scale clustering for the spatial-temporal distribution analysis of industrial heat emitters across the United States from long time-series active Visible Infrared Imaging Radiometer Suite (VIIRS) data. A recursive k-means clustering method is introduced to gradually segment and cluster industrial heat objects. Furthermore, to avoid the blindness caused by random cluster-center initialization, the time-series VIIRS hotspot data are spatially pre-grouped into GeoSOT-encoded grid tasks, which also serve as initial clustering objects. In addition, a grouped parallel clustering strategy together with geocoding-aware task-tree scheduling is adopted to fully exploit parallelism and performance optimization. The spatial-temporal distribution pattern of industrial heat emitters across the United States and its changing trend are then analyzed using the identified industrial heat sources. Finally, performance experiments demonstrate the efficiency and encouraging scalability of this approach.
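A minimal sketch of the grid-seeded, recursive clustering idea follows. A plain integer lat/lon grid stands in for GeoSOT encoding, the compactness threshold and synthetic points are illustrative, and no parallel task-tree scheduling is attempted; this only shows how pre-grouped grid cells can seed and bound the k-means recursion.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
pts = rng.uniform([25, -125], [49, -66], size=(5000, 2))   # stand-in hotspots (lat, lon)

def grid_seeds(points, cell_deg=4.0):
    """Pre-group points into coarse grid cells; one seed per occupied cell."""
    keys = np.floor(points / cell_deg).astype(int)
    uniq = np.unique(keys, axis=0)
    return np.array([points[(keys == k).all(axis=1)].mean(axis=0) for k in uniq])

def recursive_kmeans(points, max_radius=0.5, depth=0, max_depth=6):
    """Split a point set with 2-means until every cluster is spatially compact."""
    center = points.mean(axis=0)
    if depth >= max_depth or np.linalg.norm(points - center, axis=1).max() <= max_radius:
        return [points]
    labels = KMeans(n_clusters=2, n_init=5).fit_predict(points)
    out = []
    for lbl in (0, 1):
        sub = points[labels == lbl]
        if len(sub):
            out.extend(recursive_kmeans(sub, max_radius, depth + 1, max_depth))
    return out

seeds = grid_seeds(pts)                    # grid tasks double as initial centers
km = KMeans(n_clusters=len(seeds), init=seeds, n_init=1).fit(pts)
clusters = [c for i in range(len(seeds))
            for c in recursive_kmeans(pts[km.labels_ == i])]
print(len(seeds), "grid seeds ->", len(clusters), "compact heat-object clusters")
```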
A colorimetric sensor detects an analyte by exploiting the optical properties of the sensing unit, such as absorption or reflection, to generate a structural color that serves as the output signal. Detecting the refractive index of an analyte by recording the color change of the sensor structure at its surface has several advantages, including simple operation, low cost, suitability for on-site analysis, and real-time detection. Colorimetric sensors have drawn much attention owing to their rapidity, simplicity, high sensitivity, and selectivity. This Review discusses the use of colorimetric sensors in the food industry, including their applications for detecting food contaminants, and provides insight into the scope of future research in this area.
Objectives. The present study aimed to investigate the effects of guided imagery training on heart rate variability (HRV) in individuals performing spaceflight emergency tasks. Materials and Methods. Twenty-one student subjects were recruited and randomly divided into two groups: an imagery group (n = 11) and a control group (n = 10). The imagery group received instructor-guided imagery (session 1) and self-guided imagery training (session 2) consecutively, while the control group received only conventional training. Electrocardiograms were recorded while the subjects performed nine spaceflight emergency tasks after imagery training. Results. In both sessions, the root mean square of successive differences (RMSSD), the standard deviation of all normal-to-normal (NN) intervals (SDNN), the proportion of successive NN-interval differences greater than 50 ms (PNN50), the very low frequency (VLF), low frequency (LF), and high frequency (HF) powers, and the total power (TP) in the imagery group were significantly higher than those in the control group. Moreover, LF/HF after instructor-guided imagery training was lower than that after self-guided imagery training. Conclusions. Guided imagery effectively regulated HRV indices and could be a potential stress countermeasure for the performance of spaceflight tasks.
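For reference, the HRV indices named above are simple functions of the NN-interval series. The sketch below computes them on synthetic data; the 4 Hz resampling rate, Welch parameters, and band limits (VLF 0.0033–0.04 Hz, LF 0.04–0.15 Hz, HF 0.15–0.4 Hz) follow common HRV conventions rather than the study's protocol.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

rng = np.random.default_rng(3)
nn = 800 + 50 * rng.standard_normal(300)          # stand-in NN intervals (ms)

diff = np.diff(nn)
rmssd = np.sqrt(np.mean(diff ** 2))               # RMSSD
sdnn = np.std(nn, ddof=1)                         # SDNN
pnn50 = np.mean(np.abs(diff) > 50) * 100          # PNN50 (%)

# Frequency domain: resample the irregular NN series at 4 Hz, then Welch PSD.
t = np.cumsum(nn) / 1000.0                        # beat times (s)
fs = 4.0
ts = np.arange(t[0], t[-1], 1.0 / fs)
nn_even = interp1d(t, nn, kind="cubic")(ts)
f, psd = welch(nn_even - nn_even.mean(), fs=fs, nperseg=256)

def band_power(lo, hi):
    m = (f >= lo) & (f < hi)
    return np.trapz(psd[m], f[m])

vlf, lf, hf = band_power(0.0033, 0.04), band_power(0.04, 0.15), band_power(0.15, 0.4)
print(f"RMSSD={rmssd:.1f} ms  SDNN={sdnn:.1f} ms  PNN50={pnn50:.1f}%  "
      f"LF/HF={lf/hf:.2f}  TP={vlf+lf+hf:.0f} ms^2")
```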
Electro-optic modulators have enabled a wide range of emerging applications that place greater demands on modulator alignment. In this paper, for the first time to the best of our knowledge, we construct a theoretical model of an electro-optic modulator with alignment errors, which allows us to quantitatively analyze the distribution characteristics of the exit intensity under different alignment errors and applied voltages. The alignment errors in the model are of three kinds: the horizontal error, the vertical error, and the rotation error. The theoretical results show that the model and the exit-intensity distribution characteristics are useful for precisely aligning an electro-optic modulator in practical applications. Furthermore, an experimental setup is proposed to align an electro-optic modulator based on a lithium niobate crystal and to verify the correctness of the model. Excellent agreement is found between the theoretical and experimental results.
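How a rotation error reshapes the exit intensity can be illustrated with textbook Jones calculus for a crossed-polarizer amplitude modulator; this is a generic sketch, not the paper's model. The 45° nominal crystal-axis angle, half-wave voltage, and error magnitude are assumed values.

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def retarder(gamma, theta):
    """Jones matrix of the EO crystal: retardation gamma, fast axis at angle theta."""
    d = np.diag([np.exp(-1j * gamma / 2), np.exp(1j * gamma / 2)])
    return rot(theta) @ d @ rot(-theta)

pol_x = np.array([[1, 0], [0, 0]])       # input polarizer (horizontal)
pol_y = np.array([[0, 0], [0, 1]])       # crossed analyzer (vertical)
v_pi = 1500.0                            # assumed half-wave voltage (V)
delta = np.deg2rad(2.0)                  # assumed rotation error of the crystal axis

for v in (0.0, v_pi / 2, v_pi):
    gamma = np.pi * v / v_pi             # field-induced retardation
    e_out = pol_y @ retarder(gamma, np.pi / 4 + delta) @ pol_x @ np.array([1.0, 0.0])
    # Ideal crossed-polarizer transmission is sin^2(gamma/2); the rotation
    # error delta scales it by cos^2(2*delta).
    print(f"V = {v:6.0f} V   I_exit = {np.vdot(e_out, e_out).real:.4f}")
```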
The dimension variables of a mechanism vary randomly within their tolerance ranges. Likewise, the positions of the pins in joints vary randomly, and the joint distribution of the position coordinates of a pin is usually unknown. In traditional robust mechanism synthesis, however, dimension variables are usually treated as unbounded random variables, and the joint distributions of the clearance variables are commonly assumed to be known. This work makes robust mechanism synthesis more practical by treating dimension variables as truncated random variables and clearance variables as interval variables, and proposes a new robust methodology that accommodates both. A quality loss function is constructed from the mean and standard deviation of the maximal motion error over the interval clearance variables; minimizing the expected quality loss then reduces both the nominal motion error and its variation. This ensures that the mechanism is robust against uncertainties in both dimension variables and clearance variables. The robust synthesis methodology is applied to a four-bar function generating mechanism.
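The sketch below illustrates the structure of such an objective under stated assumptions: motion_error() is a hypothetical stand-in for a four-bar structural-error model, tolerances are ±0.1 with truncation at ±3 sigma, the clearance box is searched on its vertices only, and the loss is the expectation of the squared error (mean squared plus variance); all values are illustrative.

```python
import itertools
import numpy as np
from scipy.stats import truncnorm
from scipy.optimize import minimize

tol, sigma = 0.1, 0.1 / 3                      # tolerance band, assumed sigma
clearance = np.array([0.01, 0.01])             # interval half-widths of pin positions

def motion_error(dims, clr):
    """Hypothetical motion-error model; a real study would use four-bar kinematics."""
    return (dims[0] - 1.0) ** 2 + 0.5 * dims[1] * clr[0] + clr[1] ** 2 + 0.2 * dims[1] ** 2

def max_error_over_clearance(dims):
    """Worst-case (maximal) motion error over the clearance interval box."""
    verts = itertools.product(*[(-c, c) for c in clearance])
    return max(abs(motion_error(dims, np.array(v))) for v in verts)

def expected_quality_loss(nominal, n_mc=200):
    rng = np.random.default_rng(4)             # common random numbers per call
    # Truncated-normal dimension samples around the nominal design.
    d = truncnorm.rvs(-3, 3, loc=nominal, scale=sigma,
                      size=(n_mc, len(nominal)), random_state=rng)
    e = np.array([max_error_over_clearance(x) for x in d])
    return e.mean() ** 2 + e.var(ddof=1)       # loss ~ mean^2 + variance

res = minimize(expected_quality_loss, x0=np.array([1.2, 0.5]), method="Nelder-Mead")
print("robust nominal dimensions:", res.x.round(4))
```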
► New mechanism synthesis methodology that can handle both truncated random variables and interval clearance variables
► Accurate analysis of the robustness of function generating mechanisms
► New robust design model for mechanism synthesis with both truncated random variables and interval clearance variables
•The saddlepoint approximation is used to evaluate reliability for quadratic limit-state functions.
•The saddlepoint approximation does not require additional transformations and approximations for quadratic functions.
•The saddlepoint approximation has better accuracy than the traditional second order reliability methods.
•The proposed method can also be used for general nonlinear limit-state functions.
If the state of a component can be predicted by a limit-state function, the First and Second Order Reliability Methods are commonly used to calculate the component's reliability. The latter is more accurate because it approximates the limit-state function with a quadratic form in standard normal variables. To further improve accuracy, this study develops a saddlepoint approximation reliability method that requires no additional transformations or approximations of the quadratic function. Analytical equations are derived for the cumulant generating function (CGF) of the limit-state function in standard normal variables; the saddlepoint is then found by equating the derivative of the CGF to the limit state, after which a closed-form solution for the reliability is available. The method also applies to general nonlinear limit-state functions once they are approximated by a second-order Taylor expansion. Examples show better accuracy than the traditional second-order reliability methods.
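To make the recipe concrete, the sketch below evaluates P(g <= 0) for an illustrative quadratic limit state g(U) = c + sum(delta_i*U_i + lam_i*U_i^2) in independent standard normals, using the standard closed-form CGF of such a quadratic form and the Lugannani-Rice tail formula. The coefficients are made up, and this is a generic saddlepoint implementation rather than the paper's exact equations.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Illustrative quadratic limit state g = c + sum(delta*U + lam*U^2), U ~ N(0, 1).
c = 2.5
delta = np.array([-0.8, -0.6, -0.4])
lam = np.array([-0.05, 0.02, -0.03])

def K(t):   # closed-form CGF of the quadratic form
    return c*t + np.sum(delta**2 * t**2 / (2*(1 - 2*lam*t)) - 0.5*np.log(1 - 2*lam*t))

def K1(t):  # first derivative K'(t)
    return c + np.sum(delta**2 * t * (1 - lam*t) / (1 - 2*lam*t)**2 + lam / (1 - 2*lam*t))

def K2(t):  # second derivative K''(t)
    return np.sum(delta**2 / (1 - 2*lam*t)**3 + 2*lam**2 / (1 - 2*lam*t)**2)

# The CGF exists where every 1 - 2*lam_i*t stays positive; K is convex there,
# so K'(t) = 0 (limit state y = 0) has a unique root: the saddlepoint.
hi = min((1/(2*l) for l in lam if l > 0), default=50.0)
lo = max((1/(2*l) for l in lam if l < 0), default=-50.0)
t_s = brentq(K1, lo + 1e-6 * abs(lo), hi - 1e-6 * abs(hi))

# Lugannani-Rice lower-tail approximation of P(g <= 0):
# w = sign(t)*sqrt(2*(t*y - K(t))) with y = 0, u = t*sqrt(K''(t)).
w = np.sign(t_s) * np.sqrt(-2 * K(t_s))
u = t_s * np.sqrt(K2(t_s))
pf = norm.cdf(w) + norm.pdf(w) * (1/w - 1/u)
print(f"saddlepoint t* = {t_s:.4f}, Pf ~ {pf:.3e}")
```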