GIXSGUI is a MATLAB toolbox that offers both a graphical user interface and script‐based access to visualize and process grazing‐incidence X‐ray scattering data from nanostructures on surfaces and in thin films. It provides routine surface scattering data reduction methods such as geometric correction, one‐dimensional intensity linecuts, and two‐dimensional intensity reshaping. Three‐dimensional indexing is also implemented to determine the space group and lattice parameters of buried organized nanoscopic structures in supported thin films.
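The one‐dimensional linecut mentioned above is essentially a bin‐averaged radial profile of the corrected 2‐D detector image. The following is a minimal sketch of that idea in Python/NumPy rather than MATLAB; the function name, arguments, and binned‐average strategy are illustrative assumptions, not GIXSGUI's actual API.

```python
import numpy as np

def linecut_q(intensity, qmap, q_edges):
    """Bin-averaged 1-D intensity profile from a 2-D detector image.

    intensity : 2-D array of corrected intensities
    qmap      : 2-D array (same shape) giving |q| at each pixel
    q_edges   : 1-D array of bin edges along q
    """
    idx = np.digitize(qmap.ravel(), q_edges) - 1
    vals = intensity.ravel()
    nbins = len(q_edges) - 1
    keep = (idx >= 0) & (idx < nbins)          # drop pixels outside the bins
    sums = np.bincount(idx[keep], weights=vals[keep], minlength=nbins)
    counts = np.bincount(idx[keep], minlength=nbins)
    with np.errstate(invalid="ignore"):
        profile = sums / counts                 # NaN where a bin is empty
    q_centers = 0.5 * (q_edges[:-1] + q_edges[1:])
    return q_centers, profile
```

A real reduction pipeline would apply the geometric corrections (solid angle, polarization, refraction) before binning; here the input is assumed already corrected.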
This study describes the demographic, epidemiologic, and clinical characteristics of hospitalized infants diagnosed with coronavirus disease 2019 infection between December 8, 2019, and February 6, 2020, in China.
A large family of algorithms, supervised or unsupervised and stemming from statistics or from geometry, has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding, or its linear/kernel/tensor extension, of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or from a penalty graph that characterizes a statistical or geometric property to be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. Using this framework as a tool, we propose a new supervised dimensionality reduction algorithm called marginal Fisher analysis (MFA), in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional linear discriminant analysis (LDA) algorithm due to its data distribution assumptions and available projection directions. Face recognition experiments on real data show the superiority of the proposed MFA over LDA, also for the corresponding kernel and tensor extensions.
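The linearized form of graph embedding reduces to a generalized eigenproblem: minimize w^T X L X^T w against a penalty term w^T L_p-based w, where L = D - W is the Laplacian of the intrinsic graph. A minimal sketch of that core computation follows; the random graphs and the regularization constant are illustrative assumptions (in MFA itself, W_intrinsic would connect same-class neighbors and W_penalty the marginal cross-class pairs).

```python
import numpy as np
from scipy.linalg import eigh

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

def linear_graph_embedding(X, W_intrinsic, W_penalty, n_components=2):
    """Linearized graph embedding (sketch of the unifying framework).

    X : (d, n) data matrix, one sample per column.
    Solves the generalized eigenproblem
        (X L X^T) w = lam (X L_p X^T) w
    and returns the directions with the smallest eigenvalue ratio,
    i.e. compact on the intrinsic graph, spread on the penalty graph.
    """
    A = X @ laplacian(W_intrinsic) @ X.T
    B = X @ laplacian(W_penalty) @ X.T
    B = B + 1e-8 * np.eye(B.shape[0])      # regularize for stability
    _, evecs = eigh(A, B)                  # eigenvalues in ascending order
    return evecs[:, :n_components]
```

Each algorithm in the framework then differs only in how W_intrinsic, W_penalty, and the normalization are chosen.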
Face recognition using Laplacianfaces. Xiaofei He; Shuicheng Yan; Yuxiao Hu; et al. IEEE Transactions on Pattern Analysis and Machine Intelligence, March 2005, Volume 27, Issue 3. Journal article, peer reviewed.
We propose an appearance-based face recognition method called the Laplacianface approach. By using locality preserving projections (LPP), the face images are mapped into a face subspace for analysis. Different from principal component analysis (PCA) and linear discriminant analysis (LDA) which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information, and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace–Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.
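LPP itself comes down to building a nearest-neighbor graph with heat-kernel weights and solving a generalized eigenproblem. The following is a minimal sketch of that procedure; the neighbor count, kernel width, and regularization constant are illustrative assumptions, and the PCA preprocessing normally applied to face images before LPP is omitted.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_neighbors=3, t=1.0, n_components=2):
    """Locality Preserving Projections (minimal sketch).

    X : (n_samples, n_features) data matrix.
    Solves X^T L X a = lam X^T D X a, where L = D - W is the
    Laplacian of a kNN graph with heat-kernel weights, and returns
    the projection directions with the smallest eigenvalues.
    """
    d2 = cdist(X, X, "sqeuclidean")
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):                          # kNN graph
        nn = np.argsort(d2[i])[1:n_neighbors + 1]
        W[i, nn] = np.exp(-d2[i, nn] / t)       # heat-kernel weights
    W = np.maximum(W, W.T)                      # symmetrize
    D = np.diag(W.sum(axis=1))
    L = D - W
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-8 * np.eye(X.shape[1])  # regularize
    _, evecs = eigh(A, B)
    return evecs[:, :n_components]
```

New samples are then projected by multiplying with the returned directions, after which recognition proceeds by nearest neighbor in the reduced subspace.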
With rapid developments in quantum hardware comes a push towards the first practical applications. While fully fault-tolerant quantum computers are not yet realized, there may exist intermediate forms of error correction that enable practical applications. In this work, we consider the idea of post-processing error decoders using existing quantum codes, which mitigate errors on logical qubits using post-processing without explicit syndrome measurements or additional qubits beyond the encoding overhead. This greatly simplifies the experimental exploration of quantum codes on real, near-term devices, removing the need for locality of syndromes or fast feed-forward. We develop the theory of the method and demonstrate it on an example with the perfect [[5,1,3]] code, which exhibits a pseudo-threshold of p ≈ 0.50 under a single-qubit depolarizing channel applied to all qubits. We also provide a demonstration of improved performance on an unencoded hydrogen molecule.
Neurofilament light chain (NfL) is a novel biomarker for the assessment of neurological function after cardiac arrest (CA). Although meta-analyses have confirmed its predictive value, a more detailed analysis of the relevant studies has not been conducted. We conducted a meta-analysis to evaluate the relationship between serum NfL level and neurological prognosis in patients with return of spontaneous circulation after CA, with subgroup analyses according to sample collection time, time of neurological assessment, study design, whether targeted temperature management (TTM) was received, the specimen determination method, and the presence of neurological disease, in order to analyze the influence of these factors on the predictive value of serum NfL. Published Cochrane reviews and an updated, extended search of MEDLINE, Cochrane Library, Embase, Scopus, ClinicalKey, CINAHL, and Web of Science for relevant studies until March 2022 were assessed against inclusion and exclusion criteria. The standardized mean difference (SMD) and 95% confidence interval (CI) were calculated using a random-effects or fixed-effects model to assess the association between serum NfL level and the outcome of CA patients. Subgroup analysis according to sample collection time was performed, and publication bias was assessed using Egger's and Begg's tests. Among 1209 articles screened, 6 studies (1360 patients) met the inclusion criteria and were selected for meta-analysis. The serum NfL level in the good-prognosis group (cerebral performance category, CPC 1-2) was significantly lower than that in the poor-prognosis group (CPC 3-5): SMD = 0.553, 95% CI 0.418-0.687, I² = 65.5%, P < 0.05.
This relationship also held at each sampling time point (specimens collected on admission: SMD = 0.48, 95% CI 0.24-0.73; 24 hours after CA: SMD = 0.60, 95% CI 0.32-0.88; 48 hours after CA: SMD = 0.51, 95% CI 0.18-0.85; 72 hours after CA: SMD = 0.59, 95% CI 0.38-0.81). NfL may thus play a role in neuroprognostication for post-cardiac-arrest patients with return of spontaneous circulation, regardless of when the sample is collected after CA.
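The pooled SMDs above come from standard meta-analytic machinery: a per-study standardized mean difference (here Hedges' g) combined across studies with a random-effects model. A minimal sketch of both steps follows, using the DerSimonian–Laird estimator for between-study variance; all inputs in the example are made up for illustration, not taken from the included studies.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) and its variance."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                      # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)         # small-sample correction
    g = j * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with a 95% confidence interval."""
    e = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1 / v
    fixed = np.sum(w * e) / np.sum(w)       # fixed-effect estimate
    q = np.sum(w * (e - fixed) ** 2)        # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(e) - 1)) / c) # between-study variance
    w_re = 1 / (v + tau2)
    pooled = np.sum(w_re * e) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

When the heterogeneity estimate tau² is zero, the random-effects result collapses to the fixed-effect estimate, which is why both models appear in the methods above.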
With the fast development of the industrial Internet of things (IIoT), a large amount of data is being generated continuously by different sources. Storing all the raw data in the IIoT devices locally is unwise considering that the end devices' energy and storage spaces are strictly limited. In addition, the devices are unreliable and vulnerable to many threats because the networks may be deployed in remote and unattended areas. In this paper, we discuss the emerging challenges in the aspects of data processing, secure data storage, efficient data retrieval and dynamic data collection in IIoT. Then, we design a flexible and economical framework to solve the problems above by integrating fog computing and cloud computing. Based on the time latency requirements, the collected data are processed and stored by the edge server or the cloud server. Specifically, all the raw data are first preprocessed by the edge server and then the time-sensitive data (e.g., control information) are used and stored locally. The non-time-sensitive data (e.g., monitored data) are transmitted to the cloud server to support data retrieval and mining in the future. A series of experiments and simulations are conducted to evaluate the performance of our scheme. The results illustrate that the proposed framework can greatly improve the efficiency and security of data storage and retrieval in IIoT.
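The routing rule described above, edge storage for time-sensitive records and cloud storage for the rest, can be sketched as a small dispatcher. The threshold, field names, and the drop-invalid preprocessing step are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeCloudRouter:
    """Toy latency-based dispatcher: records whose deadline fits within
    the edge latency budget stay on the edge server (e.g., control
    information); everything else is forwarded to the cloud (e.g.,
    monitored data kept for later retrieval and mining)."""
    latency_budget_ms: float = 50.0
    edge_store: list = field(default_factory=list)
    cloud_store: list = field(default_factory=list)

    def ingest(self, record: dict) -> str:
        # Edge-side preprocessing: discard obviously invalid readings.
        if record.get("value") is None:
            return "dropped"
        if record.get("deadline_ms", float("inf")) <= self.latency_budget_ms:
            self.edge_store.append(record)
            return "edge"
        self.cloud_store.append(record)
        return "cloud"
```

In a real deployment the "stores" would be the edge server's local database and the cloud back end, and the secure-storage and retrieval layers of the framework would sit behind them.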
Following the intuition that the naturally occurring face data may be generated by sampling a probability distribution that has support on or near a submanifold of ambient space, we propose an appearance-based face recognition method, called orthogonal Laplacianface. Our algorithm is based on the locality preserving projection (LPP) algorithm, which aims at finding a linear approximation to the eigenfunctions of the Laplace–Beltrami operator on the face manifold. However, LPP is nonorthogonal, and this makes it difficult to reconstruct the data. The orthogonal locality preserving projection (OLPP) method produces orthogonal basis functions and can have more locality preserving power than LPP. Since the locality preserving power is potentially related to the discriminating power, the OLPP is expected to have more discriminating power than LPP. Experimental results on three face databases demonstrate the effectiveness of our proposed algorithm.
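The orthogonality that distinguishes OLPP from plain LPP amounts to forcing each new projection direction to be orthogonal to the ones already chosen. The sketch below shows only that orthogonalization step (modified Gram–Schmidt); OLPP's full iteration additionally re-solves the LPP eigenproblem in the deflated space at each step, which is omitted here.

```python
import numpy as np

def orthogonalize(vectors):
    """Modified Gram-Schmidt: return an orthonormal basis spanning the
    given directions (columns of `vectors`), dropping near-dependent
    ones. This is the kind of constraint OLPP imposes on its basis."""
    basis = []
    for v in vectors.T:
        v = v.astype(float)
        for b in basis:                   # remove components along
            v = v - (b @ v) * b           # previously accepted directions
        n = np.linalg.norm(v)
        if n > 1e-12:                     # skip (near-)dependent vectors
            basis.append(v / n)
    return np.stack(basis, axis=1)
```

An orthogonal basis is what makes reconstruction of the original data from the embedding straightforward, which the abstract notes is difficult with nonorthogonal LPP projections.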
Edge computing is a new paradigm to provide strong computing capability at the edge of pervasive radio access networks close to users. A critical research challenge of edge computing is to design an efficient offloading strategy to decide which tasks can be offloaded to edge servers with limited resources. Although many research efforts attempt to address this challenge, they require centralized control, which is not practical because users are rational individuals with interests to maximize their own benefits. In this article, we study how to design a decentralized algorithm for computation offloading, so that users can independently choose their offloading decisions. Game theory has been applied in the algorithm design. Different from existing work, we address the challenge that users may refuse to expose their information about network bandwidth and preference; our solution must therefore make the offloading decision without such knowledge. We formulate the problem as a partially observable Markov decision process (POMDP), which is solved by a policy gradient deep reinforcement learning (DRL) based approach. Extensive simulation results show that our proposal significantly outperforms existing solutions.
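The core of a policy-gradient approach to such an offloading decision can be illustrated on a deliberately tiny stand-in problem: a linear softmax policy trained with REINFORCE to choose between local execution and offloading from a noisy observation of the (hidden) server load. The cost model, feature choice, and all hyperparameters below are made-up illustrations, not the paper's POMDP formulation or DRL architecture.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def offload_cost(action, load):
    """Toy cost: local execution costs 1.0; offloading costs
    0.2 + 1.6 * load, so it pays off only under light server load."""
    return 1.0 if action == 0 else 0.2 + 1.6 * load

rng = np.random.default_rng(0)
theta = np.zeros((2, 2))        # linear policy weights: features x actions
baseline = 0.0                  # running reward baseline (variance reduction)
for episode in range(8000):
    load = rng.uniform()                               # hidden server state
    obs = np.array([1.0, load + rng.normal(0, 0.05)])  # partial observation
    probs = softmax(obs @ theta)
    a = rng.choice(2, p=probs)                         # 0 = local, 1 = offload
    reward = -offload_cost(a, load)
    baseline += 0.01 * (reward - baseline)
    grad = -probs
    grad[a] += 1.0                                     # d log pi / d logits
    theta += 0.1 * (reward - baseline) * np.outer(obs, grad)
```

The learned policy should offload under light load and run locally under heavy load, mirroring the decentralized decision each user makes without access to others' private information. A deep RL version would replace the linear policy with a neural network and the single observation with a belief over the POMDP state.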
Structurally precise graphene nanoribbons (GNRs) have attracted great interest considering their prospective applications as organic carbon materials for nanoelectronics. The electronic properties of GNRs critically depend not only on the edge structure and width but also on the heteroatom type, doping position, and concentration. Motivated by the recent undisputable progress in the synthesis of stable boron‐doped polycyclic aromatic hydrocarbons (B‐PAHs), considerable efforts have been devoted in recent years to the precision synthesis of the corresponding boron‐doped GNRs (B‐GNRs) via a bottom‐up synthesis approach, in view of the extraordinary ability of boron doping to modulate their physicochemical properties. In this review, an overview of the bottom‐up organic synthesis of B‐GNRs is provided, including the precursor design and synthesis, structure characterization of the resulting B‐GNRs, and investigation of their electronic properties. Moreover, the future challenges and perspectives regarding the bottom‐up synthesis of B‐GNRs are also discussed. The authors hope that this review will further stimulate the synthesis and device integration of B‐GNRs with a combined effort from different disciplines.
In this review, recent progress in the bottom‐up organic synthesis and characterization of boron‐doped graphene nanoribbons through on‐surface or in‐solution chemistry is presented. Moreover, the challenges and perspectives in this research field are also discussed.