Low-cost layered lithium transition metal oxides delivering high capacity and moderate rate capability are considered promising cathodes for next-generation lithium-ion batteries (LIBs). However, their low stacking and compressed densities result in a lower volumetric energy density than that of the first commercialized LiCoO2-based battery. Herein, for the first time, a new strategy is rationally proposed to prepare micron-sized monocrystalline LiNi1/3Co1/3Mn1/3O2 via stepwise addition of lithium sources into hydroxide/oxide precursors. As anticipated, the as-prepared 4–8 μm-thick monocrystalline cathode exhibits a stacking/compressed density comparable to that of the LiCoO2 electrode, achieving an ultrahigh volumetric energy density exceeding 2600 W h L−1, enhanced structural stability, and high rate capability in half-cells. Moreover, in a full-cell configuration using this monocrystalline LiNi1/3Co1/3Mn1/3O2 as the cathode and mesocarbon microbeads as the anode, a volumetric energy density exceeding 660 W h L−1, enhanced cycling stability, and high rate capability are achieved, indicating the expected merits of micron-sized monocrystalline cathodes. It is also confirmed that this monocrystalline cathode can mitigate side reactions occurring at the electrode/electrolyte interface and maintain the stability of the layered structure upon cycling. This facile tactic provides innovative insight into preparing high-volumetric-energy-density lithium transition metal oxide cathodes with enhanced electrochemical properties. Moreover, this approach can be readily extended to prepare other types of layered and spinel monocrystalline cathodes with improved volumetric energy density.
In this paper, we discuss a family of robust, high-dimensional regression models for quantile and composite quantile regression, both with and without an adaptive lasso penalty for variable selection. We reformulate these quantile regression problems and obtain estimators by applying the alternating direction method of multipliers (ADMM), majorize-minimization (MM), and coordinate descent (CD) algorithms. Our new approaches address the lack of publicly available methods for (composite) quantile regression, especially for high-dimensional data, both with and without regularization. Through simulation studies, we demonstrate the need for different algorithms applicable to a variety of data settings, which we implement in the cqrReg package for R. For comparison, we also introduce the widely used interior point (IP) formulation and test our methods against the IP algorithms in the existing quantreg package. Our simulation studies show that each of our methods, particularly MM and CD, excel in different settings such as with large or high-dimensional data sets, respectively, and outperform the methods currently implemented in quantreg. The ADMM approach offers specific promise for future developments in its amenability to parallelization and scalability.
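As a minimal illustration of the objective these algorithms optimize (a generic sketch, not code from the paper or the cqrReg package; the function names are illustrative), the quantile "check" loss can be minimized directly in the constant-only case, where its minimizer is the sample quantile:

```python
import numpy as np

def check_loss(u, tau):
    # Quantile "check" loss: rho_tau(u) = u * (tau - 1{u < 0})
    return np.sum(u * (tau - (u < 0)))

def scalar_quantile_fit(y, tau):
    # In the constant-only model the check-loss minimizer is the tau-th
    # sample quantile, so a search over the observed values suffices.
    losses = [check_loss(y - b, tau) for b in y]
    return y[int(np.argmin(losses))]

rng = np.random.default_rng(0)
y = rng.normal(size=501)          # odd n: the median is itself a data point
b_hat = scalar_quantile_fit(y, 0.5)
```

The ADMM, MM, and CD algorithms discussed in the abstract minimize this same non-smooth loss, but over a full regression design and, optionally, with an adaptive lasso penalty added.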
Background One common way to share health data for secondary analysis while meeting increasingly strict privacy regulations is to de-identify it. To demonstrate that the risk of re-identification is acceptably low, re-identification risk metrics are used. There is a dearth of good risk estimators modeling the attack scenario where an adversary selects a record from the microdata sample and attempts to match it with individuals in the population. Objectives Develop an accurate risk estimator for the sample-to-population attack. Methods A type of estimator based on creating a synthetic variant of a population dataset was developed to estimate the re-identification risk for an adversary performing a sample-to-population attack. The accuracy of the estimator was evaluated through a simulation on four different datasets in terms of estimation error. Two estimators were considered, a Gaussian copula and a d-vine copula. They were compared against three other estimators proposed in the literature. Results Taking the average of the two copula estimates consistently had a median error below 0.05 across all sampling fractions and true risk values. This was significantly more accurate than existing methods. A sensitivity analysis of the estimator accuracy based on variation in input parameter accuracy provides further application guidance. The estimator was then used to assess re-identification risk and de-identify a large Ontario COVID-19 behavioral survey dataset. Conclusions The average of two copula estimators consistently provides the most accurate re-identification risk estimate and can serve as a good basis for managing privacy risks when data are de-identified and shared.
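To make the Gaussian-copula ingredient concrete, here is a rough sketch of how a synthetic variant of a dataset can be generated with a Gaussian copula: map each variable to normal scores via its ranks, estimate the latent correlation, sample, and map back through empirical quantiles. This is a simplified illustration, not the paper's estimator; the function name is hypothetical.

```python
import numpy as np
from scipy import stats

def gaussian_copula_synthesize(data, n_synth, rng):
    # Fit: convert each column to normal scores via empirical ranks,
    # then estimate the correlation of the latent Gaussian.
    n, d = data.shape
    ranks = np.argsort(np.argsort(data, axis=0), axis=0) + 1
    z = stats.norm.ppf(ranks / (n + 1))
    corr = np.corrcoef(z, rowvar=False)
    # Sample: draw latent Gaussians with that correlation, then map each
    # column back through the empirical quantile function of the data.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_synth)
    u_new = stats.norm.cdf(z_new)
    synth = np.empty((n_synth, d))
    for j in range(d):
        synth[:, j] = np.quantile(data[:, j], u_new[:, j])
    return synth

rng = np.random.default_rng(2)
x = rng.normal(size=1000)
data = np.column_stack([x, x + 0.3 * rng.normal(size=1000)])
synth = gaussian_copula_synthesize(data, 500, rng)
```

The copula separates the marginal distributions (preserved exactly through the quantile mapping) from the dependence structure (captured by the latent correlation), which is what makes the synthetic variant useful for risk estimation.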
The lithium–sulfur (Li–S) battery is labeled as a promising high-energy-density battery system, but some inherent drawbacks of sulfur cathode materials, together with the relatively complicated fabrication techniques used to address them, impair its practical application. Herein, an integrated approach is proposed to fabricate high-performance rGO/VS4/S cathode composites through a simple one-step solvothermal method, in which nano-sulfur and VS4 particles are uniformly distributed on the conductive rGO matrix. The rGO and sulfiphilic VS4 provide an electron-transfer skeleton and a physical/chemical anchor for soluble lithium polysulfides (LiPS). Meanwhile, VS4 can also act as an electrochemical mediator to efficiently enhance the utilization and reversible conversion of LiPS. Correspondingly, the rGO/VS4/S composites maintain a high reversible capacity of 969 mAh/g at 0.2 C after 100 cycles, with a capacity retention rate of 82.3%. The capacity fade rate is as low as 0.0374% per cycle at 1 C. Moreover, the capacity is still sustained at 795 mAh/g after 100 cycles in a relatively high-sulfur-loading battery (6.5 mg/cm2). Thus, the suggested method of configuring sulfur-based composites is demonstrated to be a simple and efficient strategy for constructing high-performance Li–S batteries.
The rGO/VS4/S composites are directly synthesized through an in-situ, one-step solvothermal method with the addition of an H2O2 oxidant. The obtained hybrid structure combines the merits of conductive rGO, with its physical anchoring effect, and polar VS4, with its chemical adsorption and catalysis. Thus, the utilization of sulfur species and the electrochemical stability are enhanced in Li–S batteries.
Editorial on the Research Topic Modern Statistical Learning Strategies in Imaging Genetics With the rapid growth of modern technology, many biomedical studies, such as the Alzheimer's disease neuroimaging initiative (ADNI) study (Mueller et al., 2005), the Human Connectome Project (HCP) (Van Essen et al., 2013), and the UK BioBank (UKBB) study (Sudlow et al., 2015), are being conducted to collect massive datasets with volumes of multi-modality imaging, genetic, neurocognitive, and clinical information from increasingly large cohorts. The integration of imaging and genetic data through deep learning techniques has recently gained considerable attention in AD prediction. Taken together, the studies in this special issue include several advanced statistical learning approaches in imaging genetics, and exemplify the potential impact of applying these methods to better understand the roles of brain imaging data and genetic information in mental health and disease.
We construct robust designs for nonlinear quantile regression, in the presence of both a possibly misspecified nonlinear quantile function and heteroscedasticity of an unknown form. The asymptotic mean-squared error of the quantile estimate is evaluated and maximized over a neighbourhood of the fitted quantile regression model. This maximum depends on the scale function and on the design. We entertain two methods to find designs that minimize the maximum loss. The first is local – we minimize for given values of the parameters and the scale function, using a sequential approach, whereby each new design point minimizes the subsequent loss, given the current design. The second is adaptive – at each stage, the maximized loss is evaluated at quantile estimates of the parameters, and a kernel estimate of scale, and then the next design point is obtained as in the sequential method. In the context of a Michaelis–Menten response model for an estrogen/hormone study, and a variety of scale functions, we demonstrate that the adaptive approach performs as well, in large study sizes, as if the parameter values and scale function were known beforehand and the sequential method applied. When the sequential method uses an incorrectly specified scale function, the adaptive method yields an often substantial improvement. The performance of the adaptive designs for smaller study sizes is assessed and seen to still be very favourable, especially so since the prior information required to design sequentially is rarely available.
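For context, the Michaelis–Menten response referenced above has a simple closed form; a minimal sketch (the function and parameter names are illustrative, and this is not the paper's fitted model):

```python
import numpy as np

def michaelis_menten(x, vmax, km):
    # Michaelis-Menten response: v = Vmax * x / (Km + x),
    # a saturating curve with v -> Vmax as x -> infinity.
    return vmax * x / (km + x)

# Half-saturation property: at x = Km the response is exactly Vmax / 2.
v_half = michaelis_menten(2.0, 10.0, 2.0)
```

Design points for such a model are typically spread to pin down both Vmax (large x) and Km (the half-saturation region), which is why misspecification of the scale function can shift the optimal design substantially.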
We present a comparative study of discriminative anatomy detection in high-dimensional neuroimaging data. While most studies solve this problem using mass univariate approaches, recent works show better accuracy and variable selection using a sparse classification model. Two types of image-based regularization methods have been proposed in the literature, based on either a Graph Net (GN) model or a total variation (TV) model. These studies showed increased classification accuracy and interpretability of results when using image-based regularization, but did not examine the accuracy and quality of the recovered significant regions. In this paper, we theoretically prove bounds on the recovered sparse coefficients and the corresponding selected image regions in four models (two based on the GN penalty and two based on the TV penalty). Practically, we confirm the theoretical findings by measuring the accuracy of selected regions compared with ground truth on simulated data. We also evaluate the stability of recovered regions over cross-validation folds using real MRI data. Our findings show that the TV penalty is superior to the GN penalty. In addition, we show that adding an l2 penalty improves the accuracy of estimated coefficients and selected significant regions for both types of models.
We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing ...methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.
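A minimal sketch of the general recipe of placing adaptive weights on an l1 penalty, solved here by coordinate descent (this is a generic adaptive lasso with an OLS pilot fit, not the authors' doubly weighted robust procedure, and the function names are illustrative):

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_lasso_cd(X, y, alpha, weights, n_iter=100):
    # Coordinate descent for (1/2n)||y - Xb||^2 + alpha * sum_j weights[j]*|b_j|
    n, d = X.shape
    beta = np.zeros(d)
    col_ss = (X ** 2).sum(axis=0) / n
    r = y.copy()                       # residual y - X @ beta, kept current
    for _ in range(n_iter):
        for j in range(d):
            r += X[:, j] * beta[j]     # drop coordinate j from the fit
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, alpha * weights[j]) / col_ss[j]
            r -= X[:, j] * beta[j]     # restore with the updated value
    return beta

rng = np.random.default_rng(1)
n, d = 300, 8
X = rng.normal(size=(n, d))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)
# Pilot OLS fit supplies weights w_j = 1/|b_ols_j|: noise coordinates get a
# large penalty and are thresholded to zero, signal coordinates barely shrink.
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
w = 1.0 / (np.abs(b_ols) + 1e-8)
beta_hat = adaptive_lasso_cd(X, y, alpha=0.05, weights=w)
```

The abstract's proposal goes further by also reweighting the decision loss, which is what buys robustness to outliers on top of the oracle-style selection that the adaptive penalty provides.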
In unsupervised learning, clustering is a common starting point for data processing. The convex or concave fusion clustering method is a novel approach that is more stable and accurate than traditional methods such as k-means and hierarchical clustering. However, the optimization algorithm used with this method can be slowed down significantly by the complexity of the fusion penalty, which increases the computational burden. This paper introduces a random projection ADMM algorithm based on the Bernoulli distribution and develops a double random projection ADMM method for high-dimensional fusion clustering. These new approaches significantly outperform the classical ADMM algorithm due to their ability to significantly increase computational speed by reducing complexity and improving clustering accuracy by using multiple random projections under a new evaluation criterion. We also demonstrate the convergence of our new algorithm and test its performance on both simulated and real data examples.
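A hedged sketch of the core ingredient, a Bernoulli/Rademacher random projection that compresses the feature dimension before clustering while approximately preserving pairwise distances (illustrative only, not the paper's double-projection ADMM; the function name is hypothetical):

```python
import numpy as np

def bernoulli_projection(X, k, rng):
    # Random projection with +-1 entries, each sign with probability 1/2,
    # scaled by 1/sqrt(k) so squared distances are preserved in expectation
    # (a Johnson-Lindenstrauss-style embedding).
    d = X.shape[1]
    R = rng.choice([-1.0, 1.0], size=(d, k)) / np.sqrt(k)
    return X @ R

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 500))
Z = bernoulli_projection(X, 200, rng)
# Pairwise distance between two rows, before and after projection
d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(Z[0] - Z[1])
```

Because the fusion penalty couples every pair of observations, shrinking the feature dimension this way reduces the per-iteration cost of ADMM while keeping the between-point geometry (and hence the clustering structure) approximately intact.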