An integrative Java program, visual3D 1.0, for 3D visualization of objects and chemical molecules was developed in the present study. The software reads an XYZ or OBJ file and generates the 3D graphics represented by the file. Various parameters can be specified by users. In the 3D graphics window, users may right- or left-click the mouse to zoom the graphics in or out, drag the mouse to rotate the graphics, or slide the scrollbars to translate the graphics vertically or horizontally. The 3D graphics can be saved in its current appearance as an image file (in PNG, JPG, or other formats). Both visual3D 1.0 and demonstration data files are provided.
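The XYZ input format mentioned above is a simple plain-text chemical format: the first line gives the atom count, the second a free-form comment, and each following line one "element x y z" record. The sketch below illustrates parsing this standard format; it is a minimal illustration in Python, not the actual loader used by the (Java) visual3D 1.0.

```python
# Minimal parser for the standard XYZ chemical file format
# (illustrative sketch only; not visual3D's actual loader).

def parse_xyz(text):
    """Parse XYZ-format text into (comment, [(element, x, y, z), ...])."""
    lines = text.strip().splitlines()
    n_atoms = int(lines[0])            # line 1: number of atoms
    comment = lines[1]                 # line 2: free-form comment/title
    atoms = []
    for line in lines[2:2 + n_atoms]:  # one "element x y z" record per atom
        element, x, y, z = line.split()[:4]
        atoms.append((element, float(x), float(y), float(z)))
    return comment, atoms

# Example: a water molecule in XYZ format
water = """3
water
O 0.000 0.000 0.117
H 0.000 0.757 -0.471
H 0.000 -0.757 -0.471"""
comment, atoms = parse_xyz(water)
```

A viewer such as visual3D would then map each parsed atom to a sphere (colored and sized by element) placed at the given coordinates.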
Effect size is a statistical concept that measures the strength of the relationship between two variables. Effect sizes have basic properties such as independence of measurement unit, independence of sample size, and monotonicity. In particular, unlike statistical significance tests, effect sizes are not influenced by sample size, and they thereby avoid various problems inherent in significance testing. Effect size is also an important component in constructing the new statistics. In the present study, various effect sizes were described mathematically and a free desktop calculator for effect sizes was presented.
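As one concrete example of an effect size with the properties listed above, Cohen's d for two independent samples can be computed in a few lines. This is a minimal stdlib sketch for illustration, not the calculator presented in the study; the sample data are made up.

```python
# Cohen's d for two independent samples, using the pooled standard
# deviation (a minimal stdlib sketch; example data are hypothetical).
from statistics import mean, variance

def cohens_d(a, b):
    na, nb = len(a), len(b)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

group1 = [5.1, 5.4, 4.9, 5.8, 5.2]
group2 = [4.2, 4.6, 4.1, 4.8, 4.4]
d = cohens_d(group1, group2)
```

Because d is scale-free, multiplying both samples by any constant leaves it unchanged, which illustrates the measurement-unit independence noted above.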
Lamellar transition‐metal dichalcogenides (MX2) have promising applications in electrochemical energy storage and conversion devices due to their two‐dimensional structure, ultrathin thickness, large interlayer distance, tunable bandgap, and transformable phase nature. Interlayer engineering of MX2 nanosheets with large specific surface area can modulate their electronic structure and interlayer distance as well as the intercalated foreign species, which is important for optimizing their performance in different devices. This review first summarizes recent progress on MX2 nanosheets and the significance of their interlayer engineering. Strategies for the synthesis of interlayer‐expanded MX2 nanosheets are then discussed in detail. Emphasis is placed on their applications in rechargeable batteries, pseudocapacitors, hydrogen evolution reaction (HER) catalysis, and the treatment of environmental contaminants, demonstrating the importance of interlayer engineering in controlling the performance of MX2. Finally, current challenges of interlayer‐expanded MX2 and outlooks for further advances are discussed.
Interlayer engineering of transition‐metal dichalcogenide (TMD) nanostructures opens a new avenue for tuning their physical and chemical properties and optimizing their device performance. This review summarizes recent advances in interlayer‐expanded TMD nanostructures, focusing on synthesis strategies and applications in rechargeable batteries, pseudocapacitors, and electrolysis.
In this paper, we propose a new no-reference (NR)/blind sharpness metric in the autoregressive (AR) parameter space. Our model is established via analysis of AR model parameters: we first calculate the energy and contrast differences of the locally estimated AR coefficients in a pointwise manner, and then quantify image sharpness with percentile pooling to predict the overall score. In addition to the luminance domain, we consider the inevitable effect of color information on the visual perception of sharpness and thereby extend the model to the widely used YIQ color space. Validation of our technique is conducted on the subsets with blurring artifacts from four large-scale image databases (LIVE, TID2008, CSIQ, and TID2013). Experimental results confirm the superiority and efficiency of our method over existing NR algorithms, state-of-the-art blind sharpness/blurriness estimators, and classical full-reference quality evaluators. Furthermore, the proposed metric can also be extended to stereoscopic images based on binocular rivalry, and attains remarkably high performance on the LIVE3D-I and LIVE3D-II databases.
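The percentile pooling step mentioned above summarizes a map of local sharpness scores by keeping only the largest fraction of values, on the premise that perceived sharpness is dominated by the sharpest regions. The sketch below illustrates the general idea only; the pooling fraction and local scores are hypothetical, not those used in the paper.

```python
# Percentile pooling: average only the top q-fraction of local scores
# (illustrative sketch; the paper's actual pooling fraction may differ).

def percentile_pool(scores, q=0.1):
    """Mean of the top q fraction of local scores (0 < q <= 1)."""
    ranked = sorted(scores, reverse=True)
    k = max(1, int(len(ranked) * q))   # at least one score is kept
    return sum(ranked[:k]) / k

# Hypothetical per-patch sharpness scores for one image
local_scores = [0.1, 0.2, 0.15, 0.9, 0.85, 0.12, 0.3, 0.88, 0.05, 0.11]
overall = percentile_pool(local_scores, q=0.3)
```

Here the overall score is driven by the three sharpest patches, so a mostly blurred image with a few crisp regions still pools to a high sharpness value.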
The t-test laid a foundation of modern statistics and remains one of its core topics. The theory appears in virtually every statistics textbook and is at the core of almost all applied statistics courses, and almost all statistical software and tools implement t-tests, including MATLAB, SAS, SPSS, and R. However, the t-test has been widely criticized in recent years for its theoretical flaws and frequent misuse. The t-test applies only to normally distributed populations with small sample sizes; even then, the sample size cannot be too small because of problems such as distortion of the t-transformation. As a significance test, the t-test suffers from the general defects of statistical significance testing; coupled with the inherent fallacies of confidence intervals and the peculiar uncertainty problems of t-intervals, these defects make the t-test methodology clearly insufficient. The t-test thus faces a retain-or-discard decision in statistics, and some statisticians have advocated removing it from statistics textbooks. For the significance-test problems, proposed remedies include using Bayesian methods, performing meta-analyses, reporting effect sizes, stressing statistical validity, using nonparametric statistics, adopting good experimental and sampling designs with appropriate sample sizes, using network methods instead of reductionist methods to obtain and analyze data, and combining statistical conclusions with mechanism analysis to draw scientific inferences. For the t-interval uncertainty problem, solutions include the Bayesian credible interval method, the Bootstrap credible interval method, direct inference from the central limit theorem, and the unified theory of uncertainty.
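For reference, the t-transformation at the core of the test is simply the standardized distance between the sample mean and a hypothesized mean. A stdlib sketch with made-up data (obtaining a p-value would additionally require the t-distribution, from a table or a statistics library):

```python
# One-sample t statistic, t = (mean - mu0) / (s / sqrt(n)).
# Stdlib sketch with hypothetical data; no p-value is computed here.
from math import sqrt
from statistics import mean, stdev

def t_statistic(sample, mu0):
    """Return (t, degrees of freedom) for a one-sample t-test."""
    n = len(sample)
    return (mean(sample) - mu0) / (stdev(sample) / sqrt(n)), n - 1

sample = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4]
t, df = t_statistic(sample, mu0=10.0)
```

With few observations, the sample standard deviation in the denominator is itself very noisy, which is one face of the small-sample distortion problem discussed above.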
An investigation of the concepts of marriage and childbirth in Yu Village of Zhejiang Province over the past 40 years finds that the shift from the conventional norm of "having children after marriage" to the wide acceptance of "having children before marriage" can be explained as a trade-off strategy against the one-child policy to secure the continuity of the family name. When the one-child policy ended, some families in the village once again adhered to the norm of having children after marriage. However, with the skyrocketing cost of marriage, involving luxury weddings and new houses, "having children before marriage" has come back into fashion in the village since 2010. Judging from the number of children born before marriage in the village, the traditional concept of having children after marriage appears to have been all but abandoned. Nevertheless, if one examines the structure of meaning behind the now much-expanded engagement ceremony, one can see that it essentially redefines the "witness" an
Silicon (Si) alleviates cadmium (Cd) toxicity in rice (Oryza sativa). However, the chemical mechanisms at the single‐cell level are poorly understood. Here, a suspension of rice cells exposed to Cd and/or Si treatments was investigated using a combination of plant-cell nutritional, molecular biological, and physical techniques, including in situ noninvasive microtest technology (NMT), polymerase chain reaction (PCR), inductively coupled plasma mass spectrometry (ICP‐MS), and atomic force microscopy (AFM) in Kelvin probe mode (KPFM). We found that Si‐accumulating cells had a significantly reduced net Cd²⁺ influx compared with Si‐limited cells. PCR analyses of the expression levels of Cd and Si transporters in rice cells showed that, when the Si concentration in the medium was increased, expression of the Si transporter gene Low silicon rice 1 (Lsi1) was up‐regulated, whereas expression of the Cd transporter gene Natural resistance‐associated macrophage protein 5 (Nramp5) was down‐regulated. ICP‐MS results revealed that, after fractionation of the cell walls, 64% of the total Si in the walls was bound to hemicellulose constituents, which consequently inhibited Cd uptake. Furthermore, KPFM demonstrated that the heterogeneity of the wall surface potential was higher in cells cultured in the presence of Si than in its absence, and was homogenized after the addition of Cd. These results suggest that a hemicellulose‐bound form of Si with net negative charges is responsible for the inhibition of Cd uptake in rice cells, through Si‐hemicellulose matrix-Cd complexation and subsequent co‐deposition.
Metallic lithium (Li) is a promising anode material for next‐generation rechargeable batteries. However, dendrite growth of Li and the repeated formation of the solid electrolyte interphase during Li plating and stripping result in low Coulombic efficiency, internal short circuits, and capacity decay, hampering practical application. In developing a stable Li metal anode, the current collector is recognized as a critical component for regulating Li plating. In this work, a lithiophilic Cu‐CuO‐Ni hybrid structure is synthesized as a current collector for Li metal anodes. The low overpotential of CuO for Li nucleation and the uniform Li⁺ ion flux induced by the Cu nanowire arrays effectively suppress the growth of Li dendrites. Moreover, the surface Cu layer acts as a protective layer that enhances the structural durability of the hybrid structure during long‐term cycling. As a result, the Cu‐CuO‐Ni hybrid structure achieves a Coulombic efficiency above 95% for more than 250 cycles at a current density of 1 mA cm⁻², and stable repeated Li plating and stripping for 580 h (290 cycles) in a symmetric cell.
A lithiophilic Cu‐CuO‐Ni hybrid structure is synthesized on a Ni foam substrate as a current collector for lithium (Li) metal anodes. The collective effects of the low overpotential of the Cu‐CuO‐Ni hybrid structure for Li nucleation, the nanowire array configuration, and the Cu buffer layer are demonstrated to be key to achieving outstanding overall performance of the current collector.
Confidence interval theory has long been a foundation of statistics, and the confidence interval has been regarded as an important component of statistical analysis. Almost all statistical textbooks and statistical software cover confidence intervals, which are used to estimate statistical parameters or the parameters of mathematical models, and which are an important part of methods such as interval estimation, analysis of variance, and regression analysis. They are recommended or required by the methodological guidelines of many reputable journals. To date, confidence interval theory and methods have been widely used in scientific and engineering fields including the life sciences, medicine, environmental science, chemistry, physics, and psychology. However, fallacies and deficiencies in confidence interval theory and methodology have led to widespread misuse, and the theory has drawn growing criticism in recent years; some statisticians even suggest abandoning it. To avoid the problems of classical confidence interval theory, one can use Bayesian credible intervals, use uncertainty methods, calculate confidence intervals while avoiding statistical significance tests, or use the Bootstrap credible interval method proposed by the author. In practice, controlled experiments should include multiple replicates or treatments; observational studies should draw multiple representative samples, although a single sample may suffice if the sample size is large enough. Whole-process control should be implemented for every procedure from sampling to statistical analysis, and confidence interval results should be cross-compared and validated against other multi-source results to obtain the most reliable conclusions.
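The general bootstrap idea underlying the interval methods mentioned above can be sketched in a few lines: resample the data with replacement, recompute the statistic on each resample, and read off empirical quantiles. This is the standard percentile bootstrap shown for illustration with made-up data; the author's Bootstrap credible interval method may differ in detail.

```python
# Percentile bootstrap interval for a sample mean (illustrative sketch;
# data are hypothetical, and the seed is fixed for reproducibility).
import random
from statistics import mean

def bootstrap_interval(sample, stat=mean, n_boot=10_000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    # Recompute the statistic on n_boot resamples drawn with replacement.
    replicates = sorted(
        stat([rng.choice(sample) for _ in sample]) for _ in range(n_boot)
    )
    lo = replicates[int(n_boot * alpha / 2)]        # 2.5th percentile
    hi = replicates[int(n_boot * (1 - alpha / 2))]  # 97.5th percentile
    return lo, hi

data = [4.1, 5.2, 4.8, 5.9, 5.0, 4.6, 5.4, 4.9]
lo, hi = bootstrap_interval(data)
```

Because the interval comes from the empirical resampling distribution rather than a normality assumption, the same code works unchanged for medians or other statistics via the `stat` argument.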
Finally, in addition to writing, publishing, and adopting new statistical monographs and teaching materials as soon as possible, it is imperative to revise statistical software and distribute new editions based on the new statistics.
The p-value is at the heart of statistical significance tests, a central issue in the role of statistical inference in advancing scientific discovery. Over the past few decades, p-value-based statistical significance tests have been used in most statistics-related research papers and textbooks and in virtually all statistical software around the world, and numerous scientists across disciplines hold the p-value as the gold standard of statistical significance. In recent years, however, p-value-based significance tests have been questioned as never before, mainly because the significance-testing paradigm is flawed: the p-value is overly sensitive, it is a dichotomized subjective index, and statistical significance depends on sample size. Scientific hypotheses can only be falsified, not confirmed, and p-value-based significance tests are one source of false conclusions and of the research reproducibility crisis. For this reason, many statisticians advocate abandoning p-value-based significance tests and replacing them with effect sizes, Bayesian methods, meta-analysis, and the like. Scientific inference that combines statistical testing with multiple types of evidence is the basis for reliable conclusions. Reliable inference requires appropriate experimental design, sampling design, and sample size, as well as full control of the research process. For complex and time-varying problems, network or systems methods should be used instead of reductionist methods to obtain and analyze data. To change the research paradigm, multiple repeated experiments and multi-sample testing should be adopted, with multiple parties verifying one another to improve the authenticity and reproducibility of results.
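The sample-size dependence criticized above is easy to demonstrate: replicating the very same data makes the p-value shrink toward zero while the effect size is untouched. The stdlib sketch below uses a two-sided z-test on hypothetical data purely as a simple stand-in; it is an illustration of the general point, not an analysis from the original work.

```python
# Demonstration: the p-value depends on sample size, the effect size does
# not.  A two-sided z-test is used as a simple stand-in (hypothetical data).
from math import erf, sqrt
from statistics import mean, pstdev

def z_p_value(sample, mu0):
    """Two-sided p-value from a z-test against mean mu0."""
    n = len(sample)
    z = (mean(sample) - mu0) / (pstdev(sample) / sqrt(n))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def effect_size(sample, mu0):
    """Standardized mean difference (scale-free, sample-size-free)."""
    return (mean(sample) - mu0) / pstdev(sample)

base = [10.3, 9.9, 10.4, 10.0, 10.2, 10.1]
small, large = base, base * 20   # identical distribution, 20x the data
p_small, p_large = z_p_value(small, 10.0), z_p_value(large, 10.0)
d_small, d_large = effect_size(small, 10.0), effect_size(large, 10.0)
```

The enlarged sample yields a far smaller p-value from exactly the same observed pattern, while the effect size is identical in both cases, which is precisely why effect sizes are recommended as a replacement above.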
In addition to writing, publishing, and adopting new statistical monographs and textbooks, the most urgent task is to revise statistical software and release new versions based on the new statistics. Until the new statistics is popularized, what we can do is improve data quality, tighten the p-value thresholds of significance tests, use more reasonable analysis methods and testing standards, and combine statistical analysis with mechanism analysis.