•Se is predicted by using spectral characteristics.
•Abundance values of clay minerals were obtained through SMACC.
•Reflectance of clay minerals was obtained by spectral retrieval.
•A Se concentration map has been generated.
Selenium (Se) is an important trace element that is essential to human beings. In the past, the Se concentration has mostly been obtained by field sampling and analysed under laboratory conditions. Unfortunately, this process is expensive, and the number of available samples is usually relatively small.
A soil geochemical survey was conducted in conjunction with an airborne survey via hyperspectral remote sensing in the Chuangye Farm area, China. Twenty-five elements/oxides including Se were analysed in the samples, and the results showed that Se has a highly negative correlation with K. Using hyperspectral Shortwave Infrared Airborne Spectrographic Imager (SASI) data, the abundances of clay minerals were obtained through the sequential maximum angle convex cone (SMACC) method. From these abundances, the reflectance of clay minerals was obtained using the spectral retrieval method. Because K, Se, clay minerals and their spectral characteristics are correlated, a stepwise regression model was established using the geochemical survey data and the retrieved hyperspectral SASI data, and the K and Se concentrations were then predicted. The results of this study show that predicting the soil Se content from SASI images through the spectral retrieval of soil clay minerals, calibrated against the geochemical analysis results, achieves higher prediction accuracy than using the raw SASI images, demonstrating that this prediction approach is feasible.
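As a rough illustration of the final modelling step, the sketch below runs a forward stepwise regression that selects retrieved-reflectance bands to predict Se. The synthetic data, band count, and adjusted-R² stopping rule are assumptions for illustration, not the authors' exact procedure.

```python
# Forward stepwise regression sketch: select retrieved clay-mineral
# reflectance bands that best predict soil Se. Synthetic data and the
# adjusted-R^2 stopping rule are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 10
X = rng.random((n_samples, n_bands))        # retrieved reflectance per band
se = 0.3 - 0.5 * X[:, 2] + 0.2 * X[:, 7] + 0.05 * rng.standard_normal(n_samples)

def adjusted_r2(model, X_sub, y):
    n, p = X_sub.shape
    r2 = model.score(X_sub, y)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

selected, remaining, best = [], list(range(n_bands)), -np.inf
while remaining:
    # Try adding each remaining band; keep the one that helps most.
    scores = []
    for j in remaining:
        cols = selected + [j]
        m = LinearRegression().fit(X[:, cols], se)
        scores.append((adjusted_r2(m, X[:, cols], se), j))
    score, j = max(scores)
    if score <= best:                       # stop when no band improves the fit
        break
    best = score
    remaining.remove(j)
    selected.append(j)

print("selected bands:", selected, "adjusted R^2: %.3f" % best)
```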
Task complexity has been recognized as an important task characteristic that influences and predicts human performance and behaviors. However, there is still limited consensus on how to understand this concept. This study aims to provide a clear, systematic understanding of task complexity. Task complexity definitions and models in the literature are reviewed from structuralist, resource requirement, and interaction viewpoints. Various existing task complexity definitions are summarized, and confusing terms related to task complexity are then clarified. From an objective and broad perspective, task complexity is conceptualized following a task-component-factor-dimension framework. A six-component task model is proposed for identifying salient complexity contributory factors, and task complexity is then structured along ten dimensions. Finally, the proposed task complexity model is compared with other models.
The review and conceptualization of task complexity support a better understanding of task complexity, its measurement and management, and the in-depth analysis of various tasks in industry.
► Various types of definitions of task complexity are reviewed.
► Confusing constructs related to task complexity are clarified.
► Task complexity is conceptualized in a task-component-factor-dimension framework.
► Several task complexity models are compared.
Identification of cancer subtypes plays an important role in revealing useful insights into disease pathogenesis and advancing personalized therapy. The recent development of high-throughput sequencing technologies has enabled the rapid collection of multi-platform genomic data (e.g., gene expression, miRNA expression, and DNA methylation) for the same set of tumor samples. Although numerous integrative clustering approaches have been developed to analyze cancer data, few of them are particularly designed to exploit both deep intrinsic statistical properties of each input modality and complex cross-modality correlations among multi-platform input data. In this paper, we propose a new machine learning model, called multimodal deep belief network (DBN), to cluster cancer patients from multi-platform observation data. In our integrative clustering framework, relationships among inherent features of each single modality are first encoded into multiple layers of hidden variables, and then a joint latent model is employed to fuse common features derived from multiple input modalities. A practical learning algorithm, called contrastive divergence (CD), is applied to infer the parameters of our multimodal DBN model in an unsupervised manner. Tests on two available cancer datasets show that our integrative data analysis approach can effectively extract a unified representation of latent features to capture both intra- and cross-modality correlations, and identify meaningful disease subtypes from multi-platform cancer data. In addition, our approach can identify key genes and miRNAs that may play distinct roles in the pathogenesis of different cancer subtypes. Among those key miRNAs, we found that the expression level of miR-29a is highly correlated with survival time in ovarian cancer patients. These results indicate that our multimodal DBN based data analysis approach may have practical applications in cancer pathogenesis studies and provide useful guidelines for personalized cancer therapy.
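Since the DBN layers are trained with contrastive divergence, the sketch below shows a CD-1 update for a single Bernoulli restricted Boltzmann machine, the building block such a DBN stacks. The layer sizes, learning rate, and toy data are assumptions for illustration, not the paper's configuration.

```python
# CD-1 sketch for one Bernoulli RBM, the layer a DBN stacks.
# Sizes, learning rate, and the toy data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 20, 8, 0.05
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible bias
b_h = np.zeros(n_hidden)    # hidden bias

def cd1_step(v0):
    # Positive phase: sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step gives the model's reconstruction.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # Gradient estimates from the difference of correlations.
    return np.outer(v0, p_h0) - np.outer(p_v1, p_h1), v0 - p_v1, p_h0 - p_h1

for _ in range(200):
    v = (rng.random(n_visible) < 0.3).astype(float)   # toy binary sample
    dW, db_v, db_h = cd1_step(v)
    W += lr * dW
    b_v += lr * db_v
    b_h += lr * db_h
```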
Learning without Forgetting. Li, Zhizhong; Hoiem, Derek. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 40, Issue 12, December 2018. Journal article, peer reviewed, open access.
When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises when we add new capabilities to a Convolutional Neural Network (CNN) but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new-task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaptation techniques and performs similarly to multitask learning that uses the original task data we assume to be unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new-task performance.
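A minimal sketch of the Learning-without-Forgetting objective as described: cross-entropy on the new task plus a knowledge-distillation term (written here as a KL divergence) that keeps the network's old-task outputs close to responses recorded before training begins. The temperature, loss weight, and tensor shapes are illustrative assumptions.

```python
# Learning-without-Forgetting loss sketch: new-task cross-entropy plus a
# distillation term anchoring old-task outputs to pre-training responses.
# Temperature T, weight lam, and all shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def lwf_loss(new_logits, new_labels, old_logits, old_logits_recorded,
             T=2.0, lam=1.0):
    ce = F.cross_entropy(new_logits, new_labels)
    # Soften both the current and the recorded old-task outputs.
    p_old = F.log_softmax(old_logits / T, dim=1)
    q_old = F.softmax(old_logits_recorded / T, dim=1)
    distill = F.kl_div(p_old, q_old, reduction="batchmean") * (T * T)
    return ce + lam * distill

# Example shapes: batch of 4, 10 new classes, 5 old classes.
new_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
old_now = torch.randn(4, 5, requires_grad=True)   # current old-task head
old_rec = torch.randn(4, 5)                       # recorded before training
print(lwf_loss(new_logits, labels, old_now, old_rec))
```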
A number of studies have shown that increasing the depth or width of convolutional networks is a rewarding approach to improving the performance of image recognition. In our study, however, we observed difficulties along both directions. On one hand, the pursuit of very deep networks is met with diminishing returns and increased training difficulty; on the other hand, widening a network results in a quadratic growth in both computational cost and memory demand. These difficulties motivate us to explore structural diversity in designing deep networks, a new dimension beyond just depth and width. Specifically, we present a new family of modules, namely the PolyInception, which can be flexibly inserted in isolation or in composition as replacements for different parts of a network. Choosing PolyInception modules with the guidance of architectural efficiency can improve the expressive power while preserving comparable computational cost. The Very Deep PolyNet, designed following this direction, demonstrates substantial improvements over the state of the art on the ILSVRC 2012 benchmark. Compared to Inception-ResNet-v2, it reduces the top-5 validation error on single crops from 4.9% to 4.25% and on multi-crops from 3.7% to 3.45%.
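For intuition, the sketch below implements a second-order "poly-2" residual unit of the kind the paper composes, adding a composed path F(F(x)) alongside the identity and F(x) paths. The inner block F here is a plain convolution placeholder rather than an Inception module, and the channel counts and shapes are illustrative.

```python
# Poly-2 residual unit sketch: output = x + F(x) + F(F(x)) with a shared F.
# F is a placeholder conv block, not the paper's Inception module.
import torch
import torch.nn as nn

class Poly2(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Shared residual function F, applied once and twice (composition).
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        fx = self.f(x)
        return x + fx + self.f(fx)   # I + F + F^2

x = torch.randn(1, 16, 32, 32)
print(Poly2(16)(x).shape)            # torch.Size([1, 16, 32, 32])
```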
The Xindian Ancient City (XAC) site is the most complete and frequently excavated urban site in the lower reaches of the Minjiang River. Archaeological and chronological research of this area helps clarify the history of human activity at the site and restore the geographical background of the ethnic minority regimes that lived in the coastal areas of south China during the Eastern Zhou dynasty, along with their influence on regional civilisation. Dating results suggest that the site was built during the Spring and Autumn and Warring States periods (3.2–2.4 thousand years ago), which can reasonably explain the presence of relics from the Warring States period on and within the wall. Our dating results are also consistent with the late Neolithic stone tools and pottery fragments found in the area, suggesting that the site was an ideal settlement before castles were constructed. Based on the spatial distribution of the Neolithic sites and Holocene transgression records in the Fuzhou Basin, it was found that coastline advance and retreat were the main factors affecting paleo-human activity in the Fuzhou Basin during the late Neolithic period. Previous studies showed that the seawater gradually withdrew from the basin around 2 ka BP. Before this, humans mainly lived in the crescent-shaped area on the west side of the Fuzhou Basin, close to the ancient coastline. Based on existing archaeological results and chronological data, it can be shown that the lower reaches of the Minjiang River, where Fujian Province's capital is located, have long been the centre of ancient human activity in the coastal areas of south China, with a civilisation history of 6–4 ka and a city construction history of at least 2.9 ka.
•It is the earliest urban site in the lower reaches of the Minjiang River, China.
•The site was a continuously inhabited Neolithic−historical settlement.
•The geomorphic pattern led to long-term human activity in the coastal estuaries.
•Sea level change can control the spatial distribution of sites in the estuary area.
There is a scarcity of empirical data on human error for human reliability analysis (HRA). This situation can increase the variability and impair the validity of HRA outcomes in risk analysis. In this work, a microworld study was used to investigate the effects of performance shaping factors (PSFs) and their interrelationships and combined effects on the human error probability (HEP). The PSFs involved were task complexity, time availability, experience, and time pressure. The empirical data obtained were compared with predictions by the Standardized Plant Analysis Risk‐Human Reliability Method (SPAR‐H) and data from other sources. The comparison included three aspects: (1) HEP, (2) relative effects of the PSFs, and (3) error types. Results showed that the HEP decreased with experience and time availability levels. The significant relationship between task complexity and the HEP depended on time availability and experience, and time availability affected the HEP through time pressure. The empirical HEPs were higher than the HEPs predicted by SPAR‐H under different PSF combinations, showing the tendency of SPAR‐H to produce relatively optimistic results in our study. The relative effects of two PSFs (i.e., experience/training and stress/stressors) in SPAR‐H agreed to some extent with those in our study. Several error types agreed well with those from operational experience and a database for nuclear power plants (NPPs).
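For reference, the sketch below applies the SPAR-H quantification rule that such a comparison relies on: HEP = NHEP × the composite PSF multiplier, with an adjustment factor applied when three or more PSFs are negative (multiplier > 1). The multiplier values in the example are illustrative, not SPAR-H's published tables.

```python
# SPAR-H quantification sketch. The NHEP of 1e-2 is SPAR-H's nominal value
# for diagnosis tasks; the PSF multipliers below are illustrative only.
def spar_h_hep(nhep, multipliers):
    composite = 1.0
    for m in multipliers:
        composite *= m
    negative = sum(1 for m in multipliers if m > 1)
    if negative >= 3:
        # Adjustment factor keeps the result below 1.0 when many PSFs
        # are negative: NHEP * C / (NHEP * (C - 1) + 1).
        return nhep * composite / (nhep * (composite - 1) + 1)
    return min(nhep * composite, 1.0)

# Example: diagnosis task with high complexity (x2), barely adequate
# time (x10), and low experience (x3); values are illustrative.
print(spar_h_hep(1e-2, [2, 10, 3]))   # ~0.377 after adjustment
```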
•Utilization of expert judgment and empirical data to derive the multipliers of performance shaping factors (PSFs).
•Application of absolute probability judgment (APJ) and ratio magnitude estimation (RME) methods.
•Suggestion of PSF multiplier design for digital control rooms.
Human reliability analysis (HRA) still relies heavily on expert judgments to generate reliability data, and there is a widely recognized need to validate and justify the reliability data obtained from expert judgments. To demonstrate such an effort, we provide a template for how expert elicitations and empirical studies can be combined to derive the multipliers of performance shaping factors (PSFs). We applied two expert judgment techniques, absolute probability judgment (APJ) and ratio magnitude estimation (RME), to update the PSF multiplier design in Standardized Plant Analysis of Risk-Human Reliability Analysis (SPAR-H). Licensed operators (N = 17) from a nuclear power plant were recruited. APJ and RME were found to have acceptable inter-rater reliability and convergent validity between them. The multipliers estimated by APJ and RME were compared with those from empirical studies in the human performance literature, and certain consistencies between these heterogeneous data sources were found. Combining these heterogeneous data, we suggest a multiplier design of PSFs for SPAR-H. We also link each PSF to the psychological mechanism through which it triggers human errors. Our work might suggest the appropriateness of expert elicitation for generating useful data for HRA, and it strengthens the empirical and psychological foundations of PSF-based HRA methods.
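As a toy illustration of pooling such judgments, the sketch below takes per-expert multiplier estimates from the two techniques, pools each with a geometric mean (a common choice for ratio-scale judgments), and checks agreement with a rank correlation. All numbers are invented for illustration and do not reflect the study's data.

```python
# Pool per-expert PSF multiplier estimates and check APJ/RME agreement.
# All values are invented for illustration; they are not the study's data.
import numpy as np
from scipy.stats import spearmanr

# Rows: experts; columns: PSF levels (illustrative values).
apj = np.array([[1.2, 2.1, 4.8, 9.5, 20.0],
                [1.5, 1.8, 5.2, 11.0, 18.0],
                [1.1, 2.5, 4.1, 8.7, 25.0]])
rme = np.array([[1.3, 2.0, 5.5, 10.2, 22.0],
                [1.4, 2.2, 4.6, 9.0, 19.5],
                [1.2, 1.9, 5.0, 12.1, 24.0]])

# Geometric-mean pooling across raters.
apj_pooled = np.exp(np.log(apj).mean(axis=0))
rme_pooled = np.exp(np.log(rme).mean(axis=0))

rho, _ = spearmanr(apj_pooled, rme_pooled)
print("pooled APJ:", apj_pooled.round(2))
print("pooled RME:", rme_pooled.round(2))
print(f"rank agreement (Spearman rho): {rho:.2f}")
```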
Glioblastomas are highly lethal cancers that contain cellular hierarchies with self-renewing cancer stem cells that can propagate tumors in secondary transplant assays. The potential significance of cancer stem cells in cancer biology has been demonstrated by studies showing contributions to therapeutic resistance, angiogenesis, and tumor dispersal. We recently reported that physiologic oxygen levels differentially induce hypoxia-inducible factor-2α (HIF2α) levels in cancer stem cells. HIF1α functioned in the proliferation and survival of all cancer cells but was also activated in normal neural progenitors, suggesting a potentially restricted therapeutic index, whereas HIF2α was essential only in cancer stem cells and was not expressed by normal neural progenitors, demonstrating that HIF2α is a cancer stem cell-specific target. We now extend these studies to examine the role of hypoxia in regulating tumor cell plasticity. We find that hypoxia promotes the self-renewal capability of both the stem and non-stem populations and promotes a more stem-like phenotype in the non-stem population, with increased neurosphere formation and upregulation of important stem cell factors, such as OCT4, NANOG, and c-MYC. The importance of HIF2α was further supported by the finding that forced expression of non-degradable HIF2α induced a cancer stem cell marker and augmented the tumorigenic potential of the non-stem population. This novel finding may indicate a specific role of HIF2α in promoting glioma tumorigenesis. The unexpected plasticity of the non-stem glioma population and its stem-like phenotype emphasize the importance of developing therapeutic strategies targeting the microenvironmental influence on the tumor in addition to cancer stem cells.
As an important secondary resource with abundant platinum group metals (PGMs), spent catalysts demand recycling for both economic and environmental benefits. This article reviews the main pyrometallurgical processes for PGM recovery from spent catalysts. Existing processes, including smelting, vaporization, and sintering processes, are discussed based in part on a review of the physicochemical characteristics of PGMs in spent catalysts. The smelting technology, which produces a PGM-containing alloy, is significantly influenced by the addition of various collectors, such as lead, copper, iron, matte, or printed circuit board (PCB) material, chosen for their chemical affinities for PGMs. The vaporization process can recover PGMs in vapor form at low temperatures (250–700°C), but it suffers from severe corrosion and potential environmental and health risks as a result of the involvement of hazardous gases, mainly Cl₂ and CO. The sintering process serves as a reforming means for recycling spent catalysts by in situ reduction of their oxidized PGM components. Among these processes, the smelting process appears the most promising, although its overall performance can be further improved by seeking a suitable target-oriented collector and flux, together with proper pretreatment and process intensification using an external field.