The intracellular delivery of emerging biomacromolecular therapeutics, such as genes, peptides, and proteins, remains a great challenge. Unlike small hydrophobic drugs, these biotherapeutics are impermeable to the cell membrane and thus rely on endocytic pathways for cell entry. After endocytosis, they are entrapped in endosomes and ultimately degraded in lysosomes. To overcome these barriers, many carriers have been developed to facilitate the endosomal escape of these biomacromolecules. This mini-review focuses on the development of anionic pH-responsive amphiphilic carboxylate polymers for endosomal escape applications, including the design and synthesis of these polymers, mechanistic insights into their endosomal escape capability, the challenges in the field, and future opportunities.
Contrast is a fundamental attribute of images that plays an important role in human visual perception of image quality. While numerous approaches have been proposed to enhance image contrast, much less work has been dedicated to the automatic quality assessment of contrast-changed images. Existing approaches rely on global statistics to estimate contrast quality. Here we propose a novel local patch-based objective quality assessment method using an adaptive representation of local patch structure, which allows us to decompose any image patch into its mean intensity, signal strength, and signal structure components and then evaluate their perceptual distortions in different ways. A unique feature that differentiates the proposed method from previous contrast quality models is its capability to produce a local contrast quality map, which predicts local quality variations over space and may be employed to guide contrast enhancement algorithms. Validations based on four publicly available databases show that the proposed patch-based contrast quality index (PCQI) provides accurate predictions of the human perception of contrast variations.
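As a concrete illustration of the decomposition this abstract describes, the sketch below separates a patch into its mean intensity, signal strength, and signal structure components; the variable names and the flat-patch guard are our own assumptions, and PCQI's actual per-component distortion comparisons are not reproduced.

```python
import numpy as np

def decompose_patch(x):
    """Split a patch into mean intensity, signal strength, and a
    unit-norm signal structure vector (a flat patch gets a zero
    structure vector, a guard we add for safety)."""
    x = np.asarray(x, dtype=np.float64).ravel()
    mu = x.mean()                          # mean intensity component
    residual = x - mu                      # zero-mean signal
    strength = np.linalg.norm(residual)    # signal strength (contrast energy)
    structure = residual / strength if strength > 0 else np.zeros_like(residual)
    return mu, strength, structure

# A pure contrast stretch around the mean changes only the strength component.
patch = np.array([[10, 20], [30, 40]], dtype=np.float64)
stretched = 1.5 * (patch - patch.mean()) + patch.mean()
mu1, s1, v1 = decompose_patch(patch)
mu2, s2, v2 = decompose_patch(stretched)
print(mu1 == mu2, s2 / s1, float(v1 @ v2))  # True 1.5 1.0
```

Under this decomposition, a pure contrast stretch leaves the mean and structure untouched and scales only the strength, which is what the small example at the bottom verifies.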
The uplift of the Tibetan Plateau is an important geological event, but its growth history remains highly controversial. Different geological observations bear on this issue, while data from geochemistry, tectonics, and paleontology further fuel the debate. Vertebrate fossils have provided significant evidence for documenting the uplift of the Tibetan Plateau in the geologic past. The earliest fossil evidence, recently collected from the Oligocene Dingqing Formation in central Tibet, includes climbing perch and cyprinine fish fossils whose modern close relatives are distributed in the tropical zones of Asia and Africa. These discoveries are not only significant for the phylogeny and zoogeography of fishes, but also imply that the hinterland of the Tibetan Plateau was a warm and humid lowland at ~26 Ma. The co-existing plant assemblage, which includes palms and golden rain trees among others, indicates that warm, humid air from the Indian Ocean could penetrate deep into central Tibet, consistent with the inference from the fish fossils. Since that time, the geographical features and natural environments of the Tibetan Plateau have changed greatly. The plateau continued to rise in the Early Miocene and reached an elevation of ~3000 m, as demonstrated by fish, mammal, and plant fossils. The endemic schizothoracines (snow carps) originated in the Miocene, when the Tibetan Plateau became a barrier to mammalian migration between its north and south sides. A series of fish and mammal fossils provides unequivocal evidence that the Tibetan Plateau rose close to its modern elevation in the Pliocene and developed a cryospheric environment. As a result, the plateau region became the center of origin for the cold-adapted Quaternary Ice Age fauna.
•There have been heated debates about the history and process of Tibetan Plateau uplift, especially its paleo-elevations.
•Paleo-elevations of northern Tibet were lower than 2000 m during the Oligocene, too low to hinder the migration of mammals.
•By the Miocene, the plateau’s elevation had reached about 3000 m, which became an obstacle to the exchange of large mammals.
•In the Pliocene, the plateau reached its modern elevations of >4000 m, creating a cryospheric environment for the Ice Age fauna.
•Paleontologists have failed to find ancestors of the Ice Age fauna in the Pliocene and Quaternary Arctic tundra and steppe.
•Cold environments forced the ancestors of the Ice Age fauna to spend their early evolutionary stages on the Tibetan Plateau.
Recently, convolutional neural networks (CNNs) have attracted tremendous attention and achieved great success in many image processing tasks. In this paper, we focus on CNN technology combined with image restoration to improve video coding performance, and propose content-aware CNN-based in-loop filtering for High Efficiency Video Coding (HEVC). In particular, we quantitatively analyze the structure of the proposed CNN model from multiple dimensions to make the model interpretable and optimal for CNN-based loop filtering. More specifically, each coding tree unit (CTU) is treated as an independent region for processing, such that the proposed content-aware multimodel filtering mechanism is realized by restoring different regions with different CNN models under the guidance of a discriminative network. To adapt to the image content, the discriminative network learns to analyze the content characteristics of each region for the adaptive selection of the deep learning model. CTU-level control is also enabled in the sense of rate-distortion optimization. To learn the CNN model, an iterative training method is proposed that simultaneously labels filter categories at the CTU level and fine-tunes the CNN model parameters. The CNN-based in-loop filter is applied after the sample adaptive offset in HEVC, and extensive experiments show that the proposed approach significantly improves coding performance, achieving up to 10.0% bit-rate reduction. On average, 4.1%, 6.0%, 4.7%, and 6.0% bit-rate reductions are obtained under the all intra, low delay, low delay P, and random access configurations, respectively.
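To make the per-CTU multimodel mechanism concrete, here is a minimal sketch assuming a list of trained candidate restoration CNNs (`filters`) and a discriminative network (`discriminator`) that scores them for each CTU; both names are hypothetical stand-ins rather than the paper's networks, and the rate-distortion-optimized CTU-level control is omitted.

```python
import torch

def filter_frame(frame, ctu_size, filters, discriminator):
    """Content-aware multimodel filtering: split the reconstructed frame
    into CTUs, let a discriminative network pick a restoration CNN per
    CTU, and filter each CTU with its selected model."""
    _, _, H, W = frame.shape
    out = frame.clone()
    for y in range(0, H, ctu_size):
        for x in range(0, W, ctu_size):
            ctu = frame[:, :, y:y + ctu_size, x:x + ctu_size]
            idx = int(discriminator(ctu).argmax(dim=1))  # per-CTU model selection
            out[:, :, y:y + ctu_size, x:x + ctu_size] = filters[idx](ctu)
    return out

# Toy plumbing check with identity "filters" and a random "discriminator".
frame = torch.rand(1, 1, 128, 128)
filters = [torch.nn.Identity(), torch.nn.Identity()]
discriminator = lambda ctu: torch.randn(1, len(filters))
restored = filter_frame(frame, 64, filters, discriminator)
```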
Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views predicts the quality of symmetrically distorted stereoscopic images well, but produces substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains both single-view images and symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, where we find that the quality prediction bias for asymmetrically distorted images can lean in opposite directions (overestimation or underestimation), depending on the distortion types and levels. Our subjective test also suggests that the eye dominance effect does not have a strong impact on visual quality judgments of stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of stereoscopic images.
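The information content-weighted pooling mentioned above can be sketched as weighting a local quality map by an estimate of local information content before averaging. The log-based weight below follows the familiar information-content weighting idea; the weight formula, the assumed noise variance `sigma_n2`, and the omission of the divisive normalization stage are all simplifications, not the paper's exact scheme.

```python
import numpy as np

def info_weighted_pool(quality_map, variance_map, sigma_n2=0.1):
    """Pool a local quality map with information-content weights:
    regions with more signal variance relative to an assumed noise
    level carry more perceptual information and get larger weights."""
    weights = np.log2(1.0 + variance_map / sigma_n2)
    return float((weights * quality_map).sum() / weights.sum())
```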
Face anti-spoofing (a.k.a. presentation attack detection) has recently emerged as an active topic of great significance for both academia and industry, owing to the rapidly increasing demand for user authentication on mobile phones, PCs, tablets, and so on. Numerous face spoofing detection schemes have been proposed based on the assumption that training and testing samples come from the same domain in terms of feature space and marginal probability distribution. However, due to the unlimited variation of conditions in face acquisition (illumination, facial appearance, camera quality, and so on), such single-domain methods lack generalization capability, which prevents them from being applied in practice. In light of this, we introduce an unsupervised domain adaptation face anti-spoofing scheme that addresses the real-world scenario of learning a classifier for the target domain from training samples in a different source domain. In particular, an embedding function is first learned on the source and target domain data, mapping the data to a new space in which distribution similarity can be measured. Subsequently, the Maximum Mean Discrepancy between the latent features of the source and target domains is minimized so that a more generalized classifier can be learned. State-of-the-art representations, including both hand-crafted and deep neural network learned features, are further adopted into the framework to probe their capability for domain adaptation. Moreover, we introduce a new database for face spoofing detection, which contains more than 4000 face samples with a large variety of spoofing types, capture devices, illuminations, and so on. Extensive experiments on existing benchmark databases and the new database verify that the proposed approach gains significantly better generalization capability in cross-domain scenarios, providing consistently better anti-spoofing performance.
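The discrepancy being minimized has a simple empirical estimate. The sketch below computes a biased estimate of the squared Maximum Mean Discrepancy between batches of source- and target-domain features; the Gaussian kernel and its bandwidth are assumptions, since the abstract does not specify the kernel.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel matrix between rows of a (n x d) and b (m x d).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    """Biased estimate of squared MMD between two feature batches."""
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

Minimizing this quantity with respect to the embedding parameters pulls the two latent feature distributions together, which is what lets a classifier trained on the source domain transfer to the target domain.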
The emergence of deep convolutional neural networks (CNNs) has greatly improved the quality of computer-aided support systems. However, due to the challenges of generating reliable and timely results, clinical adoption of computer-aided diagnosis systems is still limited. Recent informatics research indicates that machine learning algorithms need to be combined with sufficient clinical expertise in order to achieve optimal results.
In this research, we used deep learning algorithms to help diagnose four common cutaneous diseases based on dermoscopic images. In order to facilitate decision-making and improve the accuracy of our algorithm, we summarized classification/diagnosis scenarios based on domain expert knowledge and semantically represented them in a hierarchical structure.
Our algorithm achieved an accuracy of 87.25 ± 2.24% on our test dataset of 1067 images. The semantic summarization of diagnosis scenarios can help further improve the algorithm and facilitate future computer-aided decision support.
In this paper, we applied a deep neural network algorithm to classify dermoscopic images of four common skin diseases and achieved promising results. Based on these results, we further summarized the diagnosis/classification scenarios, which reflect the importance of combining human expertise with computer algorithms in dermatologic diagnosis.
With the widespread adoption of multidevice communication, such as telecommuting, screen content images (SCIs) have become ever more closely and frequently tied to our daily lives. For SCIs, the tasks of accurate visual quality assessment, high-efficiency compression, and suitable contrast enhancement have thus attracted increased attention. In particular, the quality evaluation of SCIs is important because of its ability to guide and optimize various processing systems. Hence, in this paper, we develop a new objective metric for the perceptual quality assessment of distorted SCIs. In contrast to the classical MSE, our method, which relies mainly on simple convolution operators, first highlights the structural degradations caused by different types of distortions and then detects salient areas where distortions usually attract more attention. A comparison of our algorithm with the most popular and state-of-the-art quality measures is performed on two new SCI databases (SIQAD and SCD). Extensive results verify the superiority and efficiency of the proposed IQA technique.
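To give the "simple convolution operators" idea a concrete shape, the sketch below highlights structural degradations as a gradient-similarity map computed with Prewitt kernels; the choice of kernels, the stabilizing constant `c`, and the similarity form are our assumptions, and the subsequent saliency-detection stage is omitted.

```python
import numpy as np
from scipy.ndimage import convolve

# Prewitt gradient kernels (an assumed choice of "simple convolution operators").
KX = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=np.float64) / 3.0
KY = KX.T

def gradient_magnitude(img):
    gx = convolve(img.astype(np.float64), KX)
    gy = convolve(img.astype(np.float64), KY)
    return np.hypot(gx, gy)

def structural_degradation(ref, dist, c=0.0026):
    """Per-pixel similarity of gradient maps; low values mark structural
    degradations that a saliency map could then re-weight."""
    g_r, g_d = gradient_magnitude(ref), gradient_magnitude(dist)
    return (2.0 * g_r * g_d + c) / (g_r ** 2 + g_d ** 2 + c)
```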
The human visual system exhibits multiscale characteristics when perceiving visual scenes. The hierarchical structures of an image are contained in its scale-space representation, in which the image is portrayed by a series of increasingly smoothed images. Inspired by this, this paper presents a no-reference and robust image sharpness evaluation (RISE) method that learns multiscale features extracted in both the spatial and spectral domains. For an input image, the scale space is first built. Sharpness-aware features are then extracted in the gradient domain and the singular value decomposition (SVD) domain, respectively. To account for the impact of viewing distance on image quality, the input image is also down-sampled several times, and DCT-domain entropies are calculated as additional quality features. Finally, all features are used to learn a support vector regression model for sharpness prediction. Extensive experiments are conducted on four synthetically blurred and two real blurred image databases. The experimental results demonstrate that the proposed RISE metric is superior to the relevant state-of-the-art methods for evaluating both synthetic and real blurring. Furthermore, the proposed metric is robust, exhibiting very good generalization ability.
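A hedged sketch of the kind of multiscale feature vector such a method could learn from is given below: gradient energy and singular-value concentration at each scale-space level, plus a DCT-coefficient entropy. The specific features, scales, and constants are illustrative simplifications (e.g., the viewing-distance down-sampling is collapsed into a single entropy term), not RISE's exact design.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.fftpack import dct

def sharpness_features(img, scales=(0.5, 1.0, 2.0)):
    """Extract simple multiscale sharpness-aware features:
    per-scale gradient energy, per-scale singular-value concentration,
    and a global DCT-coefficient entropy."""
    img = img.astype(np.float64)
    feats = []
    for s in scales:                              # scale space via Gaussian smoothing
        level = gaussian_filter(img, sigma=s)
        gy, gx = np.gradient(level)
        feats.append(float(np.mean(np.hypot(gx, gy))))   # gradient-domain feature
        sv = np.linalg.svd(level, compute_uv=False)
        feats.append(float(sv[:10].sum() / sv.sum()))    # SVD-domain feature
    coef = dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')
    p = np.abs(coef).ravel()
    p /= p.sum()
    feats.append(float(-(p * np.log2(p + 1e-12)).sum())) # DCT-domain entropy
    return np.array(feats)
```

Feature vectors of this kind would then be fed to a regressor such as sklearn.svm.SVR to learn the mapping to subjective sharpness scores.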
Minor amputations are performed in a large proportion of patients with diabetic foot ulcers (DFU), and early identification of the likely outcome facilitates medical decision-making and ultimately reduces major amputations and deaths. However, there are currently no clinical predictive tools for minor amputations in patients with DFU. We aim to establish a machine learning-based predictive model to quickly identify patients requiring minor amputation among newly admitted patients with DFU. Overall, 362 cases with University of Texas (UT) grade 3 DFU were screened from tertiary care hospitals in East China. We used the synthetic minority oversampling technique (SMOTE) to compensate for the class imbalance in the initial dataset. A univariable analysis identified nine variables for inclusion in the model: random blood glucose, years with diabetes, cardiovascular diseases, peripheral arterial diseases, DFU history, smoking history, albumin, creatinine, and C-reactive protein. Risk prediction models based on five machine learning algorithms, decision tree, random forest, logistic regression, support vector machine, and extreme gradient boosting (XGBoost), were then developed independently with these variables. After evaluation, XGBoost achieved the best performance (accuracy 0.814, precision 0.846, recall 0.767, F1-score 0.805, and AUC 0.881). For convenience, a web-based calculator based on our data and the XGBoost algorithm was established (https://dfuprediction.azurewebsites.net/). These findings imply that XGBoost can be used to develop a reliable prediction model for minor amputations in patients with UT3 DFU, and our online calculator will make it easier for clinicians to assess the risk of minor amputation and make proactive decisions.
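A minimal sketch of the described pipeline, assuming imbalanced-learn's SMOTE and the xgboost package; the CSV filename, column names, and hyperparameters below are hypothetical placeholders, not values taken from the study.

```python
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score
from xgboost import XGBClassifier

# The nine variables from the univariable analysis (column names are assumed).
FEATURES = ["random_glucose", "diabetes_years", "cardiovascular_disease",
            "peripheral_arterial_disease", "dfu_history", "smoking_history",
            "albumin", "creatinine", "c_reactive_protein"]

df = pd.read_csv("ut3_dfu_cohort.csv")              # hypothetical data file
X, y = df[FEATURES], df["minor_amputation"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class on the training split only.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
model = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")
model.fit(X_bal, y_bal)

pred = model.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```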