Blood-based preparations are used in clinical practice for the treatment of several eye disorders. The aim of this study is to analyze the effect of freeze-drying blood-based preparations on the levels of growth factors and on wound healing behavior in an in vitro model. Platelet-rich plasma (PRP) and serum (S) preparations from the same cord blood (CB) sample, prepared in both fresh-frozen (FF) and freeze-dried (FD, then reconstituted) forms, were analyzed for EGF and BDNF content (ELISA Quantikine kit). The human MIO-M1 glial cell line (Moorfields/Institute of Ophthalmology, London, UK) was incubated with the FF and FD products and evaluated for cell migration with scratch-induced wounding (IncuCyte S3, Essen BioScience), proliferation via cyclin A2 and D1 gene expression, and activation via vimentin and GFAP gene expression. The FF and FD forms showed similar concentrations of EGF and BDNF in both the S and PRP preparations. The wound healing assay showed no significant difference between the FF and FD forms for either S or PRP. Cell migration, proliferation, and activation likewise did not appear to change in the FD forms compared to the FF ones. Our study showed that reconstituted FD products maintained the growth factor concentrations and biological properties of the FF products and could be used as a functional treatment option.
Photoplethysmography (PPG) measured by smartphone has the potential to serve as a large-scale, non-invasive, and easy-to-use screening tool. Vascular aging is linked to increased arterial stiffness, which can be measured by PPG. We investigate the feasibility of using PPG to predict healthy vascular aging (HVA) with two approaches: machine learning (ML) and deep learning (DL). We performed data preprocessing, including detrending, demodulating, and denoising, on the raw PPG signals. For ML, ridge-penalized regression was applied to 38 features extracted from PPG, whereas for DL several convolutional neural networks (CNNs) were applied to the whole PPG signals as input. The analysis was conducted using the crowd-sourced Heart for Heart data. The prediction performance of ML using two features (AUC of 94.7%), the a wave of the second-derivative PPG and tpr, together with four covariates (sex, height, weight, and smoking), was similar to that of the best-performing CNN, a 12-layer ResNet (AUC of 95.3%). Without the heavy computational cost of DL, ML might be advantageous for finding potential biomarkers for HVA prediction. The whole workflow of the procedure is clearly described, and open software has been made available to facilitate replication of the results.
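The ML branch described above (ridge-penalized regression on extracted PPG features) can be sketched as follows. This is a minimal illustration, not the study's code: the feature matrix and labels are synthetic stand-ins, and the 38 real PPG features and covariates are not reproduced here.

```python
# Hedged sketch: L2-regularized (ridge) logistic regression on
# hand-crafted features, evaluated by AUC as in the text above.
# All data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 6))  # stand-ins for e.g. a-wave, tpr, covariates
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(penalty="l2", C=1.0).fit(X_tr, y_tr)  # ridge penalty
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.3f}")
```

The appeal of this route, as the abstract notes, is that the fitted coefficients directly expose which features act as potential biomarkers, at a fraction of the computational cost of a CNN.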
One of the main objectives of high-throughput genomics studies is to obtain a low-dimensional set of observables, a signature, for sample classification purposes (diagnosis, prognosis, stratification). Biological data, such as gene or protein expression, are commonly characterized by up/down-regulation behavior, for which discriminant-based methods can perform with high accuracy and easy interpretability. To get the most out of these methods, feature selection is even more critical, but it is known to be an NP-hard problem, and thus most feature selection approaches focus on one feature at a time (k-best, Sequential Feature Selection, recursive feature elimination). We propose DNetPRO, Discriminant Analysis with Network PROcessing, a supervised network-based signature identification method. This method implements a network-based heuristic to generate one or more signatures out of the best-performing feature pairs. The algorithm is easily scalable, allowing efficient computing for a high number of observables. We show applications on real high-throughput genomic datasets in which our method outperforms existing results, or is compatible with them but with a smaller number of selected features. Moreover, the geometrical simplicity of the resulting class-separation surfaces allows a clearer interpretation of the obtained signatures in comparison to nonlinear classification models.
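The pair-based idea above can be sketched as follows. This is not the authors' DNetPRO implementation: the data are synthetic, the 0.8 score threshold is an arbitrary illustration, and a simple union-find stands in for whatever network processing the method actually uses. It only shows the general shape: score every feature pair with a discriminant classifier, keep the strong pairs as edges, and read candidate signatures off the connected components.

```python
# Hedged sketch of a pair-scoring + network heuristic (synthetic data).
from itertools import combinations
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))
y = rng.integers(0, 2, size=60)
X[y == 1, 0] += 2.5          # features 0 and 1 carry the class signal
X[y == 1, 1] -= 2.5

# score every feature pair with cross-validated LDA accuracy
scores = {}
for i, j in combinations(range(X.shape[1]), 2):
    scores[(i, j)] = cross_val_score(
        LinearDiscriminantAnalysis(), X[:, [i, j]], y, cv=5).mean()

top_pairs = [p for p, s in scores.items() if s >= 0.8]  # illustrative cutoff

# connected components of the pair network = candidate signatures
parent = {}
def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a
for i, j in top_pairs:
    parent[find(i)] = find(j)
signatures = {}
for node in parent:
    signatures.setdefault(find(node), set()).add(node)
print(sorted(map(sorted, signatures.values())))
```

Scoring pairs rather than single features is what lets the heuristic capture discriminant information that only emerges from the joint behavior of two observables.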
Appropriate wound management shortens healing times and reduces management costs, benefiting the patient in physical terms and potentially reducing the healthcare system's economic burden. Among the instrumental measurement methods, image analysis of the wound area is becoming one of the cornerstones of chronic ulcer management. Our study aim is to develop a solid AI method based on a convolutional neural network that segments wounds efficiently, making the work of the physician more efficient and laying the foundations for the further development of more in-depth analyses of ulcer characteristics. In this work, we introduce a fully automated model for identifying and segmenting wound areas which can completely automate the clinical wound severity assessment starting from images acquired with smartphones. This method is based on active semi-supervised learning training of a convolutional neural network model. We tested the robustness of our method against a wide range of natural images acquired in different light conditions and image exposures. We collected the images using an ad hoc developed app and saved them in a database which we then used for AI training. We then tested different CNN architectures to develop a balanced model, which we finally validated with a public dataset. We used a dataset of images acquired during clinical practice and built an annotated wound image dataset consisting of 1564 ulcer images from 474 patients. Only a small part of this large amount of data was manually annotated by experts (ground truth). A multi-step, active, semi-supervised training procedure was applied to improve the segmentation performance of the model. The developed training strategy mimics a continuous learning approach and provides a viable alternative for further medical applications. We tested the efficiency of our model against other public datasets, proving its robustness.
The efficiency of the transfer learning showed that, after fewer than 50 epochs, the model achieved a stable DSC greater than 0.95. The proposed active semi-supervised learning strategy yields an efficient segmentation method, facilitating the work of clinicians by reducing the working time needed to obtain the measurements. Finally, the robustness of our pipeline confirms its possible usage in clinical practice as a reliable decision support system for clinicians.
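The Dice similarity coefficient (DSC) reported above is computed from binary masks as 2|A∩B| / (|A| + |B|). A minimal NumPy version on toy masks (the masks below are made-up examples, not study data):

```python
# Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """2 * |intersection| / (|pred| + |truth|); 1.0 if both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 16-px "ground truth"
b = np.zeros((8, 8), dtype=bool); b[3:7, 2:6] = True   # prediction shifted by 1 row
print(dice(a, b))  # 2*12 / (16+16) = 0.75
```

A DSC above 0.95, as reported for the model, corresponds to near-total overlap between predicted and expert-annotated wound areas.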
Quantitative computed tomography (QCT)-based in silico models have demonstrated improved accuracy in predicting hip fractures with respect to the current gold standard, areal bone mineral density. These models require that the femur bone be segmented as a first step. This task can be challenging and, in fact, is often almost fully manual, which is time-consuming, operator-dependent, and hard to reproduce. This work proposes a semi-automated procedure for femur bone segmentation from CT images, based on the bone and joint enhancement filter and graph-cut algorithms. The performance of the semi-automated procedure was assessed on 10 subjects through comparison with standard manual segmentation. Metrics based on the femur geometries and on the risk of fracture assessed in silico from the two segmentation procedures were considered. The average Hausdorff distance (0.03 ± 0.01 mm) and difference union ratio (0.06 ± 0.02) computed between the manual and semi-automated segmentations were significantly higher than those computed within the manual segmentations (0.01 ± 0.01 mm and 0.03 ± 0.02). Besides, a blind qualitative evaluation revealed that the semi-automated procedure was significantly superior (p < 0.001) to the manual one in terms of fidelity to the CT. As for the hip fracture risk assessed in silico starting from both segmentations, no significant difference emerged between the two (R² = 0.99). The proposed semi-automated segmentation procedure outperforms the manual one, shortening the segmentation time and providing a better segmentation. The method could be employed within CT-based in silico methodologies and to segment large volumes of images to train and test fully automated and supervised segmentation methods.
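The Hausdorff distance used above to compare segmentations can be illustrated with SciPy. This is a hedged sketch on made-up 2D point clouds (real femur surfaces are 3D meshes with many more points); the symmetric distance is the maximum of the two directed distances:

```python
# Symmetric Hausdorff distance between two segmentation contours,
# reduced here to toy 2D point sets.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

manual = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
auto   = np.array([[0.0, 0.1], [1.0, 0.0], [1.1, 1.0], [0.0, 1.0]])

d = max(directed_hausdorff(manual, auto)[0],   # worst manual -> auto mismatch
        directed_hausdorff(auto, manual)[0])   # worst auto -> manual mismatch
print(f"Hausdorff distance: {d:.2f}")  # 0.10
```

Because it reports the single worst-case point mismatch, the Hausdorff distance is a stringent metric: sub-millimetre averages like those reported above indicate very close surface agreement.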
Current high-throughput technologies (whole-genome sequencing, RNA-Seq, ChIP-Seq, etc.) generate huge amounts of data, and their usage becomes more widespread with each passing year. Complex analysis pipelines involving several computationally intensive steps have to be applied to an increasing number of samples. Workflow management systems allow parallelization and more efficient usage of computational power. Nevertheless, this mostly happens by assigning the available cores to the pipeline of a single sample, or of a few samples, at a time. We refer to this approach as the naive parallel strategy (NPS). Here, we discuss an alternative approach, which we refer to as the concurrent execution strategy (CES), which equally distributes the available processors across every sample's pipeline.
Theoretically, we show that the CES results, under loose conditions, in a substantial speedup, with an ideal gain ranging from 1 to the number of samples. We also observe that the CES yields even faster executions, since parallelly computable tasks scale sub-linearly. Practically, we tested both strategies on a whole-exome sequencing pipeline applied to three publicly available matched tumour-normal sample pairs of gastrointestinal stromal tumour. The CES achieved speedups in latency of up to 2-2.4x compared to the NPS.
Our results hint that, if resource distribution is further tailored to fit specific situations, an even greater gain in the performance of multiple-sample pipeline execution could be achieved. For this to be feasible, benchmarking of the tools included in the pipeline would be necessary. In our opinion, these benchmarks should be consistently performed by the tools' developers. Finally, these results suggest that concurrent strategies might also lead to energy and cost savings by making the usage of low-power machine clusters feasible.
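The NPS-versus-CES argument can be made concrete with a toy latency model. The power-law scaling law and the numbers below are illustrative assumptions of ours, not the paper's: each sample's pipeline takes T seconds on one core and speeds up sub-linearly with cores as c**alpha (alpha < 1); NPS gives all P cores to one sample at a time, CES gives P/S cores to each of S samples simultaneously.

```python
# Toy latency model comparing the two scheduling strategies.
T, P, S, alpha = 100.0, 12, 3, 0.7   # hypothetical: base time, cores, samples, scaling

nps_latency = S * T / P**alpha        # NPS: samples run one after another, P cores each
ces_latency = T / (P / S)**alpha      # CES: all samples run concurrently, P/S cores each
gain = nps_latency / ces_latency      # simplifies to S**(1 - alpha)
print(f"NPS: {nps_latency:.1f}s  CES: {ces_latency:.1f}s  gain: {gain:.2f}x")
```

Under this model the gain is S**(1 - alpha): it equals 1 when per-sample scaling is perfect (alpha = 1) and approaches S as scaling degrades (alpha towards 0), matching the 1-to-number-of-samples ideal range stated above.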
In this work, we propose an implementation of the Bienenstock-Cooper-Munro (BCM) model, obtained by combining the classical framework with modern deep learning methodologies. The BCM model remains one of the most promising approaches to modeling the synaptic plasticity of neurons, but its application has remained mainly confined to neuroscience simulations, with few applications in data science.
To improve the convergence efficiency of the BCM model, we combine the original plasticity rule with the optimization tools of modern deep learning. By numerical simulation on standard benchmark datasets, we prove the efficiency of the BCM model in learning, memorization capacity, and feature extraction.
In all the numerical simulations, the visualization of neuronal synaptic weights confirms the memorization of human-interpretable subsets of patterns. We numerically prove that the selectivity obtained by BCM neurons is indicative of an internal feature extraction procedure, useful for pattern clustering and classification. The introduction of competitiveness between neurons in the same BCM network allows the model to modulate its memorization capacity and, consequently, its selectivity.
The proposed improvements make the BCM model a suitable alternative to standard machine learning techniques for both feature selection and classification tasks.
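The classical plasticity rule at the core of the model can be sketched in a few lines of NumPy. This is the textbook BCM rule with a rectified linear neuron, not the authors' deep-learning variant; all parameters and patterns are illustrative choices: the weight change is lr * y * (y - theta) * x, where the modification threshold theta slides with the running average of y**2.

```python
# Minimal sketch of classical BCM plasticity for a single neuron.
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(scale=0.1, size=4)          # synaptic weights
theta, lr, decay = 1.0, 0.01, 0.1          # threshold, learning rate, theta rate

patterns = rng.normal(size=(2, 4))         # two hypothetical input patterns
for _ in range(2000):
    x = patterns[rng.integers(2)]
    y = max(0.0, float(w @ x))             # rectified linear neuron output
    w += lr * y * (y - theta) * x          # BCM weight update
    theta += decay * (y * y - theta)       # sliding threshold tracking E[y^2]

print(np.round(patterns @ w, 2))           # responses: typically selective to one pattern
```

The sliding threshold is what drives the selectivity discussed above: responses below theta are depressed and responses above it are potentiated, so the neuron tends to settle on a strong response to one pattern and a weak response to the others.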
The ability to detect and characterize bacteria within a biological sample is crucial for the monitoring of infections and epidemics, as well as for the study of human health and its relationship with commensal microorganisms. To this aim, a commonly used technique is 16S rRNA gene targeted sequencing. PCR-amplified 16S sequences derived from the sample of interest are usually clustered into so-called Operational Taxonomic Units (OTUs) based on pairwise similarities. Then, representative OTU sequences are compared with reference (human-made) databases to derive their phylogeny and taxonomic classification. Here, we propose a new reference-free approach to define the phylogenetic distance between bacteria based on protein domains, which are the evolving units of proteins. We extract the protein domain profiles of 3368 bacterial genomes and use an ecological approach to model their Relative Species Abundance distribution. Based on the model parameters, we then derive a new measure of phylogenetic distance. Finally, we show that this model-based distance is capable of detecting differences between bacteria in cases in which the 16S rRNA-based method fails, providing a possibly complementary approach that is particularly promising for the analysis of bacterial populations measured by shotgun sequencing.
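The general idea of comparing genomes through their protein-domain profiles can be sketched as follows. This is a loudly simplified stand-in, not the paper's model-based distance: instead of fitting an ecological Relative Species Abundance model, it compares two hypothetical domain-abundance profiles with the Jensen-Shannon distance, and the domain IDs and counts are invented for illustration.

```python
# Simplified stand-in: distance between two genomes' protein-domain
# abundance profiles (all IDs and counts below are hypothetical).
import numpy as np
from scipy.spatial.distance import jensenshannon

domains  = ["PF00005", "PF00072", "PF00106", "PF00440"]   # example domain IDs
genome_a = np.array([30, 12, 5, 1], dtype=float)           # domain counts, genome A
genome_b = np.array([28, 10, 0, 6], dtype=float)           # domain counts, genome B

d = jensenshannon(genome_a / genome_a.sum(), genome_b / genome_b.sum())
print(f"JS distance: {d:.3f}")
```

The key property shared with the approach above is that the comparison is reference-free: it needs only the domain content of the two genomes, not a curated 16S database.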
Pupillometry is a promising technique for the potential diagnosis of several neurological pathologies. However, its potential is not yet fully explored, especially for prediction purposes and results interpretation. In this work, we analyzed 100 pupillometric curves obtained from 12 subjects, applying both advanced signal processing techniques and physics methods to extract typically collected features as well as newly proposed ones. We used machine learning techniques for the classification of Optic Neuritis (ON) vs. healthy subjects, controlling for overfitting and ranking the features by random permutation according to their importance in prediction. All the extracted features except one turned out to have significant importance for prediction, with an average accuracy of 76%, showing the complexity of the processes involved in the pupillary light response. Furthermore, we provide a possible neurological interpretation of this new set of pupillometry features in relation to ON vs. healthy classification.
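The feature-ranking-by-random-permutation step above can be sketched with scikit-learn's permutation importance. This is a hedged illustration, not the study's pipeline: the classifier choice, the data, and the feature names are all made up, and only the ranking mechanism (shuffle one feature, measure the score drop) matches the text.

```python
# Hedged sketch: rank synthetic "pupillometry features" by permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
names = ["latency", "amplitude", "recovery_slope", "baseline_diameter"]
X = rng.normal(size=(200, 4))
y = (X[:, 1] + 0.5 * rng.normal(size=200) > 0).astype(int)  # "amplitude" drives class

clf = RandomForestClassifier(random_state=0).fit(X, y)
imp = permutation_importance(clf, X, y, n_repeats=20, random_state=0)
ranking = sorted(zip(names, imp.importances_mean), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name:18s} {score:.3f}")
```

Shuffling one feature at a time and measuring the resulting accuracy drop gives a model-agnostic importance score, which is what makes a finding like "all features except one matter" interpretable.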