The recovery of morphologically accurate anatomical images from deformed ones is challenging in ultrasound (US) image acquisition, but crucial to accurate and consistent diagnosis, particularly in the emerging field of computer-assisted diagnosis. This article presents a novel physics-aware deformation correction approach based on a coarse-to-fine, multi-scale deep neural network (DefCor-Net). To achieve pixel-wise performance, DefCor-Net incorporates biomedical knowledge by estimating pixel-wise stiffness online using a U-shaped feature extractor. The deformation field is then computed using polynomial regression that integrates the force measured at the US probe. Based on real-time estimation of pixel-wise tissue properties, the learning-based approach enables anatomy-aware deformation correction. To demonstrate the effectiveness of the proposed DefCor-Net, images recorded at multiple locations on the forearms and upper arms of six volunteers are used to train and validate it. The results demonstrate that DefCor-Net can significantly improve the accuracy of deformation correction in recovering the original geometry (Dice coefficient: from 14.3±20.9 to 82.6±12.1 at a force of 6 N). Code: https://github.com/KarolineZhy/DefCorNet.
•Force-induced ultrasound image deformation correction.•Dense displacement field estimation from ultrasound imaging.•Deep learning based deformation correction, pixel-wise deformation correction.•Coarse-to-fine neural network structure.•Robotic ultrasound imaging, computer-assisted diagnosis, stiffness estimation.
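The Dice coefficient used above to score geometry recovery can be sketched in a few lines; the masks below are toy stand-ins for anatomy segmentations, not DefCor-Net outputs.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two overlapping square masks on a 10x10 grid.
gt = np.zeros((10, 10), dtype=bool)
gt[2:8, 2:8] = True          # "original" geometry
pred = np.zeros((10, 10), dtype=bool)
pred[3:9, 3:9] = True        # "corrected" geometry
print(round(dice_coefficient(gt, pred), 3))
```

A perfect correction drives the coefficient to 1.0, which is why the reported jump from 14.3 to 82.6 (on a 0–100 scale) indicates a large recovery of the undeformed shape.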
Metallic wire rope is a core component in construction, and vision-based nondestructive inspection methods for condition monitoring of metallic wire ropes have progressed in recent years. The lay length of a metallic wire rope carries information about its stress state and health state, which is of significant research value for ensuring force balance in multi-rope structures and the safety of buildings. This article summarizes the shortcomings of existing lay length measurement methods and proposes a method for measuring the lay length of metallic wire ropes via deep learning and a phase correlation image analysis algorithm. The strand segmentation module applies the Mask R-CNN network to segment the strands into Voronoi-like images with a high signal-to-noise ratio (SNR). Phase correlation analysis is then carried out on the resulting Voronoi diagram to calculate the lay length of the wire rope. The experimental results show that, compared with the traditional manual method of measuring lay length, the proposed method greatly shortens the time required for lay length detection, to only 0.1045 s. Its average detection error is 1.0672 mm, far lower than that of the traditional manual measurement method and of directly performing phase correlation image analysis on the wire rope. The reliability of the proposed method was validated by varying the lay length of metallic wire ropes using a tension-slack experimental device.
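The phase correlation step at the heart of this measurement can be sketched as follows. The 1-D "strand profile" and the 7-pixel shift are illustrative stand-ins for the paper's Voronoi-diagram images; only the normalized cross-power-spectrum idea is taken from the abstract.

```python
import numpy as np

def phase_correlation_shift(sig_a, sig_b):
    """Estimate the cyclic shift of sig_b relative to sig_a via the
    normalized cross-power spectrum (integer-pixel resolution)."""
    fa = np.fft.fft(sig_a)
    fb = np.fft.fft(sig_b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12      # keep only the phase
    corr = np.fft.ifft(cross).real      # impulse at the shift
    return int(np.argmax(corr))

# Toy periodic profile standing in for a wire-rope scanline.
rng = np.random.default_rng(0)
x = np.arange(256)
profile = np.sin(2 * np.pi * x / 32) + 0.1 * rng.standard_normal(256)
shifted = np.roll(profile, 7)           # simulate axial displacement
print(phase_correlation_shift(profile, shifted))
```

In the actual method, the displacement recovered between segmented strand images, combined with the known strand geometry, yields the lay length.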
Plasmodium sporozoites are the crescent-shaped forms of malaria parasites injected from the salivary glands of mosquitoes into the skin of their vertebrate hosts. To proceed towards the liver of the host, sporozoites individually migrate at very high speeds and with relatively few adhesive interactions. By contrast, in the mosquito, sporozoites often exist as collectives. Here we study their motion in collectives extracted from salivary glands, a situation in which dozens of sporozoites form rotating vortices. Complementing our experiments with quantitative image analysis and agent-based computer simulations, we find that, owing to their mechanical flexibility, single sporozoites are sorted according to their curvatures and speeds, and that these effects increase with vortex size. We also find that the vortices undergo oscillatory breathing because the thrust from the motility force of the single sporozoites can be stored as elastic energy. Our findings suggest that the malaria parasite has evolved flexibility as an essential means of adapting to its mechanical environment and ensuring efficient transmission. In general, our work demonstrates how single-particle shape and mechanics can determine the dynamics of large, active collectives. The collective motion of malaria parasites is thus analyzed as a model system for active elastic matter and suggests that mechanical flexibility is favourable for parasite transmission.
One of the main challenges of underwater archaeology is to develop non-invasive research of heritage sites in order to enable their protection for future societies. This study explores, identifies and classifies archaeological objects in a shallow lake using underwater acoustics. We addressed this challenge by developing an innovative, object-based, fuzzy-logic classification of nine archaeological object categories based on multibeam echosounder bathymetry, 13 secondary features derived from the bathymetry and 106 underwater diving prospections. We achieved an 86% correlation with ground-truth samples and 49% overall accuracy. The unique and repeatable workflow developed in this study can be applied to other case studies of underwater archaeology around the world.
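The core of an object-based fuzzy-logic classifier can be sketched with triangular membership functions: each object feature is mapped to a degree of membership per category, and the category with the highest membership wins. The feature (object height above the lakebed), the category names and the breakpoints below are all hypothetical, not taken from the study.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical membership functions over object height above the lakebed (m).
classes = {
    "sediment":  (-1.0, 0.0, 0.2),
    "log":       (0.1, 0.3, 0.6),
    "structure": (0.4, 1.0, 2.5),
}

def classify(height):
    """Assign the class whose membership function scores highest."""
    scores = {name: triangular(height, *abc) for name, abc in classes.items()}
    return max(scores, key=scores.get)

print(classify(0.25))
```

A real workflow would combine many such features (e.g. the 13 secondary bathymetry derivatives) with fuzzy AND/OR operators before defuzzification; this sketch shows only the single-feature case.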
In recent years, the biochip sensor has gained significant attention for its extensive utilization in drug screening, biochemical pathway profiling, and genetic sequencing. Barcode-array biochips have emerged as a promising biochip sensor technology due to their high throughput and rapid detection capabilities. However, to achieve optimal analysis results, it is crucial to effectively integrate each step of biochip sensor analysis. Unfortunately, commercial scanner systems suffer from redundant manual processing and low accuracy. To that end, this article presents an automatic barcode-array biochip sensor analysis system that enhances efficiency and accuracy. The proposed system introduces a novel barcode-array biochip sensor design that facilitates data analysis. Additionally, a confocal laser scanner with a lightweight scanning strategy is employed to improve scanning efficiency. Furthermore, a geometry-guided learning (GGL) method ensures accurate barcode segmentation, which improves the cooperation between barcode-array biochip sensor analysis and biochip fabrication. The GGL approach incorporates prior region prompts and specific scanning strategies, achieving an F1-score of 86.17 for barcode segmentation and overcoming the limitations faced by existing segmentation algorithms under extreme conditions. Moreover, the lightweight scanning strategy improves data acquisition efficiency by 53.2% while saving 67.5% of disk space. The linear regression R² of the fluorescence emission intensity exceeds 0.99, indicating that the proposed system is suitable for both research and clinical diagnostic applications. Notably, this article represents the first integrated solution for barcode-array biochip sensors that effectively addresses the challenges encountered at each step.
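The R² figure quoted for fluorescence emission intensity is the standard coefficient of determination of a linear fit. A minimal sketch, with a hypothetical dilution series standing in for the biochip measurements:

```python
import numpy as np

def r_squared(y, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y = np.asarray(y, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical dilution series: fluorescence intensity vs. concentration.
conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
intensity = np.array([10.2, 19.8, 40.5, 79.9, 160.3])

slope, intercept = np.polyfit(conc, intensity, 1)  # least-squares line
fit = slope * conc + intercept
print(r_squared(intensity, fit))
```

An R² above 0.99, as reported, means the fitted line explains over 99% of the variance in the measured intensities, i.e. the sensor response is effectively linear over the tested range.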
Methods for quantifying vegetative cover across landscapes have, until recently, been limited to ground-based surveys or remote sensing via satellites or aircraft, both of which can limit the spatial scale of the resulting data. Unmanned Aircraft Systems (UAS) can efficiently collect high-resolution, sub-decimeter imagery of landscapes; geographic object-based image analysis (GEOBIA) of the collected imagery can then be used to estimate vegetation cover. To date, few researchers have utilized open-source programs for GEOBIA. We developed GEOBIA methods in the open-source program R to analyze visible-spectrum UAS imagery from four sites in the Chihuahuan Desert of North America. These desert grasslands are difficult to quantify due to the patchiness of ground cover at small scales (e.g. <1 m) and the rarity of shrubs on the landscape. We used site-specific training data and multiple segmentation parameters to create vegetative- and shrub-cover data layers at a 15 cm resolution. We report overall accuracies of 77.2%–88.8% for vegetation classification and 95.7%–99.2% for shrub classification. Our work is among the first to use open-source GEOBIA in grasslands and provides objective, reproducible data layers of desert vegetation, particularly shrubs, at the spatial scale necessary to inform management and conservation of Chihuahuan Desert grassland communities.
•It is possible to use open-source, object-based methods to classify desert vegetation using Unmanned Aircraft Systems.•Site-specific training data and multiple segmentation parameters were used to create data layers at a 15 cm resolution.•Overall accuracies ranged 77.2%–88.8% for vegetation classification and 95.7%–99.2% for shrub classification.
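The overall accuracies reported above are computed from a class-by-class confusion matrix: correctly classified samples (the diagonal) divided by all samples. A minimal sketch with a hypothetical 3-class matrix (the classes and counts are illustrative, not the study's data):

```python
import numpy as np

def overall_accuracy(confusion: np.ndarray) -> float:
    """Overall accuracy = trace / total for a confusion matrix
    (rows = reference class, columns = predicted class)."""
    return np.trace(confusion) / confusion.sum()

# Hypothetical counts for bare ground / herbaceous / shrub.
cm = np.array([
    [90,  8,  2],
    [10, 85,  5],
    [ 1,  4, 95],
])
print(round(overall_accuracy(cm), 3))
```

Per-class producer's and user's accuracies (row-wise and column-wise ratios) are usually reported alongside the overall figure, since a rare class like shrubs can be classified well or poorly without much effect on the overall number.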
Chronic wounds contribute to a significant healthcare and economic burden worldwide. Wound assessment remains challenging given its complex and dynamic nature. The use of artificial intelligence (AI) and machine learning methods in wound analysis is promising. Explainable modelling can help its integration into, and acceptance by, healthcare systems. We aim to develop an explainable AI model for analysing vascular wound images in an Asian population. Two thousand nine hundred and fifty-seven wound images from a vascular wound image registry at a tertiary institution in Singapore were utilized. The dataset was split into training, validation and test sets. Wound images were classified into four types (neuroischaemic ulcer, NIU; surgical site infection, SSI; venous leg ulcer, VLU; pressure ulcer, PU), measured with automatic estimation of width, length and depth, and segmented into 18 wound and peri-wound features. Data pre-processing was performed using oversampling and augmentation techniques. Convolutional and deep learning models were utilized for model development. The model was evaluated with accuracy, F1 score and receiver operating characteristic (ROC) curves. Explainability methods were used to interpret the AI's decision reasoning. A web browser application was developed to demonstrate the results of the wound AI model with explainability. After development, the model was tested on an additional 15 476 unlabelled images to evaluate its effectiveness. After development on the training and validation datasets, the model's performance on unseen labelled images in the test set achieved an AUROC of 0.99 for wound classification with a mean accuracy of 95.9%. For wound measurements, the model achieved an AUROC of 0.97 with a mean accuracy of 85.0% for depth classification, and an AUROC of 0.92 with a mean accuracy of 87.1% for width and length determination. For wound segmentation, an AUROC of 0.95 and a mean accuracy of 87.8% were achieved.
Testing on unlabelled images, the model's confidence score for wound classification was 82.8% with an explainability score of 60.6%. The confidence score was 87.6% for depth classification with a 68.0% explainability score, while width and length measurement obtained a 93.0% accuracy score with 76.6% explainability. The confidence score for wound segmentation was 83.9%, while explainability was 72.1%. Using explainable AI models, we have developed an algorithm and application for the analysis of vascular wound images from an Asian population with accuracy and explainability. With further development, it can be utilized as a clinical decision support system and integrated into existing healthcare electronic systems.
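The AUROC values reported in this study can be computed without plotting the ROC curve, via the rank-sum (Mann-Whitney U) formulation: the probability that a random positive case is scored above a random negative one. A minimal sketch, assuming untied scores (tie handling via average ranks is omitted); the labels and scores are toy data:

```python
import numpy as np

def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney U) formulation.
    Assumes no ties among the scores."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # rank 1 = lowest score
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    u = ranks[pos].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]
print(round(auroc(labels, scores), 3))
```

An AUROC of 0.99, as reported for wound classification, means the model ranks almost every diseased-class image above almost every other-class image.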
Diabetic retinopathy (DR) detection is a critical retinal image analysis task in the context of early blindness prevention. Unfortunately, in order to train a model to accurately detect DR based on the presence of different retinal lesions, a dataset with medical experts' annotations at the pixel level is typically needed. In this paper, a new methodology based on the multiple instance learning (MIL) framework is developed to overcome this necessity by leveraging the implicit information present in annotations made at the image level. Contrary to previous MIL-based DR detection systems, the main contribution of the proposed technique is the joint optimization of the instance encoding and image classification stages. In this way, more useful mid-level representations of pathological images can be obtained. The explainability of the model's decisions is further enhanced by means of a new loss function enforcing appropriate instance and mid-level representations. The proposed technique achieves comparable or better results than other recently proposed methods, with 90% area under the receiver operating characteristic curve (AUC) on Messidor, 93% AUC on DR1, and 96% AUC on DR2, while improving the interpretability of the produced decisions.
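The MIL assumption behind such image-level training can be sketched with the classic max-pooling variant: each image is a bag of patch (instance) descriptors, each instance is encoded to a score, and the image is positive if any instance is. Everything below, the 2-D descriptors, the "lesion direction" w, and the bags, is a toy illustration of the framework, not the paper's learned encoder or loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def bag_score(instances, w, b):
    """Encode each instance (patch descriptor) with a linear scorer,
    then max-pool to an image-level score: an image counts as
    pathological if any single patch looks pathological."""
    instance_scores = instances @ w + b   # instance encoding stage
    return instance_scores.max()          # MIL pooling stage

# Toy setup: 2-D patch descriptors and a hypothetical scoring direction.
w = np.array([1.0, -0.5])
b = 0.0
healthy_bag = rng.normal(loc=[-1.0, 0.0], scale=0.2, size=(6, 2))
diseased_bag = np.vstack([healthy_bag[:5], [[2.0, 0.0]]])  # one lesion patch

print(bag_score(diseased_bag, w, b) > bag_score(healthy_bag, w, b))
```

The paper's contribution is to train the instance encoder and the image classifier jointly rather than fixing the encoder first, which this static sketch cannot show; the pooling structure, however, is the same.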
•Features were extracted from froth images with convolutional neural networks.•AlexNet, VGG16 & ResNet34 networks were applied directly to the froth images.•VGG16 and ResNet34 were partially retrained.•The retrained networks yielded markedly better results than previous algorithms.•The approach is generally applicable to sensors used in mineral processing.
Computer vision systems designed for flotation froth image analysis are well established in industry, where their ability to measure froth flow velocities and stability is used to control recovery. However, the use of froth image analysis to estimate the concentrations of mineral species in the froth phase is less well established, and the reliability of these algorithms depends on the quality of the features that can be extracted from the froth images. Over less than a decade, convolutional neural networks have significantly pushed the boundaries of image recognition in a range of technical applications, notably cancer diagnosis, face recognition, remote sensing, as well as applications in the food industry. With the exception of the exploration geosciences, they have yet to make meaningful inroads in the mineral processing industries. In this study, the use of three pretrained neural network architectures, namely AlexNet, VGG16 and ResNet34, to estimate froth grades from industrial image data is considered. In its pretrained format, AlexNet outperformed previously proposed methods by a significant margin. This margin could be increased markedly via partial retraining of the VGG16 and ResNet34 networks.
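The "pretrained features plus shallow regressor" idea can be illustrated without a deep learning framework: a fixed (frozen) nonlinear projection stands in for the pretrained convolutional backbone, and only a small head is fitted to the grade labels. The synthetic "froth images", whose mean brightness encodes the true grade, and the random frozen weights are purely illustrative, not AlexNet/VGG16/ResNet34 features:

```python
import numpy as np

rng = np.random.default_rng(42)

def frozen_backbone(images, w_fixed):
    """Stand-in for a frozen pretrained CNN: a fixed ReLU projection
    of flattened images into a feature space (illustrative only)."""
    return np.maximum(images.reshape(len(images), -1) @ w_fixed, 0.0)

# Toy "froth images" whose mean brightness encodes the true grade.
n, side = 200, 8
grades = rng.uniform(0.0, 1.0, n)
images = grades[:, None, None] + 0.05 * rng.standard_normal((n, side, side))

w_fixed = rng.standard_normal((side * side, 16)) / side  # frozen weights
feats = frozen_backbone(images, w_fixed)

# Only the shallow head is "trained" (least squares), mirroring the
# fixed-feature setting; partial retraining would also update w_fixed.
design = np.c_[feats, np.ones(n)]
head, *_ = np.linalg.lstsq(design, grades, rcond=None)
pred = design @ head
print(np.corrcoef(pred, grades)[0, 1])
```

In the study itself, partial retraining of the later VGG16/ResNet34 layers adapts the features to froth imagery, which is what lifts performance beyond the fixed-feature AlexNet baseline.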
► We investigate gender recognition on real-life faces. ► We use the Labeled Faces in the Wild database in our study. ► Discriminative LBP features are learned to describe faces. ► A performance of 94.81% is obtained by applying an SVM with the learned features.
Gender recognition is one of the fundamental face analysis tasks. Most existing studies have focused on face images acquired under controlled conditions. However, real-world applications require gender classification on real-life faces, which is much more challenging due to significant appearance variations in unconstrained scenarios. In this paper, we investigate gender recognition on real-life faces using the recently built Labeled Faces in the Wild (LFW) database. Local Binary Patterns (LBP) are employed to describe faces, and AdaBoost is used to select the discriminative LBP features. We obtain a performance of 94.81% by applying a Support Vector Machine (SVM) with the boosted LBP features. The public database used in this study makes future benchmarking and evaluation possible.
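The basic LBP operator underlying this pipeline compares each pixel with its eight neighbours and packs the comparisons into an 8-bit code. A minimal sketch (a single fixed neighbour ordering; the uniform-pattern and multi-radius variants used in practice are omitted):

```python
import numpy as np

def lbp_codes(img: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour LBP: each interior pixel becomes an 8-bit code,
    one bit per neighbour whose value is >= the centre pixel."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbours):
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

# A flat patch: every neighbour equals the centre, so all 8 bits are set.
flat = np.full((3, 3), 7, dtype=np.uint8)
print(lbp_codes(flat))
```

In the face pipeline, histograms of such codes over image sub-regions form the feature vector; AdaBoost then selects the most discriminative regions before the SVM is trained.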