Background: This article introduces a novel index aimed at uncovering specific brain connectivity patterns associated with Alzheimer's disease (AD), defined according to neuropsychological patterns. Methods: Electroencephalographic (EEG) recordings of 370 people, comprising 170 healthy subjects and 200 mild-AD patients, were acquired in different clinical centres using different acquisition equipment, with harmonised acquisition settings. The study employed a newly derived Small World (SW) index, SWcomb, a comprehensive metric designed to integrate the seven SW parameters computed across the typical EEG frequency bands. The objective is to create a unified index that effectively distinguishes individuals with a neuropsychological pattern compatible with AD from healthy ones. Results: The healthy group exhibited the lowest SWcomb values, while the AD group displayed the highest. Conclusions: These findings suggest that the SWcomb index represents an easy-to-perform, low-cost, widely available and non-invasive biomarker for distinguishing between healthy individuals and AD patients.
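The abstract does not give the SWcomb formula, but the per-band small-world index underlying it can be sketched. Below is a minimal pure-NumPy sketch assuming the classic sigma formulation, sigma = (C/C_rand)/(L/L_rand), against density-matched random graphs; the `sw_comb` averaging is a hypothetical placeholder for the paper's actual seven-parameter combination.

```python
import numpy as np

def clustering_coeff(A):
    """Mean local clustering coefficient of an undirected 0/1 adjacency matrix."""
    deg = A.sum(axis=1)
    tri = np.diag(A @ A @ A) / 2.0            # triangles through each node
    pairs = deg * (deg - 1) / 2.0             # pairs of neighbours per node
    c = np.divide(tri, pairs, out=np.zeros_like(tri, dtype=float), where=pairs > 0)
    return c.mean()

def char_path_length(A):
    """Characteristic path length via BFS; inf if the graph is disconnected."""
    n = len(A)
    total = 0
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in np.flatnonzero(A[u]):
                    if dist[v] < 0:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        if (dist < 0).any():
            return np.inf
        total += dist.sum()
    return total / (n * (n - 1))

def ring_lattice(n, k):
    """Ring lattice: each node linked to its k nearest neighbours (k even)."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            A[i, (i + j) % n] = A[(i + j) % n, i] = 1
    return A

def small_world_sigma(A, n_rand=5, seed=0):
    """sigma = (C / C_rand) / (L / L_rand) vs density-matched random graphs."""
    rng = np.random.default_rng(seed)
    n = len(A)
    p = A.sum() / (n * (n - 1))
    C, L = clustering_coeff(A), char_path_length(A)
    Cr, Lr = [], []
    while len(Cr) < n_rand:
        R = np.triu((rng.random((n, n)) < p).astype(int), 1)
        R = R + R.T
        l = char_path_length(R)
        if np.isfinite(l):                     # keep only connected samples
            Cr.append(clustering_coeff(R))
            Lr.append(l)
    return (C / np.mean(Cr)) / (L / np.mean(Lr))

def sw_comb(band_sigmas):
    """Hypothetical combination across frequency bands: a plain average.
    The paper's SWcomb integrates seven SW parameters; its formula is not
    given in the abstract."""
    return float(np.mean(band_sigmas))
```

A small-world network (high clustering, short paths) yields sigma above 1, which is the regime the per-band indices are meant to capture.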
As emerging cellulosic nanomaterials, microfibrillated cellulose (MFC) and nanofibrillated cellulose (NFC) have shown enormous potential in the forest products industry. The forest products industry and academia are working together to realise the possibilities of commercialising MFC and NFC. However, the processing, characterisation and material properties of nanocellulose still need to be improved in order to realise its full potential. The annual number of research publications and patents on nanocellulose manufacturing, properties and applications is now in the thousands, so review articles surveying this rapidly growing body of work on cellulose nanomaterials are of the utmost importance. This review examines the past and current situation of wood-based MFC and NFC with respect to their processing and their applications in papermaking.
Significance: The depolarization of circularly polarized light (CPL) caused by scattering in turbid media reveals structural information about the dispersed particles, such as their size, density, and distribution, which is useful for investigating the state of biological tissue. However, the correlation between depolarization strength and tissue parameters is unclear. Aim: We aimed to examine the generalized correlations of depolarization strength with particle size and wavelength, yielding depolarization diagrams. Approach: The correlation between depolarization intensity and size parameter was examined for single and multiple scattering using the Monte Carlo simulation method. Expanding the wavelength range allows us to obtain depolarization distribution diagrams as functions of wavelength and particle diameter for reflection and transmission geometries. Results: CPL suffers intensive depolarization in single scattering against particles of various sizes specific to its wavelength, which becomes more noticeable in the multiple-scattering regime. Conclusions: Depolarization diagrams with particle size and wavelength as independent variables were obtained, which are particularly helpful for investigating the feasibility of various particle-monitoring methods. Based on the obtained diagrams, several applications are proposed, including blood cell monitoring, early embryogenesis, and antigen-antibody interactions.
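The axes of such depolarization diagrams are organized by the Mie size parameter x = pi * n * d / lambda. A minimal sketch (names and the water-like default refractive index are illustrative assumptions) builds that coordinate grid; the depolarization value at each grid point would come from a Mie/Monte Carlo solver, which is beyond this sketch.

```python
import numpy as np

def size_parameter(diameter_nm, wavelength_nm, n_medium=1.33):
    """Mie size parameter x = pi * n_medium * d / lambda (vacuum wavelength)."""
    return np.pi * n_medium * diameter_nm / wavelength_nm

def diagram_grid(diameters_nm, wavelengths_nm, n_medium=1.33):
    """Coordinate grid of a depolarization diagram: the size parameter at
    every (diameter, wavelength) pair."""
    d, wl = np.meshgrid(diameters_nm, wavelengths_nm, indexing="ij")
    return size_parameter(d, wl, n_medium)
```

The grid makes the abstract's point concrete: a fixed particle diameter maps to different size parameters as the wavelength is swept, which is why widening the wavelength range fills out the diagram.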
Significance: Information about the spatial organization of fibers within a nerve is crucial to our understanding of nerve anatomy and its response to neuromodulation therapies. A serial block-face microscopy method, three-dimensional microscopy with ultraviolet surface excitation (3D-MUSE), has been developed to image nerves over extended depths ex vivo. To routinely visualize and track nerve fibers in these datasets, a dedicated and customizable software tool is required. Aim: Our objective was to develop custom software that includes image processing and visualization methods to perform microscopic tractography along the length of a peripheral nerve sample. Approach: We modified common computer vision algorithms (optic flow and structure tensor) to track groups of peripheral nerve fibers along the length of the nerve. Interactive streamline visualization and manual editing tools are provided. Optionally, deep learning segmentation of fascicles (fiber bundles) can be applied to constrain the tracts from inadvertently crossing into the epineurium. As an example, we performed tractography on vagus and tibial nerve datasets and assessed accuracy by comparing the resulting nerve tracts with segmentations of fascicles as they split and merge with each other in the nerve sample stack. Results: We found that a normalized Dice overlap (Dice_norm) metric had a mean value above 0.75 across several millimeters along the nerve. We also found that the tractograms were robust to changes in certain image properties (e.g., downsampling in-plane and out-of-plane), which resulted in only a 2% to 9% change in the mean Dice_norm values. In a vagus nerve sample, tractography allowed us to readily identify that subsets of fibers from four distinct fascicles merge into a single fascicle as we move ∼5 mm along the nerve's length. Conclusions: Overall, we demonstrated the feasibility of performing automated microscopic tractography on 3D-MUSE datasets of peripheral nerves. The software should be applicable to other imaging approaches. The code is available at https://github.com/ckolluru/NerveTracker.
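The structure-tensor step of such a tracking pipeline can be illustrated in isolation. The sketch below is the generic textbook formulation (not the NerveTracker implementation): it estimates the dominant gradient orientation of a 2D patch from the summed structure tensor; fibers run perpendicular to that orientation.

```python
import numpy as np

def dominant_gradient_angle(img):
    """Dominant gradient orientation (radians, mod pi) from the summed
    structure tensor J = sum [[Ix^2, IxIy], [IxIy, Iy^2]], whose principal
    angle is 0.5 * atan2(2*Jxy, Jxx - Jyy). Fibers in the patch run
    perpendicular to this angle."""
    gy, gx = np.gradient(img.astype(float))   # axis 0 = rows (y), axis 1 = cols (x)
    gx, gy = gx[2:-2, 2:-2], gy[2:-2, 2:-2]   # trim boundary artifacts
    jxx = (gx * gx).sum()
    jyy = (gy * gy).sum()
    jxy = (gx * gy).sum()
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy) % np.pi
```

In a full tracker this estimate would be computed per voxel neighborhood and in 3D, then integrated into streamlines; the 2D patch version shows the core computation.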
Significance: Retinopathy of prematurity (ROP) poses a significant global threat to childhood vision, necessitating effective screening strategies. This study addresses the impact of color channels in fundus imaging on ROP diagnosis, emphasizing the efficacy and safety of utilizing longer wavelengths, such as red or green, for enhanced depth information and improved diagnostic capabilities. Aim: This study aims to assess the spectral effectiveness in color fundus photography for the deep learning classification of ROP. Approach: A convolutional neural network end-to-end classifier was utilized for deep learning classification of normal, stage 1, stage 2, and stage 3 ROP fundus images. The classification performances with individual-color-channel inputs, i.e., red, green, and blue, and multi-color-channel fusion architectures, including early-fusion, intermediate-fusion, and late-fusion, were quantitatively compared. Results: For individual-color-channel inputs, similar performance was observed for the green channel (88.00% accuracy, 76.00% sensitivity, and 92.00% specificity) and the red channel (87.25% accuracy, 74.50% sensitivity, and 91.50% specificity), both substantially outperforming the blue channel (78.25% accuracy, 56.50% sensitivity, and 85.50% specificity). For multi-color-channel fusion options, the early-fusion and intermediate-fusion architectures showed almost the same performance as the green/red channel inputs, and they outperformed the late-fusion architecture. Conclusions: This study reveals that the classification of ROP stages can be effectively achieved using either the green or red image alone. This finding enables the exclusion of blue images, acknowledged for their increased susceptibility to light toxicity.
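The single-channel inputs and the late-fusion variant can be sketched with plain NumPy. The replicate-to-three-channels choice and the probability-averaging rule are illustrative assumptions, not the paper's exact preprocessing or fusion head.

```python
import numpy as np

def single_channel_input(rgb, channel):
    """Build a single-color-channel input (0=R, 1=G, 2=B) by replicating the
    chosen channel to the three-channel shape typical CNN backbones expect."""
    c = rgb[..., channel]
    return np.stack([c, c, c], axis=-1)

def late_fusion(per_channel_probs):
    """Late fusion: average the class-probability vectors predicted by the
    separate per-channel classifiers."""
    return np.mean(per_channel_probs, axis=0)
```

Early fusion, by contrast, simply feeds the stacked channels into one network, and intermediate fusion merges feature maps inside the network; only the late-fusion combination is expressible this compactly.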
Significance: Of patients with early-stage breast cancer, 60% to 75% undergo breast-conserving surgery. Of those, 20% or more need a second surgery because of an incomplete tumor resection discovered only days after surgery. An intraoperative imaging technology allowing cancer detection on the margins of breast specimens could reduce re-excision rates and improve patient survival. Aim: We aimed to develop an experimental protocol using hyperspectral line-scanning Raman spectroscopy to image fresh breast specimens from cancer patients. Our objective was to determine whether macroscopic specimen images could be produced to distinguish invasive breast cancer from normal tissue structures. Approach: A hyperspectral inelastic scattering imaging instrument was used to interrogate eight specimens from six patients undergoing breast cancer surgery. Machine learning models trained with a different system to distinguish cancer from normal breast structures were used to produce tissue maps with a field of view of 1 cm², classifying each pixel as either cancer, adipose, or other normal tissues. The predictive model results were compared with spatially correlated histology maps of the specimens. Results: A total of eight specimens from six patients were imaged. Four of the hyperspectral images were associated with specimens containing cancer cells, which were correctly identified by the new ex vivo pathology technique. The images associated with the remaining four specimens had no histologically detectable cancer cells, and this was also correctly predicted by the instrument. Conclusions: We showed the potential of hyperspectral Raman imaging as an intraoperative breast cancer margin assessment technique that could help surgeons improve cosmesis and reduce the number of repeat procedures in breast cancer surgery.
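The per-pixel classification step can be sketched generically. Here a nearest-centroid rule is a hypothetical stand-in for the study's trained models, and the label names merely mirror the three classes in the abstract.

```python
import numpy as np

LABELS = ("cancer", "adipose", "other")

def tissue_map(cube, centroids):
    """Pixel-wise tissue map from a hyperspectral cube of shape (H, W, bands):
    each pixel's spectrum is assigned the index of the nearest class centroid
    (a stand-in for the study's trained classifiers)."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    dists = np.linalg.norm(flat[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(h, w)
```

The resulting label map is what gets overlaid on the specimen and compared against the spatially correlated histology maps.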
Significance: Damage to the cardiac conduction system remains one of the most significant risks associated with surgical interventions to correct congenital heart disease. This work demonstrates how light-scattering spectroscopy (LSS) can be used to non-destructively characterize cardiac tissue regions. Aim: To present an approach for associating tissue composition information with location-specific LSS data and to further evaluate an LSS and machine learning system as a method for non-destructive tissue characterization. Approach: A custom LSS probe was used to gather spectral data from locations across 14 excised human pediatric nodal tissue samples (8 sinus nodes, 6 atrioventricular nodes). The LSS spectra were used to train linear and neural-network-based regressor models to predict tissue composition characteristics derived from 3D models of the samples. Results: Nodal tissue region nuclear densities were reported. A linear model trained to regress nuclear density from spectra achieved a prediction r-squared of 0.64 and a concordance correlation coefficient of 0.78. Conclusions: These methods build on previous studies suggesting that LSS measurements combined with machine learning signal processing can provide clinically relevant cardiac tissue composition information.
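The agreement metric reported alongside r-squared, Lin's concordance correlation coefficient, has a closed form that is easy to state as code:

```python
import numpy as np

def lin_ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Unlike Pearson's r, it penalizes both scale and location offsets."""
    x, y = np.asarray(y_true, float), np.asarray(y_pred, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)
```

A CCC of 0.78, as reported, therefore reflects absolute agreement with the ground-truth nuclear densities, not merely linear correlation.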
Significance: Photoacoustic computed tomography (PACT) is a promising non-invasive imaging technique for both life science and clinical implementations. To achieve fast imaging speed, modern PACT systems are equipped with arrays that have hundreds to thousands of ultrasound transducer (UST) elements, and the element number continues to increase. However, the large number of UST elements with parallel data acquisition can generate a massive data size, making it very challenging to realize fast image reconstruction. Although several research groups have developed GPU-accelerated methods for PACT, there lacks an explicit and feasible step-by-step description of GPU-based algorithms for various hardware platforms. Aim: In this study, we propose a comprehensive framework for developing GPU-accelerated PACT image reconstruction, to help the research community grasp this advanced image reconstruction method. Approach: We leverage widely accessible open-source parallel computing tools, including Python multiprocessing-based parallelism, Taichi Lang for Python, CUDA, and other possible backends. We demonstrate that our framework delivers a significant performance improvement in PACT reconstruction, enabling faster analysis and real-time applications. In addition, we describe how to realize parallel computing on various hardware configurations, including multicore CPU, single-GPU, and multiple-GPU platforms. Results: Notably, our framework can achieve an effective speedup of ∼871 times when reconstructing extremely large-scale three-dimensional PACT images on a dual-GPU platform compared with a 24-core workstation CPU. We share example codes via GitHub. Conclusions: Our approach allows for easy adoption and adaptation by the research community, fostering implementations of PACT for both life science and medicine.
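The kernel such frameworks accelerate is, at its core, delay-and-sum backprojection. A serial NumPy sketch (2D geometry, uniform speed of sound, nearest-sample interpolation; not the paper's code) shows the per-sensor/per-pixel work that Taichi Lang or CUDA would map onto GPU threads:

```python
import numpy as np

def das_reconstruct(signals, sensor_xy, grid_xy, c, fs):
    """Delay-and-sum backprojection: for every image pixel, sum each
    transducer's signal sampled at the acoustic time of flight.
    signals: (n_sensors, n_samples); sensor_xy: (n_sensors, 2) [m];
    grid_xy: (n_pixels, 2) [m]; c: speed of sound [m/s]; fs: sampling rate [Hz]."""
    n_sensors, n_samples = signals.shape
    img = np.zeros(len(grid_xy))
    for s in range(n_sensors):
        # distance from this sensor to every pixel -> sample index to read
        dist = np.linalg.norm(grid_xy - sensor_xy[s], axis=1)
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < n_samples
        img[valid] += signals[s, idx[valid]]
    return img
```

Both loops (over sensors and over pixels) are embarrassingly parallel, which is why the reported speedups scale with transducer count and grid size.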
Significance: The estimation of tissue optical properties using diffuse optics has found a range of applications in disease detection, therapy monitoring, and general health care. Biomarkers derived from the estimated optical absorption and scattering coefficients can reflect the underlying progression of many biological processes in tissues. Aim: Complex light-tissue interactions make it challenging to disentangle the absorption and scattering coefficients, so dedicated measurement systems are required. We aim to help readers understand the measurement principles and practical considerations needed when choosing between different estimation methods based on diffuse optics. Approach: The estimation methods can be categorized as steady state, time domain, temporal frequency domain (FD), spatial domain, and spatial FD. The experimental measurements are coupled with models of light-tissue interactions, which enable inverse solutions for the absorption and scattering coefficients from the measured tissue reflectance and/or transmittance. Results: The estimation of tissue optical properties has been applied to characterize a variety of ex vivo and in vivo tissues, as well as tissue-mimicking phantoms. Choosing a specific estimation method for a given application requires trading off its advantages and limitations. Conclusion: Optical absorption and scattering property estimation is an increasingly important and accessible approach for medical diagnosis and health monitoring.
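A quantity common to all these estimation methods is the effective attenuation coefficient of diffusion theory, which ties the absorption coefficient mu_a and the reduced scattering coefficient mu_s' to the measurable decay of diffuse light:

```python
import numpy as np

def mu_eff(mu_a, mu_s_prime):
    """Effective attenuation coefficient from diffusion theory:
    mu_eff = sqrt(3 * mu_a * (mu_a + mu_s')). Inputs in 1/cm give 1/cm."""
    return np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

def penetration_depth(mu_a, mu_s_prime):
    """1/e penetration depth of diffuse light, delta = 1 / mu_eff."""
    return 1.0 / mu_eff(mu_a, mu_s_prime)
```

Because mu_eff depends on mu_a and mu_s' only through this product, a single attenuation measurement cannot separate the two coefficients, which is exactly why the multi-domain measurement strategies above are needed.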
Significance: The accurate correlation between optical measurements and pathology relies on precise image registration, often hindered by deformations in histology images. We investigate an automated multi-modal image registration method using deep learning to align breast specimen images with corresponding histology images. Aim: We aim to explore the effectiveness of an automated image registration technique based on deep learning principles for aligning breast specimen images with histology images acquired through different modalities, addressing challenges posed by intensity variations and structural differences. Approach: Unsupervised and supervised learning approaches, employing the VoxelMorph model, were examined using a dataset featuring manually registered images as ground truth. Results: Evaluation metrics, including Dice scores and mutual information, demonstrate that the unsupervised model significantly exceeds the supervised (and manual) approaches, achieving superior image alignment. The findings highlight the efficacy of automated registration in enhancing the validation of optical technologies by reducing human errors associated with manual registration processes. Conclusions: This automated registration technique offers promising potential to enhance the validation of optical technologies by minimizing human-induced errors and inconsistencies associated with manual image registration, thereby improving the accuracy of correlating optical measurements with pathology labels.
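One of the evaluation metrics, mutual information, can be computed with a standard histogram estimator; this generic sketch (bin count is an arbitrary choice) is the usual multi-modal alignment score, not the paper's exact evaluation code:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (in nats) between two images:
    MI = sum p(x, y) * log(p(x, y) / (p(x) * p(y))). Higher values indicate
    stronger statistical dependence, i.e., better multi-modal alignment."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal over image b
    py = pxy.sum(axis=0, keepdims=True)       # marginal over image a
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Mutual information is preferred over intensity difference here because the two modalities (specimen photographs and histology) need not share an intensity mapping, only a statistical dependence.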