Intense interest in applying convolutional neural networks (CNNs) to biomedical image analysis is widespread, but their success is impeded by the lack of large annotated datasets in biomedical imaging. Annotating biomedical images is not only tedious and time-consuming, but also demands costly, specialty-oriented knowledge and skills, which are not easily accessible. To dramatically reduce annotation cost, this paper presents a novel method called AIFT (active, incremental fine-tuning) that naturally integrates active learning and transfer learning into a single framework. AIFT starts directly with a pre-trained CNN to seek "worthy" samples from the unannotated pool for annotation, and the (fine-tuned) CNN is fine-tuned continuously by incorporating newly annotated samples in each iteration to enhance the CNN's performance incrementally. We have evaluated our method in three different biomedical imaging applications, demonstrating that the cost of annotation can be cut by at least half. This performance is attributed to the advanced active and incremental capabilities of our AIFT method.
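The sample-selection step of an active fine-tuning loop like AIFT can be illustrated with a minimal sketch: rank unannotated samples by the entropy of the current CNN's predictions and send the most uncertain ones for annotation. The entropy criterion and the `select_worthy_samples` helper are illustrative assumptions here, not the paper's exact "worthiness" measure.

```python
import numpy as np

def select_worthy_samples(probs, k):
    """Rank unannotated samples by prediction entropy (higher = more
    informative) and return the indices of the top-k candidates.
    `probs` is an (n_samples, n_classes) array of softmax outputs
    from the current (pre-trained or fine-tuned) CNN."""
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[::-1][:k]

# Toy run: the most uncertain sample (uniform prediction) ranks first.
probs = np.array([
    [0.98, 0.02],   # confident -> low entropy
    [0.50, 0.50],   # uncertain -> high entropy
    [0.80, 0.20],
])
print(select_worthy_samples(probs, 2))  # -> [1 2]
```

After the selected samples are annotated, they are appended to the labeled set and the same network is fine-tuned again, which is what makes the scheme incremental rather than retrained from scratch.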
Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. The truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extended model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results on both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and iteratively reweighted nuclear norm (IRNN) methods.
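The quantity behind TNNR can be sketched in a few lines: the truncated nuclear norm of a matrix leaves its r largest singular values unpenalized and sums only the remaining ones, which is why it approximates rank more tightly than the full nuclear norm. This standalone helper illustrates the norm itself, not the WRE optimization scheme.

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of the singular values of X excluding the r largest ones.
    With r = 0 this reduces to the ordinary nuclear norm; for r equal
    to the target rank, the dominant structure of X is not penalized."""
    s = np.linalg.svd(X, compute_uv=False)
    return s[r:].sum()

# A rank-1 matrix has one nonzero singular value, so truncating the
# single dominant value leaves (numerically) zero.
X = np.outer([1.0, 2.0], [3.0, 4.0])
print(round(truncated_nuclear_norm(X, 0), 3))  # full nuclear norm
print(round(truncated_nuclear_norm(X, 1), 3))  # -> 0.0
```

Minimizing this quantity subject to the observed entries is what TNNR-style solvers (ADMM, APGL, and the proposed WRE variants) iterate on.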
The multi-dimensional assignment problem is universal in data association analysis, such as data association-based visual multi-object tracking and multi-graph matching. In this paper, multi-dimensional assignment is formulated as a rank-1 tensor approximation problem, and a dual ℓ1-normalized context/hyper-context aware tensor power iteration optimization method is proposed. The method is applied to multi-object tracking and multi-graph matching. In the optimization method, tensor power iteration with the dual unit norm enables the capture of information across multiple sample sets. Interactions between sample associations are modeled as contexts or hyper-contexts, which are combined with the global affinity into a unified optimization. The optimization is flexible enough to accommodate various types of contextual models. In multi-object tracking, the global affinity is defined according to the appearance similarity between objects detected in different frames. Interactions between objects are modeled as motion contexts, which are encoded into the global association optimization. The tracking method integrates high-order motion information and high-order appearance variation. The multi-graph matching method carries out matching over graph vertices and structure matching over graph edges simultaneously. Matching consistency across multiple graphs is based on the high-order tensor optimization. Various types of vertex affinities and edge/hyper-edge affinities are flexibly integrated. Experiments on several public datasets, such as the MOT16 challenge benchmark, validate the effectiveness of the proposed methods.
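The core update can be sketched as a rank-1 power iteration on a third-order, non-negative affinity tensor, with each factor renormalized onto the ℓ1 unit sphere so that its entries read as soft assignment weights. The toy tensor and the `l1_rank1_power_iteration` helper below are illustrative simplifications, not the paper's full context-aware model.

```python
import numpy as np

def l1_rank1_power_iteration(T, n_iter=30):
    """Rank-1 approximation of a non-negative 3-way affinity tensor T by
    power iteration. Each factor is contracted against the other two and
    then l1-normalized, so it stays a distribution over assignments."""
    x = [np.full(n, 1.0 / n) for n in T.shape]
    for _ in range(n_iter):
        x[0] = np.einsum('ijk,j,k->i', T, x[1], x[2])
        x[0] /= x[0].sum()
        x[1] = np.einsum('ijk,i,k->j', T, x[0], x[2])
        x[1] /= x[1].sum()
        x[2] = np.einsum('ijk,i,j->k', T, x[0], x[1])
        x[2] /= x[2].sum()
    return x

# Toy tensor whose dominant affinity sits at index (0, 1, 1): the
# iteration concentrates each factor on the matching index.
T = np.zeros((2, 2, 2))
T[0, 1, 1] = 1.0
T[1, 0, 0] = 0.2
x = l1_rank1_power_iteration(T)
print([int(np.argmax(v)) for v in x])  # -> [0, 1, 1]
```

In the paper's setting the tensor entries would be global affinities modulated by context/hyper-context terms, and the converged factors would decode the multi-frame association or multi-graph matching.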
Background
This study aimed to compare one state-of-the-art deep learning method and four classical machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer (NSCLC) from ¹⁸F-FDG PET/CT images. Another objective was to compare the discriminative power of the recently popular PET/CT texture features with the widely used diagnostic features such as tumor size, CT value, SUV, image contrast, and intensity standard deviation. The four classical machine learning methods were random forests, support vector machines, adaptive boosting, and artificial neural networks. The deep learning method was a convolutional neural network (CNN). The five methods were evaluated using 1397 lymph nodes collected from PET/CT images of 168 patients, with the corresponding pathology analysis results as the gold standard. The comparison was conducted using 10 times 10-fold cross-validation based on the criteria of sensitivity, specificity, accuracy (ACC), and area under the ROC curve (AUC). For each classical method, different input features were compared to select the optimal feature set. Based on the optimal feature set, the classical methods were compared with the CNN, as well as with human doctors from our institute.
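The 10 times 10-fold protocol can be sketched as follows: the samples are shuffled and split into 10 folds, and this is repeated 10 times, so every sample is held out exactly once per repeat. The `repeated_kfold_indices` helper is an illustrative assumption; in practice a library routine such as scikit-learn's `RepeatedKFold` serves the same purpose.

```python
import numpy as np

def repeated_kfold_indices(n, n_splits=10, n_repeats=10, seed=0):
    """Yield (train, test) index pairs for repeated k-fold
    cross-validation: n_repeats independent shuffles, each split
    into n_splits disjoint held-out folds."""
    rng = np.random.default_rng(seed)
    for _ in range(n_repeats):
        order = rng.permutation(n)
        folds = np.array_split(order, n_splits)
        for i in range(n_splits):
            test = folds[i]
            train = np.concatenate(folds[:i] + folds[i + 1:])
            yield train, test

# 1397 lymph nodes -> 100 evaluation folds in total.
splits = list(repeated_kfold_indices(n=1397))
print(len(splits))  # -> 100
```

Averaging sensitivity, specificity, ACC, and AUC over the 100 held-out folds is what makes the reported comparison between methods statistically testable.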
Results
For the classical methods, the diagnostic features resulted in 81–85% ACC and 0.87–0.92 AUC, which were significantly higher than the results of the texture features. The CNN's sensitivity, specificity, ACC, and AUC were 84%, 88%, 86%, and 0.91, respectively. There was no significant difference between the results of the CNN and the best classical method. The sensitivity, specificity, and ACC of the human doctors were 73%, 90%, and 82%, respectively. All five machine learning methods had higher sensitivities but lower specificities than the human doctors.
Conclusions
The present study shows that the performance of the CNN is not significantly different from that of the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images. Because the CNN needs neither tumor segmentation nor feature calculation, it is more convenient and more objective than the classical methods. However, the CNN does not make use of the important diagnostic features, which have been proven more discriminative than the texture features for classifying small-sized lymph nodes. Therefore, incorporating the diagnostic features into the CNN is a promising direction for future research.
Creating large-scale, well-annotated datasets to train AI algorithms is crucial for automated tumor detection and localization. However, with limited resources, it is challenging to determine the best type of annotation when annotating massive amounts of unlabeled data. To address this issue, we focus on polyps in colonoscopy videos and pancreatic tumors in abdominal CT scans; both applications require significant effort and time for pixel-wise annotation due to the high-dimensional nature of the data, which involves either temporal or spatial dimensions. In this paper, we develop a new annotation strategy, termed Drag&Drop, which simplifies the annotation process to a drag and a drop. This annotation strategy is more efficient, particularly for temporal and volumetric imaging, than other types of annotations, such as per-pixel masks, bounding boxes, scribbles, ellipses, and points. Furthermore, to exploit our Drag&Drop annotations, we develop a novel weakly supervised learning method based on the watershed algorithm. Experimental results show that our method achieves better detection and localization performance than alternative weak annotations and, more importantly, achieves performance similar to that of models trained on detailed per-pixel annotations. Interestingly, we find that, with limited resources, allocating weak annotations from a diverse patient population can foster models more robust to unseen images than allocating per-pixel annotations for a small set of images. In summary, this research proposes an efficient annotation strategy for tumor detection and localization that is less accurate than per-pixel annotation but useful for creating large-scale datasets for screening tumors in various medical modalities. Project Page: https://github.com/johnson111788/Drag-Drop
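One way a drag-and-drop style annotation (a click inside the lesion plus a release point just outside it) might be expanded into a pixel-wise pseudo mask is to seed the watershed transform on the intensity gradient from those two points. The seeding scheme and the `pseudo_mask_from_dragdrop` helper below are simplified assumptions for illustration, not the paper's exact method.

```python
import numpy as np
from scipy.ndimage import morphological_gradient, watershed_ift

def pseudo_mask_from_dragdrop(image, click, drop):
    """Grow a foreground region (label 2) from the click point and a
    background region (label 1) from the drop point, letting the two
    fronts meet at intensity edges via the watershed transform."""
    grad = morphological_gradient(image, size=(3, 3)).astype(np.uint8)
    markers = np.zeros(image.shape, dtype=np.int16)
    markers[drop] = 1    # background seed (outside the lesion)
    markers[click] = 2   # foreground seed (inside the lesion)
    labels = watershed_ift(grad, markers)
    return labels == 2

# Toy "scan": a bright square blob on a dark background.
img = np.zeros((12, 12), dtype=np.uint8)
img[4:8, 4:8] = 200
mask = pseudo_mask_from_dragdrop(img, click=(5, 5), drop=(1, 1))
print(bool(mask[5, 5]), bool(mask[1, 1]))  # -> True False
```

A weakly supervised segmenter can then be trained against such pseudo masks instead of expensive per-pixel annotations.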
The therapeutics used to promote perforator flap survival function by inducing vascular regeneration and inhibiting apoptosis. The present study aimed to explore the potential mechanism of the angiogenic effects of Ginkgolide B (GB) in perforator flaps. Methods: A total of 72 rats were divided into three groups and treated with saline, GB, or GB + tunicamycin (TM; an ER stress activator) for seven consecutive days. Apoptosis was assayed by determining the Bax/Bcl-2 ratio and caspase-3 level. Endoplasmic reticulum (ER) stress markers (CHOP, GRP78, and caspase-12) were detected by Western blot analysis. Oxidative stress was assessed by measuring superoxide dismutase (SOD) activity and malondialdehyde (MDA) level, as well as heme oxygenase-1 (HO-1) and nuclear factor erythroid-2-related factor 2 (Nrf2) mRNA levels in the flaps. The percentage flap survival area and blood flow were assessed on postoperative day (POD) 7. Angiogenesis was visualized by hematoxylin and eosin and CD34 staining on POD 7. Results: GB increased the survival of perforator flaps; the flap survival areas of the GB, GB + TM, and control groups were 90.83 ± 1.93%, 70.93 ± 4.13%, and 62.97 ± 6.50%, respectively. GB decreased the Bax/Bcl-2 ratio and caspase-3 level. ER stress-related proteins were downregulated by GB. GB also decreased the MDA level and increased SOD activity and the HO-1 and Nrf2 mRNA levels in the flaps. Further, GB induced regeneration of vascular vessels in comparison with saline or GB + TM. Conclusions: GB increased angiogenesis and alleviated oxidative stress by inhibiting ER stress, which increased the survival of perforator flaps. In contrast, GB + TM reduced angiogenesis and induced oxidative stress by activating ER stress, decreasing the survival of perforator flaps.
A trusted path is a protected channel that assures the secrecy and authenticity of data transfers between a user's input/output (I/O) device and a program trusted by that user. We argue that, despite its incontestable necessity, current commodity systems do not support trusted paths with any significant assurance. This paper presents a hypervisor-based design that enables a trusted path to bypass an untrusted operating system, applications, and I/O devices, with a minimal Trusted Computing Base (TCB). We also suggest concrete I/O architectural changes that will simplify future trusted-path system design. Our system enables users to verify the states and configurations of one or more trusted paths using a simple, secretless, hand-held device. We implement a simple user-oriented trusted path as a case study.
Surgery is still the main treatment for congenital polydactyly, and the aim of surgical reconstruction is to obtain a thumb with excellent function and appearance. A systematic assessment of the polydactyly is required prior to surgery, including bone stress lines, joint deviation, joint mobility and instability, and the size and development of the finger and nail. Bone shape, joint incongruity, and abnormal tendon insertions must be corrected completely in order to obtain good function and to avoid secondary surgery. The Bilhaut-Cloquet procedure can reconstruct the size of the finger and nail. Fine manipulation can improve the postoperative nail deformity so that the reconstructed nail achieves a satisfactory aesthetic score.
To be trustworthy, security-sensitive applications must be formally verified and hence small and simple, i.e., wimpy. Thus, they cannot include the variety of basic services available only in large and untrustworthy commodity systems, i.e., in giants. Hence, wimps must securely compose with giants to survive on commodity systems, i.e., rely on the giants' services, but only after efficiently verifying their results. This paper presents a security architecture based on a wimpy kernel that provides on-demand isolated I/O channels for wimp applications without bloating the underlying trusted computing base. The size and complexity of the wimpy kernel are minimized by safely outsourcing I/O subsystem functions to an untrusted commodity operating system and exporting driver and I/O subsystem code to wimp applications. Using the USB subsystem as a case study, this paper illustrates the dramatic reduction of wimpy-kernel size and complexity; e.g., over 99% of the USB code base is removed. Performance measurements indicate that the wimpy-kernel architecture exhibits the desired execution efficiency.
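The compose-with-giants idea — use an untrusted service's result only after an efficient check — can be illustrated with a deliberately simple, non-security example. The sorting "service" below is purely illustrative; the paper's wimpy kernel verifies I/O subsystem state and configurations, not sorts.

```python
from collections import Counter

def giant_sort(xs):
    """Untrusted 'giant' service: its output is only a claim."""
    return sorted(xs)

def verified_sort(xs):
    """Wimp-side wrapper: outsource the O(n log n) work, then perform
    only O(n) verification before trusting the result. A tampered
    result (reordered, dropped, or injected elements) is rejected."""
    out = giant_sort(xs)
    if Counter(out) != Counter(xs):
        raise ValueError("giant dropped or injected elements")
    if any(a > b for a, b in zip(out, out[1:])):
        raise ValueError("giant returned an unsorted result")
    return out

print(verified_sort([3, 1, 2]))  # -> [1, 2, 3]
```

The pattern is the same one the architecture relies on: verification must be strictly cheaper than re-implementing the outsourced function inside the trusted code base, or the outsourcing buys nothing.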
Object detection plays a critical role in automatic video analysis for many vision applications. Background subtraction has been the mainstream approach in the field of moving object detection. However, most state-of-the-art background subtraction techniques operate on each pixel independently, ignoring the global features of images. A motion detection method based on subspace update of the background is proposed in this study. The method uses a subspace spanned by the principal components of the background sequence to characterise the background and integrates the regional continuity of objects to segment the foreground. To deal with changes in the background geometry, a learning factor is introduced into the authors' model to update the subspace in a timely manner. Additionally, to reduce computational complexity, two-dimensional principal component analysis (PCA) rather than traditional PCA is used to obtain the principal components of the background. Experiments demonstrate that the update policy is effective and that in most cases the proposed method achieves better results than the others compared in this study.
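The 2D-PCA background model can be sketched as follows: the column covariance of the background frames yields a small eigenbasis (no vectorization of frames, which is what makes it cheaper than traditional PCA), and pixels with a large reconstruction residual are flagged as foreground. The helper names, threshold, and toy data are illustrative assumptions; the incremental subspace update with a learning factor is omitted.

```python
import numpy as np

def fit_2dpca_background(frames, n_components=2):
    """2D-PCA background model: eigenvectors of the column covariance
    of the (T, H, W) background stack span the background subspace."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    G = np.einsum('tij,tik->jk', centered, centered) / len(frames)
    _, vecs = np.linalg.eigh(G)                 # ascending eigenvalues
    return mean, vecs[:, -n_components:]        # keep top eigenvectors

def foreground_mask(frame, mean, basis, thresh=30.0):
    """Reconstruct the frame from the background subspace and flag
    pixels with a large residual as moving foreground."""
    recon = mean + (frame - mean) @ basis @ basis.T
    return np.abs(frame - recon) > thresh

# Background whose variation lives in two columns (e.g. lighting).
base = np.full((8, 8), 20.0)
frames = []
for t in range(10):
    f = base.copy()
    f[:, 0] += 5.0 * np.sin(t)
    f[:, 1] += 3.0 * np.cos(t)
    frames.append(f)
mean, basis = fit_2dpca_background(np.stack(frames))

test = base.copy()
test[2:4, 2:4] += 120.0                         # moving object
mask = foreground_mask(test, mean, basis)
print(bool(mask[2, 2]), bool(mask[0, 0]))  # -> True False
```

In the full method, the per-pixel mask would additionally be cleaned up using the regional continuity of objects before segmentation.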