"Big Data" is on the covers of Science, Nature, the Economist, and Wired magazines, and on the front pages of the Wall Street Journal and the New York Times. But despite the media hyperbole, as Christine Borgman points out in this examination of data and scholarly research, having the right data is usually better than having more data; little data can be just as valuable as big data. In many cases, there are no data -- because relevant data don't exist, cannot be found, or are not available. Moreover, data sharing is difficult, incentives to do so are minimal, and data practices vary widely across disciplines. Borgman, an often-cited authority on scholarly communication, argues that data have no value or meaning in isolation; they exist within a knowledge infrastructure -- an ecology of people, practices, technologies, institutions, material objects, and relationships. After laying out the premises of her investigation -- six "provocations" meant to inspire discussion about the uses of data in scholarship -- Borgman offers case studies of data practices in the sciences, the social sciences, and the humanities, and then considers the implications of her findings for scholarly practice and research policy. To manage and exploit data over the long term, Borgman argues, requires massive investment in knowledge infrastructures; at stake is the future of scholarship.
In the era of "big data," science is increasingly information driven, and the potential for computers to store, manage, and integrate massive amounts of data has given rise to such new disciplinary fields as biomedical informatics. Applied ontology offers a strategy for the organization of scientific information in computer-tractable form, drawing on concepts not only from computer and information science but also from linguistics, logic, and philosophy. This book provides an introduction to the field of applied ontology that is of particular relevance to biomedicine, covering theoretical components of ontologies, best practices for ontology design, and examples of biomedical ontologies in use. After defining an ontology as a representation of the types of entities in a given domain, the book distinguishes between different kinds of ontologies and taxonomies, and shows how applied ontology draws on more traditional ideas from metaphysics. It presents the core features of the Basic Formal Ontology (BFO), now used by over one hundred ontology projects around the world, and offers examples of domain ontologies that utilize BFO. The book also describes the Web Ontology Language (OWL), a common framework for Semantic Web technologies. Throughout, the book provides concrete recommendations for the design and construction of domain ontologies.
Interpersonal relation defines the association, e.g., warmth, friendliness, and dominance, between two or more people. We investigate whether such fine-grained and high-level relation traits can be characterized and quantified from face images in the wild. We address this challenging problem by first studying a deep network architecture for robust recognition of facial expressions. Unlike existing models that typically learn from facial expression labels alone, we devise an effective multitask network that is capable of learning from rich auxiliary attributes such as gender, age, and head pose, beyond just facial expression data. While conventional supervised training requires datasets with complete labels (e.g., all samples must be labeled with gender, age, and expression), we show that this requirement can be relaxed via a novel attribute propagation method. The approach further allows us to leverage the inherent correspondences between heterogeneous attribute sources despite the disparate distributions of different datasets. With this network we demonstrate state-of-the-art results on existing facial expression recognition benchmarks. To predict interpersonal relation, we use the expression recognition network as the branches of a Siamese model. Extensive experiments show that our model is capable of mining the mutual context of faces for accurate fine-grained interpersonal prediction.
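The Siamese arrangement described above — the same expression network applied to both faces, with a joint head predicting relation traits — can be illustrated with a toy NumPy sketch. All dimensions and weights here are illustrative, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared "branch": one linear layer standing in for the expression network.
W_branch = rng.standard_normal((16, 64))   # 64-dim face feature -> 16-dim embedding

def branch(face_feat):
    # Both faces pass through the SAME weights (Siamese weight sharing).
    return np.tanh(W_branch @ face_feat)

W_head = rng.standard_normal((3, 32))      # 3 hypothetical relation traits

def relation_scores(face_a, face_b):
    # Concatenating both embeddings lets the head use the mutual context.
    joint = np.concatenate([branch(face_a), branch(face_b)])
    return W_head @ joint

scores = relation_scores(rng.standard_normal(64), rng.standard_normal(64))
print(scores.shape)  # (3,)
```

The essential point is the weight sharing: swapping the two faces changes only the order of the concatenated embeddings, not the features themselves.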
We introduce a novel matching algorithm, called DeepMatching, to compute dense correspondences between images. DeepMatching relies on a hierarchical, multi-layer, correlational architecture designed for matching images and was inspired by deep convolutional approaches. The proposed matching algorithm can handle non-rigid deformations and repetitive textures and efficiently determines dense correspondences in the presence of significant changes between images. We evaluate the performance of DeepMatching, in comparison with state-of-the-art matching algorithms, on the Mikolajczyk (Mikolajczyk et al., A comparison of affine region detectors, 2005), the MPI-Sintel (Butler et al., A naturalistic open source movie for optical flow evaluation, 2012), and the KITTI (Geiger et al., Vision meets robotics: The KITTI dataset, 2013) datasets. DeepMatching outperforms the state-of-the-art algorithms and shows excellent results in particular for repetitive textures. We also apply DeepMatching to the computation of optical flow, called DeepFlow, by integrating it into the large displacement optical flow (LDOF) approach of Brox and Malik (Large displacement optical flow: descriptor matching in variational motion estimation, 2011). Additional robustness to large displacements and complex motion is obtained thanks to our matching approach. DeepFlow obtains competitive performance on public benchmarks for optical flow estimation.
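DeepMatching's hierarchical correlations build on a basic primitive: scoring patch similarity by normalized cross-correlation. A minimal, non-hierarchical sketch of that building block (exhaustive search over a small image, illustrative only — the actual algorithm aggregates such correlations across layers):

```python
import numpy as np

def best_match(patch, image):
    """Exhaustively find the position in `image` whose window best matches
    `patch` under normalized cross-correlation."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    best, best_pos = -np.inf, None
    for y in range(image.shape[0] - ph + 1):
        for x in range(image.shape[1] - pw + 1):
            win = image[y:y+ph, x:x+pw]
            w = (win - win.mean()) / (win.std() + 1e-8)
            score = (p * w).mean()          # NCC score in [-1, 1]
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

rng = np.random.default_rng(1)
img = rng.standard_normal((32, 32))
patch = img[10:15, 12:17].copy()            # patch cut from a known location
print(best_match(patch, img))               # (10, 12)
```

Repetitive textures are exactly where this flat search fails (many near-identical maxima); the hierarchy in DeepMatching disambiguates such cases by requiring larger-scale consistency.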
Recent studies have demonstrated the power of recurrent neural networks for machine translation, image captioning, and speech recognition. For the task of capturing temporal structure in video, however, there still remain numerous open research questions. Current research suggests using a simple temporal feature pooling strategy to take into account the temporal aspect of video. We demonstrate that this method is not sufficient for gesture recognition, where temporal information is more discriminative compared to general video classification tasks. We explore deep architectures for gesture recognition in video and propose a new end-to-end trainable neural network architecture incorporating temporal convolutions and bidirectional recurrence. Our main contributions are twofold: first, we show that recurrence is crucial for this task; second, we show that adding temporal convolutions leads to significant improvements. We evaluate the different approaches on the Montalbano gesture recognition dataset, where we achieve state-of-the-art results.
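The pooling-versus-recurrence argument can be made concrete: mean pooling over time is order-invariant, so a gesture and its time-reversal pool to identical features, while even a single temporal convolution distinguishes them. A toy NumPy illustration (frame features and kernel are random placeholders):

```python
import numpy as np

T, D = 8, 4                       # frames, per-frame feature dimension
rng = np.random.default_rng(0)
feats = rng.standard_normal((T, D))

# Temporal mean pooling: order-invariant, so a reversed gesture looks identical.
pool_fwd = feats.mean(axis=0)
pool_rev = feats[::-1].mean(axis=0)
print(np.allclose(pool_fwd, pool_rev))      # True -> frame ordering is lost

# A single temporal convolution (kernel size 3) keeps local ordering.
kernel = rng.standard_normal((3, D))
conv     = np.array([(feats[t:t+3]       * kernel).sum() for t in range(T - 2)])
conv_rev = np.array([(feats[::-1][t:t+3] * kernel).sum() for t in range(T - 2)])
print(np.allclose(conv, conv_rev))          # False -> ordering matters
```

For gestures, where "hand up then down" and "hand down then up" are different classes, this order sensitivity is exactly what pooling throws away.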
Several reports have shown that radiomic features are affected by acquisition and reconstruction parameters, thus hampering multicenter studies. We propose a method that, by removing the center effect while preserving patient-specific effects, standardizes features measured from PET images obtained using different imaging protocols.
Pretreatment 18F-FDG PET images of patients with breast cancer were included. In one nuclear medicine department (department A), 63 patients were scanned on a time-of-flight PET/CT scanner, and 16 lesions were triple-negative (TN). In another nuclear medicine department (department B), 74 patients underwent PET/CT on a different brand of scanner with a different reconstruction protocol, and 15 lesions were TN. The images from department A were smoothed using a gaussian filter to mimic data from a third department (department A-S). The primary lesion was segmented to obtain a lesion volume of interest (VOI), and a spheric VOI was set in healthy liver tissue. Three SUVs and 6 textural features were computed in all VOIs. A harmonization method initially described for genomic data was used to estimate the department effect based on the observed feature values. Feature distributions in each department were compared before and after harmonization.
In healthy liver tissue, the distributions significantly differed for 4 of 9 features between departments A and B and for 6 of 9 between departments A and A-S (P < 0.05, Wilcoxon test). After harmonization, none of the 9 feature distributions significantly differed between any 2 departments (P > 0.1). The same trend was observed in lesions, with a realignment of feature distributions between the departments after harmonization. Identification of TN lesions was largely enhanced after harmonization when the cutoffs were determined on data from one department and applied to data from the other department.
The proposed harmonization method is efficient at removing the multicenter effect for textural features and SUVs. The method is easy to use, retains biologic variations not related to a center effect, and does not require any feature recalculation. Such harmonization allows for multicenter studies and for external validation of radiomic models or cutoffs and should facilitate the use of radiomic models in clinical practice.
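The harmonization idea — removing per-center shifts in location and scale while keeping each patient's relative position within their center — can be sketched as follows. This is a simplified stand-in: the genomic method the authors adapt (presumably ComBat) additionally shrinks the per-center estimates with empirical Bayes, which this sketch omits:

```python
import numpy as np

def harmonize(values, centers):
    """Location-scale harmonization of one feature: standardize within each
    center, then rescale to the pooled mean and spread. Patient-specific
    variation within a center is preserved."""
    values, centers = np.asarray(values, float), np.asarray(centers)
    grand_mean, grand_std = values.mean(), values.std()
    out = np.empty_like(values)
    for c in np.unique(centers):
        m = centers == c
        z = (values[m] - values[m].mean()) / (values[m].std() + 1e-12)
        out[m] = z * grand_std + grand_mean
    return out

# Hypothetical example: department B measures the same biology with a +2 offset.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = a + 2.0
feat = np.concatenate([a, b])
dept = np.array(["A"] * 4 + ["B"] * 4)
h = harmonize(feat, dept)
print(h[:4], h[4:])   # the two department distributions now coincide
```

After harmonization the B values line up exactly with the A values, so a cutoff learned on one department transfers to the other — the clinical use case the abstract describes.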
The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.
Deep learning for biological image classification. Affonso, Carlos; Rossi, André Luis Debiaso; Vieira, Fábio Henrique Antunes. Expert Systems with Applications, vol. 85, 11/2017. Journal article, peer-reviewed, open access.
•Compare deep learning architectures with machine learning techniques.
•Classify the quality of wood boards.
•Extract texture descriptors from wood images.
A number of industries use human inspection to visually classify the quality of their products and of the raw materials used in the production process; this process could be done automatically through digital image processing. Industries are not always interested in the most accurate technique for a given problem, but in the one most appropriate for the expected results: there must be a balance between accuracy and computational cost. This paper investigates the classification of the quality of wood boards based on their images. For that, it compares the use of deep learning, particularly convolutional neural networks, with the combination of texture-based feature extraction techniques and traditional classifiers: decision tree induction algorithms, neural networks, nearest neighbors, and support vector machines. Reported studies show that deep learning techniques applied to image processing tasks have achieved predictive performance superior to traditional classification techniques, mainly in highly complex scenarios. One of the reasons pointed out is their embedded feature extraction mechanism: deep learning techniques directly identify and extract the features they consider relevant in a given image dataset. However, empirical results on this image dataset show that the proposed texture descriptor method, regardless of the strategy employed, is very competitive with convolutional neural networks across all the performed experiments. The better performance of the texture descriptor method may be due to the nature of the image dataset. Finally, some perspectives on future developments applying active learning and semi-supervised methods are pointed out.
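To illustrate the texture-descriptor side of the comparison, here is a minimal gray-level co-occurrence matrix (GLCM) contrast feature in NumPy. GLCM contrast is a classic texture descriptor, used here only as a representative example — the abstract does not specify which descriptor the authors propose:

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Gray-level co-occurrence matrix over horizontal neighbors, reduced to
    its contrast feature: high when adjacent pixels differ strongly."""
    q = (img * (levels - 1)).round().astype(int)      # quantize to gray levels
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                               # count neighbor pairs
    glcm /= glcm.sum()                                # normalize to probabilities
    ii, jj = np.indices(glcm.shape)
    return ((ii - jj) ** 2 * glcm).sum()

smooth = np.full((16, 16), 0.5)                       # flat, defect-free board
rng = np.random.default_rng(0)
rough = rng.random((16, 16))                          # noisy texture
print(glcm_contrast(smooth) < glcm_contrast(rough))   # True
```

A vector of such features (contrast, homogeneity, energy, ...) per image is what a decision tree, k-NN, or SVM would then classify, at a fraction of a CNN's computational cost.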
•Deep-learning-based semantic segmentation method with a fully convolutional network is proposed.
•Defect images are captured via a self-developed image acquisition equipment (MTI-200a).
•The proposed method can rapidly and accurately recognize defects in metro shield tunnels.
The performance of traditional visual inspection based on handcrafted features for crack and leakage defects of metro shield tunnels is hardly satisfactory nowadays, because it is inefficient at distinguishing defects from interference such as segmental joints, bolt holes, cables, and manual marks. Based on deep learning (DL), this paper proposes a novel image recognition algorithm for semantic segmentation of crack and leakage defects of metro shield tunnels using hierarchies of features extracted by a fully convolutional network (FCN). The defect images in the training and testing datasets are captured via a self-developed image acquisition equipment named Moving Tunnel Inspection (MTI-200a). After the establishment of the image datasets, FCN models for crack and leakage are trained separately through several iterations of forward inference and backward learning. Semantic segmentation of defect images is implemented via the corresponding FCN models using a two-stream algorithm: one stream recognizes cracks by a sliding-window-assembling operation, and the other handles leakage by a resizing-interpolation operation. Compared with two frequently used traditional methods, the region growing algorithm (RGA) and the adaptive thresholding algorithm (ATA), the proposed method shows great superiority in recognition results, inference time, and error rates on four typical types of defect images: crack-only, leakage-only, two-defect-nonoverlapping (TDN), and two-defect-overlapping (TDO) images. The proposed DL-based method can be employed to rapidly and accurately recognize defects for structural health monitoring and maintenance of metro shield tunnels.
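The sliding-window-assembling step can be sketched generically: tile the image, run a per-window segmentation model, and assemble the window outputs back into one full-resolution mask. Everything below is illustrative — the toy thresholding "model" merely stands in for the trained crack FCN:

```python
import numpy as np

def sliding_window_segment(image, predict, win=64, stride=64):
    """Apply `predict` (window -> per-pixel score map of the same size) over
    sliding windows and assemble the results, averaging any overlaps."""
    H, W = image.shape
    mask = np.zeros((H, W))
    count = np.zeros((H, W))
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            mask[y:y+win, x:x+win] += predict(image[y:y+win, x:x+win])
            count[y:y+win, x:x+win] += 1
    return mask / np.maximum(count, 1)

# Hypothetical stand-in model: mark pixels darker than 0.2 as "crack".
toy_predict = lambda tile: (tile < 0.2).astype(float)

img = np.random.default_rng(0).random((128, 128))
out = sliding_window_segment(img, toy_predict)
print(out.shape)     # (128, 128)
```

A stride smaller than the window size yields overlapping predictions that the averaging smooths, at the cost of more forward passes; the resizing-interpolation stream for leakage avoids tiling entirely by downscaling the whole image.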