State-of-the-art light and electron microscopes are capable of acquiring large image datasets, but quantitatively evaluating the data often involves manually annotating structures of interest. This process is time-consuming and often a major bottleneck in the evaluation pipeline. To overcome this problem, we have introduced the Trainable Weka Segmentation (TWS), a machine learning tool that leverages a limited number of manual annotations to train a classifier and segment the remaining data automatically. In addition, TWS can provide unsupervised segmentation learning schemes (clustering) and can be customized to employ user-designed image features or classifiers.
TWS is distributed as open-source software as part of the Fiji image processing distribution of ImageJ at http://imagej.net/Trainable_Weka_Segmentation.
ignacio.arganda@ehu.eus.
Supplementary data are available at Bioinformatics online.
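The workflow the abstract describes, training a classifier on sparse manual annotations and classifying the remaining pixels automatically, can be illustrated with a short scikit-learn sketch. This is not the TWS API (TWS is a Fiji/ImageJ plugin built on WEKA); the feature stack, the annotation coordinates, and the classifier settings below are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image):
    """Per-pixel feature stack: raw intensity plus Gaussian blurs."""
    feats = [image] + [ndimage.gaussian_filter(image, s) for s in (1.0, 2.0, 4.0)]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

rng = np.random.default_rng(0)
image = rng.random((64, 64))
image[20:40, 20:40] += 1.0          # a brighter "structure" to segment

# Sparse manual annotations: three foreground and three background pixels.
fg = np.ravel_multi_index(([25, 30, 35], [25, 30, 35]), image.shape)
bg = np.ravel_multi_index(([5, 10, 55], [5, 50, 55]), image.shape)

X = pixel_features(image)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[np.concatenate([fg, bg])], [1, 1, 1, 0, 0, 0])

# Classify every remaining pixel automatically.
segmentation = clf.predict(X).reshape(image.shape)
```

In TWS itself the feature stack (edges, texture, membrane projections, etc.) and the WEKA classifier are both user-configurable; the sketch only mirrors the overall train-on-few, predict-on-all pattern.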
We describe automated technologies to probe the structure of neural tissue at nanometer resolution and use them to generate a saturated reconstruction of a sub-volume of mouse neocortex in which all cellular objects (axons, dendrites, and glia) and many sub-cellular components (synapses, synaptic vesicles, spines, spine apparati, postsynaptic densities, and mitochondria) are rendered and itemized in a database. We explore these data to study physical properties of brain tissue. For example, by tracing the trajectories of all excitatory axons and noting their juxtapositions, both synaptic and non-synaptic, with every dendritic spine, we refute the idea that physical proximity is sufficient to predict synaptic connectivity (the so-called Peters’ rule). This online minable database provides general access to the intrinsic complexity of the neocortex and enables further data-driven inquiries.
•Tape-based pipeline for electron microscopic reconstruction of brain tissue
•Annotated database of 1,700 synapses from a saturated reconstruction of cortex
•Excitatory axon proximity to dendritic spines not sufficient to predict synapses
Automated technologies probing the structure of neural tissue at nanometer resolution generate a saturated reconstruction of a sub-volume of mouse neocortex, refuting the idea that physical proximity is sufficient to predict excitatory synaptic connectivity.
•We provide a pipeline for automatic reconstructions of neurons from EM images.
•The pipeline is scalable to large data sets.
•We show successful automatic long-range reconstructions over more than 30 micrometers.
Automated sample preparation and electron microscopy enable acquisition of very large image data sets. These technical advances are of special importance to the field of neuroanatomy, as 3D reconstructions of neuronal processes at the nm scale can provide new insight into the fine-grained structure of the brain. Segmentation of large-scale electron microscopy data is the main bottleneck in the analysis of these data sets. In this paper we present a pipeline that provides state-of-the-art reconstruction performance while scaling to data sets in the GB-TB range. First, we train a random forest classifier on interactive sparse user annotations. The classifier output is combined with an anisotropic smoothing prior in a Conditional Random Field framework to generate multiple segmentation hypotheses per image. These segmentations are then combined into geometrically consistent 3D objects by segmentation fusion. We provide qualitative and quantitative evaluation of the automatic segmentation and demonstrate large-scale 3D reconstructions of neuronal processes from a 27,000 μm³ volume of brain tissue, a cube measuring 30 μm in each dimension and spanning 1,000 consecutive image sections. We also introduce Mojo, a proofreading tool including semi-automated correction of merge errors based on sparse user scribbles.
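The hypothesis-generation stage can be caricatured in a few lines. The sketch below is an assumption-laden stand-in, replacing the random forest with a precomputed membrane-probability map and the CRF inference with crude neighbour averaging, then thresholding at several levels to obtain multiple segmentation hypotheses for a later fusion step:

```python
import numpy as np

def smooth(prob, iters=5):
    """Crude smoothing prior: repeated 4-neighbour averaging (toy CRF stand-in)."""
    p = prob.copy()
    for _ in range(iters):
        p = 0.2 * (p
                   + np.roll(p, 1, 0) + np.roll(p, -1, 0)
                   + np.roll(p, 1, 1) + np.roll(p, -1, 1))
    return p

def hypotheses(prob, thresholds=(0.3, 0.5, 0.7)):
    """Several binary segmentation hypotheses from one probability map."""
    p = smooth(prob)
    return [(p > t).astype(np.uint8) for t in thresholds]

prob = np.zeros((32, 32))
prob[:, 15:17] = 1.0                # a vertical "membrane"
segs = hypotheses(prob)
```

Lower thresholds yield more permissive (larger) foreground hypotheses; a fusion step would then select the combination that is geometrically consistent across adjacent sections.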
In electron microscopy, a large field of view is commonly captured by taking several images of a sample region and then stitching these images together. Non-linear lens distortions induced by the electromagnetic lenses of the microscope render a seamless stitching with linear transformations impossible. This problem is aggravated by the large CCD cameras commonly in use today. We propose a new calibration method based on ridge regression that compensates non-linear lens distortions while ensuring that the geometry of the image is preserved. Our method estimates the distortion correction from overlapping image areas using automatically extracted correspondence points. Consequently, estimating the correction transform requires no special calibration samples.
We evaluate our method on simulated ground truth data as well as on real electron microscopy data. Our experiments demonstrate that the lens calibration robustly corrects large distortions, reducing an average stitching error exceeding 10 pixels to sub-pixel accuracy within two iteration steps.
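The core estimation step, fitting a distortion model to extracted correspondence points with ridge regression, has a simple closed form. The radial cubic model and synthetic data below are illustrative assumptions, not the paper's actual distortion model:

```python
import numpy as np

def fit_ridge(A, b, lam=1e-6):
    """Closed-form ridge regression: argmin ||A w - b||^2 + lam ||w||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(1)
true_k = 2e-6                              # synthetic radial distortion coefficient
r = rng.uniform(10, 300, size=200)         # radii of extracted correspondence points
displacement = true_k * r**3               # distorted minus ideal radial position

A = np.stack([r, r**3], axis=1)            # linear + cubic radial basis
w = fit_ridge(A, displacement)             # recovers ~[0, true_k]
```

The regularization term `lam` is what lets the real method keep the correction well-conditioned and geometry-preserving even when correspondences only cover the image overlap regions.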
Automatic cell image segmentation methods in connectomics produce merge and split errors, which require correction through proofreading. Previous research has identified the visual search for these errors as the bottleneck in interactive proofreading. To aid error correction, we develop two classifiers that automatically recommend candidate merges and splits to the user. These classifiers use a convolutional neural network (CNN) that has been trained with errors in automatic segmentations against expert-labeled ground truth. Our classifiers detect potentially erroneous regions by considering a large context region around a segmentation boundary. Corrections can then be performed by a user with yes/no decisions, which reduces variation of information 7.5× faster than previous proofreading methods. We also present a fully-automatic mode that uses a probability threshold to make merge/split decisions. Extensive experiments using the automatic approach and comparing performance of novice and expert users demonstrate that our method performs favorably against state-of-the-art proofreading methods on different connectomics datasets.
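The decision flow, applying high-confidence corrections automatically and routing the rest to a human for yes/no review, can be sketched as follows; the candidate format and threshold value are assumptions for illustration, not the paper's code:

```python
def triage(candidates, threshold=0.95):
    """Split candidate corrections by classifier confidence.

    candidates: list of (region_id, error_probability) pairs.
    Returns (apply automatically, queue for human yes/no review).
    """
    automatic, needs_review = [], []
    for region_id, p in candidates:
        (automatic if p >= threshold else needs_review).append(region_id)
    return automatic, needs_review

auto, review = triage([("a", 0.99), ("b", 0.40), ("c", 0.97)])
# auto == ["a", "c"]; review == ["b"]
```

Raising the threshold trades automation for safety: fewer corrections are applied unattended, and more land in the reviewer's yes/no queue.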
Connectomics has recently begun to image brain tissue at nanometer resolution, which produces petabytes of data. This data must be aligned, labeled, proofread, and formed into graphs, and each step of this process requires visualization for human verification. As such, we present the BUTTERFLY middleware, a scalable platform that can handle massive data for interactive visualization in connectomics. Our platform outputs image and geometry data suitable for hardware-accelerated rendering, and abstracts low-level data wrangling to enable faster development of new visualizations. We demonstrate scalability and extendability with a series of open source Web-based applications for every step of the typical connectomics workflow: data management and storage, informative queries, 2D and 3D visualizations, interactive editing, and graph-based analysis. We report design choices for all developed applications and describe typical scenarios of isolated and combined use in everyday connectomics research. In addition, we measure and optimize rendering throughput, from storage to display, in quantitative experiments. Finally, we share insights, experiences, and recommendations for creating an open source data management and interactive visualization platform for connectomics.
Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.
In the field of neuroanatomy, automatic segmentation of electron microscopy images is becoming one of the main limiting factors in getting new insights into the functional structure of the brain. We propose a novel framework for the segmentation of thin elongated structures like membranes in a neuroanatomy setting. The probability output of a random forest classifier is used in a regular cost function, which enforces gap completion via perceptual grouping constraints. The global solution is efficiently found by graph cut optimization. We demonstrate substantial qualitative and quantitative improvement over state-of-the-art segmentations on two considerably different stacks of ssTEM images as well as in segmentations of streets in satellite imagery. We demonstrate that the superior performance of our method yields fully automatic 3D reconstructions of dendrites from ssTEM data.
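The graph-cut idea, unary costs from classifier probabilities plus a pairwise smoothness term, solved to global optimality by min cut, can be demonstrated on a toy 1-D signal. This is a stand-in under stated assumptions (Edmonds-Karp max-flow, constant pairwise weight), not the paper's cost function with its perceptual grouping constraints:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on an adjacency-matrix graph; returns the residual."""
    n = len(cap)
    res = [row[:] for row in cap]
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and res[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return res                      # no augmenting path left
        f, v = float("inf"), t              # bottleneck capacity along the path
        while v != s:
            f = min(f, res[parent[v]][v]); v = parent[v]
        v = t
        while v != s:                       # push flow, update residuals
            res[parent[v]][v] -= f
            res[v][parent[v]] += f
            v = parent[v]

def segment(probs, smooth=0.3):
    """Globally optimal binary labeling of a 1-D foreground-probability signal."""
    n = len(probs)
    s, t = n, n + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i, p in enumerate(probs):
        cap[s][i] = p                       # unary cost of labeling background
        cap[i][t] = 1.0 - p                 # unary cost of labeling foreground
        if i + 1 < n:                       # pairwise smoothness between neighbors
            cap[i][i + 1] = cap[i + 1][i] = smooth
    res = max_flow(cap, s, t)
    seen = [False] * (n + 2)                # source side of the min cut -> label 1
    seen[s] = True
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n + 2):
            if not seen[v] and res[u][v] > 1e-12:
                seen[v] = True
                q.append(v)
    return [1 if seen[i] else 0 for i in range(n)]

labels = segment([0.9, 0.8, 0.4, 0.85, 0.2, 0.1])
```

Note that the low-probability dip at index 2 is bridged by the smoothness term, the gap-completion behaviour the cost function is designed to encourage, while the genuinely weak tail stays background.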
Digital documents often contain images and scanned text. Parsing such visually-rich documents is a core task for workflow automation, but it remains challenging since most documents do not encode explicit layout information, e.g., how characters and words are grouped into boxes and ordered into larger semantic entities. Current state-of-the-art layout extraction methods are challenged by such documents as they rely on word sequences to have correct reading order and do not exploit their hierarchical structure. We propose LayerDoc, an approach that uses visual features, textual semantics, and spatial coordinates along with constraint inference to extract the hierarchical layout structure of documents in a bottom-up layer-wise fashion. LayerDoc recursively groups smaller regions into larger semantic elements in 2D to infer complex nested hierarchies. Experiments show that our approach outperforms competitive baselines by 10-15% on three diverse datasets of forms and mobile app screen layouts for the tasks of spatial region classification, higher-order group identification, layout hierarchy extraction, reading order detection, and word grouping.
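The first layer of such bottom-up grouping can be caricatured with a purely spatial sketch: word boxes are merged into lines by vertical proximity, then ordered for reading. The box format and gap heuristic are assumptions for illustration; LayerDoc itself additionally uses visual features, textual semantics, and constraint inference:

```python
def group_lines(words, y_tol=5):
    """Group word boxes (x, y, w, h, text) into reading-order lines.

    Boxes whose y-coordinates differ by at most y_tol join the same line;
    each line is then sorted left-to-right.
    """
    lines = []
    for box in sorted(words, key=lambda b: (b[1], b[0])):
        for line in lines:
            if abs(line[-1][1] - box[1]) <= y_tol:
                line.append(box)
                break
        else:
            lines.append([box])             # no nearby line: start a new one
    return [sorted(line, key=lambda b: b[0]) for line in lines]

words = [(60, 10, 30, 12, "world"), (10, 12, 40, 12, "Hello"),
         (10, 40, 40, 12, "Next")]
lines = group_lines(words)
```

A hierarchical extractor would repeat the same grouping idea layer by layer, lines into blocks, blocks into higher-order regions, to recover the nested structure.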