We describe recent progress toward defining neuronal cell types in the mouse retina and attempt to extract lessons that may be generally useful in the mammalian brain. Achieving a comprehensive catalog of retinal cell types now appears within reach, because researchers have achieved consensus concerning two fundamental challenges. The first is accuracy—defining pure cell types rather than settling for neuronal classes that are mixtures of types. The second is completeness—developing methods guaranteed to eventually identify all cell types, as well as criteria for determining when all types have been found. Case studies illustrate how these two challenges are handled by combining state-of-the-art molecular, anatomical, and physiological techniques. Progress is also being made in observing and modeling connectivity between cell types. Scaling up to larger brain regions, such as the cortex, will require not only technical advances but also careful consideration of the challenges of accuracy and completeness.
In this Perspective, Seung and Sümbül describe recent progress towards defining neuronal cell types in the mouse retina and attempt to extract lessons that may be generally useful in the mammalian brain.
Comprehensive high-resolution structural maps are central to functional exploration and understanding in biology. For the nervous system, in which high resolution and large spatial extent are both needed, such maps are scarce as they challenge data acquisition and analysis capabilities. Here we present for the mouse inner plexiform layer--the main computational neuropil region in the mammalian retina--the dense reconstruction of 950 neurons and their mutual contacts. This was achieved by applying a combination of crowd-sourced manual annotation and machine-learning-based volume segmentation to serial block-face electron microscopy data. We characterize a new type of retinal bipolar interneuron and show that we can subdivide a known type based on connectivity. Circuit motifs that emerge from our data indicate a functional mechanism for a known cellular response in a ganglion cell that detects localized motion, and predict that another ganglion cell is motion sensitive.
State-of-the-art light and electron microscopes are capable of acquiring large image datasets, but quantitatively evaluating the data often involves manually annotating structures of interest. This process is time-consuming and often a major bottleneck in the evaluation pipeline. To overcome this problem, we have introduced the Trainable Weka Segmentation (TWS), a machine learning tool that leverages a limited number of manual annotations in order to train a classifier and segment the remaining data automatically. In addition, TWS can provide unsupervised segmentation learning schemes (clustering) and can be customized to employ user-designed image features or classifiers.
TWS is distributed as open-source software as part of the Fiji image processing distribution of ImageJ at http://imagej.net/Trainable_Weka_Segmentation.
Supplementary data are available at Bioinformatics online.
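The TWS workflow, training a classifier from a small set of manual annotations and then segmenting the remaining data automatically, can be mimicked outside Fiji. The sketch below uses a pure-NumPy nearest-centroid classifier as a stand-in for the Weka classifiers TWS actually wraps; the two-feature bank and the function name are illustrative assumptions, not part of TWS:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def train_and_segment(image, labeled_mask):
    """Train a tiny nearest-centroid pixel classifier from sparse
    manual labels, then classify every pixel (a stand-in for the
    Weka classifiers TWS uses).

    image:        2D float array
    labeled_mask: 2D int array; 0 = unlabeled, 1..k = manual labels
    """
    # Per-pixel features: raw intensity plus a 3x3 local mean
    # (TWS offers a much richer, configurable feature bank).
    features = np.stack([image, uniform_filter(image, size=3)], axis=-1)
    flat = features.reshape(-1, 2)
    labels = labeled_mask.ravel()
    classes = np.unique(labels[labels > 0])
    # "Training": one feature centroid per class from the labeled pixels.
    centroids = np.array([flat[labels == c].mean(axis=0) for c in classes])
    # "Segmentation": assign every pixel to its nearest centroid.
    dists = np.linalg.norm(flat[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)].reshape(image.shape)
```

A couple of labeled pixels per region suffice for this toy setup; real TWS training sets are similarly sparse relative to the full volume.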
Once considered provocative, the notion that the wisdom of the crowd is superior to any individual has become itself a piece of crowd wisdom, leading to speculation that online voting may soon put credentialed experts out of business. Recent applications include political and economic forecasting, evaluating nuclear safety, public policy, the quality of chemical probes, and possible responses to a restless volcano. Algorithms for extracting wisdom from the crowd are typically based on a democratic voting procedure. They are simple to apply and preserve the independence of personal judgment. However, democratic methods have serious limitations. They are biased for shallow, lowest common denominator information, at the expense of novel or specialized knowledge that is not widely shared. Adjustments based on measuring confidence do not solve this problem reliably. Here we propose the following alternative to a democratic vote: select the answer that is more popular than people predict. We show that this principle yields the best answer under reasonable assumptions about voter behaviour, while the standard 'most popular' or 'most confident' principles fail under exactly those same assumptions. Like traditional voting, the principle accepts unique problems, such as panel decisions about scientific or artistic merit, and legal or historical disputes. The potential application domain is thus broader than that covered by machine learning and psychometric methods, which require data across multiple questions.
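The "surprisingly popular" selection rule described above can be sketched in a few lines: for each answer option, compare the fraction of respondents who actually chose it with the mean fraction respondents predicted would choose it, and select the option whose actual popularity most exceeds its predicted popularity. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def surprisingly_popular(votes, predictions):
    """Select the answer whose actual vote share most exceeds
    its predicted vote share.

    votes:       list of chosen options, one per respondent
    predictions: list of dicts mapping option -> predicted vote share,
                 one per respondent
    """
    options = sorted(set(votes))
    n = len(votes)
    # Actual popularity of each option.
    actual = {opt: votes.count(opt) / n for opt in options}
    # Mean predicted popularity of each option.
    predicted = {
        opt: sum(p.get(opt, 0.0) for p in predictions) / len(predictions)
        for opt in options
    }
    # Pick the option with the largest positive "surprise".
    return max(options, key=lambda opt: actual[opt] - predicted[opt])
```

In the paper's style of example, a majority may wrongly answer "yes" to a question while also predicting that most others will say "yes"; the minority "no" then exceeds its predicted share and is selected.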
We describe automated technologies to probe the structure of neural tissue at nanometer resolution and use them to generate a saturated reconstruction of a sub-volume of mouse neocortex in which all cellular objects (axons, dendrites, and glia) and many sub-cellular components (synapses, synaptic vesicles, spines, spine apparati, postsynaptic densities, and mitochondria) are rendered and itemized in a database. We explore these data to study physical properties of brain tissue. For example, by tracing the trajectories of all excitatory axons and noting their juxtapositions, both synaptic and non-synaptic, with every dendritic spine we refute the idea that physical proximity is sufficient to predict synaptic connectivity (the so-called Peters’ rule). This online minable database provides general access to the intrinsic complexity of the neocortex and enables further data-driven inquiries.
• Tape-based pipeline for electron microscopic reconstruction of brain tissue
• Annotated database of 1,700 synapses from a saturated reconstruction of cortex
• Excitatory axon proximity to dendritic spines not sufficient to predict synapses
Automated technologies probing the structure of neural tissue at nanometer resolution generate a saturated reconstruction of a sub-volume of mouse neocortex, refuting the idea that physical proximity is sufficient to predict excitatory synaptic connectivity.
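The refutation of Peters' rule reported above boils down to a counting question: of all axon-spine juxtapositions, what fraction actually carry a synapse? A toy sketch of that measurement (the data structures are illustrative, not the paper's database schema):

```python
def synapse_rate_given_proximity(touches, synapses):
    """Fraction of axon-spine juxtapositions that carry a synapse.

    touches:  list of (axon_id, spine_id) pairs that are physically close
    synapses: set of (axon_id, spine_id) pairs with an actual synapse

    Under a strict Peters' rule this fraction would be near 1; the
    reconstruction described above finds it is far below that, i.e.
    proximity alone does not predict connectivity.
    """
    if not touches:
        return 0.0
    hits = sum(1 for pair in touches if pair in synapses)
    return hits / len(touches)
```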
Many image segmentation algorithms first generate an affinity graph and then partition it. We present a machine learning approach to computing an affinity graph using a convolutional network (CN) trained using ground truth provided by human experts. The CN affinity graph can be paired with any standard partitioning algorithm and improves segmentation accuracy significantly compared to standard hand-designed affinity functions.
We apply our algorithm to the challenging 3D segmentation problem of reconstructing neuronal processes from volumetric electron microscopy (EM) and show that we are able to learn a good affinity graph directly from the raw EM images. Further, we show that our affinity graph improves the segmentation accuracy of both simple and sophisticated graph partitioning algorithms.
In contrast to previous work, we do not rely on prior knowledge in the form of hand-designed image features or image preprocessing. Thus, we expect our algorithm to generalize effectively to arbitrary image types.
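The two-stage pipeline described above, first compute an affinity graph, then hand it to a partitioner, can be illustrated with a hand-designed affinity in place of the learned convolutional-network one (the paper's point is precisely that the learned version outperforms such hand-designed functions). The function name, the toy affinity exp(-|I_u - I_v|), and the threshold are illustrative assumptions:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def segment_by_affinity(image, threshold=0.5):
    """Partition a 2D image by thresholding a nearest-neighbour
    affinity graph and taking connected components.

    The affinity exp(-|I_u - I_v|) is a hand-designed stand-in for
    the learned affinities in the paper; any partitioner could
    replace the connected-components step.
    """
    h, w = image.shape
    flat = image.ravel()
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    # Horizontal and vertical nearest-neighbour edges.
    for u, v in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        u, v = u.ravel(), v.ravel()
        a = np.exp(-np.abs(flat[u] - flat[v]))
        keep = a > threshold  # discard weak-affinity edges
        rows.append(u[keep])
        cols.append(v[keep])
        vals.append(a[keep])
    graph = coo_matrix(
        (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
        shape=(h * w, h * w),
    )
    _, labels = connected_components(graph, directed=False)
    return labels.reshape(h, w)
```

Replacing the `np.exp(-np.abs(...))` line with a network's learned edge weights is the swap the paper makes, while the partitioning stage stays unchanged.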
Many theories of neural networks assume rules of connection between pairs of neurons that are based on their cell types or functional properties. It is finally becoming feasible to test such pairwise models of connectivity, due to emerging advances in neuroanatomical techniques. One method will be to measure the functional properties of connected pairs of neurons, sparsely sampling pairs from many specimens. Another method will be to find a “connectome,” a dense map of all connections in a single specimen, and infer functional properties of neurons through computational analysis. For the latter method, the most exciting prospect would be to decode the memories that are hypothesized to be stored in connectomes.
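A pairwise connectivity model of the kind discussed here assigns each ordered pair of neurons a connection probability that depends only on their cell types; a measured connectome can then be compared against samples from the model. A minimal generative sketch (type labels and probabilities are invented for illustration):

```python
import random

def sample_connectome(neuron_types, p_connect, seed=0):
    """Sample a directed connectivity matrix from a pairwise,
    type-based connection model.

    neuron_types: list of cell-type labels, one per neuron
    p_connect:    dict mapping (pre_type, post_type) -> probability
    """
    rng = random.Random(seed)
    n = len(neuron_types)
    return [
        [
            # No self-connections; otherwise connect with the
            # probability assigned to this ordered type pair.
            1 if i != j and rng.random() < p_connect.get(
                (neuron_types[i], neuron_types[j]), 0.0) else 0
            for j in range(n)
        ]
        for i in range(n)
    ]
```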
Electron microscopy of biological tissue has recently seen an unprecedented increase in imaging throughput, moving the ultrastructural analysis of large tissue blocks such as whole brains into the realm of the feasible. However, homogeneous, high-quality electron microscopy staining of large biological samples is still a major challenge. To date, assessing the staining quality in electron microscopy requires running a sample through the entire staining protocol end-to-end, which can take weeks or even months for large samples, rendering protocol optimization for such samples inefficient. Here, we present an in situ time-lapsed X-ray-assisted staining procedure that opens the 'black box' of electron microscopy staining and allows observation of individual staining steps in real time. Using this novel method, we measured the accumulation of heavy metals in large tissue samples immersed in different staining solutions. We show that the measured accumulation of osmium in fixed tissue empirically follows a quadratic dependence between incubation time and sample size. We found that potassium ferrocyanide, a classic reducing agent for osmium tetroxide, clears the tissue after osmium staining, and that the tissue expands in osmium tetroxide solution but shrinks in potassium ferrocyanide-reduced osmium solution. X-ray-assisted staining gave access to the in situ staining kinetics and allowed us to develop a diffusion-reaction-advection model that accurately simulates the measured accumulation of osmium in tissue. These are first steps towards staining experiments and simulation-guided optimization of staining protocols for large samples. Hence, X-ray-assisted staining will be a useful tool for the development of reliable staining procedures for large samples such as entire brains of mice, monkeys, or humans.
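The reported quadratic dependence between incubation time and sample size is what diffusion-limited penetration would predict: the time for a stain to reach depth L scales as L squared. A minimal helper for scaling a known incubation time to a new sample size under that empirical relationship (all numeric values a caller passes in are their own; none come from the paper):

```python
def incubation_time(sample_size_mm, reference_size_mm, reference_time_h):
    """Scale a known incubation time to a new sample size under the
    quadratic (diffusion-limited) relationship t proportional to L**2.
    """
    return reference_time_h * (sample_size_mm / reference_size_mm) ** 2
```

For example, doubling the sample dimension quadruples the predicted incubation time under this scaling.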
Here we describe an automated method, named serial two-photon (STP) tomography, that achieves high-throughput fluorescence imaging of mouse brains by integrating two-photon microscopy and tissue sectioning. STP tomography generates high-resolution datasets that are free of distortions and can be readily warped in three dimensions, for example, for comparing multiple anatomical tracings. This method opens the door to routine systematic studies of neuroanatomy in mouse models of human brain disorders.
Recent developments in serial-section electron microscopy allow the efficient generation of very large image data sets, but analyzing such data poses challenges for software tools. Here we introduce the Volume Annotation and Segmentation Tool (VAST), a freely available utility program for generating and editing annotations and segmentations of large volumetric image (voxel) data sets. It provides a simple yet powerful user interface for real-time exploration and analysis of large data sets even in the petabyte range.