Reconstruction of neural circuits from volume electron microscopy data requires the tracing of cells in their entirety, including all their neurites. Automated approaches have been developed for tracing, but their error rates are too high to generate reliable circuit diagrams without extensive human proofreading. We present flood-filling networks, a method for automated segmentation that, similar to most previous efforts, uses convolutional neural networks, but contains in addition a recurrent pathway that allows the iterative optimization and extension of individual neuronal processes. We used flood-filling networks to trace neurons in a dataset obtained by serial block-face electron microscopy of a zebra finch brain. Using our method, we achieved a mean error-free neurite path length of 1.1 mm, and we observed only four mergers in a test set with a path length of 97 mm. The performance of flood-filling networks was an order of magnitude better than that of previous approaches applied to this dataset, although with substantially increased computational costs.
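As a rough illustration of the iterative inference loop such a method implies, here is a minimal Python sketch of a flood-filling segmentation pass. The network is stubbed out as a placeholder (`predict_object_map`), and the field-of-view size, thresholds, and movement policy are illustrative assumptions, not details from the paper:

```python
import numpy as np
from collections import deque

FOV = 33      # cubic field-of-view edge length (illustrative)
T_MOVE = 0.9  # confidence threshold for moving the field of view (assumed)

def predict_object_map(image_patch, mask_patch):
    """Placeholder for the convolutional network: given raw image data and the
    current object-probability map for one field of view, return an updated map.
    The recurrent pathway corresponds to feeding the evolving mask back in here."""
    raise NotImplementedError

def flood_fill(image, seed):
    """Iteratively grow the object-probability map (POM) for a single neurite,
    starting from a seed voxel and moving the field of view to newly confident
    positions until the queue is exhausted. Boundary checks omitted for brevity."""
    pom = np.full(image.shape, 0.05, dtype=np.float32)  # low prior everywhere
    pom[seed] = 0.95
    queue, visited = deque([seed]), {seed}
    r = FOV // 2
    while queue:
        z, y, x = queue.popleft()
        sl = np.s_[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
        pom[sl] = predict_object_map(image[sl], pom[sl])
        # Enqueue face-centered neighbors where the updated map is confident.
        for dz, dy, dx in [(r, 0, 0), (-r, 0, 0), (0, r, 0),
                           (0, -r, 0), (0, 0, r), (0, 0, -r)]:
            nxt = (z + dz, y + dy, x + dx)
            if nxt not in visited and pom[nxt] > T_MOVE:
                visited.add(nxt)
                queue.append(nxt)
    return pom > 0.5  # final hard mask for this neuronal process
```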
Field inversion in F_{2^m} dominates the cost of modern software implementations of certain elliptic curve cryptographic operations, such as point encoding/hashing into elliptic curves (Brown et al. in: Submission to NIST, 2008; Brown in: IACR Cryptology ePrint Archive 2008:12, 2008; Aranha et al. in: Cryptology ePrint Archive, Report 2014/486, 2014). Itoh–Tsujii inversion using a polynomial basis and precomputed table-based multi-squaring has been demonstrated to be highly effective for software implementations (Taverne et al. in: CHES 2011, 2011; Oliveira et al. in: J Cryptogr Eng 4(1):3–17, 2014; Aranha et al. in: Cryptology ePrint Archive, Report 2014/486, 2014), but the performance and memory use depend critically on the choice of addition chain and multi-squaring tables, which in prior work have been determined only by suboptimal ad hoc methods and manual selection. We thoroughly investigated the performance/memory tradeoff of the table-based linear transforms used for efficient multi-squaring. Based on the results of that investigation, we devised a comprehensive cost model for Itoh–Tsujii inversion and a corresponding optimization procedure that is empirically fast and provably finds globally optimal solutions. We tested this method on eight binary fields commonly used for elliptic curve cryptography; our method found lower-cost solutions than the ad hoc methods used previously, and for the first time enables a principled exploration of the time/memory tradeoff of inversion implementations.
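For concreteness, here is a minimal pure-Python sketch of the Itoh–Tsujii recursion for the NIST field F_{2^233} with reduction polynomial x^233 + x^74 + 1. It relies on the identity beta_{i+j} = (beta_i)^(2^j) * beta_j, where beta_k = a^(2^k - 1) and a^(-1) = (beta_{m-1})^2. The table-based multi-squaring and cost-model-driven chain selection discussed above are replaced by naive repeated squaring and one fixed, hand-picked addition chain for m - 1 = 232:

```python
M = 233
POLY = (1 << 233) | (1 << 74) | 1   # x^233 + x^74 + 1 (NIST B-233)

def gf_mul(a, b):
    """Polynomial-basis multiplication in F_{2^M}: carry-less multiply,
    then reduce modulo the irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    while r.bit_length() > M:
        r ^= POLY << (r.bit_length() - 1 - M)
    return r

def multi_sqr(a, k):
    """k successive squarings, i.e. a^(2^k). Optimized implementations
    replace this loop with a precomputed table-based linear transform."""
    for _ in range(k):
        a = gf_mul(a, a)
    return a

def itoh_tsujii_inverse(a):
    """a^(-1) = a^(2^M - 2), computed via beta_k = a^(2^k - 1) along an
    addition chain for M - 1 = 232 (chain chosen by hand, not optimized)."""
    beta = {1: a}
    for i, j in [(1, 1), (2, 2), (4, 4), (8, 8), (16, 16), (32, 32),
                 (64, 64), (128, 64), (192, 32), (224, 8)]:
        beta[i + j] = gf_mul(multi_sqr(beta[i], j), beta[j])
    return multi_sqr(beta[M - 1], 1)   # final squaring: (beta_{m-1})^2

# Sanity check: x * x^(-1) == 1 in the field.
assert gf_mul(2, itoh_tsujii_inverse(2)) == 1
```

The chain above costs 10 field multiplications plus 232 squarings; the optimization described in the abstract searches over such chains jointly with the choice of which multi-squaring exponents to back with tables.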
Maps of the nervous system that identify individual cells along with their type, subcellular components and connectivity have the potential to elucidate fundamental organizational principles of neural circuits. Nanometer-resolution imaging of brain tissue provides the necessary raw data, but inferring cellular and subcellular annotation layers is challenging. We present segmentation-guided contrastive learning of representations (SegCLR), a self-supervised machine learning technique that produces representations of cells directly from 3D imagery and segmentations. When applied to volumes of human and mouse cortex, SegCLR enables accurate classification of cellular subcompartments and achieves performance equivalent to that of a supervised approach while requiring 400-fold fewer labeled examples. SegCLR also enables inference of cell types from fragments as small as 10 μm, which enhances the utility of volumes in which many neurites are truncated at boundaries. Finally, SegCLR enables exploration of layer 5 pyramidal cell subtypes and automated large-scale analysis of synaptic partners in mouse visual cortex.
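As a rough illustration of the contrastive objective underlying this family of methods, here is a NumPy sketch of the standard normalized-temperature cross-entropy (NT-Xent) loss. In SegCLR the positive pairs would come from different views of the same segmented cell; the pairing logic, batch size, and temperature below are generic assumptions, not details of the paper:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent contrastive loss. z1[i] and z2[i] are embeddings of two views
    of the same object (positives); all other rows in the batch act as
    negatives."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)       # unit-normalize
    sim = (z @ z.T) / temperature                          # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                         # exclude self-pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index
    # Row-wise log-softmax, evaluated at each row's positive partner.
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    return -log_prob[np.arange(2 * n), pos].mean()

# Toy usage: embeddings of 8 objects under two slightly different views.
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 64))
loss = nt_xent_loss(a + 0.01 * rng.normal(size=a.shape), a)
```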
To fully understand how the human brain works, knowledge of its structure at high resolution is needed. Presented here is a computationally intensive reconstruction of the ultrastructure of a cubic millimeter of human temporal cortex that was surgically removed to gain access to an underlying epileptic focus. It contains about 57,000 cells, about 230 millimeters of blood vessels, and about 150 million synapses, and comprises 1.4 petabytes of image data. Our analysis showed that glia outnumber neurons 2:1, that oligodendrocytes were the most common cell type, that deep-layer excitatory neurons could be classified on the basis of dendritic orientation, and that among the thousands of weak connections to each neuron there exist rare, powerful axonal inputs of up to 50 synapses. Further studies using this resource may bring valuable insights into the mysteries of the human brain.
A growing community is constructing a next-generation file format (NGFF) for bioimaging to overcome problems of scalability and heterogeneity. Organized by the Open Microscopy Environment (OME), individuals and institutes across diverse modalities facing these problems have designed a format specification process (OME-NGFF) to address these needs. This paper brings together a wide range of those community members to describe the cloud-optimized format itself, OME-Zarr, along with the tools and data resources available today to increase FAIR access and remove barriers in the scientific process. The current momentum offers an opportunity to unify a key component of the bioimaging domain: the file format that underlies so many personal, institutional, and global data management and analysis tasks.
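For readers unfamiliar with the format, a minimal Python sketch of writing a volume as OME-Zarr follows, using the zarr-python v2 API and metadata field names from the published OME-NGFF 0.4 specification; the array shape, chunking, and scale values are arbitrary examples:

```python
import numpy as np
import zarr

# Example volume: a small 3D stack (z, y, x) of 8-bit intensities.
vol = np.random.randint(0, 255, size=(64, 256, 256), dtype=np.uint8)

# Write a single resolution level ("0") into a Zarr group, then attach
# OME-NGFF "multiscales" metadata so NGFF-aware viewers can interpret it.
root = zarr.open_group("example.ome.zarr", mode="w")
root.create_dataset("0", data=vol, chunks=(32, 128, 128))
root.attrs["multiscales"] = [{
    "version": "0.4",
    "axes": [{"name": a, "type": "space", "unit": "micrometer"}
             for a in "zyx"],
    "datasets": [{
        "path": "0",
        # Voxel size in micrometers per axis (illustrative values).
        "coordinateTransformations": [
            {"type": "scale", "scale": [0.04, 0.008, 0.008]},
        ],
    }],
}]
```

A production pipeline would also write a downsampled pyramid (additional entries in "datasets"), which is what makes the format cloud-friendly for interactive viewing.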
We present a novel vision-based grasp point detection algorithm that can reliably detect the corners of a piece of cloth, using only geometric cues that are robust to variation in texture. Furthermore, we demonstrate the effectiveness of our algorithm in the context of folding a towel using a general-purpose two-armed mobile robotic platform without the use of specialized end-effectors or tools. The robot begins by picking up a randomly dropped towel from a table, goes through a sequence of vision-based re-grasps and manipulations, partially in the air and partially on the table, and finally stacks the folded towel in a target location. The reliability and robustness of our algorithm enable, for the first time, a robot with general-purpose manipulators to reliably and fully autonomously fold previously unseen towels, demonstrating success on all 50 out of 50 single-towel trials as well as on a pile of 5 towels.
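As a loose illustration of corner detection from geometric cues (not the authors' actual algorithm), here is a NumPy sketch that scores ordered boundary points of a binary cloth mask by the angle between incoming and outgoing contour directions, flagging sharp turns as corner candidates; `step` and `max_cos` are hypothetical tuning parameters:

```python
import numpy as np

def corner_candidates(contour, step=10, max_cos=0.3):
    """contour: (N, 2) array of ordered boundary points of the cloth mask.
    Returns indices where the boundary turns sharply, i.e. where the cosine
    of the angle between the incoming and outgoing direction vectors is
    small (cos ~ 1 on straight runs, cos ~ 0 at a right-angle corner)."""
    n = len(contour)
    idx = np.arange(n)
    prev = contour[(idx - step) % n]   # point `step` samples behind
    nxt = contour[(idx + step) % n]    # point `step` samples ahead
    v1 = contour - prev                # incoming direction
    v2 = nxt - contour                 # outgoing direction
    cos = (v1 * v2).sum(axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-9)
    return idx[cos < max_cos]          # sharp direction change => candidate
```

Because the score depends only on boundary geometry, it is insensitive to the towel's texture, which is the property the abstract emphasizes.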