The promise of single-objective light-sheet microscopy is to combine the convenience of standard single-objective microscopes with the speed, coverage, resolution and gentleness of light-sheet microscopes. We present DaXi, a single-objective light-sheet microscope design based on oblique plane illumination that achieves: (1) a wider field of view and high-resolution imaging via a custom remote focusing objective; (2) fast volumetric imaging over larger volumes without compromising image quality or necessitating tiled acquisition; (3) fuller image coverage for large samples via multi-view imaging and (4) higher throughput multi-well imaging via remote coverslip placement. Our instrument achieves a resolution of 450 nm laterally and 2 μm axially over an imaging volume of 3,000 × 800 × 300 μm. We demonstrate the speed, field of view, resolution and versatility of our instrument by imaging various systems, including Drosophila egg chamber development, zebrafish whole-brain activity and zebrafish embryonic development - up to nine embryos at a time.
•A method to quickly annotate various objects in multiple images at the same time.
•A new user interface paradigm for image annotation.
•A strategy to learn and improve the feature space projection during data annotation.
•A method that avoids annotation of redundant image components.
•A fundamental step towards more efficient and effective active learning.
Despite the progress of interactive image segmentation methods, high-quality pixel-level annotation is still time-consuming and laborious, a bottleneck for several deep learning applications. We take a step back to propose interactive and simultaneous segment annotation from multiple images guided by feature space projection. This strategy is in stark contrast to existing interactive segmentation methodologies, which perform annotation in the image domain. We show that feature space annotation achieves competitive results with state-of-the-art methods on foreground segmentation datasets: iCoSeg, DAVIS, and Rooftop. Moreover, in the semantic segmentation context, it achieves 91.5% accuracy on the Cityscapes dataset while being 74.75 times faster than the original annotation procedure. Further, our contribution sheds light on a novel direction for interactive image annotation that can be integrated with existing methodologies. The supplementary material presents video demonstrations. Code available at https://github.com/LIDS-UNICAMP/rethinking-interactive-image-segmentation.
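The core idea above, labeling whole groups of similar segments at once in a projected feature space instead of one image at a time, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the feature vectors, cluster centers, and names (`nearest`, `centroids`) are all hypothetical stand-ins for the learned embedding and the user's interactive selection.

```python
# Hypothetical sketch: patches from several images are embedded as 2D feature
# vectors (e.g., via a projection); labeling one cluster annotates all of its
# member patches at once, across images.
from collections import defaultdict
import math

# (image_id, patch_id) -> 2D feature vector (toy values)
features = {
    ("img0", 0): (0.1, 0.2), ("img0", 1): (0.2, 0.1),   # foreground-like
    ("img1", 0): (0.15, 0.25),                          # foreground-like
    ("img0", 2): (5.0, 5.1), ("img1", 1): (5.2, 4.9),   # background-like
}

def nearest(point, centroids):
    """Index of the centroid closest to `point` (Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(point, centroids[i]))

# Two hand-picked cluster centers standing in for an interactive selection.
centroids = [(0.15, 0.18), (5.1, 5.0)]
clusters = defaultdict(list)
for key, feat in features.items():
    clusters[nearest(feat, centroids)].append(key)

# One action labels a whole cluster: every patch in it, from every image.
labels = {}
for key in clusters[0]:
    labels[key] = "foreground"
for key in clusters[1]:
    labels[key] = "background"

print(labels)  # patches from different images receive labels simultaneously
```

Two labeling actions annotate five patches spread over two images, which is the source of the speed-up the abstract reports.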
Purpose
Automated segmentation of brain structures (objects) in three-dimensional (3D) MR images for quantitative analysis has been a challenge, and probabilistic atlases (PAs) are among the most successful approaches. However, the existing models do not adapt to object anomalies caused by disease or surgical procedures. Post-processing operations, such as tissue classification to detect and remove such anomalies inside the resulting segmentation mask, do not solve the problem because segmentation errors on healthy tissues cannot be fixed. Such anomalies very often alter the shape and texture of the brain structures, making them differ from the appearance of the model. In this paper, we present an effective and efficient adaptive probabilistic atlas, named AdaPro, to circumvent the problem and evaluate it on a challenging task: the segmentation of the left hemisphere, right hemisphere, and cerebellum, without pons and medulla, in 3D MR-T1 brain images of epilepsy patients. This task is challenging due to temporal lobe resections, artifacts, and the absence of contrast in some parts between the structures of interest.
Methods
In AdaPro, we first build one probabilistic atlas per object of interest from a training set with normal 3D images and the corresponding 3D object masks. Second, we incorporate a texture classifier based on convex optimization that dynamically indicates the regions of the target 3D image where the PAs (shape constraints) should be further adapted. This strategy is mathematically more elegant and avoids problems with post-processing. Third, we add a new object-based delineation algorithm based on combinatorial optimization and diffusion filtering. AdaPro can then be used to locate and delineate the objects in the coordinate space of the atlas or of the test image. We also compare AdaPro with three other state-of-the-art methods: a statistical shape model based on synergistic object search and delineation, and two methods based on multi-atlas label fusion (MALF).
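The first step, building one probabilistic atlas per object, can be illustrated with a minimal sketch. This is not AdaPro's pipeline (which also involves registration, the texture classifier, and delineation); it only shows the basic construction: registered binary masks averaged voxel-wise, so each voxel holds the probability of belonging to the object.

```python
# Illustrative only: three toy 1D "masks" already aligned to a common
# (atlas) coordinate space; real atlases are 3D and built after registration.
masks = [
    [0, 1, 1, 1, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
]

n = len(masks)
atlas = [sum(col) / n for col in zip(*masks)]  # voxel-wise mean

# Probability 1.0 marks "certain" object voxels, 0.0 certain background;
# intermediate values mark the uncertain border region where a shape
# constraint may need to be adapted.
print(atlas)
```

In this toy, the fourth voxel gets probability 2/3: it is object in two of the three training masks, so a shape prior there is uncertain and benefits from the adaptive (texture-driven) component described above.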
Results
We evaluate the methods quantitatively on 3D MR-T1 brain images acquired at 2T and 3T from epilepsy patients, before and after temporal lobe resections, and in both the template and native coordinate spaces. The results show that AdaPro is considerably faster and consistently more accurate than the baselines, with statistical significance, in both coordinate spaces.
Conclusion
AdaPro can be used as a fast and effective step for brain tissue segmentation and it can also be easily extended to segment subcortical brain structures. By choice of its components, probabilistic atlas, texture classifier, and delineation algorithm, it can also be extended to other organs and imaging modalities.
•A tool, named Grabber, to increase user control in interactive segmentation.
•Grabber can be integrated with any other method.
•Grabber can improve convergence, with faster delineation, higher effectiveness, and less user effort.
Interactive image segmentation has considerably evolved from techniques that do not learn the parameters of the model to methods that pre-train a model and adapt it from user inputs during the process. However, user control over segmentation still requires significant improvements to prevent corrections in one part of the object from causing errors in other parts. We address this problem by presenting Grabber, a tool to improve convergence (user control) in interactive image segmentation. Grabber is designed to complete the segmentation produced by some other initial method. From a given segmentation mask, Grabber quickly estimates anchor points in one orientation along the boundary of the mask and delineates an optimum contour constrained to pass through those points. The user can control the process by adding, removing, and moving anchor points. Grabber can also exploit object properties from the initial coarse segmentation to improve boundary delineation. We integrate Grabber with two recent methods: a region-based approach and a pixel-classification method based on deep neural networks. Extensive experiments with robot users on two datasets show, in both cases, that Grabber can significantly improve convergence, with faster delineation, higher effectiveness, and less user effort. The code of Grabber is available at https://github.com/LIDS-UNICAMP/grabber.
A growing community is constructing a next-generation file format (NGFF) for bioimaging to overcome problems of scalability and heterogeneity. Organized by the Open Microscopy Environment (OME), individuals and institutes across diverse modalities facing these problems have designed a format specification process (OME-NGFF) to address these needs. This paper brings together a wide range of those community members to describe the cloud-optimized format itself—OME-Zarr—along with tools and data resources available today to increase FAIR access and remove barriers in the scientific process. The current momentum offers an opportunity to unify a key component of the bioimaging domain—the file format that underlies so many personal, institutional, and global data management and analysis tasks.
In this work, we describe a method for large-scale 3D cell tracking through a segmentation-selection approach. The proposed method is effective at tracking cells across large microscopy datasets on two fronts: (i) it can solve problems containing millions of segmentation instances in terabyte-scale 3D+t datasets; (ii) it achieves competitive results with or without deep learning, which requires annotated 3D data that are scarce in the fluorescence microscopy field. The proposed method computes cell tracks and segments using a hierarchy of segmentation hypotheses and selects disjoint segments by maximizing the overlap between adjacent frames. We show that this method achieves state-of-the-art results on 3D images from the Cell Tracking Challenge and has a faster integer linear programming formulation. Moreover, our framework is flexible: it supports segmentations from off-the-shelf cell segmentation models and can combine them into an ensemble that improves tracking. The code is available at https://github.com/royerlab/ultrack.
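The selection step described above, choosing disjoint segments from overlapping hypotheses so that overlap between adjacent frames is maximized, can be sketched with a greedy stand-in for the paper's integer linear program. All data and names here are toy assumptions, not Ultrack's API; the real method optimizes globally rather than greedily.

```python
# Hedged sketch: each frame has overlapping segmentation hypotheses (sets of
# pixel ids, e.g., a node and its parent in a segmentation hierarchy); we pick
# disjoint segments per frame by preferring cross-frame pairs with high IoU.

def iou(a, b):
    """Intersection-over-union of two pixel-id sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Frame 0 hypotheses overlap, so at most one of them may be selected.
frame0 = {"a": {1, 2, 3}, "a_parent": {1, 2, 3, 4, 5}}
frame1 = {"b": {2, 3, 4}}

# Score every cross-frame pair, then greedily select disjoint segments.
pairs = sorted(
    ((iou(s0, s1), k0, k1) for k0, s0 in frame0.items()
     for k1, s1 in frame1.items()),
    reverse=True,
)
used0, used1, links = set(), set(), []
for score, k0, k1 in pairs:
    if score == 0.0:
        continue
    # Disjointness: a selected segment must not share pixels with
    # previously selected segments of the same frame.
    if any(frame0[k0] & frame0[u] for u in used0):
        continue
    if any(frame1[k1] & frame1[u] for u in used1):
        continue
    used0.add(k0)
    used1.add(k1)
    links.append((k0, k1, round(score, 2)))

print(links)
```

Here the larger hypothesis `a_parent` wins the link to `b` (IoU 0.6 vs. 0.5), and the disjointness constraint then excludes its overlapping child `a`, mirroring how selecting one node of a segmentation hierarchy rules out its ancestors and descendants.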
The required number of user actions and the response time can critically affect user experience during interactive image segmentation. In this work, we revisit a recent graph-based algorithm, namely Dynamic Trees (DT), which has been shown to be more effective than several well-established methods from the graph-based image segmentation literature. DT solves segmentation by growing optimum-path trees rooted at seed pixels, such that the arc weights are estimated on the fly from image properties of the growing trees, defining the objects as optimum-path forests rooted at their internal seeds. Depending on the application (e.g., 3D medical images), the response time to correct segmentation by adding and removing seeds can seriously compromise the method's efficiency. We present a differential dynamic trees (DDT) algorithm that adds and removes trees, updating optimum paths only in the required regions of the image. We demonstrate that DDT preserves the high effectiveness of DT while being one order of magnitude faster. The experiments also show the advantages of DDT over those well-established counterparts.