Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.
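As an illustration of this kind of inverse-problem formulation (not the paper's actual GPU implementation), a nonnegative volume can be recovered from its light field measurement with Richardson-Lucy-style multiplicative updates; the dense random matrix below is a hypothetical stand-in for the true wave-optics measurement operator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small system: A maps a flattened 3-D volume (n voxels)
# to light field sensor measurements (m pixels).
m, n = 60, 40
A = rng.random((m, n))            # nonnegative stand-in measurement matrix
v_true = rng.random(n)            # ground-truth volume
b = A @ v_true                    # noiseless light field measurement

# Richardson-Lucy multiplicative updates: a classic iterative scheme for
# nonnegative inverse problems of this form (v stays >= 0 throughout).
v = np.ones(n)
norm = A.T @ np.ones(m)
for _ in range(1000):
    v *= (A.T @ (b / (A @ v))) / norm

rel_residual = np.linalg.norm(A @ v - b) / np.linalg.norm(b)
print(rel_residual)               # small: the iterates fit the measurement
```

In a real light field microscope the measurement matrix is far too large to store densely; it would instead be applied implicitly via convolution with depth-dependent point spread functions.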
Prolonged behavioral challenges can cause animals to switch from active to passive coping strategies to manage effort expenditure during stress; such normally adaptive behavioral state transitions can become maladaptive in psychiatric disorders such as depression. The underlying neuronal dynamics and brainwide interactions important for passive coping have remained unclear. Here, we develop a paradigm to study these behavioral state transitions at cellular resolution across the entire vertebrate brain. Using brainwide imaging in zebrafish, we observed that the transition to passive coping is manifested by progressive activation of neurons in the ventral (lateral) habenula. Activation of these ventral-habenula neurons suppressed downstream neurons in the serotonergic raphe nucleus and caused behavioral passivity, whereas inhibition of these neurons prevented passivity. Data-driven recurrent neural network modeling pointed to altered intra-habenula interactions as a contributory mechanism. These results demonstrate ongoing encoding of experience features in the habenula, which guides recruitment of downstream networks and imposes a passive coping behavioral strategy.
• Passive coping in response to behavioral challenge is conserved in larval zebrafish
• Brainwide cellular-resolution activity screen shows unique role for habenula in passivity
• Habenular neurons encode stress by progressive recruitment into active ensembles
• Optogenetics and network modeling reveal causal contributions of habenulo-raphe circuitry
Brainwide imaging in zebrafish and network modeling reveal that the switch from an active to a passive coping state arises from progressive activation of habenular neurons in response to behavioral challenge.
The goal of understanding living nervous systems has driven interest in high-speed and large field-of-view volumetric imaging at cellular resolution. Light sheet microscopy approaches have emerged for cellular-resolution functional brain imaging in small organisms such as larval zebrafish, but remain fundamentally limited in speed. Here, we have developed SPED light sheet microscopy, which combines a large volumetric field of view via an extended depth of field with the optical sectioning of light sheet microscopy, thereby eliminating the need to physically scan detection objectives for volumetric imaging. SPED enables scanning of thousands of volumes per second, limited only by camera acquisition rate, by harnessing optical mechanisms that normally result in unwanted spherical aberrations. We demonstrate the capabilities of SPED microscopy by performing fast sub-cellular resolution imaging of CLARITY mouse brains and cellular-resolution volumetric Ca2+ imaging of entire zebrafish nervous systems. Together, SPED light sheet methods enable high-speed cellular-resolution volumetric mapping of biological system structure and function.
• Light sheet microscopy speed is increased by extending the detection depth of field
• A simple, scalable method is developed for extending the axial point spread function
• Rapid, cellular-resolution nervous system mapping across the entire larval zebrafish
• Fast automated identification of co-active neurons across the nervous system
By harnessing optical mechanisms that normally result in unwanted spherical aberrations, SPED light sheet microscopy allows high-speed mapping of biological structures, such as the entire vertebrate nervous system and its activity, at cellular resolution.
Light field microscopy has been proposed as a new high-speed volumetric computational imaging method that enables reconstruction of 3-D volumes from captured projections of the 4-D light field. Recently, a detailed physical optics model of the light field microscope has been derived, which led to the development of a deconvolution algorithm that reconstructs 3-D volumes with high spatial resolution. However, the spatial resolution of the reconstructions has been shown to be non-uniform across depth, with some z planes showing high resolution and others, particularly at the center of the imaged volume, showing very low resolution. In this paper, we enhance the performance of the light field microscope using wavefront coding techniques. By including phase masks in the optical path of the microscope we are able to address this non-uniform resolution limitation. We have also found that superior control over the performance of the light field microscope can be achieved by using two phase masks rather than one, placed at the objective's back focal plane and at the microscope's native image plane. We present an extended optical model for our wavefront coded light field microscope and develop a performance metric based on Fisher information, which we use to choose appropriate phase mask parameters. We validate our approach using both simulated data and experimental resolution measurements of a USAF 1951 resolution target, and demonstrate its utility for biological applications with in vivo volumetric calcium imaging of larval zebrafish brain.
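A Fisher-information metric of this kind can be illustrated with a toy 1-D example: under Poisson photon statistics, the information an image carries about depth z is I(z) = Σ_i (∂μ_i/∂z)² / μ_i, and its inverse (the Cramér-Rao bound) lower-bounds the variance of any unbiased depth estimate. The PSF model and the mask parameter alpha below are purely hypothetical, not the paper's actual design:

```python
import numpy as np

def psf(z, alpha, x):
    # Toy 1-D intensity pattern: a Gaussian lobe whose position shifts with
    # defocus z at a rate set by the hypothetical mask parameter alpha,
    # plus a constant background (a stand-in for a wavefront-coded PSF).
    return 100.0 * np.exp(-0.5 * (x - alpha * z) ** 2) + 1.0

def fisher_z(z, alpha, x, dz=1e-4):
    # Poisson Fisher information for depth: I(z) = sum_i (dmu_i/dz)^2 / mu_i,
    # with the derivative taken numerically by central differences.
    mu = psf(z, alpha, x)
    dmu = (psf(z + dz, alpha, x) - psf(z - dz, alpha, x)) / (2 * dz)
    return float(np.sum(dmu ** 2 / mu))

x = np.linspace(-10.0, 10.0, 201)
I_weak = fisher_z(0.5, 0.2, x)    # weakly depth-sensitive mask setting
I_strong = fisher_z(0.5, 1.0, x)  # strongly depth-sensitive mask setting
print(I_weak, I_strong)           # higher I => lower Cramer-Rao bound on z
```

Comparing such information values across candidate parameter settings (and across the depth range of interest) is one way a scalar metric can rank phase mask designs.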
Deconvolution is widely used to improve the contrast and clarity of a 3D focal stack collected using a fluorescence microscope. Yet despite being extensively studied, deconvolution algorithms can introduce reconstruction artifacts when their underlying noise models or priors are violated, such as when imaging biological specimens at extremely low light levels. In this paper we propose a deconvolution method specifically designed for 3D fluorescence imaging of biological samples in the low-light regime. Our method utilizes a mixed Poisson-Gaussian model of photon shot noise and camera read noise, which are both present in low light imaging. We formulate a convex loss function and solve the resulting optimization problem using the alternating direction method of multipliers algorithm. Among several possible regularization strategies, we show that a Hessian-based regularizer is most effective for describing locally smooth features present in biological specimens. Our algorithm also estimates noise parameters on-the-fly, thereby eliminating a manual calibration step required by most deconvolution software. We demonstrate our algorithm on simulated images and experimentally captured images with peak intensities of tens of photoelectrons per voxel. We also demonstrate its performance for live cell imaging, showing its applicability as a tool for biological research.
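A much-simplified 1-D analogue of this approach can be sketched in a few lines, assuming a purely Gaussian likelihood (rather than the full mixed Poisson-Gaussian model) and using a second-difference penalty as the 1-D stand-in for the Hessian regularizer, solved with ADMM; all sizes and the λ, ρ values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D deconvolution: minimize 0.5*||A x - b||^2 + lam*||D x||_1,
# where D is the second-difference operator (a 1-D stand-in for the
# Hessian regularizer). Solved with ADMM via the splitting z = D x.
n = 50
A = np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)    # blur
x_true = np.zeros(n)
x_true[15:35] = np.linspace(0.0, 1.0, 20)                       # smooth ramp
b = A @ x_true + 0.01 * rng.standard_normal(n)                  # noisy data

D = np.diff(np.eye(n), n=2, axis=0)         # (n-2) x n second differences
lam, rho = 0.05, 1.0
z, u = np.zeros(n - 2), np.zeros(n - 2)
M = np.linalg.inv(A.T @ A + rho * D.T @ D)  # fine for this toy size only

def soft(v, t):
    # Soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(100):
    x = M @ (A.T @ b + rho * D.T @ (z - u))  # x-update: linear solve
    z = soft(D @ x + u, lam / rho)           # z-update: shrinkage
    u += D @ x - z                           # dual ascent on the constraint

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

The second-difference penalty favors piecewise-linear signals, which is why the ramp is recovered cleanly; the full method replaces the quadratic data term with the mixed Poisson-Gaussian likelihood and works on 3-D stacks.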
We present a system for capturing, reconstructing, compressing, and rendering high quality immersive light field video. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlying multi-plane image (MPI) scene representation with a collection of spherical shells that are better suited for representing panoramic light field content. We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final compressed representation is lightweight and can be rendered on mobile VR/AR platforms or in a web browser. We demonstrate light field video results using data from the 16-camera rig of Pozo et al. 2019 as well as a new low-cost hemispherical array made from 46 synchronized action sports cameras. From this data we produce six-degree-of-freedom volumetric videos with a wide 70 cm viewing baseline, 10 pixels per degree angular resolution, and a wide field of view, at 30 frames per second. Advancing over previous work, we show that our system is able to reproduce challenging content such as view-dependent reflections, semi-transparent surfaces, and near-field objects as close as 34 cm to the surface of the camera rig.
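Layered RGBA representations like these are rendered with standard back-to-front "over" alpha compositing. A single-pixel sketch with made-up layer values:

```python
import numpy as np

# One pixel, layers ordered back (far) to front (near): (rgb, alpha).
layers = [
    (np.array([0.0, 0.0, 1.0]), 1.0),   # opaque blue background shell
    (np.array([0.0, 1.0, 0.0]), 0.5),   # semi-transparent green shell
    (np.array([1.0, 0.0, 0.0]), 0.25),  # mostly transparent red shell
]

# Back-to-front "over" compositing: out = rgb * a + out * (1 - a).
out = np.zeros(3)
for rgb, a in layers:
    out = rgb * a + out * (1.0 - a)

print(out)   # [0.25, 0.375, 0.375]
```

Because each semi-transparent layer only partially occludes what lies behind it, this representation naturally handles the semi-transparent surfaces and view-dependent effects mentioned above.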
We present a novel approach to view synthesis using multiplane images (MPIs). Building on recent advances in learned gradient descent, our algorithm generates an MPI from a set of sparse camera viewpoints. The resulting method incorporates occlusion reasoning, improving performance on challenging scene features such as object boundaries, lighting reflections, thin structures, and scenes with high depth complexity. We show that our method achieves high-quality, state-of-the-art results on two datasets: the Kalantari light field dataset, and a new camera array dataset, Spaces, which we make publicly available.
Current 3D localization microscopy approaches are fundamentally limited in their ability to image thick, densely labeled specimens. Here, we introduce a hybrid optical-electronic computing approach that jointly optimizes an optical encoder (a set of multiple, simultaneously imaged 3D point spread functions) and an electronic decoder (a neural-network-based localization algorithm) to maximize 3D localization performance under these conditions. With extensive simulations and biological experiments, we demonstrate that our deep-learning-based microscope achieves significantly higher 3D localization accuracy than existing approaches, especially in challenging scenarios with high molecular density over large depth ranges.
Three-dimensional snapshot microscopy refers to any technique capable of performing volumetric imaging of microscopic samples using information captured in a single photographic exposure. Unlike scanning microscopes, which collect volumetric information over time, 3-D snapshot microscopes can capture volumes at speeds limited only by the frame rate of the image sensor. Snapshot imaging is made possible by encoding depth information in the shape of the microscope's point response function (PRF). Each position in the volume produces a different, distinctive light intensity pattern on the camera sensor, and these patterns can be recognized by a computer algorithm and used to computationally reconstruct a full volume. In this work we explore two different 3-D snapshot microscopes for imaging of weakly scattering, fluorescent specimens. The first, the light field microscope, employs an array of microlenses to decompose light into different angular projections of the volume in a manner similar to computed tomography. We present an optical model for light field microscopy based on wave optics and a 3-D reconstruction method which we solve using a GPU-accelerated iterative algorithm. Theoretical resolution limits for the light field microscope are discussed and compared with experimental measurements using a USAF 1951 resolution target, pollen grains, and fluorescent beads. We also summarize our application of light field microscopy in neuroscience, where we have used it to perform 3-D calcium imaging. Using this technique, we have recorded the activity of thousands of neurons in the brains of awake, behaving animals. Our second approach to 3-D snapshot imaging uses phase masks, rather than a microlens array, to encode volumetric information. We have designed a "helical focus" phase mask that generates a PRF that rotates as the microscope is defocused.
This PRF contains a single lobe that does not change in size or shape as it rotates, thereby enabling imaging with consistent resolution over a configurable depth range of up to hundreds of micrometers. Further, we propose a design for a 3-D snapshot microscope that uses two such masks (in different light paths, with simultaneous acquisition using two frame-synchronized cameras) to capture volumetric information. Our optical simulations suggest that this microscope is capable of performing 3-D imaging at resolutions exceeding those of light field microscopy when imaging sparse volumes. We show these results and compare the two techniques.
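The depth-from-rotation idea can be sketched with a toy model: a single Gaussian lobe whose angular position is proportional to defocus, decoded by the intensity-weighted centroid angle. The rotation rate k and lobe radius below are made-up parameters, not the actual mask design:

```python
import numpy as np

k = np.deg2rad(3.0)    # hypothetical rotation rate: 3 degrees per micron
r = 4.0                # hypothetical lobe radius in pixels

def rotating_psf(z, size=33):
    # Single-lobe PSF: a unit-width Gaussian spot at fixed radius r whose
    # angle encodes defocus z (toy stand-in for the helical-focus PRF).
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    cx, cy = r * np.cos(k * z), r * np.sin(k * z)
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 2.0)

def decode_depth(img, size=33):
    # Recover z from the intensity-weighted centroid angle of the lobe.
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    cx = (img * x).sum() / img.sum()
    cy = (img * y).sum() / img.sum()
    return np.arctan2(cy, cx) / k

z_true = 12.0
z_hat = decode_depth(rotating_psf(z_true))
print(z_hat)           # approximately 12.0
```

Because the lobe's size and shape are constant over the rotation range, the same simple decoder works at every depth, which is what gives the design its depth-invariant resolution.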