Abstract
Ultrasonically-sculpted gradient-index optical waveguides enable non-invasive light confinement inside scattering media. The confinement level strongly depends on ultrasound parameters (e.g., amplitude, frequency) and medium optical properties (e.g., extinction coefficient). We develop a physically-accurate simulator, and use it to quantify these dependencies for a radially-symmetric virtual optical waveguide. Our analysis provides insights for optimizing virtual optical waveguides for given applications. We leverage these insights to configure virtual optical waveguides that improve light confinement fourfold compared to previous configurations at five mean free paths. We show that virtual optical waveguides enhance light throughput by 50% compared to an ideal external lens, in a medium with bladder-like optical properties at one transport mean free path. We corroborate these simulation findings with real experiments: we demonstrate, for the first time, that virtual optical waveguides recycle scattered light, and enhance light throughput by 15% compared to an external lens at five transport mean free paths.
Highly scattering media pose significant challenges for many optical imaging applications due to the loss of information inherent to the scattering process. Absorption can also result in significant degradation of image quality. However, absorption can actually improve the resolution of images transmitted through scattering media in certain cases. Here we study how the presence of absorption can enhance the quality of an image transmitted through a scattering medium, by investigating the dependence of this enhancement on the medium’s scattering properties. We find that absorption-induced image resolution enhancement is substantially larger for media consisting of isotropic scatterers (e.g., dielectric nanoparticles) than for strongly forward-scattering media (e.g., biological tissue). This work leads to a broader understanding, and ultimately control, of the optical properties of strongly absorbing, scattering media.
During the last decade, we have been witnessing the continued development of new time-of-flight imaging devices, and their increased use in numerous and varied applications. However, physics-based rendering techniques that can accurately simulate these devices are still lacking: while existing algorithms are adequate for certain tasks, such as simulating transient cameras, they are very inefficient for simulating time-gated cameras because of the large number of wasted path samples. We take steps towards addressing these deficiencies, by introducing a procedure for efficiently sampling paths with a predetermined length, and incorporating it within rendering frameworks tailored towards simulating time-gated imaging. We use our open-source implementation of the above to empirically demonstrate improved rendering performance in a variety of applications, including simulating proximity sensors, imaging through occlusions, depth-selective cameras, transient imaging in dynamic scenes, and non-line-of-sight imaging.
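One way to realize such fixed-length path sampling is an ellipsoidal connection: given two path vertices and a target total length, an intermediate vertex is placed on the ellipsoid with those vertices as foci, so the two segment lengths sum exactly to the target. A minimal sketch (the helper name and the uniform direction choice are illustrative, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fixed_length_vertex(a, b, ell):
    """Sample a vertex v with |a - v| + |v - b| = ell (a point on the
    ellipsoid with foci a and b), by picking a direction at a and
    solving for the travel distance along it."""
    c = b - a
    assert ell > np.linalg.norm(c), "target length must exceed |a - b|"
    w = rng.normal(size=3)
    w /= np.linalg.norm(w)                 # random unit direction at a
    # solve t + |a + t*w - b| = ell for t (quadratic terms cancel)
    t = (ell**2 - c @ c) / (2.0 * (ell - w @ c))
    return a + t * w

a, b = np.zeros(3), np.array([1.0, 0.0, 0.0])
v = sample_fixed_length_vertex(a, b, 2.0)
total = np.linalg.norm(v - a) + np.linalg.norm(b - v)
print(total)   # = 2.0 by construction
```

Squaring |a + t*w - b| = ell - t cancels the t**2 terms on both sides, which is why the distance has the closed form above.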
Time-of-flight (ToF) cameras use a temporally modulated light source and measure correlation between the reflected light and a sensor modulation pattern, in order to infer scene depth. In this paper, we show that such correlational sensors can also be used to selectively accept or reject light rays from certain scene depths. The basic idea is to carefully select illumination and sensor modulation patterns such that the correlation is non-zero only in the selected depth range; thus, light reflected from objects outside this depth range does not affect the correlational measurements. We demonstrate a prototype depth-selective camera and highlight two potential applications: imaging through scattering media and virtual blue screening. This depth selectivity can be used to reject back-scattering and reflection from media in front of the subjects of interest, thereby significantly enhancing the ability to image through scattering media, which is critical for applications such as car navigation in fog and rain. Similarly, such depth selectivity can also be utilized as a virtual blue screen in cinematography, by rejecting light reflected from the background while selectively retaining light contributions from the foreground subject.
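For context, the standard four-bucket continuous-wave ToF scheme shows how such correlation measurements encode depth; the depth-selective codes above generalize this idea to reject entire depth ranges. A sketch with hypothetical parameter values:

```python
import numpy as np

C = 3e8        # speed of light (m/s)
F_MOD = 30e6   # modulation frequency (Hz); hypothetical value

def tof_depth(c0, c1, c2, c3):
    """Four-bucket CW-ToF: recover depth from correlation samples taken
    at reference phase offsets 0, 90, 180, and 270 degrees."""
    phase = np.arctan2(c1 - c3, c0 - c2) % (2.0 * np.pi)
    return C * phase / (4.0 * np.pi * F_MOD)

# ideal correlation samples for a target at 2 m: correlating the return
# against a reference shifted by psi yields cos(phi - psi)
d_true = 2.0
phi = 4.0 * np.pi * F_MOD * d_true / C     # round-trip phase shift
offsets = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
c = np.cos(phi - offsets)
print(tof_depth(*c))   # ≈ 2.0
```

The unambiguous range here is C / (2 * F_MOD) = 5 m; targets beyond that wrap around, which is one reason depth-coded correlation patterns are useful.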
Doppler Time-of-Flight Rendering
Kim, Juhyeon; Jarosz, Wojciech; Gkioulekas, Ioannis
ACM Transactions on Graphics, 12/2023, Volume 42, Issue 6
Journal Article, Peer reviewed, Open access
We introduce Doppler time-of-flight (D-ToF) rendering, an extension of ToF rendering for dynamic scenes, with applications in simulating D-ToF cameras. D-ToF cameras use high-frequency modulation of illumination and exposure, and measure the Doppler frequency shift to compute the radial velocity of dynamic objects. The time-varying scene geometry and high-frequency modulation functions used in such cameras make it challenging to accurately and efficiently simulate their measurements with existing ToF rendering algorithms. We overcome these challenges in a twofold manner: To achieve accuracy, we derive path integral expressions for D-ToF measurements under global illumination and form unbiased Monte Carlo estimates of these integrals. To achieve efficiency, we develop a tailored time-path sampling technique that combines antithetic time sampling with correlated path sampling. We show experimentally that our sampling technique achieves up to two orders of magnitude lower variance compared to naive time-path sampling. We provide an open-source simulator that serves as a digital twin for D-ToF imaging systems, allowing imaging researchers, for the first time, to investigate the impact of modulation functions, material properties, and global illumination on D-ToF imaging performance.
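The antithetic time sampling mentioned above can be illustrated on a toy integral: pairing each time sample with one shifted by half the modulation period nearly cancels the high-frequency carrier, leaving only the slowly varying scene term. A simplified sketch (toy integrand and frequencies, not the paper's full time-path sampler):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0                          # exposure time
OMEGA = 200.0 * np.pi            # toy modulation frequency
PERIOD = 2.0 * np.pi / OMEGA     # = 0.01, so T spans 100 full periods

def integrand(t):
    # slowly varying scene term times a high-frequency carrier
    return (1.0 + 0.1 * t) * np.cos(OMEGA * t)

def naive_estimate(n):
    t = rng.uniform(0.0, T, n)
    return T * integrand(t).mean()

def antithetic_estimate(n):
    t = rng.uniform(0.0, T, n // 2)
    t2 = (t + PERIOD / 2.0) % T          # shift by half a period
    return T * (0.5 * (integrand(t) + integrand(t2))).mean()

naive_std = np.std([naive_estimate(1000) for _ in range(100)])
anti_std = np.std([antithetic_estimate(1000) for _ in range(100)])
# pairing flips the carrier's sign, so anti_std << naive_std
```

Each antithetic pair averages cos(OMEGA * t) with cos(OMEGA * t + pi), so the carrier cancels up to the small drift in the slow term, which is the source of the large variance reduction.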
Three-dimensional imaging using time-of-flight (ToF) sensors is rapidly gaining widespread adoption in many applications due to their cost effectiveness, simplicity, and compact size. However, the current generation of ToF cameras suffers from low spatial resolution due to physical fabrication limitations. In this paper, we propose CS-ToF, an imaging architecture to achieve high spatial resolution ToF imaging via optical multiplexing and compressive sensing. Our approach is based on the observation that, while depth is non-linearly related to ToF pixel measurements, a phasor representation of captured images results in a linear image formation model. We utilize this property to develop a CS-based technique that is used to recover high-resolution 3D images. Based on the proposed architecture, we developed a prototype 1-megapixel compressive ToF camera that achieves up to a 4× improvement in spatial resolution and a 3× improvement for natural scenes. We believe that our proposed CS-ToF architecture provides a simple and low-cost solution to improve the spatial resolution of ToF and related sensors.
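The phasor observation can be sketched as follows: each return maps to a complex number whose angle is proportional to depth, so optical multiplexing acts linearly on phasors even though depth is nonlinear in the raw pixel samples. A toy example with a hypothetical 3x3 mixing matrix (an actual compressive system uses far fewer measurements than unknowns, plus sparsity priors, rather than a square solve):

```python
import numpy as np

C, F = 3e8, 30e6   # speed of light, modulation frequency (hypothetical)

def phasor(amplitude, depth):
    """Complex phasor of a return: the angle is linear in depth."""
    return amplitude * np.exp(1j * 4.0 * np.pi * F * depth / C)

def depth_from_phasor(p):
    return C * (np.angle(p) % (2.0 * np.pi)) / (4.0 * np.pi * F)

depths = np.array([1.0, 2.5, 4.0])      # within the 5 m ambiguity range
x = phasor(1.0, depths)                 # scene phasors
Phi = np.array([[1.0, 0.5, 0.2],        # stand-in optical mixing matrix
                [0.3, 1.0, 0.4],
                [0.2, 0.6, 1.0]])
y = Phi @ x                             # multiplexed pixel measurements
x_rec = np.linalg.solve(Phi, y)         # linear recovery of phasors
print(depth_from_phasor(x_rec))         # ≈ [1.0, 2.5, 4.0]
```

Because y depends linearly on x, the full compressive-sensing machinery (random projections plus sparse recovery) applies directly in the phasor domain.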
Synthetic aperture sonar (SAS) measures a scene from multiple views in order to increase the resolution of reconstructed imagery. Image reconstruction methods for SAS coherently combine measurements to focus acoustic energy onto the scene. However, image formation is typically under-constrained due to a limited number of measurements and bandlimited hardware, which limits the capabilities of existing reconstruction methods. To help meet these challenges, we design an analysis-by-synthesis optimization that leverages recent advances in neural rendering to perform coherent SAS imaging. Our optimization enables us to incorporate physics-based constraints and scene priors into the image formation process. We validate our method on simulated and experimental data captured in both air and water. We demonstrate both quantitatively and qualitatively that our method typically produces better reconstructions than existing approaches. We share code and data for reproducibility.
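Analysis-by-synthesis in its simplest form fits scene parameters by gradient descent on the mismatch between measured and rendered signals. A one-parameter toy version (a Gaussian echo stands in for the acoustic forward model; the paper instead optimizes a neural scene representation with physics-based constraints):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 200)

def forward(depth):
    # toy differentiable "renderer": a pulse echo centered at `depth`
    return np.exp(-(t - depth) ** 2)

measured = forward(3.2)          # synthetic measurement of the scene

depth = 5.0                      # initial guess
lr = 0.01
for _ in range(500):
    r = forward(depth) - measured              # residual vs. measurement
    dfd = forward(depth) * 2.0 * (t - depth)   # d(forward)/d(depth)
    depth -= lr * 2.0 * np.sum(r * dfd)        # gradient step on ||r||^2
print(depth)   # ≈ 3.2
```

The same loop structure carries over when the single scalar is replaced by a neural scene representation and the analytic derivative by automatic differentiation.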
Rendering radiative transfer through media with a heterogeneous refractive index is challenging because the continuous refractive index variations result in light traveling along curved paths. Existing algorithms are based on photon mapping techniques, and thus are biased and result in strong artifacts. On the other hand, existing unbiased methods such as path tracing and bidirectional path tracing cannot be used in their current form to simulate media with a heterogeneous refractive index. We change this state of affairs by deriving unbiased path tracing estimators for this problem. Starting from the refractive radiative transfer equation (RRTE), we derive a path-integral formulation, which we use to generalize path tracing with next-event estimation and bidirectional path tracing to the heterogeneous refractive index setting. We then develop an optimization approach based on fast analytic derivative computations to produce the point-to-point connections required by these path tracing algorithms. We propose several acceleration techniques to handle complex scenes (surfaces and volumes) that include participating media with heterogeneous refractive fields. We use our algorithms to simulate a variety of scenes combining heterogeneous refraction and scattering, as well as tissue imaging techniques based on ultrasonic virtual waveguides and lenses. Our algorithms and publicly-available implementation can be used to characterize imaging systems such as refractive index microscopy, schlieren imaging, and acousto-optic imaging, and can facilitate the development of inverse rendering techniques for related applications.
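The curved paths arise from the eikonal ray equation d/ds (n dx/ds) = grad n; a simple Euler march through a hypothetical linear index gradient shows the bending (the algorithms above build unbiased path-integral estimators on top of such traced paths):

```python
import numpy as np

def n(p):
    # hypothetical index field increasing downward: n = 1.5 - 0.1 * y
    return 1.5 - 0.1 * p[1]

def grad_n(p):
    return np.array([0.0, -0.1])

def trace(p, d, ds=1e-3, steps=2000):
    """Euler march of the ray equation d/ds (n dx/ds) = grad n,
    which bends the path toward regions of higher refractive index."""
    v = n(p) * d                     # v = n * (unit tangent)
    for _ in range(steps):
        p = p + ds * v / n(p)        # advance position along the tangent
        v = v + ds * grad_n(p)       # update "momentum" from the gradient
    return p, v / np.linalg.norm(v)

p_end, d_end = trace(np.array([0.0, 0.0]), np.array([1.0, 0.0]))
# the initially horizontal ray curves toward the higher-index region (y < 0)
```

In a homogeneous medium grad n vanishes and the march reduces to the straight rays assumed by standard path tracers.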
A conventional optical lens can be used to focus light into the target medium from outside, without disturbing the medium. The focused spot size is proportional to the focal distance in a conventional lens, resulting in a tradeoff between penetration depth in the target medium and spatial resolution. We have shown that virtual ultrasonically sculpted gradient-index (GRIN) optical waveguides can be formed in the target medium to steer light without disturbing the medium. Here, we demonstrate that such virtual waveguides can relay an externally focused Gaussian beam of light through the medium beyond the focal distance of a single external physical lens, to extend the penetration depth without compromising the spot size. Moreover, the spot size can be tuned by reconfiguring the virtual waveguide. We show that these virtual GRIN waveguides can be formed in transparent and turbid media, to enhance the confinement and contrast ratio of the focused beam of light at the target location. This method can be extended to realize complex optical systems of external physical lenses and in situ virtual waveguides, to extend the reach and flexibility of optical methods.
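In the paraxial picture, a parabolic GRIN profile makes rays oscillate sinusoidally about the axis and refocus every half period, which is how such a waveguide can relay an external focus deeper into the medium. A sketch with a hypothetical focusing strength:

```python
import numpy as np

def grin_ray(r0, theta0, z, g=2.0):
    """Paraxial ray height in a parabolic GRIN profile
    n(r) = n0 * (1 - (g * r)**2 / 2): rays follow sinusoids with
    spatial period 2 * pi / g (g is a hypothetical strength)."""
    return r0 * np.cos(g * z) + (theta0 / g) * np.sin(g * z)

# a ray launched on-axis at angle theta0 returns to the axis every
# half period z = pi / g; this periodic refocusing relays the focus
z_half = np.pi / 2.0      # pi / g with g = 2
print(grin_ray(0.0, 0.1, z_half))   # ≈ 0 (back on axis)
```

Changing g (in practice, the ultrasound parameters shaping the index profile) changes the period and hence where the relayed spot forms, consistent with the tunability described above.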