•Two optimization strategies for preventing over-fitting, ensuring reasonable rendering for arbitrary novel view synthesis and robust depth prediction.•An effective IBVH-based initialization approach for directly optimizing explicit 3D structures for high-quality novel view synthesis. This initialization strategy maximizes the utilization of object grid resolution under resolution-limited conditions and improves rendering quality in grid-based methods.•Evident ghosting artifacts are successfully eliminated by directly optimizing the 3D structure, yielding more realistic global renderings.
Directly optimizing radiance fields with explicit 3D primitives, such as voxels and 3D Gaussians, is highly effective in both training and rendering. However, optimizing explicit 3D structures from limited training views with differentiable volumetric rendering often produces ghosting artifacts (floaters): errors made in unseen space during early-stage optimization persist as evident ghosting artifacts because gradients become spatially very sparse as optimization proceeds. To address this problem, this paper proposes Clear-Plenoxels, which involves three simple yet non-trivial components: 1) a Visual Hull-based initialization strategy that maximizes the utilization of object resolution and effectively rejects false updates during early-stage optimization by assigning a learning rate to each voxel grid; 2) an effective penalty function on local grid discrepancy that ensures consistent rendering across view directions and removes unnecessary voxels; 3) a mask-guided transmittance supervision for each training ray, which significantly improves depth prediction precision when the object's surface color is close to the background. Experiments on a public dataset demonstrate that our method successfully overcomes ghosting artifacts by directly optimizing explicit primitives. The proposed method achieves approximately a 1 dB improvement over voxel grid-based methods and matches, if not surpasses, state-of-the-art quality in novel view rendering. The code and visualization videos are publicly available at https://github.com/nortonBryan/Clear-plenoxels.
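The mask-guided transmittance supervision described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function names and the quadratic penalty form are illustrative, not the paper's actual loss — the idea is simply to drive a ray's accumulated background transmittance toward 0 on object (foreground-mask) rays and toward 1 on background rays, so that surfaces whose color matches the background still become opaque.

```python
import numpy as np

def ray_transmittance(sigmas, deltas):
    """Background transmittance T = exp(-sum(sigma_i * delta_i)) along one ray."""
    return np.exp(-np.sum(sigmas * deltas))

def mask_transmittance_loss(sigmas, deltas, is_foreground):
    """Push T -> 0 on object rays and T -> 1 on background rays
    (illustrative quadratic penalty)."""
    T = ray_transmittance(sigmas, deltas)
    target = 0.0 if is_foreground else 1.0
    return (T - target) ** 2

# A ray through dense voxels should be nearly opaque (T near 0) ...
fg = mask_transmittance_loss(np.full(8, 5.0), np.full(8, 0.1), True)
# ... and a ray through empty space should stay transparent (T = 1).
bg = mask_transmittance_loss(np.zeros(8), np.full(8, 0.1), False)
```

Because the loss depends only on accumulated density, it supervises geometry directly and is unaffected by the photometric ambiguity between object and background color.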
Whether in a clinical setting or a research environment using model organisms, X-ray-based computed tomography (CT) in its different forms represents the gold-standard technology for the non-invasive imaging and quantification of mineralized tissues. While there are many excellent reviews on computed tomography in bone imaging, most focus on the appendicular skeleton. However, the craniofacial skeleton and mineralized dentition, which are frequently imaged for a variety of reasons, can require special considerations to ensure that the best-quality data are acquired and interpreted correctly. In this review, I will focus specifically on micro-computed tomography (microCT) for the study of the craniofacial skeleton, from the onset of cranioskeletal development through to adulthood, using the mouse as the primary reference organism. In so doing, I will cover the important considerations when planning imaging studies, explain critical parameters of scanning, reconstruction, and 3D rendering of data that can impact quantification of different mineralized craniofacial tissues, and discuss options for enabling accurate visualization of tomographic data.
Place-based accessibility measures, such as the gravity-based model, are widely applied to study the spatial accessibility of workers to job opportunities in cities. However, gravity-based measures often suffer from three main limitations: (1) they are sensitive to the spatial configuration and scale of the units of analysis, which are not specifically designed for capturing job accessibility patterns and are often too coarse; (2) they omit the temporal dynamics of job opportunities and workers in the calculation, instead assuming that they remain stable over time; and (3) they do not lend themselves to dynamic geovisualization techniques. In this paper, a new methodological framework for measuring and visualizing place-based job accessibility in space and time is presented that overcomes these three limitations. First, discretization and dasymetric mapping approaches are used to disaggregate counts of jobs and workers over specific time intervals to a fine-scale grid. Second, Shen's (1998) gravity-based accessibility measure is modified to account for temporal fluctuations in the spatial distributions of the supply of jobs and the demand from workers and is used to estimate hourly job accessibility at each cell. Third, a four-dimensional volumetric rendering approach is employed to integrate the hourly job access estimates into a space-time cube environment, which enables users to interactively visualize space-time job accessibility patterns. The integrated framework is demonstrated in a case study of the Tampa Bay region of Florida. The findings demonstrate the value of the proposed methodology for job accessibility analysis and the policy-making process.
•Studying job accessibility is important to urban planning and economic development.•Conventional job accessibility measures fail to consider the temporal dynamics.•A methodological framework is developed to measure and visualize place-based space-time job accessibility.•A case study of the Tampa Bay region of Florida suggests the value of the framework.•The framework can be adopted by other time-sensitive applications.
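Shen's (1998) gravity-based measure referenced above can be sketched in a few lines of NumPy. This is a minimal illustration with made-up numbers, not the paper's implementation: accessibility at location i sums job opportunities at each location j, discounted by a distance-decay weight and deflated by the competing demand each job site faces. The time-dynamic variant in the paper would simply recompute this hourly with time-varying job and worker counts.

```python
import numpy as np

def shen_accessibility(jobs, workers, dist, beta=0.1):
    """Shen (1998) gravity-based accessibility.

    jobs[j]    : job opportunities at location j (supply)
    workers[k] : workers at location k (demand)
    dist[i, j] : travel cost between locations i and j
    beta       : distance-decay parameter (assumed negative exponential)
    """
    f = np.exp(-beta * dist)        # impedance weights f(d_ij)
    demand = f.T @ workers          # D_j = sum_k P_k * f(d_kj)
    return f @ (jobs / demand)      # A_i = sum_j O_j * f(d_ij) / D_j

# Toy two-zone example (values are illustrative only).
jobs = np.array([100.0, 50.0])
workers = np.array([60.0, 90.0])
dist = np.array([[0.0, 10.0], [10.0, 0.0]])
access = shen_accessibility(jobs, workers, dist)
```

A useful sanity check on this formulation is that worker-weighted accessibility sums exactly to the total number of jobs, which is what makes Shen's measure competition-adjusted.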
In this paper, we propose a controllable high-quality free viewpoint video generation method based on a motion graph and neural radiance fields (NeRF). Different from existing pose-driven NeRF or time/structure-conditioned NeRF works, we propose to first construct a directed motion graph of the captured sequence. Such a sequence-motion-parameterization strategy not only enables flexible pose control for free viewpoint video rendering but also avoids redundant computation for similar poses, thus improving the overall reconstruction efficiency. Moreover, to support body shape control without sacrificing realistic free viewpoint rendering performance, we improve the vanilla NeRF by combining explicit surface deformation with implicit neural scene representations. Specifically, we train a local surface-guided NeRF for each valid frame on the motion graph, and volumetric rendering is performed only in the local space around the real surface, enabling plausible shape control. To the best of our knowledge, our method is the first to support both realistic free viewpoint video reconstruction and motion graph-based user-guided motion traversal. The results and comparisons further demonstrate the effectiveness of the proposed method.
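The idea of restricting volumetric rendering to the local space around the real surface can be sketched as follows. This is a hypothetical NumPy illustration (the function name, band width, and sample count are assumptions, not the paper's parameters): instead of sampling the whole ray, depths are drawn only within a narrow band around the known surface intersection supplied by the explicit surface model.

```python
import numpy as np

def near_surface_samples(ray_o, ray_d, surface_t, band=0.05, n=16):
    """Sample points only within +/- band of the surface hit at depth
    surface_t along the ray, rather than along the entire ray."""
    t = np.linspace(surface_t - band, surface_t + band, n)
    return ray_o + t[:, None] * ray_d  # (n, 3) sample positions

# Ray from the origin along +z hitting a surface at depth 2.0.
pts = near_surface_samples(np.zeros(3), np.array([0.0, 0.0, 1.0]), surface_t=2.0)
```

Concentrating samples near the surface both cuts the per-ray evaluation cost and ties the radiance field to the deformable surface, which is what makes explicit shape edits carry over to the rendered result.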
The volumetric tomography reconstruction technique (VTRT) can calculate and restore a three-dimensional physical field from multi-view projections of flames and similar flow fields, and plays an important role in the study of volumetric targets. However, reconstructing the three-dimensional scalar/vector fields that represent the physical information of the flow field (temperature, density, chemical composition, etc.) requires significant computational resources and time, especially for turbulent flame flow fields, which demand high spatial and temporal resolution. Against this background, we propose a Fast Neural Fluid Reconstruction Technique (Fast-NFRT) based on deep learning. In this investigation, we first test the reconstruction accuracy, speed, and noise robustness of Fast-NFRT using numerically simulated flames. Fast-NFRT is then used to reconstruct an experimental turbulent jet flame under two different conditions. Finally, with the camera settings preserved, the Fast-NFRT model is tested for transfer learning between the numerically simulated flame and the two experimental jet flames to examine the generalization performance of the reconstruction. We find that the proposed Fast-NFRT model can achieve a temporal resolution of 50-500 fps with reconstruction fidelity similar to traditional algebraic reconstruction methods, which demonstrates the capacity of the Fast-NFRT model and its potential for real-time reconstruction applications and dynamic analysis of complex flow dynamics.
Volumetric reconstructions of transparent or translucent media are critical for various applications. For instance, successful reconstruction of transient and turbulent flames will assist in understanding complex combustion mechanisms and in advanced burner design. The most common method for volumetric flame combustion diagnosis is the tomographic reconstruction technique. Originating from computed tomography for medical diagnosis, computed tomography of chemiluminescence (CTC) is a volumetric flame diagnostic method that utilizes two-dimensional projections of a flame under limited viewing angles to reconstruct three-dimensional information about the combustion field. Typical flame reconstructions use discrete volumetric voxels to represent flame luminosities at different spatial locations. However, this approach increases the computational cost of both weight matrix calculation and tomographic iteration. This investigation proposes a neural volume reconstruction technique (NVRT) that uses a neural network to represent the continuous flame luminosity implicitly. In addition, this investigation adopts the differentiable volume rendering (DVR) technique to train the network from two-dimensional flame projections alone, without requiring 3D supervision. We use both simulated and experimental flames to verify the capability of the proposed NVRT method against the traditional algebraic reconstruction technique (ART). Results show that the NVRT method is superior to the ART method in reconstruction fidelity, noise resistance, and computational cost (especially RAM usage) for flame reconstructions.
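The differentiable volume rendering that trains such a network can be sketched for a single ray. This is a generic emission-absorption compositing sketch, not NVRT's actual renderer (the function name and sampling are illustrative): per-sample densities are converted to opacities, attenuated by the transmittance accumulated front to back, and summed into a projected intensity; because every step is differentiable, the rendered value can be compared to a 2D projection pixel to supervise the 3D field.

```python
import numpy as np

def render_ray(sigma, deltas):
    """Emission-absorption volume rendering along one ray (front to back).

    sigma[i]  : density/luminosity at sample i
    deltas[i] : spacing between sample i and i+1
    """
    alpha = 1.0 - np.exp(-sigma * deltas)                       # sample opacity
    T = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))   # transmittance
    return np.sum(T * alpha)                                    # ray intensity

# For a constant density field the composite reduces to 1 - exp(-sigma * L),
# the analytic attenuation over total path length L.
I = render_ray(np.full(10, 2.0), np.full(10, 0.1))
```

In a training loop, `sigma` would come from the neural network evaluated at the sample positions, and the squared difference between `I` and the measured projection pixel would be backpropagated through this compositing.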
Cinematic rendering (CR) is a new method of 3D computed tomography (CT) volumetric visualization that produces photorealistic images. As with traditional 3D visualization methods, CR may prove to be of value in providing important information when evaluating regions of complex anatomy such as the heart.
The gated, IV contrast-enhanced chest CT angiogram data from three recent patients were evaluated with CR. Image comparison demonstrates the difference between CR and traditional volume rendering (VR): CR uses a more complex lighting model that enhances surface detail and produces realistic shadows, adding depth to 3D visualizations.
Representative examples of normal cardiac anatomy, a coronary artery stenosis, and an intracardiac malignant neoplasm are presented with 2D multiplanar reconstruction, traditional VR and CR. A potential pitfall in CR utilization, namely the possibility of obscuring important pathology, is demonstrated and discussed.
CR is a promising method for enhancing the display of volumetric CT data and should prove useful in diagnosis, treatment planning, surgical navigation, trainee education, and patient engagement. However, further study is needed to establish the advantages and disadvantages of CR in comparison to other 3D methods.
Scientists across all disciplines increasingly rely on machine learning algorithms to analyse and sort datasets of ever-increasing volume and complexity. Although trends and outliers are easily extracted, careful and close inspection will still be necessary to explore and disentangle detailed behaviour, as well as to identify systematics and false positives. We must therefore incorporate new technologies to facilitate scientific analysis and exploration. Astrophysical data is inherently multi-parameter, with the spatial-kinematic dimensions at the core of observations and simulations. The arrival of mainstream virtual-reality (VR) headsets and increased GPU power, as well as the availability of versatile development tools for video games, has enabled scientists to deploy such technology to effectively interrogate and interact with complex data. In this paper we present the development of, and results from, custom-built interactive VR tools, called the iDaVIE suite, that are informed and driven by research on galaxy evolution, cosmic large-scale structure, galaxy–galaxy interactions, and the gas/kinematics of nearby galaxies in survey and targeted observations. In the new era of Big Data ushered in by major facilities such as the SKA and LSST, which render past analysis and refinement methods highly constrained, we believe that a paradigm shift to new software, technology and methods that exploit the power of visual perception will play an increasingly important role in bridging the gap between statistical metrics and new discovery. We have released a beta version of the iDaVIE software system that is free and open to the community.