Understanding the functional morphology and mobility of appendages of fossil animals is important for exploring ecological traits such as feeding and locomotion. Previous work on fossils from the 518-million-year-old Chengjiang biota of China was based mainly on two-dimensional information captured from the surface of the specimens. Only recently have μCT techniques begun to reveal almost the entire, though flattened and compressed, three-dimensionally preserved morphologies of arthropods from Chengjiang. This allows more accurate reconstruction of the possible movement of certain structures such as the appendages. Here, we present a workflow for reconstructing the mobility of a limb of Ercaicunia multinodosa, an early arthropod from the famous Chengjiang fossil site. Based on μCT scans of the fossil, we rendered surface models of the 13th–15th right endopods using the 3D visualization and 3D-rendering software Amira. The 3D objects were then postprocessed (Collapse Hierarchy, Unify Normals) in SAP 3D Visual Enterprise Author before being imported into the 3D animation program Autodesk Maya 2020. Using the add-on tool X_ROMM in Maya, we illustrate step by step how to make the articles of the limb swing in toward each other. Finally, we propose several possible limb movements of E. multinodosa, which helps to explain how this early arthropod could have moved its endopods.
Based on surface reconstructions performed with Amira, we show a step-by-step workflow for generating a flexible endopod with movable articles in Autodesk Maya 2020 and discuss challenges for research on the kinematics and mobility of Chengjiang arthropods.
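The swing-in of successive limb articles described above can be sketched as planar forward kinematics: each article rotates relative to its parent, just as a joint hierarchy in Maya does. The article lengths and joint angles below are hypothetical values chosen only to illustrate the principle, not measurements of E. multinodosa.

```python
import numpy as np

def endopod_tip(article_lengths, joint_angles_deg):
    """Forward kinematics of a planar articulated limb.

    Each article rotates relative to its parent article; the tip
    position follows by accumulating the joint rotations along the
    chain, as a Maya joint hierarchy does.
    """
    pos = np.zeros(2)
    cumulative = 0.0
    for length, angle in zip(article_lengths, joint_angles_deg):
        cumulative += np.radians(angle)
        pos += length * np.array([np.cos(cumulative), np.sin(cumulative)])
    return pos

# Fully extended limb: the tip lies on the x-axis at the summed lengths.
straight = endopod_tip([2.0, 1.5, 1.0], [0.0, 0.0, 0.0])
# Flexed limb: bending each joint by 30 degrees curls the tip inward.
flexed = endopod_tip([2.0, 1.5, 1.0], [30.0, 30.0, 30.0])
```

Swinging the articles in toward each other then amounts to animating the joint angles between such extended and flexed poses.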
•Cross-platform tools for intracerebral electrode localization.
•Automatic electrode segmentation, localization and labelling.
•3D visualization of signal processing results within the patient’s anatomy.
In pharmacoresistant epilepsy, exploration with depth electrodes may be needed to precisely define the epileptogenic zone. Accurate localization of these electrodes is thus essential for the interpretation of stereotaxic EEG (SEEG) signals. As SEEG analysis increasingly relies on signal processing, it is crucial to link these results to the patient’s anatomy.
Our aims were thus to develop a suite of software tools, called “EpiTools”, able to i) precisely and automatically localize the position of each SEEG contact and ii) display the results of signal analysis in each patient’s anatomy.
The first tool, GARDEL (GUI for Automatic Registration and Depth Electrode Localization), automatically localizes SEEG contacts and labels each contact according to a pre-specified nomenclature (for instance that of FreeSurfer or MarsAtlas). The second tool, 3Dviewer, enables visualization, within the patient’s 3D anatomy, of signal processing results such as biomarker rates, connectivity graphs or the Epileptogenicity Index.
GARDEL was validated by clinicians in 30 patients and proved highly reliable in determining the actual location of contacts within each patient’s individual anatomy.
GARDEL is a fully automatic electrode localization tool requiring limited user interaction (only for electrode naming or contact correction). The 3Dviewer reads signal processing results and displays them in relation to the patient’s anatomy.
EpiTools can help speed up the interpretation of SEEG data and improve its precision.
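Labelling each contact against an atlas can be sketched as a nearest-centroid lookup. This is a simplifying assumption for illustration; GARDEL labels contacts from the parcellation volume itself, and the region names and coordinates below are hypothetical.

```python
import numpy as np

def label_contacts(contact_xyz, region_centroids, region_names):
    """Assign each SEEG contact the name of the nearest atlas region.

    A toy nearest-centroid stand-in for atlas-based labelling:
    compute all contact-to-centroid distances and pick, per contact,
    the region whose centroid is closest.
    """
    d = np.linalg.norm(contact_xyz[:, None, :] - region_centroids[None, :, :],
                       axis=-1)
    return [region_names[i] for i in d.argmin(axis=1)]

# Hypothetical centroids (mm) and two contacts to be labelled.
centroids = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
contacts = np.array([[1.0, 0.0, 0.0], [9.0, 1.0, 0.0]])
labels = label_contacts(contacts, centroids, ["Hippocampus", "Amygdala"])
```

A real implementation would instead sample the labelled parcellation image at each contact coordinate after registration.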
3D Heritage Online Presenter (3DHOP) is a framework for the creation of advanced web-based visual presentations of high-resolution 3D content. 3DHOP has been designed to cope with the specific needs of the Cultural Heritage (CH) field. By using multiresolution encoding, it can efficiently stream high-resolution 3D models (such as the sampled models usually employed in CH applications); it provides a series of ready-to-use templates and examples tailored to the presentation of CH artifacts; and it interconnects the 3D visualization with the rest of the webpage DOM, making it possible to create integrated presentation schemes (3D + multimedia). In its design and development, we paid particular attention to three factors: ease of use, a smooth learning curve, and performance. Thanks to its modular nature and declarative-like setup, it is easy to learn, configure, and customize at different levels, depending on the programming skills of the user. This allows people with different backgrounds to obtain the required power and flexibility from the framework. 3DHOP is written in JavaScript and based on the SpiderGL library, which employs the WebGL subset of HTML5, providing plugin-free 3D rendering in many web browsers. In this paper we present the capabilities and characteristics of the 3DHOP framework, using examples based on concrete projects.
•The presentation of a complete framework, already in use by the community.
•The declarative-style structure of the framework, as an alternative to imperative approaches.
•The support for the remote rendering of very complex geometries.
•The open-source policy, with the aim of creating a strong community of users.
With improvements to both scan quality and facial recognition software, there is an increased risk of participants being identified from a 3D render of their structural neuroimaging scans, even when all other personal information has been removed. To prevent this, facial features should be removed before data are shared or openly released; while there are several publicly available software algorithms to do this, there has been no comprehensive review of their accuracy within the general population. To address this, we tested multiple algorithms on 300 scans from three neuroscience research projects, funded in part by the Ontario Brain Institute, covering a wide range of ages (3-85 years) and multiple patient cohorts. While skull stripping removes identifiable features more thoroughly, we focused mainly on defacing software, as skull stripping also removes potentially useful information that may be required for future analyses. We tested six publicly available algorithms (afni_refacer, deepdefacer, mri_deface, mridefacer, pydeface, quickshear), with one skull stripper (FreeSurfer) included for comparison. Accuracy was measured through a pass/fail system with two criteria: first, that all facial features had been removed, and second, that no brain tissue was removed in the process. A subset of defaced scans was also run through several preprocessing pipelines to ensure that none of the algorithms would alter the resulting outputs. We found that success rates varied strongly between defacers, with afni_refacer (89%) and pydeface (83%) having the highest rates overall. In both cases, the primary source of failure was a single dataset that the defacer appeared to struggle with: the youngest cohort (3-20 years) for afni_refacer and the oldest (44-85 years) for pydeface, demonstrating that defacer performance not only depends on the data provided, but that this effect varies between algorithms.
While there were some very minor differences between the preprocessing results for defaced and original scans, none were significant, and all fell within the range of variation observed between different NIfTI converters or between converted and raw DICOM files.
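The two pass/fail criteria above can be expressed as a toy mask comparison: the face region must be zeroed out, and the brain region must be untouched. The volumes and masks below are illustrative numpy arrays, not real NIfTI data.

```python
import numpy as np

def deface_check(original, defaced, face_mask, brain_mask):
    """Toy pass/fail check mirroring the study's two criteria.

    Criterion 1: every voxel inside the face region is zeroed out.
    Criterion 2: every voxel inside the brain mask is unchanged.
    """
    face_removed = not np.any(defaced[face_mask])
    brain_intact = np.array_equal(original[brain_mask], defaced[brain_mask])
    return face_removed and brain_intact

# Illustrative 3x3x3 "scan" with disjoint face and brain regions.
vol = np.arange(1.0, 28.0).reshape(3, 3, 3)
face = np.zeros((3, 3, 3), dtype=bool); face[0] = True
brain = np.zeros((3, 3, 3), dtype=bool); brain[2] = True
defaced = vol.copy(); defaced[face] = 0.0
```

In the study the criteria were assessed by visual inspection of 3D renders; an automated check like this would additionally need registered face and brain masks per subject.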
Human hands are essential in everyday tasks, mainly manipulating and grasping objects. Thus, accurate and precise three-dimensional (3D) models of digitally reconstructed hands are valuable to the world of ergonomics. A 3D scan-to-render system, the “3D hands model rendering using a 6-degrees-of-freedom (DoF) collaborative robot”, is proposed to ensure that a person receives the best possible outcome for their unique anatomy. The system uses a 6-DoF robot carrying a two-dimensional (2D) camera sensor and covers the entire production pipeline in a timely, low-cost, precise, and accurate manner, so that an individual can have their hand scanned and an actual 3D reconstruction printed within the same facility on the same day. An accurate hand model is generated using structure-from-motion (SfM) techniques to create a dense point cloud via photogrammetry. The point cloud is used to develop a tetrahedral mesh of the surface of the hand, which is then refined to filter out the noise of the point cloud. The resulting mesh can produce a precise 3D model that tailors products to the consumer's needs. The results show the effectiveness of the 3D hand model.
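The noise-filtering step on the photogrammetric point cloud can be sketched as statistical outlier removal, a standard denoising technique; this is an assumption about the method used, written brute-force and dependency-free for illustration.

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal for a point cloud.

    For each point, compute its mean distance to its k nearest
    neighbours; drop points whose mean distance exceeds the global
    mean by more than std_ratio standard deviations. Brute-force
    pairwise distances keep the sketch self-contained; real pipelines
    use a k-d tree (e.g. Open3D's remove_statistical_outlier).
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # column 0 is the self-distance
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]

# A tight synthetic cluster plus one distant noise point.
rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 0.01, (50, 3))
noisy = np.vstack([cloud, [[10.0, 10.0, 10.0]]])
clean = remove_outliers(noisy)
```

The cleaned cloud would then be passed to surface meshing.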
Purpose: To retrospectively evaluate the accuracy of a novel software platform for assessing completeness of percutaneous thermal ablations.
Materials & methods: Ninety hepatocellular carcinomas (HCCs) in 50 patients receiving percutaneous ultrasound-guided microwave ablation (MWA), with apparent technical success at 24-h post-ablation computed tomography (CT) and ≥1-year imaging follow-up, were randomly selected from a database of 320 HCC ablations (2010-2016). Using novel volumetric registration software, pre-ablation CT volumes of the HCCs, without and with the addition of a 5 mm safety margin, and the corresponding post-ablation necrosis volumes were segmented, co-registered and overlapped. These results were compared with visual side-by-side inspection of axial images.
Results: At 1-year follow-up, CT showed absence of local tumor progression (LTP) in 69/90 (76.7%) cases and LTP in 21/90 (23.3%). For HCCs classified by the software as "incomplete tumor treatments", LTP developed in 13/17 (76.5%), and all 13 (100%) of these LTPs occurred exactly where residual non-ablated tumor was identified by retrospective software analysis. HCCs classified as "complete ablation with <100% 5 mm ablative margins" had LTP in 8/49 (16.3%), while none of the 24 HCCs with "complete ablation including 100% 5 mm ablative margins" had LTP. Differences in LTP between partially and completely ablated HCCs, and between ablations with <100% and 100% 5 mm margins, were statistically significant (p < .0001 and p = .036, respectively). Thus, 13/21 (61.9%) incomplete tumor treatments could have been detected immediately had the software been available at the time of ablation.
Conclusions: A novel software platform for volumetric assessment of ablation completeness may increase the detection of incompletely ablated tumors, thereby holding the potential to avoid subsequent recurrences.
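The three-way classification above (incomplete treatment, complete ablation with <100% margin, complete ablation with 100% margin) can be sketched as a mask-overlap check on co-registered volumes. This toy version assumes isotropic voxel spacing and brute-force distance computation on small grids; it is not the software platform evaluated in the study.

```python
import numpy as np

def margin_coverage(tumor, ablation, spacing_mm, margin_mm=5.0):
    """Classify an ablation against the tumor-plus-margin volume.

    Expands the tumor mask by margin_mm (distance of every voxel to
    the nearest tumor voxel, brute-force, fine for toy grids) and
    reports which of the three study categories the ablation falls in.
    """
    vox = np.argwhere(tumor) * spacing_mm             # tumor voxels in mm
    grid = np.argwhere(np.ones_like(tumor)) * spacing_mm
    dist = np.linalg.norm(grid[:, None, :] - vox[None, :, :], axis=-1).min(axis=1)
    tumor_plus_margin = (dist <= margin_mm).reshape(tumor.shape)
    if np.any(tumor & ~ablation):
        return "incomplete tumor treatment"
    if np.any(tumor_plus_margin & ~ablation):
        return "complete ablation, <100% 5 mm margin"
    return "complete ablation, 100% 5 mm margin"

# Toy 9x9x9 volume, 2 mm voxels, single-voxel tumor in the center.
tumor = np.zeros((9, 9, 9), dtype=bool)
tumor[4, 4, 4] = True
full = np.ones_like(tumor)
```

On this toy volume, ablating everything yields full margin coverage, ablating only the tumor itself leaves the margin uncovered, and ablating nothing is an incomplete treatment.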
•Generation of high-precision 3D mesh objects is implemented.
•Immersive, unconstrained photo-realistic 3D rendering.
•Training of the NeRF-Ag model takes less than 20 min.
Efficiently, accurately, and realistically reconstructing large-scale 3D orchard scenes in a virtual world is an immensely challenging task. This complexity stems from the intricate and expansive nature of real orchard scenes. Traditional 3D reconstruction and rendering methods are limited in modeling efficiency and computational cost, hindering their ability to provide users with immersive experiences. In response to these challenges, this study introduces a strategy for 3D scene reconstruction and rendering grounded in implicit neural representation: the NeRF-Ag model. Building upon the baseline NeRF, this model integrates a multi-resolution latent feature encoding technique, notably improving training efficiency and modeling precision. Furthermore, embedding environmental factors further enhances the model's robustness and practical applicability. The experimental outcomes illustrate that NeRF-Ag attains photo-realistic rendering across small, medium, and large scales. Moreover, it surpasses NeRF on the evaluation metrics PSNR, SSIM, and LPIPS. Notably, the training speed of NeRF-Ag is roughly 39 times faster than NeRF. In 3D reconstruction tasks, NeRF-Ag shows enhanced texture detail and higher modeling accuracy compared to the COLMAP-based 3D reconstruction method. Additionally, this study achieves free-viewpoint rendering of 3D scenes with NeRF-Ag and provides evidence of the connection between the number of training images and the precision of 3D rendering. These conclusions will support and inform the implementation of immersive visual interaction within agricultural digital twin systems.
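For context, the frequency encoding used by the baseline NeRF can be sketched as follows; NeRF-Ag replaces this step with its multi-resolution latent feature encoding, whose details are not reproduced here.

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Baseline NeRF frequency encoding of a coordinate vector.

    Maps each input coordinate to sin/cos features at octave-spaced
    frequencies so the downstream MLP can represent high-frequency
    scene detail that raw coordinates cannot convey.
    """
    freqs = 2.0 ** np.arange(num_freqs) * np.pi
    angles = x[..., None] * freqs             # shape (..., dims, num_freqs)
    enc = np.stack([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)     # (..., dims * num_freqs * 2)

# A single 3D sample point encoded into 3 * 4 * 2 = 24 features.
features = positional_encoding(np.array([0.0, 0.5, 1.0]))
```

Multi-resolution encodings serve the same purpose but learn the per-scale features, which is a key source of the training speedup reported above.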
The two most widely used postprocessing 3D tools in clinical practice are volume rendering (VR) and maximum intensity projection (MIP). With current-generation MDCT, these techniques enable accurate characterization of arterial anatomy and pathology in all anatomic regions. Recently, the VR algorithm has been enhanced by the incorporation of a new lighting model. This new technique, called cinematic rendering, generates photorealistic images with the potential to depict anatomic detail more accurately.
As an enhancement of the technology championed in VR, cinematic rendering promises to provide additional anatomic detail for MDCT interpretation and display. Future investigations must be conducted to evaluate the diagnostic accuracy of cinematic rendering and determine whether interpretative pitfalls result from its unique lighting model in practice.
Amyloidosis is a major problem in over one hundred diseases, including Alzheimer’s disease (AD). Using the iDISCO visualization method, which combines targeted molecular labeling, tissue clearing, and light-sheet microscopy, we studied plaque formation in the intact AD mouse brain at up to 27 months of age. We visualized amyloid plaques in 3D together with tau, microglia, and vasculature. Volume imaging coupled with automated detection and mapping enables precise and fast quantification of plaques within the entire intact mouse brain. The methodology is also applicable to the analysis of frozen human brain samples without specialized preservation. Remarkably, amyloid plaques in human brain tissues showed greater 3D complexity, forming surprisingly large three-dimensional amyloid patterns, or TAPs. The ability to visualize amyloid in 3D, especially in the context of its micro-environment, and the discovery of large TAPs may have important scientific and medical implications.
•iDISCO clearing is used to detect amyloid plaques in a full mouse brain hemisphere
•3D amyloid patterns (TAPs) are detected in human brain archival samples
•Triple labeling of cleared tissues allows highly contextual analysis of amyloidogenesis
•Automated anatomical mapping empowers accurate and fast quantitation of plaques
Liebmann et al. present 3D renderings of Alzheimer’s disease in an entire mouse brain hemisphere using iDISCO. Volume imaging coupled with automated detection and mapping to the Allen Brain Atlas enables precise and fast quantification of plaques. Plaques in archival human brain samples showed greater 3D complexity.
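The automated plaque detection and quantification can be sketched as intensity thresholding followed by connected-component counting. The threshold and toy volume below are assumptions for illustration, not the paper's actual pipeline, which operates on registered light-sheet volumes.

```python
import numpy as np
from collections import deque

def count_plaques(volume, threshold):
    """Count plaques as 6-connected components above an intensity threshold.

    A minimal stand-in for automated plaque quantification: threshold
    the volume, then flood-fill each unvisited foreground voxel to
    label one connected component (plaque) at a time.
    """
    mask = volume > threshold
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for start in zip(*np.nonzero(mask)):
        if seen[start]:
            continue
        count += 1                      # new plaque found; flood-fill it
        seen[start] = True
        queue = deque([start])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= c < s for c, s in zip(n, mask.shape)) \
                        and mask[n] and not seen[n]:
                    seen[n] = True
                    queue.append(n)
    return count

# Toy volume: one plaque spanning two voxels, plus one isolated plaque.
vol = np.zeros((5, 5, 5))
vol[0, 0, 0] = vol[0, 0, 1] = 1.0
vol[4, 4, 4] = 1.0
n_plaques = count_plaques(vol, 0.5)
```

Per-component voxel counts and centroids would extend this sketch toward the volume and anatomical-mapping statistics described in the text.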