Purpose
The treatment of pelvic and acetabular fractures remains technically demanding, and traditional surgical navigation systems suffer from hand–eye mis-coordination. This paper describes a multi-view interactive virtual-physical registration method to enhance the surgeon's depth perception, together with a mixed reality (MR)-based surgical navigation system for pelvic and acetabular fracture fixation.
Methods
First, the pelvic structure is reconstructed by segmentation of a preoperative CT scan, and an insertion path for the percutaneous LC-II screw is computed. A custom hand-held registration cube is used for virtual-physical registration. Three strategies are proposed to improve the surgeon's depth perception: vertex alignment, tremble compensation, and multi-view averaging. During navigation, distance and angular-deviation visual cues are updated to help the surgeon with guide wire insertion. The methods have been integrated as an MR module into a surgical navigation system.
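The multi-view averaging strategy can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes each view yields a rigid registration as a unit quaternion plus a translation, and fuses them by component-wise translation averaging and eigenvector-based quaternion averaging (the function name is hypothetical):

```python
import numpy as np

def average_registrations(rotations, translations):
    """Fuse several single-view rigid registrations into one estimate.

    rotations: list of unit quaternions (w, x, y, z), one per view.
    translations: list of 3-vectors, one per view.
    Returns the averaged quaternion and translation.
    """
    # Translations average component-wise.
    t_avg = np.mean(np.asarray(translations, dtype=float), axis=0)

    # Rotations are averaged via the eigenvector (largest eigenvalue) of the
    # sum of quaternion outer products, which is robust to the q / -q sign
    # ambiguity of unit quaternions.
    M = np.zeros((4, 4))
    for q in rotations:
        q = np.asarray(q, dtype=float)
        q /= np.linalg.norm(q)
        M += np.outer(q, q)
    q_avg = np.linalg.eigh(M)[1][:, -1]
    if q_avg[0] < 0:  # canonical sign: non-negative scalar part
        q_avg = -q_avg
    return q_avg, t_avg
```

Because the outer product cancels the sign of each quaternion, two registrations that differ only as q and -q still average to the same rotation.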
Results
Phantom experiments were conducted. Ablation experiments demonstrated the effectiveness of each strategy in the virtual-physical registration method. The proposed method achieved the best accuracy in comparison with related works. For percutaneous guide wire placement, our system achieved a mean bony entry point error of 2.76 ± 1.31 mm, a mean bony exit point error of 4.13 ± 1.74 mm, and a mean angular deviation of 3.04 ± 1.22°.
Conclusions
The proposed method can improve virtual-physical fusion accuracy. The developed MR-based surgical navigation system has clinical application potential. Cadaver and clinical experiments will be conducted in the future.
Purpose
The free fibula flap is the gold standard for the treatment of mandibular defects. However, the existing preoperative planning protocol is cumbersome to execute, costly to learn, and integrates poorly with robot-assisted cutting of the fibular osteotomy plane.
Methods
A surgical planning system for robot-assisted mandibular reconstruction with a fibula free flap is proposed in this study. A fibular osteotomy planning algorithm is presented so that the virtual surgical plan of the fibular osteotomy segments can be obtained automatically from selected mandibular anatomical landmarks. The planned osteotomy planes are then converted into the motion path of the robotic arm, and the automated fibular osteotomy is completed under optical navigation.
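Converting a planned osteotomy plane into a robot tool pose requires fixing a frame on the plane. The following is a hedged sketch of one plausible convention, not the system's actual interface: it builds a 4×4 pose whose z-axis is the plane normal; `plane_to_pose` and the `up_hint` parameter are hypothetical names.

```python
import numpy as np

def plane_to_pose(origin, normal, up_hint=(0.0, 0.0, 1.0)):
    """Build a 4x4 homogeneous tool pose whose z-axis is the plane normal.

    origin: a point on the osteotomy plane (tool contact point).
    normal: plane normal, taken here as the cutting direction.
    up_hint: arbitrary reference used to fix the in-plane axes.
    """
    z = np.asarray(normal, float)
    z /= np.linalg.norm(z)
    x = np.cross(np.asarray(up_hint, float), z)
    if np.linalg.norm(x) < 1e-8:          # up_hint parallel to the normal
        x = np.cross([1.0, 0.0, 0.0], z)  # fall back to another reference
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                    # completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, origin
    return T
```

A sequence of such poses, one per planned plane, could then be interpolated into the arm's motion path; real robot controllers impose their own pose conventions and safety constraints.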
Results
Surgical planning was performed on 35 patients to verify the feasibility of our system's virtual surgical planning module, with an average planning time of 13 min. Phantom experiments were performed to evaluate the reliability and stability of the system. The average distance and angular deviations of the osteotomy planes were 1.04 ± 0.68 mm and 1.56 ± 1.10°, respectively.
Conclusions
Our system achieves not only precise and convenient preoperative planning but also a safe and reliable osteotomy trajectory. Clinical application of the system to mandibular reconstruction surgery is expected soon.
To realize three-dimensional visual output of surgical navigation information by studying the cross-linking of mixed reality display devices and high-precision optical navigators.
A quaternion-based point alignment algorithm was applied to realize the positional calibration among the mixed reality display device, the high-precision optical navigator, and real-time patient tracking; based on open-source SDKs and development tools, a mixed reality surgical system with visual positioning and tracking was developed. In this study, four patients were selected for mixed reality-assisted tumor resection and reconstruction and were re-examined 1 month after the operation. We reconstructed the postoperative CT, used 3DMeshMetric to form an error distribution map, and completed the error analysis and quality control.
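Quaternion-based point alignment of paired fiducials is classically solved in closed form by Horn's method: build a 4×4 matrix from the cross-covariance of the centered point pairs and take the eigenvector of its largest eigenvalue as the rotation quaternion. A minimal sketch follows; the system's actual implementation may differ.

```python
import numpy as np

def quaternion_point_alignment(src, dst):
    """Closed-form rigid registration between paired 3D point sets
    (Horn's quaternion method): finds R, t minimizing ||dst - (R src + t)||^2."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_s, c_d = src.mean(0), dst.mean(0)
    S = (src - c_s).T @ (dst - c_d)                  # 3x3 cross-covariance
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz],
    ])
    # Rotation quaternion = eigenvector of the largest eigenvalue of N.
    w, x, y, z = np.linalg.eigh(N)[1][:, -1]
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return R, c_d - R @ c_s
```

With three or more non-collinear fiducials, the method recovers the navigator-to-display transform without iteration, which is why it is a common choice for this kind of calibration.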
We realized the cross-linking of the mixed reality display device and the high-precision optical navigator, developed a digital maxillofacial surgery system based on mixed reality technology, and successfully performed mixed reality-assisted tumor resection and reconstruction in 4 cases.
The maxillofacial digital surgery system based on mixed reality technology can superimpose and display three-dimensional navigation information in the surgeon's field of vision. Moreover, it solves the visual-conversion and spatial-conversion problems of existing navigation systems. It improves the efficiency of digitally assisted surgery, effectively reduces the surgeon's dependence on spatial experience and imagination, and protects important anatomical structures during surgery. It has significant clinical application value and potential.
Research indicates that the apprenticeship model, which is the gold standard for training surgical residents, is becoming obsolete. For that reason, there is a continuing effort toward the development of high-fidelity surgical simulators to replace it. Applying Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in surgical simulators increases the fidelity, level of immersion, and overall experience of these simulators.
The objective of this review is to provide a comprehensive overview of the application of VR, AR, and MR in distinct surgical disciplines, including maxillofacial surgery and neurosurgery. The current developments in these areas, as well as potential future directions, are discussed.
The key components for incorporating VR into surgical simulators are visual and haptic rendering. These components ensure that the user is completely immersed in the virtual environment and can interact with it as in the physical world. The key components for applying AR and MR to surgical simulators are the tracking system and the visual rendering. The advantages of these surgical simulators are the ability to perform user evaluations and to increase the training frequency of surgical residents.
Objective: Intraoperative liver deformation poses a considerable challenge during liver surgery, causing significant errors in image-guided surgical navigation systems. This study addresses a critical non-rigid registration problem in liver surgery: the alignment of intrahepatic vascular trees. The goal is to deform the complete vascular shape extracted from the preoperative Computed Tomography (CT) volume, aligning it with sparse vascular contour points obtained from intraoperative ultrasound (iUS) images. Challenges arise from the intricate nature of the slender vascular branches, which cause existing methods to struggle with accuracy and vascular self-intersection. Methods: We present a novel non-rigid sparse-dense registration pipeline structured in a coarse-to-fine fashion. In the initial coarse registration stage, we introduce a parametrized deformation graph and a Welsch function-based error metric to enhance the convergence and robustness of non-rigid registration. For the fine registration stage, we propose an automatic curvature-based algorithm to detect and eliminate overlapping regions. Subsequently, we generate the complete vascular shape using posterior computation of a Gaussian Process Shape Model. Results: Experimental results on simulated data demonstrate the accuracy and robustness of our proposed method. Evaluation of the target registration error of tumors highlights the clinical significance of our method for tumor location computation. Comparative analysis against related methods reveals the superior accuracy and competitive efficiency of our approach. Moreover, ex vivo swine liver experiments and clinical experiments were conducted to evaluate the method's performance. Conclusion: The experimental results emphasize the accurate and robust performance of our proposed method. Significance: Our proposed non-rigid registration method holds significant application potential in clinical practice.
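A Welsch function-based error metric replaces the quadratic penalty with a saturating one, so gross outliers (e.g., spurious iUS contour points) contribute a bounded cost. Below is a minimal sketch of the standard Welsch penalty and its iteratively-reweighted-least-squares (IRLS) weight with scale parameter ν; the pipeline's exact formulation may differ.

```python
import numpy as np

def welsch(r, nu):
    """Welsch robust penalty: ~ r^2 / 2 for small residuals r,
    saturating at nu^2 / 2 for large ones."""
    return (nu ** 2 / 2.0) * (1.0 - np.exp(-(r / nu) ** 2))

def welsch_weight(r, nu):
    """IRLS weight psi'(r)/r = exp(-(r/nu)^2); outliers get near-zero weight,
    so they barely influence the deformation-graph update."""
    return np.exp(-(r / nu) ** 2)
```

Annealing ν from large to small values recovers least-squares behavior early (good convergence) and strong outlier rejection late (good robustness), which matches the coarse-to-fine design described above.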
Orthopedic surgery remains technically demanding due to the complex anatomical structures and cumbersome surgical procedures. The introduction of image-guided orthopedic surgery (IGOS) has significantly decreased the surgical risk and improved the operation results. This review focuses on the application of recent advances in artificial intelligence (AI), deep learning (DL), augmented reality (AR) and robotics in image-guided spine surgery, joint arthroplasty, fracture reduction and bone tumor resection. For the pre-operative stage, key technologies of AI and DL based medical image segmentation, 3D visualization and surgical planning procedures are systematically reviewed. For the intra-operative stage, the development of novel image registration, surgical tool calibration and real-time navigation are reviewed. Furthermore, the combination of the surgical navigation system with AR and robotic technology is also discussed. Finally, the current issues and prospects of the IGOS system are discussed, with the goal of establishing a reference and providing guidance for surgeons, engineers, and researchers involved in the research and development of this area.
Orthodontic treatment is a lengthy process that requires regular in-person dental monitoring, making remote dental monitoring a viable alternative when face-to-face consultation is not possible. In this study, we propose an improved 3D teeth reconstruction framework that automatically restores the shape, arrangement, and dental occlusion of the upper and lower teeth from five intra-oral photographs to help orthodontists visualize the condition of patients in virtual consultations. The framework comprises a parametric model that leverages statistical shape modeling to describe the shape and arrangement of teeth, a modified U-net that extracts teeth contours from intra-oral images, and an iterative process that alternates between finding point correspondences and optimizing a compound loss function to fit the parametric teeth model to the predicted teeth contours. We perform a five-fold cross-validation on a dataset of 95 orthodontic cases and report an average Chamfer distance of 1.0121 mm² and an average Dice similarity coefficient of 0.7672 on all the test samples in the cross-validation, demonstrating a significant improvement over previous work. Our teeth reconstruction framework provides a feasible solution for visualizing 3D teeth models in remote orthodontic consultations.
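A Chamfer distance reported in mm² corresponds to the squared form, commonly computed as the symmetric mean of nearest-neighbor squared distances between two point sets. A minimal sketch under that common convention follows; the paper's exact normalization may differ.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric squared Chamfer distance between point sets a (n,3) and b (m,3):
    mean squared distance from each point to its nearest neighbor in the other
    set, summed over both directions."""
    # Pairwise squared distances via broadcasting: shape (n, m).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

The brute-force broadcasting here is O(n·m) in memory; for dense tooth meshes a k-d tree nearest-neighbor query would be the practical choice.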
Purpose
Precise determination of the target is an essential procedure in prostate interventions, such as prostate biopsy, lesion detection, and targeted therapy. However, prostate delineation can be challenging in some cases due to tissue ambiguity or the lack of a partial anatomical boundary. In this study, we propose a novel supervised registration-based algorithm for precise prostate segmentation, which combines a convolutional neural network (CNN) with a statistical shape model (SSM).
Methods
The proposed network mainly consists of two branches. One, called the SSM-Net branch, was exploited to predict the shape transform matrix, shape control parameters, and shape fine-tuning vector for the generation of the prostate boundary. Then, according to the inferred boundary, a normalized distance map was calculated as the output of SSM-Net. The other branch, named ResU-Net, was employed to predict a probability label map from the input images at the same time. Integrating the outputs of these two branches, the optimal weighted sum of the distance map and the probability map was regarded as the prostate segmentation.
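The fusion of the two branch outputs can be illustrated with a hypothetical sketch, in which `alpha` stands in for the learned or tuned optimal weight and the 0.5 threshold is an assumption, not a value taken from the paper:

```python
import numpy as np

def fuse_maps(distance_map, prob_map, alpha=0.6, threshold=0.5):
    """Combine a normalized SSM-Net distance map and a ResU-Net probability
    map by weighted sum, then threshold to a binary segmentation mask.

    alpha and threshold are illustrative placeholders for the optimal values.
    """
    fused = alpha * distance_map + (1.0 - alpha) * prob_map
    return (fused >= threshold).astype(np.uint8)
```

The weighted sum lets the shape prior (distance map) pull the segmentation toward anatomically plausible boundaries where the CNN probabilities are ambiguous.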
Results
Two public datasets, PROMISE12 and NCI-ISBI 2013, were utilized to evaluate the performance of the proposed algorithm. The results demonstrated that the segmentation algorithm achieved the best performance with an SSM of 9500 nodes, obtaining a Dice coefficient of 0.907 and an average surface distance of 1.85 mm. Compared with other methods, our algorithm delineates the prostate region more accurately and efficiently. In addition, we verified the impact of model elasticity augmentation and the fine-tuning item on the network's segmentation capability. Both factors improved the delineation accuracy, with the Dice coefficient increased by 10% and 7%, respectively.
Conclusions
Our segmentation method has the potential to be an effective and robust approach for prostate segmentation.
Minimally invasive surgery (MIS) remains technically demanding due to the difficulty of tracking hidden critical structures within the moving anatomy of the patient. In this study, we propose a soft tissue deformation tracking augmented reality (AR) navigation pipeline for laparoscopic surgery of the kidneys. The proposed navigation pipeline addresses two main sub-problems: initial registration and deformation tracking. Our method utilizes preoperative MR or CT data and binocular laparoscopes without any additional interventional hardware. The initial registration is resolved through a probabilistic rigid registration algorithm and elastic compensation based on dense point cloud reconstruction. For deformation tracking, the sparse feature point displacement vector field continuously provides temporal boundary conditions for the biomechanical model. To enhance the accuracy of the displacement vector field, a novel deep learning-based feature point selection strategy is proposed. Moreover, an ex vivo experimental method for assessing the error of internal structures is presented. The ex vivo experiments indicate an external surface reprojection error of 4.07 ± 2.17 mm and a maximum mean absolute error for internal structures of 2.98 mm. In vivo experiments indicate mean absolute errors of 3.28 ± 0.40 mm and 1.90 ± 0.24 mm, respectively. The combined qualitative and quantitative findings indicate the potential of our AR-assisted navigation system to improve the clinical application of laparoscopic kidney surgery.
The distal interlocking of intramedullary nails remains a technically demanding procedure. Existing augmented reality-based solutions still suffer from the hand-eye coordination problem, prolonged operation time, and inadequate resolution. In this study, an augmented reality-based navigation system for distal interlocking of intramedullary nails is developed using Microsoft HoloLens 2, the state-of-the-art optical see-through head-mounted display.
A customized registration cube is designed to provide surgeons with better depth perception when performing registration procedures. During drilling, surgeons can obtain accurate, in-situ visualization of the intramedullary nail and the drilling path, and dynamic navigation is enabled. An intraoperative warning system is proposed to provide intuitive feedback on real-time deviations and electromagnetic disturbances.
The preclinical phantom experiment showed that the reprojection errors along the X, Y, and Z axes were 1.55 ± 0.27 mm, 1.71 ± 0.40 mm, and 2.84 ± 0.78 mm, respectively. The end-to-end evaluation method indicated the distance error was 1.61 ± 0.44 mm, and the 3D angle error was 1.46 ± 0.46°. A cadaver experiment was also conducted to evaluate the feasibility of the system.
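Distance and 3D angle errors of this kind can be computed from the planned and achieved drill trajectories. Below is a minimal sketch, assuming each trajectory is given as an entry point plus a direction vector and that axis direction signs may flip (hence the absolute dot product); it is an illustration of the metrics, not the system's evaluation code.

```python
import numpy as np

def trajectory_errors(p_entry_plan, d_plan, p_entry_act, d_act):
    """Return (entry-point distance error in mm, 3D angle error in degrees)
    between a planned and an achieved drill trajectory."""
    # Euclidean distance between planned and achieved entry points.
    dist = np.linalg.norm(np.asarray(p_entry_act, float) -
                          np.asarray(p_entry_plan, float))
    # Angle between the two axis directions, sign-insensitive.
    u = np.asarray(d_plan, float)
    u = u / np.linalg.norm(u)
    v = np.asarray(d_act, float)
    v = v / np.linalg.norm(v)
    angle = np.degrees(np.arccos(np.clip(abs(np.dot(u, v)), 0.0, 1.0)))
    return dist, angle
```

Clipping the dot product guards against floating-point values marginally above 1 before `arccos`.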
Our system has potential advantages over 2D-screen-based and pointing-device-based navigation systems in terms of accuracy and time consumption, and has broad application prospects.
∙ We proposed a HoloLens-to-world registration method using an external EM tracker and a customized registration cube. Better depth perception and less registration time can be achieved.
∙ We developed an integrated AR-based surgical navigation system for distal interlocking of intramedullary nails without radiation exposure or the hand-eye coordination problem.
∙ We conducted a cadaver experiment to demonstrate the feasibility of the proposed system.