This paper presents a novel dynamic motion planner designed to provide safe motions in the context of the Smart Autonomous Robot Assistant Surgeon (SARAS) surgical platform. SARAS is a multi-robot autonomous platform designed to execute auxiliary tasks in Minimally Invasive Surgeries (MIS) with a high degree of autonomy. Developing robotic systems with a high level of autonomy and reliability requires perceiving the workspace and human actions, contextualizing them within the surgical workflow, and, finally, planning and dynamically controlling the required motions. The autonomous control relies on a multi-level hierarchical Finite State Machine (hFSM) that decides and supervises all robot actions and their transitions. This approach requires a multi-granularity decomposition of the surgical procedure and defines different motion profiles to preserve, and safely interact with, the patient's anatomy. The motion planner is developed in the minimally invasive surgery context because it is an extreme use case in which the environment is complex, dynamic, and unstructured. Moreover, in the SARAS platform the autonomous robots share their workspace with, and collaborate alongside, other human-guided robotic instruments. This creates an even more complex working environment and defines a set of hierarchical relationships in which auxiliary instruments have lower priority. The presented motion planner acts at two levels: global and local. The Global Planner generates an initial spline-based trajectory that, defined by a set of control points, follows a profile determined by the ongoing surgical action and the interaction with the patient's anatomy. Then, during the execution of the motion, the Local Planner observes the workspace (anatomy and other tools) and applies different virtual potential fields to the control points, dynamically modifying their positions to avoid potential collisions or tool blocking while maintaining trajectory coherence.
At this level, it reactively modifies the trajectory between the tool position and the next control point by applying Dynamical Systems-based obstacle avoidance. This approach ensures collision-free connections between the spline control points. The proposed motion planner is validated in a realistic surgical scenario. The experimental results are analysed from data collected during various Robotic-Assisted Radical Prostatectomy surgeries on manikins, performed with the SARAS SOLO-SURGERY platform: the main surgeon teleoperates a da Vinci Research Kit while two robotic arms autonomously perform different auxiliary surgical tasks.
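The abstract above does not include code, but the core idea of displacing spline control points with a repulsive virtual potential field can be sketched as follows. This is a minimal illustration, not the SARAS implementation: the influence radius, gain, and coordinates are arbitrary assumed values.

```python
import math

def repulse(cp, obstacle, influence=0.05, gain=1e-6):
    """Push one control point away from an obstacle when it enters
    the influence radius (classic repulsive potential field)."""
    dx = [c - o for c, o in zip(cp, obstacle)]
    d = math.sqrt(sum(v * v for v in dx))
    if d >= influence or d == 0.0:
        return cp  # outside the field: leave the control point untouched
    # Gradient magnitude of U_rep = 0.5*gain*(1/d - 1/influence)^2,
    # directed away from the obstacle
    mag = gain * (1.0 / d - 1.0 / influence) / (d * d)
    return [c + mag * v for c, v in zip(cp, dx)]

# Nudge a trajectory's control points away from a tool tip at the origin
control_points = [[0.10, 0.0, 0.0], [0.03, 0.0, 0.0], [-0.10, 0.0, 0.0]]
updated = [repulse(cp, [0.0, 0.0, 0.0]) for cp in control_points]
```

Only the middle control point lies inside the influence radius, so only it is displaced; the trajectory far from the obstacle is untouched, which is what keeps the spline coherent.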
• Collision-free motion generation for autonomous execution of assistive tasks in Robotic Minimally Invasive Surgery (RMIS).
• Integration into a cognitive-based control for RMIS.
• Global Motion Level: spline-based trajectories, continuously updated using virtual potential fields.
• Local Motion Level: real-time obstacle avoidance between the spline control points of the SARAS robots for RMIS.
• Experimental verification using the sRARP procedure.
Grip force measurement enables better control performance for robot-assisted minimally invasive surgery (RMIS). In this article, a deep-learning-based method is proposed to measure the instrument grip force without mounting additional sensors. First, the training trajectory and input data frame are studied. Seven data features are derived from the original sensor data, and a binary butterfly optimization algorithm with opposition-based learning (bBOA-OBL) is applied to form a suitable input data frame. Based on this data frame, a novel convolutional network with an attention mechanism and feedforward of current (CAM-FoC) is proposed to calculate the grip force. The ablation study shows that the master-slave trajectory yields the highest accuracy, the optimized input data frame reduces the error by 17%, and each component of CAM-FoC enhances the measurement accuracy. Experiments and comparisons are also carried out. The root mean squared error (RMSE) in the experiment is only 0.1233 N, lower than that of four other popular methods. In addition, the average computation time is around 2 ms across different platforms. The method achieves higher measurement accuracy than the state of the art with acceptable computational complexity. The technology could potentially provide grip force feedback in clinical use, which can significantly improve surgical performance. The method could also be applied to other robots built on cable-pulley mechanisms to estimate external forces.
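RMSE is the headline accuracy metric in the abstract above. For reference, it is computed as follows; the force values below are made-up illustrative numbers, not data from the paper:

```python
import math

def rmse(predicted, measured):
    """Root mean squared error between predicted and reference grip forces (N)."""
    n = len(predicted)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)

# Hypothetical network outputs vs. reference sensor readings, in newtons
pred = [0.50, 1.10, 1.95, 0.42]
ref  = [0.60, 1.00, 2.00, 0.40]
err = rmse(pred, ref)
```

An RMSE of 0.1233 N thus means the typical deviation of the estimated grip force from the sensor ground truth is a bit over a tenth of a newton.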
Training a surgeon to be skilled and competent in a given surgical procedure is essential for providing high-quality care and reducing the risk of complications. However, existing training techniques limit the in-depth analyses of surgical motions needed to evaluate these skills accurately. We develop a method to identify gestures by applying unsupervised methods to cluster surgical activities learned directly from raw kinematic data. We design an unsupervised method to determine the surgical motions in a suturing procedure based on predefined surgical gestures. The first step is to find prototypes by clustering the expert surgeon's surgemes across all of that expert's trials. Then, we map the other surgeons' surgemes to the nearest prototype representative and report the clustering accuracy using the Rand index. We employ four techniques in our proposed unsupervised approach for gesture clustering, based on hierarchical and fuzzy c-means (FCM) clustering algorithms. In addition, we highlight the advantages of representing time series data before clustering, in terms of computation time savings and reduced system complexity.
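The Rand index used above to report clustering accuracy is the fraction of sample pairs on which two labelings agree (grouped together in both, or apart in both). A minimal sketch with made-up surgeme labels:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of sample pairs on which two clusterings agree:
    either both place the pair in the same cluster, or both separate it."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = 0
    for i, j in pairs:
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a == same_b:
            agree += 1
    return agree / len(pairs)

# Hypothetical labels: expert-derived prototypes vs. a trainee's clustering
expert  = [0, 0, 1, 1, 2, 2]
trainee = [0, 0, 1, 2, 2, 2]
score = rand_index(expert, trainee)  # 12 of 15 pairs agree -> 0.8
```

A score of 1.0 would mean the mapped surgemes reproduce the expert prototypes exactly; values near 0.5 indicate little better than chance agreement.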
In this article, a primary-secondary robotic system with force sensing and feedback is developed to address the critical lack of haptic information in minimally invasive surgery (MIS). A 3-D microforce sensor with high sensitivity and linearity is designed and integrated into the end gripper of the secondary manipulator, so the real-time 3-D interaction force between the gripper and tissue can be directly detected. The primary manipulator is equipped with a specially designed force feedback grasper. During clamping and suturing operations, the operator perceives the interaction force fed back from the microsensor in the end gripper, allowing timely adjustment of the clamping force. Experimental results demonstrate that incorporating force sensing and feedback control into the primary-secondary robotic system reduces the maximum clamping force from 1.2 to 0.8 N and the average clamping force from 0.8 to 0.55 N, a significant reduction of roughly 30%. In needle holding and knotting experiments for suturing, the operator can keep the real-time 3-D operating force within 2 N, ensuring the safety of MIS procedures.
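The quoted before/after clamping forces pin down the relative reduction directly; a quick check of the arithmetic:

```python
def percent_reduction(before, after):
    """Relative drop in clamping force once feedback is enabled, in percent."""
    return 100.0 * (before - after) / before

max_drop = percent_reduction(1.2, 0.8)   # peak clamping force, N -> ~33.3%
avg_drop = percent_reduction(0.8, 0.55)  # mean clamping force, N -> ~31.3%
```

Both figures land at roughly a one-third reduction in the force applied to tissue.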
Abstract
In robotic-assisted minimally invasive surgery (RMIS), non-sentient surgical instruments make it impossible for surgeons to perceive the operational force during a procedure. To provide surgeons with force telepresence during surgery, a highly integrated MEMS-based piezoresistive 3D force sensing module is demonstrated, composed of a MEMS-based piezoresistive sensor chip, an encapsulation cap with miniature pyramids, and a top elastic layer. This combined construction allows rapid replacement of elastic layers with different thicknesses and different Young's moduli, realizing an adjustable sensitivity and measurement range for different surgeries. By replacing the elastic layer, the same change in resistance can be obtained for external forces of 3 and 10 N along the Z-axis, and the sensitivity along the X- and Y-axes can be increased by up to eight and seven times, respectively. Meanwhile, its miniature size enables integration into the tips of various surgical instruments. Experimental demonstrations involving palpation-based detection of simulated kidney nodules, ex vivo puncture, and threading force estimation on tissue-mimicking material validate the effectiveness of the adjustable sensitivity and range. This sensing module is a promising low-cost, versatile solution for surgical instruments that can facilitate the unification of force-sensing/intelligent surgical instruments.
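The claimed range adjustment can be read as a sensitivity trade-off: if the same full-scale resistance change is reached at 3 N with a soft layer and at 10 N with a stiff one, the soft layer is 10/3 times more sensitive. A sketch, assuming a purely hypothetical full-scale change of 5 ohms (the paper does not give this value):

```python
def sensitivity(delta_r_ohm, force_n):
    """Sensor sensitivity as resistance change per newton of applied force."""
    return delta_r_ohm / force_n

# Assumed: both elastic layers produce the same 5-ohm change at full scale
soft_layer  = sensitivity(5.0, 3.0)   # full scale 3 N along Z
stiff_layer = sensitivity(5.0, 10.0)  # full scale 10 N along Z
ratio = soft_layer / stiff_layer      # sensitivity gained by the softer layer
```

Swapping layers thus trades measurement range for resolution, which is the mechanism behind the adjustable-sensitivity claim.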
This paper tests the control system of the LAPARA System, a Philippine-made robotic surgical system. Three types of tests were performed: PID optimization, position checking, and data transfer rate and memory bandwidth testing. The PID optimization yielded gains of P = 2.32, I = 0.4, and D = 1.5, chosen to ensure the system runs smoothly. The system also performed properly during position checking, although movement in pitch and yaw required refinement due to the system's constraints. The data transfer rate for the PC-to-Arduino Due connection reached 128 kb/s, slower than the rated 480 Mbps, while the memory bandwidth testing showed capacity for storing 23,040 32-bit values. In conclusion, although minor adjustments were needed to refine the system, the LAPARA system performed as intended.
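With the reported gains, a textbook discrete PID step can be sketched as follows. This is an illustrative sketch only: the sample time, setpoint, and plant are assumptions, not details from the paper.

```python
class PID:
    """Textbook discrete PID; default gains follow the values reported for LAPARA."""
    def __init__(self, kp=2.32, ki=0.4, kd=1.5, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID()
u1 = pid.step(1.0, 0.0)  # first sample: large output from the derivative kick
u2 = pid.step(1.0, 0.0)  # same error again: derivative term vanishes
```

Note the derivative kick on the first sample (the step in error is differentiated); practical implementations often differentiate the measurement instead of the error to avoid it.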
Background
Robotic-assisted minimally invasive surgery changes the direct hand-eye coordination of traditional surgery into indirect instrument-camera coordination, which affects ergonomics, operation performance, and safety.
Methods
A camera, two instruments, and a target are used as descriptors to construct the workspace correspondence and geometrical relationships in a surgical operation. A parametric model with a set of parameters is proposed to describe the hand–eye coordination of the surgical robot.
Results
From the results, optimal values and acceptable ranges of these parameters are identified from two tasks. A 90° viewing angle had the longest completion time; a 60° instrument elevation angle and a 0° deflection angle gave better performance; manipulation angle and observation distance had no significant effect on task performance.
Conclusion
This hand–eye coordination model provides evidence for robotic design, surgeon training, and robotic initialization to achieve dexterous and safe manipulation in surgery.
Malaria is a major public health risk in Rwanda, where children and pregnant women are most vulnerable. This infectious disease remains the main cause of morbidity and mortality among children in Rwanda. The main objectives of this study were to assess the prevalence of malaria among children aged six months to 14 years in Rwanda and to identify the factors associated with malaria in this age group. The study used data from the 2017 Rwanda Malaria Indicator Survey. Due to the complex sampling design, a survey logistic regression model was fitted to the data, with the presence or absence of malaria as the outcome variable. The analysis considered 8209 children, and the overall prevalence of malaria was 14.0%. The rate was highest among children aged 5-9 years (15.6%) compared to other age groups. The prevalence was also higher among children from the poorest families (19.4%) than among children from the richest families (4.3%), and higher among children from rural households (16.2%) than urban households (3.4%). Other significant factors associated with malaria were the child's gender, the number of household members, whether the household had mosquito bed nets for sleeping, whether the dwelling had undergone indoor residual spraying in the 12 months before the survey, the location of the household's source of drinking water, the main wall materials of the dwelling, and the age of the head of the household. The prevalence of malaria was also high among children living in houses with walls built from poorly suited materials, which suggests the need for intervention in construction materials. Further, the Eastern Province needs special consideration in malaria control due to its higher prevalence compared with the other provinces.
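The survey logistic regression reports associations on the odds scale; from the prevalences quoted above one can compute crude, unadjusted odds ratios (these are illustrative back-of-envelope figures, not the survey-weighted model estimates):

```python
def odds_ratio(p_exposed, p_unexposed):
    """Crude odds ratio from two prevalence proportions."""
    odds_e = p_exposed / (1.0 - p_exposed)
    odds_u = p_unexposed / (1.0 - p_unexposed)
    return odds_e / odds_u

rural_vs_urban = odds_ratio(0.162, 0.034)  # ~5.5: rural children's odds of malaria
poor_vs_rich   = odds_ratio(0.194, 0.043)  # ~5.4: poorest vs. richest families
```

Both crude ratios exceed five, which is why residence and wealth stand out among the factors the model identifies.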