To enable a natural and fluent human–robot collaboration flow, it is critical for a robot to comprehend its human peers' ongoing actions, predict their behavior in the near future, and plan its own actions correspondingly. In particular, the capability to make early predictions is important, so that the robot can foresee the precise timing of a turn-taking event and start motion planning and execution early enough to smooth the turn-taking transition. Such proactive behavior reduces the human's waiting time, increases efficiency, and enhances naturalness in collaborative tasks. To that end, this paper presents the design and implementation of an early turn-taking prediction algorithm tailored to physical human–robot collaboration scenarios. Specifically, a robotic scrub nurse system that can comprehend the surgeon's multimodal communication cues and perform turn-taking prediction is presented. The developed algorithm was tested on a data set of simulated surgical procedures collected from a surgeon–nurse tandem. The proposed turn-taking prediction algorithm is found to be significantly superior to its algorithmic counterparts, and is more accurate than the human baseline when little partial input is given (less than 30% of the full action). After observing more information, the algorithm achieves performance comparable to humans, with an F1 score of 0.90.
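The early-prediction idea can be illustrated with a minimal sketch: a classifier is applied to growing prefixes of an observed cue sequence and commits to a turn-taking prediction as soon as its confidence clears a threshold. The feature summary (a prefix mean), the fixed weight vector, and the threshold below are all hypothetical stand-ins for the paper's trained multimodal model.

```python
import numpy as np

def predict_turn_taking(frames, weights, threshold=0.8):
    """Return the earliest frame index (1-based) at which a turn-taking
    event is predicted, or None if confidence never clears the threshold.
    Illustrative only: a real system would use learned multimodal features."""
    for t in range(1, len(frames) + 1):
        prefix = frames[:t]                               # partial observation
        feature = prefix.mean(axis=0)                     # summarize the prefix
        score = 1.0 / (1.0 + np.exp(-feature @ weights))  # sigmoid confidence
        if score >= threshold:
            return t                                      # commit early
    return None

# Toy usage: a single cue channel whose intensity ramps up over 10 frames,
# so the prediction fires well before the action completes.
frames = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
print(predict_turn_taking(frames, np.array([8.0])))
```

The key property being sketched is that the decision is made on a partial observation: the earlier the confidence threshold is reached, the more lead time the robot has for motion planning.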
• A sterile system for navigating MRI images in the operating room is presented.
• A novel method for continuous gesture recognition is discussed.
• Contextual cues from a neurobiopsy procedure were integrated into the system.
• The system has been shown to significantly improve task completion performance.
• The system was significantly more natural than current methods of MRI navigation.
A sterile, intuitive, context-integrated system for navigating MRIs through freehand gestures during a neurobiopsy procedure is presented. Contextual cues are used to determine the intent of the user, improving continuous gesture recognition and the discovery and exploration of MRIs. One of the challenges of gesture interaction in the operating room is discriminating between intentional and non-intentional gestures; this problem is also referred to as spotting. In this paper, a novel method for training gesture spotting networks is presented. The continuous gesture recognition system was shown to successfully detect gestures 92.26% of the time with a reliability of 89.97%. Experimental results show that significant improvements in task completion time were obtained through the effect of context integration.
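The spotting step can be sketched minimally, assuming the recognizer outputs per-class probabilities and a separate contextual cue (e.g., whether the user's attention is directed at the display) gates acceptance. The function name, threshold, and gating rule here are illustrative assumptions, not the paper's trained spotting network.

```python
import numpy as np

def spot_gesture(class_probs, user_intends_interaction, accept_threshold=0.7):
    """Reject non-intentional movement: report a gesture class only when
    the recognizer is confident AND the contextual intent cue indicates
    the user is addressing the system (an illustrative gating rule)."""
    best = int(np.argmax(class_probs))
    if user_intends_interaction and class_probs[best] >= accept_threshold:
        return best    # spotted: an intentional gesture of class `best`
    return None        # rejected as non-intentional movement

# The same confident recognition is accepted when the contextual intent
# cue is present, and rejected when it is absent.
probs = np.array([0.05, 0.85, 0.10])
print(spot_gesture(probs, True), spot_gesture(probs, False))
```

The point of the sketch is the AND-gating: recognizer confidence alone is not enough to accept a gesture, which is what allows incidental hand motion to be filtered out.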
This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess its effectiveness. An image navigation and manipulation task was performed, and gesture recognition accuracy, false positives, and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate, and access MRI images, and therefore this modality could replace keyboard- and mouse-based interfaces.
Cardiac involvement as an initial presentation of malignant lymphoma is a rare occurrence. We describe the case of a 26-year-old man who had initially been diagnosed with myocardial infiltration on an echocardiogram, presenting with a testicular mass and unilateral peripheral facial paralysis. On admission, electrocardiograms (ECG) revealed negative T-waves in all leads and ST-segment elevation in the inferior leads. On two-dimensional echocardiography, there was infiltration of the pericardium with mild effusion, infiltrative thickening of the aortic walls, both atria, and the interatrial septum, and mildly depressed systolic function of both ventricles. An axillary biopsy was performed and reported as a T-cell lymphoblastic lymphoma (T-LBL). Following diagnosis and staging, chemotherapy was started. Twenty-two days after finishing the first cycle of chemotherapy, the ECG showed regression of T-wave changes in all leads and normalization of the ST-segment elevation in the inferior leads. A follow-up two-dimensional echocardiogram confirmed regression of the myocardial infiltration. This case report illustrates a lymphoma presenting with a testicular mass, unilateral peripheral facial paralysis, and myocardial involvement, and demonstrates that regression of infiltration can be achieved by intensive chemotherapy treatment. To our knowledge, there are no reported cases of T-LBL presenting as a testicular mass and unilateral peripheral facial paralysis with complete regression of myocardial involvement.
Most common approaches to one-shot gesture recognition have leveraged mainly conventional machine learning solutions and image-based data augmentation techniques, ignoring the mechanisms that humans use to perceive and execute gestures, a key contextual component in this process. The novelty of this work consists in modeling the process that leads to the creation of gestures, rather than observing the gesture alone. In this approach, the context considered involves the way in which humans produce gestures: the kinematic and biomechanical characteristics associated with gesture production and execution. By understanding the main "modes" of variation, we can replicate the single observation many times. Consequently, the main strategy proposed in this paper involves generating a data set of human-like examples based on "naturalistic" features extracted from a single gesture sample, while preserving fundamentally human characteristics such as visual saliency, smooth transitions, and economy of motion. The availability of a large data set of realistic samples allows the use of state-of-the-art classifiers for further recognition. Several classifiers were trained, and their recognition accuracies were assessed and compared to previous one-shot learning approaches. An average recognition accuracy of 95% among all classifiers highlights the relevance of keeping the human "in the loop" to effectively achieve one-shot gesture recognition.
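The replication strategy can be sketched as follows, assuming a gesture is recorded as a T×D trajectory of hand or joint positions. The specific perturbations used here (global amplitude scaling, a uniform monotone time warp, and small positional noise) are simplified stand-ins for the paper's naturalistic kinematic features, chosen only to show how a single sample can seed a training set.

```python
import numpy as np

def augment_gesture(traj, n_samples=100, amp_sigma=0.05, noise_sigma=0.01, seed=0):
    """Generate human-like variants of a single T x D gesture trajectory by
    perturbing simple kinematic 'modes' of variation: overall amplitude,
    execution timing, and smooth positional jitter. A sketch only; the
    paper's actual naturalistic feature model is richer."""
    rng = np.random.default_rng(seed)
    T, D = traj.shape
    t = np.linspace(0.0, 1.0, T)               # normalized time axis
    samples = []
    for _ in range(n_samples):
        amp = 1.0 + amp_sigma * rng.standard_normal()    # amplitude scaling
        speed = max(1.0 + 0.1 * rng.standard_normal(), 0.5)
        warped_t = t ** speed                            # monotone time warp
        # Resample each dimension at the warped time points.
        sample = np.stack(
            [np.interp(warped_t, t, traj[:, d]) for d in range(D)], axis=1
        )
        sample = amp * sample + noise_sigma * rng.standard_normal((T, D))
        samples.append(sample)
    return np.array(samples)                   # shape (n_samples, T, D)

# Toy usage: 100 synthetic variants of a single 2-D circular hand path,
# which could then feed any off-the-shelf classifier.
theta = np.linspace(0, 2 * np.pi, 50)
one_shot = np.stack([np.cos(theta), np.sin(theta)], axis=1)
bank = augment_gesture(one_shot)
print(bank.shape)
```

Because each variant stays a smooth deformation of the original trajectory, the generated set preserves the gesture's shape while covering plausible human execution variability, which is the property that lets standard data-hungry classifiers be trained from one observation.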