Tidal observations affect the transport efficiency of international commercial ports and can be disrupted by mechanical failures or typhoon-induced storms. Such disruptions interrupt observation, leading to tidal data loss or anomalies and reducing the usability of the data. Existing methods still have limitations in accurately predicting the tide level when a large amount of data is missing. Therefore, missing value imputation and tide level forecasting of tidal data are crucial topics in tidal observation studies. In this study, we propose a deep learning algorithm for missing value imputation and tide level forecasting of tidal data. The test data are the tidal records of Keelung Port, Taipei Port, Tamsui Port, Taichung Port, Jiangjun Port, Anping Port, Kaohsiung Port, Hualien Port, Suao Port, and Penghu Port, compiled by the Harbor and Marine Technology Center, Taiwan. The average error for missing value imputation is 0.086 m ± 5%, and the average error for tide level forecasting is 0.071 m. The experimental results reveal that the deep neural network outperforms traditional statistical methods and other artificial neural networks.
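The abstract does not describe the network's input/output layout, but windowed imputation of this kind can be sketched as learning to predict a missing sample from its neighbors. The sketch below uses a synthetic two-component tide signal and a linear least-squares predictor as a hypothetical stand-in for the paper's deep model; the periods, window size, and data are all assumptions for illustration.

```python
import numpy as np

# Synthetic stand-in for a harbor tide record: semidiurnal (~12.42 h) plus
# diurnal (~24 h) components, sampled hourly. Not the paper's real data.
t = np.arange(0, 24 * 30)                      # 30 days of hourly samples
tide = 1.2 * np.sin(2 * np.pi * t / 12.42) + 0.3 * np.sin(2 * np.pi * t / 24.0)

K = 6  # samples of context on each side of a gap (assumed window size)

def make_pairs(series, k):
    """Build (context window -> center value) training pairs from gap-free data."""
    X, y = [], []
    for i in range(k, len(series) - k):
        ctx = np.concatenate([series[i - k:i], series[i + 1:i + k + 1]])
        X.append(ctx)
        y.append(series[i])
    return np.array(X), np.array(y)

X, y = make_pairs(tide, K)
w, *_ = np.linalg.lstsq(X, y, rcond=None)      # linear proxy for the DNN

# Knock out one observation and impute it from its neighbors.
i = 100
ctx = np.concatenate([tide[i - K:i], tide[i + 1:i + K + 1]])
imputed = ctx @ w
print(abs(imputed - tide[i]))                  # small reconstruction error
```

The same window-to-value setup carries over directly when the linear predictor is replaced by a deep network trained on the gap-free portions of the record.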
In recent years, analyzing shoreline changes through satellite images has become a trend in coastal engineering research. The results of sea–land segmentation are very important for shoreline detection. CoastSat is a time-series shoreline detection system that uses an artificial neural network (ANN) for sea–land segmentation. However, CoastSat uses only the spectral features of a single pixel and ignores the local relationships of adjacent pixels. This impedes optimal category prediction, particularly under interference from atmospheric features such as clouds, shadows, and waves, which easily disturb the classifier and lead to classification errors. To solve the misclassification of sea–land segmentation caused by such interference, this paper applies HED-UNet to the image dataset obtained from CoastSat and learns the relationships between adjacent pixels by training a deep network architecture, thereby improving sea–land segmentation results that would otherwise be degraded by climate disturbances. By testing different optimizers and loss functions in the HED-UNet model, the experiments verify that Adam + focal loss performs best. The results also show that the deep learning model HED-UNet can effectively improve the accuracy of sea–land segmentation to 97% under interference from atmospheric factors such as clouds and waves.
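Focal loss, the loss function found to work best here, down-weights easy, confidently classified pixels so training concentrates on hard ones such as cloud or wave boundaries. A minimal numpy sketch of the standard binary form (Lin et al.), with the commonly used defaults gamma=2 and alpha=0.25 assumed rather than taken from the paper:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: (1 - p_t)^gamma scales down the cross-entropy of
    easy examples. p: predicted probability of class 1; y: 0/1 label."""
    p_t = np.where(y == 1, p, 1.0 - p)           # prob assigned to true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# A confident correct pixel contributes far less than an ambiguous one.
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.55]), np.array([1]))
print(float(easy[0]), float(hard[0]))
```

With gamma=0 and alpha=0.5 this reduces (up to a constant factor) to ordinary balanced cross-entropy, which is why focal loss is usually compared against it as a baseline.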
In a situation where a robot initiates conversation with a group of people, questions such as "where is the group?" and "should the robot approach them?" must be addressed. This paper develops a new system that enables a robot to determine whether or not it should approach such a human group and interact with them after identifying the current social situation. The system fuses depth-related data to track the positions of a group of people and extracts their social cues using the depth-related data and a decision network (DN) model; the main challenge lies in understanding the social cues of the group and the current underlying social situation concerning the relation between the robot and the group. The social cues are based on proxemics and F-formations, whereas the social situations are categorized as individual-to-individual, individual-to-robot, robot-to-individual, group-to-robot, robot-to-group, confidential discussion, and group discussion. Our system proceeds as follows: once a group of people is detected and the social cues of that target group are extracted, the corresponding social situation is inferred, and in turn the robot decides whether it should initiate conversation with the group based on rules specified later. The experimental results demonstrate the soundness of the system design and the efficacy of the proposed method in recognizing the social cues among individuals of the group as well as the nature of the social situations concerning the group and the robot.
•We developed a new system that allows a robot to determine whether it should approach a human group and interact with them by inferring the current social situation.
•The proposed system tracks the positions of people through the fusion of different depth-related data in long-range and complex environments.
•Social situations concerning the robot and the target group of people are inferred first, and then the robot decides whether it should initiate interactions in the human–robot social domain.
As robots enter humans' daily life, the tasks assigned to them are varied, and the needs of the people interacting with them are immense. As a result, when facing different users, it is important for robots to personalize their interactions and provide user-desired services. This paper therefore proposes a learning strategy for the service-providing model. Through human feedback, the strategy enables the robot to learn users' needs and preferences and to adjust its behaviors. Here, we assume that users' needs and preferences may vary with time; hence the goal of this paper is to make the adjustment of robot behaviors adapt to those variations, so that the service-providing model of the robot can also adjust online. That is, it can select a new action from the favorable actions already chosen, or an action that is not among the unfavorable actions that have recently annoyed users. To implement our system, the service robot under discussion is deployed in a home environment. For performance evaluation, we have conducted extensive experiments that satisfactorily demonstrate that our robot can provide services to different users and adapt to their preference changes.
In this paper, we propose a sensory-cue-guided robotic walker for improving the gaits of Parkinson's disease (PD) patients. A completely non-intrusive, real-time 3D leg pose tracking and gait analysis is proposed using a depth camera mounted on the rear of the robotic walker. Studies have shown that sensory cues can serve as effective stimuli for gait improvement in PD patients. In our work, sensory cues including visual and auditory cues are incorporated into the robotic walker. More specifically, both sensory cues are gait-adaptive; the visual cue in particular is projected onto the ground by a projector installed on the walker so as to stimulate the patient's walking gait more easily. Since the adjustable cues can improve patients' gaits and reduce their discomfort simultaneously, the developed robotic walker serves as a gait rehabilitation mechanism. To demonstrate the performance of the developed walker, several real experiments were conducted. First, the accuracy of the proposed 3D leg pose tracking is verified against a standard motion capture system. Next, seven participants (4 PD patients and 3 healthy elders) were recruited to test the system for three days to verify the effectiveness of gait improvement. The experimental results confirm the potential of the walker as a rehabilitation device for PD patients. Note to Practitioners-We propose a completely non-intrusive, relatively inexpensive, real-time gait analyzer integrated with a sensory-cue-guided rehabilitation system on an active robotic walker. Our goal is to provide a smart robotic walker with safety, reliability, and rehabilitation functions for elders who suffer from chronic diseases or health problems. In this work, a reliable 3D leg pose tracking method is proposed.
Then, based on the tracking, a gait analysis for acquiring spatio-temporal gait parameters such as stride length and gait velocity is proposed to analyze the walking gait of the patients. On the other hand, improving the gaits of Parkinson's disease patients using sensory cues, which are known to have remarkable effects for these patients, has been explored in much research. Therefore, we concentrate on sensory cues, including a visual cue and a rhythmic auditory cue, cooperating with the robotic walker to stimulate patients and provide walking assistance during their ambulation. For the sensory cues, an adaptive gait mechanism is proposed to improve patients' gaits based on their personal gait patterns and to reduce their discomfort during rehabilitation. The experimental results confirm the potential of the walker as a rehabilitation device for Parkinson's disease patients. We hope that this kind of assistive robotic walker will become more popular and provide more living aids to elders.
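The spatio-temporal parameters named above follow from the tracked foot positions by simple differencing once heel-strike events are segmented. A minimal sketch with hypothetical heel-strike data (in the walker these would come from the depth-camera leg tracker; the numbers below are invented for illustration):

```python
import numpy as np

# Hypothetical heel-strike events for one leg: (time [s], forward position [m]).
strikes = np.array([[0.0, 0.00],
                    [1.1, 0.62],
                    [2.2, 1.25],
                    [3.3, 1.86]])

times, pos = strikes[:, 0], strikes[:, 1]
stride_lengths = np.diff(pos)               # distance between successive strikes
stride_times = np.diff(times)               # duration of each stride
gait_velocity = stride_lengths / stride_times

print(stride_lengths.mean(), gait_velocity.mean())
```

Shortened strides and reduced velocity relative to a patient's baseline are exactly the kind of deviation the adaptive cue mechanism can respond to.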
For a service robot, it is not adequate to base its navigational movement on a single metric alone, such as the minimum-distance path. In an environment where the robot and humans coexist, the robot should always perform social navigation whenever it moves. However, to perform social navigation, the robot needs to follow certain "social norms" of the environment. Recently, deep reinforcement learning (DRL) techniques have been widely applied in robotics; yet they are rarely used to solve the aforementioned social navigation problem, generally deemed a high-dimensional, complex problem. In this paper, we propose the composite reinforcement learning (CRL) framework, under which the robot learns appropriate social navigation with sensor input and reward updates based on human feedback. For learning the human–robot interaction (HRI) aspect, we provide a method that facilitates training DRL in real environments by incorporating prior knowledge into the system. It turns out that our CRL system can not only incrementally learn how to set its velocity and perform HRI but also keep collecting human feedback to synchronize its reward functions with the current social norms. The experiments show that the proposed CRL system can safely learn how to navigate in the environment and is able to perform HRI for social navigation.
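The core idea of folding human feedback into the reward update can be illustrated far below the scale of the actual CRL system. The toy sketch below (not the paper's architecture) uses tabular Q-learning on a single state with two candidate actions, where a hypothetical negative feedback signal steers the learned policy away from a socially disliked action even though both actions are equally effective for the task:

```python
import numpy as np

rng = np.random.default_rng(1)
Q = np.zeros((1, 2))            # one state, two candidate velocity actions
alpha, gamma = 0.5, 0.9         # learning rate and discount (assumed values)

def step(action):
    """Environment reward plus a human-feedback term (both invented here)."""
    env_reward = 1.0                            # both actions reach the goal
    human_fb = -2.0 if action == 1 else 0.0     # humans dislike action 1
    return env_reward + human_fb

for _ in range(200):
    # epsilon-greedy action selection (epsilon = 0.2)
    a = int(rng.integers(2)) if rng.random() < 0.2 else int(Q[0].argmax())
    r = step(a)
    Q[0, a] += alpha * (r + gamma * Q[0].max() - Q[0, a])

print(int(Q[0].argmax()))       # policy settles on the socially accepted action
```

Because the feedback term is part of the reward rather than a hard constraint, re-collecting feedback over time lets the same update rule track social norms as they change, which is the behavior the CRL framework targets at scale.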
In an ageing society, we expect that a robotic caregiver will be able to persuade the elderly to adopt healthier behavior. In this work, pragmatic arguments are adopted to make the elderly realize that a choice beneficial for health, such as eating suitable fruits, is really worthwhile. Based on this concept, an adaptive recommendation dialogue system built on pragmatic argumentation is proposed. The system has three objectives. First, a knowledge base for pragmatic argument construction is built, which concerns not only the effect of a decision but also the reason for that effect. Second, the robot is endowed with the ability to make recommendations that adapt to the elder's different states; each recommendation is determined by integrating both the robot's and the elder's preferences for different perspectives, so that the robot knows how to reach a compromise with the elder. Lastly, by learning about the elder's preferences for perspectives during conversation, the robot selects the perspective for constructing arguments with which the elder can most easily be convinced to accept its recommendation. We invited 21 volunteers to interact with the robot. The experimental results show that the recommendation system has the potential to affect the decision making of the elderly and help them pursue a healthier life.
This paper introduces a context-aware assistive active robotic walker for Parkinson's disease (PD) patients. Most PD patients suffer not only from loss of balance but also from abnormal gaits. These symptoms tend to make PD patients fall more easily and result in a low quality of life. We use a hidden Markov model (HMM) to analyze the gait of PD patients, and the walker then helps patients adjust their gait back to normal by applying auditory cues when abnormal gaits are recognized. To prevent the user from leaning forward before falling, the walker locks its motors when sudden forward pushing by the user is detected. Moreover, the walker records gait statistics from the user, making it easier for therapists to monitor the rehabilitation process. Finally, the road conditions in front of the walker are automatically analyzed, enabling the user to adjust his/her walking pace dynamically. To the best of our knowledge, the proposed active robotic walker is the first system that can provide walking aid to PD patients. In our experiments, the feasibility and performance of the system were evaluated by PD patients at two actual senior care units.
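HMM-based gait classification of the kind described above typically scores an observation sequence under competing models and picks the more likely one. The sketch below implements the standard scaled forward algorithm; the two-state models, the quantized stride symbols, and all probabilities are invented for illustration, not taken from the paper:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the forward algorithm with per-step scaling."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Hypothetical 2-state models over quantized stride symbols {0: short, 1: normal}.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B_normal = np.array([[0.2, 0.8], [0.1, 0.9]])   # emits mostly normal strides
B_abnorm = np.array([[0.9, 0.1], [0.8, 0.2]])   # emits mostly shuffling strides

seq = [1, 1, 0, 1, 1, 1]                        # observed stride symbols
ll_n = forward_loglik(seq, pi, A, B_normal)
ll_a = forward_loglik(seq, pi, A, B_abnorm)
label = "normal" if ll_n > ll_a else "abnormal"
print(label)
```

In the walker, a sequence scored as abnormal would trigger the auditory cue; libraries such as hmmlearn provide the same scoring with trained models.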
The RGB-D sensor has gained popularity in object recognition research for its low cost as well as its capability to provide synchronized RGB and depth images. Thus, researchers have proposed new methods to extract features from RGB-D data. Meanwhile, learning-based feature representation is a promising approach for 2D image classification: by exploiting sparsity in 2D image signals, we can learn image representations instead of using hand-crafted local descriptors such as SIFT or HOG. This framework inspired us to learn features from RGB-D data. Our work focuses on two goals. First, we propose a novel Hierarchical Sparse Shape Descriptor (HSSD) to form a learning-based representation for 3D shapes. To achieve this, we analyze several 3D feature extraction techniques and propose a unified view of them; we then learn a hierarchical shape representation with sparse coding, max pooling, and local grouping. Second, we investigate whether RGB and depth information should be fused at a lower or higher level. Experimental results show, first, that our HSSD algorithm can learn a shape dictionary and provide shape cues in addition to the 2D cues: the proposed HSSD algorithm achieves 84% accuracy on a household RGB-D object dataset and outperforms the widely used VFH shape feature by 13%. Second, fusing RGB-D information at the lower level does not improve recognition performance.
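The sparse coding + max pooling stage of such a pipeline can be sketched compactly. The dictionary, descriptors, and one-shot soft-thresholding step below are hypothetical stand-ins (the paper's learned dictionary and its exact sparse coder are not specified here); the point is the shape of the computation: code each local descriptor against the dictionary, then max-pool the codes over a spatial cell:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 32 dictionary atoms and 4 local 16-dim descriptors
# extracted from one spatial cell of a depth patch.
D = rng.standard_normal((32, 16))
D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms
patches = rng.standard_normal((4, 16))

def sparse_code(x, D, lam=0.5):
    """One-shot soft-thresholded projection: a cheap sparse-coding proxy."""
    a = D @ x
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

codes = np.array([sparse_code(p, D) for p in patches])   # (4, 32) sparse codes
pooled = np.abs(codes).max(axis=0)                       # max pool over the cell
print(pooled.shape)                                      # cell-level descriptor
```

Stacking this code-then-pool step over progressively larger groupings of cells yields the hierarchical representation, with pooling providing local translation invariance at each level.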
For service robots to enter a multi-human office environment, it is important to discover the social patterns of a group of human users and then provide proper services to them in time. Usually, human users' social patterns are represented in terms of nonverbal social signals. In this paper, a new integrated approach to recognizing multi-human social signals is proposed. Specifically, the nonverbal social signals are detected by a laser range finder and an RGB-D camera and are processed to find the multi-human (spatial) social patterns. The recognized patterns are then applied to human-to-human, human-to-robot, or multi-human-to-robot interactive formations. Experimental results show that our robot successfully recognizes the aforementioned users' social patterns and provides appropriate services.