Open circuit voltage (OCV) is crucial for battery degradation analysis. However, high-precision OCV is usually obtained offline. To this end, this paper proposes a novel self-evaluation criterion based on the capacity difference over a unit State of Charge (SoC) interval. The criterion is integrated into an extended Kalman filter (EKF) for joint estimation of OCV and SoC. The proposed method is evaluated in a typical application scenario, an energy storage system (ESS), using a LiFePO4 (LFP) battery. Extensive experimental results show that more accurate OCV and incremental capacity and differential voltage (IC-DV) curves can be obtained online with the proposed method. Our method also greatly improves the accuracy of SoC estimation at each SoC point, with a maximum SoC estimation error below 0.3%.
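The joint OCV–SoC estimation above builds on a standard EKF structure. A minimal single-cell sketch of one EKF step is shown below; the linear OCV curve, internal-resistance voltage model, and noise parameters are illustrative assumptions for the sketch, not values from the paper (a real LFP OCV curve is flat and nonlinear, and the paper additionally estimates the OCV curve itself online).

```python
import numpy as np

# Illustrative linear OCV curve for this sketch only.
def ocv(soc):
    return 3.2 + 0.3 * soc

def docv_dsoc(soc):
    return 0.3  # derivative of the linear OCV model above

def ekf_soc_step(soc, P, current, v_meas, dt, capacity_ah,
                 r_int=0.01, Q=1e-7, R=1e-4):
    """One EKF step: coulomb-counting prediction + terminal-voltage update.

    current > 0 means discharge; measurement model v = OCV(SoC) - I * R_int.
    """
    # Predict: integrate the current over the time step (coulomb counting)
    soc_pred = soc - current * dt / (3600.0 * capacity_ah)
    P_pred = P + Q
    # Update: correct SoC with the measured terminal voltage
    H = docv_dsoc(soc_pred)
    v_pred = ocv(soc_pred) - current * r_int
    K = P_pred * H / (H * P_pred * H + R)
    soc_new = soc_pred + K * (v_meas - v_pred)
    P_new = (1.0 - K * H) * P_pred
    return soc_new, P_new
```

Repeated voltage updates pull a poorly initialized SoC estimate toward the value consistent with the measured terminal voltage, which is the mechanism the joint estimator exploits.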
In teleoperation, the operator is often required to command the motion of the remote robot and monitor its behavior. However, such an interaction demands a heavy workload from the human operator when facing complex tasks and dynamic environments. In this article, we propose a shared control method to assist the operator in manipulation tasks, reducing the workload and improving efficiency. We adopt a task-parameterized hidden semi-Markov model to learn a manipulation skill from several human demonstrations. We use the learned model to predict the manipulation target given the currently observed robot motion trajectory, and subsequently estimate the desired robot motion given the operator's current input. The estimated robot motion is then used to correct the operator's input and provide manipulation assistance. In addition, a set of virtual reality devices is used to capture the operator's motion and display visual feedback from the remote site. We evaluate our approach on two manipulation tasks with a dual-arm robot. The experimental results show the effectiveness of the proposed method.
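The correction of the operator's input above is model-specific (it uses the HSMM prediction), but the generic arbitration step common to shared-control methods can be sketched as a linear blend of the two commands. The function name, confidence weighting, and command representation below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def shared_control(u_operator, u_model, confidence):
    """Linearly blend the operator command with the model-predicted motion.

    confidence in [0, 1]: 0 = pure teleoperation, 1 = full model guidance.
    Both inputs are velocity commands of the same dimension.
    """
    u_operator = np.asarray(u_operator, dtype=float)
    u_model = np.asarray(u_model, dtype=float)
    alpha = float(np.clip(confidence, 0.0, 1.0))
    return (1.0 - alpha) * u_operator + alpha * u_model
```

With confidence 0 the operator retains full authority; as the model's prediction of the target becomes more certain, the blended command shifts toward the model's estimate.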
Pairwise frame registration with sparse geometric local features on real-world depth images is not particularly robust due to the low resolution and incomplete nature of 3D scan data. Moreover, there may be many regions with similar geometric information. In this paper, we present 3DTDesc, a data-driven descriptor that closely combines 2D texture and 3D geometric information for frame registration. The proposed descriptor is learned directly from color point clouds, which is time-efficient and provides robust and accurate geometric feature matching in a variety of settings. The texture and geometric information interact closely in the fusion network and complement each other in textureless regions and in regions with similar geometric but different texture information. We also propose a multi-scale 3DTDesc to further improve feature-matching performance. The effectiveness and efficiency of the proposed 3DTDesc are demonstrated by extensive experiments on challenging RGB-D datasets and various ablation studies.
In this paper, we present a new solution to inter-camera multiple-target tracking with non-overlapping fields of view. The identities of people are maintained as they move from one camera to another. Instead of matching snapshots of people across cameras, we mainly explore what kinds of context information from videos can be used for inter-camera tracking. We introduce two kinds of context information: spatio-temporal context and relative appearance context. The spatio-temporal context indicates a way of collecting samples for discriminative appearance learning, where target-specific appearance models are learned to distinguish different people from each other. The relative appearance context models inter-object appearance similarities for people walking in proximity, which helps disambiguate individual appearance matching across cameras. We show improved performance with context information for inter-camera tracking. Our method achieves promising results in two crowded scenes compared with state-of-the-art methods.
In recent years, the perceptual capabilities of robots have been significantly enhanced. However, task execution by robots still lacks adaptive capabilities in unstructured and dynamic environments.
In this paper, we propose an ontology-based autonomous robot task processing framework (ARTProF) to improve the robot's adaptability in unstructured and dynamic environments. ARTProF unifies ontological knowledge representation, reasoning, and autonomous task planning and execution in a single framework. An interface between the knowledge base and neural-network-based object detection is first introduced in ARTProF to improve the robot's perception capabilities. A knowledge-driven manipulation operator based on the Robot Operating System (ROS) is then designed to facilitate the interaction between the knowledge base and the robot's primitive actions. Additionally, an operation similarity model is proposed to endow the robot with the ability to generalize to novel objects. Finally, a dynamic task planning algorithm leveraging ontological knowledge equips the robot with the adaptability to execute tasks in unstructured and dynamic environments.
Experimental results on real-world scenarios and simulations demonstrate the effectiveness and efficiency of the proposed ARTProF framework.
In future work, we will focus on refining the ARTProF framework by integrating neurosymbolic inference.
Compared to traditional data-driven learning methods, recently developed deep reinforcement learning (DRL) approaches can be employed to train robot agents to obtain control policies with appealing performance. However, learning control policies for real-world robots through DRL is costly and cumbersome. A promising alternative is to train policies in simulated environments and transfer them to real-world scenarios. Unfortunately, due to the reality gap between simulated and real-world environments, policies learned in simulation often do not generalize well to the real world. Bridging the reality gap remains a challenging problem. In this paper, we propose a novel real–sim–real (RSR) transfer method that includes a real-to-sim training phase and a sim-to-real inference phase. In the real-to-sim training phase, a task-relevant simulated environment is constructed from semantic information of the real-world scenario and a coordinate transformation, and a policy is then trained with DRL in the constructed simulated environment. In the sim-to-real inference phase, the learned policy is applied directly to control the robot in real-world scenarios without any real-world data. Experimental results on two different robot control tasks show that the proposed RSR method can train skill policies with high generalization performance at significantly lower training cost.
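The real-to-sim phase above places objects detected in the real scene into the simulator via a coordinate transformation. A minimal homogeneous-transform sketch of that mapping step follows; the function names and the example frames are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = np.asarray(R, dtype=float)
    T[:3, 3] = np.asarray(t, dtype=float)
    return T

def real_to_sim(points_real, T_sim_real):
    """Map Nx3 points from the real-world frame into the simulator frame."""
    pts = np.asarray(points_real, dtype=float)
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous coordinates
    return (T_sim_real @ pts_h.T).T[:, :3]
```

Given estimated object positions in the real camera frame and a calibrated real-to-sim transform, this maps each detection into the simulator so the task-relevant environment can be assembled around it.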
A pan–tilt–zoom (PTZ) camera is a powerful tool in far-field scenarios. However, most current PTZ surveillance systems require manual intervention to move the camera to the desired position. In this paper, we address the problem of persistent people tracking and face capture in uncontrolled scenarios using a single PTZ camera, which could prove most helpful in forensic applications. The system first detects and tracks pedestrians in zoomed-out mode. Then, according to a scheduler, the system selects a person to zoom in on. In zoomed-in mode, we detect a set of face images and solve the face–face and face–person association problems. The system then zooms back out, where tracking continues as people reappear in the view. The person–person association module associates the people on the schedule list with the people in the current view. The detected faces are associated with the corresponding people and trajectories. Due to the dynamic nature of the problem (e.g., the field of view changes with the pan/tilt/zoom movement of the camera), all processing, including receiving images from the camera, must be done in real time. To the best of our knowledge, the proposed method is the first to address the association of face images with people and trajectories using a single PTZ camera. Extensive experiments in challenging indoor and outdoor uncontrolled conditions demonstrate the effectiveness of the proposed system.