This paper presents a comprehensive survey on vision-based robotic grasping. We identify three key tasks in vision-based robotic grasping: object localization, object pose estimation, and grasp estimation. In detail, the object localization task covers object localization without classification, object detection, and object instance segmentation; it provides the regions of the target object in the input data. The object pose estimation task mainly refers to estimating the 6D object pose and includes correspondence-based, template-based, and voting-based methods, which enables the generation of grasp poses for known objects. The grasp estimation task includes 2D planar grasp methods and 6DoF grasp methods, where the former is constrained to grasping from one direction. Different combinations of these three tasks can accomplish robotic grasping. Many object pose estimation methods do not require separate object localization and instead perform localization and pose estimation jointly. Likewise, many grasp estimation methods require neither object localization nor object pose estimation, conducting grasp estimation in an end-to-end manner. Both traditional methods and recent deep learning-based methods operating on RGB-D image inputs are reviewed in detail in this survey. Related datasets and comparisons between state-of-the-art methods are summarized as well. In addition, challenges of vision-based robotic grasping and future directions for addressing them are pointed out.
Motivation: Since 1990, the Basic Local Alignment Search Tool (BLAST) has become one of the most popular and fundamental bioinformatics tools for sequence similarity search, receiving extensive attention from the research community. The two pioneering papers on BLAST have received over 96,000 citations. Given the huge population of BLAST users and the increasing size of sequence databases, improving its speed is an urgent topic of study. Recently, graphics processing units (GPUs) have been widely used as low-cost, high-performance computing platforms. The existing GPU-BLAST is a promising software tool that uses a GPU to accelerate protein sequence alignment. Unfortunately, there is still no GPU-accelerated software tool for BLAST-based nucleotide sequence alignment.
Results: We developed G-BLASTN, a GPU-accelerated nucleotide alignment tool based on the widely used NCBI-BLAST. G-BLASTN can produce exactly the same results as NCBI-BLAST, and it has very similar user commands. Compared with the sequential NCBI-BLAST, G-BLASTN can achieve an overall speedup of 14.80X under ‘megablast’ mode. More impressively, it achieves an overall speedup of 7.15X over the multithreaded NCBI-BLAST running on 4 CPU cores. When running under ‘blastn’ mode, the overall speedups are 4.32X (against 1-core) and 1.56X (against 4-core). G-BLASTN also supports a pipeline mode that further improves the overall performance by up to 44% when handling a batch of queries as a whole. Currently G-BLASTN is best optimized for databases with long sequences. We plan to optimize its performance on short database sequences in our future work.
Availability: http://www.comp.hkbu.edu.hk/∼chxw/software/G-BLASTN.html
Contact: chxw@comp.hkbu.edu.hk
Supplementary information: Supplementary data are available at Bioinformatics online.
SOAP3 is the first short-read alignment tool that leverages the many processors of a graphics processing unit (GPU) to achieve a drastic improvement in speed. We adapted the compressed full-text index based on the Burrows–Wheeler transform (BWT) used by SOAP2 in view of the advantages and disadvantages of the GPU. When tested with millions of Illumina HiSeq 2000 length-100 bp reads, SOAP3 takes <30 s to align a million read pairs onto the human reference genome and is at least 7.5 and 20 times faster than BWA and Bowtie, respectively. For aligning reads with up to four mismatches, SOAP3 aligns slightly more reads than BWA and Bowtie; this is because SOAP3, unlike BWA and Bowtie, is not heuristic-based and always reports all answers.
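The BWT-based compressed full-text index mentioned above supports exact matching via backward search. Below is a minimal, self-contained sketch of that classical search; it is illustrative only (SOAP2/SOAP3 use a far more engineered index with GPU-specific memory layouts, and the function names here are hypothetical):

```python
def bwt_index(text):
    """Build the suffix array, BWT, C table, and occurrence counts for text + '$'."""
    t = text + "$"
    sa = sorted(range(len(t)), key=lambda i: t[i:])   # suffix array (naive build, fine for a sketch)
    bwt = "".join(t[i - 1] for i in sa)               # last column of the sorted rotations
    chars = sorted(set(t))
    C, total = {}, 0                                  # C[c] = count of chars in t smaller than c
    for c in chars:
        C[c] = total
        total += t.count(c)
    occ = {c: [0] * (len(bwt) + 1) for c in chars}    # occ[c][i] = occurrences of c in bwt[:i]
    for i, ch in enumerate(bwt):
        for c in chars:
            occ[c][i + 1] = occ[c][i] + (1 if ch == c else 0)
    return sa, C, occ

def backward_search(pattern, sa, C, occ):
    """Return sorted positions where pattern occurs exactly in the indexed text."""
    lo, hi = 0, len(sa)                               # current suffix-array interval
    for c in reversed(pattern):                       # extend the match one character at a time
        if c not in C:
            return []
        lo = C[c] + occ[c][lo]
        hi = C[c] + occ[c][hi]
        if lo >= hi:
            return []
    return sorted(sa[i] for i in range(lo, hi))

sa, C, occ = bwt_index("ACGTACGTAC")
print(backward_search("ACGT", sa, C, occ))            # -> [0, 4]
```

Each character of the pattern is matched in constant time with respect to the text length, which is what makes BWT indexes attractive for aligning millions of short reads against a genome-scale reference.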
Deep learning (DL) techniques have obtained remarkable achievements on various tasks, such as image recognition, object detection, and language modeling. However, building a high-quality DL system ...for a specific task highly relies on human expertise, hindering its wide application. Meanwhile, automated machine learning (AutoML) is a promising solution for building a DL system without human assistance and is being extensively studied. This paper presents a comprehensive and up-to-date review of the state-of-the-art (SOTA) in AutoML. According to the DL pipeline, we introduce AutoML methods – covering data preparation, feature engineering, hyperparameter optimization, and neural architecture search (NAS) – with a particular focus on NAS, as it is currently a hot sub-topic of AutoML. We summarize the representative NAS algorithms’ performance on the CIFAR-10 and ImageNet datasets and further discuss the following subjects of NAS methods: one/two-stage NAS, one-shot NAS, joint hyperparameter and architecture optimization, and resource-aware NAS. Finally, we discuss some open problems related to the existing AutoML methods for future research.
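The hyperparameter optimization stage of the AutoML pipeline can be illustrated with its simplest baseline, random search. The sketch below uses a hypothetical quadratic stand-in for validation error; the search space and the name `validation_error` are illustrative, not taken from any surveyed system:

```python
import random

def validation_error(lr, width):
    # Stand-in for training and evaluating a model; the optimum sits
    # near lr = 0.1 and width = 64 by construction.
    return (lr - 0.1) ** 2 + ((width - 64) / 64) ** 2

def random_search(n_trials, seed=0):
    """Sample configurations at random and keep the best (error, config) pair."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        cfg = {"lr": 10 ** rng.uniform(-3, 0),           # log-uniform learning rate
               "width": rng.choice([16, 32, 64, 128])}   # categorical layer width
        err = validation_error(cfg["lr"], cfg["width"])
        if best is None or err < best[0]:
            best = (err, cfg)
    return best

err, cfg = random_search(200)
print(cfg, round(err, 4))
```

More sophisticated HPO and NAS methods covered in the survey (Bayesian optimization, one-shot weight sharing) replace the blind sampling loop with guided search, but share this evaluate-and-keep-best skeleton.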
•An efficient approach to simulate fully non-stationary wind fields is developed.
•Reduced matrix factorizations and decoupled time-frequency spectra are realized simultaneously.
•The accelerated FFT algorithm can further improve the simulation efficiency.
The spectral representation method (SRM) has been widely used to simulate stationary or non-stationary wind fields for engineering structures. Although several attempts have been made to enable the use of the Fast Fourier Transform (FFT), the SRM remains very inefficient for simulating fully non-stationary wind fields with a time-varying coherence, owing to the extremely time-consuming Cholesky decompositions and the large memory requirement. In this paper, a reduced 2D Hermite interpolation-enhanced approach is developed to further improve the efficiency of the SRM in simulating fully non-stationary wind fields. Central to this approach is an interpolation procedure that requires Cholesky decompositions and storage of cross evolutionary power spectral density (CEPSD) matrix elements only at interpolation knots. Thus the computational cost of the Cholesky decompositions and the memory requirement are dramatically decreased. The number of Cholesky decompositions is then fixed, independent of the number of frequency segments and the duration of the wind samples, which eliminates the Cholesky decomposition as a factor affecting the simulation efficiency. Meanwhile, each element of the decomposed CEPSD matrix is decoupled into products of time- and frequency-dependent functions by the reduced 2D Hermite interpolation, so the FFT can be used to expedite the summation of trigonometric terms. Apart from using the FFT, another merit of the proposed approach is that an accelerated FFT algorithm can be incorporated to further improve the simulation efficiency, based on the specific decoupled expression of the frequency-dependent functions. The parametric analysis shows that the proposed approach is very efficient in comparison with the existing method using proper orthogonal decomposition (POD), and it provides the desired level of simulation accuracy when an appropriate interpolation interval is selected.
The case study in simulating the fully non-stationary wind field of a long-span cable-stayed bridge demonstrates the effectiveness of the proposed approach with verifications on both evolutionary power spectra and correlation functions.
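The knot-based decomposition idea can be sketched in miniature: factorize the cross-spectral matrix only at a few frequency knots and interpolate the lower-triangular factors in between. Linear interpolation in 1D stands in here for the paper's reduced 2D Hermite scheme, and the spectrum, coherence model, and parameter values are all hypothetical:

```python
import numpy as np

def cpsd(freq, n=4, dx=10.0, C=2.0, U=20.0):
    """Hypothetical cross power spectral density matrix at one frequency."""
    S = 1.0 / (1.0 + freq ** 2)                       # illustrative auto-spectrum
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) * dx
    coh = np.exp(-C * freq * d / U)                   # Davenport-type coherence
    return S * coh

freqs = np.linspace(0.05, 2.0, 80)                    # fine frequency grid
knots = freqs[::5]                                    # factorize only at every 5th point
H_knots = np.array([np.linalg.cholesky(cpsd(f)) for f in knots])

def H_interp(f):
    """Lower-triangular factor at f via linear interpolation of the knot factors."""
    j = np.clip(np.searchsorted(knots, f) - 1, 0, len(knots) - 2)
    w = (f - knots[j]) / (knots[j + 1] - knots[j])
    return (1 - w) * H_knots[j] + w * H_knots[j + 1]

# Interpolated factors stay close to exact decompositions on the fine grid,
# while only len(knots) Cholesky factorizations were actually computed.
err = max(np.abs(H_interp(f) - np.linalg.cholesky(cpsd(f))).max() for f in freqs)
print(len(knots), "decompositions for", len(freqs), "frequencies; max error", err)
```

The number of factorizations is fixed by the knot count, independent of the fine frequency grid, which is the essence of why the cost of the Cholesky step stops scaling with the number of frequency segments.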
•The proposed method can significantly improve the simulation efficiency and reduce the memory consumption in multi-spatial dimensions.
•The simulation points can be arbitrarily selected using the NUFFT-enhanced SWSRM.
•The proposed method can improve the turbulent spectral accuracy in the low-frequency region.
Simulation of stochastic wind fields is necessary for wind-induced vibration analysis of wind-sensitive structures. The stochastic wave-based spectral representation method (SWSRM) is one of the most widely used methods for wind field simulation. However, it faces challenges in simulating turbulent wind fields in multi-spatial dimensions due to its large memory consumption and low simulation efficiency. In this study, a novel non-uniform FFT (NUFFT)-enhanced SWSRM is proposed. An approach for selecting the sampling points of the wave number is also established based on the adaptive integral method. Numerical experiments are further designed to demonstrate the validity and effectiveness of the enhanced SWSRM. Results show that the proposed method can significantly improve the simulation efficiency and reduce the memory consumption in multi-spatial dimensions. In addition, the simulation points can be arbitrarily selected with the NUFFT-enhanced SWSRM. More importantly, the proposed method can improve the turbulent spectral accuracy in the low-frequency region.
Realizing on-demand media streaming in a peer-to-peer (P2P) fashion is more challenging than live media streaming, since only peers with similar playback progress can help each other obtain the media content. The situation is further complicated if we wish to pursue low aggregated link cost in the transmission. In this paper, we present a new algorithmic perspective on on-demand P2P streaming protocol design. While previous approaches employ streaming trees or passive neighbor reconciliation for media content distribution, we instead coordinate the streaming session as an auction in which each peer participates locally by bidding for and selling media flows encoded with network coding. We show that this auction approach is promising for achieving low-cost on-demand streaming in a scalable fashion. It is amenable to asynchronous, distributed, and lightweight implementations, and is flexible enough to support random-seek and pause functionalities. Through extensive simulation studies, we verify the effectiveness and performance of the proposed auction approach, focusing on the optimality of the overall streaming cost, the convergence speed, and the communication overhead.
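A single local round of such an auction can be sketched as follows: downstream peers bid for a media flow from an upstream peer, which awards its limited upload slots to the highest bidders. The peer names, bids, and the simple one-round first-price rule are illustrative only, not the paper's actual pricing or network-coding mechanism:

```python
def auction_round(bids, upload_slots):
    """bids: {peer: bid}. Award the slots to the top bidders (first-price rule)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:upload_slots])

# Four downstream peers compete for two upload slots at one upstream peer.
bids = {"peerA": 4.0, "peerB": 7.5, "peerC": 3.0, "peerD": 6.0}
winners = auction_round(bids, upload_slots=2)
print(winners)   # peerB and peerD win the two slots
```

In the full protocol these rounds run asynchronously at every peer, so the global flow allocation emerges from purely local bidding decisions rather than from any central coordinator.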
The stochastic wave approach is a recent realization of the spectral representation method and has been widely used for the simulation of stationary non-homogeneous or non-stationary homogeneous wind fields. However, it inevitably creates many unnecessary simulation points due to the invoking of the Fast Fourier Transform (FFT) and suffers from low efficiency in non-stationary non-homogeneous scenarios due to its inability to accelerate the summation of trigonometric functions. In this study, an efficient approach is developed for the simulation of non-stationary non-homogeneous wind fields. Central to this approach is a multi-dimensional interpolation used to decouple the evolutionary spectra, combined with the application of the non-uniform FFT (NUFFT) to expedite the simulation. A surrogate model that accurately approximates the 3D evolutionary spectrum over height, frequency, and time is established by the 3D reduced Hermite interpolation, which enables a successful separation of the frequency components. Then, the NUFFT is applied to speed up the summation of trigonometric functions over wavenumbers with well-designed, non-uniformly distributed sampling points, which effectively reduces the number of segments used in the Fourier transform. Since the NUFFT requires no mapping between the wavenumbers and the locations of simulation points, a significant number of unnecessary simulation points in the FFT scenario are avoided. Moreover, the wind samples at different locations can be obtained by directly altering the query points of the NUFFT, so the efficiency of the enhanced approach is almost independent of the number of simulation points. The numerical example of simulating the non-stationary non-homogeneous wind field of a long-span cable-stayed bridge demonstrates the efficiency of the developed approach and validates the effectiveness of the simulated wind samples in terms of the evolutionary spectrum and coherence function.
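The trigonometric summation that the NUFFT accelerates has the form f(x_j) = Σ_k c_k · exp(i·k·x_j) evaluated at arbitrary query points x_j. The sketch below evaluates it directly in O(NM); at the special uniform nodes x_j = 2πj/N it must reproduce the ordinary inverse-FFT summation, which is exactly what an NUFFT library computes fast for non-uniform x_j as well. The coefficients are random stand-ins for the decomposed spectral amplitudes:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # stand-in spectral amplitudes

def nudft(points, coeffs):
    """Direct (type-2) nonuniform DFT: f(x) = sum_k coeffs[k] * exp(i*k*x)."""
    k = np.arange(len(coeffs))
    return np.exp(1j * np.outer(points, k)) @ coeffs

# Arbitrary simulation points -- no mapping onto a uniform grid is required,
# which is why no unnecessary points have to be generated.
x_arbitrary = rng.uniform(0, 2 * np.pi, 50)
samples = nudft(x_arbitrary, c)

# Sanity check: at uniform nodes the sum coincides with the FFT-based result.
x_uniform = 2 * np.pi * np.arange(N) / N
assert np.allclose(nudft(x_uniform, c), np.fft.ifft(c) * N)
```

A production implementation would replace `nudft` with a fast O(N log N) NUFFT routine; obtaining wind samples at new locations then amounts to changing the query points `x_arbitrary`, independent of the transform size.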
•An efficient approach is developed to simulate non-stationary non-homogeneous wind fields.
•The surrogate model successfully separates frequency from the high-order evolutionary spectrum.
•Application of NUFFT enables simulation points to be arbitrarily distributed in each dimension.
Deep neural networks (DNNs) have achieved great success in the area of computer vision. The disparity estimation problem is increasingly addressed by DNNs, which achieve much better prediction accuracy in stereo matching than traditional methods based on hand-crafted features. On the one hand, however, these DNNs require significant memory and computation resources to accurately predict the disparity, especially the 3D-convolution-based networks, which makes them difficult to deploy in real-time applications. On the other hand, existing computation-efficient networks lack expressive capability on large-scale datasets, so they cannot make accurate predictions in many scenarios. To this end, we propose an efficient and accurate deep network for disparity estimation named FADNet, with three main features: 1) it exploits efficient 2D-based correlation layers with stacked blocks to preserve fast computation; 2) it combines residual structures to make the deeper model easier to learn; 3) it contains multi-scale predictions so as to exploit a multi-scale weight-scheduling training technique to improve accuracy. We conduct experiments to demonstrate the effectiveness of FADNet on two popular datasets, Scene Flow and KITTI 2015. Experimental results show that FADNet achieves state-of-the-art prediction accuracy and runs an order of magnitude faster than existing 3D-convolution-based models. The code of FADNet is available at https://github.com/HKBU-HPML/FADNet.
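The 2D correlation layer at the heart of such networks can be sketched in NumPy: for each candidate disparity d, the left feature map is correlated with the right feature map shifted by d, producing a (max_disp, H, W) cost volume. The feature maps below are random stand-ins; in FADNet they would be produced by convolutional layers, and this sketch mirrors only the generic operation, not the network's exact layer:

```python
import numpy as np

def correlation_layer(left, right, max_disp):
    """left, right: (C, H, W) feature maps; returns a (max_disp, H, W) cost volume."""
    C, H, W = left.shape
    cost = np.zeros((max_disp, H, W))
    for d in range(max_disp):
        # A pixel at column x in the left view matches column x - d in the right view.
        cost[d, :, d:] = np.mean(left[:, :, d:] * right[:, :, :W - d], axis=0)
    return cost

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 16))       # (channels, height, width)
right = np.roll(feat, -3, axis=2)            # synthetic right view: constant disparity 3
cost = correlation_layer(feat, right, max_disp=8)

# Away from the zero-padded left border, disparity 3 yields the strongest response.
pred = int(cost[:, :, 8:].mean(axis=(1, 2)).argmax())
print(pred)                                  # recovers the true shift, 3
```

Because the search over disparities is a loop of cheap 2D multiply-accumulate operations rather than 3D convolutions over a 4D volume, this design is the main source of FADNet's speed advantage.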