A typical computer vision task for surveillance may include the following steps: Image Acquisition, Image Pre-processing, Image Enhancement, Image Segmentation, Feature Extraction, and Identification of objects. Each step in the pipeline is critical, since poor performance at any stage may affect the overall result of a computer vision application. Among the steps indicated, image pre-processing is crucial for the performance of a model. However, the time taken by the pre-processing algorithms to complete their tasks generally contributes to the overall running time of a computer vision application. Various techniques and algorithms have been proposed to reduce the time complexities associated with image pre-processing. Among them are the Adaptive Approximated Median Filtering Algorithm for Impulse Noise Reduction, a Robust Median-based Background Updating Algorithm, and Fast Generation of an Image's Histogram Using an Approximation Technique. These methods use approximation of existing techniques to significantly reduce the time complexity of the algorithms; however, their resultant effect in the pipeline of a typical computer vision application had not been evaluated. This paper therefore evaluates the effectiveness of these algorithms in a motion detection and movement-tracking computer vision application. The experimental results indicate that these pre-processing algorithms can significantly reduce the overall running times for motion detection and scene-boundary monitoring, and can therefore readily replace the original algorithms for real-time motion and object tracking.
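The pipeline stages named above can be sketched as follows. This is a minimal NumPy illustration, not the paper's code: the function names are hypothetical, the pre-processing stage is a plain 3×3 median filter (the paper's contribution is an *approximated* variant of such filters), and segmentation is simple frame differencing.

```python
import numpy as np

def median_filter3(img):
    """Pre-processing stage: 3x3 median filter for impulse-noise reduction."""
    padded = np.pad(img, 1, mode="edge")
    # Stack the nine shifted views of the image and take the per-pixel median.
    stack = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0)

def segment_motion(prev, curr, thresh=25):
    """Segmentation stage: threshold the absolute frame difference."""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

def extract_features(mask):
    """Feature-extraction stage: bounding box of the moving region."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())

# Synthetic two-frame sequence: a bright object appears in the second frame.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = prev.copy()
curr[20:30, 40:50] = 200
mask = segment_motion(median_filter3(prev), median_filter3(curr))
print(extract_features(mask))   # bounding box of the detected motion
```

The point of the surveyed papers is that the pre-processing stage (here `median_filter3`) dominates running time, so replacing it with an approximated version speeds up the whole chain.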
This study aims to determine the number of reference fruits and their health status (sturdy, rotten, mottled, unspotted) using real-time images or recorded video taken from the camera of an autonomous Unmanned Aerial Vehicle (UAV) in orchards. Using image processing techniques, the sturdy/rotten and mottled/unspotted distinctions are made for oranges and apricots, respectively. These distinction and determination processes are carried out using highly trained classifiers. The performance of three types of trained classifiers was compared, and the highest-performing classifier was selected for object detection. The accuracies of the Haar, local binary pattern (LBP), and histogram of oriented gradients (HOG) classifiers are compared in Python using the open-source computer vision library (OpenCV). It is shown experimentally that the Haar classifier achieves high performance in determining reference fruit health status and yield in real time.
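To make the comparison concrete, the following is an illustrative sketch of the cell-level histogram underlying one of the three compared feature families, HOG. It is not the study's code; a real pipeline would use OpenCV's trained `HOGDescriptor`/cascade classifiers rather than this hand-rolled single cell.

```python
import numpy as np

def hog_cell_histogram(cell, bins=9):
    """Orientation histogram of one HOG cell (unsigned gradients, 0-180 deg)."""
    gy, gx = np.gradient(cell.astype(float))      # per-pixel gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180    # unsigned orientation
    hist = np.zeros(bins)
    bin_idx = (ang / (180 / bins)).astype(int) % bins
    for b in range(bins):
        hist[b] = mag[bin_idx == b].sum()         # magnitude-weighted votes
    return hist / (np.linalg.norm(hist) + 1e-6)   # L2 normalisation

# A horizontal intensity ramp has purely horizontal gradients,
# so all votes land in the 0-degree orientation bin.
cell = np.tile(np.arange(8), (8, 1))
h = hog_cell_histogram(cell)
print(np.argmax(h))
```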
For real-time image-processing applications, a highly parallel system that exploits parallelism is desirable. A content addressable memory (CAM), or associative processor, that can perform various types of parallel processing with words as the basic unit is a promising component for such a system because of its suitability for LSI implementation. Conventional CAM LSIs, however, have neither efficient functions nor enough capacity for pixel-parallel processing. This paper describes a fully parallel 1-Mb CAM LSI. It has advanced functions for processing various pixel-parallel algorithms, such as mathematical morphology and discrete-time cellular neural networks. Moreover, since it has 16K words, or processing elements (PEs), which can process 128 × 128 pixels in parallel, a board-sized pixel-parallel image-processing system can be implemented using several chips. A chip capable of operating at 56 MHz and 2.5 V was fabricated using 0.25-μm full-custom CMOS technology with five aluminum layers. A total of 15.5 million transistors are integrated into a 16.1 × 17.0 mm chip. Typical power dissipation is 0.25 W. Processing performance for various update and data-transfer operations is 3–640 GOPS. This CAM LSI will make a significant contribution to the development of compact, high-performance image-processing systems.
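A small software sketch of the pixel-parallel morphology such PEs execute: a binary erosion in which every pixel is updated simultaneously from its 4-neighbourhood. NumPy's vectorized slicing stands in for the chip's one-PE-per-pixel hardware parallelism; this is an illustration of the operation class, not of the CAM's instruction set.

```python
import numpy as np

def erode4(img):
    """Binary erosion with a 4-connected cross structuring element.

    Every output pixel is computed from the same-cycle neighbourhood
    reads, mirroring a pixel-parallel update across all PEs at once.
    """
    p = np.pad(img, 1, mode="constant", constant_values=0).astype(bool)
    c = p[1:-1, 1:-1]                       # centre
    return (c & p[:-2, 1:-1] & p[2:, 1:-1]  # north, south
              & p[1:-1, :-2] & p[1:-1, 2:]) # west, east

img = np.zeros((5, 5), dtype=bool)
img[1:4, 1:4] = True             # a 3x3 square of foreground pixels
print(erode4(img).astype(int))   # only the centre pixel survives erosion
```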
Intelligent vehicles are a popular research field at present, and environment perception for unmanned vehicles, especially the detection and tracking of target objects, is indispensable for their development. In this paper, we built an image-acquisition simulation environment based on the Robot Operating System (ROS) and Gazebo, and designed an algorithm to realize feature matching in the normal region of a rotating camera. Unlike common matching algorithms, this paper places the target object at the imaging center of the right camera through parameter setting and rotates the left camera within the normal ROI region to obtain images from the two cameras for feature matching. To verify the accuracy of the algorithm, during the rotation of the left camera we selected the angle symmetrical with the right camera, as well as 10 angles at fixed step sizes on either side of the symmetrical angle, to acquire images; the angles and matching rates were then analyzed and compared as parameters to verify the accuracy and stability of the matching algorithm.
Color space conversion plays a significant role in the pre-processing phase of digital image processing and can improve the quality of images. It is used in various applications such as commercial, multimedia, computer vision, and visual tracking systems. The objective is to convert one color space to another, and the inverse of the same. Various color space conversions are used, such as RGB↔HSV, RGB↔HSI, and RGB↔HSL. The conversion process can be done using a color space conversion algorithm. The hardware realization of the color space conversion models is implemented using Xilinx System Generator (XSG) on a Spartan-6 XC6SLX16 Field Programmable Gate Array (FPGA). Finally, the above color space conversion models are implemented in real time, and the percentage resource utilization and power reports of the various color space models are tabulated. By means of color conversion, efficiency and speed are increased.
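One direction of the RGB↔HSV pair can be sketched with the standard formulation below (the same math a fixed-point hardware pipeline would realise; Python's stdlib `colorsys` implements an equivalent conversion). This is a reference sketch, not the paper's XSG model.

```python
def rgb_to_hsv(r, g, b):
    """Convert r, g, b in [0, 1] to (h in degrees, s in [0, 1], v in [0, 1])."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx                                # value = max channel
    d = mx - mn                           # chroma
    s = 0.0 if mx == 0 else d / mx        # saturation
    if d == 0:
        h = 0.0                           # achromatic: hue undefined, use 0
    elif mx == r:
        h = 60 * (((g - b) / d) % 6)
    elif mx == g:
        h = 60 * ((b - r) / d + 2)
    else:
        h = 60 * ((r - g) / d + 4)
    return h, s, v

print(rgb_to_hsv(1.0, 0.0, 0.0))   # pure red -> (0.0, 1.0, 1.0)
```

In an FPGA realisation, the divisions by `d` and `mx` are the expensive steps; this is where approximation or lookup tables typically enter.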
In recent years, the need for real-time pattern recognition applications has sharply increased. Along with deep and probabilistic neural networks, hybrid architectures such as neo-fuzzy networks and networks based on neo-fuzzy units have turned out to be quite effective for solving this problem. In this paper, we consider the task of developing an architecture and a learning algorithm for a real-time pattern recognition system built on neo-fuzzy units. An objective function based on cross-entropy allows formulating a learning criterion that provides a high learning rate.
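For reference, the cross-entropy criterion mentioned above, paired with a softmax output layer, looks as follows. This sketch shows only the generic criterion, not the paper's neo-fuzzy units; with softmax outputs the gradient with respect to the pre-activations reduces to `p - y`, a simple form that is one reason cross-entropy-based criteria train quickly.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(y_true, z):
    """Cross-entropy between one-hot target y_true and softmax(z)."""
    p = softmax(z)
    return -np.sum(y_true * np.log(p + 1e-12))

y = np.array([0.0, 1.0, 0.0])      # one-hot target class
z = np.array([0.1, 2.0, -1.0])     # network pre-activations
grad = softmax(z) - y              # d(cross-entropy)/dz for softmax outputs
print(cross_entropy(y, z), grad)
```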
This paper presents a virtual reality (VR) control system for a robotic endoscope holder for minimally invasive surgery (MIS). The system features a three-degree-of-freedom (DOF) robotic endoscope holder, a virtual reality head-mounted display, and a control system that allows the robot to follow headset movement. The headset display is aligned with the endoscope camera view, creating a full presence effect for the surgeon. The video from the endoscope is streamed to the virtual reality head-mounted display after image processing on a computer. During surgery, the proposed control system generates commands to the robot based on the headset's positioning data. After the desired camera motion direction is estimated, the robotic endoscope holder receives motion commands through a serial interface. The proposed virtual reality robot control system is implemented in Unity. The system has been tested in a laboratory environment through a set of motion tests and achieved a response latency of 2 s. The proposed system is able to restore the surgeon's perception of the operating space, which is highly restricted in MIS, and to provide highly intuitive endoscope robot control.
While implementing a parameterized hardware IP generator for an image fusion algorithm, we had the chance to test various tools and techniques, such as HLS, pipelining, and PCIe logic/software porting, which we developed in a previous design project. Image fusion combines two or more images through a color transformation process. Depending on the application, different frame rates and/or resolutions may be needed, yet the specifics of the image-processing algorithm may change frequently, forcing redesign. If the target platform is an FPGA, rapid yet optimized hardware implementation is usually required. These requirements cannot be met by HLS alone. Clever architectural techniques, such as unorthodox ways of pipelining, RTL coding, and creative ways of porting interface logic/software, allowed us to meet the requirements outlined above. With all of these in our arsenal, we were able to get three versions of the algorithm (with different frame rates and/or resolutions) running on Cyclone IV and Arria 10 FPGAs in a fairly short amount of time. This paper explains the image fusion algorithm, our hardware architecture, and our specific flow for its rapid implementation.
Given that vision is the sense providing the most information to a human being, any impairment of it significantly reduces patients' quality of life. Computing is proving to be a promising approach to providing therapies for patients suffering from low vision or even blindness. We describe some specific tools, based on software and hardware solutions, oriented toward the development of this kind of aid. We present a system performing bio-inspired image processing on portable equipment, including a neuromorphic coding module that generates spike-event patterns corresponding to information obtained from the processed images. These events drive a computer-controlled platform for neural stimulation of the visual cortex, intended to elicit the perception of patterns of phosphenes in the visual field of blind subjects.
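As an illustration of the general idea behind spike-event generation (an assumption about the approach, not the described module's actual coding scheme), a simple rate code maps pixel intensity to spike probability per time step, so brighter regions emit more events per window:

```python
import numpy as np

def rate_code(img, window=10, rng=None):
    """Rate-code an 8-bit image: brighter pixels spike more often.

    Returns a boolean event array of shape (window, H, W); this rate
    code is a stand-in for the paper's neuromorphic coding module.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    p = img.astype(float) / 255.0                   # spike probability per step
    return rng.random((window,) + img.shape) < p    # Bernoulli events

img = np.array([[0, 255],
                [128, 64]], dtype=np.uint8)
events = rate_code(img)
print(events.sum(axis=0))   # spike counts per pixel over the window
```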