Precise navigation is often performed by fusing data from multiple sensors. Among these, optical sensors use image features to obtain the position and attitude of the camera. Runway-relative navigation during final approach is a special case that requires robust and continuous detection of the runway. This paper presents a robust threshold-marker detection method for monocular cameras and introduces an on-board real-time implementation with flight-test results. Results with narrow and wide field-of-view optics are compared. The image-processing approach is also evaluated on image data captured by a different on-board system. The purely optical approach of this paper increases sensor redundancy because, unlike most robust runway detectors, it does not require input from an inertial sensor.
• The proposed metric is applied to multi-colored underwater environments.
• The metric obtains a higher correlation between the predictions and the true quality.
• The calculation of true quality scores is based on the measured turbidity.
• The proposed metric has strong generalization ability.
• The proposed metric performs well in real-time image processing.
The color of an underwater target differs under different lighting conditions and underwater environments. To date, no quality metric has been proposed for images of underwater targets in multi-colored environments. In this paper, we propose a no-reference quality metric for images of underwater targets in multi-colored environments (QMICE). This metric is a weighted combination of a colorfulness index, a contrast index and a sharpness index. The colorfulness index measures the color loss caused by absorption; the contrast index and sharpness index measure the blurring caused by scattering. The weighting coefficients of the three indexes are calculated by multiple linear regression (MLR). For the contrast index and sharpness index, we propose a grayscale conversion method that adaptively adjusts the coefficients of the red, green, and blue (RGB) values to enhance their generalization ability in multi-colored environments. During the calculation of the weighting coefficients, quality scores based on turbidity are regarded as the true quality scores; this is more reliable than subjective assessment scores. The experimental results show that, compared with the leading underwater image quality metrics available in the literature, the proposed metric has the best correlation between the metric predictions and the true quality scores. More importantly, QMICE can also be used to process underwater images in real time and to evaluate the performance of underwater image restoration algorithms.
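The weighted-combination scheme can be sketched as follows. This is a minimal illustration only: the Hasler–Süsstrunk colorfulness measure, standard-deviation contrast, and Laplacian-variance sharpness are common stand-ins for the paper's exact indexes, the channel-energy grayscale coefficients are an assumption about the adaptive conversion, and the fixed weights stand in for the MLR-derived coefficients.

```python
import numpy as np

def colorfulness(img):
    # Hasler-Susstrunk colorfulness (a common stand-in for the paper's index)
    r, g, b = (img[..., i].astype(float) for i in range(3))
    rg, yb = r - g, 0.5 * (r + g) - b
    return np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())

def contrast(gray):
    # global standard deviation as a simple contrast proxy
    return gray.std()

def sharpness(gray):
    # variance of a discrete Laplacian response as a sharpness proxy
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    return lap.var()

def quality_score(img, weights=(0.4, 0.3, 0.3)):
    # adaptive grayscale conversion: RGB coefficients proportional to per-channel
    # mean energy (illustrative assumption; the paper's scheme differs in detail)
    mean_rgb = img.reshape(-1, 3).astype(float).mean(0)
    coeff = mean_rgb / max(mean_rgb.sum(), 1e-9)
    gray = img.astype(float) @ coeff
    w1, w2, w3 = weights  # in the paper these come from multiple linear regression
    return w1 * colorfulness(img) + w2 * contrast(gray) + w3 * sharpness(gray)
```

A uniform image scores zero on all three indexes, while any textured, colored image scores positive; only the relative ordering of scores is meaningful without calibrated weights.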
The objective of this study was to design a non-invasive system for observing respiratory rates and detecting apnoea, using analysis of real-time image sequences captured in any given sleep position and under any light conditions (even in dark environments). A Microsoft Kinect sensor was used to visualize the variations in the thorax and abdomen produced by the respiratory rhythm. These variations were magnified, analyzed and detected at a distance of 2.5 m from the subject. A modified motion magnification system and a frame subtraction technique were used to identify breathing movements by detecting rapid-motion areas in the magnified frame sequences. The experimental results on a set of video data from five subjects (3 h per subject) showed that our monitoring system can accurately measure respiratory rate and therefore detect apnoea in infants and young children. The proposed system is feasible, accurate and safe, and has low computational complexity, making it an efficient alternative for non-contact home sleep monitoring systems and advancing health care applications.
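The frame-subtraction stage and the rate estimate can be sketched as below. This is a simplified stand-in that omits the motion-magnification step; the pixel-change threshold and the 0.1–1 Hz breathing band are assumed parameters, not values from the paper.

```python
import numpy as np

def motion_signal(frames, thresh=10):
    # frame subtraction: per-frame count of pixels whose change exceeds a threshold
    sig = []
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(int) - prev.astype(int))
        sig.append(float((diff > thresh).sum()))
    return np.array(sig)

def respiratory_rate(sig, fps):
    # dominant frequency of the motion signal within a plausible breathing
    # band (0.1-1 Hz, i.e. 6-60 breaths/min), converted to breaths per minute
    sig = np.asarray(sig, float)
    sig = sig - sig.mean()
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fps)
    band = (freqs >= 0.1) & (freqs <= 1.0)
    return freqs[band][np.argmax(spec[band])] * 60.0
```

For a periodic motion signal sampled at 10 fps, the estimator recovers the breathing frequency as the spectral peak inside the band.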
An efficient product-packing system with a simplified automatic counting and collecting process is a critical aspect of the manufacturing industry. In this paper, we propose a real-time machine-vision tool for simultaneous counting and collection of objects into different bins. The system employs a minimum-distance classifier for object detection. Counting on the conveyor belt is done by tracking the Euclidean distances between the centroids of the objects in successive frames. The required number of objects is collected in two steps: in the first step, multiple objects nearest to the required count are collected; in the second step, the objects remaining from the previous step are added one by one. Object collection is carried out by integrating the software output with the hardware, which collects the required numbers in the different bins. To meet the fast counting requirements, a high-speed conveyor system with a vision sensor of 1920×1200 pixel resolution and a frame rate of 41 frames per second is employed. The high computational power required to implement the algorithm is provided by a dedicated graphics microcontroller, whose GPIO pins interface with the object-collection hardware. Testing the proposed device on 30 variants of plumbing-industry objects confirms an accuracy of 100% over a period of 3 hours. Moreover, the proposed module is scalable and could boost a company's productivity in a complex environment without affecting the accuracy of the system.
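The centroid-tracking idea can be illustrated with a small sketch: greedy nearest-neighbour matching of centroids between successive frames, plus a counting-line test. The `max_dist` gate and the line-crossing rule are illustrative assumptions, not the paper's exact logic.

```python
import numpy as np

def match_centroids(prev, cur, max_dist=30.0):
    # greedy nearest-neighbour matching of (x, y) centroids between frames
    matches, used = [], set()
    for i, p in enumerate(prev):
        dists = [np.hypot(p[0] - c[0], p[1] - c[1]) if j not in used else np.inf
                 for j, c in enumerate(cur)]
        j = int(np.argmin(dists)) if dists else -1
        if j >= 0 and dists[j] <= max_dist:
            matches.append((i, j))
            used.add(j)
    return matches

def count_crossings(prev, cur, line_x):
    # an object is counted once when its tracked centroid crosses the
    # counting line between two successive frames
    n = 0
    for i, j in match_centroids(prev, cur):
        if prev[i][0] < line_x <= cur[j][0]:
            n += 1
    return n
```

With a belt moving along +x, each object is counted exactly once as its centroid passes the line, independent of belt speed as long as the per-frame displacement stays under `max_dist`.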
This study conducts an in-depth evaluation of imaging algorithms and software and hardware architectures to meet the capability requirements of real-time image acquisition systems, such as spaceborne and airborne synthetic aperture radar (SAR) systems. By analysing the principles and models of SAR imaging, this research proposes a fully parallel processing architecture for the back projection (BP) algorithm based on a Field-Programmable Gate Array (FPGA), whose processing time offers significant advantages over existing methods. This article describes the BP imaging algorithm, which stands out for its high processing accuracy and its two-dimensional decoupling of range and azimuth, and analyses the algorithmic flow, operation, and storage requirements. The algorithm is divided into five core operations: range pulse compression, upsampling, slant-range calculation, data reading, and phase accumulation. The architecture and optimisation of the algorithm are presented, and the optimisation methods are described in detail from the perspectives of algorithm flow, fixed-point operation, parallel processing, and distributed storage. The maximum resource utilisation rate of the hardware platform in this study exceeds 80%, the system power consumption is 21.073 W, and the processing time efficiency is better than that of other FPGA-, DSP-, GPU-, and CPU-based designs. Finally, the correctness of the processing results is verified using actual data. The experimental results show that 1.1 s is required to generate an image of 900 × 900 pixels at a 200 MHz clock rate. This technology can solve multi-mode, multi-resolution, and multi-geometry signal processing problems in an integrated manner, laying a foundation for the development of a new, high-performance SAR system for real-time imaging.
The micro-Doppler (m-D) features induced by targets with micro-motions provide important information for automatic radar target recognition. In this study, the m-D effect induced by rotational micro-motion in wideband radar is analysed, and an algorithm for the extraction of these m-D features is proposed. Using the amplitude and phase information of the range-slow-time image, a dictionary of m-D signal atoms is constructed in the complex image space. The orthogonal matching pursuit algorithm in vector space is then extended to the complex image space to decompose the range-slow-time image and extract the m-D features of the target. The proposed algorithm can extract the m-D features in the presence of migration through range cells of micro-motional scatterers, and also works well when the sampling rate in the slow-time domain is lower than the Nyquist rate. Simulations validate the effectiveness and robustness of the proposed method.
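The decomposition step can be illustrated with generic orthogonal matching pursuit over a complex dictionary (shown here over plain complex vectors rather than the paper's range-slow-time image space, and without the m-D-specific atom construction):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: select k atoms (columns of complex
    dictionary D) that best represent y, refitting on the support each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.conj().T @ residual)))
        support.append(idx)
        # re-fit coefficients on the whole support, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef
```

With an orthonormal dictionary the selected support is exact; the sparse-recovery property is also what lets this family of methods tolerate slow-time sampling below the Nyquist rate.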
We propose a real-time, parameter-free circle detection algorithm that has high detection rates, produces accurate results, and controls the number of false circle detections. The algorithm makes use of the contiguous (connected) edge segments produced by our parameter-free edge segment detector, the Edge Drawing Parameter Free (EDPF) algorithm; hence the name EDCircles. The proposed algorithm first computes the edge segments in a given image using EDPF, which are then converted into line segments. The line segments are converted into circular arcs, which are joined together by two heuristic algorithms to form candidate circles and near-circular ellipses. The candidates are finally validated by an a contrario validation step based on the Helmholtz principle, which eliminates false detections, leaving only valid circles and near-circular ellipses. We show through experimentation that EDCircles runs in real time (10–20 ms for 640×480 images), has high detection rates, produces accurate results, and is well suited to next-generation real-time vision applications, including automatic inspection of manufactured products, eye-pupil detection, and circular traffic sign detection.
► Detects accurate circles and near-circular ellipses. ► Validates the detected circles by the Helmholtz principle to eliminate false detections. ► Runs at real-time speed.
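A building block of such arc-to-circle pipelines, fitting a circle to a set of arc points, can be sketched with the Kåsa algebraic least-squares fit (a common choice for this step, not necessarily the one EDCircles uses):

```python
import numpy as np

def fit_circle(pts):
    """Kasa algebraic circle fit: rewrite (x-a)^2 + (y-b)^2 = r^2 as the
    linear system 2ax + 2by + c = x^2 + y^2 with c = r^2 - a^2 - b^2."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a**2 + b**2)
```

Because the problem is linear in (a, b, c), the fit is closed-form and fast enough to run per candidate arc group in a real-time detector.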
We report a novel small-animal whole-body imaging system called ring-shaped confocal photoacoustic computed tomography (RC-PACT). RC-PACT is based on a confocal design of free-space ring-shaped light illumination and 512-element full-ring ultrasonic array signal detection. The free-space light illumination maximizes the light delivery efficiency, and the full-ring signal detection provides a full two-dimensional view aperture for accurate image reconstruction. Using cylindrically focused array elements, RC-PACT can image a thin cross section with in-plane resolutions of 0.10 to 0.25 mm and an acquisition time of 1.6 s/frame. By translating the mouse along the elevational direction, RC-PACT provides a series of cross-sectional images of the brain, liver, kidneys, and bladder.
Distinctive narrative conditions arise when “speedrunning” the zombie narrative in Valve Corporation’s cooperative first-person shooter games Left 4 Dead (2008) and Left 4 Dead 2 (2009). Close analyses of two live speedruns recorded at the biannual Games Done Quick charity marathon, guided by concepts from Deleuze and Guattari, explain how the player’s narrative body, space, and time are affected by the optimizations and exploits of the Left 4 Dead series’ zombie narrative. While the zombie story preprogrammed for players is largely bypassed, speedrunning through the Left 4 Dead series’ environments is a generative act of rupture that activates and deepens storytelling tendencies within zombie media that embrace chaos and decay. The speedrun is itself a form of collapse, in which scripted meaning and intentionality fall away, replaced by the chance and ephemeral story of an emergent, optimized engagement.
▶ Real-time image processing for weed/crop discrimination. ▶ Vegetation segmentation robust to changes in illumination. ▶ Tested successfully on a wide variety of real outdoor conditions.
This paper presents a computer vision system that successfully discriminates between weed patches and crop rows under uncontrolled lighting in real time. The system consists of two independent subsystems: a fast subsystem delivering results in real time (Fast Image Processing, FIP), and a slower, more accurate subsystem (Robust Crop Row Detection, RCRD) that corrects the first subsystem’s mistakes. This combination achieves very good results under a wide variety of conditions. Tested on several maize videos taken in different fields and in different years, the system successfully detects an average of 95% of weeds and 80% of crops under varying illumination, soil humidity and weed/crop growth conditions. Moreover, the system produces acceptable results even under very difficult conditions, such as dramatic sowing errors or abrupt camera movements. The computer vision system has been developed for integration into a treatment system, because the ideal setup for any weed sprayer would include a tool that provides real-time information on the weeds and crops present at each point while the tractor carrying the spraying bar is moving.
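The illumination-robust vegetation segmentation idea can be illustrated with an Excess-Green index computed on normalized chromaticities, a common choice in this literature; the threshold and this exact formulation are assumptions, not necessarily the FIP/RCRD implementation.

```python
import numpy as np

def vegetation_mask(img, thresh=0.1):
    # Normalising each pixel by its RGB sum removes much of the dependence on
    # overall illumination intensity; the Excess-Green index 2g - r - b then
    # responds to green vegetation against soil.
    rgb = img.astype(float)
    s = rgb.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / s, 2, 0)
    exg = 2 * g - r - b
    return exg > thresh
```

A bright green leaf pixel and a dimmer one produce nearly the same chromaticities, so both pass the threshold, while brownish soil pixels score near zero and are rejected.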