Abstract Image‐based algorithms have become a powerful tool for estimating flow velocities in rivers. In this study, we generalize the space‐time image velocimetry (STIV) framework for reach‐scale application rather than application along a single cross section. The new algorithm provides information on both the magnitude and orientation of velocity vectors, and we refer to it as two‐dimensional STIV, or 2D‐STIV. The workflow involves setting up a grid, using centreline tangent vectors as initial estimates of flow direction, and then extracting space‐time images (STIs) along search lines radiating from each grid node. The autocorrelation function is used to infer the inclination of the streak lines present in STIs, which represents the advection of water surface features. Information on flow direction is obtained by evaluating various candidate search lines and identifying the one that yields the highest velocity. This search can be performed exhaustively or via optimization. We applied the new 2D‐STIV algorithm to three test cases, one simulated data set and two natural channels, and compared image‐derived velocities to modelled or measured values. We also applied two established particle image velocimetry (PIV) algorithms to the same data sets. 2D‐STIV performed as well as the two PIV algorithms for simulated images. For a natural river with distinct water surface features, 2D‐STIV was effective for much of the channel but produced a patchier, more irregular velocity field than the two PIV algorithms. For a site lacking obvious surface features, exhaustive 2D‐STIV yielded velocity estimates uncorrelated with field data, while the optimization‐based version produced erratic flow directions. 2D‐STIV also required longer image sequences, higher frame rates, and generally longer computational run times. Overall, ensemble PIV was the most reliable algorithm.
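As a minimal illustration of the core STI step described above (a sketch, not the authors' implementation), the code below synthesizes a space-time image from a surface pattern advected at a known speed and recovers the streak inclination from the correlation between successive rows, i.e., the STI autocorrelation at time lag 1; multiplying the slope by pixel size and frame rate converts it to a velocity. All function names and parameters here are illustrative.

```python
import math

def make_sti(width, frames, speed):
    """Synthesize a space-time image: each row is a 1-D surface-texture
    signal advected by `speed` pixels per frame (pure advection)."""
    base = [math.sin(0.3 * x) + 0.5 * math.sin(0.07 * x + 1.0)
            for x in range(width + frames * 10)]
    return [[base[x + t * speed] for x in range(width)] for t in range(frames)]

def streak_slope(sti, max_shift=8):
    """Infer the streak inclination (pixels per frame) as the spatial shift k
    maximizing the mean correlation between row t and row t+1 of the STI."""
    frames, width = len(sti), len(sti[0])
    def corr(k):
        total = n = 0.0
        for t in range(frames - 1):
            for x in range(width - k):
                total += sti[t][x + k] * sti[t + 1][x]
                n += 1
        return total / n
    best_k, best_c = 0, corr(0)
    for k in range(1, max_shift + 1):
        c = corr(k)
        if c > best_c:
            best_k, best_c = k, c
    return best_k

def surface_velocity(slope_px_per_frame, pixel_size_m, frame_rate_hz):
    """Convert streak slope (px/frame) to a surface velocity in m/s."""
    return slope_px_per_frame * pixel_size_m * frame_rate_hz
```

In the full 2D-STIV algorithm this slope estimate would be repeated for each candidate search line at a grid node, with the line giving the highest velocity taken as the flow direction.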
•LWMG-YOLOv5 model achieves an accuracy of 99.3% and a speed of 30.246 fps.
•LWMG-YOLOv5 model outperforms the YOLOv5, GSEH-YOLOv5, and MobileNetV3-YOLOv5 models.
•LWMG-YOLOv5 model increases chip yields by 1.7% and reduces production cost by 1.82%.
In the fab, semiconductor manufacturers often use deep learning approaches for chip contour detection to shorten automated optical inspection, minimize production-cost losses, and lower power consumption, thereby realizing energy-efficient computing. However, the YOLOv5 and GSEH-YOLOv5 models sacrifice accuracy to improve operational speed, while the MobileNetV3-YOLOv5 model improves accuracy but lacks high-speed operation. Therefore, this study presents a lightweight version of the MobileNetV3-YOLOv5 model with ghost convolution, abbreviated LWMG-YOLOv5, to speed up chip contour detection, because this architecture reduces both the number of model parameters and the computational burden. As a result, the proposed approach outperforms the other methods, achieving a 3.62% speed-up in chip contour detection and gaining a manufacturing advantage by increasing chip yields by 1.7% and reducing production-cost losses by 1.83%.
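The abstract does not spell out the ghost-convolution configuration used in LWMG-YOLOv5, but the parameter saving that motivates it can be sketched by simple counting, following the standard ghost-module formulation: a primary convolution produces a fraction of the output maps, and cheap depthwise operations generate the remaining "ghost" maps from them. The layer sizes below are hypothetical.

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, cheap_k=3, ratio=2):
    """Ghost-module weight count: a primary k x k convolution produces
    c_out // ratio intrinsic maps, then (ratio - 1) cheap depthwise
    cheap_k x cheap_k ops per intrinsic map generate the ghost maps."""
    intrinsic = c_out // ratio
    primary = c_in * intrinsic * k * k
    cheap = intrinsic * cheap_k * cheap_k * (ratio - 1)
    return primary + cheap
```

For a hypothetical 64-to-128-channel 3x3 layer, the standard convolution needs 73,728 weights while the ghost version needs 37,440, roughly halving the parameters and multiply-accumulate operations, which is the source of the speed-up claimed for ghost convolution.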
In this paper, we propose a new real one-dimensional cosine fractional (1-DCF) chaotic map. Several chaos-theory analysis tests demonstrate that the proposed map has many good cryptographic properties, such as highly chaotic behavior, a large chaotic range, an infinite number of unstable fixed points, and far greater sensitivity to initial conditions than most low-dimensional chaotic maps. Given these attractive features, we use the 1-DCF map to design a novel fast image encryption scheme for real-time image processing. Unlike most existing encryption schemes, we adopt a permutation-less architecture to increase the encryption speed. Despite the absence of a permutation phase, a high security level is obtained by using a substitution process with high sensitivity to the plain image. Moreover, we replace the natural row-order encryption with a more secure random-like encryption order generated from the secret key. Experiments and simulations show that the new scheme is better than many recently proposed encryption schemes in both security and speed.
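The 1-DCF map's formula is not given in the abstract, so the sketch below substitutes a classical logistic map as a stand-in keystream generator to illustrate the architecture described: no pixel permutation, a key-derived random-like row-encryption order, and a chained substitution whose later outputs depend on earlier ciphertext (and hence on the plain image). All parameter values are illustrative, and this toy cipher is not meant to be secure.

```python
def chaotic_stream(x0, n, r=3.99, burn=100):
    """Byte keystream from a logistic map (stand-in for the 1-DCF map)."""
    x = x0
    for _ in range(burn):          # discard transient iterates
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def row_order(key_x0, rows):
    """Key-derived random-like permutation of row-processing order."""
    vals = chaotic_stream(key_x0, rows)
    return sorted(range(rows), key=lambda i: (vals[i], i))

def encrypt(img, key_x0):
    rows, cols = len(img), len(img[0])
    order = row_order(key_x0, rows)
    ks = chaotic_stream(key_x0 / 2 + 0.1, rows * cols)
    out = [row[:] for row in img]
    prev = 0  # ciphertext chaining -> sensitivity to the plain image
    for step, r in enumerate(order):
        for c in range(cols):
            out[r][c] = (img[r][c] + ks[step * cols + c] + prev) % 256
            prev = out[r][c]
    return out

def decrypt(enc, key_x0):
    rows, cols = len(enc), len(enc[0])
    order = row_order(key_x0, rows)
    ks = chaotic_stream(key_x0 / 2 + 0.1, rows * cols)
    out = [row[:] for row in enc]
    prev = 0
    for step, r in enumerate(order):
        for c in range(cols):
            out[r][c] = (enc[r][c] - ks[step * cols + c] - prev) % 256
            prev = enc[r][c]  # chaining uses the known ciphertext byte
    return out
```

Because substitution replaces pixel values in place, no positional permutation pass is needed, which is what makes the single-phase design fast; the key-derived `row_order` hides the encryption order without moving any pixels.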
Blind image quality assessment (BIQA) has received increasing attention in the past decades. However, BIQA for night-time images, which suffer from diverse authentic degradations, remains inadequately researched. Since the intrinsic content degradations of night-time images are highly related to illumination, how to use the connection between content and illumination to enhance the feature representation ability is the key issue in designing BIQA methods for night-time images. In this paper, we first construct an ultra-high-definition night-time image dataset (UHD-NID) with high image resolution and abundant parameter settings. UHD-NID contains 1600 images at a high resolution of 5616×3744, and each group of images covers ten exposure levels. Then, we conduct a subjective assessment and analyze the subjective data to obtain a mean opinion score for each image in UHD-NID. To enhance the feature representation ability in content and illumination, we propose a progressive bidirectional feature extraction and enhancement network (PBFEE-Net). In addition, we use a decomposition network to decompose the input image into reflectance and illumination components, which further facilitates feature extraction. The experimental results show that our proposed method achieves superior performance in evaluating the quality of night-time images. The dataset and code will be released at https://github.com/NBUsjl/UHD-NID .
The constant need for decarbonization has led to the replacement of artificial light at night (ALAN) sources with light-emitting diodes (LEDs), inducing blue light pollution and its consequent adverse effects. As a result, there is an urgent need for a technique for the rapid, accurate, and large-scale discrimination of the various illumination sources. The newly launched Sustainable Development Science Satellite-1 (SDGSAT-1) can play this role by supplementing the existing nighttime light data with multispectral and high-resolution features. Along these lines, in this work, a novel approach to identifying various types of illumination sources using machine learning on SDGSAT-1 images was proposed, taking Beijing as a worked example. The results indicate that: (1) The method can effectively distinguish the various types of light sources, with an overall accuracy of 0.92 for ALAN and 0.95 for streetlights. (2) The illumination patterns can be clearly depicted, indicating distinct spatial heterogeneity in ALAN along Beijing’s 5th Ring Road. (3) Statistically significant disparities between road classes and streetlight types were detected, with a notable increase in LED streetlight usage as the road class diminishes. This work emphasizes the crucial role of SDGSAT-1 in analysing ALAN, providing valuable insights into urban lighting management.
Until now, real-time image-guided adaptive radiation therapy (IGART) has been the domain of dedicated cancer radiotherapy systems. The purpose of this study was to clinically implement and investigate real-time IGART using a standard linear accelerator.
We developed and implemented two real-time technologies for standard linear accelerators: (1) Kilovoltage Intrafraction Monitoring (KIM), which finds the target, and (2) multileaf collimator (MLC) tracking, which aligns the radiation beam to the target. Eight prostate SABR patients were treated with this real-time IGART technology. The feasibility, geometric accuracy, and dosimetric fidelity were measured.
Thirty-nine out of forty fractions with real-time IGART were successful (95% confidence interval 87–100%). The geometric accuracy of the KIM system was −0.1 ± 0.4, 0.2 ± 0.2 and −0.1 ± 0.6 mm in the LR, SI and AP directions, respectively. The dose reconstruction showed that real-time IGART more closely reproduced the planned dose than treatment without IGART would have. For the largest motion fraction, with real-time IGART 100% of the CTV received the prescribed dose; without real-time IGART only 95% of the CTV would have received the prescribed dose.
The clinical implementation of real-time image-guided adaptive radiotherapy on a standard linear accelerator using KIM and MLC tracking is feasible. This achievement paves the way for real-time IGART to be a mainstream treatment option.
The demand to implement semantic segmentation networks on mobile devices has increased dramatically. However, existing real-time semantic segmentation methods still suffer from a large number of network parameters, making them unsuitable for mobile devices with limited memory resources. This problem mainly arises because most existing methods take backbone networks (e.g., ResNet-18 and MobileNet) as the encoder. To alleviate this problem, we propose a novel Reparameterizable Channel & Dilation (RCD) block and construct a considerably lightweight yet effective encoder by stacking several RCD blocks according to three guidelines. The proposed encoder not only extracts discriminative feature representations via channel convolutions and dilated convolutions, but also reduces computational burdens while maintaining segmentation accuracy with the help of the re-parameterization technique. Beyond the encoder, we also present a simple but effective decoder that adopts an across-resolution fusion strategy to fuse multi-scale feature maps generated by the encoder, instead of a bottom-up pathway fusion. With such an encoder and decoder, we provide a Reparameterizable Across-resolution Fusion Network (RAFNet) for real-time semantic segmentation. Extensive experiments demonstrate that our RAFNet achieves a promising trade-off between segmentation accuracy, inference speed and network parameters. Specifically, our RAFNet with only 0.96M parameters obtains 75.3% mIoU at 107 FPS and 75.8% mIoU at 195 FPS on the Cityscapes and CamVid test sets for full-resolution inputs, respectively. After quantization and deployment on a Xilinx ZCU104 device, our RAFNet obtains favorable segmentation performance with only 1.4 W of power.
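The internals of the RCD block are not given in the abstract, but the structural re-parameterization it relies on can be illustrated in a minimal single-channel form: at inference time, a parallel 1×1 branch and an identity branch are folded into the 3×3 kernel (the 1×1 weight is added to the centre tap, and the identity contributes +1 there), so the deployed network runs one convolution where training used three branches. The kernels and input below are made up for illustration.

```python
def merge_branches(k3, k1, with_identity=True):
    """Fold a parallel 1x1 branch (scalar weight k1) and an optional
    identity branch into a single 3x3 kernel via its centre tap."""
    merged = [row[:] for row in k3]
    merged[1][1] += k1
    if with_identity:
        merged[1][1] += 1.0
    return merged

def conv3x3(x, k):
    """'Same' single-channel 3x3 convolution (deep-learning convention,
    i.e. cross-correlation) with zero padding and stride 1."""
    h, w = len(x), len(x[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        s += k[di + 1][dj + 1] * x[ii][jj]
            out[i][j] = s
    return out
```

Because the merged kernel reproduces the three-branch sum exactly, the multi-branch training-time topology costs nothing at inference, which is how such blocks cut parameters and latency without losing accuracy.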
Vision-based practical applications, such as consumer photography and automated driving systems, rely heavily on enhancing the visibility of images captured in night-time environments. For this reason, various image enhancement algorithms (EHAs) have been proposed. However, little attention has been given to the quality evaluation of enhanced night-time images. In this paper, we conduct the first dedicated exploration of the subjective and objective quality evaluation of enhanced night-time images. First, we build an enhanced night-time image quality (EHNQ) database, which is the largest of its kind so far. It includes 1,500 enhanced images generated from 100 real night-time images using 15 different EHAs. Subsequently, we perform a subjective quality evaluation and obtain subjective quality scores on the EHNQ database. Thereafter, we present an objective blind quality index for enhanced night-time images (BEHN). Enhanced night-time images usually suffer from inappropriate brightness and contrast, deformed structure, and unnatural colorfulness. In BEHN, we capture perceptual features that are highly relevant to these three types of corruption, and we design an ensemble training strategy to map the extracted features to the quality score. Finally, we conduct extensive experiments on the EHNQ and EAQA databases. The experimental and analysis results validate the performance of the proposed BEHN compared with state-of-the-art approaches. Our EHNQ database is publicly available for download at https://sites.google.com/site/xiangtaooo/.
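BEHN's actual feature set is not specified in the abstract; as a rough illustrative stand-in, two of the three corruption types it targets (brightness/contrast and colourfulness; structural features are omitted here) can be summarized with classical scalar statistics, including the widely used Hasler–Süsstrunk colourfulness measure. The pixel format and thresholds below are assumptions.

```python
import math

def _stats(vals):
    """Mean and standard deviation of a list of numbers."""
    m = sum(vals) / len(vals)
    var = sum((v - m) ** 2 for v in vals) / len(vals)
    return m, math.sqrt(var)

def quality_features(pixels):
    """pixels: flat list of (R, G, B) tuples in [0, 255].
    Returns (brightness, contrast, colourfulness): brightness and contrast
    are the mean and std of luminance; colourfulness follows the
    Hasler-Suesstrunk opponent-channel statistic."""
    luma = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]
    brightness, contrast = _stats(luma)
    rg = [r - g for r, g, _ in pixels]
    yb = [0.5 * (r + g) - b for r, g, b in pixels]
    mu_rg, s_rg = _stats(rg)
    mu_yb, s_yb = _stats(yb)
    colourfulness = math.hypot(s_rg, s_yb) + 0.3 * math.hypot(mu_rg, mu_yb)
    return brightness, contrast, colourfulness
```

In a full BIQA pipeline, scalar cues like these (computed per patch, together with structure descriptors) would be the inputs that an ensemble regressor maps to a predicted quality score.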
•Cost-effective monocular visual odometry system for tracked vehicles on agricultural terrains.
•Simplified hardware and a low-complexity system, without compromising performance.
•Enhanced image processing algorithm with sub-pixel capability.
In precision agriculture, innovative cost-effective technologies and new improved solutions, aimed at making operations and processes more reliable, robust and economically viable, are still needed. In this context, robotics and automation play a crucial role, with particular reference to unmanned vehicles for crop monitoring and site-specific operations. However, unstructured and irregular working environments, such as agricultural scenarios, require specific solutions regarding positioning and motion control of autonomous vehicles.
In this paper, a reliable and cost-effective monocular visual odometry system, properly calibrated for the localisation and navigation of tracked vehicles on agricultural terrains, is presented. The main contribution of this work is the design and implementation of an enhanced image processing algorithm based on the cross-correlation approach. It was specifically developed to use simplified hardware and a low-complexity mechanical system, without compromising performance. By providing sub-pixel results, the presented algorithm makes it possible to exploit low-resolution images, thus obtaining high accuracy in motion estimation with short computing times. The results, in terms of odometry accuracy and processing time, achieved during the in-field experimentation campaign on several terrains, proved the effectiveness of the proposed method and its fitness for automatic control solutions in precision agriculture applications.
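The paper's cross-correlation details are not given in the abstract, but a common way to obtain sub-pixel displacement from low-resolution images is to refine the integer correlation peak with three-point parabolic interpolation. The 1-D sketch below uses synthetic signals and illustrative parameters; it is not the authors' algorithm.

```python
import math

def cross_corr(a, b, max_lag):
    """Raw correlation of b against a for integer lags in [-max_lag, max_lag]."""
    n = len(a)
    c = {}
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += a[i] * b[j]
        c[lag] = s
    return c

def subpixel_shift(a, b, max_lag=10):
    """Displacement of b relative to a: integer correlation peak refined by
    fitting a parabola through the peak and its two neighbours."""
    c = cross_corr(a, b, max_lag)
    k = max(c, key=c.get)
    if k - 1 not in c or k + 1 not in c:
        return float(k)  # peak at the search boundary: no refinement
    denom = c[k - 1] - 2 * c[k] + c[k + 1]
    if denom == 0:
        return float(k)
    # vertex of the parabola through (k-1, k, k+1)
    return k + 0.5 * (c[k - 1] - c[k + 1]) / denom
```

The parabolic refinement is what lets a coarse pixel grid yield fractional-pixel motion estimates, which is the mechanism that allows low-resolution cameras to deliver high odometry accuracy at low computational cost.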