Underwater image enhancement has attracted much attention due to the rise of marine resource development in recent years. Benefiting from the powerful representation capabilities of Convolutional Neural Networks (CNNs), multiple underwater image enhancement algorithms based on CNNs have been proposed in the past few years. However, almost all of these algorithms operate only in the RGB color space, which is insensitive to image properties such as luminance and saturation. To address this problem, we propose the Underwater Image Enhancement Convolutional Neural Network using two Color Spaces (UIEC^2-Net), which efficiently and effectively integrates both the RGB and HSV color spaces in a single CNN. To the best of our knowledge, this method is the first to use the HSV color space for deep-learning-based underwater image enhancement. UIEC^2-Net is an end-to-end trainable network consisting of three blocks: an RGB pixel-level block that implements fundamental operations such as denoising and removing color casts; an HSV global-adjust block that globally adjusts underwater image luminance, color, and saturation by adopting a novel neural curve layer; and an attention map block that combines the advantages of the RGB and HSV block outputs by assigning a weight to each pixel. Experimental results on synthetic and real-world underwater images show that the proposed method performs well in both subjective comparisons and objective metrics. The code is available at https://github.com/BIGWangYuDong/UWEnhancement.
•An end-to-end CNN-based underwater image enhancement network using the RGB and HSV color spaces is proposed. We are the first to use the HSV color space for deep-learning-based underwater image enhancement.
•A piece-wise linear scaling curve is learned to adjust image properties in the HSV color space.
•Differentiable RGB and HSV color space conversions permit end-to-end learning.
•Our model has better generalization ability and achieves better results on real-world underwater image datasets.
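The differentiable color-space conversion mentioned in the highlights can be illustrated with a minimal NumPy version of the standard RGB→HSV mapping; this is a sketch of the underlying formula, not the paper's actual network layer:

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Convert an RGB image in [0,1] with shape (H,W,3) to HSV, all channels in [0,1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)                          # value = max channel
    c = v - rgb.min(axis=-1)                      # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-8), 0.0)
    # Hue: one sixth of a turn per sector; safe_c avoids division by zero.
    safe_c = np.maximum(c, 1e-8)
    h = np.where(v == r, ((g - b) / safe_c) % 6,
        np.where(v == g, (b - r) / safe_c + 2,
                         (r - g) / safe_c + 4)) / 6.0
    h = np.where(c == 0, 0.0, h)                  # hue undefined for gray pixels
    return np.stack([h, s, v], axis=-1)
```

Since every operation here is element-wise arithmetic, the same mapping can be expressed with tensor operations so that gradients flow through it during training.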
Corrosion products formed on steel in various environments were identified with image analysis, Raman spectroscopy, and X-ray diffraction (XRD). Raman spectroscopy showed that γ-FeOOH exhibits a bright yellow color, whereas β-FeOOH exhibits a gray color. XRD analysis showed that the main corrosion product formed on the steels was γ-FeOOH. Using a proper threshold in the HSV color space makes it possible to identify the corrosion products formed on steel. Comparing the corrosiveness of the exposure environments with the corrosion products formed shows that α-FeOOH is preferentially formed in mildly corrosive environments, while β-FeOOH is formed in severely corrosive environments.
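The HSV-thresholding step can be sketched as follows; the hue, saturation, and value ranges below are hypothetical stand-ins for whatever thresholds the study calibrated:

```python
import numpy as np

# Illustrative HSV ranges only -- the study's actual thresholds are not given here.
def classify_corrosion(hsv):
    """hsv: (H,W,3) array with H in [0,360), S and V in [0,1].
    Returns boolean masks for bright-yellow (gamma-FeOOH-like) and
    low-saturation gray (beta-FeOOH-like) pixels."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    yellow = (h >= 40) & (h <= 70) & (s > 0.4) & (v > 0.5)   # bright yellow
    gray = (s < 0.15) & (v > 0.2) & (v < 0.8)                # dull gray
    return yellow, gray
```

Thresholding in HSV separates the "what color" (hue) decision from the "how vivid/bright" (saturation/value) decision, which is why it suits distinguishing a yellow product from a gray one.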
Weed infestations have the potential to cause significant economic losses for farmers as a result of diminished crop yields and escalated labor and input costs linked with weed management. Traditional weed control methods often entail indiscriminate application of herbicides across entire fields, irrespective of weed density or species composition. In this investigation, we propose a hierarchical detection algorithm for identifying multi-species weeds and devise a variable spraying system for agricultural settings. The weed classification and detection algorithm comprises two components: crop seedling target detection and weed detection. An enhanced YOLOv5 algorithm was first merged with a Vision Transformer (ViT) to produce the W-YOLOv5 crop seedling target detection algorithm. Experimental validation indicates that the mean average precision (mAP) of the proposed W-YOLOv5 is 87.6%, a 3.2% increase over the original YOLOv5. Compared with YOLOv7, this method demonstrates a 4.4% improvement in mAP together with an 80.14% reduction in floating-point operations (FLOPs). These findings underscore the effectiveness of the proposed crop seedling target detection algorithm in maintaining detection accuracy while minimizing model FLOPs. After detecting the crop seedlings, a Hue, Saturation, Value (HSV) color space filtering algorithm was employed to locate all weeds in the image, thereby enabling the detection of weeds among wheat (Triticum aestivum L.), radish (Raphanus sativus L.), cucumber (Cucumis sativus L.), soybean (Glycine max (L.) Merr.), and corn (Zea mays L.) seedlings. Finally, the severity of weed infestation in the field was classified into five levels. Leveraging these severity levels, we developed a variable spraying system and integrated the algorithm into it.
Field validation experiments demonstrate that at a speed of 4 km/h, the spraying accuracy of the system can reach 90.32%, effectively enabling precise variable spraying.
•The method can detect weeds hierarchically and is not limited by weed types.
•The Vision Transformer mechanism can improve the detection accuracy.
•Implemented HSV color space filtering to detect weeds in various crop seedlings.
•Engineered a precise variable spraying system based on weed severity levels.
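The HSV filtering and five-level severity grading described above can be sketched as follows; the hue band for vegetation, the coverage cut-offs, and the `crop_boxes` interface are all illustrative assumptions rather than the paper's published values:

```python
import numpy as np

def weed_mask(hsv):
    """hsv: (H,W,3) with H in [0,360), S and V in [0,1].
    Returns a boolean mask of greenish (vegetation-like) pixels."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return (h >= 80) & (h <= 160) & (s > 0.25) & (v > 0.2)   # illustrative band

def severity_level(mask, crop_boxes=()):
    """Map weed pixel coverage (outside detected crop seedling boxes)
    to one of five severity levels (1..5)."""
    m = mask.copy()
    for x0, y0, x1, y1 in crop_boxes:      # exclude pixels inside crop boxes
        m[y0:y1, x0:x1] = False
    ratio = m.mean()                        # fraction of weed pixels
    cuts = [0.02, 0.05, 0.10, 0.20]         # hypothetical level boundaries
    return 1 + sum(ratio > c for c in cuts)
```

The two-stage logic mirrors the paper's hierarchy: first detect crops, then treat the remaining green pixels as weeds and grade their coverage.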
An accurate detection and management of pain, measured through its relative intensity, plays an important role in treating disease and reducing a patient's discomfort. Because pain level is relatively difficult to assess, describe, evaluate, and manage using a patient's self-report, automated pain-detection tools can provide useful information to assist in managing pain intensity. This study proposes a new predictive modeling framework that employs a modified Temporal Convolutional Network (TCN) to recognize pain intensity in patients' video frames collected in the UNBC-McMaster Shoulder Pain Archive and MIntPAIN databases. The input to the proposed TCN is composed of face image features extracted by a fine-tuned VGG-Face network and reduced by principal component analysis (PCA), computed on video frames in the Hue, Saturation, Value (HSV) color space. Compared with a long short-term memory (LSTM) model and other state-of-the-art models, the TCN-based predictive model runs faster with a high level of efficiency. This is demonstrated by low error metrics (Mean Squared Error = 0.0629, Mean Absolute Error = 0.1021) and strong validation results (Area Under the Curve = 85% and accuracy = 92.44%). Considering the efficiency of the proposed TCN framework, which integrates a fine-tuned VGG-Face and PCA with HSV color space video images for pain intensity estimation, the present study affirms that the new method can be adopted as an automatic health informatics tool, mainly for pain detection, and subsequently implemented in the pain management area.
•A Temporal Convolutional Network (TCN) predictive model for pain modelling is proposed.
•The TCN model considers face images in the Hue, Saturation, Value (HSV) colour space.
•A fine-tuned VGG-Face is incorporated in the TCN algorithm to extract face image features.
•The predictive model is tested against the UNBC-McMaster Shoulder Pain and MIntPAIN databases.
•The proposed TCN framework has significant implications for health informatics.
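The PCA reduction step from the pipeline above can be sketched in NumPy; `feats` stands in for the per-frame VGG-Face descriptors, and the dimensions are illustrative:

```python
import numpy as np

def pca_reduce(feats, k):
    """feats: (n_frames, d) feature matrix, one row per video frame.
    Returns the projection onto the top-k principal components, shape (n_frames, k)."""
    mu = feats.mean(axis=0)
    centered = feats - mu                  # PCA requires mean-centered data
    # SVD of the centered data: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T
```

Reducing the (typically thousands-dimensional) face descriptors this way keeps the temporal model's input small, which is part of why the TCN can run quickly.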
This paper presents a simple yet efficient image retrieval approach by proposing a new image feature detector and descriptor, namely the micro-structure descriptor (MSD). The micro-structures are defined based on edge orientation similarity, and the MSD is built based on the underlying colors in micro-structures with similar edge orientation. With micro-structures serving as a bridge, the MSD extracts features by simulating human early visual processing and it effectively integrates color, texture, shape and color layout information as a whole for image retrieval. The proposed MSD algorithm has high indexing performance and low dimensionality. Specifically, it has only 72 dimensions for full color images, and hence it is very efficient for image retrieval. The proposed method is extensively tested on Corel datasets with 15,000 natural images. The results demonstrate that it is much more efficient and effective than representative feature descriptors, such as Gabor features and multi-textons histogram, for image retrieval.
► The concept of micro-structure and its detection method are proposed.
► A micro-structure descriptor (MSD) is developed for image retrieval.
► MSD has discrimination power for color, texture, shape and layout information.
► MSD has high indexing performance and low dimensionality. It is very efficient.
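The edge-orientation similarity underlying micro-structure detection can be illustrated with a minimal sketch; the choice of six orientation bins is an assumption for illustration, not necessarily the MSD paper's quantization:

```python
import numpy as np

def orientation_map(gray, n_bins=6):
    """gray: (H,W) float image. Returns the per-pixel edge-orientation bin
    in [0, n_bins), so neighboring pixels with the same bin have 'similar'
    edge orientation in the micro-structure sense."""
    gx = np.gradient(gray, axis=1)                 # horizontal gradient
    gy = np.gradient(gray, axis=0)                 # vertical gradient
    theta = np.mod(np.arctan2(gy, gx), np.pi)      # orientation in [0, pi)
    return np.floor(theta / np.pi * n_bins).astype(int) % n_bins
```

A micro-structure can then be defined over a small window (e.g., 3x3) where the center pixel's bin matches a neighbor's, and the descriptor accumulates the underlying colors inside such windows.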
We propose an iterative single-image haze-removal method that first divides hazy images into regions in which haze-removal processing is difficult and then estimates the ambient light. Existing methods often overestimate the amount of haze in regions where the distance between the camera and the subject is large, which prevents the ambient light from being estimated accurately. In particular, it is often difficult to accurately estimate the ambient light of images containing white and sky regions. Processing those regions in the same way as other regions has detrimental results, such as darkening or unnecessary color change. The proposed method divides such regions in advance into multiple small regions, and the ambient light is then estimated from the small regions in which haze removal is easy to process. We evaluated the proposed method through simulations and found that it achieves better haze-reduction accuracy than even state-of-the-art methods based on deep learning.
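The idea of estimating ambient light only from sub-regions where estimation is reliable can be sketched as follows; the fixed grid subdivision and the "bright but flat" score are simplifications standing in for the paper's actual region-division criterion:

```python
import numpy as np

def estimate_ambient(img, grid=4):
    """img: (H,W,3) in [0,1]. Split the image into grid x grid blocks and
    take the mean color of the block that is bright but has low variance
    (a common heuristic for haze-dominated regions) as the ambient light."""
    h, w, _ = img.shape
    best, ambient = -np.inf, None
    for i in range(grid):
        for j in range(grid):
            block = img[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            score = block.mean() - block.std()   # bright and flat scores high
            if score > best:
                best, ambient = score, block.reshape(-1, 3).mean(axis=0)
    return ambient
```

Restricting the estimate to such blocks avoids the failure mode described above, where white objects or sky are mistaken for haze.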
Image dehazing has become a crucial prerequisite for most outdoor computer vision applications. The majority of existing dehazing models can achieve haze removal, but they fail to preserve colors and fine details. To address this problem, we introduce a novel high-performing attention-based dehazing model (ADMC2-net) that successfully incorporates both the RGB and HSV color spaces to maintain color properties. The model consists of two parallel densely connected sub-models (RGB and HSV) followed by a new efficient attention module. This attention module comprises pixel-attention and channel-attention mechanisms to extract more haze-relevant features. Experimental analyses validate that our proposed model (ADMC2-net) achieves superior results on synthetic and real-world datasets and outperforms most state-of-the-art methods.
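The channel-attention-then-pixel-attention combination can be sketched schematically in NumPy; `w1`, `w2`, and `wp` stand in for learned weights, and the exact layer layout of ADMC2-net is not reproduced here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_pixel_attention(feat, w1, w2, wp):
    """feat: (H,W,C) feature map. First reweight channels from pooled
    statistics (squeeze-and-excitation style), then gate each pixel."""
    pooled = feat.mean(axis=(0, 1))                  # (C,) global average pool
    ch = sigmoid(w2 @ np.maximum(w1 @ pooled, 0))    # (C,) channel weights
    feat = feat * ch                                 # channel attention
    px = sigmoid(feat @ wp)                          # (H,W) per-pixel gate
    return feat * px[..., None]                      # pixel attention
```

Channel attention lets the model emphasize haze-relevant feature maps globally, while pixel attention handles the fact that haze density varies across the image.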
In this paper, we propose a Retinex-based fast algorithm (RBFA) to achieve low-light image enhancement, which can restore information hidden by low illuminance. The proposed algorithm consists of the following steps. First, we convert the low-light image from the RGB (red, green, blue) color space to the HSV (hue, saturation, value) color space and use a linear function to stretch the original gray-level dynamic range of the V component. Then, we estimate the illumination image via adaptive gamma correction and use the Retinex model to achieve brightness enhancement. After that, we further stretch the gray-level dynamic range to avoid low image contrast. Finally, we design another mapping function to achieve color saturation correction and convert the enhanced image from the HSV color space back to the RGB color space, after which we obtain the clear image. The experimental results show that images enhanced with the proposed method have better qualitative and quantitative evaluations and lower computational complexity than those of other state-of-the-art methods.
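The first two V-channel steps (a linear dynamic-range stretch followed by gamma correction) can be sketched as follows; the adaptive-gamma rule shown is an illustrative assumption, not the paper's exact formula:

```python
import numpy as np

def enhance_v(v):
    """v: (H,W) value channel in [0,1]. Stretch its dynamic range to [0,1],
    then apply a gamma chosen from the (stretched) mean brightness."""
    lo, hi = v.min(), v.max()
    stretched = (v - lo) / max(hi - lo, 1e-8)        # linear stretch to [0,1]
    # Darker images get a smaller gamma (< 1), which brightens midtones.
    gamma = 1.0 - 0.5 * (1.0 - stretched.mean())
    return stretched ** gamma
```

Operating on V alone is the point of the HSV round-trip: brightness is corrected without directly disturbing hue, and saturation is handled by a separate mapping afterwards.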
With the busy pace of modern life, an increasing number of people are afflicted by lifestyle diseases. Going directly to the hospital for medical checks is not only time-consuming but also costly. Fortunately, the emergence of rapid tests has alleviated this burden. Accurately interpreting test results is extremely important; misinterpreting the results of rapid tests could lead to delayed medical treatment. The URS-10 is a rapid test capable of detecting 10 distinct parameters in urine samples, and assessing these parameters can offer insights into the subject's physiological condition, encompassing aspects such as metabolism, renal function, diabetes, urinary tract disorders, hemolytic diseases, and acid–base balance, among others. Although the operational procedure is straightforward, the variegated color changes exhibited by the individual parameters make it challenging for lay users to deduce causal factors solely from color variations, and potential misinterpretations could arise from visual discrepancies. In this study, we developed a cloud-based health checkup system for use in indoor environments. The system is used by placing a URS-10 test strip on a colorimetric board developed for this study and then using a smartphone application to take images, which are uploaded to a server for cloud computing. Finally, the interpretation results are stored in the cloud and sent back to the smartphone to be checked by the user. Furthermore, to confirm whether the color calibration technology can eliminate color differences between cameras, and whether the colorimetric board and urine test strips can be compared correctly under different light intensities, indoor environments simulating specific light intensities were established for testing.
When comparing the experimental results to real test strips, only two groups failed to reach an identification success rate of 100%, and in both of these cases the success rate reached 95%. The experimental results confirmed that the system developed in this study was able to eliminate color differences between camera devices and could be used without special technical requirements or training.
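The strip-reading step amounts to a color-difference test against the reference blocks on the colorimetric board; the sketch below uses the simple CIE76 distance in Lab space and a hypothetical tolerance, which may differ from the system's actual metric:

```python
import numpy as np

def delta_e(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two Lab triples."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def matches_reference(lab, reference_blocks, tol=10.0):
    """Return the index of the closest reference color block if it lies
    within tolerance, else -1 (reading rejected as out of calibration)."""
    diffs = [delta_e(lab, ref) for ref in reference_blocks]
    i = int(np.argmin(diffs))
    return i if diffs[i] <= tol else -1
```

Calibrating the camera image against the board's known reference colors before this comparison is what removes the per-device and per-illumination color shifts the experiments measured.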