Determination and classification of bruise degree in cherries can improve consumer satisfaction with cherry quality and enhance the industry's competitiveness and profitability. In this study, visible and near-infrared (Vis-NIR) reflection spectroscopy over 350–2500 nm was used to identify the bruise degree of cherries. Spectral data were extracted from normal, slightly bruised, and severely bruised samples. Principal component analysis (PCA) was applied to determine the first few principal components (PCs) for cluster analysis among samples. Optimal wavelengths were selected from the PC loadings and by the successive projections algorithm (SPA), respectively. These optimal wavelengths were then used as inputs to establish least squares support vector machine (LS-SVM) classification models. The best qualitative discrimination of bruise degree was achieved by the LS-SVM model built on five optimal wavelengths (603, 633, 679, 1083, and 1803 nm) selected directly by SPA, which yielded an acceptable classification accuracy of 93.3%. The confusion matrix showed that misclassification generally occurred between normal and slightly bruised samples. Furthermore, the latent relation between the spectral properties of cherries at varying bruise degrees and their firmness and soluble solids content (SSC) was analyzed; colour, firmness, and SSC were all consistent with the Vis-NIR reflectance of the cherries. Overall, this study revealed that Vis-NIR reflection spectroscopy integrated with multivariate analysis can be used as a rapid, non-destructive method to determine the bruise degree of cherries, laying a foundation for cherry sorting and postharvest quality control.
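As a rough illustration of the wavelength-selection-plus-classification pipeline described above, the sketch below ranks bands by PCA loading magnitude on synthetic spectra and classifies the selected bands with a standard SVM. All wavelengths, shifts, and noise levels are invented for the example, and scikit-learn's `SVC` stands in for LS-SVM, which scikit-learn does not provide.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
bands = np.linspace(350, 2500, 200)          # hypothetical wavelength grid (nm)
n_per_class = 50

# Synthetic spectra: bruising lowers reflectance at a few informative bands
X = rng.normal(0.5, 0.02, (3 * n_per_class, bands.size))
y = np.repeat([0, 1, 2], n_per_class)        # 0=normal, 1=slight, 2=severe bruise
for cls, cols in [(1, [30, 120]), (2, [30, 120, 170])]:
    X[np.ix_(y == cls, cols)] -= 0.1 * cls

# Rank bands by the magnitude of their loadings on the first few PCs
pca = PCA(n_components=3).fit(X)
importance = np.abs(pca.components_).sum(axis=0)
top5 = np.sort(np.argsort(importance)[-5:])

# Classify on the selected bands only (standard SVM as the LS-SVM stand-in)
Xtr, Xte, ytr, yte = train_test_split(X[:, top5], y, random_state=0, stratify=y)
acc = SVC(kernel="rbf", C=10).fit(Xtr, ytr).score(Xte, yte)
print("selected bands (nm):", bands[top5].round(1), "accuracy:", acc)
```

The same shape of pipeline applies whichever selector (PC loadings or SPA) produces the band subset; only the ranking step changes.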
Plant stress is one of the major issues causing significant economic losses for growers. Conventional methods for identifying stressed plants are labor-intensive, which constrains their application, so rapid methods are urgently needed. Advances in sensing and machine learning techniques are driving a revolution in precision agriculture based on deep learning and big data. In this paper, we reviewed the latest deep learning approaches pertinent to image analysis for crop stress diagnosis. We compiled the current sensor tools and deep learning principles involved in plant stress phenotyping. In addition, we reviewed a variety of deep learning applications for plant stress imaging, including classification, object detection, and segmentation, which are closely intertwined. Furthermore, we summarized and discussed the current challenges and future development avenues in plant phenotyping.
Strawberry is a popular fruit rich in nutrients. In this study, the ripeness of strawberries was estimated using a hyperspectral imaging (HSI) system under field and laboratory conditions. HSI data covering wavelengths from 370 to 1015 nm were collected from strawberries at the early-ripe and ripe stages. Spectral feature wavelengths were selected using the sequential feature selection (SFS) algorithm; two wavelengths were selected for field (530 and 604 nm) and laboratory (528 and 715 nm) samples, respectively. The reliability of these spectral features was then validated with a support vector machine (SVM) classifier. The SVM classification models performed well, with receiver operating characteristic values above 0.95 for samples under both field and laboratory conditions. Meanwhile, spatial feature images were extracted from the spectral feature wavelengths and the first three principal components for the laboratory samples. A pretrained AlexNet convolutional neural network (CNN) was used to classify early-ripe and ripe strawberry samples, achieving an accuracy of 98.6% on the test dataset. These results indicate that a real-time HSI system is promising for estimating strawberry ripeness under field and laboratory conditions and could be a potential technique for helping farmers and producers manage harvest timing.
•Strawberries at early-ripe and ripe stages were imaged using hyperspectral imagery.•Spectral features were selected using a sequential feature selection algorithm.•Spatial features were obtained from the spectral feature wavelengths and the first three principal components.•A convolutional neural network was used to identify strawberry ripeness.
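A minimal sketch of the SFS-plus-SVM step, using scikit-learn's `SequentialFeatureSelector` with a linear SVM on synthetic two-class "spectra" (the wavelength grid, informative bands, and all sample values are invented for the example):

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

rng = np.random.default_rng(1)
bands = np.linspace(370, 1015, 60)           # hypothetical 60-band grid (nm)
y = np.repeat([0, 1], 60)                    # 0 = early ripe, 1 = ripe
X = rng.normal(0.0, 1.0, (y.size, bands.size))
X[y == 1, 10] += 2.0                         # two informative bands for the
X[y == 1, 40] += 2.0                         # ripe class (made up)

# Forward sequential selection of two wavelengths, scored by CV accuracy
sfs = SequentialFeatureSelector(
    SVC(kernel="linear"), n_features_to_select=2, direction="forward", cv=5
).fit(X, y)
selected = bands[sfs.get_support()]
print("selected wavelengths (nm):", selected.round(1))
```

In the study, the same idea yields two wavelengths per acquisition condition (530/604 nm in the field, 528/715 nm in the laboratory).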
This study developed and field-tested an automated weed mapping and variable-rate herbicide spraying (VRHS) system for row crops. Weed detection was performed by a machine vision sub-system that segmented field images with a custom threshold segmentation method based on an improved particle swarm optimization (IPSO) algorithm. The VRHS system also used a lateral-histogram-based algorithm for fast extraction of weed maps, which formed the basis for determining real-time herbicide application rates. The central processor of the VRHS system had higher logic operation capacity than conventional controller-based systems, and a custom-developed monitoring system allowed real-time visualization of the spraying system's functions. Integrated system performance was then evaluated through field experiments. The IPSO successfully segmented weeds within a corn crop at the seedling growth stage and reduced the segmentation error rate from 7.1% (traditional particle swarm optimization) to 0.1%, with a processing speed of 0.026 s/frame. The weed-detection-to-chemical-actuation response time of the integrated system was 1.562 s. Overall, the VRHS system met the real-time data processing and actuation requirements for practical weed management applications.
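The threshold-segmentation idea underneath can be sketched independently of the optimizer: pick the intensity threshold that best separates two pixel populations. Below, an exhaustive search over Otsu's between-class-variance criterion stands in for the paper's IPSO, and the pixel intensities are invented toy data.

```python
import numpy as np

# Toy 1-D "pixel intensities": dark weed pixels vs bright soil pixels
# (values invented; the paper optimizes the threshold with IPSO,
#  here an exhaustive 0-255 search over the Otsu criterion stands in)
rng = np.random.default_rng(2)
pixels = np.concatenate([rng.normal(60, 5, 500),    # weeds
                         rng.normal(180, 5, 500)])  # soil
pixels = np.clip(pixels, 0, 255)

def between_class_variance(vals, t):
    """Otsu's between-class variance for threshold t."""
    lo, hi = vals[vals <= t], vals[vals > t]
    if lo.size == 0 or hi.size == 0:
        return 0.0
    w_lo, w_hi = lo.size / vals.size, hi.size / vals.size
    return w_lo * w_hi * (lo.mean() - hi.mean()) ** 2

best_t = max(range(256), key=lambda t: between_class_variance(pixels, t))
print("segmentation threshold:", best_t)
```

A swarm optimizer earns its keep when the fitness landscape is larger than 256 candidate thresholds (e.g. multi-level or multi-channel thresholds), where exhaustive search becomes too slow for real-time use.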
Crop pests have a great impact on the quality and yield of crops. The use of deep learning to identify crop pests is important for precise crop management.
To address the lack of datasets and poor classification accuracy in current pest research, a large-scale pest dataset named HQIP102 was built and a pest identification model named MADN is proposed. The IP102 large crop pest dataset has some problems, such as mislabeled pest categories and images from which the pest subject is missing. In this study, the IP102 dataset was carefully filtered to obtain the HQIP102 dataset, which contains 47,393 images of 102 pest classes on eight crops. The MADN model improves the representation capability of DenseNet in three respects. First, the Selective Kernel unit is introduced into the DenseNet model; it can adaptively adjust the size of the receptive field according to the input and capture target objects of different sizes more effectively. Second, the Representative Batch Normalization module is used in the DenseNet model so that features follow a stable distribution. Third, adaptively selecting whether to activate neurons can improve network performance, for which the ACON activation function is used in the DenseNet model. Finally, the MADN model is constituted by ensemble learning.
Experimental results show that MADN achieved an accuracy of 75.28% and an F1-score of 65.46% on the HQIP102 dataset, improvements of 5.17 and 5.20 percentage points over the unmodified DenseNet-121. Compared with ResNet-101, the accuracy and F1-score of the MADN model improved by 10.48 and 10.56 percentage points, while the parameter size decreased by 35.37%. Deploying the models to cloud servers accessed through a mobile application can help secure crop yield and quality.
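The final ensemble step is commonly implemented by averaging the member networks' class probabilities and taking the argmax; a toy sketch with made-up probabilities for three ensemble members and three pest classes (the abstract does not state MADN's exact combination rule, so averaging is an assumption):

```python
import numpy as np

# Hypothetical per-member class probabilities for one image
# (three ensemble members, three pest classes; numbers invented)
probs = np.array([
    [0.6, 0.3, 0.1],   # member 1
    [0.5, 0.4, 0.1],   # member 2
    [0.2, 0.7, 0.1],   # member 3
])
ensemble = probs.mean(axis=0)        # average the members' probabilities
pred = int(np.argmax(ensemble))      # final class decision
print("ensemble probabilities:", ensemble.round(3), "prediction:", pred)
```

Note that member 1 and member 2 individually favor class 0, yet the ensemble picks class 1 because member 3's confidence outweighs their margins; this smoothing over individual errors is the point of ensembling.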
Currently, the most efficient way to reduce the pollution caused by weed management is variable-rate spraying technology. In this study, an improved genetic proportional-integral-derivative (IGA-PID) control algorithm was developed for this purpose. It used a trimmed-mean operator to optimize the selection operator for improved search speed and accuracy, and adaptive crossover and mutation operators were constructed for rapid convergence. Weed density detection was performed by an image acquisition and processing sub-system capable of determining the spraying quantity, and the variable spraying control sub-system carried out the spraying operation. System performance was evaluated by simulations and field tests and compared with conventional methods. The simulation results indicated that the overshoot (1.25%), steady-state error (1.21%), and adjustment time (0.157 s) of IGA-PID were the lowest among the compared standard algorithms. Furthermore, the field validation results showed that the system with the proposed algorithm achieved the best performance, with a spraying quantity error of 2.59% and a response time of 3.84 s. Overall, the variable spraying system based on IGA-PID meets the real-time and accuracy requirements of field applications and could be helpful for weed management in precision agriculture.
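The PID loop at the core of such a system (the genetic algorithm only tunes its gains) reduces to a simple discrete update; the first-order nozzle-flow plant, gains, and setpoint below are invented for illustration:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy first-order plant: dflow/dt = u - flow (hypothetical spray nozzle)
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
setpoint, flow = 10.0, 0.0                 # target spray rate (made-up units)
for _ in range(3000):                      # 30 s of simulated time
    u = pid.step(setpoint, flow)
    flow += (u - flow) * pid.dt
print(f"steady-state flow: {flow:.3f}")
```

Overshoot, steady-state error, and adjustment time, the three criteria the study reports, are all read off step responses like this one; the genetic algorithm's job is to search the (kp, ki, kd) space that minimizes them.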
The segmentation and positioning of tea buds are the basis for intelligent picking robots to pick tea buds accurately. Tea images were collected in a complex environment, and median filtering was applied to obtain tea bud images with smooth edges. Four semantic segmentation algorithms, U-Net, high-resolution network (HRNet_W18), fast semantic segmentation network (Fast-SCNN), and Deeplabv3+, were selected for processing the images. The centroid of each tea bud and the center of its minimum enclosing rectangle were calculated. Along the tea stalk orientation, the point farthest from the centroid was extracted as the final picking point for the tea bud. The experimental results showed that the mean intersection over union (mIoU) of HRNet_W18 was 0.81, and with a 3 × 3 median filter kernel the proportion of abnormal tea buds was only 11.6%. The average prediction accuracy of picking points across different tea stalk orientations was 57%. This study proposed a fresh tea bud segmentation and picking point location method based on a high-resolution network model. In addition, a cloud platform can be used for data sharing and real-time calculation of tea bud coordinates, reducing the computational burden on picking robots.
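A minimal sketch of the picking-point rule (centroid of the segmented bud, then the farthest bud pixel from it), on a hand-made binary mask; a real mask would come from the segmentation network, and the stalk-orientation constraint is omitted here for brevity:

```python
import numpy as np

# Toy binary mask of a segmented tea bud (1 = bud pixel); hand-made example
mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:8, 4:6] = 1

ys, xs = np.nonzero(mask)                      # coordinates of bud pixels
cy, cx = ys.mean(), xs.mean()                  # bud centroid
d = np.hypot(ys - cy, xs - cx)                 # distance of each pixel to it
pick_y, pick_x = int(ys[d.argmax()]), int(xs[d.argmax()])
print("centroid:", (cy, cx), "picking point:", (pick_y, pick_x))
```

In the paper the farthest point is constrained to lie along the stalk direction, which disambiguates between the two ends of an elongated bud; without that constraint this sketch may pick either end.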
Maturity is a key attribute for evaluating the quality and acceptability of fruit products. In this study, an impact method was used for nondestructive measurement of kiwifruit maturity. Each fruit was dropped vertically onto an impact plate, and an accelerometer was used to measure the response signal. Fruit firmness, soluble solids content (SSC), titratable acidity (TA), and sensory scores were then measured to determine kiwifruit maturity, and different modeling methods were applied for data analysis. The results showed that the best predictions for both quantitative and qualitative analysis were obtained with the principal component analysis–back-propagation neural network (PCA-BPNN) method. The optimized correlation coefficients between predicted and actual values (rp) and root mean square errors of prediction (RMSEP) for firmness, SSC, TA, and sensory score were 0.881 (2.359 N), 0.641 (1.511 °Brix), 0.568 (0.023%), and 0.935 (0.693), respectively. The optimized discrimination accuracy for immature, mature, and overmature kiwifruits was 94.2% for calibration and 92.1% for validation. These results indicate that the proposed impact method is feasible for kiwifruit maturity evaluation.
UAV aerial photography of a tea garden may be limited by flight height and camera resolution. Tea garden images also contain trees and weeds whose vegetation information is similar to that of tea trees, which hampers tea tree extraction for further agricultural analysis. To obtain a high-definition, large field-of-view tea garden image containing tea tree targets, this paper (1) searches for the seam line using a graph-cut method within the image stitching pipeline; (2) improves the energy function to realize image stitching of the tea garden; and (3) builds a feature vector to accurately extract tea tree vegetation information and remove distracting elements such as trees and weeds. Compared with manual extraction, the proposed algorithm effectively distinguishes and eliminates most of the interference. The IoU in a single mosaic image was more than 80%, and the omission rate was 10%. Extraction accuracies for single images ranged from 84.91% to 93.82% at the different flight heights (30 m, 60 m, and 100 m). Tea tree extraction accuracies in the mosaic images were 84.96% at a height of 30 m and 79.94% at a height of 60 m.
At present, Sichuan pepper is mainly picked by hand, which is inefficient and poses injury risks to workers. It is therefore necessary to develop an intelligent robot for picking Sichuan peppers, for which the key technology is accurate segmentation by machine vision. In this study, we first took images of Sichuan peppers (Hanyuan variety) in an orchard under various conditions of light intensity, cluster number, and occlusion by other elements such as leaves. Under these varying image conditions, we compared the ability of different technologies to segment the images, examining both traditional image segmentation methods (RGB color space, HSV color space, and the k-means clustering algorithm) and deep learning algorithms (the U-Net convolutional network, Pyramid Scene Parsing Network, and DeeplabV3+ convolutional network). After segmentation, we compared the effectiveness of each algorithm at identifying Sichuan peppers in the various types of image, using the Intersection over Union (IoU) and Mean Pixel Accuracy (MPA) indexes to measure success. The results showed that the U-Net algorithm was the most effective for single front-lit clusters without occlusion, with an IoU of 87.23% and an MPA of 95.95%; for multiple front-lit clusters without occlusion, its IoU was 76.52% and its MPA was 94.33%. Based on these results, we propose applicable segmentation methods for an intelligent Sichuan pepper-picking robot that can identify the fruit in images from various growing environments. The good recognition and segmentation accuracy for Sichuan peppers suggests that this method can provide technical support for the visual recognition of a pepper-picking robot in the field.
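The two evaluation metrics used above reduce to simple set operations on binary masks; a sketch with tiny hand-made prediction and ground-truth masks (all values invented):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union of two boolean masks."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def mean_pixel_accuracy(pred, gt):
    """Average of per-class pixel accuracies (binary case: background, pepper)."""
    return float(np.mean([(pred[gt == c] == c).mean() for c in (False, True)]))

# Tiny hand-made masks: ground truth is a 2x2 square, the prediction
# over-segments it by one extra column
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True

print("IoU:", iou(pred, gt), "MPA:", mean_pixel_accuracy(pred, gt))
```

For multi-class segmentation the same definitions apply per class, with IoU averaged over classes (mIoU) and MPA averaging the per-class pixel accuracies.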