Abstract
With the rise and development of the concepts of precision agriculture and smart agriculture, traditional agricultural pest detection and identification methods have become increasingly unable to meet current agricultural production requirements due to their slow recognition speed, low recognition accuracy, and strong subjectivity. This article combines multifeature fusion technology with sensors for crop pest detection and builds a crop pest detection service based on image recognition. In terms of image recognition, an image denoising method based on median filtering, an image preprocessing method based on the maximum between-class error method (Otsu), an image segmentation method based on super green features, a feature extraction method based on multiparameter features, and an M-SVM multiclass recognition algorithm based on the one-to-one elimination strategy and a fused kernel function are used to realize the identification and detection of six soybean leaf borers. The system uses the ARM920T series S3C2440 chip as the central processing unit. A multisensor module composed of temperature and humidity sensors and infrared sensors collects real-time information on the agricultural greenhouse; after the information is normalized, the central processing unit performs judgment processing and information fusion. Finally, the experimental data verify that the image recognition method used in this paper improves the recognition rate and effectiveness in the detection of soybean leaf moth pests by nearly 65%.
1. Introduction
With the rise and development of the concepts of precision agriculture and smart agriculture, traditional agricultural pest detection and identification methods have become increasingly unable to meet the needs of current agricultural production due to their slow recognition speed, low recognition accuracy, and strong subjectivity. Using information technology to assist agricultural production and applying image recognition technology to crop pest identification in place of traditional manual identification makes crop pest identification fast, accurate, and real-time, qualities that traditional methods do not have. Realizing intelligent identification and detection of crop pests through intelligent technology plays a very important role in reducing unnecessary pesticide spraying, protecting the balance of the ecosystem, ensuring the safe production of crops, and improving crop quality.
A pest is a general term for insects that are harmful to humans, that is, insects that adversely affect human existence. Whether an insect is beneficial or harmful is quite complex and often varies with time, place, and quantity. We tend to regard any insect that competes with us as a pest, when in fact insects are harmful to humans only when they reach a certain number. Herbivorous insects should not be treated as pests to be controlled if they are few in number, low in density, and have little or no impact on the crop at that time or over a period of time. On the contrary, precisely because of their small numbers, they provide food for natural enemies and keep those enemies in the habitat, increasing the complexity and stability of the ecosystem.
There are countless combinations of fusion models, and there is no single right way. The bottom line of fusion is to try to maintain a balance between "accuracy" and "diversity." For example, although combining different classifiers, different features, and different samples yields high diversity, when the amount of data is limited, each individual model cannot be fully trained and its accuracy is very low, resulting in poor performance of the final fused model.
Yang Xintin used whiteflies and thrips as pest detection samples to detect the types and quantity changes of cucumber pests, using the I component of the HSI color space and the b component of the L*a*b* color space to binarize the pest images and exploit the high contrast between the target and the background. A cucumber pest identification and classification algorithm based on Prewitt and Canny edge detection segmentation and the SVM algorithm was proposed, which achieved a high average recognition rate in pest detection and a high identification rate for the two kinds of cucumber pests; combining Internet of Things technology with image recognition technology, image segmentation, feature extraction, and image separation and recognition were used to realize the identification and detection of common vegetable pests, but the number of pest types was limited, the research was only small scale, and the research purpose was not clearly defined [1]. Su used locust images collected with a digital camera to compare and analyze the average RGB values of the locust area and the background and selected the super-green (super-G) absolute value method for grayscale binarization to separate the locust image from the background; finally, the principles of area fuzzy sets and maximum membership degree were used to realize a fuzzy recognition method for locusts. Aiming at rice blast and sheath blight, a Gaussian smoothing filter was used to denoise the images, saliency detection technology and the HSV color model were used to extract image feature parameters, and finally an RBF neural network with parameter tuning was used to recognize the two rice diseases, but the specific data relationships were not clearly reported [2]. Liu obtained images of four kinds of alfalfa leaf pests by manual cropping, used the K-median clustering algorithm and a linear discriminant analysis image segmentation method to achieve image segmentation, and finally used a convolutional neural network to complete feature extraction from the pest images and a support vector machine algorithm to realize pest identification and detection. Through automatic cropping of alfalfa pest images, effective pest images were obtained, an image segmentation method for complex backgrounds was then used to extract feature parameters, and finally a discriminative deep belief network and an exponential loss function were used to construct a pest identification and detection model; however, the experimental data were too small to effectively support the experimental results, and further improvement is needed [3]. In terms of image recognition, this paper adopts an image denoising method based on median filtering, an image preprocessing method based on the maximum between-class error method (Otsu), an image segmentation method based on super green features, and a feature extraction method based on multiple parameters; in addition, an M-SVM multiclass recognition algorithm based on the one-to-one elimination strategy and a fused kernel function realizes the recognition and detection of six kinds of soybean leaf borers.
This paper mainly studies the implementation technology of a crop pest detection service based on image recognition, through the analysis of image recognition preprocessing technology, image feature extraction technology, the SVM classification and recognition algorithm based on a fused kernel function, cloud platform development based on VMware, microservice development based on SpringBoot, and image processing and recognition methods based on Python-OpenCV; on this basis, the theory of a crop pest detection service based on image recognition is researched and implemented. Soybean crops are mainly taken as the experimental subject, and automatic identification and detection of six soybean leaf moth pests is conducted. Through the construction of the software and hardware environment, 120 soybean leaf pest images acquired by the image acquisition equipment are used to extract and analyze features from the three aspects of color, texture, and morphology; finally, the M-SVM algorithm based on the fusion of local and global kernel functions is built and, combined with the one-to-one elimination strategy for multicategory recognition, is experimentally verified, further proving that the crop pest detection service based on image recognition constructed in this paper can improve the effectiveness and recognition accuracy of soybean leaf moth pest detection by nearly 65%.
In this paper, through in-depth research on multifeature fusion and sensor technology, intelligent technology is used to realize the intelligent identification and detection of crop pests, which plays a very important role in ensuring the quality of crops. The ARM920T series S3C2440 chip is used as the central processing unit, and the real-time information of the agricultural greenhouse is collected through a multisensor module composed of a temperature and humidity sensor and an infrared sensor.
2. Crop Pest Detection Method Based on Image Recognition
2.1. Image Preprocessing Method
Crop pest detection services based on image recognition require the collected images to be easy to identify and analyze. When collecting pest image data, it is necessary to ensure that the contours of the pest area in the image are complete and that the image resolution is at least 3 million (300W) pixels, so as to ensure the validity and detectability of the collected crop pest images [4]. After the image data collection is completed, the image is preprocessed. Image preprocessing mainly performs denoising and segmentation on the collected crop pest images, which facilitates further image feature extraction. In terms of image denoising, an image denoising method based on median filtering is used, and for the segmentation of grayscale images, a binarization method based on the maximum between-class error method (Otsu) and super green features is used [5]:
(1) Image denoising method based on the weighted median fast filtering method: in the process of collecting, generating, and transmitting crop pest images, interference from various kinds of noise is unavoidable, and this affects both the image itself and the subsequent image processing. In order to eliminate the influence of image noise as much as possible, this paper uses an image denoising method based on median filtering [6]. The median filter is a nonlinear image smoothing method that can effectively eliminate impulse noise in the image data while protecting the edges of the target image. It is a neighborhood operation: the pixels of the neighborhood are sorted by gray level, and the middle value of the sorted sequence is taken as the output pixel value. Specifically, the median filtering algorithm combines the pixel currently to be processed and several pixels in its neighborhood into a template, sorts the pixels contained in the template from small to large, and finally takes the middle value in the template to replace the original pixel value [7].
(2) Image segmentation method based on the maximum between-class error method (Otsu) and super green features: image segmentation is an important part of image preprocessing, and its accuracy directly affects subsequent feature extraction and the accuracy of image recognition. In this paper, after denoising, the image is segmented using a binarized image segmentation method based on the maximum between-class error method (Otsu) and super green features to remove the crop background from the pest image. Separating the crop leaves from the pest image in this way is a prerequisite for better feature extraction [8, 9].
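As a concrete illustration of this preprocessing chain, the following Python-OpenCV sketch applies median filtering, computes the super green (2g − r − b) grayscale image, binarizes it with Otsu's method, and masks the original image. The file name, kernel size, and normalization choices are illustrative assumptions rather than the exact settings used in this paper.

```python
# Sketch of the preprocessing chain described above: median filtering,
# super green (2g - r - b) graying, Otsu binarization, and masking.
# File name, kernel size, and scaling are illustrative assumptions.
import cv2
import numpy as np

def preprocess_pest_image(path):
    bgr = cv2.imread(path)                       # original pest image (OpenCV uses BGR order)
    bgr = cv2.medianBlur(bgr, 3)                 # median filtering to suppress impulse noise

    b, g, r = cv2.split(bgr.astype(np.float32))
    exg = 2 * g - r - b                          # super green (excess green) feature
    exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Otsu automatically picks the threshold that maximizes the between-class variance.
    _, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    segmented = cv2.bitwise_and(bgr, bgr, mask=mask)   # keep only the segmented region
    return mask, segmented

if __name__ == "__main__":
    mask, segmented = preprocess_pest_image("leaf_roller_moth.jpg")
    cv2.imwrite("mask.png", mask)
    cv2.imwrite("segmented.png", segmented)
```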
2.2. Feature Extraction Method Based on Multiple Parameters
Compared with general machine learning methods, SVM has the following characteristics. First, traditional machine learning methods are based on the principle of empirical risk minimization, so their generalization ability is insufficient for a limited number of samples, whereas SVM, based on VC dimension theory and structural risk minimization, has good generalization ability and shows obvious advantages in solving small-sample problems. Second, in solving high-dimensional problems, neural networks easily fall into local extrema; the basic idea of SVM is to solve for the optimal classification surface by maximizing the classification interval, and the algorithm is finally transformed into a convex quadratic programming problem whose global optimal solution can be obtained. Third, SVM maps samples into a high-dimensional feature space by introducing a kernel function and a nonlinear transformation, and the complexity of the algorithm is related not to the dimension of the samples but only to the number of samples. Fourth, the support vector machine is similar in structure to a three-layer feedforward neural network: the number of hidden-layer nodes is determined by the support vectors, and by solving the convex quadratic programming problem, the number of hidden-layer nodes and the weight vector can be obtained at the same time.
Image feature extraction, as an important part of the image recognition process, converts visual image features into quantitative features. The selection and values of the feature parameters have a direct impact on the subsequent construction of the image recognition classifier [10]. In the feature extraction stage of the crop pest detection service based on image recognition, a feature extraction method with multiple feature parameters is used to extract features from the soybean leaf pest images obtained after image segmentation. The selected features mainly cover color, texture, and shape:
(1) M-SVM multiclass recognition algorithm based on the one-to-one elimination strategy and a fused kernel function: the crop pest detection service based on image recognition uses an SVM classifier to realize the classification and recognition of soybean leaf moth pests. As a nonlinear classifier, SVM does not require a large number of training samples when constructing a classification and recognition model. This paper uses an M-SVM algorithm that fuses a local kernel function and a global kernel function to construct the classifier. However, an SVM classifier is normally used to solve two-class problems, while the types of soybean leaf moth pests are diverse, so when using M-SVM to classify multiclass samples, a one-to-one elimination strategy (one against one with eliminating) is used to complete the classification and identification of soybean leaf borers [11, 12].
(2) M-SVM algorithm based on the fusion of a local kernel function and a global kernel function: the selection of the SVM kernel function has a significant impact on the performance of the SVM recognizer, especially for linearly inseparable data. From the analysis of the characteristics of local and global SVM kernel functions, it can be concluded that using a single type of kernel function limits the recognition accuracy and efficiency of the final image recognizer [13]. Therefore, combining the global kernel function, which has strong generalization ability but weak learning ability, with the local kernel function, which has strong learning ability but weak generalization ability, this paper fuses a local kernel function and a global kernel function when selecting the kernel function of the SVM algorithm. Specifically, a polynomial kernel function is adopted as the global kernel function and a radial basis kernel function as the local kernel function [14]. In this way, an M-SVM classification recognizer fusing the radial basis kernel function and the polynomial kernel function is constructed, and this classifier is used to classify and recognize soybean leaf pests so as to increase the accuracy and performance of image recognition.
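The sketch below shows one way such an M-SVM classifier could be realized in Python with scikit-learn: a callable kernel returns the weighted sum of an RBF (local) and a polynomial (global) Gram matrix, and SVC trains one binary SVM per class pair (one-vs-one); the paper's "one against one with eliminating" variant would only change how the pairwise decisions are combined. The kernel weights, gamma, degree, penalty parameter, and the placeholder feature data are assumptions for illustration, not the values used in this paper.

```python
# Sketch of an M-SVM classifier with a fused (local + global) kernel.
# scikit-learn's SVC accepts a callable kernel and internally trains one
# binary SVM per class pair (one-vs-one). All numeric settings below and
# the random "features" are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def fused_kernel(X, Y, lam=0.6, gamma=0.5, degree=2):
    """lam * RBF (local) + (1 - lam) * polynomial (global) kernel matrix."""
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Y ** 2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    k_rbf = np.exp(-gamma * sq_dists)        # local kernel
    k_poly = (X @ Y.T + 1.0) ** degree       # global kernel
    return lam * k_rbf + (1.0 - lam) * k_poly

# Placeholder data standing in for the color/texture/shape feature vectors
# of the 6 pest classes (the real features come from the extraction stage).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 12))
y = rng.integers(0, 6, size=120)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)
clf = SVC(kernel=fused_kernel, C=10.0, decision_function_shape="ovo")
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```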
3. Validity Experiment Based on Local Kernel Function and Global Kernel Function
The SVM algorithm is based on the linearly separable case and is developed from the optimal classification surface. A classification surface that conforms to the following two principles is called the optimal classification surface: first, the classification line is required to separate the two pattern categories without error; second, the classification interval between the two categories must be maximized [15, 16].
SVM searches for a hyperplane that meets the two requirements of the optimal classification surface. Assuming that the training data consist of n samples x_i to be classified, with attribution categories y_i ∈ {+1, −1}, the correctly classified samples satisfy
y_i(w · x_i + b) − 1 ≥ 0, i = 1, 2, …, n. (1)
The training data can be normalized with respect to a hyperplane. Using the hyperplane expression w · x + b = 0, the samples can be separated by class and the classification interval can be maximized. Maximizing the classification interval is equivalent to finding the minimum of formula (2) under the condition of formula (1):
Φ(w) = (1/2)‖w‖². (2)
The classification surface that minimizes (1/2)‖w‖² is therefore the optimal classification surface. Among the classification samples, those that are closest to the classification plane and lie on the hyperplanes parallel to the optimal classification plane are called support vectors [17, 18].
At this time, formula (2) is transformed into
Φ(w, ξ) = (1/2)‖w‖² + C Σ_{i=1}^{n} ξ_i. (3)
The optimal classification surface is defined for the linearly separable case; when the samples are not linearly separable, slack variables ξ_i need to be added, satisfying ξ_i ≥ 0 (i = 1, 2, 3, …, n).
In formula (3), C is a specified constant called the penalty parameter. The linearly inseparable problem is finally transformed into its dual problem: under the constraints Σ_{i=1}^{n} α_i y_i = 0 and 0 ≤ α_i ≤ C that follow from formulas (1) and (3), the Lagrange multipliers α_i are found by maximizing the dual objective, formula (4):
W(α) = Σ_{i=1}^{n} α_i − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j K(x_i, x_j). (4)
Finding the maximum of formula (4) amounts to finding the extreme value of a convex quadratic function under inequality constraints. Assuming that the coefficients α_i, w, and b correspond to the optimal solutions α_i′, w′, and b′, the optimal classification function obtained is formula (5) [19, 20]:
f(x) = sgn(Σ_{i=1}^{n} α_i′ y_i K(x_i, x) + b′), (5)
where sgn(·) is the sign function, b′ is the classification threshold, and K(x_i, x) is the kernel function.
Assume that χ is the input space (a Euclidean space or a discrete set) and Η is the feature space (i.e., a Hilbert space). If there is a mapping φ(x): χ ⟶ Η such that, for all x, z ∈ χ, formula (6) holds [21, 22],
K(x, z) = φ(x) · φ(z), (6)
then K(x, z) is the kernel function, φ(x) is the mapping function, and φ(x) · φ(z) is the inner product of x and z after they are mapped into the feature space [23].
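As a small numerical illustration of formula (6) (an example added here, not taken from the paper), the degree-2 polynomial kernel K(x, z) = (x · z + 1)² on two-dimensional inputs corresponds to an explicit six-dimensional mapping φ, and the two ways of computing the value agree:

```python
# Check that K(x, z) = (x . z + 1)^2 equals phi(x) . phi(z) for the explicit
# degree-2 polynomial feature map on 2-D inputs (formula (6) for this kernel).
import numpy as np

def phi(v):
    x1, x2 = v
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

x = np.array([0.3, -1.2])
z = np.array([2.0, 0.5])
lhs = (x @ z + 1.0) ** 2      # kernel value computed directly in the input space
rhs = phi(x) @ phi(z)         # inner product computed in the mapped feature space
assert np.isclose(lhs, rhs)
print(lhs, rhs)               # both equal 1.0 for these particular inputs
```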
The linear kernel function is a relatively simple kernel function, which can be regarded as a special case of the radial basis kernel function. The linear kernel function formula is as follows:
K(x, z) = x · z.
The radial basis kernel function is used as a local kernel function; when the input point is far from the center point, the value of the radial basis kernel function becomes small. The radial basis kernel function formula is as follows:
K(x, z) = exp(−‖x − z‖² / (2σ²)).
The polynomial kernel function is often used for problems in which the data have been orthonormalized, that is, the input variables are mutually orthogonal vectors with modulus 1 [24]. The polynomial kernel function formula is as follows:
K(x, z) = (x · z + 1)^d.
The polynomial kernel function is a kind of global kernel function: input variables that are far apart still affect the value of the kernel function. The larger the parameter d, the higher the dimensionality of the mapping, but the amount of calculation also increases; when d becomes too large, the high learning complexity causes performance problems [25, 26].
The sigmoid kernel function is derived from the neural network, and the sigmoid kernel function formula is as follows:
K(x, z) = tanh(v(x · z) + c).
Using the sigmoid function as the kernel function, the SVM can in fact be regarded as a neural network model with multilayer perceptron capability. In this case, the solution obtained with the sigmoid kernel function is a global optimum rather than a local optimum, which ensures that an SVM using the sigmoid kernel function has good generalization ability for unknown samples [27, 28].
Through the research and analysis of the local and global kernel functions of SVM, this paper adopts an SVM algorithm based on the fusion of a local kernel function and a global kernel function for image recognition and analysis, so that their complementary advantages improve the recognition accuracy and efficiency of the algorithm [29, 30].
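Because a weighted sum of positive semidefinite kernels with nonnegative weights is itself positive semidefinite, the fused kernel remains a valid kernel. The short NumPy check below illustrates this on random data using the 0.6/0.4 weighting of the radial basis and polynomial kernels reported later in the experiments; the gamma and degree values are illustrative assumptions.

```python
# Illustration that the fused kernel (0.6 * RBF + 0.4 * polynomial) is still
# a valid kernel: its Gram matrix on a random sample set stays positive
# semidefinite (minimum eigenvalue >= 0 up to round-off).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 8))

sq = np.sum(X ** 2, axis=1)[:, None] + np.sum(X ** 2, axis=1)[None, :] - 2 * X @ X.T
K_rbf = np.exp(-0.5 * sq)              # radial basis (local) kernel, gamma = 0.5
K_poly = (X @ X.T + 1.0) ** 2          # polynomial (global) kernel, degree d = 2
K_fused = 0.6 * K_rbf + 0.4 * K_poly   # weighted fusion of the two kernels

for name, K in [("rbf", K_rbf), ("poly", K_poly), ("fused", K_fused)]:
    print(name, "min eigenvalue:", np.linalg.eigvalsh(K).min())
```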
4. Image Recognition Experiment of Agricultural Pests Based on Multisensor Image Fusion Technology
This experiment performs image recognition of agricultural pests based on a multisensor system. The first part of the experiment introduces the design of the multisensor system; the second part applies multisensor technology to the image recognition and detection of pests and analyzes the pest image recognition results.
4.1. Integration of the Multisensor Protocol
This article uses the ARM series microcontroller STM32F103RCT6. This development board is chosen not only because the STM32 series has the characteristics of high performance, low power consumption, and small size, but also because it is cost-effective and has rich on-board resources. The network port debugging assistant is used to debug the interface parameters, send request data to the subcontrol nodes, and receive sensor data. After receiving the requested data, the subcontrol node transmits real-time monitoring data to the upper computer [31, 32].
Feature-level fusion refers to first extracting the data features from the original data and then fusing these features. Such fusion retains the information characteristics of the monitored target, and the extracted characteristic information, after compression, improves the efficiency of data processing; the various attributes of the target can also be obtained relatively easily, but the fusion accuracy is lower than that of data-level fusion. Decision-level fusion is a high-level form of information fusion: first, each sensor collects raw data; after data-level and feature-level fusion, the raw data yield the various attribute characteristics of the target; and then the acquired attribute characteristic data are fused according to certain fusion rules to obtain the final decision result [33, 34]. This fusion method has good real-time performance and strong fault tolerance, can be applied to different types of sensors, and has strong adaptability; however, its preprocessing is cumbersome and expensive, as shown in Table 1.
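To make the decision-level idea concrete, the following minimal Python sketch normalizes the readings of the temperature, humidity, and infrared sensors and combines them with fixed weights into a single decision; the normalization ranges, weights, and threshold are assumptions for illustration, since the paper does not list its actual fusion rules.

```python
# Minimal sketch of decision-level fusion for the greenhouse sensors.
# Normalization ranges, weights, and the alarm threshold are illustrative
# assumptions, not the rules used in this paper.
def normalize(value, lo, hi):
    """Min-max normalization of a raw sensor reading to [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def fuse_decisions(temp_c, humidity_pct, infrared_level):
    # Each sensor first yields a local "abnormal" score on [0, 1] ...
    scores = {
        "temperature": normalize(temp_c, 10.0, 45.0),
        "humidity":    1.0 - normalize(humidity_pct, 20.0, 90.0),
        "infrared":    normalize(infrared_level, 0.0, 1023.0),
    }
    # ... then the local scores are combined with fixed weights (decision-level fusion).
    weights = {"temperature": 0.4, "humidity": 0.3, "infrared": 0.3}
    fused = sum(weights[k] * scores[k] for k in scores)
    return fused, fused > 0.6   # fused score and a binary alarm decision

if __name__ == "__main__":
    score, alarm = fuse_decisions(temp_c=38.0, humidity_pct=30.0, infrared_level=700)
    print(round(score, 3), alarm)
```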
In the two-way data transmission between the network port and the serial port, the data transmission rate of the serial port is much lower than that of the network port, and the transmission is asynchronous. When the serial port sends data to the network port, there is generally no error code (unless the Ethernet is congested), so this section focuses on testing the data transmission from the network port to the serial port. Figure 1 shows the bit error rate under different transmission rates, with a data frame sent every 100 ms and statistics taken for every 10,000 pieces of data. Since the transmission rate of TCP/IP is much higher than that of RS232, RS485, and IIC, it is necessary to set the transmission rates of RS232, RS485, and IIC and to test the limits of these interfaces by reducing the transmission rate. The horizontal axis is the amount of data, and the vertical axis is the percentage of the data transmission rate. Figure 1 summarizes the data transmission rates of the three transmission methods.

As shown in Figure 1, when the transmission rate of the low-speed interface is reduced to 4800 bps, the bit error rates between the TCP/IP interface and the RS232 and RS485 interfaces are 0.33% and 0.31%, respectively, and the bit error rate between the TCP/IP interface and the IIC interface is 0.28%. The main reason is that when the difference in transmission rate between the two ends of the interface is too large, the buffer overflows and causes data transmission errors. This can be mitigated by increasing the storage space of the buffer and reducing the survival time of the data packets.
As shown in Figure 2, based on statistics of the average delay of every 10 transmissions out of 100 data transmissions between any two ports, the delay between different ports is relatively small, all between 15.5 and 17 ms, and the average delay between the ports varies little. The sensor protocols are integrated into each subcontrol node, and the data are encapsulated into a unified format, converted to the TCP/IP protocol, and sent to the main controller through the Wi-Fi module.
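As an illustration of this encapsulation step, the Python sketch below packs one subcontrol node's readings into a fixed binary frame and sends it to the main controller over a TCP socket; the frame layout (header byte, node id, three float readings, additive checksum), the host address, and the port are assumptions for illustration rather than the protocol actually used in this system.

```python
# Sketch of encapsulating a subcontrol node's sensor readings into a unified
# binary frame and sending it to the main controller over TCP/IP.
# Frame layout, host, and port are illustrative assumptions.
import socket
import struct

def build_frame(node_id, temp_c, humidity_pct, infrared):
    # header byte, node id, three float32 readings (big-endian)
    body = struct.pack(">BBfff", 0xAA, node_id, temp_c, humidity_pct, infrared)
    checksum = sum(body) & 0xFF                  # simple additive checksum
    return body + struct.pack(">B", checksum)

def send_frame(frame, host="192.168.1.100", port=9000):
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(frame)

if __name__ == "__main__":
    frame = build_frame(node_id=1, temp_c=26.5, humidity_pct=61.0, infrared=0.0)
    print(frame.hex())
    # send_frame(frame)   # uncomment on a network where the main controller is listening
```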

The data transmission delay between the subcontrol nodes and the main controller is also an important indicator of system performance. The fire monitoring Internet of Things in this article has three subcontrol nodes; the structure of the three subcontrol nodes and the sensors connected to them are the same, and the three identical subcontrol nodes are placed in the bedroom, kitchen, and living room, respectively [35].
As shown in Figure 3, the average time delay was obtained from statistics on every 500 pieces of data sent between the subcontrol nodes and the main controller, measured a total of 6 times. When only one subcontrol node is connected to the main controller, the delay of data transmission from the subcontrol node to the main controller is between 610 ms and 630 ms; when two subcontrol nodes are connected to the main controller at the same time, the data transmission delay of the two subcontrol nodes is between 650 ms and 670 ms; and when three subcontrol nodes are connected at the same time, the data transmission delay of the three subcontrol nodes is between 710 ms and 730 ms. The test results show that, with the wireless networking adopted by the home fire monitoring Internet of Things, the transmission delay is between 600 and 750 ms, which meets the real-time requirements for fire monitoring.

4.2. Feature Extraction and Image Recognition Data Results and Analysis
Image recognition may be based on the main features of an image. Each image has its own characteristics; for example, the letter A comes to a point, P contains a circle, and Y has an acute angle at its center. Research on eye movement in image recognition shows that the line of sight always concentrates on the main features of the image, that is, the places where the contour curvature is largest or the contour direction changes suddenly, and these places carry the greatest amount of information. Statistics is a very old science; it is generally believed that its academic study began in the time of Aristotle in ancient Greece, giving it a history of more than 2,300 years. It originated from the study of social and economic problems and, over more than two thousand years of development, has passed through at least three stages: "city-state political conditions," "political arithmetic," and "statistical analysis science." The experiment mainly realizes the image recognition and detection analysis of soybean leaf moth pests. The detected pest samples include six species, namely, adults of the bean leaf roller moth, bean leaf moth, flame leaf moth, blue ash butterfly, bean lycaenid butterfly, and alfalfa noctuid moth. Using the constructed software and hardware environment, 120 soybean leaf pest images acquired by the image acquisition equipment are processed with the image preprocessing methods used in this paper (median-filter denoising and image segmentation based on the maximum between-class error method (Otsu) and super green features), and the multiparameter feature extraction method is used to extract and analyze features from the three aspects of color, texture, and shape. Finally, the M-SVM algorithm based on the fusion of local and global kernel functions, combined with the one-to-one elimination strategy for multiclass recognition, is built for experimental verification, which proves that the crop pest detection service based on image recognition constructed in this paper is effective and highly accurate in detecting soybean leaf moth pests, as shown in Table 2. The experimental data in this paper are based on real data sources from the Internet and data obtained through on-site investigation, so they are reliable.
Using the above six categories of pests (shown in Table 2), an image is randomly selected for each category, the images are processed with the image preprocessing methods, and the preprocessing results are analyzed and verified; finally, 120 effective pest images are selected as samples, and the selected sample types and sample sizes are listed in Table 2. In the following, 20 adults of the bean leaf roller moth are taken as an example to analyze the experimental results of the image preprocessing used in this article under different environments. Using the image preprocessing program developed with Python-OpenCV, the original images of the collected adult leaf roller moths are compared with the images obtained after applying the image preprocessing method of this paper. Statistics is a branch of applied mathematics; it mainly uses probability theory to establish mathematical models, collects data from the observed system, conducts quantitative analysis and summarization, and makes inferences and predictions to provide a basis and reference for relevant decisions. Statistical significance is a basic statistical property of the data; in this paper, p < 0.05 is considered statistically significant.
As shown in Figure 4, images of these six types of pests (Figure 4(b)) are randomly selected and processed with the preprocessing method, several groups of photos are analyzed and tested, and finally 120 images are used as samples. Figure 4 shows the bean leaf moth images collected under different conditions that are used in the analysis.

As shown in Figure 5, when the original image of the adult bean leaf roller moth is segmented into the target image and the background image, the 2g-r-b method is first used to obtain the grayscale image and its histogram, and then the maximum between-class error method (Otsu) and the super green feature calculation method are used to complete the image segmentation operation.

Using the soybean leaf moth pest images obtained in the image preprocessing stage, the multiparameter feature extraction method is used to extract and analyze features from the three aspects of color, texture, and shape. For color and texture, the segmented image is used to complete the extraction of the color and texture features; for the morphological features, one pest image randomly selected from the six kinds of soybean leaf pests is taken as an example to illustrate the image feature extraction parameter items and their values, as shown in Table 3.
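As an illustration of the morphological side of this step, the OpenCV sketch below computes a few shape descriptors (area, perimeter, circularity, rectangularity, aspect ratio) from the binary mask produced by the preprocessing stage; these descriptors and the mask file name are assumptions for illustration, since the exact parameter items of Table 3 are not reproduced here.

```python
# Sketch of extracting common morphological (shape) features from the binary
# mask produced by the preprocessing stage. The descriptors below are typical
# choices used as assumptions, not the exact parameter items of Table 3.
import cv2
import numpy as np

def shape_features(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)       # largest region = pest target
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    x, y, w, h = cv2.boundingRect(c)
    return {
        "area": area,
        "perimeter": perimeter,
        "circularity": 4 * np.pi * area / (perimeter ** 2 + 1e-9),
        "rectangularity": area / (w * h + 1e-9),
        "aspect_ratio": w / (h + 1e-9),
    }

if __name__ == "__main__":
    mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # mask from the preprocessing step
    print(shape_features(mask))
```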
For the maximum between-class error method, binarization analysis was performed on the 2g-r-b grayscale image of the adult worm: the grayscale image is transformed into a binary image that separates the bean leaf moth target from the background, and this image is compared with the original image to check the recognition results.
As shown in Figure 6, the 2g-r-b grayscale image of the adult bean leaf roller moth was binarized using the method based on the maximum between-class error to obtain the grayscale binarization of the adult image. Using the binarized image of the adult worm (with a grayscale binarization threshold of 0.4) as a mask, the segmented image is obtained as follows. The image processed by the image preprocessing method is compared with the original object area, and the finally obtained image data can be used for feature extraction.

Using the feature parameters obtained after image preprocessing and image feature extraction, the M-SVM algorithm proposed in this paper, which fuses the radial basis (local) kernel function and the polynomial (global) kernel function, was combined with the one-to-one elimination strategy for multicategory recognition and verified by experiments, and the image recognition parameters and recognition results of the six soybean leaf borers were obtained, as shown in Table 4.
Through the above M-SVM algorithm based on the fusion of the radial basis kernel function and the polynomial kernel function, together with the one-to-one elimination strategy, the soybean leaf moth pest identification method is constructed. When the weight of the polynomial kernel function is 0.4 and the weight of the radial basis kernel function is 0.6, the recognition rate of the six soybean leaf borers is higher.
As shown in Figure 7, according to the characteristics of digital image data, the number of acquisition and quantization levels of a general digital image is limited, usually 256 levels. At the same time, in the collected sample images, the pixel values fluctuate strongly only in the transition areas where the target image and the background image cross (a change of about 40%); in most areas of the image the pixel change trend is small (within about 30%) and the gray values are close to each other. That is, each image has areas with similar gray levels, and such areas fall into the same gray-level region when the gray levels are quantized. It follows that the pixel changes of an image are correlated and that the grayscale changes of the image have regional characteristics. Based on this feature of the image, the weighted median fast filter algorithm is used to reduce the noise of the image.
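The sketch below illustrates the weighted-median idea in plain NumPy: each neighbor's gray value is repeated according to an integer weight before the median is taken, which preserves the center pixel and edges better than a plain median filter. The 3 × 3 center-weighted template is an illustrative assumption, and the paper's fast variant would additionally exploit the regional similarity of gray levels to avoid fully re-sorting the window at every pixel.

```python
# Sketch of a weighted median filter: each neighbor's gray value is repeated
# according to an integer weight before taking the median of the window.
# The 3x3 center-weighted template is an illustrative assumption.
import numpy as np

WEIGHTS = np.array([[1, 1, 1],
                    [1, 3, 1],
                    [1, 1, 1]])      # center pixel counted 3 times

def weighted_median_filter(img, weights=WEIGHTS):
    pad = weights.shape[0] // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + 2 * pad + 1, j:j + 2 * pad + 1]
            expanded = np.repeat(window.ravel(), weights.ravel())  # apply weights
            out[i, j] = np.median(expanded)
    return out

if __name__ == "__main__":
    noisy = np.random.default_rng(0).integers(0, 256, size=(32, 32)).astype(np.uint8)
    denoised = weighted_median_filter(noisy)
    print(denoised.shape, denoised.dtype)
```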

As shown in Figure 8, the image recognition processing function analyzes the uploaded image data and uses image recognition algorithms to perform recognition processing. Image recognition processing and analysis functions can be specifically divided into three subfunctions: image preprocessing, feature extraction and analysis, and image recognition. The image preprocessing subfunction realizes noise reduction and segmentation processing on the collected crop pest images, which is convenient for subsequent feature extraction of the image.

The feature extraction and analysis subfunction is mainly responsible for completing the feature extraction of the image data after image preprocessing, using feature extraction algorithms to form an image recognition feature library, which is used for image recognition and analysis as well as for the feature extraction of subsequently collected crop pest images to be identified. The image recognition subfunction uses image recognition algorithms to perform image preprocessing, feature extraction, and analysis on the collected agricultural pest pictures, trains the feature data with the classification and recognition training sample set provided by the image feature extraction module to construct the final image recognition classification model, identifies and analyzes the features extracted from images of unknown crop pests to be detected, and obtains the image recognition and analysis results. The image recognition result feedback processing module manually processes the image data that are incorrectly recognized or unrecognized, based on the image recognition results fed back by users, so that the feature library can be corrected and the accuracy of image recognition improved.
The crop pest detection service based on image recognition researched and constructed in this paper mainly allows the user to upload crop pest images taken by image acquisition devices (such as smartphones and digital cameras) to the client of the service; the server then uses the image recognition algorithm to process and analyze the uploaded image information, obtains the image recognition results of the crop pests, and provides an image recognition result feedback function.
5. Conclusions
This article uses the microservice design pattern, develops based on the SpringBoot framework, and deploys the service in a distributed manner on the VMware cloud platform. In terms of image recognition, the image denoising method based on median filtering, the image preprocessing method based on the maximum between-class error method (Otsu), the image segmentation method based on super green features, the feature extraction method based on multiparameter features, and the M-SVM multiclass recognition algorithm based on the one-to-one elimination strategy and a fused kernel function are used to realize the recognition of six soybean leaf borers; however, there are also some shortcomings that still need further research and improvement. In particular, in the M-SVM multiclass recognition algorithm based on the one-to-one elimination strategy and the fused kernel function, the method of fusing multiple kernel functions also needs to be continuously optimized. I hope that this scheme can be further optimized and perfected in follow-up research and exploration, and I hope to contribute my own strength to the issue of agricultural pest control in the future.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare no conflicts of interest in this study.
Acknowledgments
This work was supported by the 2021 Education Department of Guangdong Province Scientific Research Project: the Fiscal Revenue Forecast of the Country’s 14th Five-Year Plan in Guangzhou Based on Support Vector Regression (Grant number: 2021KTSCX164) and 2019 Scientific Research Project of South China Business College of Guangdong University of Foreign Studies: the Research about Image Recognition Method of Plant Based on Deep Learning (Grant number: 19-026B).