Abstract

In the context of rural revitalization, scientifically evaluating the performance of poverty alleviation is imperative: it not only promotes the removal of poverty-stricken areas from the poverty list and their sustainable development but also provides a comprehensive assessment of the achievements of targeted poverty alleviation work. In this work, rough set and Support Vector Machine- (SVM-) related concepts are first introduced to establish the required index system. Long-Range Radio (LoRa) wireless communication technology is adopted to collect the relevant data, rough set is utilized to preprocess the data, and the importance and relative weight of each indicator are calculated. After the redundant indexes are eliminated, a new decision table is established, and an SVM prediction model is built. For parameter optimization, Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and grid search are employed. Experimental data show that the minimum error of GA is 1.82%, the test error is 5.11%, and the training error is 3.18%; the minimum error of PSO is 1.86%, the test error is 5.62%, and the training error is 3.15%; the minimum error of the grid search method is 2.11%, the test error is 10.73%, and the training error is 2.34%. All three algorithms can optimize the SVM parameters and effectively improve the model's evaluation of targeted poverty alleviation performance. A comparison with an SVM model built without rough set preprocessing shows that the rough set improves the prediction accuracy of the model. In terms of the cross-validation error rate and the prediction mean square error, GA outperforms the other two algorithms. The established model compensates for the strong subjectivity and limited general applicability of typical performance evaluation, enriches the performance evaluation methods of targeted poverty alleviation, and has practical reference value for the performance evaluation of targeted poverty alleviation.

1. Introduction

Poverty alleviation has been an important task since the founding of the People's Republic of China and will remain so in the future [1]. As the characteristics of poverty differ across periods, it is necessary to carry out targeted poverty alleviation. Poverty alleviation in China has gone through several stages: relief-based poverty alleviation, system reform, development-oriented poverty alleviation, comprehensive poverty alleviation, and targeted poverty alleviation [2]. At present, the battle against poverty has reached a decisive moment, and whether poverty-stricken areas can shake off poverty and achieve sustainable economic development is a critical issue. Against this background, it is necessary to establish the task orientation of targeted poverty alleviation in a timely manner, clarify the situation, adopt new perspectives, and use more efficient methods to analyze and evaluate the performance of targeted poverty alleviation [3]. Performance appraisal is not only the supervision of poverty alleviation work and achievements but also a summary of poverty alleviation experience, providing a basis for promoting the sustainable development of poverty alleviation work under the new development background. The performance evaluation system of targeted poverty alleviation is therefore an issue worth exploring. Reasonable and efficient performance evaluation has high reference value for solving practical problems, and it helps promote the reform and innovation of the poverty alleviation system [4].

Zhu et al. quantitatively analyzed the effects of national poverty alleviation policies in the context of targeted poverty alleviation by applying breakpoint regression and panel regression to panel data of 9 national-level poverty-stricken counties (districts) in the Three Gorges Reservoir Area of Chongqing from 1998 to 2015. The results showed that the national targeted poverty alleviation policy significantly promotes poverty reduction, and that market economic activity and the total output value of agricultural products significantly increase the net income of farmers in poor areas [5]. Liu and Wang elucidated the framework linking land engineering and rural poverty alleviation and illustrated the contribution of land engineering technology and newly invented land use methods to poverty alleviation [6]. Zou et al. introduced the unsustainable situation of poor rural areas and the implications and advantages of poverty alleviation policies, and then analyzed the whole process of poverty alleviation policy, including formulation, implementation, and completion. It was found that the whole-village relocation model was successful in improving the living environment, income, and public services of local villagers [7]. Li et al. proposed a decision-making trial method that can analyze the relationships among criteria and identify the key factors affecting the evaluation system [8]. Fan and Cho noted that the international community has committed itself to eliminating poverty in all its forms around the world. While global poverty is gradually being reduced, progress is uneven across regions, so research is vital to ensure that no region is left behind in the fight against poverty, and it is valuable to evaluate international experience and agriculturally focused approaches to poverty alleviation. Even before poverty is substantially reduced through sectoral and regional development, social safety nets must be established to cover those who have not yet benefited from growth and development [9]. Hidayat and Sutarjo adopted qualitative methods, including literature research, interviews, and observation, to collect and analyze poverty reduction plans. It was found that participation in poverty reduction through Corporate Social Responsibility (CSR) projects was not ideal, because most companies carried out CSR projects directly for target groups without coordinating with the government. Factors affecting enterprises' participation in coordinated poverty alleviation include poor leadership, lack of communication between government and company leaders, and poor education [10]. The theories or methods of the above experts and scholars are summarized in Table 1.

To sum up, from the perspective of rural revitalization, Long-Range Radio (LoRa) wireless communication technology is used to collect experimental data after the required indicator system is determined. Rough set is used to preprocess the data, and then the SVM prediction model is established. Poverty alleviation is a protracted campaign; the adoption of a targeted poverty alleviation evaluation method is more suitable for scientific evaluation in the new era, and the establishment of an effective evaluation model makes it easier to grasp the process and effect of poverty alleviation. Therefore, this work uses new indicators and methods to establish a rough-set-optimized SVM model for targeted poverty alleviation, aiming to improve the efficiency of performance evaluation of targeted poverty alleviation, enrich the evaluation methods, and provide research ideas for other scholars. The organizational structure of this work is shown in Figure 1.

2. Methods

2.1. Construction of Precision Poverty Alleviation Performance Evaluation Model Based on SVM

Rough set theory can reason about information, uncover hidden data, and reveal the underlying laws in the information [11]. It is a mathematical tool of soft computing that can effectively classify and process fuzzy data [12]. In plain terms, it can analyze incomplete and uncertain information, and even data containing contradictory information. Its basic idea is knowledge reduction based on classification: while keeping the classification ability consistent, it uses the concepts and decision rules formed by classifying the knowledge base. With the equivalence classification relation in a specific space, it can keep the basic concepts and eliminate redundant secondary indexes, thereby achieving attribute reduction by approximation [13–15]. The object that rough set theory operates on is the decision table, from which redundant information can be removed purely by observing the data, without any prior experience beyond the observed data. Therefore, rough set theory can make better use of rough information when solving practical problems [16]. The data of 14 indexes of 23 counties in China are analyzed, and the 14 indexes are Gross Domestic Product (GDP), total retail sales of social consumer goods, rural residents' disposable income, per capita savings deposit balance, on-the-job worker average wage, number of people relocated, number of students in primary and secondary schools, number of beds in medical and health institutions, number of social welfare units, rural employment rate, capacity of social welfare units, number of villages lifted out of poverty, number of people lifted out of poverty, and ecological afforestation.

SVM outperforms traditional statistical methods and achieves good results even on nonlinear problems. With a limited number of samples, it can avoid the curse of dimensionality and the occurrence of local minima [17]. The earliest SVM was a learning machine for solving binary classification problems [18]. Figure 2 shows the classification of SVM.

In Figure 2, there are two kinds of sample points. The cubes and the cylinders are labeled $-1$ and $+1$, respectively. The line through the cylinders is $l_1$, and the line through the cubes is $l_2$. The line lying between $l_1$ and $l_2$ is denoted as $l$; this is the separating hyperplane, and $l_1$ and $l_2$ are its upper and lower margin lines.

The distance between $l_1$ and $l_2$ is called the separation distance. Maximizing the separation distance can be replaced by minimizing $\frac{1}{2}\|w\|^{2}$. It can be seen that when $\|w\|=0$ the objective takes its smallest value, but this would mean that the distance between the two straight lines $l_1$ and $l_2$ is infinite and the sample points labeled $-1$ and $+1$ are not separated but clustered between $l_1$ and $l_2$. Therefore, the constraint that every sample is classified correctly with a functional margin of at least 1 must be satisfied at the same time, and the following optimization problem is obtained:

$$\min_{w,b}\ \frac{1}{2}\|w\|^{2}\quad \text{s.t.}\ y_{i}\left(w\cdot x_{i}+b\right)\geq 1,\ i=1,2,\dots,n$$

In the above equation, s.t. is short for subject to (such that). The Lagrange multipliers $\alpha_{i}\geq 0$ are introduced to construct the Lagrangian function, and the constrained problem is converted into an unconstrained optimization that can be solved. The optimal solution $\alpha^{*}=\left(\alpha_{1}^{*},\dots,\alpha_{n}^{*}\right)$ satisfies the optimality conditions; the support vectors are the samples $x_{i}$ whose multipliers $\alpha_{i}^{*}$ are not zero, and these sample points fall on the boundary lines $l_1$ and $l_2$. By maximizing the dual objective function, the decision function is obtained as follows:

$$f(x)=\operatorname{sgn}\left(\sum_{i=1}^{n}\alpha_{i}^{*}y_{i}\left(x_{i}\cdot x\right)+b^{*}\right)$$

If the classification information is linear but there are indistinguishable outliers, slack variables $\xi_{i}\geq 0$ can be introduced so that neither the classification error nor the margin loss becomes too large. The empirical risk is expressed as $\sum_{i=1}^{n}\xi_{i}$. Based on structural risk minimization, the penalty parameter $C$ is added to balance the maximum margin against the minimum misclassification error, and the classifier's soft-margin optimization problem becomes

$$\min_{w,b,\xi}\ \frac{1}{2}\|w\|^{2}+C\sum_{i=1}^{n}\xi_{i}\quad \text{s.t.}\ y_{i}\left(w\cdot x_{i}+b\right)\geq 1-\xi_{i},\ \xi_{i}\geq 0,\ i=1,2,\dots,n$$

Nonlinear transformations can be used to solve nonlinear classification problems. SVM uses the kernel function as the mapping function and maps vectors in the original input space into a high-dimensional feature space, so that the nonlinear case is transformed into a linear one [19]. Figure 3 shows the kernel function mapping.

As shown in Figure 3, the choice of the kernel function should be related to the amount of information and the research problem; different choices lead to different experimental results. After the kernel function is selected, the inner product $\left(x_{i}\cdot x_{j}\right)$ is replaced with the kernel function value $K\left(x_{i},x_{j}\right)$, and the final classification function is obtained as follows:

$$f(x)=\operatorname{sgn}\left(\sum_{i=1}^{n}\alpha_{i}^{*}y_{i}K\left(x_{i},x\right)+b^{*}\right)$$
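To make the kernelized decision function concrete, the following minimal sketch (not from the paper) fits an RBF-kernel classifier with scikit-learn, which wraps LIBSVM, and reproduces the decision values manually from the stored support vectors and coefficients; the data and parameter values are synthetic placeholders.

# Minimal sketch: reproducing f(x) = sgn(sum_i alpha_i* y_i K(x_i, x) + b*)
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)   # a nonlinearly separable toy problem

gamma = 0.5
clf = SVC(kernel="rbf", C=1.0, gamma=gamma).fit(X, y)

X_new = rng.normal(size=(5, 2))
# Manual evaluation: dual_coef_ stores alpha_i* y_i for the support vectors.
K = rbf_kernel(X_new, clf.support_vectors_, gamma=gamma)        # K(x, x_i)
manual = K @ clf.dual_coef_.ravel() + clf.intercept_            # sum_i alpha_i* y_i K(x_i, x) + b*
print(np.allclose(manual, clf.decision_function(X_new)))        # True
print(np.sign(manual))                                          # signs correspond to the {-1, +1} labels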

Support vector regression is solved in a way analogous to support vector classification. In contrast to the maximum-margin idea of classification, when all the sample points fall between the two boundary lines, the problem can be approximately understood as linear regression [20]. Figure 4 shows the support vector regression machine.

In data fitting, it cannot be guaranteed that all sample points fall on the boundary lines. The allowed gap is set as $\varepsilon$; it is the maximum gap between $l_1$ and $l_2$, namely, the maximum distance between the hyperplanes. Only when $\left|y_{i}-f\left(x_{i}\right)\right|>\varepsilon$ does a deviation contribute to the overall error, so the loss function is defined as $L_{\varepsilon}\left(y,f(x)\right)=\max\left(0,\left|y-f(x)\right|-\varepsilon\right)$. To reduce the error caused by outliers, the relaxation variables $\xi_{i}$ and $\xi_{i}^{*}$ are introduced and the penalty factor $C$ is added. Then, support vector regression is expressed as below.

$$\min_{w,b,\xi,\xi^{*}}\ \frac{1}{2}\|w\|^{2}+C\sum_{i=1}^{n}\left(\xi_{i}+\xi_{i}^{*}\right)\quad \text{s.t.}\ \begin{cases}y_{i}-\left(w\cdot x_{i}+b\right)\leq \varepsilon+\xi_{i}\\ \left(w\cdot x_{i}+b\right)-y_{i}\leq \varepsilon+\xi_{i}^{*}\\ \xi_{i},\ \xi_{i}^{*}\geq 0\end{cases}\tag{7}$$

In equation (7), s.t. is short for subject to (such that). Similar to classification, the Lagrange multipliers $\alpha_{i}$ and $\alpha_{i}^{*}$ are introduced to construct the Lagrange function. The linear regression function is obtained as shown in

$$f(x)=\sum_{i=1}^{n}\left(\alpha_{i}-\alpha_{i}^{*}\right)\left(x_{i}\cdot x\right)+b\tag{9}$$

In equation (9), $f(x)$ represents the model output for a training set sample, $x_{i}$ represents a training set sample, $b$ represents the intercept, and $\alpha_{i}$, $\alpha_{i}^{*}$ are the parameters. The calculation method of the nonlinear regression function is shown in

$$f(x)=\sum_{i=1}^{n}\left(\alpha_{i}-\alpha_{i}^{*}\right)K\left(x_{i},x\right)+b\tag{10}$$

In equation (10), $K\left(x_{i},x\right)$ represents the kernel function, $\left(\alpha_{i}-\alpha_{i}^{*}\right)$ represents the corresponding coefficient, and the remaining letters have the same meanings as in the above equation.
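As a concrete illustration of equations (7)–(10), the following minimal sketch (not the paper's implementation) fits an ε-insensitive support vector regression with scikit-learn; the sample sizes, feature counts, and parameter values are placeholders only.

# Minimal epsilon-SVR sketch with synthetic data (illustrative only).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(23, 10))          # e.g., 23 regions x 10 reduced indexes (placeholder)
y = X @ rng.uniform(0.5, 1.5, size=10) + rng.normal(0, 0.05, size=23)  # synthetic scores

X_scaled = MinMaxScaler().fit_transform(X)    # normalize indexes to [0, 1]

# C is the penalty factor, gamma the RBF kernel width, epsilon the insensitive-tube half-width.
model = SVR(kernel="rbf", C=10.0, gamma=0.1, epsilon=0.01).fit(X_scaled, y)
print(model.predict(X_scaled[:3]))            # predicted performance scores for three samples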

The specific model construction process is shown in Figure 5.

Figure 5 shows that the performance evaluation data are handled first: the problem data are discretized, the initial decision table is established, and the weight of each index is calculated. On this basis, a series of optimization and upgrading steps are carried out. In a word, the overall idea of this model is to reduce the information through rough set, build a new decision table, and then let SVM learn from and predict the data. In response to relevant policy requirements, targeted poverty alleviation has invested all kinds of resources into all walks of life. Various poverty alleviation projects provide reserve forces for poverty alleviation in poverty-stricken households and areas and produce good social effects. The most intuitive result of targeted poverty alleviation is the antipoverty outcome, followed by the objective impact of targeted poverty alleviation in all aspects. The model indicators are constructed from four aspects, namely, the economic level, the social level, the antipoverty effect, and the ecological situation. Table 2 shows the specific evaluation indicators.

Equal-frequency discretization is adopted to process the performance evaluation indexes of targeted poverty alleviation. Through discretization, the information of each performance evaluation index can be evenly distributed. An initial decision table is established for the discretized performance evaluation index data. The decision table contains the sample set $U$, the condition attributes $C$, and the performance decision values $D$, that is, $T=\left(U,C\cup D\right)$. Based on the established decision table, rough set is introduced to optimize the attribute importance reduction algorithm. The degree of dependence is calculated through the positive region; it represents the degree of dependence of $D$ on $C$, and the expression is given as follows:

$$\gamma_{C}(D)=\frac{\left|\operatorname{POS}_{C}(D)\right|}{|U|}$$

The importance of a conditional attribute $a\in C$ is calculated by combining the degree of dependence:

$$\sigma(a)=\gamma_{C}(D)-\gamma_{C\setminus\{a\}}(D)$$

The weight of each index after importance reduction is calculated with

$$w_{a}=\frac{\sigma(a)}{\sum_{b\in C}\sigma(b)}$$

By calculating the importance of each attribute, indexes whose importance is empty (zero) can be dropped. The comprehensive score can be calculated according to the weights, and a new decision table can be established based on the reduced indicators, the sample set of poor areas, and the performance evaluation decision values. A minimal computational sketch of this reduction step is given below.
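The sketch below is illustrative only: it uses hypothetical attribute names and a tiny hand-made decision table, not the paper's data, to compute the dependency, importance, and weight defined above.

# Illustrative rough-set reduction sketch: dependency gamma_C(D), importance sigma(a), weight w_a.
from collections import defaultdict

# rows: (condition attribute values, decision value) after discretization (hypothetical)
table = [
    ({"GDP": 1, "income": 2, "beds": 1}, 1),
    ({"GDP": 1, "income": 2, "beds": 2}, 1),
    ({"GDP": 2, "income": 1, "beds": 1}, 0),
    ({"GDP": 2, "income": 1, "beds": 2}, 0),
    ({"GDP": 1, "income": 1, "beds": 1}, 1),
]

def dependency(attrs):
    """gamma_attrs(D): fraction of samples whose equivalence class is decision-consistent."""
    classes = defaultdict(set)
    for cond, dec in table:
        classes[tuple(cond[a] for a in attrs)].add(dec)
    pos = sum(1 for cond, _ in table
              if len(classes[tuple(cond[a] for a in attrs)]) == 1)
    return pos / len(table)

all_attrs = ["GDP", "income", "beds"]
gamma_full = dependency(all_attrs)
importance = {a: gamma_full - dependency([b for b in all_attrs if b != a]) for a in all_attrs}
total = sum(importance.values()) or 1.0
weights = {a: importance[a] / total for a in all_attrs}
print(importance)   # attributes with importance 0 are redundant and can be dropped
print(weights)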

2.2. Precision Poverty Alleviation Performance Evaluation Model Based on Optimized SVM

The data of the new decision table are normalized. This is because the performance evaluation indicators of targeted poverty alleviation have different dimensions, and information of different magnitudes would cause large errors due to imbalance [21]. After normalization, the simulation speed of the model also becomes faster. The min-max normalization formula is used to map the information into the interval $[0,1]$; the expression is as follows:

$$x'=\frac{x-x_{\min}}{x_{\max}-x_{\min}}$$

In the above equation, $x$ represents the decision table information, and $x_{\min}$ and $x_{\max}$ are the minimum and maximum values of the corresponding index. Genetic Algorithm (GA) is used to optimize the parameters. The idea comes from biological evolution, that is, survival of the fittest and elimination of the unfit, leaving the best population. Through selection, crossover, and mutation, the initial population, namely, the solution space, evolves toward the optimal solution space through self-optimization [22]. Figure 6 is the flowchart of GA optimizing the SVM model.

In Figure 6, GA is simple and easy to understand, but its search speed is slow. The crossover operation is the most important step, which determines the performance of the entire algorithm to a large extent. The steps are basically the same for SVM parameter optimization and for the heuristic GA used in information reduction. A minimal sketch of GA-based parameter search is given below.
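The following sketch is illustrative only; the data, population size, generation count, and parameter bounds are assumptions rather than the paper's settings. It searches the SVR parameters C and gamma by minimizing the cross-validated mean squared error.

# Minimal GA sketch for searching the SVR penalty parameter C and RBF width gamma.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(23, 10))                     # placeholder data
y = X @ rng.uniform(size=10)

def fitness(ind):
    c, g = ind
    mse = -cross_val_score(SVR(kernel="rbf", C=c, gamma=g), X, y,
                           scoring="neg_mean_squared_error", cv=5).mean()
    return mse                                            # smaller is better

pop = rng.uniform([0.1, 0.001], [100.0, 1.0], size=(20, 2))   # initial population of (C, gamma)
for gen in range(50):                                     # e.g. 50 generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:10]]                # selection: keep the better half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(2) < 0.5, a, b)       # uniform crossover
        child *= rng.normal(1.0, 0.1, size=2)             # multiplicative mutation
        children.append(np.clip(child, [0.1, 0.001], [100.0, 1.0]))
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("best (C, gamma):", best)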

Particle Swarm Optimization (PSO) is also used to optimize the parameters. The idea is inspired by the foraging behavior of birds: birds fly toward the area where other birds are foraging, and during the process each bird keeps searching so as to approach the best position [23]. Figure 7 is the flowchart of the PSO algorithm optimizing the SVM model.

The PSO algorithm regards each candidate solution as a particle and the position of the particle as the required solution. Each particle searches the solution space according to its own historical best position and the best position found by the swarm until the optimal position is found [24]. A sketch of the velocity and position updates is given below.
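The following sketch (illustrative only) applies the standard PSO velocity and position updates to the same (C, gamma) search; it reuses the fitness function and the placeholder X, y from the GA sketch above, and the inertia and acceleration coefficients are example values chosen to match Table 3.

# Minimal PSO sketch for the same (C, gamma) search.
import numpy as np

rng = np.random.default_rng(3)
low, high = np.array([0.1, 0.001]), np.array([100.0, 1.0])
pos = rng.uniform(low, high, size=(20, 2))          # 20 particles, each a (C, gamma) pair
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

w, c1, c2 = 0.7, 1.5, 1.7                           # inertia, local and global acceleration
for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    vals = np.array([fitness(p) for p in pos])
    better = vals < pbest_val                       # update personal bests
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)]             # update the swarm best

print("best (C, gamma):", gbest)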

Grid search uses an exhaustive method to combine the parameters, and the combined results form a grid. Cross-validation is performed on each combination and evaluated to find the most accurate values, as in the sketch below.
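A minimal grid search sketch with scikit-learn's GridSearchCV follows; the parameter ranges are illustrative, and X and y are the placeholder data from the GA sketch above.

# Grid search sketch with tenfold cross-validation (illustrative parameter ranges).
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      scoring="neg_mean_squared_error", cv=10)
search.fit(X, y)
print(search.best_params_, -search.best_score_)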

2.3. Information Collection and Processing Based on LoRa Wireless Transmission Technology

LoRa wireless communication technology is characterized by a farther propagation distance than other wireless modes under the same power consumption, realizing both low power consumption and long distance. Its range is 3-5 times that of traditional radio frequency communication at the same power consumption [25–27]. LoRa has a propagation range of 2-5 km in urban areas and up to 15 km in suburban areas. These advantages of the communication technology are used to collect the experimental data. The information is collected through the wireless communication network built with LoRa communication technology (Figure 8).

Figure 8 shows that the scheme is divided into three parts. The data acquisition module collects information in combination with the data acquisition system and transmits it to the collector, which forwards it to the concentrator. The concentrators and dedicated concentrators transmit the data to the Ethernet via LoRa wireless communication, which offers long transmission distance and low power loss. When the A/D acquisition system collects specific information data, the system changes from receiving mode to sending mode, and the data signal is encoded and sent through the data transmission line to the antenna for over-the-air transmission. If there is signal input, the system switches back to receiving mode after transmitting; if there is no signal input, the system enters hibernation mode. A minimal collector-side sketch is given below.
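The following hypothetical sketch (not the paper's system) reads newline-terminated frames from a LoRa module attached over a serial port and appends them to a CSV file; the port name, baud rate, and frame format are assumptions for illustration only.

# Hypothetical collector-side reader using pyserial.
import serial  # pyserial

PORT, BAUD = "/dev/ttyUSB0", 9600   # assumed port and baud rate

with serial.Serial(PORT, BAUD, timeout=5) as radio, open("lora_samples.csv", "a") as out:
    while True:
        frame = radio.readline().decode("utf-8", errors="ignore").strip()
        if not frame:          # timeout with no data: the node may be hibernating
            continue
        # assumed frame format: "<node_id>,<index_code>,<value>"
        out.write(frame + "\n")
        out.flush()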

23 groups of data are collected, represented by the set $U=\left\{x_{1},x_{2},\dots,x_{23}\right\}$, which is the universe (argument domain) of the decision table. The condition attributes $C$ are selected from the performance evaluation indexes established above, the discretized data are used as the condition attribute values, and the decision value is $D$.

To sum up, rough set theory can perform information reasoning and explore the data implied in the information, and SVM can improve the calculation accuracy for the data samples. The operating logic of the GA and PSO algorithms is to continuously select better parameters during the experiment, while LoRa wireless communication technology relies on its own characteristics to achieve a longer propagation distance. These algorithms play an important role in analyzing the performance of targeted poverty alleviation under the wireless communication technology adopted in this work.

2.4. Key Algorithms to Be Adopted

(i)Grey Wolf Optimizer (GWO) is a swarm intelligence optimization algorithm proposed in 2014 by Mirjalili et al., scholars at Griffith University in Australia. It is an optimization search method inspired by the predatory behavior of grey wolves and is characterized by strong convergence performance, few parameters, and easy implementation. In recent years, it has received extensive attention and has been successfully applied to workshop scheduling, parameter optimization, and image classification. The principle of GWO is that grey wolves are pack-living canids at the top of the food chain that strictly adhere to a social dominance hierarchy. The specific form is shown in Figure 9, and a minimal sketch of the GWO position update is given after this list

The first level of social hierarchy: the head wolf in the pack is recorded as A, who is responsible for making decisions on activities such as predation, habitat, work, and rest time. Other wolves must obey the orders of wolf A, the dominant wolf. In addition, wolf A is not necessarily the strongest wolf in the pack, but must be the best in management ability.

The second level of social hierarchy: wolf B, which obeys wolf A and assists wolf A in making decisions. After wolf A dies or becomes old, wolf B will be a candidate for wolf A. Although wolf B obeys wolf A, it can dominate wolves at other social levels.

The third level of social hierarchy: wolves C, which obey wolf A and dominate the wolves of the remaining levels. Wolves C are generally composed of young wolves, sentinel wolves, hunting wolves, old wolves, and nursing wolves.

The fourth level of the social hierarchy: wolves D are usually required to obey the wolves of the other social levels. Wolves D seem to play a small role in the pack, but without them the pack would have internal problems such as cannibalism. (ii)Initialization algorithm: the Carnivorous Plant Algorithm (CPA) is a population-based optimization algorithm that starts by initializing a population of potential solutions to the problem. First, an individual population consisting of carnivorous plants and prey is randomly initialized in the wetland. The numbers of carnivorous plants and prey are denoted as NCPLANT and NPREY, respectively. The location of each individual is represented in the matrix as follows:

$$X=\begin{bmatrix}x_{1,1}&x_{1,2}&\cdots&x_{1,d}\\ x_{2,1}&x_{2,2}&\cdots&x_{2,d}\\ \vdots&\vdots&\ddots&\vdots\\ x_{n,1}&x_{n,2}&\cdots&x_{n,d}\end{bmatrix}$$

In the above equation, $d$ is the dimension and $n$ is the sum of NCPLANT and NPREY. The fitness of each individual is evaluated by substituting the individual into a predefined fitness function. The obtained fitness values are stored as

$$F=\begin{bmatrix}f\left(x_{1,1},x_{1,2},\dots,x_{1,d}\right)\\ f\left(x_{2,1},x_{2,2},\dots,x_{2,d}\right)\\ \vdots\\ f\left(x_{n,1},x_{n,2},\dots,x_{n,d}\right)\end{bmatrix}\tag{16}$$

Each individual represents a solution vector of the optimization problem, and the fitness value in equation (16) represents the quality of that particular solution vector. For the minimization case, the smaller the fitness value, the higher the quality of the solution vector. (iii)Plane optimization algorithm: the planar optimization algorithm is an important algorithm in graph theory. It refers to a computer-realizable method for judging whether a given graph is planar and, if it is, finding a plane embedding of it. The plane optimization algorithm satisfies the following theorem

Conditions permitting, equation (17) applies to each bridge $B$ of the partial plane graph $\tilde{G}$:

$$F(B)\neq \varnothing\tag{17}$$

In equation (17), $F(B)$ denotes the set of faces of the current embedding into which the bridge $B$ can be drawn, and $\tilde{G}$ denotes the graph (data set) under consideration.
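To make the GWO principle referenced in item (i) concrete, the following minimal sketch uses the standard textbook update rules and is illustrative only; the objective function, bounds, and population size are placeholders, not the paper's settings.

# Minimal Grey Wolf Optimizer sketch: wolves encircle the three best solutions
# (alpha, beta, delta) and average the resulting guided moves.
import numpy as np

def gwo(obj, dim, n_wolves=20, iters=100, low=-10.0, high=10.0, seed=0):
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(low, high, size=(n_wolves, dim))
    for t in range(iters):
        fit = np.array([obj(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fit)[:3]]   # the three pack leaders
        a = 2.0 - 2.0 * t / iters                          # control parameter decreases 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])          # distance to the leader
                new_pos += (leader - A * D) / 3.0           # average of the three guided moves
            wolves[i] = np.clip(new_pos, low, high)
    fit = np.array([obj(w) for w in wolves])
    return wolves[np.argmin(fit)]

print(gwo(lambda x: np.sum(x ** 2), dim=5))                 # should approach the zero vector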

2.5. Control-Related Parameters

Parameters for SVM are given as follows.

// Fields follow LIBSVM's svm_parameter structure; the concrete values were not
// given in the original listing and are therefore left blank here.
TestSVM_Parameter.svm_type = C_SVC;
          TestSVM_Parameter.kernel_type = ...;
          TestSVM_Parameter.degree = ...;
          TestSVM_Parameter.gamma = ...;
          TestSVM_Parameter.coef0 = ...;
          TestSVM_Parameter.cache_size = ...;
          TestSVM_Parameter.eps = ...;
          TestSVM_Parameter.C = ...;
          TestSVM_Parameter.nu = ...;
           TestSVM_Parameter.nr_weight = ...;
           TestSVM_Parameter.weight_label = ...;
           TestSVM_Parameter.weight = ...;
Meanings for parameters of SVM are defined as follows:
             Int svm_type: the type of SVM
                0: C_SVC: multiclass recognition and solution
                1: NU_SVC: multiclass recognition and solution
                2: ONE_CLASS: one-class distribution estimation
                3: EPSILON_SVR: regression analysis and solution
              Int kernel_type: type of kernel function

Among the above-mentioned kernel types, only two are commonly used: the linear kernel function and the Gaussian (RBF) kernel function. They often perform best and require relatively little parameter tuning. Specifically, two parameters need to be adjusted: C and gamma. C is the penalty coefficient, that is, the coefficient of the slack variable. It balances the complexity of the support vector machine against the misclassification rate in the optimization function and can be understood as a regularization coefficient. When C is larger, the loss term weighs more heavily, which means the model is reluctant to give up outliers that lie far away; there will then be more support vectors, the support-vector and hyperplane model becomes more complex, and it is easier to overfit. On the contrary, when C is small, those outliers are tolerated, fewer samples are selected as support vectors, and the resulting support-vector and hyperplane model is simpler. The other tuned parameter, gamma, belongs to the RBF kernel function and controls how far the influence of a single training sample reaches. When gamma is large, each sample influences only a small neighborhood, so the decision surface becomes more complex; when gamma is small, a single sample influences a wide region, the model is smoother, and fewer support vectors are needed. The sketch below illustrates the effect of these two parameters on model complexity.
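The following minimal sketch (synthetic data and example parameter values, not the paper's settings) shows how C and gamma change the number of support vectors of an RBF-kernel SVC.

# Illustrative sketch of how C and gamma affect model complexity.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)

for C, gamma in [(0.1, 0.1), (10.0, 0.1), (10.0, 10.0)]:
    clf = SVC(kernel="rbf", C=C, gamma=gamma).fit(X, y)
    # More support vectors generally indicate a more complex decision surface.
    print(f"C={C}, gamma={gamma}: {clf.n_support_.sum()} support vectors, "
          f"train accuracy={clf.score(X, y):.2f}")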

In addition, the specific specifications of the system are shown in Table 3.

In Table 3, the local search coefficient used in this experiment is 1.5, and the global search coefficient is 1.7. The maximum number of evolutions is 200, the initial particle swarm size is 20, and the search ranges of the parameters c and g are both 0~100. Hence, the performance of the system is good.

3. Results

3.1. Predictive Results of the Model

GWO, initialization algorithm, and plane optimization algorithm are employed to analyze the data before the prediction model is analyzed. The results are shown in Figure 10.

Figure 10 shows that GWO, initialization algorithm, and plane optimization algorithm all have good optimization effects on relevant research data. GWO eliminates about 19 duplicates from 150 data sets, with a final accuracy of 96.9%. The initialization algorithm removes about 21 duplicate data from 150 data, and the final accuracy is 94.3%. The plane optimization algorithm eliminates 30 duplicate data in 150 data, and the final accuracy reaches 94.1%. Obviously, GWO has the highest accuracy among the three algorithms.

Based on the rough set, the indexes in the system are not equally important. Rough set can reduce the redundant information in the data. Figure 11 shows the attribute reduction structure and weight calculation results.

As given in Figure 11, the number of indexes is greatly reduced; rough set removes the redundant information, leaving the relevant important indexes and reducing the computational complexity of SVM. The optimal parameters of the simulation are found through GA. The search ranges of c and g are set, the maximum number of generations is 200, the maximum population size is 20, and tenfold cross-validation is applied to the training set. The minimum error is 1.82%, the test error is 5.11%, the training error is 3.18%, and the corresponding optimal parameter pair (c, g) is obtained. However, the test error rate must also be considered, and the model with errors of 5.11% and 3.18% is selected to minimize the risk. Figure 12 shows the fitness curve of GA.

Figure 12 shows that the smaller the fitness value, the better the fitness; the average fitness value is close to the optimal fitness value, indicating that each parameter solution in the population is near the optimal solution. In addition, the population converges quickly. Figure 13 shows the comparison of the GA-SVM model prediction results.

Figure 13 intuitively reflects the difference between each predicted value and the actual value. In the training set, the performance predictions for the five regions vary widely. In the test set, the predicted values are overall consistent with the actual values, which is interpreted as better learning ability and generalization ability of the model.

The optimal parameters of the simulation are found through PSO. The search ranges of c and g are set, the maximum number of generations is 200, the initial particle population is 20, the local search coefficient is 1.4, and the global search coefficient is 1.6. The minimum error is 1.86%, the test error is 5.62%, the training error is 3.15%, and the corresponding optimal parameter pair (c, g) is obtained. Figure 14 shows the fitness curve of PSO.

As illustrated in Figure 14, a stable fitness value is maintained when the number of iterations is smaller than 145. When the number of iterations is 145-180, the fitness decreases. After the number of iterations reaches 180, the optimal parameter fitness value remains unchanged, and the convergence speed is slow. Figure 15 shows the comparison of the prediction results of the PSO-SVM model.

In Figure 15, the performance prediction values of the five regions in the training set are significantly different, which is consistent with the prediction results of the GA-SVM model. The sample performance errors in the remaining poor areas fluctuate slightly, and the trends of the two values remain consistent. It suggests that the PSO-SVM poverty alleviation performance evaluation model shows a better generalization ability.

The optimal parameters of the simulation are found through the grid search method. The search ranges of c and g are set. The minimum error is 2.11%, the test error is 10.73%, the training error is 2.34%, and the corresponding optimal parameter pair (c, g) is obtained. The large gap between the training and test errors shows that there may be overfitting, which is contrary to the principle of structural risk minimization. Figure 16 is a comparison of the prediction results of the grid-SVM model.

In Figure 16, only 4 samples from poor areas are consistent with the actual values, indicating that the grid-SVM model is not very effective in evaluating poverty alleviation performance; individual data predictions deviate obviously, and the overall predicted values are lower than the established values.

In summary, the GA-SVM model shows the best prediction effect on the targeted poverty alleviation performance evaluation score.

3.2. Comparative Analysis of Prediction Results after RS Optimization

Comparing the error rates of the RS-SVM model and the SVM model reveals the effect of rough set as a reduction system more clearly. Figure 17 shows the comparison of the prediction results of each model optimized by rough set.

Figure 17 shows that the established SVM prediction model can show better generalization ability after using rough set as the prereduction system. In addition, the RS-GA-SVM model performs best in the performance evaluation model of targeted poverty alleviation. The established model expression is given as follows.

There is no need to reexamine the data for a more accurate forecast. The results show that the model has certain practicability and universality in the performance prediction of targeted poverty alleviation and can achieve better performance prediction and evaluation.

4. Conclusions

Targeted poverty alleviation has become one of the means of rural revitalization and development, and how to effectively evaluate the performance of targeted poverty alleviation has become one of the issues of widespread concern to the government. Using wireless communication technology, the following conclusions are drawn. The smaller the fitness value is, the better the fitness is, and the average fitness value is close to the optimal fitness value, indicating that each parameter solution in the population is around the optimal solution, and the population convergence rate is fast. After comparisons of the prediction results and prediction errors, it is found that the GA-SVM model has the best prediction effect on the performance evaluation score of targeted poverty alleviation. Using rough set as the prereduction system, the SVM prediction model can show better generalization ability. Moreover, the RS-GA-SVM model has the best performance in the performance evaluation model of targeted poverty alleviation.

Under the background that targeted poverty alleviation has entered the decisive stage and poverty-stricken counties are being removed from the poverty list, research on the performance evaluation of poverty alleviation has practical significance. Although some attempts have been made on the selection of indicators and evaluation methods of targeted poverty alleviation performance research, many problems still need to be explored due to the pertinence and complexity of targeted poverty alleviation. Specifically, although several criterion layers are considered and scholars' studies are used for reference in the construction of indicators, successful targeted poverty alleviation projects and other aspects are not involved due to the limitations of research data. In addition, the decision attributes should be further combined with actual measurement, which should be the object of future work. The research on the performance of targeted poverty alleviation is complicated. However, due to the limitations of the research level and data collection, the research on establishing the targeted poverty alleviation performance index model and evaluation method in this paper is rather preliminary. In the future, it is necessary to further improve the data and refine the model system.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.