Abstract

The search range and search position of a particle are closely related to the particle's contraction-expansion factor and the value of the potential well center. As the number of iterations increases, the search gradually concentrates near the potential well center and the search range gradually shrinks. If the potential well center and the search range gradually approach the global optimum, the final result of the algorithm can therefore be expected to be the global optimum. We therefore optimize the contraction-expansion factor and the potential well center, improving the exploitation ability of individuals and the exploration ability of the swarm in the later stage of the algorithm, so that the potential well center falls near the global optimum more easily. A particle swarm initialization method based on a chaotic strategy is proposed: using the ergodicity unique to chaotic systems, it spreads the particles well over the solution space of the search problem. Based on the antecedent normal cloud model, an adaptive method for determining the control parameter and the potential well center is proposed, and based on the consequent normal cloud model, an adaptive particle mutation method is proposed. Applying these improvements together to QPSO effectively improves the performance of the original algorithm. This paper also studies the principle and method of multi-optical-axis parallelism measurement. On the hardware of a parallelism measurement system built around a large-diameter off-axis parabolic mirror collimator, the measurement accuracy, degree of automation, and anti-interference ability of the system are improved through image processing technology and analyzed in detail.
According to the characteristics of the system, an optimized focus evaluation function and a center positioning algorithm are proposed to improve the measurement speed and accuracy of the system. Improving the system's DCT focus-evaluation-coefficient method and optimizing the least squares ellipse fitting process improve the measurement accuracy and anti-interference ability, shorten the calculation time, and enhance the real-time performance and automation of the system.

1. Introduction

Computing is one of the most important aspects of human thinking ability, and improvements in computing power are closely related to the progress of human civilization [1]. From the ancient abacus to the modern supercomputer, computing technology has achieved revolutionary breakthroughs. Today, the widespread use of computers has transformed, and continues to transform, our world. Modern information processing with the computer at its core has brought humanity into a new information age. Accelerating computation to improve the computing power of the computer has therefore become one of the central tasks of computer science [2]. How can computing be sped up? The problem can be approached from two directions: manufacturing more advanced computer hardware, and designing appropriate computational procedures, which we call "algorithms." Intelligent computing offers new ideas for finding fast algorithms and solving complex problems. Computational intelligence is a new computing paradigm built on the relatively mature development of neural networks, fuzzy systems, and evolutionary computation. Because computational intelligence does not require an accurate mathematical model of the problem itself, it is suitable for problems that are difficult or even impossible to solve with traditional artificial intelligence techniques because an effective formal model is hard to establish [3]. Computational intelligence spans disciplines including physics, computer science, mathematics, electromechanics, physiology, and evolutionary theory. It does not mainly study a single technique, but rather how to integrate several techniques so that they complement each other's strengths, and how to apply them to practical problems.

In the imaging process, the image is easily blurred by noise and other causes, and the image obtained directly from the detector cannot satisfy the requirements of clinical diagnosis, so image enhancement is also required [4]. Current image enhancement methods can be roughly divided into two categories: frequency domain methods and spatial domain methods, with most enhancement performed in the spatial domain. Spatial domain methods process the pixels of the image directly; the traditional methods are basically based on grayscale mapping transformations, such as histogram equalization and neighborhood enhancement. The development of the multi-optical-axis parallelism measurement system needs to overcome the shortcomings of traditional parallelism detection methods, such as large errors, inability to analyze quantitatively, poor operability, nonrepeatability, and inability to detect and adjust at the same time [5]. The main technical challenges are improving the measurement accuracy, automating the detection process, and expanding the range of use. This has important significance and practical value in the actual production and testing of multi-optical-axis systems [6].

In this paper, chaotic initialization of the quantum particle swarm optimization algorithm and an improved quantum-behaved particle swarm optimization algorithm based on cloud model adaptive mutation are proposed in turn, and the two improvements are combined into one improved quantum-behaved particle swarm optimization algorithm. The particle swarm generated by the chaotic map is used as the initial population; a function derived from the antecedent normal cloud model is used as an adaptive contraction-expansion factor, and the random point, namely the center of the potential well, is dynamically adjusted. With a certain probability, cloud-generated particles replace the particles updated by the algorithm, improving particle activity and the accuracy of the algorithm. The improved quantum particle swarm optimization algorithm is then combined with the BP neural network: its superior optimization characteristics replace the gradient descent method used to train the weights and thresholds of the BP neural network, establishing an improved method. The significance of focusing technology for the precision and automation of the system is analyzed. According to the system structure, a passive focusing method is adopted. Common focus evaluation functions are compared, and the DCT coefficient evaluation method is selected as the system's focus evaluation method according to the characteristics of the focus image information. In view of poor image quality, the DCT coefficient evaluation method is studied and improved in detail, and the improved algorithm is used in experiments, achieving a better focus evaluation effect. The characteristics of the pulsed image spot at the focal plane of the parabolic mirror are analyzed, and the commonly used spot center finding algorithms are compared.
The fitting accuracy and anti-interference ability are improved, and the calculation time is shortened. Experiments using the optimized algorithm achieve better positioning accuracy and faster operation, and the automatic processing capability of the system is improved.

The swarm intelligence algorithm uses simple and limited individual behavior and intelligence to form, through interaction, the overall ability of the entire group to solve problems. The search and optimization process is modeled as the foraging or evolution process of individuals, and each organism is abstracted into a point in the search space, no longer retaining the qualities of an actual organism. The individual's ability to adapt to the environment is measured by the objective function of the problem to be solved, and the iterative replacement of poor feasible solutions with good ones during search and optimization simulates foraging or survival of the fittest in nature. The optimization process of a swarm intelligence algorithm is thus an iterative search characterized by "generation + experience." Swarm intelligence algorithms are adaptive artificial intelligence techniques for solving extremum problems.

The swarm intelligence algorithm is an optimization method proposed by simulating the interaction mechanisms of animals or other creatures in nature [7]. The "swarm" reflects the use of multiple candidate solutions during the solution process. The ant colony optimization algorithm is a meta-heuristic random search algorithm that exploits positive feedback: individual artificial ants act on the environment through pheromones, and the environment in turn stimulates the individual ants [8]. Through continuous indirect information exchange and synergy among the ants in the colony, the algorithm can quickly converge to a small subset of the feasible solution space and is especially suitable for complex problems that are difficult for traditional search methods [9]. Particle swarm optimization simulates the behavior of bird flocks and fish schools: the position of each bird corresponds to a solution, and each bird is treated as a particle. The birds search under the guidance of the globally optimal particle and their own social experience, and finally find the optimal solution [10]. Particle swarm optimization has clear advantages on continuous optimization problems. Later, inspired by the behavior of other species, a series of swarm intelligence algorithms were proposed, such as the artificial bee colony algorithm, cuckoo search algorithm, bat algorithm, firefly algorithm, and gray wolf optimization algorithm.

The swarm intelligence algorithm is a novel distributed model for solving optimization problems, inspired by the behavior of insects or animals in the biological world. Swarm intelligence refers to individuals without intelligence expressing intelligent behavior through cooperation. In a swarm intelligence system, each individual has only simple behaviors or actions, and a swarm is a collection of individuals that can communicate with each other (by changing the local environment). Such individuals can cooperate in distributed problem solving, giving the group intelligence as a whole.

The key to the success of meta-heuristics in solving optimization problems is balancing "exploration" and "exploitation" [11, 12]. Exploration expands the search so that better regions of the search space can be identified, while exploitation strengthens the search using accumulated experience, refining the optimal solution near the good solutions already found. All meta-heuristics employ both capabilities, but they realize them with different methods; in other words, all search algorithms share a common framework [13]. Different algorithms suit different optimization problems, but theory has so far shown that no single algorithm can solve all optimization problems well [14]. It is therefore worthwhile to propose new swarm intelligence optimization algorithms.

Initially, the memetic algorithm took the form of a combination of a local search strategy with a genetic algorithm, and such algorithms were also called hybrid genetic algorithms [15]. The framework of the original memetic algorithm is therefore similar to that of the genetic algorithm: in essence, a local search strategy is introduced on top of the genetic algorithm to locally optimize individuals in each iteration [16]. The memetic algorithm, however, is a new framework, an algorithm architecture that integrates a global search strategy with a local search strategy, which is fundamentally different from the genetic algorithm: the global search strategy may be a genetic algorithm or any other global search algorithm. The memetic algorithm combines the advantages of local and global search [17]. It uses the local search strategy for a deep search around individuals and the global search strategy for a broad search over the population. The memetic algorithm therefore not only has strong local search ability, which speeds up the search, but also strong global optimization ability, which ensures better search accuracy and higher solution quality [18]. Studies have shown that, on some problems, the search efficiency of memetic algorithms is several orders of magnitude higher than that of traditional genetic algorithms [19, 20].

3. Methods

3.1. Chaos Optimization of Quantum Particle Swarm Optimization

The correlation dimension D describes the structural characteristics of a strange attractor; it is an important parameter of the chaotic phase space and a basic index of the strange attractor. D is determined from the realization data of a time series: given a series x_i with delay time τ and embedding dimension d, form the delay vectors

X_i = (x_i, x_{i+τ}, …, x_{i+(d−1)τ}), i = 1, 2, …, M, M = N − (d − 1)τ,

and let the correlation integral be

C(r, N, W) = (2 / (M(M − 1))) Σ_{j−i>W} θ(r − ‖X_i − X_j‖),

where θ(·) is the Heaviside step function and pairs closer in time than the Theiler window W are excluded from the sum. The correlation dimension D is then

D = lim_{r→0} ln C(r, N, W) / ln r.

A sequence can be judged chaotic from the correlation dimension D: as the embedding dimension increases, the estimated dimension of a chaotic sequence gradually stops changing (it saturates), while the dimension of a random sequence keeps increasing.
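The estimate above can be sketched in code; the delay, embedding dimension, radius grid, and series length below are illustrative choices, not values from the paper.

```python
import numpy as np

def correlation_integral(x, tau, d, r, theiler=0):
    """Correlation integral C(r, N, W) of a scalar time series x:
    the fraction of delay-vector pairs (with |i - j| > W) whose
    distance is below the radius r."""
    x = np.asarray(x, dtype=float)
    m = len(x) - (d - 1) * tau                      # number of delay vectors
    X = np.stack([x[i * tau:i * tau + m] for i in range(d)], axis=1)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    i, j = np.triu_indices(m, k=theiler + 1)        # pairs with j - i > W
    return np.mean(dist[i, j] < r)

def correlation_dimension(x, tau, d, radii, theiler=0):
    """Grassberger-Procaccia estimate: slope of ln C(r) versus ln r."""
    c = np.array([correlation_integral(x, tau, d, r, theiler) for r in radii])
    keep = c > 0                                    # log undefined where C = 0
    slope, _ = np.polyfit(np.log(radii[keep]), np.log(c[keep]), 1)
    return slope

# A logistic-map orbit is chaotic: its estimated dimension saturates
# near 1 as d grows, while white noise would keep filling dimensions.
y = np.empty(1200)
y[0] = 0.123
for k in range(1199):
    y[k + 1] = 4.0 * y[k] * (1.0 - y[k])
D = correlation_dimension(y, tau=1, d=3, radii=np.logspace(-2, -0.5, 10))
```

Checking whether D stops growing as the embedding dimension d increases is the chaos test described above.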

The chaotic logistic model is

y_{k+1} = α · y_k · (1 − y_k), y_k ∈ (0, 1).

Here y denotes the chaotic variable of particle x and k denotes the iteration number. When α = 4, the logistic model is fully chaotic, provided y is not initialized at 0.25, 0.5, or 0.75 (0.75 is a fixed point of the map, and 0.25 and 0.5 map into fixed points). The ergodicity of chaos theory then means that, for an orbit {x_i} of the chaotic map and any integrable function f(x), the time average equals the space average:

lim_{N→∞} (1/N) Σ_{i=1}^{N} f(x_i) = ∫_0^1 f(x) dF(x),

where the distribution function F(x) of the invariant measure of the logistic map is

F(x) = (2/π) arcsin(√x), 0 ≤ x ≤ 1.

Quantum particle swarm optimization is essentially a random search method that depicts the movement of particles in the form of probability waves. The algorithm updates and iterates according to individual historical information, the individual optimal value, and the global optimal value. From initialization through iterative update, the QPSO algorithm is a random process under certain conditions. Although this random process tries to keep the particles uniformly distributed over the quantum space, the quality of random individuals often fails to meet the convergence requirements of the algorithm: particles may repeat and do not cover the whole space well. That is, random initialization often cannot produce an optimal initial population. Since each generation of the QPSO algorithm is affected by the historical information, individual extremum, and global extremum of the previous generation, a poor initial population propagates through the iterations, which can cause the algorithm to fall into a local optimum and fail to converge globally.
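A minimal sketch of the chaotic initialization discussed above, using the logistic map at α = 4; the population size, dimension, bounds, and seed value y0 are illustrative.

```python
import numpy as np

def chaotic_init(n_particles, dim, lower, upper, y0=0.37):
    """Initialize a particle swarm from a logistic chaotic sequence.

    Iterates y_{k+1} = 4 y_k (1 - y_k), which is ergodic on (0, 1),
    and maps each value linearly onto the search bounds. y0 must avoid
    0, 0.25, 0.5, 0.75, and 1, where the orbit degenerates.
    """
    y = y0
    swarm = np.empty((n_particles, dim))
    for i in range(n_particles):
        for j in range(dim):
            y = 4.0 * y * (1.0 - y)        # logistic map, alpha = 4
            swarm[i, j] = lower + (upper - lower) * y
    return swarm

swarm = chaotic_init(n_particles=30, dim=10, lower=-5.0, upper=5.0)
```

Because the chaotic orbit is ergodic, the mapped points spread over the whole search range instead of clustering the way a poor random draw can.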

3.2. Cloud Model Algorithms

A cloud model is a mathematical model that can transform between qualitative concepts and quantitative uncertainty. Through such transformation, the conversion between discrete concepts (qualitative) and continuous data (quantitative) can be realized, which shows that qualitative concepts and quantitative data have an interrelated, interdependent mapping relationship. The cloud model overcomes the formal difficulty of combining the qualitative and the quantitative; its conversion process follows objective laws and can establish a clear mapping for converting between fuzziness and certainty.

The cloud model is built on fuzzy theory and probability theory. It depicts the transformation relationship between qualitative concepts and quantitative values, capturing both the randomness of quantitative values and the fuzziness of qualitative concepts. In this way, the conceptual uncertainty in the cloud model can be fully described.

The membership function is the most important research tool for fuzzy sets. It reflects the partial, "both this and that" character of an event. The membership function is usually determined from the experience and knowledge of domain experts, so the membership function itself carries uncertainty. Among the many membership functions, the normal membership function is the most widely used:

μ(x) = exp(−(x − a)² / (2b²)),

where a is the center of the concept and b its width.

In the definition of the cloud, the cloud droplet is the basic unit of the cloud model, and a cloud model contains many cloud droplets. Each cloud droplet is a point at which a qualitative concept is mapped into the number domain, that is, one random quantitative realization of the qualitative concept, and the degree of certainty with which each droplet represents the concept is itself fuzzy. Cloud droplets are generated randomly, with subtle variations among them, and the properties of a single droplet have no significant effect on the properties of the whole cloud. Many droplets gathered together, however, describe the random and fuzzy characteristics of the cloud mapping; that is, the cloud model describes the conversion between qualitative concepts and quantitative values through the overall shape of the cloud.
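The droplet generation just described is the standard forward normal cloud generator; the eigenvector values and droplet count below are illustrative.

```python
import numpy as np

def forward_normal_cloud(Ex, En, He, n, rng=None):
    """Forward normal cloud generator.

    For each droplet: draw En' ~ N(En, He^2), then draw the droplet
    position x ~ N(Ex, En'^2), and compute its certainty degree
    mu = exp(-(x - Ex)^2 / (2 En'^2)). Returns (x, mu) arrays.
    """
    rng = np.random.default_rng(rng)
    En_p = rng.normal(En, He, n)            # second-order randomness via He
    En_p = np.where(En_p == 0, 1e-12, En_p)
    x = rng.normal(Ex, np.abs(En_p))        # droplet positions
    mu = np.exp(-(x - Ex) ** 2 / (2.0 * En_p ** 2))
    return x, mu

x, mu = forward_normal_cloud(Ex=0.0, En=1.0, He=0.1, n=1000, rng=42)
```

Each pair (x, μ) is one random quantitative realization of the qualitative concept together with its fuzzy certainty degree, matching the description of cloud droplets above.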

3.3. QPSO Based on Normal Cloud Adaptive Variation

Apart from the population size and the number of iterations, the quantum particle swarm algorithm has only one adjustable parameter, the contraction-expansion coefficient CE. CE determines the search radius of the QPSO algorithm, so its selection is key to the performance of QPSO and affects the algorithm's convergence speed. If CE is too small, the global search range is reduced: searching in a small neighborhood weakens the exploration ability of the particle and the optimal point is easily lost. This can be beneficial late in the search, when the neighborhood of the global optimum should be exploited fully so that a better point may be found, but it is unfavorable early in the search, when particles need to explore the full range. If CE is too large, the situation is reversed, and in the extreme the QPSO algorithm becomes a purely random search. The selection of CE is therefore a trade-off between particle exploration and exploitation.

For the control of α, the necessary and sufficient condition for the QPSO algorithm to converge to the center of the potential well is α < 1.78. At present, α is generally decreased linearly from 1 to 0.5, but this linear decreasing strategy has some problems:
(1) CE decreases rapidly and cannot maintain a large value for long in the early stage of the algorithm;
(2) if the global optimal point is found in the early stage but the search radius, i.e., CE, is too large, particles may jump away from this optimal point, reducing the search ability near it.

Therefore, the selection of the CE value depends not only on the number of iterations but also on the current position of the particle. In view of this, this paper proposes a new adaptive algorithm to adjust the CE value to take into account the global and local search capabilities of particles.

Using a one-dimensional antecedent normal cloud, its eigenvector C(Ex, En, He) is

The degree of certainty of the Xi,j particle position in the QPSO algorithm is

Among them, gbest is the contemporary global optimal value, and gworst is introduced as the contemporary global worst value; Xi is the current particle position, N is the total number of particles, and D is the dimension of the particle. To map the particles completely between gbest and gworst, the value of En follows the "3En" rule for the contribution of cloud droplet intervals to the qualitative concept. The value of He is set according to the atomization property of the normal cloud model, so that the certainty degree u(Xi) forms a complete curve.
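One way to realize the adaptive contraction-expansion coefficient is sketched below. The mapping from certainty degree to CE, and the eigenvector choices Ex = f(gbest), En = (f(gworst) − f(gbest))/3 (so that gworst sits at the 3En boundary) and He = En/10, are illustrative assumptions consistent with the description above, not the paper's exact settings.

```python
import numpy as np

def adaptive_ce(fitness, ce_min=0.5, ce_max=1.0, rng=None):
    """Per-particle contraction-expansion coefficient from an
    antecedent normal cloud over fitness values (minimization).

    Assumed eigenvector: Ex = f(gbest), En = (f(gworst) - f(gbest)) / 3,
    He = En / 10. Particles near gbest get high certainty and a small CE
    (fine local search); particles far from gbest get a large CE
    (wide exploration).
    """
    rng = np.random.default_rng(rng)
    f = np.asarray(fitness, dtype=float)
    Ex, fworst = f.min(), f.max()
    En = max((fworst - Ex) / 3.0, 1e-12)
    He = En / 10.0
    En_p = np.abs(rng.normal(En, He, f.size)) + 1e-12
    u = np.exp(-(f - Ex) ** 2 / (2.0 * En_p ** 2))    # certainty degree
    return ce_max - (ce_max - ce_min) * u             # CE in [ce_min, ce_max]

ce = adaptive_ce([1.0, 2.0, 5.0, 10.0], rng=0)
```

Because En' carries the He randomness, the CE values fluctuate slightly from run to run, which mirrors the cloud model's soft boundary rather than a hard schedule.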

To further prevent particle convergence from slowing during evolution, and especially to avoid the situation where the algorithm falls into a local optimum and the convergence accuracy can no longer improve because particle diversity is insufficient in the later stage, this paper introduces a perturbation operation based on the normal cloud model to enhance local convergence and the ability to jump out of local optima. The quantitative values in the cloud model are distributed on the certainty curve without repetition and are ergodic within the value range. In addition, given a known certainty degree, a quantitative value can be generated with the consequent normal cloud. It is therefore feasible to use the cloud model to apply mutation to the particles, and a search strategy based on the cloud model is well suited here. Based on the principle of the adaptive-mutation quantum particle swarm algorithm with the normal cloud model, the algorithm flow is obtained, as shown in Figure 1.
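The perturbation step can be sketched with the consequent (Y-conditional) normal cloud generator: given a certainty degree μ, it produces a new quantitative value around the global best. Centering the cloud at gbest and the parameter values below are illustrative assumptions.

```python
import numpy as np

def consequent_cloud_mutation(gbest, En, He, mu, rng=None):
    """Y-conditional normal cloud generator used as a mutation operator.

    Given a certainty degree mu in (0, 1], generate a droplet
    x = Ex +/- En' * sqrt(-2 ln mu) around the global best position
    Ex = gbest, where En' ~ N(En, He^2). The returned vector can
    replace a stagnating particle with a certain probability.
    """
    rng = np.random.default_rng(rng)
    Ex = np.asarray(gbest, dtype=float)
    En_p = np.abs(rng.normal(En, He, Ex.shape)) + 1e-12
    sign = rng.choice([-1.0, 1.0], Ex.shape)
    return Ex + sign * En_p * np.sqrt(-2.0 * np.log(mu))

gbest = np.zeros(5)
mutant = consequent_cloud_mutation(gbest, En=0.5, He=0.05, mu=0.8, rng=1)
```

A high μ yields a small perturbation near gbest (local refinement), while a low μ throws the particle farther out, which is what lets the swarm escape a local optimum.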

3.4. Improved Fusion Method of QPSO and Neural Network

A neural network learns by modifying the weights and thresholds of the network; over time, the network's parameters are learned and refined according to predetermined metrics. Depending on the information provided by the environment, learning can be divided into supervised learning (with a tutor) and unsupervised learning (without a tutor). In supervised learning, the network must be given both the input and the expected output, so the amount of information to process is large, and the network must produce an output for every input. The BP network learns in the supervised way. Unsupervised learning is self-organized throughout: it needs only the network input and requires neither an expected output nor external feedback.

The BP neural network is a feedforward network. The processing ability of a single neuron is limited, but when single neurons with simple processing ability are connected in a distributed fashion, the resulting network can handle complex nonlinear problems. Nevertheless, because the learning algorithm of the BP neural network is based on gradient descent, it is prone to problems such as premature convergence, slow convergence, and long training time. The QPSO algorithm avoids the differentiability and derivative computations required by gradient descent; it also avoids the complex encoding, decoding, crossover, and mutation operations of the genetic algorithm, which lengthen convergence time. Therefore, using the improved quantum particle swarm algorithm (M-QPSO) instead of gradient descent to train the weights and thresholds of the BP neural network not only improves performance and speeds up convergence but also effectively escapes local optima and enhances the generalization ability of the algorithm. The process of training the BP neural network with the improved quantum particle swarm algorithm is shown in Figure 2.
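A compact sketch of replacing gradient descent with QPSO for training a small network: each particle encodes all weights and thresholds, and the fitness is the network's mean squared error. It uses the basic QPSO update (local attractor, mean best position, linearly decreasing CE) without the cloud-model refinements, and the 2-4-1 XOR network, bounds, and iteration counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])       # XOR targets

H = 4                                     # hidden units: a 2-4-1 network
DIM = 2 * H + H + H + 1                   # W1 (2xH) + b1 (H) + W2 (H) + b2

def mse(w):
    """Fitness: mean squared error of the network encoded by w."""
    W1 = w[:2 * H].reshape(2, H)
    b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H]
    b2 = w[4 * H]
    h = np.tanh(X @ W1 + b1)
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    return np.mean((y - T) ** 2)

N, ITERS = 30, 300
pos = rng.uniform(-2, 2, (N, DIM))
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pbest])
g = pbest_f.argmin()
initial_best = pbest_f[g]

for t in range(ITERS):
    alpha = 1.0 - 0.5 * t / ITERS         # CE decreases linearly 1 -> 0.5
    mbest = pbest.mean(axis=0)            # mean best position
    for i in range(N):
        phi = rng.random(DIM)
        p = phi * pbest[i] + (1 - phi) * pbest[g]    # local attractor
        u = rng.random(DIM)
        sign = np.where(rng.random(DIM) < 0.5, -1.0, 1.0)
        pos[i] = p + sign * alpha * np.abs(mbest - pos[i]) * np.log(1.0 / u)
        f = mse(pos[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = pos[i].copy(), f
    g = pbest_f.argmin()

final_best = pbest_f[g]
```

No gradients are computed anywhere: the network is treated as a black-box fitness function, which is exactly why the approach sidesteps the differentiability requirements of gradient descent.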

4. Results and Analysis

4.1. Automatic Console Spot Image Preprocessing

The pulse width of the pulsed laser emitted by the semiconductor pulsed laser transmitter is 10 ns, pulses are emitted twice per second on average, and the average pulse energy is 20–30 mJ. Because the pulse width is extremely short, the corresponding peak power is extremely high, which can easily endanger equipment or personnel. In practical applications, an attenuator is therefore added at the port of the laser transmitter to attenuate the pulsed laser power. Figure 3 shows the transmittance curves of attenuators of different specifications at different wavelengths. The attenuators ZAB25, ZAB35, and ZAB45 provide 25%, 50%, and 70% transmittance, respectively, in the visible band. The pulsed laser in the experimental system has a near-infrared wavelength of 1020 nm, and the transmittances of the attenuators in this band fall to about 8%, 27%, and 40%, respectively.

The actual system uses the ZAB25 attenuator to reduce the transmittance to less than 10%. The diameter of the image spot on the focal plane is then about 0.29 mm, the spot image on the CCD receiving surface is about 4 pixels across, and the spot is a regular, uniform circle. The geometric center of the spot essentially coincides with its energy center, so the center positioning error is small. In the following analysis of the spot center positioning algorithms, in order to make it easier to view and compare the positioning accuracy of the different algorithms, the higher-transmittance ZAB70 attenuator was used to acquire the spot images.

The emitted laser is a pulsed laser with a pulse width of 10 ns, emitted twice per second on average during continuous operation. The near-infrared camera is set to video mode and captures the spot video in real time. Considering the CPU processing speed, hard disk read/write speed, laser pulse width, attenuation, and CCD integration time, the video frame rate of the near-infrared camera is set to 25 frames per second; multiple frames are collected continuously, and the image spot appears only in certain frames.

4.2. Experiment of DCT Coefficient Evaluation

This process is similar to that of the human visual system: the human eye is sensitive to low-frequency signals and insensitive to high-frequency ones. The sharpness or blur of an image is distinguished by how pronounced the changes at image edges are, and this information is reflected in the coefficients of the low-frequency region after the DCT, so the DCT coefficients can well describe the blurring and sharpening of the image during focusing. Reasonable extraction and combination of coefficients in the low-frequency region can not only reduce the time complexity of the algorithm but also optimize the DCT coefficient evaluation method, removing the influence of coefficients with poor focus-evaluation performance on the overall performance and yielding a focus evaluation function with better behavior.

Each coefficient in the focus evaluation coefficient matrix has different characteristics, such as frequency, amplitude, and amount of information. DCT is performed on the target cross-wire diagram in the visible light system, and the focus evaluation function curve of 8 diagonal coefficients of the 64 DCT coefficients is obtained, as shown in Figure 4.

It can be seen from Figure 4 that the DC coefficient and the medium- and high-frequency coefficients ACmm (m ≥ 3) fluctuate up and down as the image plane position d changes; they are not unimodal and are not suitable as focus evaluation functions. The AC coefficients AC11 and AC22 on the main diagonal of the DCT coefficient matrix show obvious unimodality, especially AC11, which meets the signal-to-noise-ratio and sensitivity requirements of a focus evaluation function. That is, image sharpness information is mainly concentrated in the low-frequency AC coefficients.

The image sharpness information is mainly concentrated in the low-frequency region, so only the low-frequency region, that is, the 15 AC coefficients ACmn (m ≤ 4, n ≤ 4) in the upper left corner of the 8 × 8 DCT coefficient matrix, is analyzed and used as the focus evaluation function. The curves are shown in Figure 5.

A detailed analysis of the data in Figure 5 shows that when the AC coefficients in the low-frequency region are used as the focusing criterion, their performance falls into three categories: (1) complete failure, with poor unimodality; (2) good unimodality, but average signal-to-noise ratio and sensitivity; (3) good unimodality together with good signal-to-noise ratio and sensitivity.

For the far-infrared system with poor imaging quality, the focus evaluation performance of these five coefficients is shown in Figure 6. As can be seen from Figure 6, when the imaging quality is poor, the quality of the focus evaluation degrades; in particular, the two AC coefficients AC01 and AC02, which are closer to the DC component, show worse unimodality, while AC03, AC04, and AC05 still maintain good unimodality with a certain signal-to-noise ratio and sensitivity.

The focus evaluation performance of the DC component and the high-frequency AC components is poor. Among the low- and intermediate-frequency components, the sensitivity and signal-to-noise ratio of the coefficients close to the DC component or to the high-frequency components are strongly affected by imaging quality, so their focus evaluation performance does not generalize well. In the low-frequency region, the three AC components AC03, AC04, and AC05 perform well in focus evaluation under different imaging systems. Figure 7 shows the focus evaluation curves before and after the improvement for the visible light system and the far-infrared system. It can be seen from the figure that the improved evaluation performs better and is more suitable for imaging systems of different qualities.
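As a sketch of this kind of evaluation, the code below computes the 8 × 8 block DCT and accumulates the magnitudes of three low-frequency coefficients. Reading AC03, AC04, AC05 as row 0, columns 3 to 5 of the coefficient matrix is an assumption for illustration, as are the synthetic image and the box-blur defocus model.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II transform matrix T, so that C = T @ B @ T.T."""
    k = np.arange(n)
    T = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    T *= np.sqrt(2.0 / n)
    T[0, :] = np.sqrt(1.0 / n)
    return T

def dct_focus_value(img, coeffs=((0, 3), (0, 4), (0, 5))):
    """Focus evaluation: sum of |AC_mn| for the selected low-frequency
    DCT coefficients over all complete 8x8 blocks of the image."""
    T = dct_matrix(8)
    h, w = img.shape[0] // 8 * 8, img.shape[1] // 8 * 8
    total = 0.0
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            C = T @ img[r:r + 8, c:c + 8] @ T.T
            total += sum(abs(C[m, n]) for m, n in coeffs)
    return total

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))        # stand-in for an in-focus image
# Crude defocus model: a 5x5 box blur of the same image.
pad = np.pad(sharp, 2, mode="edge")
blurred = sum(pad[i:i + 64, j:j + 64] for i in range(5) for j in range(5)) / 25.0
```

Defocus attenuates exactly these mid-low frequencies, so the evaluation value peaks at best focus and drops as the blur grows.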

We compare the time consumption of the different algorithms by processing 200 images of 2448 × 2048 pixels and computing the average processing time. The test computer uses an Intel Core i3 530 processor with a main frequency of 2.93 GHz and 1.80 GB of memory. The algorithms are developed in the Visual Studio 2010 environment using the OpenCV image processing library.

It can be seen from Table 1 that the improved DCT focus evaluation function has a shorter calculation time and better focus evaluation performance; it can provide a reasonable and effective focus evaluation value for the system and automatically obtain in-focus images from different optical systems. The error introduced by human visual observation is reduced, which facilitates later image processing and data calculation.

4.3. Experiment of Spot Image Center Positioning

A further optimization of the measurement process is to average multiple measurements in actual use to improve the center positioning accuracy. The pulsed laser emits pulses continuously many times; within one measurement, the spot that appears repeatedly in the video is captured and fitted in real time, the average of the spot centers is taken as the result, and their spread is used as the stability index of the system, as shown in Table 2. From the standard deviation of the spot center positioning results in the table, it can be seen that after the optimization of the fitting process the fitting algorithm has strong anti-interference ability.

During the measurement, a ZAB25 attenuator was added at the pulsed laser output port. In the near-infrared band at 1060 nm its transmittance is about 8%, and the spot diameter at the focal plane is about 0.29 mm. After imaging through the near-infrared camera with a magnification of 0.0625, the spot on the CCD is about 18 μm in size.

The near-infrared camera's video acquisition frame rate is set to 25 frames per second, the pulsed laser pulse width to 10 ns, and the laser emission rate to 2 pulses per second. In the software, two threads handle image acquisition and image processing, respectively; the processing thread consumes data supplied by the acquisition thread, so every acquired frame can be processed. During video acquisition, because of differences in thread time-slice allocation and data processing speed, an image containing a spot is captured on average every 1.8 s, i.e., every 45 frames. The calculation accuracy of the spot center positioning algorithms is compared in Figure 8.
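The two-thread split described above is a standard producer-consumer pipeline. The system itself was written in C++/OpenCV; the sketch below only illustrates the threading pattern in Python, with a bounded queue so acquisition never blocks for long on processing (function names and parameters are illustrative):

```python
import queue
import threading

def run_pipeline(frames, process, maxsize=8):
    """Producer-consumer split: one thread 'acquires' frames while the
    other processes them, mirroring the two-thread design in the text."""
    buf = queue.Queue(maxsize=maxsize)
    results = []

    def acquire():
        for frame in frames:          # stands in for camera capture
            buf.put(frame)
        buf.put(None)                 # sentinel: acquisition finished

    def consume():
        while True:
            frame = buf.get()
            if frame is None:
                break
            results.append(process(frame))

    t1 = threading.Thread(target=acquire)
    t2 = threading.Thread(target=consume)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

With a single producer and a single consumer on a FIFO queue, frames are processed in acquisition order.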

Although the gray-scale centroid method is fast, its accuracy is limited: when the image is symmetric about the x-axis the positioning accuracy is high, but when the symmetry about the y-axis is poor the accuracy drops sharply. The Hough circle transform takes a long time to compute, and in practice its accuracy is limited by the imaging quality of the spot itself. Least squares ellipse fitting offers faster processing and higher calculation accuracy, but background noise and other targets in the measured scene degrade its positioning accuracy. We therefore optimized the image preprocessing and fitting process of the least squares ellipse fitting method. The ROI is set automatically from the gray centroid, and the fitting is performed within the ROI, which shortens the image processing time; restricting the fit to the ROI also removes the influence of background noise. Together with preprocessing such as smoothing, morphological closing, and thresholding, the positioning accuracy and anti-interference of the algorithm are improved, and the speed, accuracy, degree of automation, and anti-interference all meet the requirements of the actual measurement environment.
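The two building blocks of this pipeline, the gray centroid for ROI placement and the least squares ellipse fit for the final center, can be sketched with numpy. This is a simplified algebraic conic fit, not the paper's exact formulation, and the function names are hypothetical:

```python
import numpy as np

def gray_centroid(img):
    """Intensity-weighted centroid (x, y); used to place the ROI automatically."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    s = img.sum()
    return (x * img).sum() / s, (y * img).sum() / s

def fit_ellipse_center(xs, ys):
    """Least-squares conic fit a*x^2 + b*xy + c*y^2 + d*x + e*y = 1 to edge
    points; the ellipse center is where the conic's gradient vanishes."""
    A = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    a, b, c, d, e = np.linalg.lstsq(A, np.ones_like(xs), rcond=None)[0]
    M = np.array([[2 * a, b], [b, 2 * c]])   # gradient = M @ [x, y] + [d, e]
    return np.linalg.solve(M, [-d, -e])
```

In the full pipeline, the centroid defines the ROI, the spot boundary is extracted inside the ROI after smoothing, closing, and thresholding, and only those boundary points are passed to the fit.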

5. Conclusion

Although improving the center of the potential well and the mean best position can strengthen the late-stage exploitation ability of the algorithm and increase particle activity, from the viewpoint of the QPSO operating process the particle search is only centered on the attractor, with the mean best position as its radius. In the later stage of the algorithm there is no mechanism for mutating the particles themselves, that is, for restoring the individual exploitation ability that weakens gradually with iteration; this ability is an important factor in enabling the algorithm to escape local optima. It is therefore necessary to find a mutation method that increases the late-stage search ability of the particles, so that the algorithm does not easily fall into a local optimum. QPSO and PSO also share an important problem: premature convergence during the particle search. Although QPSO improves on PSO in this respect, as the number of iterations grows its population diversity and search ability also decline gradually, so defects of premature convergence, delayed convergence, and failure to converge remain.
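One way to realize such a mutation mechanism is the normal cloud generator used later in this work: a drop of a cloud (Ex, En, He) is a normal sample whose spread En' is itself normally perturbed by the hyper-entropy He. The sketch below is a hedged illustration of per-dimension cloud mutation; the function names and the parameters pm, en, he are assumptions, not the paper's exact settings:

```python
import numpy as np

def cloud_drop(ex, en, he, rng):
    """One drop of a normal cloud (Ex, En, He): the effective spread En'
    is itself normally perturbed by the hyper-entropy He."""
    en_prime = rng.normal(en, he)
    return rng.normal(ex, abs(en_prime))

def cloud_mutate(position, pm, en, he, rng):
    """With probability pm per dimension, replace the coordinate with a
    cloud drop centered on its current value, reinjecting late-stage
    search activity into the particle."""
    pos = np.array(position, dtype=float)
    mask = rng.random(pos.shape) < pm
    for i in np.flatnonzero(mask):
        pos[i] = cloud_drop(pos[i], en, he, rng)
    return pos
```

Because He > 0 makes the spread itself random, cloud drops are heavier-tailed than plain Gaussian mutation, which helps particles jump out of local optima.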

Using the ergodicity of chaos, the particle swarm positions of the QPSO algorithm are initialized chaotically, ensuring that the swarm traverses the entire search region and achieving a good initial distribution. The chaotic initialization strategy not only improves the diversity of the initial population but also enhances the particles' coverage of the solution space. The normal cloud model is then introduced into the QPSO algorithm, yielding a new adaptive-mutation quantum-behaved particle swarm optimization algorithm. This method adopts a normal cloud model optimization strategy: it introduces each particle's own worst position and the global worst position and, in combination with each particle's own best position and the global best position, adaptively adjusts the potential well center and the contraction-expansion coefficient. New particles are mutated with a certain probability using the normal cloud model. Finally, experiments on standard benchmark function optimization show that although the single-step iteration time of this algorithm is longer, its optimization ability is greatly improved compared with similar algorithms.
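The chaotic initialization can be sketched with the logistic map, a common choice for chaotic sequences (the paper does not specify which map it uses, so this is an illustrative assumption): iterating x ← μx(1 − x) with μ = 4 produces an ergodic orbit dense in (0, 1), which is then scaled into the search bounds.

```python
import numpy as np

def chaotic_init(n_particles, dim, lo, hi, mu=4.0, seed=0.31):
    """Fill the initial swarm with logistic-map iterates x <- mu*x*(1-x),
    mu = 4, scaled from (0, 1) into [lo, hi]. The seed must avoid the
    map's fixed points (0 and 0.75) and the pre-image 0.5."""
    x = seed
    pos = np.empty((n_particles, dim))
    for i in range(n_particles):
        for j in range(dim):
            x = mu * x * (1.0 - x)
            pos[i, j] = lo + (hi - lo) * x
    return pos
```

Compared with uniform random initialization, the chaotic orbit deterministically sweeps the interval, which is the traversal property the text relies on.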

Replacing the BP learning algorithm of the neural network with the improved QPSO algorithm and constructing a new quantum particle swarm neural network model can improve the learning speed of the network, reduce its sensitivity to the initial weights, reduce the oscillation caused by external excitation, and improve the global search ability of the algorithm. According to the system structure and the characteristics of the focusing target, the DCT coefficient evaluation method is adopted as the system's focus evaluation method; the DCT focusing technique is improved for cases of poor image quality, yielding a focus evaluation function with better generality. Integrated into the system software as part of image preprocessing, it achieves a better focusing effect and lays the foundation for high-precision measurement. At the same time, compared with common spot center positioning methods, the least squares ellipse fitting algorithm is selected and its implementation process is optimized; while improving fitting accuracy, it reduces processing time and realizes real-time capture of image spots.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported by the project of 2021 School Level Scientific Research and Innovation Team (Innovation and entrepreneurship education scientific research and innovation team, No. HNACKT-2021-01).