Abstract
Moving object detection and tracking is the basis and key technology of intelligent video surveillance, human-computer interaction, mobile robot and vehicle visual navigation, industrial robot systems, and other applications, and intelligent technology can greatly facilitate the monitoring of large numbers of targets. This paper adopts a random motion model to describe the motion state of the target. Based on wireless communication and information security, this paper studies the communication and information security of an intelligent system for a martial arts target detection and tracking algorithm; the basic idea of the mean-shift tracking algorithm is to climb the gradient of the probability density to find a local optimum. The background subtraction method based on the ViBe algorithm is used to obtain the binary image of the foreground object; a shadow detection algorithm removes the shadow from the foreground image; the Haar-like (Haar) feature is selected as the feature for motion detection, and the feature value of a rectangular area is quickly calculated to describe the feature differences between adjacent image regions, after which the resulting image is passed to the intelligent system for analysis. Experimental data show that the time consumed by the tracking algorithm is less than 20 ms, which meets the real-time requirements of ordinary target tracking systems. The average processing time of the hybrid modeling method is 62.8 ms, with a detection rate of 15.92 frames/sec. The results show that the algorithm improves the utilization of particles, greatly reduces the complexity, and alleviates the degradation of the particle filter.
1. Introduction
Intelligent technology needs to collect motion trajectories and embed them into the system to produce intelligence, so moving target tracking, recognition, and moving target detection are the foundation of intelligent technology, and target detection is therefore a very important step. In video surveillance images, moving target detection is likewise essential and is the basis of image understanding. The continuous development of computer video technology drives in-depth research on target detection and tracking technology. Through continuous research on moving target detection and tracking in video surveillance, the technology has been widely applied to video analysis and conferencing, moving target recognition, system navigation, security surveillance, traffic dispatch, weather analysis, and other fields. How to apply wireless communication and communication security to martial arts target detection and tracking algorithms is a question worth pondering.
Target detection and tracking mainly analyze and process each frame of a video sequence. Target detection separates the background from the image to obtain the foreground target, so that tracking, recognition, and other middle- and high-level tasks can be carried out. Target tracking extracts color, texture, scale, shape, edge, and other information from consecutive frames of the video sequence and then performs feature matching. In video, however, the moving target is affected by interference such as partial occlusion and multitarget interference, and external interference can cause the target to disappear temporarily or deform. The target detection and recognition algorithms we need must therefore be highly robust; robustness here means the ability of the system to keep working under abnormal and adverse conditions.
The detection and tracking of moving targets is an important subject in the field of intelligent systems. Hu et al. discuss how to strike a balance between detection quality and network lifetime for target detection in wireless sensor networks and put forward two target monitoring schemes; although the research idea is sound, it lacks specific experimental content [1]. Hu and Liu note that in moving target detection it is very difficult to replace or recharge the batteries of sensor nodes, and because of these inherent resource constraints, the chosen parameter values seriously affect energy efficiency and detection quality. They proposed a novel moving target detection method, partial nodes with full duty cycle (PNFDC), to improve the detection quality and energy efficiency of wireless sensor networks (WSN). Each sensor node adopts the best duty cycle according to the network load, which guarantees detection quality; in addition, PNFDC sets a certain percentage of nodes to full duty cycle (FDC) based on their energy abundance to further improve detection quality. Their research, however, lacks experimental data [2]. Shi et al. observe that a distributed radar network system has many unique capabilities. In practice, the networked radars tend to maximize their transmit power to obtain better detection performance, which may conflict with low probability of intercept (LPI). They therefore study adaptive power allocation for radar networks in a cooperative game-theoretic framework to improve LPI performance. Considering the transmit power constraints and the minimum signal-to-interference-plus-noise ratio (SINR) requirement of each radar, they developed an LPI-based cooperative Nash bargaining power allocation game (NBPAG). First, they define a novel SINR-based network utility function and use it as a measure to evaluate power allocation; then, they prove the existence and uniqueness of the Nash bargaining solution (NBS) through the well-designed network utility function. Although the research is relatively comprehensive, its direction is relatively vague [3]. Yu et al. proposed a new constant false alarm rate (CFAR) target detection algorithm based on superpixels for high-resolution synthetic aperture radar (SAR) images. The detection algorithm comprises three stages: segmentation, detection, and clustering. In the segmentation stage, a superpixel generation algorithm segments the SAR image. In the detection stage, based on the generated superpixels, the clutter distribution parameters of each pixel can be estimated adaptively even when multiple targets are present, and a two-parameter CFAR test statistic is then applied. In the clustering stage, hierarchical clustering groups the detected superpixels to obtain candidate targets. MiniSAR data were used to demonstrate the effectiveness of the algorithm, but the research is not accurate enough [4]. Liu and Zhang, building on the fusion of the target detection network YOLO v4 and the detection-based multitarget tracking algorithm DeepSORT, designed an automatic vehicle detection and tracking method based on deep learning for urban environments; simulation results show that the proposed algorithm can realize automatic vehicle detection and tracking in an urban environment [5, 26].
This paper studies the influence of partial occlusion of the target on the mean-shift tracking algorithm. The algorithm uses only a few particles to fully describe the target state, characterizes the target well, improves the utilization of particles, and reduces the time complexity. The ViBe algorithm is used to update the martial arts background model, which benefits the research and development of recognition for a variety of different sports actions and improves the scalability of the sports training system. Experiments verify that research on martial arts target tracking and monitoring promotes the development of intelligent technology.
This paper describes the target tracking trajectory method in detail, discusses the moving target in depth, describes the image processing pipeline, and verifies the tracking and detection of martial arts targets through experiments.
2. Moving Target Detection and Tracking
2.1. Moving Target Detection
Moving target detection can be used in gait recognition, face detection and tracking, traffic statistics, and intelligent video monitoring, and it can provide information about the speed and size of the target. The objects of moving target detection are generally passers-by and vehicles. Background modeling is the key to moving target detection. A good background model not only detects foreground targets well but also adapts to changes in the background itself, such as sudden illumination changes and periodically moving objects in the scene, so that they are not mistaken for foreground targets [6]. The Euclidean distance is generally used to measure the distance between a pixel and the background model; for two points $p = (p_1, p_2, \dots, p_n)$ and $q = (q_1, q_2, \dots, q_n)$ it is calculated as
$$d(p, q) = \sqrt{\sum_{i=1}^{n} (p_i - q_i)^2}.$$
In a foreground detection algorithm based on background updating such as ViBe, pixel values are used in the background model: a pixel point $x$ is represented by the three-dimensional vector $v(x) = (R, G, B)$, where $R$, $G$, and $B$ are the pixel values in the RGB color space of the image [7]. To improve computational efficiency, the ViBe model compares pixel values using the squared distance
$$d^2(v_1, v_2) = (R_1 - R_2)^2 + (G_1 - G_2)^2 + (B_1 - B_2)^2,$$
which avoids computing the square root.
The principle of the ViBe algorithm is to build a sample set for each pixel from surrounding pixel values and previous pixel values and then compare the pixel value in a new frame with the values in the sample set. If the distance between the current pixel value and the sample values exceeds the set threshold for too many samples, the pixel is a foreground point; otherwise, it is a background point [8]. Assuming the current set of foreground points is $F$, a pixel $x$ with sample set $\{v_1, v_2, \dots, v_N\}$ belongs to $F$ in the feature space when fewer than $\#_{\min}$ samples lie within the radius $R$ of its current value $v(x)$, that is, when
$$\#\{S_R(v(x)) \cap \{v_1, v_2, \dots, v_N\}\} < \#_{\min}.$$
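To make this classification rule concrete, the following is a minimal per-pixel sketch in Python; the radius and minimum match count are illustrative defaults, and the function name is ours rather than the paper's.

```python
import numpy as np

def vibe_classify(pixel, samples, radius=20.0, min_matches=2):
    """Classify one pixel as background (True) or foreground (False).

    pixel   : (3,) array, current RGB value v(x)
    samples : (N, 3) array, background sample set {v_1, ..., v_N}
    A pixel is background when at least `min_matches` samples lie
    within Euclidean distance `radius` of the current value.
    """
    dists = np.linalg.norm(samples.astype(np.float32) - pixel.astype(np.float32), axis=1)
    return np.count_nonzero(dists < radius) >= min_matches
```

In practice the test is vectorized over the whole frame, and in the standard ViBe scheme pixels classified as background randomly update their own sample set and that of a neighbor.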
The ViBe algorithm is not sensitive to changes in lighting conditions and is suitable for target detection against dynamic backgrounds. In practical applications, however, the detection effect depends strongly on the time interval used when computing the frame difference. If the moving target is fast, a pair of frames with a short interval must be chosen, otherwise false moving targets are easily detected; if the target moves slowly, frames with a longer interval should be chosen, otherwise the two frames almost overlap and the object is difficult to detect [8]. In addition, objects detected by the ViBe algorithm are prone to holes, so morphological processing and connectivity analysis of the digital image are required for the moving objects. If a foreground object is detected in a certain frame, then in the next frame the probability of the same color value appearing in neighboring positions increases; this exploits the temporal constraint. Finally, all detected foreground points are used to update the foreground model, and to allow foreground objects to become part of the background, all pixels of the current frame are used to update the background model. The intensity of the background should remain invariant when the light changes and some dynamic textures of the background are removed; the background images are therefore linearly related to each other, forming a low-rank matrix [9].
In motion detection, the characteristics of moving objects and their shadows are very important. Existing methods fall mainly into two categories: those based on the shape characteristics of the target and those based on shadow characteristics. Shadow detection methods can likewise be divided into two categories: detection based on the light direction and detection based on image features such as color, texture, and edges. The image-feature-based methods rely on two-dimensional image features or texture analysis and can be further divided into spatial-feature-based approaches, namely pixel-based and region-based methods [10].
2.2. Moving Target Tracking
Moving target tracking requires accurate and effective tracking of different moving targets against various complex backgrounds, including backgrounds with global illumination changes and local disturbances such as branches and bushes swaying in the wind [11]. The mean-shift tracking algorithm converges to the pixel with the maximum local probability density, but the pixels in the image are uniformly distributed. To reflect the probability of the moving object under the target model, histogram back-projection is applied to the next frame. Because the motion of the tracked target is continuous and regular, the target search area should be expanded appropriately and the back-projection values of other areas set to 0, which helps speed up convergence and improve accuracy [12, 13]. The kernel density estimation algorithm is then used to determine the probability density function at a sample point; for samples $x_1, x_2, \dots, x_n$ with kernel $K$ and bandwidth $h$, it is expressed as
$$\hat{f}(x) = \frac{1}{n h^{d}} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right).$$
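As an illustration, the sketch below implements the back-projection and search-area restriction described above with OpenCV; the hue-only histogram, the 40-pixel search margin, and the function names are our assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np

def build_target_model(frame, window):
    """Hue histogram of the initial tracking window (x, y, w, h)."""
    x, y, w, h = window
    hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def mean_shift_step(frame, hist, window, margin=40):
    """One tracking step: back-project the model, keep only the expanded
    search area around the previous window (back-projection elsewhere set
    to 0), then run mean-shift."""
    x, y, w, h = window
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    mask = np.zeros_like(back_proj)
    mask[max(0, y - margin):y + h + margin, max(0, x - margin):x + w + margin] = 1
    back_proj *= mask
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, new_window = cv2.meanShift(back_proj, window, criteria)
    return new_window
```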
The core of the algorithm of matching-based moving target tracking is to find the best match with the feature template in the target area. The algorithm first selects all regions to be matched in the current frame through the scan window, then maps these region images to the template feature space, calculates the similarity or distance function with the target template, and finally selects the optimal region center as the target spatial position [14]. The general matching-based moving target tracking process is shown in Figure 1.

The template matching method based on regional characteristics of the target divides the target into different parts and improves the algorithm's stability against local geometric deformation by matching a template for each part. Such algorithms usually combine the similarity results of the parts through weighting, adjusting the contribution of each part template to the overall match. This kind of algorithm overcomes possible rigid deformation of the moving target to a certain extent, but it cannot overcome the influence of target occlusion or scale change on the tracking result. Frequency-domain template matching transforms the image into the frequency domain and performs matching by computing the similarity between the phase and amplitude of the target image's transform coefficients and those of the template [15, 16].
2.3. Image Processing
In the process of image transmission, it is always affected by external environmental conditions, and a certain amount of noise is generated. Image smoothing is a method to reduce or even remove the influence of noise. By defining a mathematical model and performing matrix operations, it can effectively denoise and make the image smoother. Common image smoothing methods mainly include neighborhood average method, median filter method, and Gaussian filter method. When the video sequence is collected and transmitted, interference from the scene environment and equipment conditions will produce noise. Noise is an irregular random signal, which will cause false interference to the understanding and analysis of image information. In the process of image sequence input, acquisition, and processing, the focus is to suppress the noise in the acquisition and transmission process. If the noise in the input is large, it will have a negative impact on the processing and output process. So in the image processing system, the first task is to suppress noise [17].
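The three smoothing methods named above can be compared with standard OpenCV calls; the kernel sizes below are illustrative, not values prescribed by the paper.

```python
import cv2

def denoise(gray, method="gaussian"):
    """Suppress acquisition/transmission noise before further processing."""
    if method == "mean":          # neighborhood average method
        return cv2.blur(gray, (5, 5))
    if method == "median":        # median filter, good for salt-and-pepper noise
        return cv2.medianBlur(gray, 5)
    return cv2.GaussianBlur(gray, (5, 5), sigmaX=1.5)  # Gaussian filter
```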
The purpose of video image enhancement is to emphasize some unclear or valuable features of the source image, so that the target area has a better visual effect, and the image processing makes it more suitable for specific application scenarios. However, due to the instability of the external environment and the defects of the shooting instrument itself, there are inevitably various problems in the quality of the collected images. For some special purposes, occasionally more demanding requirements are put forward on the image quality [18]. Therefore, for special needs, the source image must be technically processed and improved to meet the requirements of the machine vision recognition system for image processing. This technology is called image enhancement. Its effect can play a very important role in both human visual effects and computer visualization image analysis. Of course, the enhancement technology can only highlight the characteristic information of the image, but cannot increase the valuable information contained in the image data itself. Only some scenes with blurred edges, texture characteristics, or poor contrast in the image can be enhanced [19].
3. Martial Arts Movement Detection and Tracking Experiment
3.1. Experimental Platform
First, experimental tests were carried out on the hardware platform and the drivers of each module to verify that basic image acquisition, storage, and real-time display functions are normal. The hardware platform designed in this paper works correctly: images can be collected and stored through the OV7725 camera and displayed in real time at a resolution of 640×480 @ 60 Hz, which meets the real-time requirements of the target detection and tracking system [20]. The device resources occupied by basic functions such as image acquisition, storage, and display are shown in Table 1. The basic hardware drivers occupy few processor resources, leaving enough margin for implementing the moving target detection and tracking algorithms. In addition, the functional modules of the system are integrated onto a single circuit board, so no extra interfaces need to be designed, and the system achieves high integration and small size [21].
3.2. Martial Arts Sports Model
When the mean-shift tracking algorithm is used to track moving objects, a motion model is first established to describe the state changes of the particles. Generally speaking, the closer the model is to the actual motion, the better the detection effect. An accurate motion model usually requires a large amount of prior information, and if the target's motion law changes greatly, it is often difficult to establish an accurate model. In this paper, the random walk model is used to describe the moving state of the target [22, 23].
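A minimal sketch of random-walk state propagation for the particles (state = position and scale) is shown below; the state layout and noise magnitudes are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def propagate_particles(particles, sigma_pos=5.0, sigma_scale=0.02, rng=None):
    """Random-walk propagation of particle states (x, y, scale): each
    particle is perturbed by zero-mean Gaussian noise, so no prior
    knowledge of the target's motion law is required."""
    rng = np.random.default_rng() if rng is None else rng
    noise = np.column_stack([
        rng.normal(0.0, sigma_pos, len(particles)),    # x jitter
        rng.normal(0.0, sigma_pos, len(particles)),    # y jitter
        rng.normal(0.0, sigma_scale, len(particles)),  # scale jitter
    ])
    return particles + noise
```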
3.3. Moving Target Detection
(1) Local Feature Selection. Build an observation window and move it over each pixel in the image. If the gray values in the observation window differ before and after sliding, and moving in any direction causes a large change in gray value, there is a corner in the observation window. Good corner features should be fast to detect, rotation invariant, and insensitive to illumination changes [24].
(2) Corner Detection. Harris corner detection examines the gray-level changes in all directions, so it is rotation invariant and stable under some affine transformations.
(3) Edge Detection. First, a Gaussian smoothing filter is convolved with the original image; then the first-order partial-derivative finite differences are used to compute the gradient magnitude and direction, and nonmaximum suppression is applied. High and low thresholds are then used to detect and connect edges [25] (a code sketch of steps (2) and (3) follows this list).
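The sketch below illustrates steps (2) and (3) with OpenCV's Harris and Canny operators; the thresholds and kernel sizes are illustrative choices, not the paper's parameters.

```python
import cv2
import numpy as np

def detect_corners_and_edges(gray):
    """Harris corner response plus Canny edges. The image is first
    smoothed with a Gaussian filter; cv2.Canny then computes gradients,
    applies non-maximum suppression, and uses double thresholding."""
    smoothed = cv2.GaussianBlur(gray, (5, 5), 1.4)
    harris = cv2.cornerHarris(np.float32(smoothed), blockSize=2, ksize=3, k=0.04)
    corners = harris > 0.01 * harris.max()        # boolean corner mask
    edges = cv2.Canny(smoothed, threshold1=50, threshold2=150)
    return corners, edges
```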
3.4. Moving Target Tracking
Step 1. Input the video frame, perform the background subtraction method modeled by the improved Vibe algorithm, and obtain the binary image of the foreground target.
Step 2. Use the shadow detection algorithm in this article to remove the shadow of the foreground image.
Step 3. Through morphological processing, discard small targets and fill holes to make the detection results more accurate.
Step 4. Start from the top left of the image to find the target contour, and determine the minimum circumscribed rectangle of the moving target according to the target contour. If the area of the circumscribed rectangle is greater than the set value, the circumscribed rectangle is considered to be valid.
Step 5. Determine the center of mass position of the obtained circumscribed rectangle of the moving target, and use the circumscribed rectangle as the initial target tracking frame of the KCF algorithm to execute KCF.
Step 6. Perform feature sampling on the initial target, build a sample set, and train the target detector.
Step 7. Detect whether there is a target in the prediction area of the next frame; if so, update the training sample set with the new target. If the target is absent and the centroid of the tracking box does not move for several consecutive frames, the ViBe-based target detection algorithm is reinvoked to obtain the target region, and the procedure returns to Step 2 (a sketch of the overall pipeline follows these steps).
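The following sketch ties Steps 3–7 together under stated assumptions: `fg_mask` stands for the shadow-free ViBe foreground mask from Steps 1–2, the minimum area is an assumed value, and the KCF tracker is taken from opencv-contrib. It illustrates the pipeline rather than reproducing the paper's exact implementation.

```python
import cv2

MIN_AREA = 500          # minimum valid bounding-box area (Step 4, assumed value)

def init_tracker(frame, fg_mask):
    """Steps 3-5: clean the foreground mask, find contours, keep the largest
    bounding rectangle above MIN_AREA, and initialize KCF with it."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel)        # Step 3
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]
    boxes = [b for b in boxes if b[2] * b[3] > MIN_AREA]                # Step 4
    if not boxes:
        return None, None
    box = max(boxes, key=lambda b: b[2] * b[3])
    tracker = cv2.TrackerKCF_create()   # needs opencv-contrib; cv2.legacy on some versions
    tracker.init(frame, box)                                            # Step 5
    return tracker, box

def track_next(tracker, frame):
    """Steps 6-7: update the tracker on the next frame; if the update fails,
    the ViBe-based detector should be re-invoked (Step 7)."""
    ok, box = tracker.update(frame)
    return box if ok else None
```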
3.5. Target Classification Test
(1) Sample Acquisition and Preprocessing. The positive samples of martial arts actions are images with different distances, orientations, colors, and sizes, normalized to a uniform pixel size; the negative samples are background images without Wushu actions. Here, 2316 martial arts action samples and 7708 nonmartial-arts action samples are selected from the image data set.
(2) Sample Feature Extraction. The Haar feature is selected as the feature for action detection, and the feature value of a rectangular region is quickly calculated by the integral image method to describe the feature differences between adjacent image regions (see the sketch after this list).
(3) Training Results. The trained cascade classifier consists of 18 strong classifiers, each with a different number of weak classifiers. Each weak classifier is composed of a Haar feature, a threshold, and an indication of the inequality direction. The larger the number of the classifier, the more weak classifiers the strong classifier contains, and the more Haar features are used.
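A minimal sketch of the integral-image computation behind step (2): a two-rectangle Haar-like feature evaluated with four look-ups per rectangle sum. The feature geometry here is illustrative, not the trained classifier's.

```python
import cv2

def two_rect_haar_feature(gray, x, y, w, h):
    """Value of a simple two-rectangle Haar-like feature (left half minus
    right half) computed from the integral image, so each rectangle sum
    costs only four look-ups."""
    ii = cv2.integral(gray)          # (H+1, W+1) integral image
    def rect_sum(x0, y0, rw, rh):
        return int(ii[y0 + rh, x0 + rw] - ii[y0, x0 + rw]
                   - ii[y0 + rh, x0] + ii[y0, x0])
    half = w // 2
    return rect_sum(x, y, half, h) - rect_sum(x + half, y, half, h)
```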
4. Target Detection and Tracking Results
4.1. Moving Target Detection Effect
The key of a moving object detection method is to establish a model describing the target according to its own characteristics and then detect the moving target according to the model. In practical application scenarios, interference from illumination conditions, occlusion of moving objects, and complex backgrounds makes moving object detection difficult. Moving object detection mainly focuses on the recognition and analysis of moving objects in video image sequences, and in real scenarios there are multiple moving objects. The scene motion pattern captures the temporal and spatial distribution of motion behavior as a whole and yields high-level semantic knowledge of scene motion. Comprehensive analysis of the scene motion pattern makes it possible to study and grasp individual movement intentions and state changes at a higher level of understanding, so as to select and adjust the operating strategy of the intelligent video monitoring algorithm, alleviate the adaptation conflict between the monitoring scene and the visual application algorithm, and enhance the accuracy and robustness of the algorithms. The classical hybrid modeling method and the improved hybrid modeling method combined with k-means were used to detect the target over 500 frames during the video initialization stage; the real-time detection results are shown in Figure 2. As can be seen from the figure, the processing time of the classical hybrid modeling method is higher than that of the improved hybrid modeling method combined with k-means: the peak time of the classical algorithm is 43.16 s, while that of the improved algorithm is 38.75 s.

The characteristic of the hybrid model is that multiple Gaussian distributions describe the pixel-value distribution at each point, which better fits the multipeak distribution of pixel values in complex backgrounds. For example, if an object stays in the scene for a long time, the foreground target will eventually be absorbed into the background image sequence; this is where the advantage of the multi-Gaussian model shows. The previously established Gaussian distribution still exists in the mixture model, only with a correspondingly reduced weight. If the foreground object suddenly moves again, then because the Gaussian distribution describing the background still exists, the previous background can be segmented out quickly simply by increasing its weight.
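For reference, a mixture-of-Gaussians background subtractor of the kind described above is available in OpenCV; the parameter values and video path below are placeholders, not the paper's configuration.

```python
import cv2

# Mixture-of-Gaussians background model: each pixel is described by several
# Gaussian components, so objects that stop and later move again can be
# re-segmented quickly by re-weighting the existing background component.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

cap = cv2.VideoCapture("wushu.mp4")            # input video path is a placeholder
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)          # 255 = foreground, 127 = shadow
    fg_mask[fg_mask == 127] = 0                # drop detected shadows
    # fg_mask would be passed to morphology / contour extraction here
cap.release()
```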
The real-time comparison before and after the algorithm improvement is shown in Table 2. As the table shows, the average processing time per frame of the hybrid modeling method is 62.8 ms, with a detection rate of 15.92 frames/sec, whereas the hybrid modeling method improved with k-means takes 41.8 ms per frame on average, reaching 23.89 frames/sec. The experiments show that the improved hybrid modeling method combined with k-means effectively improves real-time detection in the initialization phase. The particles in the particle filter algorithm degenerate under traditional resampling: as the number of iterations increases, large numbers of high-weight particles are retained while low-weight particles are deleted, causing severe particle impoverishment. To address the particle degradation caused by traditional resampling, the improved algorithm modifies the resampling step. First, the normalized cumulative probability of the particle weights is calculated; then, for each draw, the particle with the smallest index whose normalized cumulative weight exceeds a random number is added to the new particle set. Results are shown in Table 2.
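A sketch of the resampling rule just described (inverse-CDF selection on the normalized cumulative weights); the function name and the uniform weight reset are our illustrative choices.

```python
import numpy as np

def resample(particles, weights, rng=None):
    """For each draw, pick the smallest index whose normalized cumulative
    weight exceeds a uniform random number, then reset weights to uniform."""
    rng = np.random.default_rng() if rng is None else rng
    cumulative = np.cumsum(weights / np.sum(weights))
    idx = np.searchsorted(cumulative, rng.uniform(size=len(particles)))
    new_particles = particles[idx]
    new_weights = np.full(len(particles), 1.0 / len(particles))
    return new_particles, new_weights
```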
4.2. Target Tracking Algorithm
The running time of a single detection and tracking pass is shown in Table 3. The traditional tracking algorithm consumes the least time, mainly because its target model is relatively simple and the amount of data computation is small; the multifeature-fusion tracking algorithm consumes the most time, mainly because it must continually compute the color, texture, and edge features of the image, which greatly increases the amount of computation. The algorithm in this paper only adds the background histogram statistics step to the traditional algorithm and extracts no other auxiliary features; building the target model in this way increases the time consumption, but not by much. As the table shows, for single-frame target tracking the proposed tracking algorithm consumes less than 20 ms, which meets the real-time requirements of a typical target tracking system.
In the field of computer vision, moving object detection is a very important technology. In essence, a detection method is used to separate the moving object from the background, and unique features that can represent the moving object are then extracted; the object can then be tracked by searching and matching in subsequent video frames. In other words, moving target detection is the basis of moving target tracking. If the detection result is poor, subsequent tracking will also carry errors, which may lead to serious consequences such as losing the tracked target. Therefore, the most suitable detection method must be selected before tracking, so that the impact on subsequent tracking is minimized. Results are shown in Table 3.
First, computing texture feature statistics for an image is complex and time consuming. For example, when the LBP feature is computed, the values of the 8 neighboring points around each pixel must be examined, so compared with the color feature, the computation required to build the regional histogram increases roughly eightfold. Second, edge features carry relatively little information, and using the edge information of the image alone to detect the target is not ideal. Finally, the correlation between RGB color channels is strong, whereas the correlation in HSV color space is relatively weak, so the RGB color vector is usually converted to an HSV color vector when extracting features; the conversion is simple and does not noticeably increase the processing time. The tracking process links the preceding and following frames, so detection errors in a few frames will not affect the overall tracking effect of the system.
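The following sketch shows why the basic 8-neighbour LBP costs roughly eight comparisons per pixel, in contrast to a single colour-space conversion call; it is an illustration, not the paper's feature extractor.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP: every pixel needs eight comparisons, which is
    why the texture histogram costs roughly 8x the colour histogram."""
    g = gray.astype(np.int16)
    center = g[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (neighbour >= center).astype(np.uint8) << bit
    return code

# Colour feature, by contrast, is a single cheap call:
# hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
```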
Morphological processing and connected-domain labeling were performed to obtain the extraction results, and the combined extraction results for shadows and bright spots are shown in Figure 3. As can be seen from the figure, in the processed image the pixel values of the bright spots exceed 250, which makes them easy to extract. For shadow extraction, when the moving target is close in the scene, its shadow spreading into the distance causes the pixel values between the two targets to decrease as well, and histogram equalization cannot significantly widen this part of the gap. The grayscale piecewise linear stretching method can change the grayscale distribution of a specific pixel interval and highlight image details, and it can reduce the missed-detection rate caused by background extraction and image registration errors. For target tracking, especially of maneuvering targets, missed detections increase the probability of false locking and the re-acquisition time, and their impact on tracking accuracy is greater than that of a small increase in false alarms, as shown in Figure 3.
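A minimal sketch of piecewise linear grayscale stretching as described above; the breakpoints are illustrative, not the values used in the experiments.

```python
import numpy as np

def piecewise_linear_stretch(gray, r1=70, s1=30, r2=160, s2=220):
    """Stretch the grey levels between (r1, r2) to (s1, s2) so that the gap
    between shadow and target pixels is widened; breakpoints are illustrative."""
    g = gray.astype(np.float32)
    out = np.empty_like(g)
    low = g < r1
    mid = (g >= r1) & (g <= r2)
    high = g > r2
    out[low] = g[low] * (s1 / r1)
    out[mid] = s1 + (g[mid] - r1) * ((s2 - s1) / (r2 - r1))
    out[high] = s2 + (g[high] - r2) * ((255 - s2) / (255 - r2))
    return out.clip(0, 255).astype(np.uint8)
```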

4.3. Algorithm Performance
The accuracy comparison of the algorithms is shown in Figure 4. As can be seen from the figure, the accuracy of the algorithm in this paper is the highest, while that of the frame difference method is the lowest. Because the frame difference method is more sensitive to disturbances in the background, the detected target contains interference, which increases the FP value and reduces the accuracy. Although the target detected by the Gaussian mixture method also contains interference, its detected targets are more complete than those of the frame difference method, which makes its TP value larger; therefore, the accuracy of the Gaussian mixture method is higher than that of the frame difference method. The target completeness detected by the OTSU algorithm is higher than that of the Gaussian mixture method, so with a similar number of interference points the accuracy of the OTSU algorithm is higher than that of the Gaussian mixture method. The target completeness detected by the proposed algorithm is almost the same as that of the OTSU algorithm, but its overall processing effect is better than that of the other three algorithms, so its TP value is the largest, its FP value is the smallest, and its accuracy is the highest.

The particle filter algorithm includes two steps: prediction and update. First, the motion state of the target in the previous frame is analyzed as prior information, and the result is used to predict the current frame; the motion state of the target in the current frame is then obtained from the prediction and observation as posterior information. Iterating this process predicts and tracks the target's motion state. However, as tracking time increases, the particle filter inevitably ends up with most particles having very low weights and a few having very high weights, which is called particle degradation. In this case, a large amount of time may be spent on particles with low weights whose contribution to the posterior is negligible, and the reliability of the final result becomes very low. Therefore, a resampling method is needed to remove the particles with small weights and retain and duplicate the particles with large weights in their place. The results are shown in Figure 4.
The whole moving target detection process is as follows: the image collected by the camera is first converted to grayscale, the moving target is detected by the three-frame difference method, the gray image is filtered and denoised and then closed morphologically, and finally the moving object is framed to obtain the moving target area in the image. The detection results above show that moving targets are detected well; however, the detected moving area is larger than the target, because after the dilation operation part of the background is covered by the foreground, and shadows caused by occlusion of light in the image are mistaken for the moving target. The comprehensive performance comparison of the algorithms is shown in Figure 5. The precision and recall of the frame difference method are the lowest, so its comprehensive index is the lowest among the four algorithms. The comprehensive performance index of the Gaussian mixture method is higher than that of the frame difference method in all three groups of experiments, but it is still lower than those of the OTSU algorithm and the algorithm in this paper; in particular, in experiment 1 the completeness of the detected target is poor and the recall is low, and its comprehensive index is 16.4% lower than that of the OTSU algorithm and 20.8% lower than that of this algorithm. The comprehensive index of the proposed algorithm is higher than that of the OTSU algorithm by 4.4% in experiment 1, 1.6% in experiment 2, and 2.2% in experiment 3, so the comprehensive performance of the proposed algorithm is the best.
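For illustration, a compact version of the grayscale three-frame-difference pipeline described above (difference, threshold, filtering, morphological closing, bounding boxes); the thresholds and kernel sizes are assumed values, and the inputs are three consecutive grayscale frames.

```python
import cv2

def three_frame_difference(prev, curr, nxt, thresh=25):
    """Grey-scale three-frame difference followed by filtering and a
    morphological close, returning bounding boxes of moving regions."""
    d1 = cv2.absdiff(curr, prev)
    d2 = cv2.absdiff(nxt, curr)
    _, b1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
    motion = cv2.bitwise_and(b1, b2)
    motion = cv2.medianBlur(motion, 5)                       # filtering / denoising
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
    motion = cv2.morphologyEx(motion, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]           # moving target regions
```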

5. Conclusions
This paper mainly studies a martial arts moving target detection and tracking algorithm in an intelligent system. Moving target detection and tracking are two closely linked processes: detection is the basis of tracking, and tracking obtains the target's motion information, such as position, speed, and direction, which supports subsequent high-level applications such as movement analysis, behavior understanding, and semantic analysis. The image is processed according to grayscale and edge features, and the moving target and background are segmented with an appropriate threshold to obtain the moving target. When tracking the moving target, the detected moving target is taken as the initial template, key features are extracted from the template, and an appropriate tracking algorithm is then selected to find the moving target.
The key of the moving target detection method is to establish a model describing the target according to the characteristics of the target itself and then detect the moving target according to the model. The ViBe algorithm is a background modeling method proposed in recent years; it is a pixel-level moving target detection method that uses random selection of samples to build the background model quickly and effectively. Under dynamic background interference, however, the standard ViBe model does not separate foreground and background very well, so the fixed radius threshold is replaced by an adaptive threshold derived from the variance of the current pixel's sample set. Experimental comparison and quantitative analysis show that, under a dynamic background, the recall rate RE of the algorithm in this paper increased by 0.2 and the comprehensive index increased by 0.09.
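One possible reading of this adaptive threshold is sketched below; the linear form, scaling factor, and bounds are our assumptions and not the paper's exact formulation.

```python
import numpy as np

def adaptive_radius(samples, r_min=18.0, r_max=40.0, alpha=0.5):
    """Replace ViBe's fixed radius with one driven by the spread of the
    pixel's sample set: a larger sample variance (dynamic background)
    yields a larger matching radius. alpha, r_min, r_max are assumed."""
    spread = float(np.std(samples.astype(np.float32)))
    return float(np.clip(alpha * spread + r_min, r_min, r_max))
```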
This paper analyzes the importance of background modeling to the background subtraction method and combines the improved background subtraction method with the frame difference method for detection experiments. A large number of experiments prove that the method can extract the target accurately, the influence of background interference on the moving target is suppressed, and accurate detection of the moving target is realized. For severe occlusion of the target, this paper uses the mean-shift tracking algorithm to track the occluded target on the basis of a filter-based estimate of the target state. Experiments show that the algorithm can track the moving target in real time and keep tracking it accurately even when it is completely occluded. Of course, this study of the tracking and detection of martial arts (Wushu) movements still has many shortcomings; it is hoped that future studies will be more complete and rigorous.
Data Availability
No data were used to support this study.
Conflicts of Interest
The author states that this article has no conflict of interest.
Acknowledgments
This study was funded by 2020 Guangxi Philosophy and Social Science planning research topic: Guangxi ethnic minority sports culture inheritance and national fitness integration path research under the Healthy China strategy (20 FTY011).