Abstract
The existing edge distortion correction methods for moving images ignore the consistency of human subjective vision during restoration. Therefore, this study proposes an edge distortion correction method for moving images based on a discrete mathematical model. The moving target region is segmented via image binarization; the motion-blur degradation model is constructed with the discrete mathematical model; structural similarity is used to take the global structure information of the image into account; the image gradient is transformed from the rectangular coordinate system to the polar coordinate system; and an image deconvolution algorithm based on the edge distribution of natural images enhances the image edge features, thereby realizing edge distortion correction of the moving image. The experimental results show that the convergence of the experimental scatter diagrams is obviously improved, the fitted line is almost straight, and jumps in the gray level of noise points are recognized well, which meets the requirement of consistency with the subjective vision of human eyes.
1. Introduction
The key to restoring a motion-blurred image is to know the process of image degradation, that is, the degradation model of the image, and then apply the inverse process to obtain the original clear image [1]. There are many reasons for the degradation of image quality. If there is relative motion between the camera and the object during shooting, the resulting blur is called motion blur [2–5]. Because images are often accompanied by noise, its presence not only reduces image quality but also hampers the acquisition of the image degradation model and hence the restoration of the blurred image [6–8].
The restoration of motion-blurred images is a difficult problem in image processing: the causes of image blur are complex and the damage to the image is large, so restoration is correspondingly difficult. Typical cases include camera shake when taking pictures from an aircraft or spacecraft and objects moving relative to the camera [9–12]; the relative motion between the camera lens and the object at the moment of exposure causes this kind of blur or even distortion. To restore such images, planar image processing software has usually been applied, but it cannot completely and effectively remove the blur distortion, and even after partial removal of the distortion the image quality is noticeably reduced, which is not satisfactory. The restoration of motion-blurred images is therefore an important topic in the field of image restoration [13–16].
Traditional blurred-image restoration methods fall mainly into three categories: blind restoration of the blurred image based on the frequency-domain characteristics of the digital image; restoration of defocus-blurred images that preserves the original edge characteristics; and restoration of motion-blurred images in the direction obtained by coordinate transformation [17, 18]. Each blurred image has its own characteristics, such as degradation of a specific region or front-to-back distortion of the image. Kim et al. [19] and Stark [20] proposed local histogram equalization. The basic idea is to define a sub-block of the image, carry out histogram equalization on it, and update the gray level of the current sub-block with the equalized values; the sub-block is then translated from left to right and from top to bottom, the operations are repeated at every position, and the procedure terminates when all sub-blocks of the whole image have been visited. This method expresses the image features well because the neighborhood of every pixel is processed, but the data redundancy and the large number of floating-point operations involved in processing the sub-blocks give such algorithms high time complexity and a huge computational load. In theory, the computational cost of local histogram equalization can be reduced by lowering the overlap between sub-blocks, but completely nonoverlapping sub-blocks often produce large differences between the equalized histograms of adjacent sub-blocks, resulting in a severe blocking effect, so they are rarely used. In view of these problems, Kim et al. [21] proposed a local histogram equalization algorithm with partially overlapping sub-blocks. By trading off algorithm efficiency against image quality, its time complexity is clearly lower than that of the fully overlapping algorithm, and its blocking effect is greatly weakened compared with the nonoverlapping algorithm; the residual blocking effect can moreover be suppressed by a deblocking filter. However, a motion-blurred image is dominated by low-frequency components with few high-frequency components, so this method is not applicable here. Park et al. [22] designed a fused heterogeneous adversarial network based on two variants of the generative adversarial network (GAN), CycleGAN and cGAN: the network uses CycleGAN to ensure image clarity on the one hand and cGAN to preserve the texture details of the image on the other. In addition, a fusion loss function is proposed to minimize the artifacts generated by the GAN and to recover fine detail and color components as far as possible. By making full use of the adversarial (game-theoretic) character of the GAN, the network achieves a good deblurring effect; however, because it does not consider the different degrees of image degradation at different depths, it easily causes information loss and color distortion.
The continuous (inter-frame) difference method and the background difference method are the most widely used change detection algorithms [23–26]; the optical flow method is also commonly used [16, 27–31]. Because images inevitably contain different kinds of noise (e.g., environmental noise such as illumination changes, disturbances of the acquisition equipment, and noise introduced by image enhancement), the various processing algorithms differ in real-time performance, accuracy, interference resistance, and other indicators. Building on the above research, this study corrects the edge distortion of moving images using a discrete mathematical model, proposes an adaptive threshold method, and adds a segmentation algorithm to the image binarization step. Through the segmentation algorithm, a suitable threshold can be selected adaptively, and the moving target area can be separated more accurately from the difference image.
2. Edge Distortion Correction Method of Moving Image Based on Discrete Mathematical Model
2.1. Moving Target Tracking
Target detection in a moving image sequence mainly detects the difference between two successive images and extracts the regions that differ. In a moving image sequence, the changes between two adjacent frames reflect the subtle motion of the target; by detecting the changed area, the moving target can be obtained and its motion state analyzed. The background of a moving image sequence is the scene, which remains unchanged, or changes little, across the image frames; the background extraction operation extracts this invariant background template, the background being assumed invariant over a period of time.
The process of background extraction is as follows: over a period of time, $N$ consecutive image frames $f_1, f_2, \ldots, f_N$ are obtained by sampling, and for each pixel $(x, y)$ in the image, the background pixel is extracted as

$$B(x, y) = \frac{1}{N}\sum_{k=1}^{N} f_k(x, y).$$
From the formula, it can be seen that the larger $N$ is, the more the pixel values are averaged and the more accurate the background extraction becomes, but the computational cost also increases. Extracting the moving objects from each frame of the moving image sequence is an essential step for tracking them [32–35].
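As a minimal sketch of this step (Python with NumPy; the function names and the assumption of equally sized grayscale frames are ours, not part of the original method), the averaged background and the difference image of the current frame could be computed as follows:

```python
import numpy as np

def estimate_background(frames):
    """Estimate a static background as the per-pixel mean of N sampled frames.

    frames: sequence of grayscale frames of identical shape. A larger N gives
    a smoother, more accurate background estimate at a higher computational cost.
    """
    stack = np.stack(frames).astype(np.float64)   # shape: (N, H, W)
    return stack.mean(axis=0)                     # per-pixel average

def difference_image(frame, background):
    """Absolute difference between the current frame and the background template."""
    return np.abs(frame.astype(np.float64) - background)
```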
When extracting a moving object, there may be multiple blobs in the binary image. Generally, the blob containing the moving object is the largest, with an area greater than 100 pixels, so the blob with the largest area is sought; if this condition is met, the target is determined. Because the moving target and the static background differ in the gray-scale domain, the current frame is subtracted from the background image, and a binarization operation then yields all the moving point clusters of the current frame.
Image binarization is a key step in target detection. There are two common binarization methods: fixed-threshold binarization and adaptive-threshold binarization. In fixed-threshold binarization, a preset threshold $T$ is chosen from experience: when the difference value of a pixel is greater than $T$, the pixel is determined to be a moving-target pixel; otherwise, it is a background pixel. The judgment is as follows:

$$R(x, y) = \begin{cases} 1, & |D(x, y)| > T, \\ 0, & \text{otherwise}, \end{cases}$$

where $D(x, y)$ denotes the difference image.
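A hedged illustration of this judgment (Python/NumPy; the names are illustrative, and the commented-out Otsu call sketches one possible adaptive alternative rather than the paper's segmentation algorithm):

```python
import numpy as np

def binarize_fixed(diff, T):
    """Fixed-threshold binarization: pixels whose difference exceeds the preset
    threshold T are marked as moving-target pixels (1), the rest as background (0)."""
    return (diff > T).astype(np.uint8)

# One adaptive alternative (Otsu's method via OpenCV) that avoids hand-tuning T:
# import cv2
# _, mask = cv2.threshold(diff.astype(np.uint8), 0, 1,
#                         cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```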
The limitation lies in the selection of $T$; a good selection strategy yields accurate target extraction. The target area of the binary image contains many small holes and isolated points, and some contours are broken, so it cannot be used directly for target tracking; morphological filtering is usually needed to process the binary image. Its basic idea is as follows: an erosion template eliminates isolated noise points, and a dilation template eliminates cavities and broken lines. Two basic operations are commonly used: dilation and erosion.
Dilation fills grooves in the image, gradually expanding a region into a rounded rectangle, whereas erosion makes the grooves wider and deeper. Two morphological filtering operations, opening and closing, are derived from dilation and erosion, and their effects differ clearly: opening eliminates isolated points and small targets, while closing adds pixels so that regions such as cavities become connected.
Dilation makes the target "fuller" (it increases the number of target pixels and fills holes and cracks); erosion eliminates isolated small points and redundant thin lines in the target. Morphological filtering uses a structuring element $S$; the square is one of the three commonly used structuring elements. With $\hat{S}$ denoting the reflection (inversion) of the structuring element and $\hat{S}_x$ its translation to position $x$, the dilation of a binary set $X$ is defined as

$$X \oplus S = \{\, x \mid \hat{S}_x \cap X \neq \varnothing \,\}.$$
The erosion is defined as

$$X \ominus S = \{\, x \mid S_x \subseteq X \,\}.$$
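A possible realization of the morphological filtering step, using OpenCV's built-in operators with a square structuring element in place of hand-written templates (a sketch under that assumption, not the authors' implementation):

```python
import cv2

# 3x3 square structuring element, one of the commonly used shapes.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

def clean_mask(mask, iterations=1):
    """Morphological filtering of a binary mask (uint8, values 0/1).

    Opening (erosion followed by dilation) removes isolated noise points and
    tiny targets; closing (dilation followed by erosion) fills holes and
    reconnects broken lines inside the target region.
    """
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=iterations)
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel, iterations=iterations)
    return closed
```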
After mathematical morphological filtering, the image still cannot be used for target tracking, because large holes remain, which greatly affect target determination; connectivity analysis must therefore be carried out on the image produced by the opening and closing operations. For a set of pixels, if each pixel in the set is connected with the others, the set is a connected component. Connectivity analysis finds all the connected components in the morphologically processed image and marks all pixels of each component with the same label, different components receiving different labels. Connectivity analysis commonly uses 4-connectivity or 8-connectivity, and either a sequential or a recursive algorithm is used to identify each target position; since the recursive algorithm is relatively inefficient, the sequential algorithm is used more often.
Based on the 8-connected sequential (two-pass) algorithm, the specific steps are as follows (a code sketch follows the list):
(1) Scan the image from left to right and from top to bottom; for each pixel whose gray value is 1, proceed as follows.
(2) If exactly one of the four previously visited neighbors (left, upper-left, upper, and upper-right) carries a label, or if several of them carry the same label, assign that label to the current pixel.
(3) If these neighbors carry different labels, select one of them as the label of the current pixel and record the labels involved as equivalent in the equivalence table; if none of the neighbors carries a label, generate a new label, assign it to the pixel, and record it in the equivalence table.
(4) Repeat until all points are labeled; then rescan the whole image and replace each label with the lowest label of its equivalence set.
(5) Compute the proportion of pixels carrying each label relative to the whole image and compare it with the preset target-pixel threshold: if the proportion exceeds the threshold, the component is the target sought; otherwise, it is not.
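The sketch below replaces the hand-rolled two-pass labeling with OpenCV's 8-connected labeling and keeps only the largest component above a pixel-count threshold; the function name and the use of an absolute area threshold instead of a proportion are illustrative simplifications:

```python
import numpy as np
import cv2

def largest_target(mask, min_area=100):
    """Return a mask containing only the largest 8-connected component,
    or None when no component reaches the minimum area."""
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if num <= 1:                                  # label 0 is the background
        return None
    areas = stats[1:, cv2.CC_STAT_AREA]           # areas of the foreground components
    best = 1 + int(np.argmax(areas))
    if areas[best - 1] < min_area:
        return None
    return (labels == best).astype(np.uint8)
```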
2.2. Acquisition and Restoration of Motion Fuzzy Parameters Based on Discrete Mathematical Model
Because the target is in motion, the image is blurred. For a motion-blurred image, the blur can be regarded as the motion trail of the object; once the object is distinguished from its background, this trail is clearly visible. Two model parameters are therefore very important: the blur direction and the blur extent.
If the shutter opened and closed instantaneously, the optical imaging process would not be disturbed by the motion and no motion-blur degradation would appear; with a finite exposure time, however, motion-blur degradation occurs. If $T$ denotes the exposure time, the motion-blur degradation model is

$$g(x, y) = \int_{0}^{T} f\bigl(x - x_{0}(t),\, y - y_{0}(t)\bigr)\,\mathrm{d}t + n(x, y).$$
In the above formula, $g(x, y)$ is the blurred degraded image, $f(x, y)$ is the original image, $n(x, y)$ is the noise, $t$ is time, and $x_{0}(t)$ and $y_{0}(t)$ are the distances moved by a pixel within time $t$. To speed up the calculation, a Fourier transform is applied to the formula in the noise-free case.
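To make the degradation model concrete, the sketch below synthesizes a linear-motion point spread function and applies the model $g = f * h + n$ (Python with NumPy/SciPy; the PSF construction is our simplification for uniform linear motion, not the paper's code):

```python
import numpy as np
from scipy.signal import fftconvolve

def motion_psf(length, angle_deg):
    """Point spread function of uniform linear motion with the given length (pixels)
    and direction (degrees)."""
    size = int(length) | 1                             # odd support that covers the line
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    steps = max(4 * int(length), 2)
    for t in np.linspace(-length / 2.0, length / 2.0, steps):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c - t * np.sin(theta)))          # image rows grow downwards
        if 0 <= y < size and 0 <= x < size:
            psf[y, x] = 1.0
    return psf / psf.sum()

def degrade(image, psf, noise_sigma=0.0):
    """Blur the image with the PSF and add Gaussian noise: g = f * h + n."""
    blurred = fftconvolve(image, psf, mode='same')
    if noise_sigma > 0:
        blurred = blurred + np.random.normal(0.0, noise_sigma, image.shape)
    return blurred
```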
When the length and width of the processed image are not equal, it is not accurate to simply assume that the blur angle is perpendicular to the slant angle of the spectral stripes. Improper cropping, moreover, destroys the original image information: for a captured image of a high-speed vehicle, where the background is still and only the vehicle moves, the original pixel information is preserved better without cropping, whereas forcibly trimming the image to a square adversely affects the detection of the blur parameters. For an image of any size, once the angle of the spectrum fringes of the degraded image is detected, the motion blur angle is determined.
For motion-blurred images, edge information in different gradient directions must be collected. First, the image gradient is transformed from the rectangular coordinate system to the polar coordinate system; the gradient direction is then divided into four regions to avoid local minimization of the threshold, and the edges with the largest intensity are selected in each region. An appropriate threshold is chosen according to the size of the image and of the blur kernel and is determined by the following formula:
In this formula, the quantities involved are the total number of pixels in the image, the size of the blur kernel, a minimum threshold value that guarantees the minimum amount of edge information needed in the blur-kernel estimation process, and a weight.
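A rough sketch of this edge-selection step (OpenCV/NumPy): gradients are converted to polar form, the direction range is split into four regions, and the strongest edges of each region are kept. The way the per-region count is derived here from the kernel size, a minimum value, and a weight is only a plausible stand-in for the paper's threshold formula, and all names are illustrative:

```python
import numpy as np
import cv2

def strong_edges_by_direction(image, kernel_size, min_count=400, weight=2.0):
    """Keep, for each of four gradient-direction regions, the strongest edges."""
    gx = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=3)
    mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)   # polar coordinates
    n_keep = int(max(min_count, weight * kernel_size ** 2))   # assumed selection rule
    mask = np.zeros(mag.shape, dtype=bool)
    for k in range(4):                                        # four direction regions
        region = (ang >= 90 * k) & (ang < 90 * (k + 1))
        vals = mag[region]
        if vals.size == 0:
            continue
        thresh = np.sort(vals)[-min(n_keep, vals.size)]       # n_keep-th largest value
        mask |= region & (mag >= thresh)
    return mask
```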
Mathematically, an image represents the spatial distribution and color intensity of its pixels in either continuous or discrete form; in discrete space, a matrix is usually used instead of a continuous function. If the ideal image is the matrix $F$, the degraded image is the matrix $G$, and the point spread function is the matrix $H$, then the image degradation model can be defined as

$$G = H \otimes F + N,$$

where $\otimes$ denotes convolution and $N$ is the noise.
The point spread function is defined over the whole scene space, which indicates that every pixel in the degraded image is affected by the other points of the scene, and any change at a single point introduces error.
For a two-dimensional function, the Radon transform computes its projection along rays at a given angle, that is, its line integral in that direction; for an image, the Radon transform therefore reflects the projection properties of the image in different directions. First, the obtained spectrum image is preprocessed by binarization. When the projection axis is perpendicular to the direction of the fringes, the Radon transform reaches the largest of the maxima taken over all angles, so the tilt angle of the dark fringes can be determined by locating this global maximum. The width of the profile obtained by the Radon transform at this angle corresponds to the width between the central dark stripe and its neighbor in the spectrum image, and the adjacent-stripe spacing can be read off at the corresponding position.
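The sketch below illustrates the spectrum-plus-Radon idea with scikit-image's radon function; the mean-based binarization of the log spectrum and the handling of the resulting angle are assumptions of ours:

```python
import numpy as np
from skimage.transform import radon

def stripe_angle_from_spectrum(blurred):
    """Estimate the tilt angle of the dark stripes in the magnitude spectrum of a
    motion-blurred image via the Radon transform of the binarized log spectrum."""
    spec = np.fft.fftshift(np.fft.fft2(blurred))
    log_mag = np.log1p(np.abs(spec))
    binary = (log_mag > log_mag.mean()).astype(float)     # crude binarization
    angles = np.arange(0.0, 180.0, 1.0)
    sinogram = radon(binary, theta=angles, circle=False)  # projections per angle
    # The global maximum over all angles marks the projection aligned with the stripes.
    # For a square image the blur direction is perpendicular to the stripes; for
    # non-square images the aspect ratio must be accounted for (see text).
    return angles[np.argmax(sinogram.max(axis=0))]
```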
In the mathematical model, the edge statistics are usually treated as a regularization term, so that the deconvolution problem is transformed into a well-posed problem. The image deconvolution algorithm based on the edge distribution of natural images can be expressed by the following mathematical model:
In this model, the data-matching term requires that the degradation process conform to the description given by the blur kernel $K$; the prior term encodes image prior knowledge, namely a fit to the distribution of the image gradient.
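As a simplified, hedged stand-in for this model, the sketch below minimizes a data-matching term plus an L2 penalty on the image gradient, which has a closed-form frequency-domain solution; the heavy-tailed edge distribution of natural images used in the paper would require an iterative solver instead:

```python
import numpy as np

def deconvolve_l2_gradient(blurred, psf, lam=0.01):
    """Minimize ||k * l - b||^2 + lam * ||grad l||^2 in the frequency domain
    (a Wiener-like filter with difference operators as the regularizer)."""
    h, w = blurred.shape
    K = np.fft.fft2(psf, s=(h, w))
    B = np.fft.fft2(blurred)
    Dx = np.fft.fft2(np.array([[1.0, -1.0]]), s=(h, w))    # horizontal difference
    Dy = np.fft.fft2(np.array([[1.0], [-1.0]]), s=(h, w))  # vertical difference
    denom = np.abs(K) ** 2 + lam * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    L = np.conj(K) * B / denom
    return np.real(np.fft.ifft2(L))
```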
When processing the image, detecting the distance between adjacent dark fringes introduces an absolute error of at most one pixel. This error can be reduced by measuring the total distance across several dark fringes, so that the one-pixel error is spread over the whole measured length, and then taking the average fringe spacing [36]. For images of lower resolution, when slight noise is present in the spectrum image, only the central dark stripe remains clearly visible while the other dark stripes are blurred; measuring the spacing over several dark stripes and averaging therefore also improves the robustness to noise. The mathematical formula is as follows:
In the formula, the first-order and second-order directional derivatives of the image appear; they are obtained with a first-order edge-detection operator and the Laplace operator, respectively. A sign term returns the positive or negative sign of the second derivative, which produces a sharp jump at the inflection points of the input signal and sharpens it, thus enhancing the edge characteristics of the image.
Based on the above theoretical analysis, a method can be designed that detects the angle and length of the motion blur, restores the degraded image automatically, and thereby corrects the edge distortion of the moving image.
3. Experimental Verification and Analysis
The development tools for this experiment are MATLAB 2015 and Visual Studio 2013 with OpenCV 2.4.9. The processor is an Intel(R) Core(TM) i7-6700 CPU with a clock frequency of 3.4 GHz and 8 GB of memory. The experimental data come from the LIVE Image Quality Assessment Database (Release 2) provided by the Laboratory for Image and Video Engineering at the University of Texas at Austin. The database contains 779 distorted images; the four distortion types used here are Gaussian blur, white noise, JPEG compression, and fast fading, where fast fading denotes images distorted by errors during transmission of a JPEG 2000 code stream over a fast-fading channel. The LIVE database also provides the subjective difference mean opinion score (DMOS): the larger the DMOS value, the worse the image quality, and the smaller the DMOS value, the better. Because the DMOS is the difference between the subjective score (MOS) and the full score (100), its value range is [0, 100].
Four parameters commonly used to evaluate objective image quality assessment methods serve as evaluation indexes in the experiment: the correlation coefficient after nonlinear regression, the mean absolute error, the root mean square error, and the rank correlation coefficient. The root mean square error is a numerical index of measurement accuracy; its value is the square root of the mean of the squared deviations between the predicted value and the true value:

$$\mathrm{RMSE} = \sqrt{\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\bigl[f(x, y) - g(x, y)\bigr]^{2}},$$

where $f(x, y)$ represents the real image and $g(x, y)$ represents the corrected image of size $M \times N$.
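A small helper for the four indexes (Python with SciPy; it assumes the nonlinear regression has already been applied to the objective scores, and the names are ours) might look like this:

```python
import numpy as np
from scipy import stats

def quality_indexes(predicted, reference):
    """Return PLCC, MAE, RMSE, and SROCC between predicted and reference values
    (e.g., regressed objective scores versus DMOS, or corrected versus real pixels)."""
    predicted = np.asarray(predicted, dtype=float).ravel()
    reference = np.asarray(reference, dtype=float).ravel()
    plcc = stats.pearsonr(predicted, reference)[0]          # correlation coefficient
    mae = np.mean(np.abs(predicted - reference))            # mean absolute error
    rmse = np.sqrt(np.mean((predicted - reference) ** 2))   # root mean square error
    srocc = stats.spearmanr(predicted, reference)[0]        # rank correlation coefficient
    return plcc, mae, rmse, srocc
```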
Using the correction method designed above, the four types of distorted images are tested separately, and the results are compared with those of existing methods. The existing methods serve as the control group and the designed method as the experimental group; the differences between the two are analyzed through the subjective score differences, and the experimental conclusions are drawn.
3.1. Experimental Results of Gaussian Blur Distortion Image
Gaussian-blurred images mainly suffer from the loss of edge information, which degrades image quality. The proposed method extracts the edge information as a separate region and focuses on it. The image correction effect is shown in Figure 1.
As can be seen from Figure 1, the method in this study can well correct the distorted part of this kind of image, which is highly consistent with the subjective feeling of human eyes. The specific experimental results are shown in Table 1 and Figure 2.

It can be seen that the performance indexes of the experimental group are significantly better than those of the control group: the correlation coefficient after nonlinear regression increases by 0.1038, the mean absolute error decreases by 3.0133, and the root mean square error decreases by 4.3037. The convergence of the scatter diagram is very good, the fitted line is almost straight, and the correction of severely distorted images is more accurate.
3.2. Experimental Results of White Noise Distortion Image
The white noise distortion image is obtained by adding Gaussian white noise to the undistorted image, and the noise points in the image are randomly scattered on the whole image, independent of the content and structure of the image. The correction effect is shown in Figure 3.

In the experimental group, the just-noticeable-difference (JND) threshold is used to judge whether the distortion is visible, and jumps in the gray level of noise points are recognized well. Therefore, the correction results of the experimental group agree well with the subjective evaluation (Table 2 and Figure 4).

From the comparison of the above results, we can see that each index of the experimental group is better than that of the control group, and for the images with higher fidelity, the scatter diagram is more convergent.
3.3. Experimental Results of Fast Fading Distorted Image
Fast-fading distortion is a full-frequency-band distortion. The correction effect is shown in Figure 5.

In the frequency domain, the edge contour information corresponds to the medium- and high-frequency components of the image. The experimental group corrects these medium- and high-frequency components, and the result is better than that of the control group (Table 3).
It can be seen from the experimental results that every index of the experimental group is better than that of the control group, and for the heavily distorted images in the scatter diagram, the consistency between the results of the experimental group and the subjective evaluation is significantly better than that of the control group (Figure 6).

3.4. Experimental Results of JPEG Distorted Image
JPEG is an image coding method based on the discrete cosine transform (DCT). After quantization and coding, the image exhibits blocking artifacts; the higher the compression rate, the more obvious they are, and to the human eye they look like false edges. Blocking artifacts may introduce obvious distorted edges in non-edge regions, whereas the experimental group mainly considered the loss of real edge information during the correction process (Table 4).
The experimental results verify that, under JPEG distortion, the results of the experimental group are only slightly better than those of the control group, and the scatter diagrams are more similar (Figure 7).

From the above results, it can be seen that the proposed method achieves a better correction effect for different kinds of blurred moving images. The main reason is that the edge distortion correction method is based on the discrete mathematical model and emphasizes the influence of human visual characteristics on top of structural similarity. It not only considers the global structure information of the image through structural similarity but also extracts the edge region of the image according to the physiological and psychological characteristics of human vision and strengthens the obvious edge distortions perceivable by human eyes. It thereby makes up for the shortcomings of existing methods in correcting severe distortion and cross distortion while also reducing the complexity of the correction.
4. Discussion
In this study, a method of image edge distortion correction is studied by constructing a discrete mathematical model of motion blur. Experiments show that this method can effectively correct the distorted part of the image, the mean absolute error and the root mean square error are both small, and it can cope with various types of distorted images and maintain good application performance.
In practice, motion blur is often caused by nonlinear and nonuniform motion, but many such cases can be approximated as uniform linear motion or synthesized from uniform linear motions, so uniform-motion blur is reasonably representative. Digital image restoration uses prior knowledge of the image degradation to restore the degraded image; its goal is to recover the original image from the degraded one. Mathematically, it is an inverse problem of an integral equation. Image restoration is an important and difficult problem in image processing that has not yet been completely solved.
For the restoration of degradation, two approaches are generally used. The first is suitable when little prior knowledge of the image is available: the degradation process is modeled and described, and a procedure for removing or weakening its influence is then sought. Because this approach tries to estimate what the image was before it was affected by a relatively well-known degradation process, it is an estimation approach. The second approach applies when sufficient prior knowledge of the original image is available: it is then more effective to build a mathematical model of the original image and fit the degraded image to it. For example, if the image is known to contain only a circular object of a certain size, the task becomes a detection problem, because only a few parameters of the original image are unknown [37–42].
In image restoration, there are many other mathematical choices. First, the problem can be solved with either continuous or discrete mathematics; second, it can be solved in either the spatial or the frequency domain. In addition, when the restoration must be carried out mathematically, the processing can be realized by convolution in the spatial domain or by multiplication in the frequency domain [43]. In this way, under clearly stated assumptions, the most suitable method can be chosen according to the requirements and constraints of the problem. For a physical problem, linear motion in any direction can be decomposed into two perpendicular directions, but an image blurred by linear motion in an arbitrary direction cannot be treated this way: unlike the physical problem, the image degradation process involves image information along a specific direction, and decomposing it into two mutually perpendicular directions inevitably makes the information obtained by the two sub-processes very different. Therefore, a blurred image generated by straight-line motion in an arbitrary direction can only be restored directly along its motion direction and cannot be decomposed into two one-dimensional directions.
Moving target tracking has broad application prospects in aerospace exploration, the military field, autonomous driving, robotics, and other areas, and it is an important research topic in computer vision [44]. Marking the moving objects of interest in a moving image or video sequence and tracking them accurately and continuously over a long time with a tracking algorithm is an important research direction. The moving target data are obtained and analyzed to determine how to achieve a good, long-term tracking effect in scenes with illumination changes, changes in target shape, and so on.
In follow-up work, we will propose a low-contrast enhancement algorithm for moving image sequences with unbalanced illumination, so as to balance the image brightness and distinguish background from foreground well. Using the target tracking algorithm, the moving target can be tracked with good results under a dynamic background and object deformation. The proposed target tracking method can accurately track the target under a static background and under background changes, and it can adaptively adjust to changes in the target shape to prevent the target from being lost or tracked incorrectly. A target tracking tool for moving image sequences has been designed and developed; its main function modules are the image preprocessing module, the target detection module, and the target tracking module. It makes it convenient to set parameters and run a specific algorithm, saving the tedious work of manually modifying parameters and facilitating integrated management.
Object tracking in video sequences with changing brightness is divided into video enhancement and object tracking. Video sequences with changing shapes are studied progressively from simple to complex, and the object tracking algorithm is studied step by step from a static background, to a dynamic background, to changing shapes. A multicamera target tracking system involves not only computer vision and information fusion but also pattern recognition and artificial intelligence; it is a multidisciplinary research problem. Multicamera tracking with overlapping fields of view can follow the moving target from different angles, which effectively solves the problem of target loss when the target is occluded or enters a blind spot; however, it is difficult to correctly match the target information obtained by cameras with different viewing angles. Multitarget, multicamera tracking is a direction for future research.
5. Conclusion
In this study, a discrete mathematical model is used to optimize the correction of motion-blurred images. The moving target area is segmented via image binarization; the discrete mathematical model is used to build the motion-blur degradation model and to obtain and restore the parameters of the motion-blurred image, realizing edge distortion correction of the moving image. It is hoped that the above content can provide a reference for research in related fields. Owing to time constraints, some deficiencies remain and will be addressed in follow-up research.
Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This research was supported by the Henan Science and Technology Innovation Team (no. CXTD2017091) and the Science and Technology Innovation Team of Colleges and Universities in Henan Province (no. 18IRTSTHN013).