Abstract
Vehicle detection and identification and safe distance keeping technology have become the main content of current intelligent transportation system research. Among them, vehicle detection and recognition is one of the most important research topics and is crucial to the safe driving of vehicles. Real-time detection and recognition of the vehicle ahead can effectively prevent serious traffic accidents such as rear-end collisions. Because infrared images suffer from poor contrast, strong noise, and blurred edges, this paper mainly studies the color space preprocessing of the image and uses threshold segmentation and infrared image enhancement to separate the front vehicle from the background. Specifically, for the infrared image captured by an infrared CCD, a median filter is used to remove noise, and an improved histogram equalization is then used to enhance the contrast of the image. The vertical Sobel operator is selected to enhance the vertical edges of the image, and the image is segmented by a binary thresholding method. Finally, vehicle detection and recognition are realized by vertical edge symmetry, aspect ratio, and gray-scale symmetry. The experimental images and data analysis show that the image processing techniques studied in this paper achieve the intended research purpose.
1. Introduction
With the development of the economy, the number of automobiles and traffic accidents is increasing, so it is urgent to take measures to reduce traffic accidents and protect the safety of life and property [1]. Vehicle detection and recognition and safe distance keeping technology have become the main content of current intelligent transportation system research. Real-time detection and identification of the vehicle ahead can effectively prevent serious traffic accidents such as rear-end collisions. The predecessor of the intelligent transportation system is the intelligent vehicle road system. The intelligent transportation system effectively and comprehensively applies advanced information technology, data communication technology, sensor technology, electronic control technology, and computer technology to the entire transportation management system, so as to establish a large-scale, comprehensive, real-time, accurate, and efficient integrated transportation and management system [2]. The intelligent transportation system (ITS) project was formally proposed in 1990. It mainly focuses on improving vehicle safety and intelligence and providing a friendly human-vehicle interface. Transportation faces many problems, such as a high accident rate, traffic congestion, and air pollution from carbon emissions. Researchers combine information technology with transportation to form the intelligent transportation system, which is applied effectively and comprehensively to transportation, service control, and vehicle manufacturing and strengthens the connection between vehicles, roads, and users, thereby forming an integrated transportation system that guarantees safety, improves efficiency, improves the environment, and saves energy [3]. It has become a research hotspot to improve the utilization rate of existing roads, the safety of road traffic, and the comfort of road use by increasing the technology content. The growing demand for mobility has brought about significant changes in transport infrastructure [4, 5]. Intelligent transportation systems are becoming an important part of society, and reliable and efficient vehicle communication is the key driving factor for their good operation. Vehicle detection has been an important research area for many years, with valuable applications ranging from support for traffic planners to real-time traffic management. Because of high traffic volume and limited space, especially in dense urban areas, detecting cars is meaningful. In order to meet the needs of various ITS applications, it is necessary to jointly consider the configuration and optimization of vehicle-to-vehicle and infrastructure communications [6, 7]. The application of wireless access technology in the vehicle environment improves road safety and reduces the number of deaths caused by road traffic accidents through the development of road safety applications and the promotion of information sharing among mobile vehicles on the road. Vehicle detection is an important application of the intelligent transportation system in highway management, so research on vehicle detection and recognition based on infrared images and feature extraction is particularly important.
In recent years, video surveillance and monitoring systems have been widely used in traffic management, mainly for traffic density estimation and vehicle classification. Many algorithms have been proposed to detect, recognize, and track the vehicle ahead based on the correspondence between image regions and vehicles established from moving vehicles in image sequences. In the past decade, vision-based vehicle detection technology for road safety improvement has attracted more and more attention [8]. Guido et al. [9] introduce a method of tracking moving vehicles, which combines unmanned aerial vehicles with video processing technology. For measuring the distance to the vehicle ahead, Wu et al. [10] use the shadow of the vehicle in front to identify its location and use a functional fuzzy neural network to estimate the actual distance, which is a challenging task; the experimental results show that the system runs successfully in a real-time environment. Zhang [11] uses HMAX to recognize vehicle types from images, with a database of more than 2,000 vehicle images from 26 categories involving various complex photographic conditions recorded by surveillance cameras.
As an important way of expressing and storing information, images have wide application value. Some parking facilities use monitoring and image recognition in vehicle management, and cameras are used to identify vehicles by their plates based on intelligent visual character recognition technology. The symmetry-based detection method proposed by Hsieh et al. [12] is used to solve the problems of multiplicity and ambiguity and determines the region of interest of each vehicle on the road without relying on motion information. Xiaoling [13] presents a method of locating accident-involved vehicles based on computer dynamic image processing technology, which can accurately locate the moving vehicle; experiments show that the computer dynamic visualization method can greatly improve the accuracy of key frame location for separated vehicle images. Neto et al. [14] introduce a new system that can sense and recognize Brazilian license plates. Digital image processing techniques such as the Hough transform, morphology, thresholding, and the Canny edge detector are used to extract the characters, and least squares, least mean squares, extreme learning machines, and multilayer perceptron neural networks are used to recognize them.
At present, the detection and recognition of the vehicle ahead are mainly based on monocular vision, infrared sensors, lidar, and so on. Wang et al. [15] studied digital signal processing technology for visual perception, ultrasonic sensors, and radar. Ahmed et al. [16] use real-time velocity data collected from AVI to identify highway locations with high collision potential. Sivaraman and Trivedi [17] introduced the progress of vehicle detection in detail and discussed the application of monocular vision; their research involved the use of spatiotemporal measurements, trajectories, and various features to characterize road behavior. Cheng et al. [18] designed an automatic vehicle detection system for aerial surveillance based on pixel classification, which retains the relationship between adjacent pixels in a region during feature extraction. Vehicle colors and nonvehicle colors are effectively separated by a color transformation, and the threshold of the Canny edge detector is adjusted automatically for edge detection. The results show that the method is flexible and generalizes well. Tian et al. [19] propose a vehicle recognition method using multiple sensor nodes.
Teoh and Bräunl [20] describe the use of a single front-view camera and vehicle edge and symmetry features to detect vehicle and road information. Their paper introduces a method of extracting symmetrical regions in images by using multisize windows and clustering techniques. According to the detected symmetrical regions, candidate vehicle positions in the image are hypothesized, and these regions are then further processed to enhance their symmetrical edges; vehicle boundaries are detected from enhanced projections of the vertical and horizontal edges. Liu and Li [21] identify lane lines from gray-scale images acquired by a CCD, using a parabolic model as the objective function to fit the lane lines and a genetic algorithm with binary coding, multipoint crossover, and mutation to optimize the parabolic parameters. Tang et al. [22] use laser, ultrasonic, or radar spatial sensing systems and communication technology to exchange information between HISS servers and vehicles, thus providing road information around vehicles. Biqing et al. [23] realize vehicle recognition and location mainly through two core components, a binocular vision system and a camera calibration model; the results show that the system can accurately locate the target position. Xi et al. [24] study a depth estimation algorithm for monocular infrared images based on a nonlinear learning model. Experimental results show that most of the depths estimated by this model are consistent with the original depth information of the infrared images.
In security systems, images are mainly generated by CCD cameras. CCD is the abbreviation of charge-coupled device; it converts light into electric charge, stores and transfers that charge, and can read the stored charge out as a voltage, which makes it an ideal imaging element. CCD cameras are small, light, unaffected by magnetic fields, resistant to vibration and impact, and therefore widely used. In this paper, an infrared CCD is used to collect images. Against this background, and aimed at the serious traffic accidents caused by vehicle rear-end collisions on expressways at night, this paper systematically studies a method for identifying the vehicle ahead at night by using the theory and technology of infrared CCD imaging and image processing, in order to provide technical support for reducing the occurrence of similar traffic accidents.
2. Proposed Method
The process of vehicle recognition mostly follows the flow chart of Figure 1, from which it can be seen that each module affects the final result. This paper mainly studies vehicle detection and recognition based on infrared image analysis and feature extraction from vehicle edge features. Figure 1 illustrates the infrared image analysis pipeline, from the extraction of raw color features, to the construction of the feature set, to classification with an SVM using tenfold cross-validation.

2.1. Infrared Image Enhancement
In the process of vehicle recognition, infrared image preprocessing is an indispensable step. Preprocessing can remove or reduce noise and clutter in the infrared image and improve image quality and signal-to-noise ratio. In general, infrared image preprocessing applies operations to the original image, such as transformations or calculations in the spatial or frequency domain, to enhance the target, suppress noise, and provide a good input for subsequent processing, improving the overall performance of the system [25]. This paper uses spatial domain methods for image enhancement.
According to Retinex theory, a known image f(x, y) can be decomposed into a reflected-object image r(x, y) and an incident-light image l(x, y), so that any point of the image can be written as f(x, y) = r(x, y) · l(x, y), where (x, y) are the image pixel coordinates. Image enhancement based on Retinex theory can improve the visual effect of low-quality infrared images, improve their brightness uniformity, further enhance the brightness of locally dark areas, and highlight the outline information of the target. Not only is the gray level of the whole equipment region improved, but the detailed information of the enhanced equipment is also greatly strengthened and highlighted; in particular, the brightness of darker areas caused by illumination and of the connecting areas between parts is further enhanced, which restores the structural information of the equipment. The basic steps of the enhancement are as follows: (a) take logarithms of the original image to separate the incident-light and reflected-light components, log f(x, y) = log r(x, y) + log l(x, y); (b) convolve the original image with a Gaussian template, which is equivalent to low-pass filtering and gives an estimate of the incident-light image, l(x, y) ≈ f(x, y) * G(x, y), where G(x, y) denotes the Gaussian filtering function; (c) in the log domain, subtract the filtered image from the original image, log r(x, y) = log f(x, y) − log[f(x, y) * G(x, y)]; (d) the final image is obtained by taking the antilogarithm of log r(x, y).
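As an illustration of steps (a) through (d), the following is a minimal single-scale Retinex sketch using OpenCV and NumPy; the Gaussian scale sigma and the input file name are assumptions chosen for demonstration rather than values taken from this paper.

```python
import cv2
import numpy as np

def single_scale_retinex(gray, sigma=40.0):
    """Single-scale Retinex enhancement of an 8-bit infrared image (illustrative sketch)."""
    f = gray.astype(np.float64) + 1.0          # offset by 1 to avoid log(0)
    # Step (b): estimate the incident-light image by Gaussian low-pass filtering
    illumination = cv2.GaussianBlur(f, (0, 0), sigma)
    # Steps (a) and (c): subtract the illumination estimate from the original image in the log domain
    log_r = np.log(f) - np.log(illumination)
    # Step (d): take the antilogarithm and rescale back to the 0-255 gray range
    r = np.exp(log_r)
    r = cv2.normalize(r, None, 0, 255, cv2.NORM_MINMAX)
    return r.astype(np.uint8)

# Example usage (the file name is hypothetical):
# enhanced = single_scale_retinex(cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE))
```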
2.2. Smooth Enhancement of Infrared Image
The improved adaptive bilateral filtering divides the image into a base layer and a detail layer. In the base layer, histogram specification based on a Gaussian mixture model is used to preserve brightness; in the detail layer, the enhancement function is selected adaptively according to human visual characteristics to strengthen weak details while protecting the clear edges of the original image from distortion, after which the result is restored to the original gray-scale space. For infrared images, noise can be generated either by random interference from the external environment or by random variation of internal physical quantities. The purpose of image smoothing and enhancement is to eliminate noise, improve image quality, and extract object features. In this paper, two kinds of smoothing algorithms are studied.
2.2.1. Mean Filtering
Mean filtering is also called linear filtering. Its basic principle is to replace each pixel value in the original image with the average value of its neighborhood and assign this mean value to the current pixel as the gray level of the processed image at that point, i.e., g(x, y) = (1/M) Σ f(i, j), where the sum is taken over all pixels (i, j) in the template S centered on (x, y) and M is the total number of pixels in the template, including the current pixel.
The mean filter can be regarded as a low-pass spatial filter, which can effectively remove noise. It can be seen from the above formula that the cost of noise reduction by mean filtering is a loss of image sharpness: the larger the neighborhood, the better the noise elimination, but the more blurred the image becomes, and impulse noise cannot be suppressed. The common mean filter algorithm is a convolution operation with the image pixels. The mean filter can remove noise to some extent, but when noise is removed, image details become blurred, especially the edges of the vehicle image, which is unfavorable for edge detection and feature identification in the later stages of vehicle image processing, as shown in Figure 2.

2.2.2. Median Filtering
Median filtering is a nonlinear image smoothing method based on statistical theory. According to the dimension of the sliding window, median filtering can be divided into one-dimensional and two-dimensional versions. In order to suppress noise better, this paper uses median filtering to remove the noise in the image. Because of the low contrast of the infrared image, the contour of the vehicle is blurred, and median filtering handles noise better than mean filtering. At the same time, the blurring of the vehicle contour caused by the mean filter is overcome to a certain extent, which preserves the details of the vehicle image, provides a high-quality image for vehicle edge detection and feature recognition, and enhances the vehicle contour features, as shown in Figure 3.
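As a brief comparison of the two smoothing methods discussed above, the following sketch applies OpenCV's mean and median filters to the same gray-scale frame; the kernel size of 5 is an illustrative choice, not a value specified in this paper.

```python
import cv2

def denoise_compare(gray, ksize=5):
    """Apply mean and median filtering to a noisy gray-scale infrared frame for comparison."""
    mean_filtered = cv2.blur(gray, (ksize, ksize))    # linear averaging: removes noise but blurs vehicle edges
    median_filtered = cv2.medianBlur(gray, ksize)     # order-statistic filter: suppresses impulse noise, keeps edges
    return mean_filtered, median_filtered
```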

2.3. Improved Histogram Equalization Image Enhancement
Histogram equalization can greatly widen the dynamic range of the image gray distribution and is a typical spatial enhancement method. By increasing the overall contrast of the image, a better visual effect can be obtained for the infrared image. Most infrared images suffer from low contrast, low resolution, and a low signal-to-noise ratio. The purpose of image enhancement is to increase the usefulness of the image: not only to improve the overall quality but mainly to make the target more prominent. At present, a highly practical contrast enhancement technique is based on gray histogram correction, namely, histogram equalization.
Contrast is an important index of image quality and refers to the gray difference between the signal and the surrounding background. It can be calculated as C = Σ_δ δ(i, j)² P_δ(i, j), where δ(i, j) = |i − j| is the gray difference between adjacent pixels and P_δ(i, j) is the probability of the distribution of adjacent pixels whose gray difference is δ.
Histogram equalization is to make the gray value range of the output image as wide as possible and as uniform as possible through a mapping relationship between gray value intervals. In this way, the processed image can be maximized to meet the human visual requirements. However, this method does not have the ability to identify the target and background. Blindly homogenizing the gray level of all regions will increase the contrast of the background region and reduce the contrast of the target region. The gray range of the processed image will be narrowed, and some local regions will be weakened. There are some peaks in the gray histogram of some images. After processing, the contrast of the image region has been excessively improved.
In order to eliminate the blindness of traditional histogram equalization, this paper compresses the background gray interval and expands the target gray interval at the same time, so that histogram equalization acts on the target gray interval to the greatest extent. Gray-scale stretching is the most direct means of changing the gray range of a region, and selective gray-scale stretching can be achieved with a piecewise linear transformation function.
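The following is a minimal sketch of such a piecewise linear transform; the breakpoints r1, r2 and output levels s1, s2 are hypothetical values that would in practice be chosen from the target and background gray intervals of the infrared image.

```python
import numpy as np

def piecewise_linear_stretch(gray, r1, s1, r2, s2):
    """Selective gray-scale stretching with a three-segment piecewise linear transform.
    Gray levels in [r1, r2] (the assumed target interval) are stretched to [s1, s2];
    the background intervals [0, r1] and [r2, 255] are compressed accordingly."""
    g = gray.astype(np.float64)
    out = np.empty_like(g)
    low = g < r1
    mid = (g >= r1) & (g <= r2)
    high = g > r2
    out[low] = g[low] * (s1 / max(r1, 1))
    out[mid] = (g[mid] - r1) * ((s2 - s1) / max(r2 - r1, 1)) + s1
    out[high] = (g[high] - r2) * ((255 - s2) / max(255 - r2, 1)) + s2
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: stretch an assumed target interval [80, 160] to [30, 220]
# stretched = piecewise_linear_stretch(gray, 80, 30, 160, 220)
```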
2.4. Edge Detection of Target Image
Vehicle recognition based on image features needs to recognize the image first, and the basis of image recognition is image segmentation. The essence of image segmentation is to divide a digital image into different parts and then describe these parts, that is, to extract the characteristics of specific parts [26]. This helps to reduce the amount of data handled in subsequent feature recognition, saves storage space, and largely simplifies the operations of the subsequent process. It not only weakens the impact of noise but also retains the basic shape features of the target structure. Edge detection plays an important role in subsequent processing. Image edges are usually obtained quickly by differentiation, with first-order and second-order derivatives most often used. (1)Sobel Operator. The Sobel operator computes the first derivative, i.e., an approximation of the two-dimensional gradient of the image. Suppose f(x, y) is a continuous function. The gradient at a point can be expressed as the vector ∇f = [Gx, Gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ. Its magnitude and direction angle are |∇f| = (Gx² + Gy²)^(1/2) and θ = arctan(Gy/Gx). The magnitude gives the maximum rate of change of f per unit distance in the direction of ∇f, and the direction angle θ is measured with respect to the x-axis.
The Sobel operator consists of two 3 × 3 convolution kernels, one for the horizontal direction and one for the vertical direction. Convolving them with the image yields the brightness differences in the horizontal and vertical directions, respectively. (2)Canny operator
The Canny edge detection operator is a multistage edge detection algorithm developed by John F. Canny in 1986; Canny also developed a computational theory of edge detection to explain how the technique works. The Canny operator uses two thresholds and the first-order differential of a Gaussian function to extract the strong and weak edges of the image. Therefore, the algorithm can extract weak edges and has good edge recognition and localization ability, although noise can still produce spurious edges. The steps of the Canny operator are listed below: (1)Filter the image, i.e., convolve the target image with a Gaussian template(2)Find the gradient magnitude and edge direction by convolving the target image with the derivative templates(3)Compare the magnitude of each pixel with its neighbors along the gradient direction and keep it only if it is a local maximum; this suppression of nonmaximum magnitudes is called "nonmaximum suppression"(4)Apply upper and lower thresholds: pixels whose magnitude exceeds the upper threshold are classified as strong edges, and pixels whose magnitude lies between the two thresholds are classified as weak edges, which need further processing(5)Starting from the strong edges identified with the larger threshold, track the actual contours; during tracking, the smaller threshold is used so that the fuzzy parts of the target edges in the image can still be followed
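For reference, the following sketch applies the vertical Sobel kernel and the Canny detector to a gray-scale infrared frame with OpenCV; the kernel size and the Canny thresholds (50, 150) are illustrative assumptions rather than the exact parameters used in this paper.

```python
import cv2

def edge_maps(gray):
    """Vertical-edge enhancement with the Sobel operator and edge extraction with Canny (sketch)."""
    # Vertical Sobel kernel (dx=1, dy=0) responds to vertical edges such as vehicle sides
    sobel_v = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    sobel_v = cv2.convertScaleAbs(sobel_v)
    # Canny with illustrative low/high thresholds; weak edges are kept only if linked to strong ones
    canny = cv2.Canny(gray, 50, 150)
    return sobel_v, canny
```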
2.5. Improved Mathematical Morphology
Traditional mathematical morphology essentially treats an image as a set of points on which appropriately chosen structuring elements operate. (1)Dilation. The dilation of an image f by a structuring element b, written f ⊕ b, is defined as (f ⊕ b)(s, t) = max{ f(s − x, t − y) + b(x, y) | (s − x, t − y) ∈ Df, (x, y) ∈ Db }, where Df and Db denote the domains of definition of f and b, and outside its domain of definition f is taken to be −∞. (2)Erosion. The erosion of f by b, written f ⊖ b, is defined as (f ⊖ b)(s, t) = min{ f(s + x, t + y) − b(x, y) | (s + x, t + y) ∈ Df, (x, y) ∈ Db }, where Df and Db again denote the domains of definition of f and b, and outside its domain of definition f is taken to be +∞.
When a morphological algorithm is used to extract image edges, a single structuring element cannot obtain good edge detection results. According to the characteristics of the target image, structuring elements of appropriate shape and size are combined. The improved algorithm applied twice to the vehicle image for extracting edge features proceeds in the following stages. (1)With the rectangular element, opening and closing operations are applied to the image to remove noise(2)With the disk element, opening and closing operations are performed to suppress the darker details(3)Two images are then obtained by applying dilation and erosion, respectively(4)The difference between the two images from step 3 is taken, giving a better image edge
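A minimal sketch of this two-structuring-element procedure using OpenCV follows; the 3 × 3 rectangular and 5 × 5 elliptical element sizes are illustrative assumptions, not values given in the paper.

```python
import cv2

def morphological_edge(binary):
    """Sketch of the improved morphological edge extraction with two structuring elements."""
    rect = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # Steps 1-2: open/close with the rectangular element to remove noise,
    # then open/close with the disk element to suppress darker details
    img = cv2.morphologyEx(binary, cv2.MORPH_OPEN, rect)
    img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, rect)
    img = cv2.morphologyEx(img, cv2.MORPH_OPEN, disk)
    img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, disk)
    # Steps 3-4: dilate and erode the smoothed image and take their difference as the edge map
    dilated = cv2.dilate(img, rect)
    eroded = cv2.erode(img, rect)
    return cv2.subtract(dilated, eroded)
```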
The improved mathematical morphology algorithm is used to analyze and further process the measured image, and the contour features of the vehicle image can be extracted more completely. Because the improved algorithm has obvious advantages in contour extraction, it can eliminate the influence of noise and achieve better road width extraction effect.
3. Experiments
The near-infrared imaging system used in this paper is an active infrared imaging system, which includes an infrared CCD camera, a video capture card, and a scanning laser; the hardware configuration is shown in Figure 4. Vehicle detection is realized through data communication between the data acquisition card and an industrial computer. Firstly, according to the characteristics of the original infrared image, image processing and threshold segmentation are applied, and vehicle license plates are recognized based on shape features. In order to eliminate the influence of objects on both sides of the road and shorten the detection time, the region of interest for vehicle detection in front of the lane must be determined before the vertical edges of vehicles are further extracted and detected. The detection process is also shown in Figure 4. OpenCV is used to extract image features and train an SVM classifier to distinguish vehicles from nonvehicles, and the trained model is used to identify vehicles in video recorded by the car's front camera.
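As a hedged illustration of the feature-extraction and SVM-training step, the sketch below uses OpenCV's HOG descriptor and SVM module; the choice of HOG features, the 64 × 64 window size, and the linear kernel are assumptions made for demonstration, since the paper does not specify these details.

```python
import cv2
import numpy as np

# HOG parameters: window, block, block stride, cell, and number of orientation bins (assumed values)
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def extract_features(patches):
    """Compute HOG descriptors for a list of 64x64 gray-scale image patches."""
    return np.array([hog.compute(p).ravel() for p in patches], dtype=np.float32)

def train_vehicle_classifier(vehicle_patches, nonvehicle_patches):
    """Train an OpenCV SVM to separate vehicle and nonvehicle patches."""
    X = np.vstack([extract_features(vehicle_patches), extract_features(nonvehicle_patches)])
    y = np.hstack([np.ones(len(vehicle_patches)), np.zeros(len(nonvehicle_patches))]).astype(np.int32)
    svm = cv2.ml.SVM_create()
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.train(X, cv2.ml.ROW_SAMPLE, y)
    return svm
```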

4. Infrared Image Analysis
This paper mainly analyzes infrared images of vehicles and carries out recognition for vehicle images with obvious license plate characteristics. Firstly, the infrared image is segmented; secondly, the binary image is processed by an improved morphological operation and the connected regions are labeled; then the region of interest for recognizing the vehicle ahead is determined. Within the region of interest, the area, rectangularity, aspect ratio, roundness, and symmetry of each connected region are analyzed to recognize and locate the vehicle ahead. (1)Preprocessing of the collected infrared image
To extract the boundary between the target and the background in the image, it is necessary to remove the noise of the infrared image and increase its contrast. In this paper, the Sobel operator is selected as the edge enhancement operator for vehicle edge detection. According to the direction of differentiation, the template is divided into horizontal and vertical forms, and the horizontal and vertical edges of the image are enhanced by finding the maximum gradient. The horizontal and vertical convolution kernels of the Sobel operator are shown in Table 1. In this paper, only the vertical convolution kernel is applied when extracting the vertical edges of vehicles, in order to highlight them. Figure 5 shows the result of vertical edge enhancement. (2)Threshold segmentation of image edges

After the image is enhanced by the Sobel operator, it can be seen from the result of edge enhancement that a lot of useless information is still included. In this paper, the Otsu segmentation algorithm and the binary threshold segmentation algorithm are compared. For the image whose edges have been enhanced by the vertical Sobel operator, the Otsu method and the binary threshold segmentation algorithm are used for segmentation; the results are shown in Figure 6.
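The comparison can be reproduced with OpenCV's thresholding functions, as in the sketch below; the fixed threshold of 60 is an illustrative value rather than the one used in the paper, and the input is assumed to be the 8-bit Sobel-enhanced edge image.

```python
import cv2

def segment_edges(sobel_edge, fixed_thresh=60):
    """Segment the Sobel-enhanced edge image with Otsu and with a fixed binary threshold."""
    # Otsu chooses the threshold automatically; the input must be 8-bit single channel
    otsu_t, otsu_bin = cv2.threshold(sobel_edge, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Fixed binary threshold for comparison
    _, fixed_bin = cv2.threshold(sobel_edge, fixed_thresh, 255, cv2.THRESH_BINARY)
    return otsu_bin, fixed_bin, otsu_t
```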

The processing results show that the threshold methods retain the edge information, but the edge information of the image segmented by the Otsu threshold method is not as rich as that of the binary threshold method. (3)Binary Image Processing. The binary image obtained by threshold segmentation often contains redundant regions; the vertical edges of the segmented image are mainly preserved. Figure 7 shows the effect of morphological processing of the original infrared image after image segmentation.(4)Feature-based vehicle detection

For vehicles, the license plate is usually a regular rectangle with a certain aspect ratio, and the lamps on both sides usually have a certain circularity. The region is symmetric about the vertical axis through the center of the license plate. Table 2 lists the perimeter, area, aspect ratio, and rectangularity of the target regions extracted from the image.
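A minimal sketch of how such geometric features can be computed from the connected regions of the binary image is given below, using OpenCV contour analysis (OpenCV 4 API assumed); the feature definitions follow the standard formulas, and the thresholds quoted in the comments are the ones given in the text.

```python
import cv2
import numpy as np

def region_features(binary):
    """Compute perimeter, area, aspect ratio, rectangularity, and circularity per connected region."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    features = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        x, y, w, h = cv2.boundingRect(c)
        if h == 0 or w == 0 or perimeter == 0 or area == 0:
            continue
        aspect_ratio = w / h                               # license plates: roughly 1.5-3.9 in this paper
        rectangularity = area / (w * h)                    # threshold 0.7 is used in the text
        circularity = 4.0 * np.pi * area / perimeter ** 2  # lamp regions tend toward 1.0
        features.append((perimeter, area, aspect_ratio, rectangularity, circularity))
    return features
```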
A license plate is a regular rectangle with a width-to-height ratio of about 2 or 3.2. From the data in Table 2, it can be seen that the rectangularity of regions 1 and 3 is greater than 0.75. Considering the influence of reflection and illumination, the range of the width-height ratio used in this paper is [1.5, 3.9], and the threshold of rectangularity is 0.7. (5)Symmetry feature extraction based on vertical edge
In this paper, the edge points of the region of interest are first selected along the vertical direction and summed column by column to obtain the vertical projection of the edge. Figure 8 shows the symmetry calculated and analyzed on the basis of the vertical edges of the vehicle. The symmetry measure takes values in [-1, 1]: a value of 1 indicates complete symmetry, and a value of -1 indicates complete asymmetry.

The threshold of the symmetry measure chosen in this paper is 0.5. From Figure 9, it can be seen that the maximum symmetry measure in this test image is 0.65, with the corresponding symmetry axis at column 434 on the x-axis. Because the symmetry measure exceeds the threshold, the vehicle area meets the symmetry requirement.
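For illustration, the sketch below computes a symmetry measure in [-1, 1] from the vertical edge projection of a candidate region by decomposing the projection into even and odd components about the candidate axis; this even/odd formulation is a common choice and is assumed here, since the paper does not give its exact formula.

```python
import numpy as np

def edge_symmetry_measure(edge_roi):
    """Symmetry measure of a region of interest based on its vertical edge projection.
    Returns a value in [-1, 1]: 1 means the projection is perfectly symmetric about the
    center column, -1 means perfectly antisymmetric (formulation assumed for illustration)."""
    projection = edge_roi.sum(axis=0).astype(np.float64)   # sum edge points along the vertical direction
    centered = projection - projection.mean()
    even = 0.5 * (centered + centered[::-1])                # symmetric (even) component about the center column
    odd = 0.5 * (centered - centered[::-1])                 # antisymmetric (odd) component
    denom = np.sum(even ** 2) + np.sum(odd ** 2)
    if denom == 0:
        return 0.0
    return (np.sum(even ** 2) - np.sum(odd ** 2)) / denom
```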

5. Conclusions
In order to extract the boundary between the target and the background in the image, it is necessary to remove the noise in the infrared image and improve the image contrast. For the collected infrared image, a median filter is used to remove noise, gray-scale linear enhancement is used to increase the contrast, and the image is then segmented. The method of vehicle license plate recognition is also discussed. The research shows that in most cases the license plate and lamp features of the vehicle image collected by the infrared CCD are obvious. On the basis of denoising and segmenting the vehicle image, this paper identifies and determines the region of interest for vehicle recognition. Vehicle recognition and location are realized according to the rectangularity and aspect ratio of the license plate and its central symmetry. For vehicle images with obvious edge features, such as the rear of the car body, vehicle recognition and location in the infrared image are also carried out based on the symmetry of the vertical edges.
This paper is based on simplifying assumptions, so the study applies only to the identification of a single vehicle on the road. Multilane vehicle identification in urban areas and the identification of front vehicles confounded by vehicle shadows, road guardrail reflections, oncoming headlights, and so on have not been studied in depth. We hope to address these problems in future research.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that there is no conflict of interest with any financial organizations regarding the material reported in this manuscript.
Acknowledgments
This work was supported by the Science and Technology Research Projects of Jiangxi Provincial Department of Education, GJJ219310 and GJJ212101.