Abstract
A region of interest (ROI) that may contain a vehicle is extracted based on composite features of the vehicle's bottom shadow and taillights, by setting a gray threshold on the shadow region and a series of constraints on the taillights. In order to confirm the existence of a target vehicle in front of an Advanced Driver Assistance System (ADAS) within the extracted ROI, a Radial Basis Function (RBF) neural network recognizer is constructed from a series of parameters describing the vehicle's edge and region features. Using a large set of collected images, the extraction of the ROI from the composite features of the bottom shadow and taillights is verified to be effective. The neural network recognizer is trained on a database of positive and negative vehicle samples and converges quickly, after which the trained network effectively identifies vehicles within the region of interest. Test results show that the vehicle detection method based on multifeature extraction and the recognition method based on the RBF network have stable performance and high recognition accuracy.
1. Introduction
Recently, Advanced Driver Assistance Systems have been widely used to improve driving safety, for example the Active Lane Keeping Assist System (ALKAS), Front Collision Warning (FCW), Autonomous Emergency Braking (AEB), and the Intelligent Adaptive Cruise Control System (IACCS). Whatever the kind of assisted driving system, its core technology lies in applying radar or machine vision to quickly and accurately extract information about vehicles or obstacles ahead, so that the driver can be warned in time to avoid a collision or the vehicle can be controlled automatically to avoid one [1–5].
With the increasing maturity and affordability of sensors and image processing, researchers in China and abroad have carried out many studies on detecting or identifying forward targets based on machine vision. Reference [6] performed vehicle detection with taillight feature information and realized tracking control of the preceding vehicle. However, when taillight information alone is used for vehicle detection, missed or false detections may occur. Reference [7] studied target vehicle recognition for FCW/AEB systems and estimation of the distance between the preceding vehicle and the host car using a mono-camera sensor. References [8–11] adopted the AdaBoost method and support vector machine classifiers to complete vehicle detection and mark an image window on the target vehicle. Reference [12] put forward a self-adjusting sliding window strategy to improve detection performance for the target vehicle in the image. With the rapid development of deep learning theory, object detection systems such as SPP-net, R-CNN, Fast R-CNN, and Faster R-CNN have emerged [13–15]. References [16–19] present Faster R-CNN-based methods for vehicle recognition. Faster R-CNN combines the region proposal network (RPN) and Fast R-CNN into a unified deep CNN framework, yielding greater detection precision than the other methods.
Our goal is to propose a method for extracting a region of interest that may contain a vehicle based on the composite features of the vehicle shadow and taillights. In order to confirm the existence of a vehicle in the extracted region of interest, an RBF neural network recognizer is constructed from a series of parameters describing the vehicle's edge and region features, namely eight discrete cosine descriptors, six independent invariant moments, and five regional feature descriptors. The recognizer can not only identify vehicles but also distinguish vehicles from nonvehicles, such as e-bikes.
The remainder of this paper is organized as follows. Section 2 describes a vehicle detection method based on composite features of the vehicle shadow and taillights. Section 3 introduces vehicle recognition using an RBF neural network, which consists of two parts: vehicle feature extraction and the design of the recognizer. Section 4 presents experimental results on vehicle detection and recognition with the RBF network. Conclusions and future work are discussed in Section 5. The overall architecture of the vehicle detection process is shown in Figure 1.

2. Vehicle Detection Based on Multifeature Extraction
In an advanced driver assistance system, vehicle detection is a very important step. It involves two main parts: vehicle feature extraction and determination of the vehicle's region of interest. The image sequence is preprocessed before the vehicle features are extracted, as shown in Figure 2; the preprocessing steps mainly include image graying, image filtering, edge detection with the Canny operator, and morphological operations on the image.
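As an illustration of this preprocessing chain, the following Python/OpenCV sketch performs graying, filtering, Canny edge detection, and a morphological closing. The kernel size, blur parameters, and Canny thresholds are illustrative assumptions, since the paper does not report its exact settings (the original processing was done in MATLAB).

```python
import cv2

def preprocess(frame_bgr):
    """Image graying, filtering, Canny edge detection, and morphological closing.

    Kernel sizes and thresholds are illustrative assumptions; the paper does
    not specify the exact parameter values it used.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)          # image graying
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)                 # image filtering
    edges = cv2.Canny(blurred, 50, 150)                         # Canny edge detection
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)   # morphological operation
    return gray, closed
```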

2.1. Vehicle Taillight Feature Extraction
Figure 1 shows the structure of a taillight pair, which consists of the left taillight Ll and the right taillight Lr.
There are usually many obvious edge features in the vehicle area, such as the rear windscreen, the rear bumper, the taillights, and the license plate. These edge features are very important for locating the vehicle region. After Canny edge detection, the vehicle taillight information is contained in the edge image. Next, the profiles of taillight pairs are extracted from the edge image using the morphological opening and closing operations. As shown in Figure 2, the profiles of vehicle taillight pairs can be extracted under the constraint conditions given in formulae (1)–(3).
The distance Wlr between the left taillight Ll and the right taillight Lr is restricted to a range of minimum and maximum pixel values:

Wmin ≤ Wlr ≤ Wmax.  (1)
The height difference of the taillight pair is limited within a threshold:

|Hl − Hr| ≤ HT,  (2)

where Hl, Hr, and HT represent the height of the left taillight Ll, the height of the right taillight Lr, and the height threshold of the taillight pair in the image, respectively.
The area ratio between the left taillight Ll and the right taillight Lr is limited to a range:

amin ≤ Sl/Sr ≤ amax,  (3)

where Sl and Sr denote the areas of the left and right taillights and amin and amax represent the minimum and maximum values of the area ratio, respectively.
According to these constraint conditions, the center of mass of each taillight area is found, and the vehicle width is determined from the distance between the two centers of mass together with the taillight width. The vehicle height in the image is then determined from the obtained vehicle width combined with the structural proportions of the vehicle. In this way, a candidate vehicle detection region can be marked as a colored rectangular box.
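To make the constraint checks concrete, the sketch below (Python) filters candidate taillight blobs by the pairwise distance, height difference, and area ratio of formulae (1)–(3). The blob representation and all threshold values (Wmin, Wmax, HT, amin, amax) are hypothetical and only for illustration; the paper does not list its numeric settings.

```python
from itertools import combinations

def pair_taillights(blobs, w_min=30, w_max=200, h_thresh=8, a_min=0.6, a_max=1.7):
    """Return candidate taillight pairs satisfying constraints (1)-(3).

    Each blob is a dict with centroid x "cx", height "h", and area "a",
    e.g. taken from connected-component analysis of the edge/morphology image.
    All threshold values here are illustrative assumptions.
    """
    pairs = []
    for left, right in combinations(blobs, 2):
        if left["cx"] > right["cx"]:
            left, right = right, left
        w_lr = right["cx"] - left["cx"]                 # constraint (1): horizontal spacing
        if not (w_min <= w_lr <= w_max):
            continue
        if abs(left["h"] - right["h"]) > h_thresh:      # constraint (2): height difference
            continue
        ratio = left["a"] / right["a"]                  # constraint (3): area ratio
        if not (a_min <= ratio <= a_max):
            continue
        pairs.append((left, right))
    return pairs
```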
2.2. Vehicle Feature Extraction Based on Shadow Area
A vehicle shadow area is usually formed by the left and right tires, the rear bumper, and the vehicle body, which together cast a darker region. In general, the gray value of the vehicle shadow area lies in the lowest range of the whole road surface image. For the image to be detected, the mean Gμ and the mean square error Gσ of the road pixel gray values are computed as follows (see Figures 3 and 4):

Gμ = (1/(M·N)) Σx Σy f(x, y),  (4)

Gσ = sqrt( (1/(M·N)) Σx Σy (f(x, y) − Gμ)² ),  (5)

where the sums run over x = 1, …, M and y = 1, …, N, M and N represent the pixel dimensions of the length and width of the image, respectively, and f(x, y) represents the gray value of the pixel in row x and column y. Finally, the threshold of the road gray value is defined from Gμ and Gσ in formula (6).


The binarized image of the segmented vehicle shadow region may contain background noise or discontinuous areas. Therefore, the background noise needs to be separated from the vehicle shadow area as much as possible, and the morphological opening and closing operations are used to obtain the vehicle shadow area (see Figure 5). A candidate vehicle detection region can then be marked with a colored rectangular box, as shown in Figure 6. However, this is only a region of interest in which a vehicle may exist, and the presence of a vehicle in this region still needs to be confirmed by subsequent steps.
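A sketch of the shadow-based segmentation in Python: the gray-level mean and standard deviation of the road image are computed, a threshold derived from them isolates dark (shadow) pixels, and morphological opening/closing cleans the mask. The specific threshold rule T = Gμ − k·Gσ is an illustrative assumption standing in for the paper's formula (6).

```python
import cv2
import numpy as np

def shadow_mask(gray, k=3.0):
    """Segment candidate vehicle-shadow pixels from a grayscale road image.

    The threshold T = G_mu - k * G_sigma is an illustrative assumption in
    place of the paper's formula (6); k controls how dark a pixel must be
    relative to the road surface to count as shadow.
    """
    g_mu = float(gray.mean())                  # mean road gray value, formula (4)
    g_sigma = float(gray.std())                # mean square error, formula (5)
    threshold = g_mu - k * g_sigma
    mask = (gray < threshold).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove background noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill gaps in the shadow area
    return mask
```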


2.3. Integration of Vehicle Detection Area Based on Taillight and Shadow Feature
According to Section 2.2, the shadow area at the bottom of the target vehicle has been obtained from the shadow feature. It is then necessary to calibrate the coordinate points of the vehicle's region in the image. The center point of the lower edge of the shaded area is taken as the base reference point, and the width of the marker box is determined from the width of the shadow (or by referring to the width of the taillight pair). Based on the proportional relationship between the actual structural size of the vehicle and its size in the image,

fh1 / fw1 = α1 · (Vh / Vw),  (7)

the vehicle height coordinates in the image are obtained, and the rectangular marked area XS of the target vehicle follows, where fw1 and fh1 represent the marked width and height in the image based on shadow feature extraction, respectively, Vw and Vh represent the actual vehicle width and height, respectively, and α1 is a proportional coefficient.
In a similar way, the centroid coordinates of the taillights can be obtained from the taillight-pair information according to Section 2.1. Taking the midpoint between the two centers of mass as the reference point, the width of the marker box is determined from the width of the taillight pair. Combined with the proportional relationship between the actual structural size of the vehicle and its size in the image,

fh2 / fw2 = α2 · (Vh / Vw),  (8)

the vehicle height coordinates in the image can be obtained, and the rectangular marked area XT of the target vehicle follows, where fw2 and fh2 represent the marked width and height in the image based on taillight feature extraction, respectively, and α2 is a proportional coefficient. The symbols Vw and Vh have the same meaning as above.
From the shadow feature and the taillight-pair feature, two rectangular marked areas of the target vehicle are obtained, and these marked areas are integrated according to the actual situation to determine the unique target detection area Xw:

Xw = ks · XS + kT · XT,  (9)

where ks and kT represent the weight coefficients of the two rectangular marked areas, respectively.
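A sketch of fusing the two rectangles in the spirit of formula (9): the shadow-based box XS and the taillight-based box XT are combined as a weighted average of their corner coordinates. The corner-based box representation and the default weights are illustrative assumptions. Setting ks = 0, as done in Section 4.1.2 for rain, snow, and night scenes, reduces the result to the taillight-based box alone.

```python
import numpy as np

def fuse_boxes(box_shadow, box_taillight, k_s=0.5, k_t=0.5):
    """Weighted integration of the shadow-based and taillight-based boxes.

    Boxes are (x1, y1, x2, y2) corner tuples; the weighted-average fusion and
    the default weights are illustrative assumptions consistent with formula (9).
    """
    x_s = np.asarray(box_shadow, dtype=float)
    x_t = np.asarray(box_taillight, dtype=float)
    if k_s == 0.0:                        # e.g. rain, snow, or night: no shadow available
        return tuple(x_t)
    return tuple(k_s * x_s + k_t * x_t)   # X_w = k_s * X_S + k_T * X_T
```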
3. Vehicle Recognition Adopting RBF Neural Network
3.1. Vehicle Characteristic Parameter Selection
In order to recognize the vehicle target reliably, it is necessary to extract distinctive vehicle features from the detected image region. At present, target shape recognition methods fall into two categories: edge feature extraction, based on the shape of the target's edges, and region feature extraction, based on the area covered by the object. Here, several hybrid feature parameters are chosen to describe the vehicle target: discrete cosine descriptors, independent invariant moments, and region descriptors.
3.1.1. Discrete Cosine Descriptors
The discrete cosine transform parameters possess a series of useful properties, such as invariance to the object's translation, rotation, and scale, and they are insensitive to the choice of starting point on the shape outline of an object image.
A complex sequence on the shape outline of an object image, f(m), can be obtained by extracting the image edge:

f(m) = x(m) + j·y(m), m = 0, 1, …, N − 1,  (10)

where m indexes the feature points of the closed edge curve obtained by edge extraction after image segmentation, (x(m), y(m)) are the boundary coordinates, and f(m) is a one-dimensional complex sequence. The discrete cosine descriptors are then extracted by transforming formula (10):

F(u) = c(u) · Σm f(m) · cos[(2m + 1)uπ / (2N)], u = 0, 1, …, N − 1,  (11)

where the sum runs over m = 0, …, N − 1, c(0) = sqrt(1/N), c(u) = sqrt(2/N) for u ≥ 1, and N represents the number of feature points of the closed edge curve obtained by edge extraction after image segmentation.
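A sketch of computing discrete cosine descriptors from a closed contour in Python/NumPy: the boundary points are folded into the complex sequence f(m) = x(m) + j·y(m) of formula (10), and the magnitudes of the first few type-II DCT coefficients (eight, matching Section 3.2) serve as descriptors. Taking magnitudes of the complex coefficients is an illustrative choice not spelled out in the paper.

```python
import numpy as np

def dct_descriptors(contour_xy, n_keep=8):
    """Discrete cosine descriptors of a closed contour.

    contour_xy: (N, 2) array of boundary points. The complex boundary
    sequence follows formula (10); the type-II DCT below is one standard
    reading of formula (11). Keeping the first n_keep magnitudes is an
    illustrative choice.
    """
    pts = np.asarray(contour_xy, dtype=float)
    f = pts[:, 0] + 1j * pts[:, 1]                 # complex boundary sequence, formula (10)
    n = len(f)
    m = np.arange(n)
    descriptors = np.empty(n_keep)
    for u in range(n_keep):
        basis = np.cos((2 * m + 1) * u * np.pi / (2 * n))
        scale = np.sqrt(1.0 / n) if u == 0 else np.sqrt(2.0 / n)
        descriptors[u] = np.abs(scale * np.sum(f * basis))  # magnitude of complex coefficient
    return descriptors
```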
3.1.2. Independent Invariant Moments
For a binary image f(x, y), the (p + q)-order geometric moment is

mpq = Σx Σy x^p · y^q · f(x, y).
The central moment of the target image region, μpq, is described as follows:

μpq = Σx Σy (x − x̄)^p · (y − ȳ)^q · f(x, y),

where x̄ = m10/m00 and ȳ = m01/m00; m00 is the zero-order geometric moment, m10 and m01 are the first-order geometric moments of the region where the binary image is located, and (x̄, ȳ) represents the coordinates of the center point of the region.
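A sketch of the moment computation in Python/NumPy: the (p + q)-order geometric moments of a binary region and the central moments about the centroid, as reconstructed above, plus a scale-normalized central moment as an illustrative extra step (the paper does not state which six independent invariant moments it uses; sets such as Hu's moments are built from these quantities).

```python
import numpy as np

def geometric_moment(binary, p, q):
    """(p+q)-order geometric moment m_pq of a binary image region."""
    ys, xs = np.nonzero(binary)
    return np.sum((xs.astype(float) ** p) * (ys.astype(float) ** q))

def central_moment(binary, p, q):
    """Central moment mu_pq about the region centroid."""
    ys, xs = np.nonzero(binary)
    m00 = float(len(xs))
    x_bar = xs.sum() / m00                     # m10 / m00
    y_bar = ys.sum() / m00                     # m01 / m00
    return np.sum(((xs - x_bar) ** p) * ((ys - y_bar) ** q))

def normalized_central_moment(binary, p, q):
    """Scale-normalized central moment eta_pq (illustrative helper)."""
    mu00 = central_moment(binary, 0, 0)
    gamma = (p + q) / 2 + 1
    return central_moment(binary, p, q) / (mu00 ** gamma)
```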
3.1.3. Region Descriptors
Five region descriptors are chosen to better recognize vehicle targets: the eccentricity of the target image region, the ratio of the short axis to the long axis of the region, the area of the region, the perimeter of the region, and the compactness of the region, where the compactness can be represented as 4πS/L². S denotes the area of the target image region, and L represents the perimeter of the target image region.
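A sketch of the five region descriptors in Python/NumPy, using second-order central moments for the eccentricity and axis ratio; the moment-based eccentricity formulation is an illustrative choice, since the paper only names the descriptors, and the perimeter is assumed to come from the boundary contour (e.g. cv2.arcLength).

```python
import numpy as np

def region_descriptors(binary, contour_perimeter):
    """Five region descriptors: eccentricity, short/long axis ratio, area,
    perimeter, and compactness 4*pi*S/L^2.

    `binary` is the filled region mask and `contour_perimeter` its boundary
    length. The moment-based eccentricity used here is an illustrative choice.
    """
    ys, xs = np.nonzero(binary)
    area = float(len(xs))
    x_bar, y_bar = xs.mean(), ys.mean()
    mu20 = np.mean((xs - x_bar) ** 2)
    mu02 = np.mean((ys - y_bar) ** 2)
    mu11 = np.mean((xs - x_bar) * (ys - y_bar))
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    lam_small, lam_big = np.linalg.eigvalsh(cov)          # ascending eigenvalues
    axis_ratio = np.sqrt(lam_small / lam_big)             # short axis / long axis
    eccentricity = np.sqrt(1.0 - lam_small / lam_big)
    compactness = 4.0 * np.pi * area / (contour_perimeter ** 2)
    return eccentricity, axis_ratio, area, contour_perimeter, compactness
```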
3.2. RBF Neural Network Design on Vehicle Recognition
The RBF neural network is a learning method that maps the input vector into a high-dimensional space and can approximate linear or nonlinear unknown systems [20, 21]. The network can realize self-learning quickly because it not only offers good generalization and rapid convergence of the tracking error [22–24] but also avoids the tedious gradient computation of the BP algorithm.
3.2.1. RBF Network Structure Design
Figure 7 shows the structure of the RBF neural network. As can be seen from Figure 7, the RBF neural network is composed of an input layer, a hidden layer, and an output layer.

Eight discrete cosine descriptors, six independent invariant moments, and five region descriptors are chosen to form the input vector (19 nodes in total). The output layer consists of two nodes, indicating whether or not a vehicle is identified. In Figure 7, ωih and ωhn denote the weights between the input layer and the hidden layer and between the hidden layer and the output layer, respectively.
3.2.2. Algorithm of RBF Neural Network for Self-Organizing Center Choice
The algorithm of the RBF neural network with self-organizing center choice consists of two stages. The first stage is an unsupervised self-organizing process that determines the hidden-layer basis functions, and the second stage is a supervised process that solves for the weights between the hidden layer and the output layer. The Gaussian function is chosen as the basis function:

φi(x) = exp( −‖x − ci‖² / (2σi²) ), i = 1, 2, …, h,

where ‖·‖ represents the Euclidean norm, ci denotes the center of the ith Gaussian function, and σi represents the variance (spread) of the Gaussian function.
The output of the RBF neural network is

yj = Σi ωij · exp( −‖xp − ci‖² / (2σi²) ), j = 1, 2, …, n,

where the sum runs over i = 1, …, h, xp denotes the pth input sample, P represents the total number of samples, ωij (i = 1, 2, …, h; j = 1, 2, …, n) denotes the weights between the hidden layer and the output layer, and yj represents the actual output of the jth output node of the RBF neural network.
Assume d is the expected output value of a sample. The variance of the basis function is solved in step (2) of the learning algorithm below, and d is used in step (3) to determine the output-layer weights.
The specific steps of the RBF neural network learning algorithm with self-organizing center choice are as follows (a code sketch follows this list).

(1) Determine the centers of the basis functions by the K-means clustering method.
  (i) In the network initialization phase, h training samples are randomly selected as the cluster centers ci (i = 1, 2, …, h).
  (ii) The input training samples are grouped according to the nearest-neighbor rule: each input sample is allocated to the cluster set whose center ci is nearest in Euclidean distance.
  (iii) Readjust the cluster centers: the new cluster center ci is the average of the training samples in each cluster set. If the new cluster centers no longer change, the values ci are taken as the final basis-function centers of the RBF neural network; otherwise, return to step (ii) and recalculate.

(2) Solve for the variance σi of the basis functions. Since the basis function of the RBF neural network is the Gaussian function, the variance can be solved by

σi = cmax / sqrt(2h), i = 1, 2, …, h,

where cmax is the maximum distance between the selected centers.

(3) Calculate the weights between the hidden layer and the output layer. The connection weights of neurons between the hidden layer and the output layer can be calculated directly by the least squares method:

W = Φ⁺ · D,

where Φ is the P × h matrix of hidden-layer outputs with entries exp(−‖xp − ci‖² / (2σi²)), Φ⁺ is its pseudoinverse, and D is the matrix of expected outputs.
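A compact sketch of this training procedure in Python/NumPy: K-means selects the h basis-function centers, the spread rule σ = cmax/√(2h) sets the widths, and the output weights come from a least-squares solve on the hidden-layer activations. The hidden-layer size, iteration count, and use of a Moore–Penrose pseudoinverse are illustrative choices; the paper does not report its exact settings.

```python
import numpy as np

def train_rbf(X, D, h=20, iters=50, seed=0):
    """Train an RBF network by self-organizing center choice.

    X: (P, 19) matrix of input feature vectors (8 DCT descriptors,
       6 invariant moments, 5 region descriptors per sample).
    D: (P, n) matrix of expected outputs (here n = 2: vehicle / nonvehicle).
    Returns centers C, spread sigma, and output weights W.
    The hidden-layer size h and iteration count are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=h, replace=False)]            # step (1)(i): random initial centers
    for _ in range(iters):                                      # steps (1)(ii)-(iii): K-means refinement
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new_C = np.array([X[labels == i].mean(axis=0) if np.any(labels == i) else C[i]
                          for i in range(h)])
        if np.allclose(new_C, C):
            break
        C = new_C
    c_max = np.sqrt(((C[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)).max()
    sigma = c_max / np.sqrt(2.0 * h)                            # step (2): basis-function spread
    Phi = np.exp(-((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2) / (2.0 * sigma ** 2))
    W = np.linalg.pinv(Phi) @ D                                 # step (3): least-squares output weights
    return C, sigma, W

def rbf_predict(x, C, sigma, W):
    """Forward pass: Gaussian hidden layer followed by the linear output layer."""
    phi = np.exp(-((x - C) ** 2).sum(axis=1) / (2.0 * sigma ** 2))
    return phi @ W
```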
4. Vehicle Detection and Recognition Results
4.1. Vehicle Detection Results and Analysis
In order to verify the effectiveness of the vehicle detection method proposed in this paper, 110 images of road vehicles in sunny and cloudy conditions and 90 images of road vehicles at night and in rain and snow were collected over different time periods. All images were uniformly converted to 360 × 240 pixels before verification and processed in MATLAB (2013b) with the programs designed for the test and verification part of this paper. The vehicle detection results show that the proposed method based on composite features can accurately mark the vehicle regions of interest with 97% accuracy.
4.1.1. Detection Results and Analysis Based on Single Feature Extraction
Figure 8 shows the detection results for the vehicle's region of interest extracted using the taillight feature.

Figures 8(a) and 8(b) show the original picture of the vehicle and the extraction results after the opening and closing operations based on the taillight constraint characteristics, respectively. As can be seen from Figure 8(b), the taillight feature information at the rear of the vehicle is detected reliably under the taillight-pair constraint conditions. Figures 8(c) and 8(d) show the vehicle width determined from the centers of mass of the taillights and the detected rectangular box bounding the region of interest in which a vehicle may exist. According to the taillight-pair test results, the candidate vehicle area, that is, the possible region of interest of the vehicle, can be marked accurately.
Figure 9 shows the detection results for the vehicle's region of interest extracted using the feature of the shadow at the bottom of the vehicle. Figures 9(a) and 9(b) show the original image of the vehicle and the shadow detection results extracted from the gray level of the shadow. It can be seen from Figure 9(b) that the shadow feature at the bottom of the vehicle is detected reliably under the shadow gray threshold. Figures 9(c) and 9(d) show the vehicle width determined from the shadow area and the detected rectangular box bounding the region of interest in which a vehicle may exist. It can be seen from the figure that the candidate vehicle area can also be marked accurately according to the gray value of the shadow area.

4.1.2. Detection Results and Analysis Based on Composite Feature Extraction
Figure 10 displays the detection results extracted from the composite feature of the taillight pair and the shadow. As can be seen from Figure 10(b), the two yellow rectangular boxes obtained from the two features almost coincide. This indicates that the detection results for the same candidate vehicle area using the two features are basically consistent.

A single feature (such as a taillight pair or the shadow under the vehicle) can mark the candidate vehicle area to a certain extent, but in some special cases, missed or false detections may occur.
As can be seen in Figures 11(a) and 11(b), part of the vehicle targets extracted by the taillight feature are missing (see the red vehicle in Figure 11(a)), while the vehicle targets extracted by the shadow gray feature are free of defects. By adopting the composite feature of the taillight pair and the shadow, the detected vehicle targets are complete. Therefore, candidate vehicle detection based on multifeature extraction can greatly reduce the defects of single-feature detection, such as missed or failed detections (see Figure 11(c)). Even so, whenever such confusion occurs, the confused area (even an area marked by the composite feature) needs to be confirmed further.

Figure 12 shows the vehicle detection results under rain, snow, and night conditions based on the composite feature. It is important to note that, under these conditions, the composite-feature detection reduces to detection based on the taillight feature alone because the shadow at the bottom of the vehicle is missing; in effect, the coefficient ks in formula (9) is set to 0 under rain, snow, and night conditions.

It can be seen from Figure 12 that the method proposed in this paper can also accurately determine the vehicle detection area under rain, snow, and night conditions.
Figure 13 shows the detection results of two parallel vehicles extracted by the taillight pair and shadow composite feature.

It can be seen from Figure 13(b) that extraction based on the taillight feature alone can lead to confusion in the vehicle's region of interest when the taillights of the two vehicles are similar in shape and equal in height. The parallel vehicles can be detected by the shadow feature, but the shadow under the vehicles also interferes with the marked area. In order to ensure the accuracy of vehicle identification, the confused area needs to be confirmed further whenever such confusion occurs, even for a region of interest identified by the composite feature. Therefore, an RBF neural network is used to further confirm the marked area.
4.2. Vehicle Recognition Result in ROI Adopting RBF Network
For the marked candidate areas (i.e., the possible vehicle regions of interest), the RBF neural network and the designed MATLAB program are used for further validation, in order to increase the reliability of the conclusion that a vehicle is present in the ROI marked by the composite features.
Figure 14 shows part of the images in the vehicle sample database established in this paper. All the sample images are processed uniformly, and each image is adjusted to a size of 250 × 190 pixels with a horizontal and vertical resolution of 96 dpi. Table 1 shows the extraction results of the hybrid feature parameters of vehicle targets in some test samples.

In order to verify the effectiveness of the RBF network designed in this paper, 850 vehicle samples (as shown in Figure 14) and 850 nonvehicle (negative) samples were established, and the RBF neural network was trained. 60% of the positive samples were randomly selected to complete the test, and the test results are shown in Table 2. It is worth mentioning that the area marked by the middle yellow box in Figure 13(b) is identified as nonvehicle, thus avoiding a false detection.
Figure 15 shows the error performance curve of the RBF neural network test. As can be seen from Figure 15, the designed RBF network meets the training error requirements and offers high recognition accuracy and fast convergence.

5. Conclusions
(1) In this paper, a possible region of interest of vehicles is determined and verified by extracting the composite feature of the vehicle's taillight pair and the shadow at the bottom of the vehicle, based on a large collection of images in various complex environments. The vehicle detection results show that the proposed composite-feature method can accurately mark the vehicle regions of interest with 97% accuracy and greatly alleviates the shortcomings of single-feature methods, such as missed or failed detections.

(2) An RBF neural network recognizer is constructed and tested by extracting a series of parameters of the vehicle's edge and regional features. The test results show that the vehicles in the marked regions can be identified reliably by the RBF neural network recognizer, and its accuracy reaches 94%.
In summary, the research results provide an important reference for the integrated control algorithms of intelligent safety systems such as IACCS, FCW, or multisensor environment perception. Future work will explore detection methods for e-bikes and pedestrians using intelligent control theory and algorithms and will study real-time target tracking control of vehicles, nonmotorized vehicles, and pedestrians in depth.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Acknowledgments
This work was supported, in part, by the Natural Science Foundation of China under Grant nos. 61473139 and 61622303, the Natural Science Foundation of Liaoning Province of China (2019-MS-168), and the Project for Distinguished Professor of Liaoning Province.