Abstract
In the proposed detection method, a single image is used for the rapid quantitative measurement of the damage area on a building surface. By using the cross-ratio invariance between real and image coordinates, the damage area and the side lengths of the selected region were measured using the vanishing point, the vanishing line, and a line segment of known length. Perspective transformation and image binarisation were performed on the selected frame to convert the length information into area information. The damage area was obtained rapidly from the proportion of pixels in the damage region relative to the pixels in the selected region. Corner detection and subpixel methods were combined to determine the measurement errors caused by the selection of measurement points. The experimental results revealed that quantitative area measurement of surface damage on buildings can be realised using the proposed method.
1. Introduction
Surface damage, such as cracks, rusting, and spalling, has been reported in many houses, seaport projects, and bridges worldwide within ten years of their construction. The deterioration in the durability of concrete is a safety concern. Damage on concrete surfaces is a critical factor because the deterioration of both concrete and reinforcement involves water as a participant or medium. Concrete surfaces function as a barrier that prevents the invasion of harmful external substances such as water, oxygen, carbon dioxide, and chloride salts [1, 2]. The quality of the surface concrete is therefore a critical determinant of surface damage and affects the overall quality and service performance of a structure. However, because of mixing, solidification, hardening, and loaded servicing, concrete structures are affected by various complex factors, which may result in numerous defects and damage. These defects include cracks, water seepage, spalling, honeycombs, pitted surfaces, exposed reinforcement, and rusting, all of which affect the durability of concrete structures.
Concrete defects are a critical indicator in safety and structure evaluation. At present, however, concrete defects are mostly detected manually: a visual inspection is performed to evaluate the defect level. This method of testing is subjective and limited by the expertise of the inspecting personnel, which can cause errors and uncertainty in the results because of subjectivity and randomness [3]. Therefore, numerous studies have focused on the quantitative detection of concrete defects. Among them, nondestructive testing (NDT) methods, which include infrared recognition and digital image recognition techniques, have gained considerable attention.
Systemic frameworks for the automatic detection of road and bridge surface damage have been studied, and notable detection systems include ARIA and PCES from the United States, KOMATSU from Japan, CREHOS from Switzerland, and WiseCrax from Canada [4]. These systems can be used to identify cracks in real time, generate crack graphs containing information such as crack location, geometric dimensions, and area, and automatically establish the corresponding image database. Suwwanakarn et al. [5] used a Gaussian filtering algorithm to accurately identify holes on concrete surfaces. However, the detection ability of this method is easily affected by stains and other defects on the surface. Zhu and Brilakis [6] proposed an automatic and quantitative method to characterise the influence of surface holes on concrete surface quality. In this method, a speckle filter is applied to a high-contrast image of the concrete surface within a fixed range to accurately detect holes. The number, size, and area of the holes could be approximated according to the filter size and pyramid levels. Balbin et al. [7] developed a surface crack detection technique by using pattern recognition and integrated image processing (e.g., grey transformation, expansion, Laplacian of Gaussian algorithm, and Canny detection). The image-processing techniques included cascade target detection after Haar training, edge detection, and multiple Haar wavelet transform classification of surface cracks in the concrete structure. This method could be effectively used to identify cracks and estimate the crack area. To realise quasi-real-time and simultaneous detection of multiple damage types, Cha et al.
[8] used a region-based convolutional neural network to train on multiple damage objects, overcoming acquisition effects such as illumination and shadow changes, and obtained average accuracies of 90.6%, 83.4%, 82.1%, 98.1%, and 84.7% for the detection of concrete cracks, medium and high levels of steel rusting, bolt rusting, and steel delamination, respectively. Thus, quasi-real-time, simultaneous, and automatic visual detection of multiple types of structural surface damage was realised. German et al. [9] used a threshold algorithm based on local entropy to determine the spalling area. Combined with image-processing methods such as template matching and morphological operations, a novel global threshold algorithm was proposed to measure the exposure of longitudinal reinforcement (depth of spalling into the column) and the length of spalling along the column. This method was tested on a database of reinforced concrete column damage images collected after the Haiti earthquake in 2010 and achieved an accuracy as high as 81.1%, higher than that of manual measurements. Lee et al. [10] developed a rust detection method based on statistical data acquisition and colour image multivariate analysis preprocessing to determine whether rust defects existed in a given image; detection was based on comparing the eigenvalues of defective and nondefective images. Zhang et al. [11] proposed a deep learning method for pavement crack detection. In this method, low-cost sensors were used to capture images, which were then manually labelled. The images were classified using deep convolutional networks to extract features that effectively distinguish pavement cracks from backgrounds, which yielded a high recognition accuracy. Chen et al.
[12] combined the Fourier transform and a support vector machine (SVM) and proposed an SVM-based rust assessment approach (SVMRA) for recognising steel bridge rust in images captured under nonuniform lighting. Compared with the simplified K-means algorithm and the box-and-ellipse-based adaptive-network-based fuzzy inference system algorithm, the SVMRA proved to be effective for rust images with uneven surfaces, red or brown backgrounds, and nonuniform illumination. Oliveira and Correia [13] proposed a novel framework for automatic crack detection and classification using images captured while travelling at high speeds. The crack image information was extracted using a Moire filter, and the dark pixels in the image were identified with a dynamic threshold, which generated the entropy block matrix. The thresholded image was divided into nonoverlapping blocks for entropy calculation, which were used to distinguish horizontal, vertical, hybrid, and crack-free images.
Many studies on concrete appearance defect detection have been conducted in China and abroad, and the use of computer vision technology for this purpose has attracted considerable research attention. The focus of this research is the identification and classification of apparent concrete defects and the quantitative measurement of the relevant geometric characteristics of the defects. This study focused on the quantitative area measurement of apparent spalling regions in concrete by using a single image, a method that exhibits excellent operability and high measurement accuracy.
2. Measurement Principle and Method
The geometric dimensions of a captured object can be measured using single- or multiple-image methods. Compared with measurement using multiple images, the single-image method avoids image matching and camera calibration and mostly depends on the geometric constraints of the photographed object. Buildings exhibit abundant three-dimensional parallel lines. Therefore, determining a geometric attribute from a single building image is conducive to geometric measurement. The cross ratio, defined as the ratio of two simple ratios, is invariant under perspective projection. In terms of image measurement, the cross ratio of points in real space is equal to that of the corresponding points in the image. By combining the cross ratio and the vanishing point, the geometric constraints in a single image can be used to calculate the length of a line segment on a straight line and the area of a specified region [14, 15]. The steps for quantitatively measuring apparent concrete defects are presented in Figure 1.

2.1. Length Calculation of the Line Segment on the Straight Line
As displayed in Figure 2, A, B, C, and D are four points on the same line in real space, V is the vanishing point of the line, and a, b, c, and d are the points in the captured image corresponding to A, B, C, and D, respectively. Therefore, the cross-ratio relationship of the points can be expressed by Cr(A, B; C, V) = Cr(a, b; c, v). The cross-ratio relationship can be expressed as follows:

(AC · BV)/(BC · AV) = (ac · bv)/(bc · av), (1)

where AC and ac represent the distance between two points in real space and the distance between the corresponding two points in the image, respectively; AV and BV, which are infinite, represent the distances from the points in real space to the vanishing point; and av and bv represent the distances from the points in the image to the vanishing point.

Because the vanishing point V is a point at infinity in real space, the ratio AV/BV tends to 1, and equation (1) can be transformed into the following:

AC/BC = (ac · bv)/(bc · av). (2)
The term on the right-hand side of equation (2) can be obtained from the image (assumed to be k); hence, AC = k · BC. Therefore, the distance between A and C can be calculated from the distance between B and C. Similarly, using the cross ratio of a, b, d, and v, the distance between B and D can be calculated in the same manner.
Conclusion 1 can be obtained from the aforementioned analysis: if the length of a line segment on a straight line and the vanishing point of the line are known, the distance between any two points on the straight line can be obtained. Although the cross-ratio property itself is independent of the positions of the points, the position of the line segment to be calculated relative to the known line segment should still be considered in the calculation.
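Conclusion 1 can be sketched numerically in Python (the language used later for the experiments). The function below is a minimal illustration, not the original implementation; the point labels follow the discussion above, and it assumes B lies between A and C so that AC = AB + BC.

```python
import numpy as np

def length_from_cross_ratio(a, b, c, v, ab_real):
    """Real length AC from collinear image points a, b, c, the vanishing
    point v of their line, and the known real length of AB.
    Assumes B lies between A and C, so AC = AB + BC."""
    pts = np.asarray([a, b, c, v], dtype=float)
    # Project the collinear image points onto the line direction to get
    # signed one-dimensional coordinates.
    direction = pts[3] - pts[0]
    direction /= np.linalg.norm(direction)
    ta, tb, tc, tv = pts @ direction
    # Equation (2): AC / BC = (ac * bv) / (bc * av)
    r = ((tc - ta) * (tv - tb)) / ((tc - tb) * (tv - ta))
    bc_real = ab_real / (r - 1.0)   # AB = AC - BC = (r - 1) * BC
    return r * bc_real              # AC = r * BC
```

For example, for a camera that images real positions 0, 1, and 3 m on a line at image positions 0.5, 1.0, and 1.4 with vanishing point 2.0, the function recovers AC = 3 m.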
2.2. Calculation of Line Segment Length in the Two-Dimensional Plane
To satisfy the requirements of practical engineering detection, the aforementioned method should be combined with other geometric conditions to extract geometric information on a two-dimensional plane. As displayed in Figure 3, two vanishing points in the same plane are obtained from two groups of intersecting parallel lines, and the vanishing line is determined. After measuring the length of the reference segment AB, the length of a segment CD parallel to AB can be obtained using the following method. AC is connected and extended to intersect the vanishing line at point P, BP is connected, and BP is ensured to meet line CD at point E. Two straight lines intersecting at the same vanishing point are parallel in real space. Therefore, the quadrilateral ABEC is a parallelogram, and AB and CE have the same length. The length of line segment CE is thus obtained, and the calculation of a line-segment length in a plane is transformed into a calculation on a straight line. According to Conclusion 1 in Section 2.1, the length of CD can be obtained, as given in Figure 3.

Conclusion 2 is as follows: given the length of a reference line segment in a plane and the vanishing line of the plane, the distance between any two points on a straight line parallel to the reference line segment in the reference plane can be obtained.
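The construction behind Conclusion 2 can be written compactly with homogeneous coordinates, in which a line through two points and the intersection of two lines are both cross products. The sketch below is illustrative (the helper and point names are not from the original implementation) and assumes the transferred point E lies between C and D:

```python
import numpy as np

def _h(p):
    """Homogeneous coordinates of a 2-D image point."""
    return np.array([p[0], p[1], 1.0])

def _meet(l1, l2):
    """Intersection point of two homogeneous lines."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

def parallel_segment_length(a, b, c, d, vanishing_line, ab_real):
    """Real length of CD, imaged parallel to the reference AB of known
    real length ab_real, given the homogeneous vanishing line of the
    plane. Assumes E (= C shifted by AB) lies between C and D."""
    line_ab = np.cross(_h(a), _h(b))
    line_cd = np.cross(_h(c), _h(d))
    # Parallel lines share a vanishing point, which lies on the vanishing line.
    u = _meet(line_ab, line_cd)
    # Transfer AB onto line cd: ABEC is a parallelogram, so CE = AB.
    p = _meet(np.cross(_h(a), _h(c)), vanishing_line)
    e = _meet(np.cross(_h(b), _h(p)), line_cd)
    # Conclusion 1 on the collinear points c, e, d with vanishing point u.
    pts = np.asarray([c, e, d, u], dtype=float)
    direction = pts[3] - pts[0]
    direction /= np.linalg.norm(direction)
    tc, te, td, tu = pts @ direction
    r = ((td - tc) * (tu - te)) / ((td - te) * (tu - tc))  # = CD / ED
    return r * ab_real / (r - 1.0)                          # CD
```

A synthetic check is straightforward: image a plane through an arbitrary homography, take the image of the line at infinity as the vanishing line, and verify that the known world length is recovered.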
2.3. Area Calculation of the Measurement Region
Combining Conclusions 1 and 2 in Sections 2.1 and 2.2, the area measurement of the apparent damage in concrete comprises the following steps:
(1) Select the apparent damage region with a rectangular frame according to the geometric relationship in the test plane.
(2) Calculate the length and width of the rectangular frame according to Conclusions 1 and 2 in Sections 2.1 and 2.2.
(3) Restore the selected region in the image into a standard rectangle through perspective transformation, and calculate the area of the rectangle according to its length and width obtained using the algorithm.
(4) Obtain the binary image of the apparent building damage region through morphological filtering and binarisation of the selected area.
(5) Count the number of pixels corresponding to the damage in the binary image, and obtain the actual area of the damage region by the ratio operation according to the following equation:

S_d = (N_d / N) × S, (3)

where S_d is the area of the damage region, S is the area of the measurement region, N_d is the number of pixels in the damage region, and N is the number of pixels in the measurement region.
Conclusion 3 is as follows: in an image of a building with rich geometric lines, the damage area within a rectangular frame on the building surface can be obtained by locating the two vanishing points and the vanishing line.
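The ratio operation in equation (3) reduces to a single pixel count over the binary image. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def damage_area(binary_image, region_area):
    """Equation (3): S_d = (N_d / N) * S, where N_d is the number of
    damage (value 255) pixels and N is the total pixel count of the
    rectified measurement region of real area S."""
    n_damage = int(np.count_nonzero(binary_image == 255))
    return region_area * n_damage / binary_image.size
```

For instance, a region of 24.3973 m2 whose binary image contains 10% white pixels yields a damage area of about 2.44 m2.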
3. Measurement Tests
To test the feasibility of the aforementioned method in practical engineering applications, an image of a building with obvious damage on its outer surface was captured using a Nikon D7200 digital camera. The relevant experimental calculations were implemented in Python with the OpenCV library.
3.1. Test of Scene 1
Scene 1 displays a building with a height of 15 m and obvious spalling on the upper part of its outer wall, which is difficult to measure manually. The images were captured directly with a hand-held single-lens reflex (SLR) camera in automatic mode, with no adjustments during the process.
3.1.1. Obtaining the Vanishing Point and Vanishing Line
A group of parallel lines in real space extends infinitely and intersects at a vanishing point, and all parallel lines in the same direction in a plane intersect at the same vanishing point. Because all vanishing points of a plane lie on its vanishing line, a two-dimensional plane has only one vanishing line. In this study, two groups of parallel lines were used to determine the vanishing points V1 and V2, and the vanishing line was determined from the obtained vanishing points. As displayed in Figure 4, the coordinates of the vanishing points were (2056.3, 1326.3) and (497.1, −725.2), respectively.
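In homogeneous coordinates, the vanishing point of a pair of imaged parallel lines is simply their cross-product intersection, and the vanishing line joins the two vanishing points. A sketch (function names illustrative):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(seg1, seg2):
    """Intersection of two imaged parallel lines, each given as a pair
    of image points ((x1, y1), (x2, y2))."""
    v = np.cross(line_through(*seg1), line_through(*seg2))
    return v[:2] / v[2]

def vanishing_line(v1, v2):
    """Homogeneous vanishing line joining two vanishing points."""
    return np.cross([v1[0], v1[1], 1.0], [v2[0], v2[1], 1.0])
```

For example, the imaged parallel lines through (0, 0)–(1, 1) and (0, 1)–(2, 2) intersect at the vanishing point (2, 2).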

3.1.2. Selecting Measurement Points
The measurement points shown in Figure 5 were obtained through human-computer interaction. The selection steps are as follows:
(1) As displayed in Figure 5(a), draw auxiliary lines to determine each parallel line in the image and add the measurement points.
(2) As displayed in Figure 5(b), import the picture into the algorithm implemented in Python to find a group of transverse parallel lines based on the measurement points.
(3) As displayed in Figure 5(c), find a group of vertical parallel lines based on the measurement points.
(4) As displayed in Figure 5(d), set the actual size of the reference object in the programme and box the reference object according to the measurement points.
(5) As displayed in Figure 5(e), box the area to be measured according to the measurement points.

In Figure 5(a), the outer quadrilateral is the framed region, the two inner quadrilaterals correspond to Windows 1 and 2, and the two marked sides of the framed region are the side lengths to be measured. The actual length of each side of Windows 1 and 2 was obtained using a hand-held rangefinder.
3.1.3. Calculating the Damage Area in the Target Region
After obtaining the vanishing points and selecting the measurement points, the area of the damage region was calculated using the following three steps: (1) the two side lengths of the selected region are calculated, and the boxed region is restored to a rectangle through perspective transformation to calculate its actual area; (2) the damage region is determined using the Otsu segmentation method; and (3) equation (3) is used to obtain the area of the target region.
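The perspective restoration of the boxed quadrilateral to a rectangle can be performed with cv2.getPerspectiveTransform and cv2.warpPerspective; the sketch below solves for the same 3 × 3 homography directly in NumPy so that the mapping is explicit (the corner values are illustrative, not those of the experiment):

```python
import numpy as np

def perspective_matrix(src, dst):
    """3x3 homography mapping the four src image corners to the four dst
    rectangle corners (the same linear system solved by
    cv2.getPerspectiveTransform)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, p):
    """Map an image point through the homography H."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

Each of the four corner correspondences contributes two linear equations, giving eight equations for the eight unknown homography entries (the ninth is fixed to 1).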
The side lengths of the framed region shown in Figure 5 are difficult to measure directly. To verify the accuracy of the adopted method, two sides of Window 1 were considered the reference and measured to be 1.16 and 1.78 m, respectively, by using the rangefinder. The remaining sides of Window 1 and all sides of Window 2 were considered the sides to be measured. By comparing the actual lengths with the measured lengths, the measurement accuracy of the proposed test method was investigated.
The test results are displayed in Table 1 and demonstrate relative measurement errors between 1.705% and 5.073%, with an average of 2.30%. The proposed method exhibited high accuracy in practical measurement and could satisfy the requirements of engineering detection. The two side lengths of the framed region were calculated to be 2.6473 and 9.2159 m, respectively.
3.1.4. Extracting the Damage Region
After obtaining the side lengths of the framed region, the actual area was calculated to be 24.3973 m2 because the region was rectangular in real space. To accurately extract the damaged part, the image was restored to a rectangle through perspective transformation, as displayed in Figure 6(a). In this study, the Otsu segmentation method was used to obtain the optimum grey threshold, which was then used to segment the image and yield a binary image, as displayed in Figure 6(b).
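The Otsu threshold used here is available directly as cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU); the NumPy sketch below makes the underlying between-class variance maximisation explicit (function names illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Grey level maximising the between-class variance of the
    background/foreground split (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability
    mu = np.cumsum(p * np.arange(256))   # class-0 mean times omega
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

def binarise(gray):
    """Binary image: damage pixels above the Otsu threshold become 255."""
    t = otsu_threshold(gray)
    return np.where(gray > t, 255, 0).astype(np.uint8)
```

On a strongly bimodal image the method places the threshold between the two grey-level modes, so the damage pixels separate cleanly from the background.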

According to the statistical results of Figure 6(b), the total number of pixels inside the measurement area was 243144, the number of white pixels with a grey value of 255 was 28300, and the damage region accounted for 11.6392%. The area of the selected region was calculated to be 24.3973 m2, and the area of the damaged parts inside the measurement region was 2.83965 m2.
3.2. Test of Scene 2
The performance of the proposed method was tested in various scenes. Scene 2 was configured for a close-up test in which the shooting method and test steps were consistent with those of Scene 1. To verify the accuracy of the proposed method, the side lengths of the selected rectangle were measured directly, as displayed in Figure 7(a). Specifically, the rectangular frame had dimensions of 2.37 m × 1.21 m, and the reference was a piece of red cardboard with dimensions of 0.518 m × 0.374 m.

Figure 7(a) illustrates the scene arranged on-site before testing and the measurement range D1D2D3D4, a region framed with red tape. Figure 7(b) depicts the frame manually selected on the computer. The test results are displayed in Table 2.
The test results in Table 2 revealed that the proposed algorithm exhibited a high accuracy in a close-up measurement of apparent building damage. The area of the framed region was 2.8677 m2, and the calculation yielded a result of 2.7942 m2, with a relative error of 2.563%. As displayed in Figure 7(c), after obtaining the binary image, the damage region accounted for 17.04%, indicating an area of 0.4761 m2.
3.3. Test of Scene 3
In Scene 3, the black cells of a printed black-and-white chessboard were considered the damage region to be tested. In contrast to Scenes 1 and 2, the exact geometric information of the chessboard in Scene 3 was known, so the detection performance of the algorithm could be characterised in terms of the damage-region proportion. As determined from direct measurements, the size of a single cell in Figure 8(a) was 3.4 cm × 3.4 cm, and the size of region E1E2E3E4 in Figure 8(b) was 13.6 cm × 13.6 cm. The reference object was a sheet of pink A4 paper with dimensions of 21 cm × 29.7 cm. The test results are displayed in Table 3.

The test results presented in Table 3 revealed that the test performance in the chessboard scene was the best, with side-length errors of less than 1%. In this scene, the actual area of the selected region was 0.018496 m2, and the area obtained from the proposed method was 0.01875528 m2. The Otsu segmentation method was used to obtain the optimum grey threshold. As displayed in Figure 8(c), the damage regions (i.e., the black cells) accounted for 50.01% of the selected region. The actual area of the boxed damage region was 0.009248 m2, and the area calculated using the proposed method was 0.00937952 m2, an error of only 1.402%.
4. Error Analysis for Measurement Point Selection
The test results of Scenes 1, 2, and 3 proved that the proposed method is effective. The results of Scenes 1 and 2 indicate that the scheme is applicable to both long-distance and close-range measurements. Furthermore, the test error in Scene 3 was less than 2%, which indicates that a simple, uniform background can effectively improve the accuracy of this single-image measurement method in practice.
Manual marking was performed when selecting the measurement points. However, this type of subjective point selection cannot accurately and rapidly locate each measurement point and cannot avoid pixel errors. To reduce the error caused by the selection of measurement points, Scene 1 was considered as an example to realise corner detection using the OpenCV library. The goodFeaturesToTrack function was used with the following parameter settings: goodFeaturesToTrack(src_img, corners, maxCorners = 1, qualityLevel = 0.01, minDistance = 10, Mat(), blockSize = 3, useHarrisDetector = false). The detected points are shown in Figure 9.

The existing corner detection algorithms cannot accurately locate all the measurement points [16], and the points detected in the image are pixel blocks containing multiple pixels. Therefore, the influence of measurement point selection on the accuracy of the measurement results was investigated at the subpixel level, using the corner detected in Figure 9 as an example.
After obtaining the length of the line segment presented in Table 1, subpixel processing of the detected corner point was conducted, as displayed in Figure 10. The processed pixel block contained nine pixels that could each be used as the measurement point. The distances measured using the various pixel points are displayed in Table 4. Pixel 4 demonstrated the minimum relative measurement error of 0.43%, whereas pixel 3 led to the maximum error of 4.35%.

As displayed in Figure 11, the values measured from the third column of measurement points (points 3, 6, and 9) were the largest, the values measured from the second column (points 2, 5, and 8) were moderate, and the differences between the values measured from the first column (points 1, 4, and 7) and the true value were the smallest. The error between the distance measured from point 4 and the actual distance was only 0.5 cm.

When pixel 4 was selected as the measurement point, the minimum error (0.43%) was achieved, and the difference between the measured value and the true value was only 0.5 cm. Thus, the combination of corner detection and subpixel point selection can effectively reduce the error resulting from measurement point selection.
5. Conclusions
The principle of two-point ranging in a single image was used to convert distance information into damage-area information in this study. The proposed measurement method involves capturing a single image with an inexpensive camera or a smartphone for rapid long-distance or close-range detection of a damage region on the outer surface of a building. This approach allows quantitative identification of the apparent damage of buildings. In addition to the error caused by the selection of measurement points, the accuracy of the proposed method may be affected by the following factors:
(1) Error of damage-region extraction: many types of damage, such as spalling, water stains, honeycombs, and pitted surfaces, exist on the outer surfaces of buildings. In particular, the grey-value difference between water stains and the background is small, and the coexistence of multiple damage types increases the difficulty of identifying the damage region. Therefore, the Otsu threshold segmentation method was adopted because of its insensitivity to image brightness and contrast and its high error tolerance. Furthermore, external stains on buildings should be excluded by human judgement because they cannot be accurately distinguished through image processing.
(2) Calculation error of the vanishing points: in the proposed measurement method, the vanishing point in the two-dimensional image is calculated by determining the distant intersection of multiple parallel lines. When the intersection point is far from the image, the calculation error for the vanishing point is amplified, which affects the calculation results. Robust algorithms, such as random sample consensus (RANSAC), should be used to fit the vanishing point, which considerably improves measurement accuracy.
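The robust fitting suggested above can be sketched as follows. This is a deterministic variant in the spirit of RANSAC that hypothesises a vanishing point from every pair of lines, keeps the candidate with the most inliers, and refines it by least squares; the tolerance and names are illustrative assumptions.

```python
import numpy as np

def fit_vanishing_point(segments, tol=2.0):
    """Fit a vanishing point to line segments contaminated by outliers.
    Each segment is a pair of image points ((x1, y1), (x2, y2))."""
    lines = [np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
             for p, q in segments]

    def point_line_dist(l, pt):
        return abs(l[0] * pt[0] + l[1] * pt[1] + l[2]) / np.hypot(l[0], l[1])

    best_inliers = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            v = np.cross(lines[i], lines[j])
            if abs(v[2]) < 1e-12:      # lines parallel in the image: skip
                continue
            cand = v[:2] / v[2]
            inliers = [l for l in lines if point_line_dist(l, cand) < tol]
            if len(inliers) > len(best_inliers):
                best_inliers = inliers
    # Least-squares refinement: each inlier line gives l0*x + l1*y = -l2.
    A = np.array([[l[0], l[1]] for l in best_inliers])
    b = np.array([-l[2] for l in best_inliers])
    vp, *_ = np.linalg.lstsq(A, b, rcond=None)
    return vp
```

With four segments converging on a common point and two unrelated outlier segments, the outliers gather too few inliers to be selected, and the refinement recovers the common intersection.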
In addition, factors such as error propagation, image sharpness, and lens distortion cause errors in the proposed method. SLR cameras with a high resolution should be used, and the captured images should be corrected for distortion before the relevant region is measured.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.