Abstract

Disasters in deep underground caverns often occur during the excavation or operation stage and are closely related to the growth and evolution of cracks in the surrounding rock. Understanding the spatial distribution of internal cracks in a rock mass is the key to revealing its deformation and failure mechanism. A transparent resin material with a prefabricated crack was used to simulate an initial crack inside rock, and uniaxial compression experiments on the transparent rock material over the complete stress-strain path were carried out on a rigid rock mechanics testing machine. Four high-speed cameras were arranged around the specimen to record images of the same moment from different angles. Based on the theory of stereoscopic vision, a calculation method for the three-dimensional structure of crack propagation inside the rock was proposed, which can quantitatively describe changes in the spatial morphology of the crack. The method therefore provides reliable theoretical support for the reinforcement of surrounding rock in underground engineering.

1. Introduction

In deep underground caverns, disasters such as roof fall and rib spalling often occur during excavation. Bolt-shotcrete support is a commonly used reinforcement method, but it is poorly targeted because the internal defects of the surrounding rock and their subsequent development are not well understood, resulting in the frequent occurrence of underground engineering disasters. In previous research on rock cracks, the problem has often been simplified to two dimensions because an analytical solution is then easier to derive; in practical engineering, however, it is a spatial problem. Research on the spatial distribution and evolution of rock cracks is crucial to the stability of underground caverns when the stress environment changes. Therefore, studying the internal cracking mode and evolution mechanism of rock is of great theoretical significance and engineering application value.

As it is almost impossible to create cracks inside an intact real rock, most current studies use rock-like materials, such as silica glass, poly(methyl methacrylate) (PMMA), ceramics, resins, gypsum powder, and cement, to simulate crack propagation in rocks. Griffith first used silica glass to study the propagation of penetrating cracks. Wong et al. [1] and Adams and Sines [2] used PMMA to study how cracks first fracture and then propagate. Reyes [3, 4] conducted uniaxial compression tests on gypsum specimens containing two open prefabricated cracks.

Also, due to the opacity of real rocks, the 3D fracture and propagation of internal cracks cannot be observed by the naked eye or with ordinary optical instruments. Most rock-like material simulations can only show crack propagation on the surface under the assumption of two-dimensional (2D) plane cracks, and experiments on 3D crack propagation in transparent materials such as glass and resin have only been observed by the naked eye or recorded with cameras. However, real rocks and some rock-like materials are opaque only to visible light; they may still be translucent to electromagnetic waves in other regions of the spectrum. Accordingly, many studies have used other penetrating tools or methods, such as computed tomography (CT), acoustic emission, infrared spectroscopy, and nuclear magnetic resonance spectroscopy. Ren et al. [5] developed a real-time CT scanning device to observe 3D meso-damage evolution in real rocks. Scholz [6] used acoustic emission to study rock fracturing before macroscopic failure. Zhang et al. [7, 8] used acoustic emission to study seismic wave amplification. However, the accuracy of these techniques, except for CT, is very low, and they can only serve as a supplement to the monitored phenomena. Although CT can achieve high-accuracy results, it is currently difficult to balance real-time capability against experimental cost.

Therefore, this article proposes a 3D crack reconstruction method based on visible-light stereo vision for studying simulated 3D crack propagation in rock. It can provide quantitative data on the 3D crack shape, propagation front, propagation angle, and so on. Transparent resin materials were used to simulate real rocks, so that the 3D expansion of internal cracks could be observed with visible light. The proposed method uses multiple cameras to record images from different angles at the same time and then uses algorithms to compute the 3D coordinates of image points. Provided that the frame rate of the cameras is high enough, real-time recording without unloading can be achieved.

2. Principle of Stereo Vision

Stereo vision is an important technique for 3D reconstruction in computer vision, which uses 2D images from different perspectives to obtain the 3D geometric information of objects. Usually, two cameras are used to acquire two images from different angles, and then, the spatial position information corresponding to each pixel is determined based on the principle of triangulation, namely, intersection of two rays through the two cameras, as illustrated in Figure 1.

A complete binocular stereo vision system involves the following key steps:

(1) Camera model and relative position calibration: determination of the internal and external parameters of the cameras. A 2D image is a perspective projection of a 3D scene and can be represented by the classical pinhole imaging principle, which is a linear projection model. When actual lens distortion is considered, however, the model becomes nonlinear [9–15]. At present, the most widely used calibration method is that described by Zhang [16], which is simple, convenient, and accurate.

(2) Image matching, also known as image registration: the main task is to determine the pixel in the so-called target image that corresponds to a specified pixel in the reference image, meaning that the two pixels in the images taken by the two cameras are projections of the same spatial point. Common matching methods first reduce the 2D search space to a one-dimensional (1D) search space using epipolar constraints [17–20]. Then, image matching algorithms, including local [21–26] and global [27] algorithms, are used to search for corresponding points in the 1D space. Scharstein and Szeliski [28] set up an online platform for evaluating image matching algorithms, which provides the results of many existing algorithms and ranks them by score.

(3) 3D spatial point reconstruction: the intersection of the two rays determined by the matched pixels and the camera optical centers is calculated, yielding the spatial coordinates of the 3D point. The two rays may not actually intersect in space, so the usual approach is to compute a least-squares solution or to minimize the reprojection error by maximum likelihood estimation [29].
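Step (1) above rests on the linear pinhole camera model, which can be sketched numerically. The following Python snippet projects a 3D point to pixel coordinates with a hypothetical intrinsic matrix; all numeric values are illustrative and not taken from the experiment.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole (linear) projection of a 3D world point X to pixel coordinates:
    x ~ K (R X + t). Lens distortion is ignored, as in the linear model."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]                       # perspective division

K = np.array([[800.0, 0.0, 320.0],            # hypothetical intrinsics:
              [0.0, 800.0, 240.0],            # focal length 800 px,
              [0.0, 0.0, 1.0]])               # principal point (320, 240)
R, t = np.eye(3), np.zeros(3)                 # camera at the world origin
print(project(K, R, t, np.array([0.1, -0.05, 2.0])))  # → [360. 220.]
```

Calibration in step (1) is essentially the inverse problem: recovering K (and the distortion terms of the nonlinear model) from images of a known pattern.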

The steps and processes of the binocular stereo vision system are shown in Figure 2.

3. Reconstruction of 3D Static Crack

Based on the principle of the stereo vision method, the whole process of the 3D crack reconstruction in transparent rock samples is depicted in Figure 3, and it will be described in detail in this section.

3.1. The Sample Shape and Binocular Camera

Because only part of the surface information of an object can be obtained from a single angle, the ideal specimen shape for obtaining complete information about internal cracks in all directions would be cylindrical, since a cylinder can be observed from any angle. However, a cylindrical side surface is curved, and the refraction distortion is severe, which would require extremely complex surface refraction corrections. A flat side surface is much easier to handle, and thus a regular polygonal prism is used. Since at least three viewing angles are required to reconstruct the complete crack, the specimen we designed uses the simplest such shape, a triangular prism, as shown in Figure 4.

The binocular camera is built on a customized FPGA board and has two camera lenses, as shown in Figure 5.

Three binocular cameras are installed, and each is placed right on a side surface of the triangular prism specimen, as illustrated in Figure 6.

3.2. Refraction Geometric Model

Compared with the basic principle of stereo vision triangulation in Section 2, the 3D reconstruction of cracks in transparent specimens differs markedly because of refraction. As shown in Figure 7, the light is bent at the refraction interface, so the straight-line propagation model cannot be used for 3D localization. In earlier research on 3D stereo vision, the refraction effect was often neglected [30], or it was compensated by an approximate modified model. However, refractive deformation is highly nonlinear and scene-dependent, so ignoring refraction or adopting an approximate compensation model cannot eliminate its effect on 3D reconstruction. Refractive deformation depends on the depth of the 3D points, so in theory no image-space deformation model can describe it [31]. Therefore, we must take the depth of the 3D points into account, use optical principles to build an explicit geometric model of the refraction, and thereby establish an accurate refraction model.
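The depth dependence of refractive deformation can be illustrated numerically: under Snell's law, the lateral offset between a point's true position and its apparent (straight-line) position grows linearly with the point's depth below a flat interface, so no fixed image-space warp can compensate for it. The sketch below assumes a flat interface and an illustrative relative refractive index of 1.5; the numbers are not from the experiment.

```python
import math

def refraction_shift(theta1_deg, depth_mm, mu=1.5):
    """Lateral offset between the straight-line (no-refraction) position and
    the true position of a point depth_mm below a flat interface, viewed at
    incidence angle theta1. mu is the assumed refractive index of the
    sample relative to air."""
    t1 = math.radians(theta1_deg)
    t2 = math.asin(math.sin(t1) / mu)         # Snell's law: sin t1 = mu * sin t2
    return depth_mm * (math.tan(t1) - math.tan(t2))

for d in (10.0, 30.0, 60.0):                  # depths inside the specimen, mm
    print(d, round(refraction_shift(30.0, d), 2))  # → 2.24, 6.71, 13.43
```

The offset triples when the depth triples, which is exactly why a single 2D deformation model of the image cannot describe the effect.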

The refraction geometric model is constructed based on the classical laws of optical refraction. The geometric relationship of camera refraction is shown in Figure 8. As lens distortion can be corrected and eliminated, we still use the linear camera model. The camera internal parameter matrix is denoted by K, the pixel coordinates of the imaging point by m, the unit normal vector of the planar refraction interface by n, and the coordinates of a known point on the refraction interface by P0. The refractive index of the gas medium is approximately 1, and the refractive index of the transparent sample is denoted by μ, which by default is relative to air. The angle between the incident ray and the normal vector in air is denoted by θ1, and the angle between the refracted ray and the normal vector in the transparent sample medium by θ2. The intersection of the incident ray with the refraction interface is denoted by Q = (xQ, yQ, zQ), and the 3D point in the transparent sample by P = (xP, yP, zP). The coordinates of all these quantities are defined in the camera coordinate system.

The process involves the following steps:

(1) Back-projection formula: calculate the coordinates of the 3D point P from the coordinates of the imaging point m.

First, Q is calculated from the linear camera model together with the constraint that Q lies on the interface plane, n · (Q − P0) = 0. These relations contain three unknowns (the coordinates of Q) and three independent equations, so a unique solution can be obtained.

Clearly, when Q is fixed, the point P can move anywhere along the refracted ray, so only the direction of the refracted ray can be determined at this stage. Let d be the unit direction vector of the incident ray and r the unit direction vector of the refracted ray. The refraction formula (sin θ1 = μ sin θ2) and the condition that the refracted ray is coplanar with the incident ray and the interface normal give

det([d, n, r]) = 0, |d × n| = μ |r × n|, |r| = 1,

where det(·) denotes the determinant of a matrix, |·| the length of a vector, and the unknowns are the components of r. These are three independent nonlinear quadratic equations, which have four sets of solutions; the refraction law constraint leaves only one valid set. In this paper, instead of eliminating the spurious solutions after solving, the Newton iterative method is used: if the initial iterate conforms to the law of refraction, the final iterative solution converges to the point that also conforms to the law of refraction.

According to the law of refraction, the initial iterate is chosen to satisfy the geometric conditions of the correct solution in Figure 9, expressed with the sign function sgn(·); any initial choice satisfying these inequalities is admissible, and the final iterative solution must also satisfy them.

After obtaining the unit direction vector r of the refracted ray, the spatial point coordinates can be expressed as P = Q + t·r for the appropriate ray parameter t.

(2) Projection formula: calculate the coordinates of the imaging point m from the coordinates of the 3D point P.

The first step is again to determine the intersection point Q of the incident ray with the refraction interface. The coplanarity of the refracted ray with the incident ray and the interface normal, the constraint that Q lies on the interface plane, and the refraction formula together yield a set of nonlinear quadratic equations with the coordinates of Q as unknowns, with at most four sets of real solutions. The system is solved by the Newton iteration method, taking as the initial iterate the intersection with the refraction surface that clearly satisfies the refraction law and the refraction geometry condition in Figure 9.

After obtaining Q, the imaging point m follows directly from the linear camera model.

So far, the two key problems of projection and back projection under refraction, which are the basis of 3D point reconstruction, have been solved. In the resulting formulas, all parameters are assumed known except the imaging point coordinates m, the 3D spatial point coordinates P, and the intermediate quantities involved. The refractive index μ (relative to air) of the transparent sample medium can easily be measured by the needle method. The determination of the camera internal parameter matrix K, the unit normal vector n of the refraction interface, and the point P0 on the refraction interface is described in the next few sections.
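The core of the back-projection step, computing the refracted ray direction from the incident ray and the interface normal, also has a closed vector form equivalent to solving the quadratic system directly. The following sketch assumes unit input vectors and an illustrative relative refractive index μ = 1.5; it is a generic statement of Snell's law, not the authors' exact solver.

```python
import numpy as np

def refract(d, n, mu):
    """Vector form of Snell's law: unit incident direction d, unit interface
    normal n (pointing toward the incident side), relative refractive index mu
    (sample/air). Returns the unit refracted direction, which stays coplanar
    with d and n as the geometric model requires."""
    cos1 = -np.dot(n, d)                      # cosine of the incidence angle
    sin2_sq = (1.0 / mu**2) * (1.0 - cos1**2) # sin^2 of the refraction angle
    cos2 = np.sqrt(1.0 - sin2_sq)
    return (1.0 / mu) * d + (cos1 / mu - cos2) * n

d = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])  # 30° incidence
n = np.array([0.0, 0.0, 1.0])                 # interface normal, z toward air
r = refract(d, n, 1.5)
angle = np.degrees(np.arcsin(np.linalg.norm(np.cross(r, -n))))
print(round(angle, 2))                        # → 19.47 (degrees)
```

The result matches sin 30° = 1.5 · sin 19.47°, the single physically valid root of the quadratic system.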

3.3. Camera Calibration

Since each binocular camera has two lenses, and the position information obtained from the three viewing angles must be considered together to determine the complete spatial position of the crack, each lens must be calibrated separately, and stereoscopic calibration between the lenses is also necessary.

Due to the triangular arrangement of the cameras, the angles between the two lenses of different binocular cameras in Figure 6 differ considerably. It is very difficult for both lenses to observe the calibration board at the same time when a common single-sided checkerboard plane is used [14]. Therefore, an improved two-sided checkerboard calibration board is adopted, as shown in Figure 10. When two lenses are calibrated, one lens observes one checkerboard and the other lens observes the other. During calibration, however, it is necessary to ensure that the corresponding checkerboard intersection points observed by the two lenses have the same spatial coordinates in the world coordinate system. Accordingly, the calibration board must be very thin, and the checkerboard intersection points on the two sides must correspond to one another. To make the board as thin as possible, two identical sheets of thin checkerboard printing paper (each about 0.06 mm thick) are pasted back to back so that the intersections on both sides are strictly aligned. Because ordinary thin paper cannot guarantee a flat plane, two very thin transparent toughened glass films (whose refraction effect can be ignored) are used to clamp the bonded double-sided checkerboard paper.
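One practical subtlety of the double-sided board is index bookkeeping: a corner grid seen from the back of the board has its column order mirrored, so corresponding corners must be re-indexed before both lenses can be assigned the same world coordinates. A minimal sketch of this mapping, with a hypothetical 6 × 9 inner-corner grid (not the actual board dimensions):

```python
def back_side_index(i, j, cols):
    """Map checkerboard corner (row i, col j) as seen from the front to its
    index as seen from the back: because the bonded paper is only ~0.12 mm
    thick, front and back corners share world coordinates, but the back view
    mirrors the column order while keeping the row order."""
    return i, (cols - 1) - j

rows, cols = 6, 9                             # hypothetical inner-corner grid
print(back_side_index(2, 1, cols))            # → (2, 7): same row, mirrored column
```

With this mapping applied, both lenses can be handed an identical list of 3D object points, and Zhang's method proceeds as in the single-sided case.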

3.4. The Refraction Surface Parameter

The refraction surface on the side of the transparent specimen is a plane, as shown in Figure 8. It can be completely determined by the unit normal vector of the interface and a point on the refraction interface.

Since the three side surfaces are symmetrical, only one is taken as an example. The basic principle is to attach thin markers to the four corners of the side surface, so that the markers lie approximately in the plane of the surface, and then to locate the four markers with the calibrated binocular camera. The steps are as follows:

(1) Before using the binocular cameras to observe the transparent specimen, thin circular patches are pasted on the four corners of the rectangular side surface. The patch color, such as red, blue, or green, must be easily distinguished from the background environment; in this study, red patches were used, as shown in Figures 11(a) and 11(b).

(2) Two images of the specimen's side surface with patches are captured simultaneously by the left and right lenses of the binocular camera, and the distortion of both images is removed.

(3) Based on the color feature of the patches, digital image processing techniques are used to extract the binary regions of the red circular patches, as shown in Figures 11(c) and 11(d). Note that the binocular camera is not necessarily exactly perpendicular to the side surface of the sample, so the circular patch regions are actually elliptical.

(4) The center pixel coordinates of the binary regions extracted in step (3) are computed. Since the center of the circle is mapped to the ellipse center in the perspective projection transformation, the centroid of each binary region is taken as the center of the circular patch. The pixel coordinates of the four corners are thereby obtained in both images.

(5) The spatial coordinates of the circular patches at the four corners of the side surface are calculated in the camera coordinate system by the conventional 3D reconstruction method described in Section 2.

(6) The refraction plane of the side surface can be determined from any three of the four corner coordinates. Since four normal vectors can be obtained from the four patched corners, their mean is used as the final normal vector of the refraction plane. The normal vector and point coordinates can be transformed between the coordinate systems of the two lenses of the binocular camera through the stereo calibration parameters.
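Step (6) can be sketched as follows: each of the four triples of reconstructed corner points yields one unit normal, and their orientation-consistent mean is used as the refraction plane normal. The corner coordinates below are illustrative, not measured values.

```python
import numpy as np

def plane_from_corners(corners):
    """Estimate the refraction plane from four reconstructed corner points:
    one unit normal per leave-one-out triple of corners, averaged with a
    consistent sign, plus a point on the plane (the corner centroid)."""
    c = np.asarray(corners, dtype=float)
    normals = []
    for k in range(4):                        # leave one corner out each time
        p0, p1, p2 = np.delete(c, k, axis=0)
        n = np.cross(p1 - p0, p2 - p0)
        n /= np.linalg.norm(n)
        if normals and np.dot(n, normals[0]) < 0:
            n = -n                            # keep a consistent orientation
        normals.append(n)
    n_mean = np.mean(normals, axis=0)
    n_mean /= np.linalg.norm(n_mean)
    return n_mean, c.mean(axis=0)

corners = [[0, 0, 100], [60, 0, 100], [60, 90, 100], [0, 90, 100]]  # mm, coplanar
n, p0 = plane_from_corners(corners)
print(n, p0)                                  # normal ≈ ±[0 0 1], point [30 45 100]
```

Averaging the four estimates damps the reconstruction noise of any single corner, which is why all four patches are used instead of the minimal three.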

3.5. Image Matching

The transparent sample occupies only a small part of the image captured by the camera, and the target of the image matching step is to match the corresponding points of the cracks inside the sample. Therefore, only the transparent sample area of the captured image needs to be preserved; the rest is not required in the image matching calculation. As shown in Figure 12, the region of the transparent sample bounded by the four ellipse centers extracted in Section 3.4 (the four corners of the sample side surface) is called the effective region for image matching.

Before image matching, image alignment is usually performed to reduce the search space. However, the conventional alignment method introduced in Section 2 applies only to points on object surfaces; for points inside a transparent medium it is no longer valid because of refraction. In this article, an alignment correction method for the interior points of transparent materials is proposed. The basic principle is as follows. As shown in Figure 13, one image is used as the reference image I1 and the other as the target image I2. A known point on the refraction interface determines the direction of the refracted ray, and the pixel position on the target image projected by any point on that ray can then be determined using the back-projection formula described in Section 3.2. Calculation and analysis of a large number of images show that the pixel positions projected by the points on a refracted ray lie almost on a straight line. We therefore establish a new image I1′ of the same size as the reference image I1: for each pixel position in I1, the corresponding point on the refraction interface is calculated using the back-projection formula, and the pixel value in I2 mapped from that interface point is set as the pixel value of I1′. The minimum bounding rectangles of the effective regions of I1′ and I2 are then computed and extracted separately, and the aligned rectangles are obtained as shown in Figure 14. The matching crack points in the transparent specimen then lie on the same horizontal line in the two images.

A crack point inside a transparent material is similar to a surface point of an object, but there are also marked differences. On the one hand, light reflected from a surface point maps directly to the image, and its pixel value essentially reflects the intensity of that light; a crack point inside a transparent material, by contrast, is a small displacement discontinuity, and it is precisely this small displacement that changes the refracted light. The crack point is identified by the change in refracted light relative to the background. Consequently, the pixel value mapped from a crack point is strongly affected by the background illumination, since besides the light reflected at the displacement interface it also contains refracted light from all directions of the background environment. When the refracted background light is strong in every direction, the difference between the pixel value mapped from the crack point and the background is too small to identify. It is therefore difficult to apply the conventional automatic matching algorithms (the local and global algorithms mentioned in Section 2) directly. On the other hand, a characteristic of 3D crack growth is that the initial precrack propagates into a continuous 3D surface, so this continuity can be exploited as prior information during matching.

3.6. Spatial Point Reconstruction

The 3D positioning of crack points in the transparent material is similar to the conventional method mentioned in Section 2, but the refraction triangulation principle shown in Figure 7 must be used. The resulting set of equations is solved by the least-squares method, or by maximum likelihood estimation minimizing the reprojection error, to obtain the 3D coordinates of the crack point.
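The least-squares step has a small closed form: the point closest to a set of rays, here the refracted rays anchored at their intersection points with the specimen surfaces. The sketch below is a generic least-squares ray intersection under assumed inputs, not the authors' exact solver; with three binocular cameras, up to six rays can contribute.

```python
import numpy as np

def nearest_point_to_rays(anchors, dirs):
    """Least-squares 3D point closest to a set of rays. Each ray starts at an
    anchor (e.g., the refraction point on the specimen surface) and follows a
    unit direction. Minimizes the sum of squared point-to-ray distances by
    solving the normal equations built from per-ray orthogonal projectors."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for a, d in zip(anchors, dirs):
        P = np.eye(3) - np.outer(d, d)        # projector orthogonal to the ray
        A += P
        b += P @ a
    return np.linalg.solve(A, b)

# Three rays that all pass through (1, 2, 3): the solver recovers that point.
target = np.array([1.0, 2.0, 3.0])
anchors = [np.zeros(3), np.array([5.0, 0.0, 0.0]), np.array([0.0, 5.0, 0.0])]
dirs = [(target - a) / np.linalg.norm(target - a) for a in anchors]
print(nearest_point_to_rays(anchors, dirs))   # → ≈ [1. 2. 3.]
```

With noisy rays the same formula returns the point of minimum total squared distance, which is the least-squares solution described above.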

However, it should be noted that this computation uses corresponding points from a single binocular camera image pair; when there is substantial occlusion between crack surfaces, observation from only one angle cannot capture all of the crack surface information. The three-binocular-camera scheme used in this paper therefore, on the one hand, allows the crack surface to be observed from multiple angles, improving robustness against occlusion, and on the other hand, allows the binocular camera with the best viewing angle to be selected for reconstruction when the direction of precrack propagation is unknown.

4. Reconstruction of 3D Dynamic Expansion Crack

The dynamic process of 3D crack propagation can be recorded as a video file. The capture interval and the upper limit of the recording time can be set on the binocular cameras. At each instant, the images of all binocular cameras are stitched into one large mosaic image that serves as a video frame. When recording 3D crack growth under stress loading, the crack image at each moment can be matched to the stress state using the recording start time and the capture interval. Because the recording algorithm only stores images and does not perform the time-consuming 3D crack reconstruction calculations, its processing time is almost negligible, and the capture interval can be set very small (the minimum depends on the camera frame rate), thus achieving a highly accurate real-time record.

5. Verification of the Reconstruction Method of the Rock with 3D Crack

5.1. Test Instruments and Test Schemes

Based on the stereo-vision 3D crack reconstruction principle and method described above, binocular cameras were developed to observe crack expansion in transparent rock in real time during the test, as shown in Figure 15. The overall size of the test equipment in Figure 15 is approximately 450 × 300 × 400 mm; the frame consists of three vertical rods and three fixed crossbars, and sliding links secure the binocular cameras to the vertical rods. The equipment can be combined with an RMT-150B testing machine for real-time recording of crack expansion under uniaxial compression.

In order to enable the three binocular cameras to observe the cracks from different directions, a regular triangular prism specimen with a side length of 60 mm and a height of 90 mm was adopted. The interior of the transparent rock specimen contains a prefabricated elliptical crack (short axis 10 mm, long axis 20 mm), and the angle between the prefabricated crack plane and the compressive load is 30°, 45°, or 60°. After preparation of the specimen, a small red round patch is glued to each of the four corners of every face before the compression test.

During the uniaxial compression test, the expansion of the 3D crack is captured in real time by the binocular cameras, with an interval of 200 ms between adjacent photographs. The uniaxial compression test is force-controlled at a loading rate of 0.1 kN/s.
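Because recording is purely time-driven, each video frame can be mapped back to the applied load from the 200 ms capture interval and the 0.1 kN/s loading rate. A small sketch of this bookkeeping, assuming (as a simplification) that recording starts exactly when loading starts:

```python
def frame_to_load(frame_index, interval_s=0.2, load_rate_kn_s=0.1, start_load_kn=0.0):
    """Map a recorded video frame to the applied axial load, using the 200 ms
    capture interval and 0.1 kN/s loading rate of the test. start_load_kn
    allows for recording that begins after loading has started."""
    t = frame_index * interval_s              # elapsed time since recording began
    return start_load_kn + load_rate_kn_s * t

print(round(frame_to_load(800), 3))           # frame 800 → 160 s → 16.0 kN
```

This is how a reconstructed crack geometry (e.g., from the 800th or 1000th frame, as in Section 5.3) can be associated with its stress state.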

5.2. Test Results and Analysis

Figure 16 shows the uniaxial compression test process captured by the three binocular cameras. Figure 17 presents the crack propagation results observed from different angles at the end of compression loading. From Figure 17, it can be seen that the crack surface is not continuous but deflects at the edge of the prefabricated crack. The crack propagates upward from the upper half of the elliptical prefabricated crack and downward from the lower half, and there is no trace of crack propagation at the endpoints of the short axis.

5.3. Analysis of Three-Dimensional Crack Reconstruction Results

Based on the three-dimensional crack reconstruction method above, any frame of the crack propagation video recorded by the binocular cameras can be extracted to reconstruct the three-dimensional crack at the corresponding time. Taking the 800th frame of the 45° prefabricated crack propagation video as an example, three-dimensional reconstruction was carried out. As shown in Figure 18, a preliminary three-dimensional discrete point cloud was obtained with the three-dimensional crack reconstruction program developed in this paper and then imported into MeshLab for point cloud processing. The fracture angle of crack propagation can be obtained by solving for the surface normals in MeshLab. The 1000th frame of the crack propagation video of the specimens with prefabricated crack angles of 30°, 45°, and 60° was extracted and used to reconstruct and calculate the initial fracture inclination (the angle between the propagated crack and the original crack surface) at the upper and lower ends of the major axis of the elliptical crack. The results were , respectively.
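The fracture inclination itself reduces to the angle between two plane normals: the prefabricated crack plane and the propagated crack surface, with normals estimated, for example, in MeshLab. A minimal sketch with hypothetical normals (the 70° value below is illustrative only):

```python
import numpy as np

def dihedral_angle_deg(n1, n2):
    """Angle between two planes from their normals. The absolute value folds
    the result into [0, 90] degrees, so the sign convention of the exported
    normals does not matter."""
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

n_precrack = np.array([0.0, 0.0, 1.0])        # hypothetical prefabricated-crack normal
n_wing = np.array([0.0, np.sin(np.radians(70)), np.cos(np.radians(70))])
print(round(dihedral_angle_deg(n_precrack, n_wing), 1))  # → 70.0
```

Applying this to the point-cloud normals at the upper and lower ends of the major axis yields the initial fracture inclinations reported above.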

From Figures 17 and 18, it can be seen that recognition of the very small crack surfaces propagating from the edge of the prefabricated crack is poor because of the limited image resolution. At present, recognition of the crack surface contour edge requires manual intervention; this will be addressed in further research.

In view of this, the stereo-vision-based three-dimensional crack reconstruction method presented for transparent rock-like materials is viable.

6. Conclusion

Considering that 3D crack propagation in real rock is difficult to observe by ordinary methods, and that traditional CT lacks real-time capability and is costly, a stereo-vision-based method for reconstructing 3D cracks in transparent rock-like materials is constructed by observing 3D crack propagation from different angles with binocular cameras. Provided that the camera frame rate is high enough, the 3D crack propagation of a specimen under loading can be recorded and analyzed in real time without unloading. First, an analytical model of the light refraction geometry of the transparent material is established. A double-sided checkerboard calibration board is proposed to solve the problem that a traditional single-sided board cannot be observed simultaneously by lenses at very different angles. A simple method using circular discs is employed to accurately locate the refraction interface. Finally, combined with prior manual information, partial matching points on the crack surface contour edge are specified, and the matching points of all crack surface pixels are obtained by interpolation fitting.

Data Availability

The data used to support the findings of this study are included within the article.

Additional Points

Highlights. A method for reconstruction of three-dimensional rock cracks is proposed. A stereo vision technique is utilized in the reconstruction method. An explicit light refraction geometric model is proposed. The three-dimensional crack propagation can be analyzed in real time without unloading.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

Shu Zhu and Zhihua Luo conceived and designed the study. Shu Zhu, Zhihua Luo, and Nan Wu performed the experiments. Shu Zhu wrote the paper. Shu Zhu, Zhihua Luo, Yufeng Gao, Zhende Zhu, and Nan Wu reviewed and edited the manuscript. All authors read and approved the manuscript.

Acknowledgments

This research was partially supported by the National Basic Research Program of China (973 Program, grant no. 2015CB057903), the Fundamental Research Funds for the Central Universities (nos. 2019B74314 and SJKY19_0450), and the National Natural Science Foundation of China (NSFC, grant nos. 51579081 and 51878249).