Abstract
The working environment of an oil drilling platform is harsh, with many uncertain factors and high operating risks. During drilling, sudden formation changes or improper process operations can easily cause wellbore instability, sticking, lost circulation, well kick, and blowout, and other complex situations and accidents pose major challenges to drilling safety. To improve the technical level of oil and gas exploration and development and to reduce costs while increasing efficiency, the traditional oil drilling monitoring system needs to be optimized. Based on the characteristics of the images collected by the different sensors in the oil drilling monitoring system and the needs of practical applications, this article summarizes the advantages and disadvantages of existing multiscale image analysis algorithms, from the wavelet transform and stationary wavelet transform to the contourlet transform and nonsubsampled contourlet transform, and compares in detail the fusion performance of these algorithms under the same fusion rules. To address the large computational cost of the nonsubsampled contourlet transform, a fast implementation algorithm (IFNSCT) is proposed: a multichannel filter bank structure replaces the original tree-structured filter bank, which halves the running time without affecting the analysis performance of the algorithm and thus improves the efficiency of oil drilling monitoring.
1. Introduction
With the year-on-year growth of drilling engineering operations, the continual renewal of technology, and the development of the information industry, offshore oil has entered the era of networked information [1, 2]. Based on the characteristics of offshore drilling technology, a complete intelligent auxiliary decision-making monitoring system for offshore oil drilling has been developed through dedicated research, test analysis, evaluation and optimization, and engineering practice [3, 4]. It ensures operational safety, improves operational timeliness, saves drilling costs, and provides solid technical support for the exploration and development of offshore oil and gas resources.
The most important part of optimizing the oil drilling monitoring system is the information fusion of the images acquired by the monitoring equipment. Information fusion makes full use of advanced computer technology to comprehensively analyze and process the information obtained from different (multisource) sensors and thereby obtain comprehensive and accurate information about a target or scene. Image fusion belongs to the category of information fusion [5–7]. As an important branch of it, this technology integrates several active subjects, such as image and signal processing, computer technology, and sensor technology, and has huge application and development prospects. Image fusion is not simply the superposition of images of different natures obtained through multisource sensors. Rather, it adopts appropriate fusion algorithms to make full use of the redundant and complementary information in these images [8, 9], fusing them so that the final image has richer detail and fuller content [10]. The fused image has many advantages that an image obtained from a single sensor cannot match, largely overcoming the latter's limitations in aspects such as spectrum, spatial geometry, and resolution.
According to the type of image processed, image fusion methods can be divided into two categories: the most widely studied grayscale image fusion and the increasingly emerging color image fusion. Commonly used fusion algorithms all aim to produce a higher-quality fused image and belong to the category of pixel-level fusion [11, 12]. The literature [13] carried out the earliest and simplest image fusion experiment, applying simple fusion processing to a Landsat image and a radar image and successfully using the fused image in the interpretation of terrain and landforms. Later, the literature [14] fused an MSS image with a Landsat-RBV image and obtained preliminary results. Image fusion technology gradually attracted attention, began to be applied to general images such as visible light images, and became one of the research hotspots in the field of remote sensing image processing and analysis. Within the first category, pixel-based or region-based fusion is the most widely used and the simplest. Methods of this type run fast but often produce undesirable effects such as washout. Principal component analysis (PCA) is a commonly used algorithm for matrix dimension reduction and data decorrelation, similar to the KL transform in the image compression field [15–17]. A linear transformation treats the grayscale image as a two-dimensional matrix, solves for the eigenvalues and corresponding eigenvectors of the matrix, and finally fuses the extracted principal components to obtain the fusion result [18]. The above algorithms all process the image on a single scale and treat every pixel identically, which causes the loss of image detail information. The idea of multiresolution analysis solves these problems to a certain extent. The literature [19] first proposed the tower (pyramid) decomposition method, namely, the Gaussian pyramid and the Laplacian pyramid. On this basis, Fedele and Merenda [20] proposed a low-pass pyramid layered fusion method based on the contrast pyramid, and Feng et al. proposed the gradient pyramid algorithm [21], which improves the poor directionality of the pyramid structure. In general, among the fusion methods based on tower decomposition, the Laplacian pyramid does not express directional information, while the contrast and gradient pyramid algorithms increase the amount of data during processing and their stability needs to be enhanced. Wavelet transform technology was proposed in the 1990s. This method has many excellent properties, such as variable time-frequency resolution, good directionality, and multiscale analysis, and has been widely studied and applied [22–25]. However, the directionality of the wavelet transform is not flexible enough, and it is far from ideal for expressing curves. In response to this defect, scholars have proposed analysis tools such as the ridgelet and Curvelet transforms. The literature [26] proposed the Curvelet transform. To obtain the decomposed subbands, the source image needs to be filtered many times, and these repeated convolutions of the image inevitably affect computational efficiency.
In this paper, we optimize the monitoring system of oil drilling based on multisensor image fusion. First, the structure of the oil drilling monitoring system is analyzed, and a reasonable structure for the multisensor image fusion algorithm is designed. Secondly, the relevant theories of multisensor image fusion are analyzed and researched. To address the slow image information fusion of the traditional nonsubsampled contourlet transform multiscale method, an improved fast nonsubsampled contourlet transform algorithm is proposed. Finally, simulation experiments verify the effectiveness of the algorithm, which improves the monitoring efficiency of oil drilling. The second part of the article introduces the overall framework and related theories; the third part presents the algorithm structure and its implementation; the fourth part provides simulation verification; the fifth part summarizes the paper. The main contributions are as follows: (1) based on the characteristics of the images collected by the different sensors in the oil drilling monitoring system and the needs of practical applications, a nonsubsampled contourlet-based fusion scheme is proposed. (2) A fast implementation algorithm (IFNSCT) is proposed to solve the problem of the large computational cost of the nonsubsampled contourlet transform. (3) A multichannel filter bank structure replaces the original tree-structured filter bank, which reduces the running time of the original structure without affecting the analysis performance of the algorithm.
2. Related Theories and Technologies
2.1. Oil Drilling Monitoring System Design
The overall structure of the offshore oil drilling intelligent auxiliary decision-making monitoring system can be divided into 4 parts, as shown in Figure 1, which are, respectively, the preview layer, the monitoring layer, the decision-making layer, and the optimization layer. The preview layer is the simulation evaluation layer of the drilling design.

The drilling design is simulated before drilling, the changes of key parameters are analyzed, and possible complications are predicted. The monitoring layer is the tracking and evaluation layer of the land center, which analyzes the drilling data transmitted from the site in real time, intelligently evaluates drilling conditions, and provides real-time guidance for the site. The decision-making layer is the plan formulation layer of the head office. It integrates drilling and geological conditions and supports the decision-making of technical experts through three-dimensional dynamic visualization. The optimization layer is the post-drilling monitoring image analysis layer, which performs overall monitoring and analysis of a drilled single well or of multiple regional wells and provides reference materials for subsequent construction or adjacent well design.
Each area of the drilling well site is equipped with a video security monitoring system that monitors its operating status in real time to ensure continuous and reliable operation. Professional methods are adopted to conduct real-time intelligent analysis of well site video data, so that safety risks are avoided and prevented and construction behavior is tracked toward the goal of zero accidents, allowing safe construction work to proceed in an orderly and more standardized manner. The multisensor image adaptive fusion framework proposed in this paper is used to optimize the drilling monitoring system and facilitate the intelligent analysis of monitoring video information. The framework consists of a multiscale analysis module, a coefficient fusion module, a multiscale reconstruction and image evaluation module, and a parameter optimization module, as shown in Figure 2.

Image fusion is a preprocessing operation for subsequent tasks such as detection, recognition, segmentation, and classification. Different subsequent tasks often require observing or processing different features of the same image. Therefore, unlike most fusion frameworks based on multiscale analysis, the image fusion framework proposed in this chapter introduces the evaluation of fusion image quality into the fusion process, and the result of the quality evaluation is used as feedback to optimize the parameters in the coefficient fusion module, thereby adaptively obtaining better fusion results. As shown in Figure 2, first, the input source images are subjected to multiscale transformation to obtain high-frequency and low-frequency coefficients. Different fusion rules are used to fuse the corresponding coefficients, and multiscale reconstruction of the fused coefficients yields the fused image. Then, the quality of the fused image is evaluated, and the evaluation result is fed back to the optimization algorithm to optimize the parameters in the coefficient fusion module, so that the final image is a better fusion result with respect to the selected evaluation index.
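To make the feedback structure concrete, the following minimal Python sketch outlines the loop in Figure 2. The helper functions decompose, fuse, reconstruct, and evaluate are hypothetical placeholders for the four modules, and a simple search over candidate fusion parameters stands in for whatever optimization algorithm is actually used:

import numpy as np

def adaptive_fusion(img_a, img_b, decompose, fuse, reconstruct, evaluate,
                    candidate_params):
    # Multiscale analysis is performed once; only the fusion-rule parameters vary.
    coeffs_a, coeffs_b = decompose(img_a), decompose(img_b)
    best_score, best_fused = -np.inf, None
    for params in candidate_params:                    # parameter optimization module
        fused_coeffs = fuse(coeffs_a, coeffs_b, params)  # coefficient fusion module
        fused = reconstruct(fused_coeffs)                # multiscale reconstruction
        score = evaluate(fused, img_a, img_b)            # quality index fed back as objective
        if score > best_score:
            best_score, best_fused = score, fused
    return best_fused, best_score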
2.2. Analysis of Classical Multiscale Image Technology
Multiscale analysis is a fast and effective image signal processing approach. By decomposing the image at different scales, it can effectively extract the characteristic information of the image signal at each scale. Given the different statistical characteristics of the information at different scales, targeted image fusion rules can be designed that effectively retain the important information of the source images and improve the overall quality and usability of the fused image. More than ten different multiscale image analysis algorithms have been proposed in the literature, among which the classic ones mainly include the wavelet transform and the contourlet transform.
Wavelet transform (WT) is one of the first multiscale image analysis algorithms introduced into the field of image fusion research. It has good time-frequency analysis capabilities [27–29]. Because the image under the computer platform is stored in the form of a pixel matrix, the wavelet transform used for image processing and fusion is mainly two-dimensional discrete wavelet transform.
In the discrete wavelet transform (DWT), the wavelet family can be expressed in the following form:

$$\psi_{a,b}(t) = |a|^{-1/2}\,\psi\!\left(\frac{t-b}{a}\right), \quad (1)$$

where $a$ and $b$ are the scale and translation coefficients, $a \neq 0$, $|a|^{-1/2}$ is a positive normalization value, $b \in \mathbb{R}$, and the mother wavelet $\psi(t)$ satisfies the admissibility condition

$$C_{\psi} = \int_{-\infty}^{+\infty} \frac{|\Psi(\omega)|^{2}}{|\omega|}\,\mathrm{d}\omega < \infty, \quad (2)$$

where $\omega$ is the angular frequency and $\Psi(\omega)$ is the Fourier transform of $\psi(t)$. Discretizing $a$ and $b$ with power exponents gives $a = a_{0}^{m}$ and $b = n b_{0} a_{0}^{m}$, where $m, n \in \mathbb{Z}$ and the expansion steps $a_{0} > 1$ and $b_{0} > 0$ are fixed values. The DWT coefficient is defined as follows:

$$W_{f}(m,n) = a_{0}^{-m/2} \int_{-\infty}^{+\infty} f(t)\,\psi\!\left(a_{0}^{-m} t - n b_{0}\right) \mathrm{d}t. \quad (3)$$

In image fusion, $a_{0} = 2$ and $b_{0} = 1$ are fixed, so the wavelet basis function simplifies to

$$\psi_{m,n}(t) = 2^{-m/2}\,\psi\!\left(2^{-m} t - n\right). \quad (4)$$
After the source image is wavelet transformed, each level of wavelet decomposition yields four subband images: LL (low-frequency subband), LH (horizontal detail subband), HL (vertical detail subband), and HH (diagonal detail subband). The length and width of each subband image are 1/2 those of the source image, so its data volume is 1/4 of the source image. The second-level wavelet decomposition decomposes the LL subband again, and so on iteratively.
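As an illustration of this subband structure (not the exact filters used in this paper), one level of two-dimensional DWT can be computed with the PyWavelets library; pywt.dwt2 returns the LL approximation and the three detail subbands, each roughly half the source size in each dimension:

import numpy as np
import pywt

image = np.random.rand(256, 256)            # stand-in for a monitoring frame
LL, (LH, HL, HH) = pywt.dwt2(image, 'db2')  # one level of 2-D DWT
print(LL.shape, LH.shape, HL.shape, HH.shape)
# The second level decomposes the LL subband again:
LL2, (LH2, HL2, HH2) = pywt.dwt2(LL, 'db2')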
In one-dimensional signal representation, the DWT is undoubtedly effective and, to some extent, provides an optimal representation of one-dimensional signals. However, because an image is not simply a stack of one-dimensional signals, the poor directional extraction ability of the DWT makes it suitable only for capturing point singularities; it is insufficient for two-dimensional images containing line and surface singularities. The contourlet transform addresses this: its multiscale decomposition is completed by a Laplacian filter bank, and its multidirectional analysis is achieved by a directional filter bank that satisfies the tree expansion rule. The structure of the filter bank is shown in Figure 3, where the yellow part is the range of the image frequency band filtered by the Laplacian filter bank.

Laplacian decomposition generates a low-resolution image and a difference image (that is, the difference between the previous level's image and the prediction made from the current level's decomposition) at each level; the low-resolution image is then decomposed iteratively, finally yielding one lowest-level low-resolution image and several difference images at the various levels. Reconstruction is the process in which the low-pass, low-resolution image is predicted up to the next level and superimposed on that level's difference image, finally recovering the original image. Figure 3(b) depicts the Laplacian decomposition and reconstruction process, where $H$ and $G$ denote the Laplacian decomposition and reconstruction filters, respectively.
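The following sketch illustrates this predict-and-difference scheme, with Gaussian smoothing and bilinear resampling standing in for the actual filters $H$ and $G$; because each stored difference is computed against the same prediction used in reconstruction, the original image is recovered exactly:

import numpy as np
from scipy import ndimage

def laplacian_pyramid(img, levels=3, sigma=1.0):
    # Each level stores the difference between the image and a prediction
    # upsampled from its downsampled low-resolution version (Figure 3(b)).
    diffs, low = [], img.astype(float)
    for _ in range(levels):
        smoothed = ndimage.gaussian_filter(low, sigma)   # low-pass filtering
        down = smoothed[::2, ::2]                        # dyadic downsampling
        pred = ndimage.zoom(down, 2, order=1)[:low.shape[0], :low.shape[1]]
        diffs.append(low - pred)                         # difference image
        low = down
    return diffs, low                                    # detail levels + coarsest image

def laplacian_reconstruct(diffs, low):
    for d in reversed(diffs):
        pred = ndimage.zoom(low, 2, order=1)[:d.shape[0], :d.shape[1]]
        low = pred + d                                   # superimpose the difference image
    return low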
After improving the traditional Laplacian filter, the directional filter bank (DFB) structure in the contourlet transform (CT) is shown in Figure 3(c), where $U_0$ and $U_1$ are fan (sector) filters, $V_0$ and $V_1$ are quadrant filters, and $Q$ is the quincunx sampling matrix. The DFB can generate $2^{k}$ wedge-shaped frequency partitions through a $k$-level binary tree decomposition. To obtain the desired frequency partition, a quincunx sampling filter bank satisfying the tree expansion law is used to resample the image and adjust the filters.
2.3. Evaluation Index of Image Fusion Quality
Due to the complexity of the image itself, it is a relatively difficult task to evaluate the effectiveness of the fusion algorithm and the quality of the fusion image. Therefore, the evaluation of fusion algorithms and fusion quality requires a number of indicators with different focuses. At present, the evaluation indicators of image fusion are mainly divided into two categories: subjective evaluation indicators and objective evaluation indicators, and generally both types of indicators are used to comprehensively evaluate image fusion algorithms. The structure diagram of the evaluation index is shown in Figure 4.

For a visual image, the most intuitive way to evaluate the fusion effect is human observation. Human beings have developed a complete set of image evaluation standards related to everyday experience. For an image, humans can easily and qualitatively judge its clarity, intelligibility, rationality, and information content, among other intuitive attributes. However, subjective evaluation has unavoidable errors, and its conclusions often differ between observers. Compared with the singular and imprecise subjective standard, objective evaluation standards provide a quantitative evaluation of particular attributes of the fusion result. Existing objective evaluation indicators can be roughly divided into two categories. The first comprises evaluation methods based on the statistical characteristics of a single image: information entropy, average gradient, spatial frequency, image mean, standard deviation, etc. The second comprises evaluation methods based on the amount of information transferred between multiple images: mutual information (MI), Q0, QE, QAB/F, structural similarity (SSIM), visual information fidelity (VIF), etc. Because the latter methods combine the information of the source images and the fused image, they have better reliability. The mathematical definitions of the evaluation indicators used in this article are given below:

(1) MI: this parameter reflects the amount of information transferred from the original images to the fused image. The larger the value, the more information the fused image inherits from the original images and the better the fusion quality. For source images $A$ and $B$ and fused image $F$, it is defined as

$$\mathrm{MI} = \sum_{a,f} p_{A,F}(a,f)\log_2\frac{p_{A,F}(a,f)}{p_A(a)\,p_F(f)} + \sum_{b,f} p_{B,F}(b,f)\log_2\frac{p_{B,F}(b,f)}{p_B(b)\,p_F(f)}, \quad (5)$$

where $a$ and $b$ denote the intensity levels in image $A$ and image $B$, and $p_A$, $p_B$, $p_F$, $p_{A,F}$, and $p_{B,F}$ denote the corresponding marginal and joint probability mass functions.

(2) VIF: by modeling the human visual system, natural scenes, and image distortion, VIF provides a comprehensive quantitative evaluation covering additive noise, blur, and global or local contrast distortion. The larger the value, the better; a value of 1 means no distortion. It is defined as the ratio of the visual information extractable from the test image to that extractable from the reference image [30]:

$$\mathrm{VIF} = \frac{\sum_{j} I\!\left(C^{j};F^{j}\right)}{\sum_{j} I\!\left(C^{j};E^{j}\right)}, \quad (6)$$

where $I(\cdot\,;\cdot)$ denotes mutual information, $C^{j}$ the natural-scene model coefficients in subband $j$, and $E^{j}$ and $F^{j}$ the perceived reference and test signals.

(3) SSIM: the quality of the fused image is evaluated through the structural similarity between the source image and the fused image. The larger the value, the better; a value of 1 means no distortion. For images $x$ and $y$, it is given by [30]

$$\mathrm{SSIM}(x,y) = \frac{\left(2\mu_x\mu_y + C_1\right)\left(2\sigma_{xy} + C_2\right)}{\left(\mu_x^2 + \mu_y^2 + C_1\right)\left(\sigma_x^2 + \sigma_y^2 + C_2\right)}, \quad (7)$$

where $\mu$, $\sigma^2$, and $\sigma_{xy}$ are local means, variances, and covariance, and $C_1$, $C_2$ are small stabilizing constants.
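As a sketch of how two of these indices can be computed in practice (assuming 8-bit images; the SSIM implementation comes from scikit-image rather than from this paper), MI can be estimated from the joint grey-level histogram:

import numpy as np
from skimage.metrics import structural_similarity as ssim

def mutual_information(src, fused, bins=256):
    # MI between a source image and the fused image via the joint histogram.
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(), bins=bins)
    p_sf = joint / joint.sum()                      # joint probability mass
    p_s, p_f = p_sf.sum(axis=1), p_sf.sum(axis=0)   # marginal distributions
    nz = p_sf > 0                                   # avoid log(0)
    return np.sum(p_sf[nz] * np.log2(p_sf[nz] / np.outer(p_s, p_f)[nz]))

# Formula (5) sums the contributions of both sources:
#   mi = mutual_information(img_a, fused) + mutual_information(img_b, fused)
# SSIM of formula (7): ssim(img_a, fused, data_range=255)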
3. Research on Multisensor Image Fusion Algorithm Based on Improved Fast NSCT
Two kinds of monitoring equipment commonly found at oil drilling sites are ordinary visible light monitoring and infrared monitoring, and in the application field of image fusion, infrared and visible light fusion is an important direction. Because infrared and visible light images have different characteristics and points of interest, the two contain a large amount of complementary useful information, such as the clear texture information in visible light images and the hidden targets in infrared images. Fusing these two images, and thereby making effective use of the complementary information, is of great significance for human observation and subsequent computer image processing.
3.1. Nonsubsampled Contourlet Transform (NSCT)
Among image multiscale and multidirectional analysis algorithms, NSCT has attracted wide attention because of its ability to resolve any direction at any scale, the translation invariance provided by the absence of downsampling, and its excellent performance, free of spectrum aliasing and the Gibbs phenomenon [22–25]. Its contour-segment-based structure provides an asymptotically optimal, sparse approximation of images, outperforming the vast majority of image analysis algorithms in decomposition and reconstruction performance.
NSCT is a nonsubsampled version developed from CT. It can be divided into two parts: nonsubsampled pyramid (NSP) decomposition and nonsubsampled directional filter bank (NSDFB) decomposition. The former guarantees the multiscale characteristics of NSCT; the latter provides NSCT with powerful multidirectional decomposition performance. Figure 5 shows the decomposition structure of NSCT.

The nonsubsampled pyramid (NSP) transform is formed by cascading two-channel filter banks. At each decomposition level, one high-frequency subband image and one low-frequency subband image are obtained, and each level's low-frequency subband image is then filtered iteratively to complete the multilayer NSP decomposition. A $k$-level NSP filtering produces one low-frequency subband image and $k$ high-frequency subband images, all of the same size as the source image. Figure 5 shows a schematic diagram of a three-layer NSP decomposition, where $H_0$ denotes the low-pass filter bank, $H_1$ the high-pass filter bank, $k$ the decomposition level, and the grey area the filter passband of each NSP decomposition step.
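A minimal sketch of the nonsubsampled idea follows, with Gaussian smoothing standing in for the actual NSP filter banks $H_0$ and $H_1$: the image is never downsampled, the filter's support is widened at each level instead (mimicking the "à trous" upsampling of the filters), and low + sum(highs) reconstructs the input exactly:

import numpy as np
from scipy import ndimage

def nsp_decompose(img, levels=3, sigma=1.0):
    # Nonsubsampled pyramid: every subband keeps the source image size.
    highs, low = [], img.astype(float)
    for k in range(levels):
        smoothed = ndimage.gaussian_filter(low, sigma * 2 ** k)  # low-pass branch
        highs.append(low - smoothed)                             # high-pass subband
        low = smoothed                     # iterate on the low-frequency subband
    return highs, low                      # k high-frequency subbands + 1 low-frequency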
The nonsubsampled directional filter bank (NSDFB) transform is a tree-structured nonsubsampled filter bank composed of a series of fan-shaped filter banks and quadrant filter banks arranged according to the tree expansion principle, and it provides rich directional detail information. An $l$-level NSDFB decomposition yields $2^{l}$ directional subbands of the same size as the source image. Figure 5 shows a 2-level NSDFB structure and its frequency partition diagram, where $U_0$ and $U_1$ are sector filters and $V_0$ and $V_1$ are quadrant filters.
3.2. Fast Implementation of NSCT
Although NSCT has excellent multiscale and multidirectional analysis capabilities, its range of application is limited by its huge computational overhead. As described in the previous section, the traditional NSCT is divided into two parts, and the source image must be decomposed by NSP and NSDFB in turn. Therefore, to obtain one decomposed subband, the source image must be filtered multiple times. Because digital images contain large amounts of data, computing the image convolution repeatedly inevitably affects the efficiency of the algorithm. Conversely, if each decomposed subband could be obtained with a single filtering pass, the efficiency of the algorithm would be greatly improved. Based on this idea, we redesigned the filter structure; its implementation on a single channel is shown in Figure 6.

As shown in Figure 6, $H_0$ and $H_1$ represent the low-pass and high-pass filters in the NSP, and $U$ and $V$ represent the sector and quadrant filters in the NSDFB. To obtain a final directional decomposition subband, the source image must be filtered by these four filters in sequence. Since the size of a digital filter is much smaller than the size of a digital image, the computational cost of combining the filters is negligible compared with an image convolution. Therefore, the filter banks along the same channel in NSCT ($H_0$, $H_1$, $U$, and $V$) are combined into a single equivalent filter in the fast NSCT, and the image then undergoes only one two-dimensional convolution per subband. In this way, the fast implementation of NSCT is realized.
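Since two-dimensional convolution is associative, the expensive cascade can be replaced by pre-convolving the small kernels. The sketch below uses random placeholder kernels (the real NSP/NSDFB filter coefficients are not reproduced here); the cascaded and combined results agree up to boundary handling:

import numpy as np
from scipy.signal import convolve2d

h_low, h_high = np.random.rand(5, 5), np.random.rand(5, 5)         # placeholder NSP filters
u_sector, v_quadrant = np.random.rand(5, 5), np.random.rand(5, 5)  # placeholder NSDFB filters
img = np.random.rand(512, 512)

# Traditional NSCT channel: four full-size image convolutions.
slow = img
for kern in (h_low, h_high, u_sector, v_quadrant):
    slow = convolve2d(slow, kern, mode='same')

# Fast NSCT channel: combine the tiny kernels first (negligible cost),
# then convolve the large image exactly once with the equivalent filter.
combined = h_low
for kern in (h_high, u_sector, v_quadrant):
    combined = convolve2d(combined, kern)           # 17x17 equivalent kernel
fast = convolve2d(img, combined, mode='same')       # single image convolution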
3.3. Multisensor Image Fusion Rule Design
Pixel-level image fusion directly integrates each pixel of the source images into a fused pixel, and the computation of the fusion weight is based on the informational importance of the original pixels. Whatever the fusion rule, the process is a weighted summation of the original pixels, and these weights reflect their importance. The most essential requirement of image fusion is to retain as much important information from the source images as possible. Therefore, pixels carrying more information should be assigned higher weights, while pixels carrying less information should be assigned lower weights. Based on this, a new fusion rule based on information-theoretic pixel information estimation (PIE) is proposed to measure the information contained in each pixel and thereby determine its weight.
In NSCT, the low-frequency subband image is severely blurred by the NSP decomposition, which means that a small neighborhood contains little more information than a single pixel, so spending time computing neighborhood information for every pixel is unnecessary. In addition, the popular regional-energy fusion rule ignores dark information, and the regional-variance rule only strengthens pixels in edge areas. Therefore, to solve these problems, the low-frequency part of the PIE method is proposed. It computes the background brightness of the overall image only once, instead of computing neighborhood information for each pixel in turn, and then determines the fusion weight from the difference between the pixel grayscale and the overall background brightness.
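The excerpt does not give the exact PIE weighting formula, so the following is only an illustrative sketch of the rule as described: the background brightness is computed once globally, and each pixel's weight grows with its deviation from that background:

import numpy as np

def fuse_lowfreq_pie(low_a, low_b, eps=1e-6):
    bg_a, bg_b = low_a.mean(), low_b.mean()   # global background brightness, computed once
    info_a = np.abs(low_a - bg_a)             # deviation as a proxy for pixel information
    info_b = np.abs(low_b - bg_b)
    w_a = info_a / (info_a + info_b + eps)    # normalized fusion weight
    return w_a * low_a + (1.0 - w_a) * low_b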
In NSCT, the high-frequency subbands reflect the edge and texture distribution of the image; their coefficients are small in smooth areas but increase sharply at edges. Different coefficient values represent different object characteristics and require different fusion rules. Suppose $A(i,j)$ and $B(i,j)$ are the high-frequency coefficients of the visible light image and the infrared image at row $i$ and column $j$. If either belongs to a texture or edge part (that is, has a large coefficient value), the value with the larger absolute value is selected as the fused coefficient to ensure that important information is preserved. If both belong to a smooth part (that is, both have small coefficient values), they are weighted and averaged to ensure that as much source image information as possible is inherited. Therefore, a threshold must be computed to distinguish smooth-region coefficients from texture and edge coefficients and to apply the corresponding fusion rule.
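A sketch of this high-frequency rule is given below; the threshold choice (the mean of the absolute coefficients) is an assumption for illustration, not the paper's exact formula:

import numpy as np

def fuse_highfreq(high_a, high_b, t=None):
    abs_a, abs_b = np.abs(high_a), np.abs(high_b)
    if t is None:
        t = 0.5 * (abs_a.mean() + abs_b.mean())   # illustrative smooth/edge threshold
    edge = np.maximum(abs_a, abs_b) > t           # at least one texture/edge coefficient
    fused = 0.5 * (high_a + high_b)               # smooth regions: weighted average
    abs_max = np.where(abs_a >= abs_b, high_a, high_b)
    fused[edge] = abs_max[edge]                   # edge regions: absolute-maximum selection
    return fused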
4. Simulation Results and Performance Analysis
In this part, three different sets of infrared and visible light data are used to test the fusion performance of the proposed algorithm. For each data set, the algorithm in this chapter is compared with existing algorithms, including the CT, NSCT, and NSCT-PCNN algorithms, which have outstanding performance in the fields of nonlinear fusion, image decomposition and representation, and neighborhood information weighting, respectively [31–34]. In the experiments, the image decomposition level is set to three, and all parameter settings follow those in the corresponding references. The algorithms run on a Core i7 2630 with 4 GB RAM. The adopted "UN Camp" data set images cover 256 grey levels; all are infrared and visible light images and have been used as test images in a large number of publications. For more precise fusion results, the watermark in the Octet image collection was cropped out.
4.1. Simulation and Results of Subjective Evaluation of Multisensor Image Fusion Effect
The comparison of the fusion results on the "UN Camp" image set is shown in Figure 7. In these images, Figures 7(a) and 7(b) show the original visible light image and infrared image, respectively, and Figures 7(c)–7(f) show the fusion results of the four fusion algorithms, including the IFNSCT algorithm proposed in this chapter.

(a) Visible light picture

(b) Infrared image

(c) CT fusion image

(d) NSCT fusion image

(e) NSCT-PCNN fusion image

(f) IFNSCT fusion image
Obviously, the visible light image depicts the background information of the environment well, while the infrared image highlights the target information of the person hidden in the bushes. All of the fusion algorithms retain the main information of the original images in the fused image, but there are still significant differences in the details. The contrast of Figure 7(c) is the worst, because the low-frequency averaging fusion rule compresses the grey-scale range of the image. The overall grey scale of Figure 7(d) is too bright, so more dark-region information is lost. Figure 7(e) has a good visual effect and sense of hierarchy, but some areas are seriously blurred, because the PCNN algorithm enhances visual contrast while ignoring texture details. In contrast, Figure 7(f) performs best in terms of contrast, image sharpness, target definition, and edge details, which shows that the proposed fusion rule is well suited to infrared and visible light fusion.
4.2. Simulation and Results of Objective Evaluation of Multisensor Image Fusion Effect
Subjective evaluation can often provide humans with an intuitive comparison, but because of differences between individuals, determining the best result may be difficult and even controversial. An effective objective evaluation index therefore provides a quantitative analysis of fusion quality. In this part, three objective evaluation indicators, MI, VIF, and SSIM, are used to objectively evaluate the quality of the above fusion results.
Based on the statistical data in Figure 8 and the above analysis, it is obvious that the IFNSCT algorithm proposed in this chapter is very effective in retaining useful information, reducing image distortion, and maintaining reasonable image contrast, and it outperforms the existing image fusion algorithms in both subjective and objective evaluations. The detailed results are shown in Table 1.

(a) Evaluation index MI

(b) Evaluation index VIF

(c) Evaluation index SSIM
Table 1 compares the performance of the results of the five fusion methods. The experimental data in Table 1 show that, compared with the image fusion algorithms based on the wavelet transform and the NSCT, the fusion algorithm based on fast NSCT achieves higher EFQI and WFQI values. In particular, the fusion algorithm proposed in this paper attains the highest EFQI and WFQI, which shows that it better extracts the edge information of the image, conforms more closely to human visual characteristics, and yields higher-performance fused images.
4.3. Feature Extraction Experiment and Result of Oil Drilling Monitoring Video Image
Based on the above analysis and simulation experiments, the effectiveness and superiority of the proposed IFNSCT multisensor image fusion algorithm are verified. The proposed algorithm is applied to the oil drilling monitoring system to optimize its image processing capability. The video security monitoring system of the drilling well site mainly supplements manual duty. If the monitoring result matches the preset alarm rules, the system automatically issues a prompt and gives specific handling methods, achieving linked alarms and manual intervention. A video intelligent analysis module is added to the intermediate media processing layer platform and made the core of the system. At the same time, it focuses on analyzing illegal intrusion, work at height, hot work, smoking behavior, etc., and implements key monitoring of them to achieve active early warning. The algorithm in this paper provides a theoretical basis for these functions, and the test results are shown in Figure 9.

(a) Visible light picture

(b) Infrared image

(c) IFNSCT feature image of (a)

(d) IFNSCT feature image of (b)
It can be seen from Figure 9 that the proposed IFNSCT algorithm processes the frame images in the surveillance video very well. For the two types of monitoring equipment, visible light and infrared, the extracted gradient features differ, but the overall trend is almost the same, and fusing their features better reflects the actual situation of the oil drilling site. Furthermore, Figures 9(c) and 9(d) show the image features of the visible light image and the infrared image, respectively, processed by the IFNSCT algorithm. Although the infrared image accelerates the processing to a certain extent, its accuracy is somewhat lower than that of the visible light image.
5. Conclusion
Research on the oil drilling intelligent auxiliary decision-making monitoring system effectively improves the intelligent monitoring level of offshore drilling, greatly reduces the difficulty of drilling in complex formations, lowers the incidence of complex downhole situations, effectively avoids engineering accidents, and contributes to the realization of safe and efficient drilling operations. Starting from the multisensor image fusion algorithm, this paper first studies the composition of the oil drilling monitoring system, then analyzes the basic framework of the multisensor fusion algorithm, and studies the fusion of multi-image information based on the multiscale analysis algorithm of the improved fast nonsubsampled contourlet transform. The feature information of the whole image is computed directly, and the fusion weight is computed directly from the difference between each pixel and the overall information of the image, which greatly improves the efficiency of the algorithm while improving fusion performance. Finally, the image information of the visible light and infrared monitoring equipment in the oil drilling monitoring system is fused to improve the recognition accuracy of the monitoring system.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
This work was supported by Southwest Petroleum University.