Abstract

As an important branch of multisensor information fusion, image fusion is widely used in many fields. Virtual reality (VR) technology, a current research hotspot, can bring new levels of experience to image fusion, and the maturity of modern image processing software facilitates further analysis and processing of images. Image processing technology still faces many problems in today's market; as technology advances, virtual technology is being applied in more and more fields, so combining virtual technology with image processing helps improve it. This article introduces an image fusion algorithm and its applications based on virtual reality technology and the Nuke software. Through an analysis of virtual technology and Nuke, the paper first proposes an image fusion model and an image fusion system and, on this basis, proposes a particle swarm algorithm and an image edge algorithm. It then studies optimal image fusion in Nuke and finally analyzes the experimental results of the image fusion algorithm. The studies show that optimal image fusion greatly improves the security and privacy of the image, with a cracking difficulty as high as 80%. The experimental data for the image fusion algorithm show that its execution efficiency is greatly improved and its time consumption is reduced by about 50%, achieving a good image fusion effect.

1. Introduction

1.1. Research Background and Significance

With the development of information technology, the scope of virtual reality technology has gradually expanded. Virtual reality technology will play an important role in the development of future high-tech industries and in improving national technological innovation capabilities, and the combination of image processing and virtual technology has become a research hotspot [1]. Image processing technology is often required in the medical field, and combining it with virtual technology can deepen image analysis and enable more accurate determination of the cause of disease. The rapid development of synthesis technology is built on the development of virtual reality technology; in graphic and image design, virtual reality technology is widely used, and the effects it produces have reached a very high level [2, 3]. At the same time, with the rapid development of computer technology and image processing software, Nuke has become a leader in image processing: its capabilities in image fusion, three-dimensional image integration, and film processing are extremely powerful, which further advances image processing [4, 5]. Nuke provides artists with the means to create images with high-quality photographic effects. It requires no special hardware platform, yet gives artists the flexibility, efficiency, economy, and full functionality needed to combine and manipulate scanned photographs, video plates, and computer-generated images. Having been used in nearly a hundred films and hundreds of commercials and music videos, Nuke has the advanced ability to seamlessly integrate final visuals with the rest of a film or TV production, regardless of the style or complexity of the visuals. Within image processing, image fusion in particular has attracted much attention.
Image fusion combines images of the same target, collected by different methods, through different algorithms: it extracts the information of each image and finally generates one image containing the features of every source image [6, 7]. It extracts useful information from the different images, eliminates the redundancy between image information obtained in different ways, and describes the target accurately, concisely, and completely in a single image, which greatly facilitates observation of the target and subsequent processing of the target image [8, 9]. Since its emergence, image fusion technology has been applied on many occasions; it was first used in satellite observation of ground conditions. It now has very good applications in military detection, information integration in remote sensing, everyday transportation, and medical diagnosis.
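As a concrete illustration of the pixel-level fusion described above, the sketch below combines two registered grayscale images with a weighted average, the simplest fusion rule. The images, weight, and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def weighted_average_fusion(img_a, img_b, alpha=0.5):
    """Pixel-level fusion of two registered grayscale images.

    Each output pixel is a convex combination of the corresponding
    source pixels; `alpha` weights image A. Real fusion systems choose
    weights per pixel or per region rather than globally.
    """
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    fused = alpha * a + (1.0 - alpha) * b
    return np.clip(fused, 0, 255).astype(np.uint8)

# Two toy 2x2 "images" (hypothetical data)
a = np.array([[0, 100], [200, 255]], dtype=np.uint8)
b = np.array([[255, 100], [0, 55]], dtype=np.uint8)
f = weighted_average_fusion(a, b, alpha=0.5)
```

With equal weights, each fused pixel is simply the mean of the two source pixels, clipped to the 8-bit range.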

1.2. Related Content

As the international standard for virtual reality, the Virtual Reality Modeling Language (VRML) is developing rapidly. Sun and Zhao proposed a VRML method [10]. Through audiovisual virtual communication, virtual reality (VR) played an indispensable role in dealing with the COVID-19 epidemic. They extended the functions of script nodes by introducing Java and scripts written in JavaScript, virtualizing the library so that users could use it normally. In principle, any text editing system can be used for VRML programming, but some editing systems lack the necessary functions and are therefore unsuitable for large-scale VRML scene design [11]. The VRML algorithm they proposed can be applied to large buildings and is superior to traditional algorithms in authenticity, interactivity, design rationality, and execution speed, which proves its practicability. It assisted people who could not go out during the COVID-19 protection period, although its use in other periods remains limited. Allen et al. developed an open source software package [12], available for free on GitHub and distributed to other health systems under the Apache 2.0 open source license. They adopted a quality improvement project registry, and promoting it to the intended audience was an important factor in the registry's success. It helps clarify the impact of quality improvement project management in the hospital system, ultimately reducing the time needed to approve quality improvement projects, increasing collaboration across the UF Health hospital system, and reducing redundant and duplicated quality improvement projects. They developed a registry matching algorithm based on the Jaccard similarity coefficient, which uses quality project characteristics to find similar quality projects.
The algorithm allows quality researchers to find existing or previous quality improvement projects, encouraging collaboration and reducing duplicate projects. They also developed the QIPR approver algorithm, which guides researchers through a series of questions so that projects of appropriate quality can be approved without manual intervention. Although convenient, it requires professional operation. Li et al. proposed a multifocus image fusion algorithm based on multilevel morphological decomposition and classifiers [13]. The attraction of this algorithm is that it can decompose an image into several layers with different morphological components, thereby preserving more detailed information from the source image. In their algorithm, the source image is first decomposed by multilevel morphological component analysis [14, 15]. Feature vectors are then extracted from the natural layer and classified by two well-trained support vector machines, consistency verification is applied to the set of decision matrices, and finally the coefficients are fused based on that set. Their experimental results prove the superiority of this method in both subjective and objective evaluation, but the algorithm still has errors [16].

1.3. Main Content and Innovation

This paper studies an image fusion algorithm and its applications based on virtual reality technology and the Nuke software. It mainly uses virtual technology to build an image fusion model and design a graphics system model, then proposes an image fusion algorithm, and conducts algorithm research and experiments in Nuke to obtain the effect and function of the image fusion algorithm. The innovation of this paper is to combine virtual technology and software technology to propose an image fusion algorithm and to conduct an experimental analysis of image fusion that establishes the algorithm's applications and effects.

The first part of the article introduces the relevant background and significance of virtual reality technology and the Nuke software, provides a brief overview of the related work, and briefly outlines the article's innovations.

The second part of the article gives an overview of the Nuke software, the particle swarm algorithm, and image fusion evaluation metrics.

The third part of the article focuses on the image fusion model based on image edge detection and virtual technology.

The fourth part of the article introduces optimal image fusion and the image fusion algorithm and analyzes the experimental results.

The fifth part of the article provides a brief discussion of the experiments and analysis presented in the article.

2. Software Overview and Image Fusion Algorithm

2.1. Overview of Nuke Software

As a picture and film compositing package, Nuke has won an Academy Award. With more than ten years of development behind it, it now offers film and television professionals solutions for creating high-quality, high-precision photorealistic images [17]. Nuke is not tied to a particular hardware platform and is very approachable. In terms of performance, it is an effective tool for artists to combine and process scanned photos and video plates, and it provides flexible, effective, and practical tools. With its image fusion processing and complete image processing solutions, the Nuke platform is favored by most users for these advantages. Nuke was originally a proprietary in-house tool developed and used by D2. It is used in conjunction with Houdini, one of the world's most powerful special effects packages, and the two together carry much of the pipeline. The practicality and speed of Nuke are reflected in the relevant data [18, 19]: Nuke has been used in many award-winning films and photographic works, and many well-known artists create with it. Nuke has many add-on components. Among them is a stereoscopic film production add-on: Ocula, one of The Foundry's best products, designed specifically for stereoscopic film production. Ocula helps users quickly solve various problems in the stereoscopic production process; its capabilities save a great deal of labor in the stereoscopic film industry and improve production efficiency. Ocula has appeared in almost all subsequent Hollywood stereoscopic movies [20, 21]. Nuke is a cross-platform compositing package with powerful node types and great compatibility. Because it has versions for different system platforms, it can be used across multiple systems on a network.
In this way, users need not worry about being unable to run the software due to system incompatibility. Nuke can independently create its own model objects, cameras, and lights on its own platform, can quickly and freely switch between 3D scene mode and 2D composition mode at runtime, and can also project composited scenes onto a newly created model as a texture. Nuke supports multicamera setups and combinations of multiple lights in self-created three-dimensional scenes, including projecting between multiple objects and adjusting the camera's depth-of-field effect [22]. Nuke allows color correction and comparison between different color ranges on the screen and can quickly and easily switch between different color ranges in the same frame, so users can work quickly and effectively. More importantly, the Nuke platform offers different grading filters for different color ranges, and users can find their own grading methods and corresponding filter combinations [23]. It should also be mentioned that Nuke integrates the well-known keying plug-ins Primatte, Ultimatte, and Keylight by default, which adds flexibility and unlimited possibilities for keying work in post-production. Based on the powerful image processing capabilities of this software, this paper conducts experiments and analysis of the image fusion algorithm in Nuke; the software supplies the experimental basis and the image fusion platform for proposing and studying the application of the image fusion algorithm [24, 25].

2.2. Particle Swarm Algorithm (PSO)

Particle swarm optimization (abbreviated PSO) is a swarm intelligence algorithm that emerged after the ant colony algorithm and has become an important branch of evolutionary computation [26, 27]. It is a stochastic search algorithm based on group collaboration, developed by simulating the foraging behaviour of a flock of birds. The variant used here combines ideas from quantum physics to modify the "evolution" step of PSO (that is, the particle position update): positions are updated around the current best local position of each particle and the best global position. In this quantum-behaved PSO, the movement of particles is governed by the following three formulas.

The quantum-behaved evolution uses a random number u with a value range of (0, 1) that obeys a uniform distribution, and a jump sign that takes +1 and −1 each with a certain probability. Each particle moves around the local attractor p_i = φ · pbest_i + (1 − φ) · gbest, where φ is uniform on [0, 1]. The particle evolution formula of our method is as follows:

X_i(t + 1) = p_i ± α · |mbest − X_i(t)| · ln(1/u)

where mbest denotes the mean of all particles' current best positions.

Here α is the contraction-expansion coefficient of the algorithm. Its value depends on the situation: it can be fixed, or it can change dynamically in a certain way, generally according to the following formula:

α = α_max − (α_max − α_min) · t / T

That is, α linearly decreases from α_max to α_min with the iteration number t, where T is the maximum number of iterations. The algorithm flow of PSO is as follows. Initialization: randomly initialize the initial position X_i(0) of each particle, set the current best position of each particle as pbest_i = X_i(0), and let the global best position gbest be the pbest_i with the largest objective value.

Calculate the objective function value f(X_i(t)) of each particle according to the objective function; update the new local optimal position of each particle according to the following formula, assuming that the objective function is being maximized:

pbest_i ← X_i(t) if f(X_i(t)) > f(pbest_i); otherwise pbest_i is unchanged.

Update the global optimal position according to the following formula:

gbest ← pbest_g, where g = argmax_i f(pbest_i).

Repeat the above calculations until the number of iterations in the algorithm reaches the preset maximum T.
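The flow above can be sketched as a minimal quantum-behaved PSO implementation. The swarm size, search bounds, and the test objective below are illustrative assumptions, not values from the paper; the update follows the attractor and contraction-expansion formulas described above, maximizing the objective.

```python
import numpy as np

def qpso_maximize(f, dim, n_particles=30, max_iter=200,
                  alpha_max=1.0, alpha_min=0.5, bounds=(-10.0, 10.0), seed=0):
    """Quantum-behaved PSO sketch following the update rules above."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    pbest = X.copy()                                    # per-particle bests
    pbest_val = np.array([f(x) for x in X])
    gbest = pbest[np.argmax(pbest_val)].copy()          # global best position

    for t in range(max_iter):
        # contraction-expansion coefficient, linearly decreasing with t
        alpha = alpha_max - (alpha_max - alpha_min) * t / max_iter
        mbest = pbest.mean(axis=0)                      # mean best position
        phi = rng.uniform(0, 1, (n_particles, dim))
        p = phi * pbest + (1 - phi) * gbest             # local attractors
        u = rng.uniform(1e-12, 1.0, (n_particles, dim))
        sign = np.where(rng.uniform(size=(n_particles, dim)) < 0.5, 1.0, -1.0)
        # quantum-behaved position update: X = p +/- alpha*|mbest - X|*ln(1/u)
        X = np.clip(p + sign * alpha * np.abs(mbest - X) * np.log(1.0 / u),
                    lo, hi)
        vals = np.array([f(x) for x in X])
        improved = vals > pbest_val                     # maximization
        pbest[improved] = X[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, float(pbest_val.max())

# Hypothetical test objective: maximum at x = (3, 3)
best_x, best_val = qpso_maximize(lambda x: -np.sum((x - 3.0) ** 2), dim=2)
```

On this smooth quadratic the swarm should land very near the optimum; harder multimodal objectives would need more particles or iterations.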

2.3. Evaluation Index of Image Fusion

After the fusion process is completed, judging the quality of an image fusion algorithm is not a simple matter [28]. Image quality covers image fidelity and image readability. Image fidelity refers to the degree of deviation of the evaluated image from a standard image; the smaller the deviation, the higher the fidelity. Readability refers to the ability of an image to provide information to a person or machine, which depends not only on the application requirements of the imaging system but also, often, on the subjective perception of the human eye. Image quality indicators include aspects such as resolution, color depth, and image distortion. The evaluation of fusion algorithm quality should consider the following aspects. First, the fused result image should contain as much of the important information of the original images as possible. Second, the fused result image should not introduce erroneous image information that could mislead observation and subsequent processing. Third, image quality may be degraded during acquisition by weather or other factors, or left poor by preprocessing; in these cases the result of the image fusion algorithm should not fluctuate too much. Fourth, when the acquired images are mixed with noise, the algorithm should minimize the effect of the noise on the fusion result. Fifth, when the image fusion algorithm must run in real time, it should be easy to port to a hardware platform [29, 30]. The evaluation criteria for the fused result image fall into two classes: subjective methods based on direct observation by the human eye, and objective methods based on image parameters and related formulas.
In the subjective method of direct observation, the viewer evaluates the quality of the result image according to an established rating scale and his or her own image evaluation experience; sometimes standard images are provided for comparative judgment. Subjective evaluation is not easy to operate in practical applications and is easily affected by the observer's own condition and psychology. Therefore, objective methods that combine image parameters with formula-based calculations are often used in practice. They not only overcome the shortcomings of the subjective method but also enable automatic evaluation of the result image. Because image preprocessing is completed by the computer, combining the objective method for later quality evaluation with preprocessing improves the efficiency of the entire image processing pipeline.

Image fusion maximizes the extraction of useful information from the respective channels and finally combines it into a single high-quality image, increasing the utilisation of image information, improving the accuracy and reliability of computer interpretation, enhancing the spatial and spectral resolution of the original images, and facilitating monitoring.
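Two common objective indices of the kind discussed above are the peak signal-to-noise ratio (fidelity against a reference) and information entropy (information content of the fused result). The sketch below computes both; the function names and toy images are illustrative assumptions, not from the paper.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between reference and test images."""
    mse = np.mean((reference.astype(np.float64)
                   - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def entropy(img):
    """Shannon entropy (bits) of an 8-bit image's gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Hypothetical data: a flat image and the same image with one pixel changed
ref = np.full((8, 8), 100, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 110
```

A flat image has zero entropy; perturbing one pixel raises the entropy slightly and yields a finite, high PSNR against the original.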

3. Image Fusion Model of Image Edge Detection and Virtual Technology

3.1. The Detection Operator of the Image Edge

The image information contains many usable image features; commonly used ones include color features, geometric features, and texture features. The feature extraction work in this paper focuses on edge extraction. Edge features are chosen mainly because they are invariant: the edge of the same scene remains basically unchanged under different conditions (lighting and color). The two most useful quantities for describing this change are the rate and the direction of grayscale change. Generally, the grayscale change rate along an edge is relatively small, while the grayscale perpendicular to the edge changes much more drastically. In an actual digital image this is represented by the magnitude and direction of the gradient vector: the neighborhood of each pixel is examined, the gray-scale change rate is quantified, and the derivative is approximated by finite differences. The benefits of edge algorithms include real-time processing, reduced impact of bandwidth limitations, increased security of sensitive and private data, operational data reliability, and versatility in application development. Figure 1 shows the specific process diagram. (1) Roberts edge operator

The Roberts edge operator analyzes edges using cross differences in a 2 × 2 neighborhood of the image, and the edge amplitude is calculated as follows:

g(x, y) = sqrt{ [f(x, y) − f(x + 1, y + 1)]² + [f(x + 1, y) − f(x, y + 1)]² }

(2) Sobel edge operator

The Sobel edge operator calculates the horizontal and vertical gradient values in a 3 × 3 neighborhood; the convolution templates are as follows:

G_x = [ −1 0 1; −2 0 2; −1 0 1 ],  G_y = [ −1 −2 −1; 0 0 0; 1 2 1 ]

(3) Laplace operator

The Laplacian operator is a second-order edge detection operator that is commonly used for edge detection. For a continuous function f(x, y), it is defined as follows:

∇²f = ∂²f/∂x² + ∂²f/∂y²

One of the two expression forms commonly used in practice, based on difference equations, is as follows:

∇²f(x, y) ≈ f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4f(x, y)

The numerical approximation that also involves the diagonal neighborhood is given by the following:

∇²f(x, y) ≈ Σ f(neighbors) − 8f(x, y), where the sum runs over the eight neighbors of (x, y).

In digital images, Laplacian operators can be implemented with the help of various templates. The two common templates are as follows:

[ 0  1  0 ]       [ 1  1  1 ]
[ 1 −4  1 ]  and  [ 1 −8  1 ]
[ 0  1  0 ]       [ 1  1  1 ]

As shown in Table 1, there are only two common Laplacian operator templates, which combine the smoothing characteristics of Gaussian operators with the sharpening characteristics of the Laplacian.

The Sobel operator is a discrete difference operator used to approximate the gradient of the image brightness function. The Laplacian operator is the simplest isotropic differential operator and has rotational invariance. The Roberts edge operator can be used to obtain the edge contours of a fixed-format document image, after which the Hough algorithm can extract straight lines from the contour image.
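The operators above can be sketched as small kernels applied with a naive "valid" correlation, which is sufficient for these 3 × 3 masks (correlation flips the sign of the Sobel responses relative to true convolution, but the gradient magnitude is unaffected). The helper names and toy image below are illustrative assumptions.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T
LAPLACE_4 = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
LAPLACE_8 = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=np.float64)

def convolve_valid(img, kernel):
    """Naive 'valid' 2-D correlation, enough for these 3x3 operators."""
    img = img.astype(np.float64)
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_magnitude(img):
    """Gradient magnitude from the two Sobel components."""
    gx = convolve_valid(img, SOBEL_X)
    gy = convolve_valid(img, SOBEL_Y)
    return np.hypot(gx, gy)

# A vertical step edge: left half 0, right half 255
img = np.zeros((5, 6))
img[:, 3:] = 255
mag = sobel_magnitude(img)
```

The Sobel magnitude peaks on the two columns straddling the step and is zero in the flat regions, as expected for a first-derivative operator; the Laplacian templates respond only at the discontinuity.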

3.2. Virtual Technology Integrated Image Model

Panorama is a technology that uses real images to create a virtual environment. A panoramic image is a continuous image representation created by processing discrete sample images. Panoramic images can be divided into spherical and cubic types according to the shape of the image surface; we call them spherical panoramic images and cubic panoramic images.

3.2.1. Spherical Panorama

A spherical panorama maps photos taken by an ordinary camera onto a spherical surface. The spherical panorama is obviously the panorama description closest to the human eye model, but spherical mapping is a nonuniform sampling representation that distorts the image and the scene, particularly severely at the two poles. Moreover, the computer's data storage structure is planar, so projecting real plane photos onto the spherical image introduces a nonlinear deformation in the image plane; a nonlinear image conversion must be performed, which reduces display speed. At present, spherical panoramas are mainly obtained with a special fish-eye camera (Figure 2 shows a picture taken by a fish-eye camera), and the final spherical panorama is obtained by correcting the distortion, stitching the pictures, and adjusting the brightness.

3.2.2. Cube Panorama

The cube panorama is generated by mapping the real image onto the six faces of a cube. This structure has good regularity and is easy to store. The expanded cube panorama is shown in Figure 3. However, the image is prone to discontinuities at the junctions of the cube faces, so the camera must be placed very accurately when sampling: the six faces must be perpendicular to each other to avoid optical distortion, and each captured plane image requires a 90° wide-angle lens to avoid image distortion.

The spherical panorama maps the longitude and latitude coordinates of the sphere onto a grid of horizontal and vertical coordinates, a grid approximately twice as wide as it is high. Thus, from the equator to the poles, the horizontal stretching intensifies, and the north and south poles are stretched into flattened bands along the upper and lower edges. The spherical panorama allows a realistic 360° view over the entire horizontal and vertical field. The cube panorama is divided into six faces (front, back, left, right, top, and bottom) which, when viewed, are combined into an enclosed space to realise the same full horizontal and vertical 360° panorama.
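The latitude-longitude mapping described above can be sketched as a function from a 3-D viewing direction to pixel coordinates in an equirectangular (spherical) panorama. The function name and coordinate conventions below are illustrative assumptions.

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a 3-D view direction to (column, row) in an equirectangular
    panorama whose width is roughly twice its height.

    Longitude spans [-pi, pi] across the width and latitude
    [-pi/2, pi/2] down the height; the poles map to the top and bottom
    rows, which is exactly the nonuniform polar stretching described
    above.
    """
    norm = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(x, z)              # longitude of the direction
    lat = math.asin(y / norm)           # latitude of the direction
    col = (lon / (2 * math.pi) + 0.5) * (width - 1)
    row = (0.5 - lat / math.pi) * (height - 1)
    return col, row
```

Looking straight ahead lands at the center of the panorama, while looking straight up lands on the top row, illustrating how an entire pole collapses to a single row of pixels.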

Using panoramas in image data fusion requires a nonlinear deformation in the image plane; this nonlinear image transformation reduces display speed but avoids image distortion.

The method of taking photos with a normal camera and stitching them into a panorama requires no scene modeling and supports real-time viewing. In this setting, processing time is unrelated to the complexity of the scene, so the real world can be reproduced with real-time interaction and without special hardware acceleration or powerful graphics. Therefore, people often use this method to create panoramas. Panorama generation is a complex process comprising five stages: establishing the panorama model, image acquisition, image registration, image synthesis, and panorama display and exploration. The process of generating a panorama is shown in Figure 4.

3.3. Image System Model Based on Virtual Reality Technology

Existing virtual reality imaging systems usually have two main modules: preprocessing and interaction. In the preprocessing module, the system generates the virtual scene, while the interactive module mainly handles the interaction between the system and participants and performs the roaming function. The programming module is responsible for finding the panorama corresponding to the current viewpoint and setting the current viewing direction; part of the panoramic image is sent to a buffer, and the projection module converts the cylindrical projection image in the buffer into a plane projection image and displays it on the observation screen. Image fusion technology is an important branch of digital image processing, widely used in spatial texture detection, heritage conservation, medical imaging, public security forensics, virtual reality, and other fields. Because captured images are affected by factors such as illumination, angle, displacement, and jitter, image fusion is prone to distortion and deformation; the projection model and image fusion are therefore used to study this distortion and deformation and to improve traditional fusion techniques. In summary, the frame model of the existing virtual reality image system is shown in Figure 5.

As shown in Figure 5, the frame model of the image system based on virtual reality technology is composed of two parts: the interaction module and the preprocessing module. These two modules are independent but related. In the preprocessing module, a stitching module assembles the image sequence into a panoramic image, which is then processed into a virtual scene through the link module; the interactive module and the preprocessing module then operate jointly.

4. Best Image Fusion and Image Fusion Algorithm

4.1. Best Fusion of Images
4.1.1. Direct Fusion in the Spatial Domain

In this paper, software experiments are carried out on image fusion: the images in Table 2 are fused into the original images, and the corresponding critical fusion factors and actual fusion factors are calculated. The measured data are shown in Table 2.

As shown in Table 2, there is a large gap between the actual fusion factor and the critical fusion factor. The purpose of this research is to bring the actual fusion factor closer to the critical fusion factor to improve resistance to attacks on the hidden image. The watermark image is fused into the carrier image with a fusion factor of 0.07. The fused image is shown in Figure 6(a), and its peak signal-to-noise ratio is 28. The extracted hidden image is shown in Figure 6(b), and the normalized correlation coefficient of the extracted watermark is 0.9. The higher the actual fusion factor and the critical factor in the experiment, the better the peak signal-to-noise ratio of the fused image and the better the hiding effect.

To determine the robustness of the algorithm, various jamming attacks are performed on it: image cropping, JPEG compression, added Gaussian noise, salt-and-pepper noise, multiplicative noise, Gaussian filtering, etc. The normalized correlation coefficients are shown in Table 3, and the extracted watermarks are shown in Figures 6 and 7.

Based on Figure 8, the shearing attack with parameter 2 yields a correlation coefficient of 0.66; Gaussian noise with mean 0 and variance 0.02 yields a correlation coefficient of 0.3; and multiplicative noise with mean 2 and variance 0.03 yields a correlation coefficient of 0.2.
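A minimal sketch of spatial-domain additive fusion and of the normalized correlation measurement used in these experiments might look as follows. It assumes the simple model fused = carrier + k · watermark with fusion factor k = 0.07 (the measured value above); the paper's exact embedding rule may differ, and the images here are synthetic.

```python
import numpy as np

def embed_watermark(carrier, watermark, k=0.07):
    """Spatial-domain additive fusion: fused = carrier + k * watermark.

    Larger k makes the watermark easier to detect but lowers the
    carrier's PSNR (assumed model, not necessarily the paper's).
    """
    fused = carrier.astype(np.float64) + k * watermark.astype(np.float64)
    return np.clip(fused, 0, 255)

def extract_watermark(fused, carrier, k=0.07):
    """Inverse of the additive fusion (requires the original carrier)."""
    return (fused - carrier.astype(np.float64)) / k

def normalized_correlation(w, w_hat):
    """Normalized correlation between original and extracted watermarks
    (1.0 means a perfect match)."""
    w = w.astype(np.float64).ravel()
    w_hat = w_hat.astype(np.float64).ravel()
    return float(np.dot(w, w_hat)
                 / (np.linalg.norm(w) * np.linalg.norm(w_hat)))

# Synthetic carrier and binary watermark (hypothetical data)
rng = np.random.default_rng(1)
carrier = rng.integers(0, 200, (16, 16)).astype(np.float64)
wm = rng.integers(0, 2, (16, 16)) * 255.0
fused = embed_watermark(carrier, wm)
recovered = extract_watermark(fused, carrier)
```

Without any attack, extraction is essentially exact and the normalized correlation is 1; the attacks listed above (cropping, noise, filtering) degrade this coefficient, as in Tables 3 and 4.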

4.1.2. Research and Analysis of the Best Fusion Factor

By multiplying the image watermark to reduce its gray level by a factor of 0.6 and applying the best geometric transformation, the image watermark is hidden, and the fusion factor is measured to be 0.15. Because of the gray-level reduction and the best geometric transformation, this fusion factor is greater than the direct spatial-domain fusion factor, and the watermark is fused into the image with this factor. The fused image is shown in Figure 9(a), and its peak signal-to-noise ratio is 28. The extracted hidden image is shown in Figure 9(b), and the normalized correlation coefficient of the extracted watermark is 0.9.

In order to determine the robustness of the algorithm, the following attacks are performed on it: image cropping, JPEG compression, added Gaussian noise, salt-and-pepper noise, multiplicative noise, Gaussian filtering, etc. The measured normalized correlation coefficients are shown in Table 4, and the extracted watermarks are shown in Figures 9 and 10. The coefficients in Table 4 are obtained by normalizing the parameters of the experimental target to yield the correlation coefficient.

As shown in Figures 10 and 11 and Table 4, the best fusion algorithm uses iterative transformations related to image fusion to greatly increase the fusion factor of the image and greatly improve the image's resistance to attacks. Since the calculation of the fusion factor involves two images, the matrices and the numbers of iterations used in the fusion factor and in the various attacks can serve as keys, improving the security of the algorithm.

4.2. Experimental Results and Analysis of Image Fusion Algorithm

The data sets used in the experiments are a collected data set and a standard data set, on which the proposed algorithm is tested. The room data set contains two point clouds, including three-dimensional coordinates and intensity information; the two point clouds were collected from different angles. The laboratory data set contains four point clouds, which also contain coordinate and color information. In this part, the image fusion algorithm described above is tested, and its execution time and final fusion accuracy are compared with those of conventional algorithms. During the experiments, the Nuke data set was used, and the accurate fusion algorithm was executed on top of the initial fusion. First, the resolution of the point cloud is reduced to different degrees to obtain point clouds of different resolutions; the number of iterations during algorithm execution is recorded after sampling. In the experiments, the total number of iterations for both the proposed algorithm and the conventional algorithm is 90.

As shown in Table 5, the sampling parameters of the two data sets are listed, including the resolution, the numbers of source and target points, and the number of iterations. As the resolution of the room data set decreases, the numbers of source points and target points both decrease significantly, but the number of iterations increases. For the laboratory data set, the numbers of source and target points and the number of iterations all show a clear downward trend as the resolution decreases. During the fusion process, the time consumed by the algorithm in this paper and by the traditional algorithm on the different data sets is recorded, as shown in the table below.

In the text, "algorithms" refers to types of data fusion, including linear programming, quadratic programming, integer programming, and hybrid programming, with and without constraints. As shown in Table 6, on the room data set the traditional algorithm takes 60 minutes while the algorithm in this paper takes 33 minutes, a saving of 27 minutes and an improvement of about 40%. On the lab data set, the traditional algorithm takes 120 minutes while the algorithm in this paper takes 40 minutes, a saving of 80 minutes and an improvement of about 60%. On both the room and laboratory data sets, each iteration consumes less time than with the traditional method, a significant improvement: the proposed algorithm saves time to a large extent, and its efficiency is greatly improved. Comparing the two data sets, the relative improvement on the lab data set is larger, indicating that the algorithm executes more efficiently there; but on either data set, the algorithm in this article is effective and beneficial.

5. Discussion

This paper combines virtual reality technology and the Nuke software to study an image fusion algorithm and its applications. It builds an image fusion model and model system through virtual reality technology, then proposes an image fusion algorithm, and uses Nuke to evaluate and compare the algorithm. Experiments and analysis show that the algorithm provides sufficient privacy protection, and comparison with traditional algorithms shows that the image fusion algorithm has obvious advantages in time consumption. Image fusion can increase image spatial resolution, improve geometric accuracy, enhance the ability to display features, improve classification accuracy, provide change detection capability, and replace or repair defects in image data; it exploits the advantages of different remote sensing data sources, compensates for the shortcomings of any single kind of remote sensing data, and improves the applicability of remote sensing data.

Data Availability

No data were used to support this study.

Conflicts of Interest

All authors declare that they have no conflict of interest.