Abstract
At present, the information involved in image synthesis for urban virtual geographic scenes is complex, and existing acquisition techniques are easily disturbed during information positioning and transmission, which leads to large acquisition delays and degrades the final quality of the synthesized images. To address these problems, a synchronous information acquisition method for urban virtual geographic scene image synthesis based on wireless network technology is studied. On the basis of a constructed urban virtual geographic environment, the sign targets used for geographic scene image synthesis are spatially located, and the optical flow method is used to register the urban geographic scene images. A beacon-based greedy algorithm is then used to design the synchronous wireless network routing for urban virtual scene image synthesis; after the wireless network covering the acquisition area is divided into grids, the packets of each acquisition node are collected and transmitted along the designed routes, realizing synchronous acquisition of the image synthesis information. Comparative experimental data show that the acquisition delay of the studied method is less than 0.5 s, the acquisition synchronization rate is significantly improved, the quality of images synthesized from the acquired information is better, and the method has good practical value.
1. Introduction
Urban virtual geographic scene image synthesis is an important technology in computer vision applications, and the quality of its results directly affects the effect of urban virtual scene construction. When constructing an urban virtual geographic scene, the synthesis information must be combined with the geographic scene information acquired by the camera for parameter resolution and related operations in order to improve the realism of the virtual geographic scene. For virtual geographic scene image synthesis, the efficiency and accuracy of synthesis information acquisition are therefore critical [1, 2]. Current research on the synchronous acquisition of image synthesis information is mainly applied to medical images, where synchronous acquisition of imaging synthesis information is achieved with techniques such as ultrahigh-field imaging, weighted imaging, and time encoding, or by combining them with RF communication technology [3–7]. These methods offer a high synchronization rate and relatively high information acquisition accuracy, but when applied to virtual geographic scene image synthesis, the large amount and complexity of the information to be acquired make it difficult to achieve high acquisition speed and accuracy, so their use is limited. Acquisition based on NFC technology and wireless mobile networks requires a certain amount of manual assistance [8], which makes the method costly and unsuitable for wide-scale use. Acquisition based on synthetic adversarial networks requires such networks to be established first, which introduces large delays [9–11].
A wireless sensor network is an infrastructure-free, self-organized, multihop wireless network that can sense, collect, and process information about various monitored objects in real time; it has wide application prospects in military, industrial automation, intelligent transportation, and environmental monitoring applications and is an international research hotspot that has received much attention. Image synthesis requires sensors to monitor a large number of geographic parameters in order to enrich the detail of the virtual scene and improve the quality of the synthesized images. Using wireless sensor networks can effectively increase data transmission and processing speed and reduce acquisition delay. Against this background, this paper studies a method for the synchronous acquisition of image synthesis information of urban virtual geographic scenes based on wireless network technology.
2. Research on the Method of Synchronous Information Acquisition of Urban Virtual Geographic Scene Image Synthesis Based on Wireless Network Technology
2.1. Geographical Scene Image Synthesis Sign Target Spatial Localization
Target location in a virtual geographic environment is the key to combining the real and the virtual. Before collecting the image synthesis information of an urban virtual geographic scene, this paper first constructs the urban virtual geographic environment and then locates the sign targets used for geographic scene image synthesis.
Therefore, in order to improve the accuracy of scene image synthesis information acquisition, the synthesis sign targets in the geographic scene images must be located. When taking urban geographic images, the interior orientation elements in photogrammetry are the parameters that determine the relative position between the camera lens center and the image, namely, the principal distance f (the vertical distance between the camera center S and the image plane) and the coordinates (x0, y0) of the image principal point (the foot of the perpendicular from the principal optical axis onto the image plane) relative to the image center. The exterior orientation elements describe the position and attitude of the photography center and comprise six parameters: three linear elements (XS, YS, ZS) that give the coordinates of the photography center in the spatial coordinate system, and three angular elements (φ, ω, κ) that determine the spatial attitude of the photographic bundle at the moment of exposure [12]. According to the schematic diagram of the OpenGL projection principle shown in Figure 1, the relationship between the interior and exterior orientation elements and the transformation matrices of the OpenGL imaging process can be determined.
Figure 1: Schematic diagram of the OpenGL projection principle.
When the interior orientation elements, such as the image width w and height h, the focal length f, and the coordinates (x0, y0) of the principal point of the photographic plate (CCD), are determined, they are passed as parameters to OpenGL's imaging function glFrustum using the corresponding proportional relationships, so as to simulate the real camera imaging process, which corresponds to the following [13]:
Substituting the above equation into the projection transformation formula of OpenGL, the following projection matrix can be obtained.
In the above formula, zNear and zFar, respectively, determine the near and far clipping planes of the viewing frustum. In OpenGL, the model transformation and the view transformation are dual, so the model-view transformation matrix can also be constructed with glLookAt(). Assuming that the camera position given by the exterior orientation elements is (XS, YS, ZS) and that the angular elements of the spatial attitude are (φ, ω, κ), the exterior parameter matrix can be calculated. According to the quantitative relationship between the OpenGL perspective imaging function parameters and the interior and exterior orientation elements, the projection matrix and model-view matrix required for camera imaging of the urban virtual geographic scene are calculated from the camera position, attitude, and interior parameters. Following the inverse process of OpenGL imaging of the urban virtual geographic scene, the projection ray of the target in three-dimensional space is computed, and the intersection of this ray with the three-dimensional scene gives the actual real-world coordinates of the synthesis sign target, where the pixel coordinates of the monitored target imaged on the simulation image are calculated as follows [14]:
In the above formula, xs and ys are the screen coordinates, Wa and Ha are the width and height of the actual image, Wv and Hv are the width and height of the virtual simulation image, and u and v are the pixel coordinates of the target on the actual image. From the screen coordinates of the target in the simulation image, the intersection points of its projection ray with the near and far clipping planes of the viewing frustum can be obtained, with coordinates Pnear and Pfar, respectively. Following the inverse process of OpenGL imaging, the real-world coordinates (X, Y, Z) in three-dimensional space can be calculated from the screen coordinates and the corresponding depth value d using the following formula:
In the above equation, Mmv is the model-view transformation matrix between the virtual scene and the view in OpenGL, and Mvp is the OpenGL viewport transformation matrix. Finally, the projection ray of the target in 3D space is obtained in the virtual geographic scene and intersected with the digital surface model of the scene; the intersection position is the spatial coordinate of the target on the ground surface. After the spatial coordinates of the synthesis sign target of the geographic scene image have been located, the scene images are registered.
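As a concrete illustration of this inverse-imaging step, the following Python sketch builds a glFrustum-style projection matrix from the interior orientation elements and unprojects a screen coordinate with its depth value back to world coordinates. The sign conventions of the frustum mapping, the matrix and function names, and the ray construction are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def frustum_from_intrinsics(f, x0, y0, w, h, z_near, z_far):
    """glFrustum-style projection matrix from interior orientation elements:
    focal length f (in pixels), principal point offset (x0, y0) from the
    image center, and image size w x h. Sign conventions are assumed."""
    left   = -z_near * (w / 2 + x0) / f
    right  =  z_near * (w / 2 - x0) / f
    bottom = -z_near * (h / 2 + y0) / f
    top    =  z_near * (h / 2 - y0) / f
    return np.array([
        [2 * z_near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * z_near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(z_far + z_near) / (z_far - z_near),
               -2 * z_far * z_near / (z_far - z_near)],
        [0, 0, -1, 0],
    ])

def unproject(xs, ys, depth, P, M_mv, viewport):
    """Invert the OpenGL imaging process: screen coordinates (xs, ys) and a
    depth value in [0, 1] are mapped back to world coordinates."""
    vx, vy, vw, vh = viewport
    # Screen coordinates -> normalized device coordinates
    ndc = np.array([2 * (xs - vx) / vw - 1,
                    2 * (ys - vy) / vh - 1,
                    2 * depth - 1,
                    1.0])
    world = np.linalg.inv(P @ M_mv) @ ndc
    return world[:3] / world[3]

# The projection ray of a target is obtained by unprojecting the same screen
# point at depth 0 (near plane) and depth 1 (far plane); intersecting that
# ray with the digital surface model yields the target's ground coordinates.
```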
2.2. Image Registration Processing of Virtual Geographic Scene
On the basis of the recorded urban virtual geographic scene images, and in order to retain more of the original urban geographic scene information and improve the accuracy of the image information synthesis, this study uses an optical flow method to register the urban geographic scene images. Let the three input images with different exposure levels be I1, I2, and I3, with exposure times t1, t2, and t3, respectively. Optical flow is most accurate when the brightness of the moving content remains constant, so in order to accurately calculate the optical flow between I1 and I2, the exposure of I1 must first be made consistent with that of I2; at the same time, the exposure of I3 is transferred to be consistent with that of I2 for the subsequent optical flow computation. This process can be expressed as follows [15], where τ1 is the exposure ratio between I1 and I2, τ3 is the exposure ratio between I3 and I2, Î1 is the image that matches the exposure value of image I2 after exposure correction of image I1, Î3 is the image that matches the exposure value of image I2 after exposure correction of image I3, γ is the gamma coefficient, with a value of 2.2, and g(·) is a clipping function whose purpose is to ensure that the pixel values of the image lie within the range [0, 1]. The calculation formula of the exposure ratio is as follows:
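A minimal Python sketch of this exposure-matching step, under the assumption that the exposure ratio is taken as the ratio of the exposure times and that pixel values are normalized to [0, 1]; the function names are illustrative only.

```python
import numpy as np

def exposure_ratio(t_ref, t_src):
    """Exposure ratio between the reference image and the source image,
    assumed here to be the ratio of their exposure times."""
    return t_ref / t_src

def match_exposure(img, ratio, gamma=2.2):
    """Transfer a source image to the exposure level of the reference:
    undo the gamma, scale the linear radiance by the exposure ratio,
    reapply the gamma, and clip to [0, 1]."""
    linear = np.power(img, gamma)          # approximate inverse gamma
    scaled = linear * ratio                # match the reference exposure
    corrected = np.power(scaled, 1.0 / gamma)
    return np.clip(corrected, 0.0, 1.0)    # the clipping function g(.)

# Example (illustrative): correct I1 and I3 toward the middle exposure I2.
# I1_hat = match_exposure(I1, exposure_ratio(t2, t1))
# I3_hat = match_exposure(I3, exposure_ratio(t2, t3))
```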
In order to obtain a structurally consistent image sequence of the urban geographic scene, the optical flow from I1 to I2 and from I3 to I2 must be found. Firstly, the optical flow from I1 to I2 is calculated, assuming that the coordinate system in which I2 is located is Ω and that p = (x, y) is a two-dimensional coordinate in this coordinate system. In the optical flow method, the mapping relationship between images is represented by the error between feature points, and the objective function can be expressed as follows [16], where w(p) is the optical flow vector from I1 to I2 at each pixel position p, α is the weight parameter that regulates the distance between points p and q, N(p) is a square neighborhood of point p, λ is the regularization factor, and ω(p) is the weighting function for each pixel point, defined as follows [17], where the indicator term takes the value 1 if the pixel value lies in the specified range and 0 otherwise, σ(p) is the pixel variance at the pixel point, and I(p) is the pixel value at that point.
Assuming that a preliminary estimate of the optical flow exists, the feature points can be adjusted to obtain the optimal value. In this study, a coarse-to-fine search strategy is adopted to update the optical flow iteratively. Taking the linearized minimization of the above function as the optimization objective, the conjugate gradient method is used to solve it, and the corresponding optical flow estimate from I1 to I2 is obtained; in the same way, the optical flow estimate from I3 to I2 is obtained. Once the optical flow between the images is available, the warped images I1′ and I3′ that are structurally consistent with I2 can be obtained. Finally, bicubic interpolation is used to adjust the pixel values of the warped images, completing the registration of the urban geographic scene images [18]. In order to realize the synchronous acquisition of the synthesis information of the urban virtual geographic scene, the synchronous acquisition routing of the wireless sensor network is designed next.
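As a hedged sketch of the overall registration flow, the example below uses OpenCV's Farneback dense optical flow (a coarse-to-fine method) as a stand-in for the conjugate gradient solver described above, and warps the source exposure onto the reference grid with bicubic interpolation; the function and variable names are assumptions for illustration.

```python
import cv2
import numpy as np

def register_to_reference(src_corrected, src_original, ref):
    """Warp a source exposure onto the reference image grid.

    src_corrected: source image after exposure matching (grayscale uint8)
    src_original:  original source image to be resampled
    ref:           reference image I2 (grayscale uint8)
    The flow is estimated from the reference toward the source so that a
    simple backward warp with bicubic interpolation can be applied.
    """
    flow = cv2.calcOpticalFlowFarneback(
        ref, src_corrected, None,
        pyr_scale=0.5, levels=4, winsize=21,
        iterations=5, poly_n=7, poly_sigma=1.5, flags=0)

    h, w = ref.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Bicubic resampling of the source along the estimated flow field
    return cv2.remap(src_original, map_x, map_y, interpolation=cv2.INTER_CUBIC)

# Usage (illustrative): I1_reg = register_to_reference(I1_hat, I1, I2)
#                       I3_reg = register_to_reference(I3_hat, I3, I2)
```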
2.3. Build Wireless Network to Realize Image Synthesis Information Acquisition
The acquisition and transmission of image synthesis information for urban virtual geographic scenes mainly rely on multiple sensors and a wireless communication network, while the computational capacity, storage capacity, communication capacity, and energy of the nodes in a wireless sensor network are very limited, so the routing algorithm of the wireless sensor network is particularly important. Based on the geographic location information of the image synthesis markers located above, a beacon-based greedy algorithm is used to design the synchronous wireless network routing for urban virtual scene image synthesis.
When building the wireless network to obtain the image synthesis information, the target area covered by the wireless network for urban scene image synthesis information is divided into a set of equal-sized grids, and the division results are stored in a database table. The specific division steps are as follows [19] (a code sketch of the division is given after this list):
(1) Determine the parameters for dividing the grid. Specify the grid distribution range, including the minimum and maximum latitude and longitude of the wireless network coverage area. The coverage area is divided into square cells with a specified grid side length, which is set to 30–50 m according to the actual urban geographic scenario.
(2) Record the geographic information of each grid cell, including its row and column coordinates, minimum longitude, minimum latitude, maximum longitude, maximum latitude, central longitude, and central latitude, and label the network location information of the image synthesis markers. The grid is generated by a double loop: the number of rows of the grid table is obtained by subtracting the minimum latitude of the coverage area from the maximum latitude and dividing by the grid side length, and the number of columns is obtained by subtracting the minimum longitude from the maximum longitude and dividing by the grid side length. The row count bounds the outer loop and the column count bounds the inner loop, and the loop body calculates the geographic information of the grid cell at the current row and column, yielding the divided wireless network.
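A minimal sketch of the grid division described above; the conversion of the 30–50 m side length into degrees, the field names, and the spherical approximation are illustrative assumptions.

```python
import math

def divide_grid(min_lat, min_lon, max_lat, max_lon, side_m=30.0):
    """Divide the wireless network coverage area into square cells and return
    one record of geographic information per cell. A real deployment would
    use a projected coordinate system instead of this rough approximation."""
    deg_lat = side_m / 111_320.0                       # metres per degree of latitude
    deg_lon = side_m / (111_320.0 * math.cos(math.radians((min_lat + max_lat) / 2)))
    rows = math.ceil((max_lat - min_lat) / deg_lat)    # outer loop bound
    cols = math.ceil((max_lon - min_lon) / deg_lon)    # inner loop bound

    cells = []
    for r in range(rows):                              # rows follow latitude
        for c in range(cols):                          # columns follow longitude
            cell = {
                "row": r, "col": c,
                "min_lat": min_lat + r * deg_lat,
                "max_lat": min_lat + (r + 1) * deg_lat,
                "min_lon": min_lon + c * deg_lon,
                "max_lon": min_lon + (c + 1) * deg_lon,
            }
            cell["center_lat"] = (cell["min_lat"] + cell["max_lat"]) / 2
            cell["center_lon"] = (cell["min_lon"] + cell["max_lon"]) / 2
            cells.append(cell)
    return cells
```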
According to the actual requirements of urban virtual geographic scene construction, a certain number of nodes are selected from the divided wireless network as source nodes for image synthesis information acquisition and transmission. Hardware devices such as cameras and sensors are placed at the image synthesis marker points to collect multiple kinds of geographic information in real time, and the control and transmission of the synchronously acquired information data are realized through these wireless network source nodes. The wireless network communication routing is then designed so as to reduce the information acquisition delay.
As shown in Figure 2, source nodes s, s1, and s2 send packets to target nodes D, M, and N. When source node s sends a packet to target node D, the packet is first forwarded from s in greedy mode until it reaches the local minimum node, then enters the surrounding (perimeter) mode and travels around the void via node P until it reaches beacon node B, where it recovers to greedy mode and is forwarded on to D. If the source node stores B in its beacon cache, then the next time s sends data to target nodes M and N, it routes directly via B instead of passing through P, which saves routing hops. Node s also sends its beacon cache information to s1 and s2, so that s1 and s2 likewise route directly via B when sending packets to D, M, and N, which shortens the detour around the void and saves the routing hops they would otherwise spend bypassing it [20].
Figure 2: Schematic diagram of beacon-based greedy routing in the wireless network.
Based on Figure 2, when a data packet is to be sent to a target node for urban virtual geographic scene image synthesis, the sender first checks whether the target node lies in one of the shadow areas recorded in its local beacon cache. If such a shadow area exists, the corresponding indirect target node, i.e., the beacon node, is extracted from the beacon cache, the packet mode is set to greedy data mode, and the packet is sent directly toward the indirect target node. After reaching the indirect target node, that node again checks whether the target lies in one of the shadow areas recorded in its own beacon cache; if so, the packet continues toward the next indirect destination address, and if not, the urban virtual geographic scene image synthesis packet is sent directly to the target address. If the target node receives the packet in this mode, the data transmission process ends there. If the target node receives a packet sent in DATA|FLD mode, it must additionally send the beacon information discovered during the beacon discovery process back to the source node and to the nodes in the shadow area of the source node. If no routing void is encountered during the whole routing process, the entire forwarding is done by the greedy algorithm, the beacon cache carried in the packet has size zero, the target node has no beacon information to return, and the data sending process is complete. The synthesis information obtained by each hardware device is transmitted along the wireless network communication routes designed above (a sketch of this forwarding logic is given below). Thus, under the transmission and communication network architecture built with wireless network technology, the synchronous acquisition of the synthesis information of urban virtual geographic scene images is realized by the above process.
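The following Python sketch illustrates the per-node forwarding decision described above. The node structure, the shadow-area test, and the greedy next-hop selection are simplified assumptions for illustration, not the paper's implementation.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def forward(node, packet):
    """One forwarding step of beacon-assisted greedy routing.

    node.pos          -- (x, y) position of this node
    node.neighbors    -- list of neighbor nodes, each with a .pos attribute
    node.beacon_cache -- list of (shadow_area, beacon_node) entries
    packet['target']  -- final target position
    Returns the chosen next hop, or None if delivered or stuck.
    """
    target = packet['target']
    if node.pos == target:
        return None                                    # delivery complete

    # 1. If the target falls in a cached shadow area, route via the beacon.
    for shadow_area, beacon in node.beacon_cache:
        if shadow_area.contains(target):               # assumed helper method
            packet['indirect_target'] = beacon.pos
            break
    dest = packet.get('indirect_target', target)

    # 2. Greedy mode: pick the neighbor closest to the (indirect) destination.
    best = min(node.neighbors, key=lambda n: dist(n.pos, dest), default=None)
    if best is not None and dist(best.pos, dest) < dist(node.pos, dest):
        return best

    # 3. Local minimum reached: fall back to surrounding (perimeter) mode,
    #    during which newly discovered beacons are added to the cache.
    packet['mode'] = 'DATA|FLD'
    return None
```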
3. Experimental Study
The preceding sections addressed the problems of current geographic scene image synthesis information acquisition methods and used wireless network technology to build a communication network in order to improve the acquisition speed of image synthesis information. In this section, an experimental study of the proposed wireless-network-based method for the synchronous acquisition of image synthesis information of urban virtual geographic scenes is conducted by setting up a corresponding experimental environment that reflects the actual situation. The specific experimental scheme design and the analysis of the experimental results are given below.
3.1. Experimental Protocol Design
In order to verify the performance of the proposed method in the synchronous acquisition of image synthesis information of an urban virtual geographic scene, an experimental analysis is carried out. The experimental tool is MATLAB 2016; the number of image samples is 120, the training set size is 50, the image segmentation scale is 12, the gray coefficient of the image is 0.34, the similarity coefficient is 0.38, and the wavelet decomposition scale is 18. The synchronous collection of image synthesis information of the urban virtual geographic scene is configured according to these parameters. In this experiment, the NFC-based information synchronous acquisition method and the time-encoding-based information synchronous acquisition method are selected as comparison method 1 and comparison method 2, respectively, and compared with the acquisition method studied above. The practical applicability of the three methods is evaluated by comparing their acquisition latency and the quality of the images synthesized from the acquired synthesis information.
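For reference, the experimental parameters listed above can be collected in a single configuration structure; the field names below are illustrative only and not taken from the paper.

```python
# Illustrative experiment configuration mirroring the parameters listed above.
EXPERIMENT_CONFIG = {
    "tool": "MATLAB 2016",
    "image_sample_size": 120,
    "training_set_size": 50,
    "segmentation_scale": 12,
    "gray_coefficient": 0.34,
    "similarity_coefficient": 0.38,
    "wavelet_decomposition_scale": 18,
    "comparison_methods": ["NFC-based acquisition", "time-encoding-based acquisition"],
}
```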
3.2. Experimental Data and Analysis
Using the three synthesis information acquisition methods to acquire the same amount of image synthesis information data, the average latency of the data acquired at each acquisition point is compared in Table 1.
Analyzing the data in Table 1 shows that the acquisition delay of both comparison methods grows noticeably as the amount of synthesis information data increases, and the larger the data volume becomes, the faster their acquisition delays grow. Although the acquisition delay of the method in this paper fluctuates slightly, its overall value remains below 0.5 s, which meets the demand of acquiring synthesis information for large-scale geographic scene images. The smaller the synthesis information acquisition delay, the higher the synchronization rate of the acquisition method and the better its performance. The delay of the proposed method is small because the urban landmark building targets are located before the image synthesis information of the urban virtual geographic scene is collected, which improves the acquisition efficiency.
Table 2 compares the quality of the images obtained when the synthesis information collected by the three methods is used for virtual city scene image synthesis; four indexes, namely, the average gradient, spatial frequency, information entropy, and peak signal-to-noise ratio (PSNR), are selected for image quality evaluation in this experiment.
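As a reminder of how the PSNR index in Table 2 is defined, the sketch below computes it for a pair of 8-bit images; this is the standard definition, not code from the paper.

```python
import numpy as np

def psnr(reference, synthesized, max_value=255.0):
    """Peak signal-to-noise ratio (in dB) between a reference image and a
    synthesized image, both given as arrays with values in [0, max_value]."""
    mse = np.mean((reference.astype(np.float64) - synthesized.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```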
Analyzing the data in Table 2 shows that the images synthesized from the information collected by all three methods meet the minimum standard, and the generated images show no obvious distortion compared with the ground truth. However, the images synthesized from the information collected by the method in this paper achieve better values on the image quality evaluation indexes. In terms of peak signal-to-noise ratio (PSNR), the images synthesized using the information collected by this method have larger values under both tone mapping and linearization, and the synthesis effect is better; the PSNR of this method reaches 47.5427. The main reason is that, in the process of synchronously acquiring the image synthesis information of the urban virtual geographic scene, this method constructs the network topology and realizes full coverage, thereby improving the quality of the acquired images.
The above experimental analysis shows that, compared with the other methods, the wireless-network-based synchronous acquisition method for urban virtual geographic scene image synthesis information studied in this paper has lower acquisition delay, a higher synchronization rate, and a better application effect.
4. Conclusion
Due to the massive, multisource, and heterogeneous nature of urban geospatial data, real-time rendering of large-scale urban 3D scenes has always been a difficulty of 3D geographic information technology. In order to establish a highly realistic and reliable virtual scene for urban planning, this paper studies a synchronous acquisition method for image synthesis information of the virtual urban geographic environment based on wireless network technology, verifies the feasibility of the method through experiments, and shows that it greatly improves the quality of the synthesized images. First, the sign targets of geographic environment image synthesis are spatially located; then, the obtained virtual geographic environment images are registered; and finally, a wireless network is built to realize the synchronous collection of image synthesis information. There is still room to improve the synchronization speed of the network; performance could be further improved from the perspective of hardware optimization to reduce the acquisition delay even more.
Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Conflicts of Interest
The authors declare no competing interests.