Abstract
Image processing is a widely used practical technology in the computer field and has important research value for signal and information processing. This article studies the design and algorithms of image processing under cloud computing technology and proposes cloud computing technology and image processing algorithms for image data processing. Depending on the hardware structure and performance of the system, a verified algorithm can be selected to perform the final operation. Starting from the image processing functions, this article partitions the software and hardware so that each performs its role rationally. On this basis, a real-time image processing system based on SOPC technology is built, and the corresponding functional units are designed for real-time image storage, processing, and display. Studies show that the design of an image processing system based on cloud computing increases the speed of image data processing by 14%. Compared with other algorithms, this image processing algorithm has clear advantages in image compression and image restoration.
1. Introduction
Improvements in computer hardware, with larger storage devices and faster processors, enable computers to process digital images more efficiently. However, conventional images based on the image matrix representation are not very efficient, because they carry a lot of unnecessary information and require a large amount of storage space. Considering the different types of unnecessary information in matrix-represented images, researchers have proposed many imaging methods that improve performance. Although the reconstruction efficiency of these imaging methods is significantly better than that of pixel-matrix imaging, people are still exploring ways to represent images more compactly so that they remain quick and easy to transmit. In recent years, real-time target detection has been widely used, and feature extraction is an indispensable part of target detection algorithms. At present, moment features are widely used in many aspects of image classification and recognition. When an image is translated, rotated, or scaled proportionally, the computer system should extract features that remain unchanged when recognizing these images.

To improve the inspection efficiency of printed circuit board (PCB) solder joints, Wang et al. detect PCB solder joints through image processing methods. Through a series of image processing algorithms, they completed the threshold segmentation and feature extraction of the solder joint images; the sphericity was then determined from the area and circumference, together with the shape parameters and eccentricity of the computed region, paving the way for the identification of defect patterns. However, there are some errors in the research process, leading to inaccurate results [1]. Hussain et al. present a CMOS image sensor specially designed for applications in which each pixel is read out individually. The proposed sensor is simulated with a single-pixel input current varying from 2 pA to 100 pA and a corresponding measured value of 2 mV to 855 mV per pixel. They also proposed a new method of pattern detection and recognition under blood coverage, which can accurately segment the patterns in the blood; however, there are errors in the image segmentation [2]. Venkatram and Geetha point out that the main purpose of big data analysis is to quickly survey the cutting-edge and latest work being done in this field across different industries [3]. Since many academics, researchers, and practitioners are very interested, the field is updated rapidly and focuses on how to use existing technologies, frameworks, methods, and models to exploit the value of big data analytics; however, the analysis process is very complicated.
Based on the current technical level and development trend of video image processing systems, this paper designs and implements the system with large-scale programmable logic devices. In particular, cloud computing is applied to the video image processing system; the causes of infrared image nonuniformity are analyzed; the theory and methods of infrared nonuniformity correction are studied; and a feasible image enhancement algorithm based on the characteristics of infrared images is proposed and verified through experiments.
2. Graphics Processing Method Based on Cloud Computing
2.1. Cloud Computing Technology
2.1.1. Data Storage and Management Technology
Cloud computing uses distributed storage technology to store data on multiple distributed storage devices while maintaining efficient and reliable storage, which lowers the performance requirements on client machines and reduces the number of locally installed applications [4]. In some large-scale projects, such as FIFA and League of Legends, large amounts of data are stored on the cloud platform; players only need to download the client and log in to the cloud platform to use it, which significantly reduces the requirements on computer equipment [5]. The basic framework of data storage and management technology is shown in Figure 1.

2.1.2. Virtualization Technology
Virtualization abstracts the service equipment so that one physical machine can be presented as multiple virtual machines; this is one of the core technologies of cloud computing [6, 7]. Its main goal is to decouple the virtual machine, the operating system, and the upper-level application programs from the underlying physical equipment [8]. The original operating system and applications run inside virtual machines on a virtualization layer, and many virtual machines can run on one physical machine. Virtual machines can run different operating systems and enterprise applications, such as management systems and business systems [9].
2.2. Graphics Processing Algorithms for Cloud Computing
An image processing system involves both computation and detection objectives. Depending on the hardware structure and system performance, a suitable, verified algorithm can be selected to perform the final operation [10].
In a cloud processing system, the working environment is more complicated; the original image must therefore go through preprocessing steps such as noise removal, interference suppression, sharpening, and image enhancement [11]. According to the current research status at home and abroad, commonly used image smoothing methods include the neighborhood averaging method, the median filtering method, low-pass filtering, selective masking, multi-image averaging, and other methods [12].
The neighborhood averaging method is a spatial processing method that replaces the gray value of a pixel with the average gray value of the pixels in its neighborhood. The smoothed image is given by

g(x, y) = (1/M) ∑ f(m, n), with the sum taken over (m, n) ∈ S,

where S is the set of coordinates in the neighborhood of the point (x, y) and M is the number of pixels in S.
To keep the image from becoming too blurred, a threshold can be introduced to reduce the blurring effect caused by neighborhood averaging [13].
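A minimal sketch of thresholded neighborhood averaging, assuming a grayscale image stored as a NumPy array; the window size and threshold value are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def threshold_neighborhood_average(image, size=3, threshold=20.0):
    """Smooth by neighborhood averaging, but replace a pixel only when the
    local average differs from it by more than `threshold`, which limits
    the edge blurring of plain averaging."""
    image = image.astype(float)
    local_mean = uniform_filter(image, size=size)    # neighborhood average
    noisy = np.abs(image - local_mean) > threshold   # pixels treated as noise
    return np.where(noisy, local_mean, image)
```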
2.2.1. Spatial Low-Pass Filtering Algorithm
The slowly varying part of a signal corresponds to its low-frequency components, and the rapidly varying part corresponds to its high-frequency components [14]. Noise in an image mainly occupies the higher spatial frequencies. Therefore, low-pass filtering can be used to remove noise, and frequency-domain filtering can be conveniently realized by convolution in the spatial domain. As long as the impulse response matrix of the spatial filter is designed reasonably, the noise can be filtered out [15]. The basic flow of removing noise with the spatial low-pass filtering algorithm is shown in Figure 2.

If a two-dimensional function f(x, y) is input to the filtering system, the output signal is recorded as g(x, y), and the impulse response function of the filtering system is h(x, y), then

g(x, y) = f(x, y) * h(x, y) = ∬ f(u, v) h(x − u, y − v) du dv.
When the input is a discrete image f(m, n), the output is a discrete image g(m, n), and the impulse response function is an L × L matrix h(i, j), whose size should be chosen so that the convolution result is not aliased. The discrete form of the filtering operation is

g(m, n) = ∑_i ∑_j f(i, j) h(m − i, n − j).
Because noise is spatially uncorrelated in an image, its spatial frequency spectrum is higher than that of the general image content, and low-pass filtering can therefore be used to remove the noise. Several different forms of the low-pass spatial impulse response function for a 3 × 3 window can be chosen, represented by matrices H; common choices are normalized equal-weight and center-weighted smoothing masks.
It can be seen that, with the second of these filters, the result is similar to that of the simple neighborhood averaging method over a 3 × 3 window.
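A minimal sketch of such spatial low-pass filtering by convolution with a normalized impulse-response matrix, assuming a grayscale NumPy image; the two 3 × 3 masks shown are common textbook choices, not necessarily the exact matrices used in the paper:

```python
import numpy as np
from scipy.ndimage import convolve

# Two common 3x3 low-pass impulse-response matrices (illustrative choices).
H_BOX = np.full((3, 3), 1.0 / 9.0)                        # equal-weight mask
H_WEIGHTED = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=float) / 16.0    # center-weighted mask

def spatial_lowpass(image, h=H_BOX):
    """Remove high-frequency noise by convolving the image with a normalized
    low-pass impulse-response matrix h (the entries of h sum to 1)."""
    return convolve(image.astype(float), h, mode='nearest')
```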
2.2.2. Median Filtering Algorithm
When an image contaminated by noise is processed with linear filtering, the noise is removed at the cost of blurring the image edges. Under certain conditions, the median filtering method can achieve better results in removing noise while protecting image edges. It is a nonlinear image enhancement technique that has an excellent suppression effect on impulse interference and speckle (salt-and-pepper) noise and can better preserve the edges of the image [16].
The operation process of median filtering is as follows:
Given a set of n numbers x_1, x_2, …, x_n, arrange them in order of magnitude:

x_(1) ≤ x_(2) ≤ … ≤ x_(n).

The median of the set is denoted y = Med{x_1, x_2, …, x_n}; for odd n it is the middle value x_((n+1)/2). For example, for a sequence whose middle value after sorting is 100, the median of the sequence is 100.
Suppose the input sequence is {x_i, i ∈ I}, where I is a subset of the natural numbers, and the window length is 2N + 1. The output of the median filter is then

y_i = Med{x_(i−N), …, x_i, …, x_(i+N)}.
The filter can also be applied with a two-dimensional window. Let {x_ij} represent the gray values of the points of a digital image. The two-dimensional median filter with filter window A can then be expressed as

y_ij = Med_A{x_ij} = Med{x_(i+r)(j+s) : (r, s) ∈ A}.
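A minimal sketch of two-dimensional median filtering on a grayscale NumPy image; the 3 × 3 window and the synthetic salt-and-pepper test below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_median(image, window=3):
    """Two-dimensional median filtering: each output pixel is the median of
    the gray values in a window x window neighborhood, which suppresses
    impulse (salt-and-pepper) noise while preserving edges."""
    return median_filter(image, size=window)

# Usage (illustrative): corrupt a flat test image with salt-and-pepper noise
# and filter it with a 3 x 3 window.
rng = np.random.default_rng(0)
img = np.full((64, 64), 128, dtype=np.uint8)
noisy = img.copy()
spots = rng.random(img.shape)
noisy[spots < 0.05] = 0        # pepper
noisy[spots > 0.95] = 255      # salt
clean = denoise_median(noisy, window=3)
```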
2.2.3. Edge Detection Algorithm
An edge marks a place where the local intensity of the image changes greatly, for example at the boundary between a target and the background area. Edge detection therefore serves as a basis for further image analysis such as image segmentation and texture description. The first step is to detect edges from the discontinuities of the image intensity, which can be divided as follows: places where the gray values of adjacent pixels differ sharply, and places where the image intensity changes and then returns to its starting value after a small excursion. A good detection method should have a strong edge detection effect while suppressing noise, and image processing systems usually adopt such general edge detection methods.
2.3. Digital Image Processing Algorithm
The result of sampling and quantization is a matrix of discrete values. There are generally two ways to represent a digital image:
In the matrix representation, the elements of the image array are individual discrete values called pixels. In digital image processing, the image dimension and the number of gray levels are usually integral powers of 2, namely N = 2^n and G = 2^k. For ordinary laboratory TV images, N is 256 or 512 and the number of gray levels is 64-256, which meets the needs of image processing. For images with special requirements, such as satellite images, a larger N is taken and the gray level is 8-12 bits.
Let b be the number of bits required to store the digital image; then

b = M × N × k,

where M × N is the size of the image and k is determined by the number of gray levels through G = 2^k. When M = N, the above formula becomes

b = N^2 k.
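As a worked example of the storage formula (the image size and bit depth here are illustrative, not figures quoted from the paper), a 512 × 512 image with 256 = 2^8 gray levels needs:

```latex
b = N^{2}k = 512 \times 512 \times 8 = 2\,097\,152\ \text{bits} = 262\,144\ \text{bytes}
```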
The chain code represents a binary image composed of straight lines and curves and is very suitable for describing the boundaries of objects in an image. Compared with the matrix representation, the chain code can save a large number of bits.
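A minimal sketch of the chain-code idea, assuming an ordered list of 8-connected boundary points in (row, column) coordinates; the direction numbering is one common Freeman convention and is not necessarily the one used in the paper:

```python
# 8-directional Freeman chain code: each move to a neighboring boundary pixel
# is encoded by one of 8 direction symbols, so a boundary of n moves needs
# only 3n bits instead of the full image matrix.
DIRECTIONS = {                      # (d_row, d_col) -> code, illustrative ordering
    (0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
    (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7,
}

def freeman_chain_code(boundary):
    """Encode an ordered list of boundary pixel coordinates (row, col),
    where consecutive points are 8-connected, as a Freeman chain code."""
    codes = []
    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:]):
        codes.append(DIRECTIONS[(r1 - r0, c1 - c0)])
    return codes

# Usage: a small square boundary traced clockwise.
square = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0), (0, 0)]
print(freeman_chain_code(square))   # [0, 0, 6, 6, 4, 4, 2, 2]
```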
2.4. Edge Detection Algorithm
The edges of an image are usually related to discontinuities of the image intensity or of its first derivative. Discontinuities of the image intensity can be divided into the following: (1) step discontinuities, where the gray values of the pixels on the two sides of the discontinuity differ significantly; (2) line discontinuities, where the image intensity suddenly changes from one value to another and returns to the original value after a short distance.
Edge detection is the most basic operation for detecting significant local changes in an image. In one dimension, a step edge corresponds to a local peak of the first derivative of the image intensity. The gradient is a measure of the change of a function, and an image can be regarded as a set of sampling points of a continuous image intensity function. Therefore, by analogy with the one-dimensional case, discrete approximations of the gradient can be used to detect significant changes in the gray values of the image.
Two important properties are related to the gradient: (1) the gradient vector points in the direction of the maximum rate of change of the function f(x, y); (2) the magnitude of the gradient is given by the following formula:

|∇f| = [(∂f/∂x)² + (∂f/∂y)²]^(1/2) = (G_x² + G_y²)^(1/2),

where G_x and G_y are the gradient components in the x and y directions.
In practical applications, the sum of absolute values is usually used to approximate the gradient magnitude:

|∇f| ≈ |G_x| + |G_y|.
According to vector analysis, the direction angle of the gradient vector is

θ(x, y) = arctan(G_y / G_x).
The angle θ is measured relative to the x-axis.
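A minimal sketch of gradient-based edge detection on a grayscale NumPy image; the Sobel operator and the threshold value are illustrative choices rather than details specified in the paper:

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_edges(image, threshold=50.0):
    """Approximate the gradient with Sobel differences G_x and G_y, use
    |G_x| + |G_y| as the gradient magnitude (the absolute-value
    approximation above), and mark pixels above `threshold` as edges."""
    image = image.astype(float)
    gx = sobel(image, axis=1)          # horizontal derivative G_x
    gy = sobel(image, axis=0)          # vertical derivative G_y
    magnitude = np.abs(gx) + np.abs(gy)
    direction = np.arctan2(gy, gx)     # angle relative to the x-axis
    return magnitude > threshold, magnitude, direction
```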
3. Image Processing System Design Experiment
3.1. Experimental Parameter Design
In this experiment, MATLAB is used for modeling, and the sample data of this article is then imported. The compressed sensing sparsity is 1000; that is, after the original image is wavelet transformed, the wavelet coefficients are sorted, the 1000 largest coefficients are retained, and the remaining coefficients are set to zero. The sparse wavelet coefficients are then observed with the observation matrix, and the observation results are transmitted to the SOPC for reconstruction. The purpose of this experiment is to determine whether the OMP reconstruction in the SOPC system functions normally. Therefore, the zeroed wavelet coefficients, rather than the original image, are observed [17]; in this experiment, the zeroed wavelet coefficients play the role of the original image, and the reconstructed wavelet coefficients play the role of the reconstructed image.
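For reference, the following is a minimal software sketch of orthogonal matching pursuit (OMP) applied to a synthetic sparse vector; the signal length, number of measurements, and Gaussian observation matrix are illustrative assumptions, and the hardware details of the paper's SOPC implementation (Cholesky decomposition, LFSR-generated matrix, fixed-point arithmetic) are not reproduced here:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: greedily select the column of Phi most
    correlated with the current residual, re-fit all selected columns by
    least squares, and repeat until `sparsity` atoms have been used."""
    m, n = Phi.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(n)
    coeffs = np.zeros(0)
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching atom
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat[support] = coeffs
    return x_hat

# Usage: recover a synthetic sparse coefficient vector from random observations.
rng = np.random.default_rng(0)
n, m, k = 256, 128, 10                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian observation matrix (an assumption)
y = Phi @ x                                  # compressed observations
x_rec = omp(Phi, y, k)                       # typically matches x closely for this setting
```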
3.2. Image Processing Programmable System Design
(1) Design input: there are many ways to enter a design. At present, the two most commonly used are schematic diagrams and hardware description languages. For simple designs, schematics or the ABEL language can be used; for complex designs, schematic diagrams, hardware description languages, or a mixture of the two can be used, and hierarchical design methods can be used to describe units and hierarchical structures. When the design input is checked, the software creates a list of the syntax errors found in the design. (2) Design implementation: design implementation refers to the process of going from the design input files to the bitstream file. In this process, the development software automatically compiles and optimizes the design files, performs mapping, placement, and routing for the selected device, and creates the corresponding bitstream data file. (3) Device configuration: FPGA device configuration modes fall into two categories, active configuration and passive configuration. In active configuration mode, the configuration operation is guided by the FPGA device, which controls the external memory and the configuration process; in passive configuration mode, the configuration process is controlled externally. (4) Design verification: design verification includes functional simulation, timing simulation, and device testing. Functional simulation verifies the design logic; during design entry, part of the design or the entire design can be simulated. Timing simulation is a delay simulation performed after placement and routing, in which the timing relationships are analyzed. Device testing uses test tools to check the final function and performance indexes of the device after programming, as shown in Figure 3

4. Image Processing Algorithms Based on Cloud Computing
This section analyzes the performance of the compression algorithm, the complexity of the processing, and the image reconstruction of compressed sensing.
4.1. Image Coding Compression Performance Test
To test the performance indexes of the algorithm, this paper selects a geometrically regular real-scene image collected by a web camera as the original image for compression coding. Figure 4 shows the image encoded with different quantization factors in the compression encoder; the difference in image quality after compression coding can be compared intuitively by human vision [18, 19]. The experiment compares the processing results in terms of file size reduction ratio, peak signal-to-noise ratio, time complexity, visual effect, and other aspects. The running results of the compression algorithm with different quantization coefficients are shown in Table 1.
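A minimal sketch of the two numerical measures used in Table 1, peak signal-to-noise ratio and compression ratio, assuming 8-bit grayscale images stored as NumPy arrays; the function names are illustrative:

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio (dB) between the original image and the
    image after compression coding and decoding."""
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """File-size reduction expressed as original size / compressed size."""
    return original_bytes / compressed_bytes
```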

From the data in Table 1, it can be seen that different choices of system parameter settings have a significant impact on the image compression effect. The larger the quantization coefficient, the smaller the amount of compressed image data, the larger the image compression ratio, and the smaller the peak signal-to-noise ratio of the image. At the same time, the compression time of the algorithm also decreases, but an obvious blocking effect appears in the visual appearance of the image, to the point where the real objects captured in the image can no longer be distinguished. To compare the data more intuitively, the table is drawn as a chart, as shown in Figure 4.
It can be seen from the experimental data that when the system quantization coefficient is set to 15, the visual effect of the compressed image is good: the image has no obvious distortion and differs little from the original image. Comparing the compression ratio, peak signal-to-noise ratio, compression time, and other parameters at this setting shows that a good compression ratio can be ensured while obtaining better image quality.
4.2. Image Processing Algorithm Analysis
The blurred image is restored based on the hyper-Laplacian prior model. From the above analysis, the regularization term exponent α and the regularization weight λ of the algorithm have a great impact on the restoration quality of the image and on the execution time of the algorithm. In this paper, based on the spatially varying point spread function (SVPSF), the image formed by a single-lens imaging system is restored block by block with the hyper-Laplacian prior algorithm. Taking the values of α and λ selected in Dilip Krishnan's experiments, 0.5 and 256, respectively, block restoration is carried out on the SVPSF image.
4.2.1. The Influence of the Parameter α on the Image Restoration Algorithm
The image is restored by accurately establishing the model through the hyper-Laplacian prior; usually, α is in the range 0.5-0.8, and this exponent has a great influence on the restoration effect. Different intervals of α correspond to different models: when α = 1, the model is the Laplacian restoration model, which does not fit the heavy-tailed gradient distribution of the image very well; when α = 2, it is a Gaussian model, whose fit is even worse; when α is between 0 and 1, it is a hyper-Laplacian model, and when α is between 0.5 and 0.8, the restoration effect is better. Therefore, it is necessary to analyze the value of the parameter α by restoring the image with different values and comparing the resulting SSIM. The experimental data are shown in Table 2.
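Table 2 scores each candidate α by SSIM; a minimal sketch of that evaluation step, assuming the block-wise restorations for the candidate exponents have already been computed and stored in a dictionary (the function names are illustrative):

```python
from skimage.metrics import structural_similarity

def ssim_by_alpha(reference, restored_by_alpha):
    """Given the sharp reference image and a dict mapping each candidate
    exponent alpha to the image restored with that alpha, return the SSIM
    of every restoration (the quantity reported in Table 2)."""
    data_range = float(reference.max() - reference.min())
    return {alpha: structural_similarity(reference, restored, data_range=data_range)
            for alpha, restored in restored_by_alpha.items()}

def best_alpha(reference, restored_by_alpha):
    """Pick the exponent whose restoration is most similar to the reference."""
    scores = ssim_by_alpha(reference, restored_by_alpha)
    return max(scores, key=scores.get), scores
```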
According to the analysis of the experimental data in the table, the restored image and its SSIM change with the value of the parameter α. SSIM increases monotonically as α ranges over 0.4-0.55, reaches its maximum at the end of this range (where the similarity is 1.5% higher than before optimization), and decreases monotonically over the range 0.6-0.9. Figure 5 shows the influence of the parameter α on the SSIM of the restored image.

4.2.2. The Influence of the Parameter λ on the Image Restoration Algorithm
The model is solved by the half-quadratic penalty method: given the blurred image, an auxiliary variable is introduced, and λ is the weight of the regularization term. Its value is increased monotonically, and as λ changes, the number of iterations of the image restoration also changes. Since the number of iterations is closely related to the running time of the restoration algorithm and to the restoration effect, this article analyzes the parameter λ.
From the data in Table 3 and Figure 6, it can be seen that, with the parameter α held fixed, the SSIM of the restored image changes as the parameter λ changes.

From the data in the table, it can be seen that as the parameter λ gradually increases, the SSIM of the restored image first increases and then decreases. When λ is between 200 and 600, the SSIM of the restored image gradually increases and reaches a maximum value of 0.87; when λ is between 1600 and 4600, the SSIM of the restored image gradually decreases and falls below the value before optimization.
4.3. Image Reconstruction Analysis of Compressed Sensing
This article implements compressed sensing on an SOPC, using OMP reconstruction based on Cholesky matrix decomposition. The experimental results of the SOPC are analyzed below and shown in Table 4.
From the comparison of the data in the table, it can be seen that the PSNR of the SOPC reconstructed images is not high. Analysis shows three reasons that affect the PSNR: (1) all data in this SOPC system are represented by fixed-point numbers, so the accuracy of the algorithm is affected to a certain extent and the PSNR of the reconstructed image is lowered; (2) in this system, an LFSR is used to generate the random observation matrix (a minimal sketch of such a generator is given below), and since the numbers produced by an LFSR are only pseudorandom, the incoherence of the observation matrix, and hence the reconstructed PSNR, is affected to a certain extent; (3) before the wavelet coefficients are observed, the small wavelet coefficients are reset to zero, so some fine details are lost and the PSNR is affected.
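To illustrate reason (2), here is a minimal sketch of generating a pseudorandom ±1 observation matrix from a Fibonacci LFSR; the register width, tap positions, seed, and ±1 scaling are illustrative assumptions and not the paper's hardware design:

```python
import numpy as np

def lfsr_bits(seed, taps, n_bits):
    """Generate n_bits pseudorandom bits from a Fibonacci LFSR with the given
    nonzero seed and tap positions (bit indices, LSB = 0). The sequence
    repeats with period at most 2**width - 1."""
    width = max(taps) + 1
    state = seed
    bits = []
    for _ in range(n_bits):
        bits.append(state & 1)
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = (state >> 1) | (feedback << (width - 1))
    return np.array(bits, dtype=np.int8)

def lfsr_observation_matrix(m, n, seed=0b1011010011010101, taps=(15, 13, 12, 10)):
    """Build an m x n +/-1 observation matrix from LFSR bits, scaled by
    1/sqrt(m); these taps give a maximal-length 16-bit register, so very
    large matrices would need a wider register to avoid repetition."""
    bits = lfsr_bits(seed, taps, m * n)
    return (2.0 * bits - 1.0).reshape(m, n) / np.sqrt(m)
```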
To observe the variations in the data more intuitively, the table is drawn as a graph, as shown in Figure 7.

According to the data in the figure, among the 5 images, the house image has the highest PSNR. The analysis shows that the house image is relatively regular, so the coefficients obtained after the wavelet transform are relatively sparse and the loss caused by zeroing the small wavelet coefficients is the smallest for the house image; therefore, its PSNR after reconstruction is the highest.
In this experiment, the size of the observation matrix is kept unchanged, while the sparsity is increased from 500 to 1400 in steps of 10. Five images were reconstructed, and the relationship between the sparsity and the resulting PSNR is shown in Table 5 and Figure 8.

4.4. Discussion
This paper builds a wavelet transform model under Quartus II. Compared with the model in the reference, the simulation parameter α in this paper gives a better effect in the range 0.5-0.8, whereas in the literature the value of α lies between 0.5 and 2 because of the difference in the model. This is because the model in this paper optimizes parameters such as the image compression ratio, peak signal-to-noise ratio, and compression time to reduce interference, so the value range is concentrated, which facilitates control of the model and does not cause model distortion. In addition, the model in this paper can process images with a signal-to-noise ratio between 500 and 1400, while other methods cover smaller signal-to-noise ratio intervals. Therefore, the method in this paper can handle a wide range of signal-to-noise ratios, down to small values, and achieves a high degree of recovery.
5. Conclusions
A hardware implementation scheme for the image processing algorithm is proposed. By comparing the PC implementation of the image processing system with a dedicated digital signal processor (DSP) implementation, the structure of a cloud computing-based system on a programmable chip is constructed; image acquisition, storage, and real-time display are implemented for each part of the image processing; and the overall structural design is improved.
The cloud computing application introduced in this article is an important cloud imaging system project. Different choices of system parameter settings have a significant impact on the image compression effect: the larger the quantization coefficient, the smaller the amount of compressed image data, the larger the image compression ratio, and the smaller the peak signal-to-noise ratio of the image. At the same time, the compression time of the algorithm decreases, but an obvious blocking effect appears in the visual appearance of the image, to the point where the real objects captured in the image can no longer be distinguished.
Because image data itself contains a large amount of information, the realization of image processing algorithms places high requirements on hardware devices. With the development of embedded system technology, the functions of embedded microprocessors are becoming increasingly powerful, and the combination of embedded systems and image processing will also become a complex systems engineering project.
Data Availability
The data underlying the results presented in the study are available within the manuscript.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by grants from Hubei Province Philosophy and Social Science Research Key Project “Research on artificial intelligence universal education of primary and secondary schools in the new era” (no. 19D101).