Abstract

As a sensor with a wide field of view, the panoramic vision sensor is efficient and convenient for perceiving feature information about the surrounding environment and plays an important role in the sensory experience of art design images. Transforming visual and other sensory experiences in art design means integrating sound, image, texture, taste, and smell through reasonable rules to create better cross-border art design works. To improve the sensory experience that art design works bring to the audience, combining vision with other sensory experiences can maximize the advantages of multiple modes of information dissemination; this paper therefore combines the omnidirectional vision sensor with the sensory experience of art design images. The method section introduces the omnidirectional vision sensor, art design images, and the modes and content of sensory experience, as well as hyperbolic concave mirror theory and the Micusik perspective projection imaging model. The experimental section describes the experimental environment, subjects, and procedure. The analysis section examines six aspects: the image database dependency test, performance comparison of different distortion types, false detection rate and missed detection rate, algorithm time-consuming comparison, sensory experience analysis, and feature point screening. Regarding the feelings evoked by the art design images, for the first image, 87.21% of the audience reported feeling happy, indicating that the main idea of this image can bring joy to people. For the second image, most of the audience reported feeling sad. For the third image, more than half of the audience reported feeling melancholy. For the fourth image, 69.34% of the audience reported feeling calm. This shows that differences in the content of art design images bring different sensory experiences to people.

1. Introduction

At present, research on panoramic video acquisition equipment has become a hot topic. Any point in the panoramic image corresponds to a certain point in the surveillance space, so the spatial position calibration algorithm is simple, and there is no need to aim at the target during monitoring; the algorithms for acquiring, detecting, and tracking moving objects within the surveillance range therefore obtain richer visual information, making this a fast and reliable means of collecting panoramic visual information. The image collected by an omnidirectional camera relies on the principle of specular reflection, which makes the omnidirectional image very different from the image seen by the human eye and introduces a certain degree of deformation. Using omnidirectional cameras for video capture has the advantages of low cost, easy installation and debugging, and monitoring without dead corners. It provides strong technical support for dynamic image understanding and further improves the economy, safety, and intelligence of the entire security system.

Images are obtained by various observation systems that observe the objective world in different forms and by different methods; they are entities that can act directly or indirectly on the human eye to produce visual perception. About 75% of the information we obtain from the world comes from the visual system, that is, from the images we see. Human image recognition ability is very strong: an image stimulates the sensory organs, and people recognize it as a certain image, know its purpose, and so on. Human image recognition requires both the image information that enters the sensory organs at the moment and the image information stored in the brain. By searching the image information already in memory and comparing it with the incoming image information, we can identify the image in question. If there is no information about an image in the brain, the brain opens up a place to store it; after repeated recognition, the image forms a memory, and when it is seen again it can be compared and identified.

In the fast-paced big data era of modern society, people do not have enough time to read cumbersome social information or to communicate emotionally through large amounts of text, and younger groups mainly communicate through images and visual means. Because images circulate and render more vividly than written language and can shape real life, they save time to a certain extent and enhance the viewer's visual experience of the world. With today's abundance of information, the surrounding social life is also full of varied and striking visual imagery, most notably advertising posters and film packaging. The application of image vision is very wide; it can be said to be ubiquitous. The reason social culture is a visual culture is that, from many angles, it is more influential than written language.

Based on the role of omnidirectional vision sensors in the sensory experience of art design images, many scholars at home and abroad have conducted related research. Zhou proposed an omnidirectional stereo vision sensor based on a single camera and a reflection system. As the key components, one camera and two pyramid mirrors are used for imaging, and four pairs of virtual cameras can realize omnidirectional measurement in different directions in the horizontal field, with good synchronization and compactness. In addition, the invariance of perspective projection is preserved during imaging, and the imaging distortion caused by curved mirror reflection is avoided. The research established the structural model of the sensor, designed a sensor prototype, and discussed the influence of the structural parameters on the field of view and measurement accuracy. Actual experiments and analyses were carried out to evaluate the performance of the proposed sensor in measurement applications; the results proved the feasibility of the sensor and showed considerable accuracy in 3D coordinate reconstruction. The author studied the omnidirectional stereo vision sensor but did not discuss the application prospects of this sensor [1]. Svetlichnaya studied variants of artistic activity aimed at adapting a person's existing external image to various social and cultural situations. The communication practice of using art design to shape a person's external image is positioned as a "flexible" image model recommended for communication between different communities; a person's external image is considered a means of providing instant information exchange between communicators of different social groups, and the visual components of the external image that carry social and cultural information, such as clothing and hairstyle, are pointed out. The artistic design method in modern culture is defined as a creative one that helps form a unified visual image of the social group participating in it. In the context of designing a person's external image, the method of artistic design affects its purposeful formation and adaptation, and this image is translated to a social group. Based on the research results and the communication goals of representatives of different social groups, suggestions are made on the choice of artistic design methods in the process of designing the external image of the characters. The author created a relevant model for art design in the research but did not combine relevant data and diagrams to illustrate it [2]. Abusaada believes that in the literature of phenomenological theory, emotion is conceptualized as an important factor related to changing people's emotions in urban environment research. The research aims to demonstrate a toolkit for creating an emotional city atmosphere based on communication between people and places. To better understand the relationship between sensory body theory and the reconstruction of the emotional urban atmosphere, the research surveyed the literature on reasonable approaches and proposed a toolkit related to multisensory experiences. It groundbreakingly discussed the concepts of sensory bodies, important driving forces, and daily multisensory experiences as a contribution to urban research applications. In addition, the author clarified the possibility of creating an emotional city atmosphere by using emotional concepts as a predesign-stage process. Work on multisensory experience in the urban environment needs to address the important driving force of the sensory body and becomes a set of perceptual-dimension urban research tools. The research carried out sentiment analysis on works of art, but the author did not combine modern techniques to illustrate it [3]. The subject matter described in another work includes methods, systems, and computer-readable media for reducing Wi-Fi scanning by using cellular-network-to-Wi-Fi-access-point mapping information: a mobile communication device receives Wi-Fi and cellular signals and creates or obtains a database of mappings between Wi-Fi access points and cellular network information. The mobile communication device detects the signal from a base station in the cellular network and determines at least one recommended access point based on data derived from the database. In response to determining that there is at least one Wi-Fi signal, the mobile communication device attempts to connect to at least one recommended access point. However, how the mobile communication device determines whether any Wi-Fi signal is present is still not demonstrated [4, 5].

This paper studies the sensory experience of art design images based on omnidirectional visual sensing. The method section introduces the omnidirectional vision sensor, art design images, and the modes and content of sensory experience, as well as hyperbolic concave mirror theory and the Micusik perspective projection imaging model. The experimental section describes the experimental environment, subjects, and procedure. The analysis section examines six aspects: the image database dependency test, performance comparison of different distortion types, false detection rate and missed detection rate, algorithm time-consuming comparison, sensory experience analysis, and feature point screening. The innovation of this article lies in applying omnidirectional vision sensors to the sensory experience of art design images, explained with related schematic diagrams, and in studying the relationship between the human senses and art design to determine the importance of sensory elements in art design. Applying the senses in art design makes the design more humane, more participatory, and more interactive. However, this article uses only an omnidirectional vision sensor; when faced with a more complicated real environment, precise positioning may not be possible, or positioning may even fail. Trying to handle all situations with omnidirectional vision sensors alone is still unrealistic.

2. Image Sensory Experience Method Based on Omnidirectional Visual Sensor in Art Design

2.1. Omnidirectional Vision Sensor

The omnidirectional vision sensor is the direct source of information for the entire machine vision system. It is mainly composed of one or two image sensors, sometimes supplemented with a light projector and other auxiliary equipment. It is an instrument that uses optical elements and imaging devices to obtain image information about the external environment, and image resolution is usually used to describe the performance of a vision sensor. The accuracy of a vision sensor is related not only to the resolution but also to the detection distance of the measured object: the farther the measured object, the worse its absolute position accuracy. Below, we explain this in terms of the omnidirectional vision lens, the imaging of the omnidirectional vision system, and the omnidirectional image.

The emergence of the omnidirectional vision lens remedies the defects of ordinary lenses. The omnidirectional image collected by the omnidirectional vision sensor contains the environmental information of the entire horizontal field, and a single picture can contain the image information that would otherwise require several ordinary cameras to capture at the same time. For applications that require a large field of view, omnidirectional vision undoubtedly provides a good solution.

The imaging of the panoramic vision system consists of two projection processes. The first is the projection from the surrounding environment onto the surface of the curved mirror: the light around the panoramic vision sensor hits the mirror surface, and this projection may distort the images of objects in the surrounding environment. The second is the projection from the curved mirror onto the image plane, whose mechanism is equivalent to ordinary camera projection. Through these two projections, the objects around the panoramic vision sensor are rendered as distorted panoramic images on the imaging plane [6, 7].

It can be seen from the panoramic image that (1) the effective field of view of the panoramic vision sensor is relatively wide: with the UP-OmniVision panoramic vision sensor as the center, the imaging within a radius of 4 m is quite clear [8]; (2) objects in the image inevitably have a certain degree of distortion, such as curved floor tiles and deformed wooden boards; and (3) artificial road signs in the same area of the panoramic image are at different distances from the panoramic vision sensor and therefore occupy different pixel areas in the panoramic image [9]. For example, the red and orange artificial road signs closer to the panoramic vision sensor appear relatively large, while the green and purple artificial road signs farther from the sensor occupy a relatively small pixel area. Figure 1 shows two different catadioptric cameras. When different kinds of catadioptric cameras capture the scene, the omnidirectional images obtained differ. When some catadioptric cameras project the scene onto the curved mirror, the images of objects in the environment may be distorted. With a suitable catadioptric camera, the projection onto the corresponding image plane can be obtained so that the image is complete and undistorted, as shown in Figure 2.

The advantages of the omnidirectional vision sensor applied to image detection are as follows: (1) an omnidirectional vision sensor can monitor a large range of the scene; (2) the cumbersome work of fusing images from multiple cameras is avoided during tracking, and the detection algorithm is simpler; and (3) when acquiring scene images, the placement of omnidirectional vision sensors in the scene is more flexible, and real-time images of the scene can be obtained without aiming at the target. In systems that require panoramic real-time processing, using omnidirectional vision sensors to obtain images is a fast and reliable way to collect visual information [10].

2.1.1. Hyperbolic Concave Mirror Theory

Establish the relationship between the horizontal coordinate $t$ of the object point and the horizontal coordinate $s$ of the image point. Taking the mirror axis as the $z$-axis and the radial direction as the $r$-axis, the expression of this hyperbola is as follows:

$$\frac{z^{2}}{a^{2}} - \frac{r^{2}}{b^{2}} = 1.$$

Among them, $c = \sqrt{a^{2} + b^{2}}$.

The lower half of the hyperbola can be expressed as

$$z = -a\sqrt{1 + \frac{r^{2}}{b^{2}}}.$$

Here, suppose the horizontal coordinate $t$ of the object point is a function of the horizontal coordinate $s$ of the image point, and the relationship between $t$ and $s$ is deduced along the optical path. The reflected light passes through the points $(s, 0)$ and $(0, z_{0})$; so, the straight line on which the light lies can be expressed by the two-point formula:

$$\frac{z - 0}{z_{0} - 0} = \frac{r - s}{0 - s}.$$

After rearranging,

$$z = -\frac{z_{0}}{s}\, r + z_{0}.$$

The intersection point of the incident ray and the hyperbolic concave mirror is $(r_{m}, z_{m})$. Solving the equation of this straight line simultaneously with the equation of the hyperbola yields a quadratic equation in $r$ with two solutions; since the reflected light intersects the lower half of the hyperbola, the smaller solution $r_{m}$ is taken and substituted back to obtain $z_{m}$. Taking the derivative of the lower half of the hyperbola, the tangent slope of the hyperbola at the point $(r_{m}, z_{m})$ is

$$k_{t} = \frac{a^{2} r_{m}}{b^{2} z_{m}}.$$

Since the normal line is perpendicular to the tangent line, and the slope of the normal and the slope of the tangent are negative reciprocals of each other, the normal slope of the hyperbola at the point $(r_{m}, z_{m})$ is

$$k_{n} = -\frac{1}{k_{t}} = -\frac{b^{2} z_{m}}{a^{2} r_{m}}.$$

From the point-slope form, the equation of the normal to the hyperbola at the point $(r_{m}, z_{m})$ is

$$z - z_{m} = k_{n}\left(r - r_{m}\right).$$

From the law of light reflection, we know that the angle between the incident ray and the normal equals the angle between the reflected ray and the normal; so, there is

$$\left|\frac{k_{i} - k_{n}}{1 + k_{i} k_{n}}\right| = \left|\frac{k_{r} - k_{n}}{1 + k_{r} k_{n}}\right|,$$

where $k_{i}$ and $k_{r}$ are the slopes of the incident and reflected rays, respectively.

Moreover, because

$$k_{r} = -\frac{z_{0}}{s},$$

substituting it in gives the slope $k_{i}$ of the incident light. From the point-slope form, the straight line on which the incident light lies is

$$z - z_{m} = k_{i}\left(r - r_{m}\right).$$

Substituting the height $z = h$ of the object point into this line, the horizontal coordinate of the object point is obtained, and the object-image relation function at this position is

$$t = r_{m} + \frac{h - z_{m}}{k_{i}}.$$
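As a concrete illustration of the derivation above, the following is a minimal numerical sketch that traces a ray back from an image point to the object plane for a hyperbolic mirror. The mirror parameters a and b, the axial point z0 through which the reflected ray passes, and the object-plane height h are hypothetical placeholders, not values taken from the paper.

```python
import numpy as np

def object_coordinate(s, a, b, z0, h):
    """Trace a ray from the image point (s, 0) back to the object plane z = h."""
    # Reflected ray through (s, 0) and (0, z0): z = k_r * r + z0.
    k_r = -z0 / s
    # Intersect the ray with the hyperbola z^2/a^2 - r^2/b^2 = 1 (quadratic in r).
    A = k_r ** 2 / a ** 2 - 1.0 / b ** 2
    B = 2.0 * k_r * z0 / a ** 2
    C = z0 ** 2 / a ** 2 - 1.0
    roots = np.roots([A, B, C])
    roots = np.real(roots[np.isreal(roots)])
    # Keep the intersection lying on the lower branch (z < 0); in the text's
    # coordinate convention this corresponds to "taking the smaller solution".
    z_candidates = k_r * roots + z0
    r_m = roots[z_candidates < 0][0]
    z_m = k_r * r_m + z0
    # Tangent and normal slopes of the lower branch at (r_m, z_m).
    k_t = a ** 2 * r_m / (b ** 2 * z_m)
    # Obtain the incident-ray direction by mirroring the reflected-ray
    # direction about the surface normal (law of reflection).
    d = np.array([1.0, k_r]) / np.hypot(1.0, k_r)   # reflected-ray direction
    n = np.array([-k_t, 1.0]) / np.hypot(k_t, 1.0)  # unit normal to the mirror
    d_in = d - 2.0 * np.dot(d, n) * n
    k_i = d_in[1] / d_in[0]                          # slope of the incident ray
    # Incident ray through (r_m, z_m); intersect it with the object plane z = h.
    return r_m + (h - z_m) / k_i

# Example call with made-up mirror parameters (metres).
print(object_coordinate(s=0.004, a=0.03, b=0.04, z0=-0.1, h=1.5))
```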

2.1.2. Micusik Perspective Projection Imaging Model

The sensor plane is a virtual plane perpendicular to the optical axis of the reflector, with its origin at the intersection of the optical axis and the plane, and the coordinate system is established at the viewpoint of the mirror. The imaging model of a catadioptric camera describes the process by which a scene point becomes a point on the image: from the scene point to a mirror point, from the mirror point to the sensor plane, and from the sensor plane to a point on the image plane [11, 12].

(1) Conversion from Mirror Surface to Sensor Plane. The space vector formed by the single viewpoint of the mirror and the mirror point is projected, through the optical center of the perspective camera, onto the sensor plane as the point $u'' = (x'', y'')^{T}$:

$$g\left(u''\right) = \begin{bmatrix} x'' \\ y'' \\ f\left(\rho\right) \end{bmatrix}, \quad \rho = \sqrt{x''^{2} + y''^{2}}.$$

Among them, the function $g$ represents the geometric shape of the mirror, and the function $f$ represents the relationship between $\rho$ and the axial coordinate of the mirror point; both are determined by the parameters of the catadioptric mirror according to the projection transformation geometry. The relationship between the orthographic projection point $u''$ and the space vector of the scene point $X$ is as follows:

$$\lambda\, g\left(u''\right) = P X, \quad \lambda > 0.$$

(2) Conversion from Sensor Plane to Image Plane. Because the optical axis of the perspective camera cannot be guaranteed to be exactly perpendicular to the sensor plane, the center of the image does not coincide exactly with the center of the sensor, and the digitization of the image introduces a certain distortion. The conversion relationship from the sensor plane to the image plane, that is, the relationship between the two reference planes, can be expressed by an affine transformation:

$$u' = A u'' + t.$$

Among them, $A \in \mathbb{R}^{2 \times 2}$ and $t \in \mathbb{R}^{2 \times 1}$.
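To make the two conversions concrete, the following is a minimal sketch of back-projecting an image point to a viewing ray under this kind of central catadioptric model. The affine matrix A, the offset t, and the polynomial coefficients used for f(ρ) are invented placeholders, not calibration values from the paper.

```python
import numpy as np

# Placeholder calibration data (not from the paper).
A = np.array([[1.001, 0.0],      # sensor-plane -> image-plane affine part
              [0.0,   0.999]])
t = np.array([640.0, 480.0])     # image-center offset in pixels
poly = [-250.0, 0.0, 1.2e-3, 0.0]  # f(rho) = c0 + c1*rho + c2*rho^2 + c3*rho^3

def image_to_ray(u_img):
    """Back-project an image point to a 3-D viewing ray.

    Step 1 (image plane -> sensor plane): invert the affine u' = A u'' + t.
    Step 2 (sensor plane -> mirror): lift u'' to the space vector
            [x'', y'', f(rho)] with rho = ||u''||, as in the mirror model.
    """
    u_sensor = np.linalg.solve(A, np.asarray(u_img, float) - t)
    rho = np.linalg.norm(u_sensor)
    f_rho = sum(c * rho ** i for i, c in enumerate(poly))
    ray = np.array([u_sensor[0], u_sensor[1], f_rho])
    return ray / np.linalg.norm(ray)

# Example: the viewing ray through the pixel (900, 520).
print(image_to_ray([900.0, 520.0]))
```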

2.2. Art Design Image

Since art design entered the 21st century, the rapid development of science and technology, the substantial improvement of design expression techniques, the continual refinement of design tools, and the rapid development of output technology have all promoted the steady development of the field of art design. Because of the development of science and technology, the field of design has been strongly influenced by it: not only has the efficiency of production been greatly improved, but the realization of designs has also made a substantial leap [13]. Moreover, the emergence of new materials has provided designers with more possibilities and inspiration. As people's pursuits rise, the means and expressions of design must be constantly innovated, and developing from a single dimension toward multiple dimensions is a favorable way to achieve this goal. Secondly, in today's society, whether for audiences or designers, the fast pace of life brings more and more pressure from work and daily life, leaving people little room to breathe. Therefore, people's demand for entertainment is becoming stronger and stronger. Entertainment can not only pull people out of a tense atmosphere for a moment of rest but also let them discover the beauty of life and the warmth of the world as they relax [14]. Therefore, if designers can cut in at this point, they will surely bring a sense of visual joy to the audience.

The performance of traditional art design is that, whether the design elements used are abstract or realistic, most of the information presented to the audience is a flat image, and this image still follows gestalt design principles: basic elements such as color, texture, dots, lines, and planes are consciously combined. This also helps the audience understand the content to be expressed more effectively and intuitively and meets their aesthetic psychology [15, 16]. Nowadays, under the influence of factors such as the development of science and technology and people's desire for richer emotional experience, art design has taken on a form different from the past, with multidimensional characteristics. It has broken the dominance of the traditional single paper medium and, like emerging media, has developed from two dimensions to multiple dimensions and shifted from static viewing to dynamic, interactive experience. This kind of change is like the change that harmony brought to music. Those who have studied music know that a beginner starts with single notes, which sound monotonous; as skill and understanding improve, chords are gradually added, and the addition of chords is like an injection of fresh blood, giving the music new life and a new auditory feast. The relationship between intervals is like the relationship between the audience and the work: three or more notes stacked in thirds form a chord, and combined in depth they become the multidimensional "harmony" and "melody" of art design. However, the unique temperament of traditional art design has not been obscured or obliterated; multidimensional art design merely uses different media, and the multidimensional expression of art design uses the various media that can magnify its advantages, in order to highlight the purpose of the design and to provide more complete and efficient visual information services. When doing art design, we cannot simply start from people's visual sensory experience but should consider the comprehensive application of multiple senses [17, 18]. Table 1 shows the relationship between color and the senses in art design.

2.3. Sensory Experience

Sensory experience is an experience that integrates vision, hearing, touch, smell, and taste; it comprises the mental and physical changes that people perceive through their own sensory organs. In the era of the experience economy, receiving external information requires the participation of multiple sensory systems. Today's society pays more and more attention to the humanization and individualization of design and emphasizes grounding design in people's experience, so art design should also pay attention to the other sensory functions [19, 20]. The content of sensory experience mainly includes the following three aspects:

2.3.1. Sensory Impact

Visual impact can change people’s temporary “numbness.” In the art design, strong color contrast is often used to achieve the effect of visual impact. Auditory impact is generally the arrangement of music and the processing of special sounds in the process of brand image design and the use of auditory channels to convey important brand information [21].

2.3.2. Sensory Pleasure

Sensory pleasure refers to obtaining sensory comfort and enjoyment through people’s vision, hearing, touch, taste, and smell. It is a method often used in brand image design. Although this way of expression is simple and direct, the effect it produces should not be underestimated [22, 23].

2.3.3. Sensory Interaction

“Interaction” means to relate to each other and communicate with each other. It is the communication process of emotional changes between people and things. Sensory interaction refers to participating in the environment through people’s sensory organs, resulting in a kind of interesting communication [24, 25].

Table 2 shows the analysis of the stimulus form of the five senses.

3. Image Sensory Experience Experiment Based on Omnidirectional Visual Sensor in Art Design

3.1. Experimental Environment

In this paper, the recognition of art design images is developed on the Eclipse platform and implemented in the Java language [26, 27]. The software development environment of the whole system is shown in Table 3.

3.2. Sources of Subjects

Most current image databases have not established special image collections for the study of image color casts. To conduct a "case study" and "statistical research" on the color cast evaluation method proposed in this article, a large number of images were collected by downloading from the Internet and by camera shooting, and these images were randomly divided into two groups: a "training group," used for related exercises before the formal test, and an "experimental group," used for the formal test. In addition, 30 volunteers were selected, ranging in age from 15 to 55 years old, all with normal vision and normal color vision.

3.3. Experimental Procedure

Before the formal test, the purpose and operation process of the test were explained to the volunteers in detail, and corresponding training was conducted, so that the volunteers had a comprehensive and correct understanding of "color cast" and could correctly judge the color characteristics of any image. The entire test process is divided into multiple stages, with an interval between stages for the volunteers to rest and avoid visual fatigue. Twenty-five images are displayed at each stage. The first 5 images are selected from the training group to stabilize the volunteers' subjective evaluations, and the score data of these 5 images are not included in the result set; the remaining 20 images come from the "experimental group," and their evaluation results are included in the result set. In the formal test, volunteers are required to record the evaluation results one by one to avoid memory deviation. The evaluation results are divided into "essential color cast images," "true color cast images" (color cast images), "normal non-color cast images," and "unrecognized images." After each volunteer has completed the subjective evaluation of all images of the "experimental group," the result set is preprocessed to remove images with disputed classification results: an image is marked as belonging to a category and kept in the result set only when the number of people who judge it to be that category is not less than 18; if no category reaches 18 people, the image is removed. After the preprocessing is completed, each image in the result set has an effective subjective evaluation mark, and the custom image library is complete.
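A minimal sketch of this preprocessing step is given below, assuming the votes are stored as a mapping from image ID to the list of category labels given by the 30 volunteers; the 18-person agreement rule follows the text above, and the category names are quoted from it.

```python
from collections import Counter

MIN_AGREEMENT = 18  # an image keeps a label only if at least 18 of 30 volunteers agree

def filter_result_set(votes_per_image):
    """votes_per_image: {image_id: [category label from each volunteer]}."""
    result_set = {}
    for image_id, votes in votes_per_image.items():
        label, count = Counter(votes).most_common(1)[0]
        if count >= MIN_AGREEMENT:
            result_set[image_id] = label   # keep the image with its majority label
        # otherwise the classification is disputed and the image is removed
    return result_set

# Example with invented votes for two images.
votes = {"img_001": ["true color cast"] * 22 + ["normal non-color cast"] * 8,
         "img_002": ["unrecognized"] * 12 + ["essential color cast"] * 18}
print(filter_result_set(votes))
```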

3.4. Experimental Description

Two hundred images are randomly selected from the result set as the test image library for verifying the effectiveness of the objective evaluation method, including 60 "true color cast" images, 35 "essential color cast" images, 85 "normal non-color cast" images, and 20 "unrecognizable" images; the remaining images in the result set are used as the training image library during the study of the objective evaluation method. When the parameters are determined, the parameter thresholds are corrected according to the difference between the subjective evaluation results in the training library and the objective evaluation results obtained. The SIFT feature matching algorithm used in this article includes two stages: the first stage is the generation of SIFT features, that is, feature vectors invariant to scale, rotation, and brightness changes are extracted from the images to be matched; the second stage is the matching of the SIFT feature vectors.
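A minimal OpenCV sketch of these two stages is shown below; the file names and the 0.7 ratio threshold are illustrative assumptions, not settings reported in the paper.

```python
import cv2

# Two art design images to be matched (hypothetical file names).
img1 = cv2.imread("design_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("design_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
# Stage 1: extract scale-, rotation-, and brightness-invariant descriptors.
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Stage 2: match descriptors and keep pairs that pass the nearest-neighbour ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = []
for pair in knn:
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])
print(f"{len(good)} matches kept out of {len(knn)}")
```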

4. Image Sensory Experience Analysis Based on Omnidirectional Visual Sensor in Art Design

4.1. Image Database Dependency Test

Because the algorithm proposed in this paper was initially tested only on the LIVE2 image database, it cannot be guaranteed from that alone that the method maintains its advantage on other image databases. It can be seen from Table 4 that, compared with SSIM, BLIINDS-II, DIIVINE, and BRISQUE, the method in this paper maintains good consistency in the test results of degraded images in these two databases and is comparable in performance to mature image quality evaluation algorithms. Therefore, based on the above data analysis, it can be concluded that the superior performance of the algorithm in this paper does not depend on a specific image database, and the test results on other image databases remain highly credible.

It can also be seen from Table 4 that for JP2K images, this algorithm performs best; for JPEG images, although both the SSIM and BLIINDS-II algorithms have a performance advantage over this algorithm, the algorithm proposed in this article remains competitive. For blurred images, the performance advantage of this algorithm is significantly higher than that of the other algorithms, and for WN images its overall evaluation ranks second. Therefore, the algorithm proposed in this paper maintains good consistency.

4.2. Performance Comparison of Different Distortion Types

Distortion refers to the deviation of a signal from the original or standard signal during transmission. In an ideal amplifier, the output waveform should be identical to the input waveform except for amplification, but this cannot be fully achieved in practice. When the output waveform differs from the input waveform, the phenomenon is called distortion.

It can be seen from Figure 3 that, whether for a single distortion type or for multiple coexisting distortion types, the PLCC evaluation index value of this algorithm is higher than that obtained by the BRISQUE algorithm, and the RMSE value obtained by the algorithm in this paper is smaller than that obtained by the BRISQUE algorithm. Generally speaking, the performance of the algorithm in this paper is better than that of the BRISQUE algorithm. At the same time, the accuracy of the prediction results of this algorithm is better than that of the BRISQUE algorithm for the five types of degraded images in the LIVE2 database. The algorithm makes full use of human visual characteristics, so its evaluation results are more consistent with the subjective evaluation results. Compared with the current mature objective quality evaluation algorithms, the algorithm in this paper shows superior performance in all aspects. The method given in this paper is a new type of no-reference image quality evaluation algorithm, which maintains good consistency with the visual perception characteristics of the human eye.
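For reference, the two indices used here can be computed as in the sketch below; the subjective (MOS) and predicted scores are placeholders, and the nonlinear regression step often applied before PLCC in image quality evaluation is omitted.

```python
import numpy as np

# Placeholder scores: subjective mean opinion scores and algorithm predictions.
mos       = np.array([62.1, 45.3, 78.9, 30.2, 55.0])
predicted = np.array([60.5, 48.0, 75.2, 33.1, 57.4])

plcc = np.corrcoef(predicted, mos)[0, 1]            # linear correlation, higher is better
rmse = np.sqrt(np.mean((predicted - mos) ** 2))     # prediction error, lower is better
print(f"PLCC = {plcc:.3f}, RMSE = {rmse:.2f}")
```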

4.3. False Detection Rate and Missed Detection Rate
4.3.1. False Detection Rate

Figure 4 shows the false detection rate of the three methods. Among them, the false detection rate of the SSC method is the highest: the contours it detects contain a great deal of texture interference. Combined with the detected images, it can be seen that this method performs poorly in texture suppression; although the integrity of the main contour is very high, the texture interferes with the highlighting of the contour, so the detection effect is not ideal. In terms of false detection, the method in this paper performs relatively well and maintains a relatively stable false detection rate, indicating that it is effective in suppressing background texture.

4.3.2. Missed Detection Rate

Figure 5 shows the missed detection rate. Among these methods, the SSC method performs better in missed detection, with a relatively low missed detection rate, indicating that it preserves contour integrity well. The missed detection rate of CORF is relatively high, indicating that texture is excessively suppressed; combined with the detected images, it can be seen that the contour details of the image are seriously lost. Compared with the SSC method and the CORF method, the method in this paper achieves a clearly better balance between missed detection and false detection.
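The two rates compared in Figures 4 and 5 can be computed from binary contour maps as in the sketch below; the exact definitions used in the paper are not stated, so standard pixel-level definitions are assumed here.

```python
import numpy as np

def detection_rates(ground_truth, detected):
    """Pixel-level false detection rate and missed detection rate (assumed definitions)."""
    gt = ground_truth.astype(bool)
    det = detected.astype(bool)
    false_pos = np.logical_and(det, ~gt).sum()   # detected pixels with no true contour
    missed    = np.logical_and(~det, gt).sum()   # true contour pixels not detected
    false_detection_rate  = false_pos / max(det.sum(), 1)
    missed_detection_rate = missed / max(gt.sum(), 1)
    return false_detection_rate, missed_detection_rate

# Example with random placeholder contour maps.
gt  = np.random.rand(128, 128) > 0.9
det = np.random.rand(128, 128) > 0.9
print(detection_rates(gt, det))
```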

4.4. Algorithm Time-Consuming Comparison

For image quality evaluation, the computational complexity of an algorithm is another important indicator of its performance. Table 5 shows the average running time of each full-reference image quality evaluation method. The computational complexity of the full-reference methods can be divided into three levels. The first level comprises PSNR and SSIM: they run much faster than the other algorithms, but their predicted scores do not fit the subjective scores well. The second level comprises SIQM, GSS, and the algorithm of this paper: they run fast and perform well, and the algorithm of this paper achieves the best performance indicators. The third level comprises VIF and IFC, which have relatively high computational complexity. Weighing calculation accuracy against calculation efficiency for screen image quality evaluation, the proposed algorithm achieves excellent results.
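A simple way to obtain the per-image timings behind such a comparison is sketched below; evaluate_quality is a hypothetical stand-in for any of the compared methods, and the random image pairs are placeholders rather than the test set used in the paper.

```python
import time
import numpy as np

def evaluate_quality(reference, distorted):
    # Placeholder metric standing in for any full-reference evaluation method.
    return float(np.mean((reference - distorted) ** 2))

# Placeholder test set of reference/distorted image pairs.
images = [(np.random.rand(256, 256), np.random.rand(256, 256)) for _ in range(50)]

start = time.perf_counter()
for ref, dist in images:
    evaluate_quality(ref, dist)
elapsed = (time.perf_counter() - start) / len(images)
print(f"average time per image: {elapsed * 1000:.2f} ms")
```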

4.5. Sensory Experience Analysis

Figure 6 shows the audience's four feelings about four different art design images. The four feelings are happiness, sadness, melancholy, and calm. Clearly, for the first image, 87.21% of the audience reported feeling happy, indicating that the main idea of this image can bring joy to people. For the second image, most of the audience reported feeling sad. For the third image, more than half of the audience reported feeling melancholy. For the fourth image, 69.34% of the audience reported feeling calm. This shows that differences in the content of art design images bring different sensory experiences to people.

4.6. Feature Point Selection

Figure 7 shows the number of feature points under different thresholds. The threshold in this paper is determined from the results of multiple experiments. Taking an actual PCB image as an example, if 1000 matching point pairs are obtained by rough matching, the final matching point pairs that pass the nearest neighbor ratio test under different thresholds are shown in the figure. When the threshold takes a higher value, the number of matching point pairs increases, but the computational complexity also increases, reducing overall efficiency; when a lower value is taken, the result is the opposite. Choosing the threshold appropriately achieves a good balance between efficiency and accuracy.
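The threshold sweep behind Figure 7 can be reproduced in the style of the SIFT sketch in Section 3.4; knn below is assumed to hold the k = 2 matches produced there, and the threshold values are illustrative.

```python
# Count how many rough matches survive the nearest-neighbour ratio test at
# each candidate threshold (knn comes from the earlier SIFT sketch).
for thr in (0.5, 0.6, 0.7, 0.8, 0.9):
    kept = sum(1 for pair in knn
               if len(pair) == 2 and pair[0].distance < thr * pair[1].distance)
    print(f"threshold {thr:.1f}: {kept} matching point pairs kept")
```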

5. Conclusion

Images are the most intuitive form of visual language and information reading; they help the public read useful information quickly and accurately. This article reanalyzes information design from the perspective of artistic aesthetics and proposes that artistic images should be used to convey information accurately. Artistic images are aesthetic and can meet people's aesthetic needs, and aesthetic activity is the unity of sensibility and rationality. Through artistic images, readers gain a perceptual understanding of information, which arouses their emotional resonance and leaves a deep impression in their minds. At the same time, this article has certain limitations and deficiencies. The sensor used in this article is a panoramic vision sensor, and the positioning effect obtained is still good, but this holds for the relatively structured scene of the experimental environment; if the real environment is more complicated, such precise positioning may not be possible, or positioning may even fail. It is difficult to perform real-life positioning and navigation tasks by relying on a single sensor; therefore, a method of combining multiple sensors is required. This is not only the best way to improve positioning accuracy and success rate but also a hot trend in future positioning research. To improve the artificial intelligence and object orientation of art design, an art design system design method based on image processing technology is proposed: the color difference compensation method is used to restore the brightness balance of the art design image, image fusion is combined with the pixel quantization tracking method, and wavelet noise reduction technology is used to realize image denoising, thereby completing the image processing in art design. In the future, image recognition can more effectively realize image output in art and improve the output quality of art design images, so that the output signal-to-noise ratio of the image is high and the human-computer interaction performance of the system is better.

Data Availability

The data underlying the results presented in the study are available within the manuscript.

Conflicts of Interest

The author declares that he has no conflict of interest regarding the publication of the research article.