Abstract

At present, the human visual system is the most effective, accurate, and fast image processing system in the world. This is because human eyes possess several special visual features; the features most closely related to image enhancement are color constancy and brightness constancy. Based on an analysis of color constancy, luminance constancy, and Retinex theory, this paper presents a new image enhancement framework and computational model that can better simulate human visual features: a new image enhancement method in the compressed domain based on Retinex theory. Following Retinex theory, the DCT coefficients are divided into incident components (DC coefficients) and reflection components (AC coefficients). By adjusting the dynamic range of the DC coefficients, carefully adjusting the AC coefficients, and using a threshold method to suppress blocking artifacts, the compressed-domain image can be enhanced. On the basis of Retinex theory, the incident light and reflected light components are considered together: the dynamic range of the incident light component (DC coefficient) and the details of the reflected light component (AC coefficients) are adjusted, and the incident light component is then reexamined. The method achieves a better image enhancement effect and avoids the blocking effect.

1. Introduction

With the development of modern society, posters have gradually become an important medium of cultural communication. Poster design conveys its message through its own unique form of expression, and posters have a natural and moving charm. The development of poster design is not stagnant; it keeps pace with the times and maintains a high degree of attention to, and forward-looking thinking about, society. Nowadays, the poster design industry has embarked on a broader road of exploration under the guidance of new aesthetics and has taken on experimental characteristics. In the future, the design forms of image media in poster design will become richer and more diverse, and the trend of development will continue to improve: designs may invite audience participation, or they may embrace a minimalist, "out of nothing" approach that puts simplicity first. With the progress of the times, the carrier of posters is constantly changing and new media are constantly developing. Poster design has moved from the plane into space, full of greater development possibilities, and can better communicate and interact with the audience.

The study in [1] defined a human visual system model that describes the monochrome features at each spatial location of an image as encoding the luminance and chrominance of the corresponding three color components. There are many requirements for human perception [2], but the ability to quickly analyze and selectively encode information in a scene makes image compression particularly well matched to those requirements. The basic forms of information in image processing are perceived, processed, and interpreted by the human brain [3]. The study in [4] achieved improvements by building edge-enhanced images from the original images. The spatial filters used in the human visual system appear to be low-pass for chromatic stimuli and band-pass for photometric (luminance) stimuli [5]. Image processing technology focuses on two main tasks [6]: improving image information for human interpretation, and processing image data for storage, transmission, and representation for automatic machine perception. Disjointed information [7] is used as an indicator for image comparison and applied to perceptual image quality assessment, image registration, and video tracking. Recent developments in digital technologies utilizing large-scale image and video data, such as the Internet, multimedia systems, digital television, digital cinema, and mobile/wireless devices, have increased the demand for efficient processing [8], storage, and transmission of visual information. The image processing apparatus of [9] can quickly detect objects, and in the image processing apparatus of [10], an input unit is used for inputting image information. The purpose of the image retargeting algorithm of [11] is to adapt an image to a display whose aspect ratio or size differs from that of the image by nonuniform, content-aware resizing. Algorithms are capable of computing and clustering photos and videos into exact and near-duplicate media items [12]. The study in [13] relied on increasingly high-resolution multispectral digital images of artworks (mainly paintings) and sophisticated computer vision methods. The study in [14] proposed a new research method that takes a physical illumination model as its core, dynamically calculating HDR illumination in a virtual environment; the method allows existing virtual environments to be reused as input and computes HDR images in photometric units. The study in [15] noted difficulties common to many visual learning tasks: one is the lack of supervision information, because labeling can be tedious, expensive, or even impossible; another is the high dimensionality of visual data. These difficulties can be alleviated by mixing labeled and unlabeled training data.

Based on the analysis of color constancy, brightness constancy, and Retinex theory, an enhancement framework and computational model that can better simulate human visual characteristics are proposed in this paper, and it is applied to poster processing to verify its effectiveness. The threshold method is used to suppress the block effect, thus realizing image enhancement in the compressed domain. Simultaneous processing of incident and reflection components can make full use of the low-frequency and high-frequency information of the original image to enhance the details, preserve the color, and suppress the block effect.

2. Theoretical Framework

2.1. The Primary Function of Posters

In the basic theory of social semiotics, symbol theory holds that all symbol systems are social, which is reflected in the social purposes of symbol systems. Posters therefore also have their social purposes. Information dissemination is one of the most obvious social functions of posters, especially in eras when information technology was underdeveloped. However, great changes took place after the industrial revolution. As is well known, the industrial revolution greatly raised the level of productivity, and for the first time in human history supply exceeded demand. In this context, various disciplines began to study methods of influencing consumers' decision-making, which directly led to the development of new disciplines including marketing and advertising. The social function of posters has therefore also changed greatly, from the initial dissemination of information to the persuasion of the audience. It can be said that persuading the audience to accept the information conveyed by posters is the primary role of posters in today's society. Based on this understanding of the main social functions of posters, the key question is how posters convey information and persuade audiences through their image systems.

2.2. Human Visual Properties

The human visual system (HVS) is a complex subject of biology and related disciplines. People initially studied and utilized the visual characteristics of the HVS to manufacture products or put forward standards. For example, in the process of transmitting TV signals, the color signal is compressed into a smaller bandwidth, which reduces the overall transmission bandwidth, so that color TV could be popularized under the limited bandwidth conditions of the time. Another example of applying human visual features is image compression, such as the JPEG compression standard, which takes advantage of the insensitivity of human eyes to high-frequency detail: in the compression process, most high-frequency coefficients are quantized to zero, so the reduction in required storage space brings only an imperceptible loss of image quality. The human visual system is a comprehensive system for perceiving the three-dimensional world. The human eye receives light reflected from surrounding objects through retinal cells; HVS processing starts in the eyes, but most of it is carried out by the brain, and how the brain processes this information is still largely unknown. Research on the HVS mainly focuses on its visual characteristics, called human visual characteristics; through computer science, researchers try to simulate these characteristics and functions of the human eye and apply them to information processing fields such as image enhancement.

3. Algorithm

3.1. Human Eye Structure and Visual Processing Process

The eye is the starting point of HVS information processing. Light passes through the lens and falls on the retina at the back of the eyeball. The eye structure diagram is shown in Figure 1. Six muscles on the outside of the eye are attached to it so that the eye can fixate on any target, and the shape of the lens can be adjusted. The shape of the lens determines the focal length, and by adjusting it the scene captured by the eye can be focused onto the retina. The disc-shaped vascular membrane composed of two annular muscles is called the iris, and the gap in the center of the iris is called the pupil, which is located in front of the lens. The diameter of the pupil determines the amount of light entering the eye: one of the two annular muscles enlarges the diameter of the pupil, and the other reduces it.

The retina is essentially composed of a group of receptors that can sense the light entering the eye. Its structure is divided into three main layers: the receptors are located at the bottom of the retina, the central retinal cells are located at the top, and the bipolar cells are located between these two layers. The light entering the eye passes through the upper two layers before reaching the receptors at the back of the retina. A light-absorbing material is attached to the lower part of the receptor layer, which effectively prevents light that has reached the retina from being scattered. Rod cells and cone cells are the two most important photoreceptor cells in the eye. Cone cells distinguish colors under strong light conditions and are the organs of bright (photopic) vision; rod cells are the organs of dark (scotopic) vision and work under low-light conditions.

Rod cells are generally about 25 times more sensitive than cone cells because of their larger diameter and longer length. Generally, cones can be divided into three categories. The first category responds to the red part of the visible spectrum, the second category mainly responds to the green part, and the third category mainly responds to the blue part. This constitutes the physiological basis of the tricolor theory of color vision. The tricolor theory explains why the three primary colors must be red, green, and blue, while other colors can be composed of these three colors.

The receptor is the main organ for measuring the amount of light entering the retina. The amount of light measured by a single receptor (expressed in energy) can be calculated by integrating over all wavelengths $\lambda$. It is known that the three types of cones are sensitive to the red, green, and blue parts of the spectrum. Let $s_{i}(\lambda)$, $i = 1, 2, 3$, denote the response curves of these three receptors, and let $C(\lambda)$ denote the light of wavelength $\lambda$ that falls on a receptor; the light energy measured by a single receptor can then be obtained by the following formula:

$$e_{i} = \int_{\lambda} C(\lambda)\, s_{i}(\lambda)\, d\lambda, \qquad i = 1, 2, 3.$$

When observing an object with a nonsmooth surface, the light perceived by the receptor can be considered as the product of the reflected light of the object (light reflected from the object surface into the eye) and the incident light irradiating the object surface; that is to say, the light $C(\lambda)$ perceived by the human eye is the product of the reflectance $R(\lambda)$ and the incident light $L(\lambda)$, as shown by the following formula:

$$C(\lambda) = R(\lambda)\, L(\lambda).$$

Then, the complete calculation formula of the energy is as follows:

$$e_{i} = \int_{\lambda} R(\lambda)\, L(\lambda)\, s_{i}(\lambda)\, d\lambda, \qquad i = 1, 2, 3.$$
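As a concrete illustration, the integration above can be approximated numerically; the following minimal sketch uses hypothetical Gaussian response curves and a hypothetical illuminant and reflectance, since the actual spectral data are not given here:

```python
import numpy as np

# Wavelength grid (nm) covering the visible spectrum.
wavelengths = np.linspace(380, 780, 401)

def gaussian(lam, mu, sigma):
    return np.exp(-((lam - mu) ** 2) / (2 * sigma ** 2))

# Hypothetical response curves s_i for the three cone types
# (peaks roughly in the blue, green, and red bands; illustrative only).
s = [gaussian(wavelengths, mu, 40.0) for mu in (440.0, 545.0, 600.0)]

# Hypothetical incident light L(lambda) and surface reflectance R(lambda).
L = np.ones_like(wavelengths)                       # flat illuminant
R = 0.2 + 0.6 * gaussian(wavelengths, 620.0, 60.0)  # reddish surface

# e_i = integral over lambda of R(lambda) * L(lambda) * s_i(lambda)
energies = [np.trapz(R * L * s_i, wavelengths) for s_i in s]
print(energies)
```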

The receptors transmit the measured energy information to the bipolar cells located in the second layer, and the bipolar cells then connect, through the central retinal cells, via synapses that stimulate nerve endings to carry the information to the cerebral cortex. Of the 126 million receptors in the retina, rod cells account for the vast majority, about 120 million, and the remaining 6 million are cone cells. Most cone cells are located in a very small area called the fovea, which lies in the center of the retina; the resolution of the center of the retina is therefore much higher than that of its edge. Although there are about 126 million receptors in the retina, only about 1 million central retinal cells carry this information to the cortex through nerve endings, so the information must be compressed in some way before it can pass through the axons of the central cells. Processing therefore begins in the retina. A model of the human visual system called the luminance visual model has been developed to describe the process by which light enters the eye and reaches the cerebral cortex through the retina, as shown in Figure 2.

In order to establish the model quickly, this paper simplifies the visual model of human eyes into a linear system. The light reflection block in Figure 2 refers to the optical response system of the human eye; Lamant, Rohler, and others measured the spatial frequency characteristics of this system, which exhibit low-pass behavior. Spectral response refers to the spectral response system of the human eye. The internal image is formed by light passing through these two response systems, the optical response and the spectral response, and the imaging signal acts on the receptors in the retina, including rod cells and cone cells. Corner compression refers to the lateral inhibition effect of the visual neural network: because the number of receptors is much larger than the number of central retinal cells, the information must be compressed before it can continue to be transmitted. The lateral inhibition effect shows that the visual signal perceived by the human eye is formed by the weighted sum of the outputs of adjacent receptors, not by a single receptor alone. The internal imaging signal is processed by the cell response, and the signal produced after corner compression is the final subjective luminance signal, which is processed by the cerebral cortex and finally gives rise to visual perception.

3.2. Human Visual Characteristics and Poster Image Enhancement
3.2.1. Dynamic Range of Vision

The dynamic range of vision is the range of light intensity that human eyes can perceive. The whole dynamic range of the human eye extends from about 10⁻² cd/m² to 10⁶ cd/m², which can be said to be relatively wide. Such a wide dynamic range is beyond the capabilities of machinery, displays, and even photosensors. Take the gray image that can be displayed by a display as an example: generally, the dynamic range of an 8-bit gray image is only the 256 gray levels from 0 to 255. The smaller the gap between adjacent levels, the smoother, more delicate, and more natural the brightness change, which corresponds to a wider representable dynamic range.

3.2.2. Spatial Resolution

Spatial resolution is the ability of human eyes to observe and distinguish details in scenes. The inherent properties of human eyes determine that the resolution of human eyes depends on the physiological structure of eyes.

3.2.3. Image Contrast

Contrast is a measure of the difference in brightness between the brightest white and the darkest black in an image, also called the gray contrast of an image. The larger the range of gray-level differences, the greater the contrast; the smaller the range, the smaller the contrast. When the contrast ratio reaches 120:1, vivid and rich colors can be displayed easily; when it reaches 300:1, colors at all levels can be supported.

3.2.4. Relative Visual Acuity Curve

The human eye has different sensitivity to light of different wavelengths: light with different wavelengths but the same radiant power is perceived with different brightness and color. Therefore, the perceived brightness of an object whose optical radiation has the spatial and wavelength distribution $\Phi(x, y, \lambda)$ is

$$B(x, y) = K_{m}\int_{\lambda}\Phi(x, y, \lambda)\, V(\lambda)\, d\lambda,$$

where $V(\lambda)$ is called the relative luminous efficiency function of the visual system and $K_{m}$ is a constant. For human eyes, $V(\lambda)$ is a bell-shaped curve, and the relative visual acuity curves of rod cells and cone cells are different.

3.2.5. Contrast Sensitivity

Under a uniform background with brightness $B$, there is an area whose brightness is higher than the background; that is, the brightness of this area is $B + \Delta B$, and the smallest brightness difference $\Delta B$ that human eyes can distinguish is a function of $B$. As the background brightness $B$ increases, the difference $\Delta B$ must also increase for human eyes to be able to distinguish the area with brightness $B + \Delta B$. Experiments show that, over a considerable range of brightness, the ratio $\Delta B / B$ remains approximately constant; this constant is called the Weber ratio. Weber also held that the visual characteristics of human eyes are approximately logarithmic, because if the brightness is taken as a logarithm and then differentiated, the following formula is obtained:

$$d(\log B) = \frac{dB}{B} \approx \text{const}.$$

3.3. Retinex Theory and Algorithm

Land believed that color constancy is the result of the joint action of the retina and the cerebral cortex, so he put forward the Retinex theory (retina + cortex). When the Retinex theory is applied to digital (poster) images, the long-wave, medium-wave, and short-wave responses correspond to the R, G, and B channels of the image, respectively. The image to be enhanced in the R, G, and B channels is considered to be composed of two parts: the reflected light component and the incident light component. Estimating the incident light component by comparing pixel values between pixels is called illuminance estimation. The illuminance component is removed from the image to obtain the reflected light component, which restores the original appearance of the object and thus realizes the enhancement of the original image. The specific process is as follows (the three channels of a color image are treated in the same way, so single-channel processing is taken as an example).

Think of the image as the product of the incident light component and the reflected light component:

$$I(x, y) = L(x, y)\cdot R(x, y),$$

where $I(x, y)$ is the input image, $R(x, y)$ is the reflected light component, and $L(x, y)$ is the incident light component. Then, the incident light component of the scene is estimated by various methods to obtain the illuminance estimation $\hat{L}(x, y)$, and the reflection estimation $\hat{R}(x, y)$ is obtained by the following formula:

$$\hat{R}(x, y) = \frac{I(x, y)}{\hat{L}(x, y) + \varepsilon},$$

where $\varepsilon$ is a positive number infinitely close to 0 that ensures that the divisor is not 0.

How to estimate the incident light component of the scene to get the illuminance estimation is the core step of the Retinex algorithm. In the process of comparing the pixel values between pixels, path selection, that is, how to select the compared pixels, is the most important link, and it is also one of the common bases for distinguishing different Retinex algorithms. Path selection determines which pixels in the image to compare and how to compare, which directly affects the accuracy of estimating the incident light component of the scene and further determines the enhancement effect.

Provenzi’s RSR method is an improvement of the Random Path Retinex method proposed by Land, which changes the path selection from a one-dimensional path to a two-dimensional region and can better reflect the influence of surrounding pixels on target pixels. MSR and MSRCR algorithms are proposed by Jobson. They were first applied to space pictures taken by NASA and achieved a good enhancement effect, which attracted wide attention. The implementation process of MSR is as follows (take single-channel processing as an example).

Step 1. Divide the image into two light quantities, the incident light component and the reflected light component, as shown in the decomposition formula above.

Step 2. Take logarithms on both sides of the decomposition formula and separate the incident and reflected components as follows:

$$\log I(x, y) = \log L(x, y) + \log R(x, y).$$

Step 3. Filter the original image with a Gaussian smoothing filter:

$$\hat{L}(x, y) = I(x, y) * G(x, y),$$

where the symbol $*$ denotes convolution and $\hat{L}(x, y)$ is the low-pass filtered image obtained after smoothing, that is, the illuminance estimate of the incident component. $G(x, y)$ is a Gaussian filter whose expression is as follows:

$$G(x, y) = \frac{1}{2\pi\sigma^{2}}\exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right),$$

where $\sigma$ is the standard deviation of the Gaussian function.

Step 4. Subtract the low-pass filtered image from the original image in the logarithmic domain:

$$r(x, y) = \log I(x, y) - \log\big[I(x, y) * G(x, y)\big],$$

where $r(x, y)$ is the logarithmic-domain reflected light component.

Step 5. The MSR algorithm selects several standard deviations of the Gaussian function and takes a linear weighted average:

$$r_{\mathrm{MSR}}(x, y) = \sum_{k=1}^{N} w_{k}\big\{\log I(x, y) - \log\big[I(x, y) * G_{k}(x, y)\big]\big\},$$

where $N$ is the number of different standard deviations (generally three are taken, that is, $N = 3$), $w_{k}$ are the weights, $\sigma_{k}$ are the three different standard deviations, and $G_{k}(x, y)$ is a Gaussian function with parameter $\sigma_{k}$.

Step 6. Take the antilogarithm (exponential) of $r_{\mathrm{MSR}}(x, y)$ to obtain the enhanced image, namely,

$$R_{\mathrm{MSR}}(x, y) = \exp\big(r_{\mathrm{MSR}}(x, y)\big).$$

In the Gaussian filter expression above, the standard deviation $\sigma$ is called the scale parameter, and the enhancement effect is determined by its size: the smaller the value of $\sigma$, the stronger the detail enhancement ability but the worse the color fidelity, and vice versa. The MSR algorithm selects three scales from small to large for weighted fusion to achieve a better balance. However, the MSR algorithm does not consider the local information of the image and uses the same three fixed scale parameters for the whole image, so it cannot enhance the local information of the image in a targeted way.
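A minimal single-channel MSR sketch following Steps 1–6 (SciPy's Gaussian filter is used as the smoothing operator; the scales, the equal weights, and the small constant added before taking logarithms are illustrative assumptions rather than this paper's exact settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msr(image, sigmas=(15, 80, 250), weights=None, eps=1e-6):
    """Multi-Scale Retinex for a single-channel float image in [0, 1]."""
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    log_image = np.log(image + eps)
    r_msr = np.zeros_like(image, dtype=np.float64)
    for sigma, w in zip(sigmas, weights):
        # Steps 3-4: Gaussian smoothing gives the illumination estimate;
        # subtracting it in the log domain leaves the reflectance.
        illumination = gaussian_filter(image, sigma)
        r_msr += w * (log_image - np.log(illumination + eps))
    # Step 6: back from the log domain, then stretch to [0, 1] for display.
    enhanced = np.exp(r_msr)
    return (enhanced - enhanced.min()) / (enhanced.max() - enhanced.min() + eps)
```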

3.4. Image Compression in DCT Domain

Image compression in the DCT domain is one of the main components of the JPEG image coding standard, and the basic idea of JPEG coding can be easily extended to other DCT-based image or video compression schemes (such as MPEG and H.26X). First, the image is divided into nonoverlapping subblocks of 8 × 8 pixels, and then each subblock is transformed into the spatial frequency domain by the DCT. The DCT of an 8 × 8 subblock $f(x, y)$ is defined as

$$F(u, v) = \frac{1}{4} C(u)\, C(v) \sum_{x=0}^{7}\sum_{y=0}^{7} f(x, y)\cos\frac{(2x+1)u\pi}{16}\cos\frac{(2y+1)v\pi}{16}, \qquad u, v = 0, 1, \ldots, 7.$$

Among them,

$$C(u) = \begin{cases}\dfrac{1}{\sqrt{2}}, & u = 0,\\[4pt] 1, & u = 1, 2, \ldots, 7,\end{cases}$$

and $C(v)$ is defined in the same way.

After the DCT, the coefficients are quantized: each coefficient is divided by the corresponding quantization parameter and the result is rounded to the nearest integer. The initial values of the quantization parameters depend on the compression standard. The quantized DCT coefficients are stored as a one-dimensional sequence in zigzag order for entropy coding. Many coefficients are quantized to zero, leaving only a few nonzero coefficients in each subblock; when stored in zigzag order, these nonzero coefficients lie at the front of the sequence. To reconstruct the original image during decompression, the compressed data are first decoded, each coefficient is then multiplied by the corresponding quantization parameter (inverse quantization), and finally the inverse DCT is applied. The inverse DCT (IDCT) is defined as

$$f(x, y) = \frac{1}{4}\sum_{u=0}^{7}\sum_{v=0}^{7} C(u)\, C(v)\, F(u, v)\cos\frac{(2x+1)u\pi}{16}\cos\frac{(2y+1)v\pi}{16}, \qquad x, y = 0, 1, \ldots, 7.$$
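A small sketch of the block transform and quantization above (SciPy's type-II/III DCT with orthogonal normalization corresponds to the 8 × 8 JPEG definition; the single flat quantization step q is an illustrative assumption, whereas JPEG uses a full 8 × 8 quantization table):

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q=16):
    """Forward 8x8 DCT followed by uniform quantization."""
    coeffs = dctn(block.astype(np.float64), norm='ortho')  # F(u, v)
    return np.round(coeffs / q)                            # quantized coefficients

def decompress_block(quantized, q=16):
    """Inverse quantization followed by the inverse DCT."""
    coeffs = quantized * q                                  # dequantization
    return idctn(coeffs, norm='ortho')                      # reconstructed f(x, y)

block = np.random.randint(0, 256, (8, 8))
reconstructed = decompress_block(compress_block(block))
print(np.abs(block - reconstructed).max())  # error introduced by quantization
```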

3.5. YCbCr Color Space

The YCbCr color space is widely used for digital (poster) images: luminance information is represented by a single component $Y$, and color information is stored in two chroma (color difference) components $Cb$ and $Cr$. The $Cb$ component is the difference between the blue component and a reference value, and the $Cr$ component is the difference between the red component and a reference value. Like the HSI (hue, saturation, intensity) color space, YCbCr is more in line with the way humans describe and interpret color. At the same time, to simplify the processing and reduce the amount of computation, this algorithm transforms the image into the YCbCr color space for processing. Assuming that each color component of the YCbCr and RGB images is stored in 8 bits, the relationship between the two spaces is as follows:

$$\begin{aligned} Y &= 0.299R + 0.587G + 0.114B,\\ Cb &= -0.1687R - 0.3313G + 0.5B + 128,\\ Cr &= 0.5R - 0.4187G - 0.0813B + 128.\end{aligned}$$

Although the relationship between the YCbCr space defined above and the RGB space is not strictly linear (because of the constant offsets), from the point of view of algorithm implementation we can work with a shifted YCbCr space whose components have a linear relationship with the RGB space. After conversion to the Y, Cb, Cr space, the contrast of the image can be enhanced by processing only the luminance component, but operations on the Cb and Cr components are also required to preserve the color information.
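A small sketch of the 8-bit conversion above and its inverse (the inverse coefficients are the standard JFIF/BT.601 values and are included for completeness rather than taken from this paper):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """rgb: float array of shape (..., 3) with values in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299  * r + 0.587  * g + 0.114  * b
    cb = -0.1687 * r - 0.3313 * g + 0.5    * b + 128.0
    cr =  0.5    * r - 0.4187 * g - 0.0813 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycbcr):
    """Inverse conversion back to RGB, clipped to the 8-bit range."""
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1] - 128.0, ycbcr[..., 2] - 128.0
    r = y + 1.402   * cr
    g = y - 0.34414 * cb - 0.71414 * cr
    b = y + 1.772   * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0)
```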

It is assumed that, in the YCbCr color space, the DCT coefficients of the luminance component $Y$ of an image of length $M$ and width $N$ are $D_{Y}(u, v)$, that $\alpha$ is the adjustment scaling factor for the quantized DC coefficients, and that $\beta$ is the adjustment scaling factor for the quantized AC coefficients. The adjusted DCT coefficients of the $Y$ component are then given by

$$\tilde{D}_{Y}(u, v)=\begin{cases}\alpha\, D_{Y}(0, 0), & (u, v)=(0, 0),\\ \beta\, D_{Y}(u, v), & \text{otherwise}.\end{cases}$$

In this way, the contrast of the enhanced image becomes a multiple of that of the original image, determined by the adjustment factors. However, if only the $Y$ component is adjusted, the color of the original image cannot be preserved. In the RGB color space, the three components R, G, and B of each pixel contribute equally to the color, and image enhancement algorithms based on the RGB color space often process these three channels in the same way. This algorithm adopts a similar strategy: while the $Y$ component is adjusted, the $Cb$ and $Cr$ components are adjusted with simpler factors than the $Y$ component, and DC and AC coefficients are still treated separately in the process. Assuming that the DCT coefficients of the $Cb$ and $Cr$ components of the image are $D_{Cb}(u, v)$ and $D_{Cr}(u, v)$, respectively, and that the adjustment scaling factors of these components are $\lambda_{DC}$ and $\lambda_{AC}$, the adjusted DCT coefficients $\tilde{D}_{Cb}(u, v)$ and $\tilde{D}_{Cr}(u, v)$ of the $Cb$ and $Cr$ components are

$$\tilde{D}_{C}(u, v)=\begin{cases}\lambda_{DC}\, D_{C}(0, 0), & (u, v)=(0, 0),\\ \lambda_{AC}\, D_{C}(u, v), & \text{otherwise},\end{cases}\qquad C\in\{Cb, Cr\}.$$

Operating on the DCT coefficients of the chroma components (Cb and Cr, also commonly denoted U and V) keeps the processing within the compression framework, reduces the amount of data to be handled, and improves transmission efficiency.
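A minimal sketch of this per-block coefficient adjustment, using the α, β, λ_DC, and λ_AC notation introduced above (the numerical values and the block-array layout are illustrative assumptions):

```python
import numpy as np

def adjust_block(coeffs, dc_gain, ac_gain):
    """Scale the DC coefficient and the AC coefficients of one 8x8 DCT block."""
    adjusted = coeffs * ac_gain               # scale all AC coefficients
    adjusted[0, 0] = coeffs[0, 0] * dc_gain   # overwrite the DC coefficient
    return adjusted

def enhance_dct_image(y_blocks, cb_blocks, cr_blocks,
                      alpha=1.1, beta=1.3, lambda_dc=1.05, lambda_ac=1.1):
    """Apply the luminance and chroma adjustments block by block.

    *_blocks: arrays of shape (num_blocks, 8, 8) holding DCT coefficients.
    """
    y_out  = np.stack([adjust_block(b, alpha, beta) for b in y_blocks])
    cb_out = np.stack([adjust_block(b, lambda_dc, lambda_ac) for b in cb_blocks])
    cr_out = np.stack([adjust_block(b, lambda_dc, lambda_ac) for b in cr_blocks])
    return y_out, cb_out, cr_out
```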

4. Experiment

4.1. Experimental Flow

The Retinex theory defines the image as the product of the incident light component and the reflected light component of the scene, namely,

$$I(x, y) = L(x, y)\cdot R(x, y).$$

Then, various methods are used to estimate the incident light component of the scene and obtain the illumination estimation $\hat{L}(x, y)$; the illumination is removed by the following formula, and the reflection estimation $\hat{R}(x, y)$ is obtained as the final output result:

$$\hat{R}(x, y) = \frac{I(x, y)}{\hat{L}(x, y) + \varepsilon}.$$

To improve the overall contrast of the poster image, a dynamic range adjustment of the incident light component must be carried out to obtain $L'(x, y)$:

$$L'(x, y) = f\big(\hat{L}(x, y)\big),$$

where $f(\cdot)$ denotes the dynamic range mapping function applied to the illumination.

To increase the clarity of the poster image, the reflected light component must be enhanced in detail, obtaining $R'(x, y)$:

$$R'(x, y) = g\big(\hat{R}(x, y)\big),$$

where $g(\cdot)$ denotes the detail enhancement operation applied to the reflectance.

Finally, the output image $I'(x, y)$ is obtained by multiplying $L'(x, y)$ and $R'(x, y)$, and the poster image is enhanced:

$$I'(x, y) = L'(x, y)\cdot R'(x, y).$$
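Putting the four steps together, a minimal single-channel sketch of this flow in the spatial domain (the Gaussian illumination estimate, the gamma-style mapping used for f, and the simple gain used for g are illustrative stand-ins, not the exact operators of this paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(image, sigma=60, gamma=0.7, detail_gain=1.2, eps=1e-6):
    """image: single-channel float array with values in [0, 1]."""
    # Estimate the incident light L_hat and the reflectance R_hat.
    l_hat = gaussian_filter(image, sigma)
    r_hat = image / (l_hat + eps)
    # f: compress the dynamic range of the illumination (gamma mapping here).
    l_adj = np.power(l_hat, gamma)
    # g: boost the reflectance around 1 to sharpen details.
    r_adj = 1.0 + detail_gain * (r_hat - 1.0)
    # Recombine and clip to the displayable range.
    return np.clip(l_adj * r_adj, 0.0, 1.0)
```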

4.2. Experimental Results
4.2.1. Comparison with Traditional Image Enhancement Algorithms in Compressed Domain

The proposed algorithm is compared with a traditional compressed-domain image enhancement algorithm. Jinshan Tang proposed a compressed-domain enhancement algorithm whose core is frequency band adjustment in the DCT domain; the algorithm transforms the image into the compressed domain and adjusts each frequency band with the same coefficients. The processing results are compared in Figures 3–5.

The quadratic function τ (x) has no parameters and is easy to realize. Experiments on different images show that τ (x) has a good enhancement effect on dark areas with low contrast. The parametric function η (x) has an adjustable parameter γ near 1, which has a good dynamic range compression effect for both dark and overbright areas.

The mean value, standard deviation, and entropy of each processing result in Figures 3–5 are given below; the R channel of each image is selected for the comparison in Tables 1–3.

All the algorithms involved in the comparison are compressed-domain image enhancement algorithms, so it is appropriate to choose the JPEG image quality metric proposed by Wang as the evaluation standard. As shown in Table 4, the higher the value, the better the image quality.

As can be seen from Figure 6, compared with the traditional compressed domain image enhancement algorithm, the image quality enhanced by this algorithm has been significantly improved. The results show that the image quality of this algorithm is better than that of the traditional algorithm, which is more suitable for the visual characteristics of human eyes and accords with the subjective evaluation of human eyes.

4.2.2. Comparison with the Traditional Retinex Algorithm

The processing effect of this algorithm is compared with that of MSR, as shown in Figures 7–9.

In the above experimental results, (a) of each figure is the original image, (b) is the processing result of MSR, (c) is the result of one mapping function in the dynamic range adjustment of the illumination component, and (d) is the result of the mapping function adopted in this algorithm. As can be seen from Figures 7–9, the color of the MSR results is darker, whereas this algorithm handles brightness and color together in the Y, Cb, Cr color space, so its processing effect is better than that of MSR, the color is more natural, and the details are richer.

An objective evaluation of image quality, namely the average value, standard deviation, and entropy of the processing results, is shown in Tables 5–7. The mean value reflects the average brightness of the image, the standard deviation reflects the contrast of the image, and the entropy reflects the amount of information contained in the image.
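These three metrics can be computed directly from the pixel values; a short sketch follows (the 256-bin histogram used for the entropy is a common but here assumed choice):

```python
import numpy as np

def image_metrics(gray):
    """gray: 2-D uint8 array. Returns (mean, standard deviation, entropy)."""
    mean = gray.mean()                       # average brightness
    std = gray.std()                         # contrast
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))        # information content in bits
    return mean, std, entropy
```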

It can be seen from Tables 5–7 that the brightness, contrast, and information content of the images processed by this algorithm are relatively good. The statistics-based image quality evaluation method proposed by Jobson is used below to compare the processing results further. The statistical results show that when the average gray level is between 90 and 190 and the standard deviation is between 30 and 85, the image quality is better, as shown in Figure 10.

The specific method is to divide the image into nonoverlapping subblocks of the same size (generally 50 × 50 or 60 × 60), calculate the standard deviation of each subblock, average the standard deviations of the subblocks, and multiply this average by the mean gray value of the image to obtain the evaluation result. The larger the resulting value, the better the image quality. The statistical results for the images in this chapter are shown in Table 8 and Figure 11.
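A sketch of this block-based quality score (the 50 × 50 block size is one of the sizes mentioned above; edge blocks smaller than the full block size are simply skipped, which is an implementation assumption):

```python
import numpy as np

def block_quality_score(gray, block=50):
    """gray: 2-D array of gray values. Higher scores indicate better quality."""
    stds = []
    h, w = gray.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            stds.append(gray[i:i + block, j:j + block].std())
    # Average block standard deviation times the mean gray value of the image.
    return float(np.mean(stds) * gray.mean())
```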

As can be seen from Table 8, the quality evaluation results of the images processed by this algorithm are generally higher than those of the traditional MSR algorithm. The original image value is relatively small, and the brightness and contrast of the image are not good. The quality evaluation data processed by this algorithm is improved compared with that processed by the traditional MSR algorithm.

4.2.3. Comparison with HE and MSR Algorithms

The poster image enhanced by this algorithm is compared with HE and MSR algorithms, and the results are as follows:

Compared with the existing HE and MSR methods, the proposed algorithm has clearer processing results. The calculation of the incident component is more accurate than the traditional MSR algorithm, and the constancy of brightness and color is considered. The combination of the two methods further shows that the adaptive filter selection strategy and weighted fusion computing framework adopted in this algorithm are superior to the cumulative distribution equalization of the existing HE algorithm and the calibration filter of the MSR algorithm.

At the same time, it can be seen from Figures 12–14 that the results of this algorithm and of the existing HE and MSR algorithms all stretch the dynamic range of the gray scale, but the gray-level distribution of the results processed by this algorithm is concentrated between 120 and 200, which is more in line with the average gray-scale range preferred by human vision. The gray mean and variance of the images are used to verify the gray-level distribution, as shown in Tables 9 and 10.

As shown in Tables 9 and 10, the average gray level of the results processed by the algorithm in this chapter is generally higher than that of the original image and of the HE and MSR results, and most values are concentrated between gray levels 140 and 180, which is more in line with the observation range of human eyes. At the same time, the standard deviation of the gray level of the results is reduced, which shows that the brightness difference between pixels is smaller and the gray levels are more concentrated near the median value, which is more comfortable for human vision. The experimental results show that the contrast and texture of low-illumination images with different degrees of degradation can be effectively enhanced, and better visual effects can be achieved in brightness, color, and detail. The output image is obtained by multiplying the incident component and the reflection component, which enhances the contrast while keeping the color information of the original image and makes the enhanced image more natural in color.

Quality evaluation of the processed image is, however, still a relatively subjective matter in enhancement processing; depending on the application requirements, the evaluation criteria may vary greatly. Most existing image evaluation criteria are calculated from the gray levels of pixels, which to some extent ignores the fact that pixels with the same gray level may have different meanings in the image. In addition, color information often reflects the quality of an image better than brightness information. Therefore, how to put forward a quality evaluation standard that has more visual significance and takes color information into account will be an important part of follow-up research.

5. Conclusion

Existing image enhancement algorithms in the spatial domain are not ideal, and existing compressed-domain algorithms enhance image contrast but cannot preserve the details and color information of the image. Aiming at this problem, an image enhancement algorithm in the DCT compressed domain based on Retinex theory is proposed. Different from the traditional Retinex algorithm, which discards the incident light component and only enhances the reflection component, this algorithm processes the incident and reflection components at the same time, fully considers the image details, and defines a new spectral content ratio. By manipulating the spectral content ratio to process the reflection component, the details of the image can be effectively enhanced. The output image is obtained by multiplying the incident component and the reflection component, which enhances the contrast while keeping the color information of the original image and makes the enhanced image more natural in color. In this algorithm, the threshold method is used to suppress the blocking effect, and the information of adjacent subblocks is used when processing subblocks in the DCT domain, so that halo artifacts are no longer produced around boundaries with obvious light and shade jumps. Experimental results show that the algorithm can effectively enhance the contrast and texture of low-illumination images with different degrees of degradation and achieve good visual effects in brightness, color, and detail. With compressed images, especially JPEG images, becoming the common data transmission standard on the network, the proposed compressed-domain algorithm makes full use of the incident light component information of the image: based on the Retinex theory, the DCT coefficients are divided into an incident component (DC coefficient) and a reflection component (AC coefficients), and the compressed-domain image is enhanced by adjusting the dynamic range of the DC coefficient, carefully adjusting the AC coefficients, and using the threshold method for blocking suppression. To enhance the compressed image better, the information contained in the DC coefficient must be fully exploited.

Data Availability

The experimental data used to support the findings of this study are available from the author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding this work.