Abstract

One of the fundamental problems in massive multiple input multiple output (MMIMO) systems based on the frequency division duplexing (FDD) mode is the acquisition of the channel state information (CSI) at the base station (BS). Indeed, in these systems, the reciprocity of the uplink (UL) and downlink (DL) channels does not hold. Thus, the users are supposed, after having estimated their subchannels, to feed them back to the BS. Since the size of the information to be fed back is proportional to the number of BS antennas, which is very large, the feedback overhead per user becomes excessive. It is therefore essential to find a way to reduce this quantity, which affects the overall performance of the system. In this article, we propose two new MMIMO-OFDM-FDD-DL schemes based on the Givens rotation quantization used in the 802.11ac standard to reduce the amount of information to be fed back by each user. The first is a fusion of the Givens rotation (GR) and the arithmetic coding (AC) technique, and the second is based on the combination of the GR method and the universal compression technique Lempel-Ziv-Welch (LZW). The proposed schemes, named GR & AC and GR & LZW (with an improvement of the latter based on a modified version of LZW (LZWM), named GR & LZWM), are compared, and they can achieve compression ratios of up to 95% (or even 98%) and 83%, respectively, considerably exceeding recent advanced techniques.

1. Introduction

MMIMO systems can significantly increase the capacity and energy efficiency of communication systems by using a very large number of antennas (more than 100 at the BS). It has been shown in [1] that increasing the number of antennas at the BS is always beneficial even with a very noisy channel estimate, because in this case the BS can recover information even at a low signal-to-noise ratio (SNR). This strongly justifies the use of a very large number of transmit antennas compared to current cellular systems [2]. This is the reason this technology is regarded as one of the most promising for 5G and 6G communications.

Note that in MMIMO technology, obtaining the channel state information (CSI) at the transmitter (BS) and the receiver (mobile station) is essential for beamforming, signal detection, and the optimization of resource allocation. Moreover, the manner of obtaining this CSI generally differs with the duplexing mode used. There are two nowadays: the time division duplexing (TDD) mode and the frequency division duplexing (FDD) mode.

In the TDD mode, the CSI can be easily obtained at the BS. Indeed, in this mode, each user sends its pilot symbol to the BS and the BS estimates all subchannels. Since in the TDD mode there is reciprocity between the downlink and uplink channels, we only need one to deduce the other. Therefore, after estimating the channels in the uplink, the BS implicitly knows those in the downlink. Unlike the TDD mode, in the FDD mode, the downlink and uplink channels cannot be considered reciprocal: they are different. In this case, it is up to the BS to broadcast the pilot symbols, and the estimation of the downlink subchannels is done at each user to obtain the CSI. After that, each CSI is sent back to the BS. Since the characteristic of MMIMO systems is the presence of many antennas at the BS, we witness not only an excess of pilot symbols to estimate the transmission channel, but the amount of feedback information also becomes substantial. Therefore, a fundamental question arises: in an FDD MMIMO system, how can the feedback overhead be significantly reduced?

The answer to this question is so unclear that early works on MMIMO systems [2, 3] chose the TDD mode, which is more reasonable and easier to use in this context. Recall that in the TDD mode, even if the downlink and uplink channels are reciprocal, the radio front-ends are not, and there are problems of calibration, synchronization, and pilot contamination. In addition, most current wireless communication networks operate in the FDD mode. Thus, answering this question becomes a capital necessity. In other words, it would be interesting to see how to use the FDD mode optimally in MMIMO systems. This is the issue this work will focus on.

All works that have attempted to address this problem can be categorized as follows.
(i) The codebook principle is widely used. This technique consists of building a codebook (using the Grassmann manifold theory, for example) available at both the transmitter and the receiver. This dictionary contains a number of predefined precoding matrices with their associated indices. Once the receiver obtains the CSI after channel estimation, it chooses from the dictionary the matrix that best approximates the channel conditions and feeds back only its index on the return path, which significantly reduces the feedback overhead. The disadvantage of this technique is that the selected matrix is an approximate version: there will be quantization errors, which reduce the beamforming gain. In addition, the accuracy of the selected matrix is strongly related to the size of the dictionary: the larger the codebook, the more accurate the selected matrix, but the longer the search time. The size of the codebook is also proportional to the number of transmit antennas. Therefore, the codebook principle is only attractive for a reduced number of transmit antennas (thus a small amount of feedback). In [4], the authors proposed a new technique to use the codebook approach in an MMIMO context in the FDD mode. The proposed technique, based on the construction of a dictionary by the Grassmann manifold method and the minimization of the chordal distance, allows a considerable reduction in the amount of feedback. However, the solution becomes ineffective when the number of antennas exceeds 64.
(ii) The antenna grouping beamforming (AGB) approach has also been proposed to solve this problem [5]. Here, the idea is to group the transmit antennas to obtain a very reduced representation of the excessive number of antennas by using the properties of the designed grouping patterns. This allows a considerable reduction in the number of pilot symbols and in the amount of feedback. However, this method is similar to that of the codebook but more complex to implement. As a result, there are large quantization errors as the number of antennas becomes large.
(iii) Also in this line of reducing feedback overhead, Shen et al. [6] proposed an adaptive angle-of-departure (AoD) subspace codebook for channel feedback in FDD massive MIMO systems. Their key idea is based on the observation that path AoDs vary more slowly than path gains. Within the angular coherence time, by using the constant AoD information, the proposed AoD-adaptive subspace codebook is able to quantize the channel vector more accurately. The authors show that the feedback overhead of the proposed codebook scales only linearly with the small number of dominant AoDs (paths) instead of the large number of antennas at the BS. Moreover, their proposal makes it possible to obtain a good quality of channel feedback while requiring a low overhead.
(iv) In contrast to the previous techniques, other works compress the CSI at the user side with specific compression techniques (arithmetic coding [7], Givens rotation [8], time domain quantization [8], and LZW compression [9]).

It should be noted that none of these works specify whether it is more optimal to feed back only the precoding matrix (partial CSI) or the channel matrix (full CSI). Indeed, in a classical MIMO system or in MMIMO, the BS does not need the full channel matrix (full CSI) to beamform properly. Only the precoding matrix (partial CSI) is sufficient. Recall that the precoding matrix comes from the singular value decomposition of the full channel matrix. Therefore, in almost all works, to diagonalize the transmission channel and eliminate cochannel interference, users feed back only the precoding matrix obtained after channel estimation and singular value decomposition. However, it is quite possible for users to feed back the estimated channel matrix in its entirety, and in this case, it is up to the BS to perform the singular value decomposition (SVD) of each subchannel to retrieve the precoding matrix relative to each subchannel.

In this paper, we will first make a comparative study between the full channel matrix and the precoding matrix resulting from it, in order to choose the right candidate to be fed back. After that, we will make a comparative study of the application of the compression techniques (GR, AC, and LZW) in an MMIMO context to determine which of these techniques gives the best performance in terms of compression ratio. Finally, we will propose two new approaches to reduce the downlink feedback overhead (GR & AC and GR & LZW), based on the combination of the GR used in IEEE 802.11 with the AC and LZW methods, respectively. The proposed techniques can reach 83% and 97% compression ratios, respectively. The rest of the paper is organized as follows: In Section 2, the system model is defined. Section 3 presents the comparison between the partial CSI and the full CSI. In Section 4, we briefly describe the Givens rotation quantization and the AC and LZW compression in the MMIMO context. Section 5 covers the proposed improvements of the GR as well as the numerical results. Finally, Section 6 concludes the paper.

2. The System Model

Consider a multiuser MIMO downlink channel operating in the FDD mode with antennas at the base station (BS) and mobile stations (MS); each mobile station has antennas for .

is assumed to be greater than or equal to : sum of all MS’s antennas.

In Figure 1, the vector represents the transmitted data of the BS to the MS, where .

A preprocessing is done on the BS side for each stream , using the precoding matrices , where . After DL transmitter preprocessing, the -component signal broadcast by the BS to the MSs can be expressed as where .

As shown in Figure 1, the frequency domain received signal of the MS for the considered OFDM subcarrier can then be expressed as where is a length- additive white Gaussian noise with zero mean and a covariance matrix given by , whereas is an ()-component channel transfer matrix connecting the DL transmit antennas of the BS with the MS’s receive antennas, which can be expressed as

A postprocessing on the postcoding matrix is performed at each MS side to eliminate the multiuser interference (MUI). After that, the detected data on the considered OFDM subcarrier can be given by .

The procedure described in this section is the same for all orthogonal frequency division multiplex (OFDM) subcarriers. To facilitate paper reading, in the following, we will consider a single OFDM subcarrier.

2.1. The SVD-Based Beamforming

Consider the SVD of the channel transfer matrix : Now, let us perform the SVD of our matrix assuming that its rank is : and that . where is diagonal and and are unitary matrices. In SVD-based beamforming, the transmit signal is precoded using the matrix , where stands for the transpose-conjugation.
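As an illustration, the SVD-based precoding step can be sketched in a few lines of NumPy. The dimensions, the i.i.d. Rayleigh channel, and all variable names below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_r = 4, 2           # hypothetical BS / MS antenna counts (assumption)

# i.i.d. complex Gaussian (Rayleigh) channel matrix of size n_r x n_t
H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)

# SVD: H = U diag(S) Vh, with U and Vh having orthonormal rows/columns
U, S, Vh = np.linalg.svd(H, full_matrices=False)
V = Vh.conj().T           # precoding matrix (transpose-conjugate of Vh)

# Precoding diagonalizes the channel: H V = U diag(S)
H_eff = H @ V
print(np.allclose(H_eff, U @ np.diag(S)))   # True
```

The check at the end confirms the key property used throughout this section: after precoding with V, the effective channel collapses to the left singular vectors scaled by the singular values.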

In this context, Equation (3) becomes

Let us now collect the received DL signal vectors into a vector . Then, according to Equation (7), it can be shown that the overall DL-received signal vector of the MSs can be expressed as where

For ease of reading, it is assumed in the following that all MSs have the same number of antennas:

Based on Equation (8), it is not possible to perform separately the postprocessing which allows the transmitted data of each UE to be detected. As shown in Figure 1, one postcoder matrix is used to separate the desired signal from the interfering signals at the different UEs.

Therefore, the postprocessing at each MS side can be done using the ZF solution:

Then, Equation (8) becomes

Since is exactly diagonal, there is no MUI, and the MMIMO channel is reduced to separate and independent SISO channels. However, the noise component is colored by the postfilter and can be potentially enhanced.

Finally, the transmitted data from all UEs can be detected by performing a simple equalization process:
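The end-to-end chain described above (SVD precoding, ZF postcoding, equalization) can be sketched as follows. The dimensions, the noiseless assumption, and the test symbols are illustrative assumptions for a single MS and a single subcarrier:

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_r = 4, 2           # hypothetical BS / MS antenna counts (assumption)

# i.i.d. Rayleigh channel and its SVD, as in Section 2.1
H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
U, S, Vh = np.linalg.svd(H, full_matrices=False)
V = Vh.conj().T           # SVD-based precoder

# illustrative QPSK symbols for the data streams of one MS
x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)

y = H @ (V @ x)           # noiseless received signal on one subcarrier

# ZF postcoding: W = (H V)^+ ; since H V = U diag(S), W = diag(1/S) U^H
W = np.linalg.pinv(H @ V)
x_hat = W @ y             # postcoding and equalization in one step
print(np.allclose(x_hat, x))   # True
```

In the noiseless case the data are recovered exactly; with noise, the 1/S factors in the ZF solution are what colors (and can enhance) the noise, as noted above.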

2.2. Discussion

The SVD beamforming and ZF MUI cancellation presented in the two previous subsections can only be implemented if the DL CSI is known at both the BS and each UE. In the considered FDD system, the BS sends pilot sequences which are known at the MS side. Based on the reception of these pilot sequences, each MS estimates its CSI. After that, each MS has to feed back the CSI to the BS. There are two possibilities:
(i) Each MS can feed back its channel matrix. However, this is not optimal for the SVD beamforming scheme, because in this case the BS is forced to perform an SVD decomposition to obtain its precoder matrix.
(ii) To avoid this drawback, only the precoding matrices can be sent directly to the BS instead of the channel matrix.

It is important to note that, in a downlink MMIMO-OFDM system, each MS has to feed back its precoding matrix for each subcarrier. This feedback overhead can be substantial, and its reduction is necessary.

2.3. Channel Models for Massive MIMO

Let us discuss the massive MIMO channel model. Indeed, the channel model plays an important role in system analysis as well as in performance evaluation. One factor that critically determines the channel behavior is the configuration of the antenna array. Each measurement is made for a given array geometry, and the typical antenna configurations can be placed into three different groups: one-dimensional (1D), 2D, and 3D arrays. Each of these configurations, as well as its size, affects the properties of the channel, yielding different massive MIMO performance. For example, the distance between antenna elements essentially dictates the mutual coupling and the correlation matrix, affecting the performance of massive MIMO. Two types of channel models are found in the literature, namely, correlation-based channel models and geometry-based stochastic channel models [10].
(i) Correlation-based models are mainly used for theoretical evaluations of system performance. This family can be sorted into three types of channel models:
(1) Rayleigh channel model: it assumes that there is no correlation between transmitting antennas or between receiving antennas. Here, the elements of the fast fading matrix are i.i.d. complex Gaussian random variables. The most important property of such a model is that, as the number of transmit antennas tends to infinity, the rows of the channel matrix become nearly orthogonal [2]. This property, known as channel hardening, mitigates the impact of fast fading and simplifies the complexity of scheduling schemes.
(2) Correlation channel model: this model introduces antenna correlation due to antenna spacing and the propagation environment.
(3) Mutual coupling channel model: as the number of antennas increases, the effect of mutual impedance also increases. Additionally, the load impedance and antenna impedance must also be characterized to reflect a more realistic channel.
(ii) Geometry-based stochastic channel models are cluster-based models that define the propagation channel as containing multiple clusters with different delays and power factors. In the case of a 3D channel model, the two strongest clusters are split into three subclusters with a fixed delay offset [10]. This model is used for practical evaluations of wireless communication systems.

In this work, we use a correlation-based channel model, specifically the Rayleigh model.

3. Full CSI vs. Partial CSI

3.1. Simulation Settings

The considered simulation parameters are listed in Table 1. We consider several values of  in order to cover a large part of the telecommunication systems using OFDM, such as long-term evolution (LTE) where  can be , IEEE 802.11 where  is up to 512, or IEEE 802.16 where .

3.2. Comparison

Figure 1 gives an overall view of the system with the feedback mechanism: in blue, the feedback of the precoding matrix (partial CSI), and in yellow, that of the estimated channel matrix (full CSI). To determine which of these candidates is more beneficial to feed back, we have compared the structures of the two matrices. Figures 2 and 3 show that the structures of the matrices are almost similar for an MMIMO configuration, and this remains valid for all other configurations. This similarity is further confirmed by applying the LZW compression algorithm. Thus, from Table 2, we can see that the compression ratios for the two matrices are nearly identical. Therefore, instead of feeding back the full CSI, which requires the BS to perform an SVD decomposition for each subchannel, it is better to send the partial CSI so that the BS can directly precode, which reduces the processing time.

In the rest of the document, we will consider that the users feed back only the precoding matrix obtained after the SVD decomposition; the feedback is thus partial. We will therefore focus on improving the Givens rotation method. Indeed, this method requires that the matrix be unitary.

Since the previous comparative study justifies feeding back only the (unitary) precoding matrix instead of the (nonunitary) full channel matrix, it is legitimate to look at how to improve the Givens rotation method, which is the method used in the IEEE WLAN standard.

4. Givens Rotation, AC, and LZW

4.1. Givens Rotation Quantization

The Givens rotation (GR) was proposed by Kim and Aldana [11] to reduce the feedback overhead by exploiting the unitary property of the precoding matrix . The GR feedback compression method is adopted in the IEEE WLAN standard, and its principle is to represent a unitary matrix in a special form with complex diagonal matrices as follows: where

The matrix is an diagonal matrix, is an identity matrix, and is an Givens rotation matrix.

According to Equations (14) and (15), the parameters to be determined to identify the precoder matrix are

Consider a unitary matrix of dimension . According to the GR method seen above (Equation (14)), can be decomposed as follows:

The precoder matrix can then be reconstructed through the six parameters: , and .

Therefore, in a general way, instead of all elements of the matrix , it is sufficient to consider the parameters and . Note that these angle parameters can vary from to for and from to for .

Now, and can be quantized according to Equations (19) and (20); and represent the quantized angles. where and the number of bits per angle . where and is the number of bits per angle . Finally, the receiver feeds back the quantized parameters and to the transmitter which can recover the quantized precoder matrix by using where

When we assume that , the total number of bits for each user can be calculated using the formula , where  represents the number of OFDM subcarriers. Given that the feedback overhead without compression is , the feedback overhead compression rate can be expressed as . Unfortunately, this compression rate decreases as the number of antenna ports grows. This disadvantage limits the performance of GR in MMIMO.
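To make this scaling behavior concrete, the following sketch compares a GR feedback budget with an uncompressed baseline. The angle-pair count n_r(2n_t - n_r - 1)/2, the 8-bit-per-angle quantization, and the 16-bit-per-real baseline are illustrative assumptions only, not the paper's exact formula:

```python
def n_angle_pairs(n_t, n_r):
    # assumed Givens parameter count for an n_t x n_r unitary matrix
    return n_r * (2 * n_t - n_r - 1) // 2

def gr_feedback_bits(n_t, n_r, n_s, b_psi=8, b_phi=8):
    # total quantized-angle bits over n_s OFDM subcarriers
    return n_s * n_angle_pairs(n_t, n_r) * (b_psi + b_phi)

def raw_feedback_bits(n_t, n_r, n_s, bits_per_real=16):
    # uncompressed baseline: 2 quantized reals per complex matrix entry (assumption)
    return n_s * n_t * n_r * 2 * bits_per_real

for n_t in (4, 16, 64):
    gr = gr_feedback_bits(n_t, 2, 512)
    raw = raw_feedback_bits(n_t, 2, 512)
    print(n_t, round(1 - gr / raw, 3))
```

Running this shows the compression rate shrinking as n_t grows (roughly 0.69, 0.55, 0.51 under these assumptions), which illustrates why plain GR loses its advantage in the MMIMO regime.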

However, the GR remains an interesting method because, as mentioned previously, the SVD decomposition is only performed at each MS side. Therefore, to take advantage of this particularity of the GR approach, we propose its improvement in Section 5.

4.2. Arithmetic Compression

Figure 4 gives an overall view of the transmission model with arithmetic coding from the insertion of the pilot signs to the estimation of the channel to the preprocessing of precoding and feedback.

4.2.1. Description of Encoding and Decoding Principles

Arithmetic coding is a statistical coding: the more frequently a character appears in the sequence, the fewer bits are needed to encode it [12]. The principle of the algorithm is as follows: first, we begin by subdividing the interval  into  subintervals:

Then we must make a mapping of the symbols of the alphabet that make up the sequence to compress to the intervals:

So any real in the interval may represent the symbol .

1: Input: : sequence to compress; : the symbol probabilities are known .
2: Output Val: a real value in the range which represents the sequence.
3: (Cum= cumulative probability and coincides with the various terminals: );
4: for  do
5:      being the number of the symbol alphabet (the number of intervals).
6:                          the lower bound
7:                         the upper bound
8: for  do
9:  
10:  
11:  
12: return   Val is the code word, but any value from [low, high) is valid.
1: Input: : bit stream (compressed version of the sequence); : the number of symbols to be decoded; : the symbol probabilities are known.
2: Output: the decoded sequence .
3: Begin-decoding     (Cum = cumulative probability and coincides with the various terminals: );
4: ;
5: for  do
6:  
7:                          the lower bound
8:                          the upper bound
9: for  do
10:  
11:  
12:  
13:  while () and () do     we find the index j such that this condition is fulfilled
14:    if  then
15:     ;
16:    else
17:     ;
18:
19:
20:
21:
22:
23:return

To better understand arithmetic coding, we will use a simple example. Consider the sequence to compress: “MIMOMISO.” The length of this sequence is 8. Table 3 defines the probabilities of the characters in our sequence as well as their relative intervals.

Once this is done, we can now apply the arithmetic coding algorithm defined previously and we get Table 4.

Thus, 0.2983979258125 is the arithmetic representation of the “MIMOMISO” message. In practice, any number between 0.2983979258125 and 0.2984185253125 represents the correct message.

Figure 5 shows the coding process.

Decoding is just as simple: knowing that the encoded message is in the interval , the first letter is indeed an . We then subtract the lower bound  (for ) and divide by the probability  (for ); we thus obtain 0.1290611355, which lies in the interval , so the next letter is an . We continue: subtract the lower bound  (for ) and divide by the probability  (for ); this gives 0.516244542, which is in the interval , and therefore the next letter is an , etc.
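The encode/decode pair described above can be sketched with exact rational arithmetic. The symbol probabilities below are the frequencies in “MIMOMISO”; the interval ordering is an assumption (Table 3 is not reproduced here), so the numeric code word may differ from the value quoted in the text, but encoding and decoding are mutually consistent:

```python
from fractions import Fraction

def ac_encode(seq, probs):
    """Classical arithmetic encoding: returns the final interval [low, high)."""
    lows, cum = {}, Fraction(0)
    for s in probs:                      # cumulative lower bounds per symbol
        lows[s] = cum
        cum += probs[s]
    low, width = Fraction(0), Fraction(1)
    for s in seq:                        # shrink the interval symbol by symbol
        low += width * lows[s]
        width *= probs[s]
    return low, low + width

def ac_decode(value, n, probs):
    out = []
    for _ in range(n):
        cum = Fraction(0)
        for s in probs:                  # find the subinterval containing value
            if cum <= value < cum + probs[s]:
                out.append(s)
                value = (value - cum) / probs[s]   # rescale, as in the text
                break
            cum += probs[s]
    return "".join(out)

probs = {"M": Fraction(3, 8), "I": Fraction(1, 4),
         "O": Fraction(1, 4), "S": Fraction(1, 8)}
low, high = ac_encode("MIMOMISO", probs)
print(ac_decode(low, 8, probs))   # MIMOMISO
```

Any value in [low, high) decodes to the same message, which is exactly the property exploited by the incremental (scaled) coder in the next subsection.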

4.2.2. Solving the Arithmetic Coding Overflow Problem

The two preceding algorithms give the classical arithmetic encoding and decoding. However, one of the main problems with arithmetic coding is the overflow phenomenon: overflow and underflow. These are issues related to the finite precision of the hardware. To overcome this, the most widely used technique is scaling. Its principle can be described as follows:
(i) Interval resizing: we increase the size of the interval while preserving the information.
(ii) Incremental encoding: the code is transmitted portion by portion as the encoding progresses.
(iii) We start from the observation of three possibilities concerning the interval during encoding:
(1) the interval is included in the lower half: ;
(2) the interval is included in the upper half: ;
(3) the interval contains the center point 0.5.
(iv) When the interval is in one of the halves, it will stay in that half throughout the encoding of the sequence.
(v) The most significant bit (MSB) is 0 if the interval is in .
(vi) The MSB is 1 if the interval is in .
(vii) So when the interval is in one of the halves, the most significant bit becomes known.
(viii) Once the encoder and decoder know which half we are in, we can ignore the other half.
(ix) To implement the algorithm on a finite precision system, which is our case, we can rescale the half in question to regain the full interval. This results in the loss of the MSB information, but it does not matter because we have already sent the MSB. We continue the coding, and each time the interval falls in one half, we send a bit. This is incremental coding:
(1) If :
(i) transmit bit 0, followed by the p pending bits;
(ii) ;
(iii) .
(2) If :
(i) transmit bit 1, followed by the p pending bits;
(ii)  (low-0.5);
(iii)  (high-0.5).
(3) If  and :
(i) add 1 to the number p of pending bits;
(ii)  (low-0.25);
(iii)  (high-0.25).

Note that the interval is rescaled (zoomed) for each output bit. This multiplication by 2 amounts to moving the binary point one position to the right. Sometimes it is easier to work with integers than with fractional numbers. To apply arithmetic coding to integers, it suffices to do the following:
(i) use symbol frequencies instead of probabilities;
(ii) replace  by ;
(iii) replace 0.5 by ;
(iv) replace 0.25 by ;
(v) replace 0.75 by .
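The three rescaling cases can be isolated in a small helper. This is a minimal sketch of the scaling step only (the half-open boundary conventions and the pending-bit handling follow the standard scheme, since the exact conditions are elided in the text above):

```python
def rescale(low, high, pending=0):
    """Emit bits while zooming the interval [low, high) within [0, 1).
    Returns (bits, low, high, pending)."""
    bits = []
    while True:
        if high <= 0.5:                      # case (1): lower half, MSB = 0
            bits += [0] + [1] * pending
            pending = 0
            low, high = 2 * low, 2 * high
        elif low >= 0.5:                     # case (2): upper half, MSB = 1
            bits += [1] + [0] * pending
            pending = 0
            low, high = 2 * (low - 0.5), 2 * (high - 0.5)
        elif low >= 0.25 and high <= 0.75:   # case (3): straddles 0.5, defer bit
            pending += 1
            low, high = 2 * (low - 0.25), 2 * (high - 0.25)
        else:
            break                            # interval wide enough, stop zooming
    return bits, low, high, pending

bits, low, high, pending = rescale(0.60, 0.70)
print(bits)   # [1, 0]
```

For [0.60, 0.70), the interval first sits in the upper half (emit 1), then in the lower half (emit 0), after which it spans enough of [0, 1) that no further bit is determined.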

Here is the formula to calculate the following interval:

Thus, Algorithms 1 and 2 become Algorithms 3 and 4, respectively: First we ask: , , and .

1: Input: : sequence to compress; : the symbol probabilities are known.
2: Output Val: a binary number that represents sequence .
3: (Cum = cumulative probability and coincides with the various terminals: );
4: for  do
5:      is the number of symbols in the alphabet (the number of intervals)
6: low = ; high=       the lower and upper bound; the number of bits
7: First_qtr=; half= ; Third_qtr= ;
8: bits_To_follow:= ; Number of bits to look at in order to know which part we are on
9: for  do
10:  
11:  
12:  
13:  while 1 do
14:    if                    the lower part
15:    transmission of bit “0”; ;
16:    low =2 low; high =2 high;
17:    else if                  the high part
18:    transmission of bit “1”; ;
19:    low = low-Half; high = high-Half;
20:    low =2 low; high =2 high;
21:    else if      in the middle
22:    bits_To_follow =bits_To_follow + 1;
23:    low = low-First_qtr; high = high-First_qtr;
24:    low =2 low; high =2 high;
25:    else
26:    break;
27:    End if;
  After that we encode the symbol EOF_symbol as before to allow the decoder to stop.
28:return   bin2dec converts the binary vector to a decimal number
1: Input: : Bit stream (compressed version of the sequence); : the number of symbols to be decoded; : the symbol probabilities are known.
2: Output: the decoded sequence .
3: Cum = cumulative probability and coincides with the various terminals: ); ;
4: for  do
5:     is the number of symbols in the alphabet (the number of intervals)
6: low = ; high=
7: First_qtr=; half=; Third_qtr= ;
8: while 1 do
9:   
10:  
11:  
12:  while  do
13:   
14:  
15:  
16:  while 1 do
17:   if       the lower part         we do nothing
18:   else if                    the high part
19:   val = val-Half; low = low-Half; high = high-Half;
20:   else if        In the middle
21:   val = val-First_qtr; low = low-First_qtr; high = high-First_qtr;
22:   else
23:   break;
24:   End if;
25:   low =2 low; high =2 high; val =2 val + bit;     Move to next input bit
26:   if                    the end
27:   break;
28:   End if;
29:   A=symbol;
30:   return

4.3. LZW Compression

Among the most powerful data compression techniques, we can mention the compression algorithms based on the use of a dictionary. These are lossless data compression techniques that use a dictionary with a predetermined initial state, whose contents may change during the encoding or decoding process. One of the best-known dynamic dictionary-based compression algorithms is the LZW algorithm, which was created as a modification of the LZ78 algorithm [13]. It is a general-purpose compression algorithm in the sense that it can work with almost any type of data. LZW is a dictionary-based compression method [9] that associates a variable number of symbols (subsequences) with a code of fixed length. The algorithm works by traversing the input sequence to search for a subsequence that is not in the dictionary. When such a subsequence is found, the index of the subsequence without its last element (that is, the longest matching subsequence in the dictionary) is extracted from the dictionary and sent to the output; then the new subsequence (including the last element) is added to the dictionary with the next available code. The last element read is then used as the starting point for the next subsequence search [9]. Increasingly long subsequences are stored in the dictionary and made available for later encoding as single output values.

Figure 6 gives an overall view of the transmission model with LZW compressing from the insertion of the pilot signs to the estimation of the channel with the preprocessing of precoding and feedback.

The algorithm works best on data with repeated patterns. Like any compression algorithm, the LZW is defined by a compression step, obtained by replacing subsequences with their associated dictionary codes, and a decompression step, performed by replacing code words with their associated dictionary subsequences.

Thus, the coding and decoding algorithms are, respectively, presented in Algorithms 5 and 6, where BDC and DBC stand for, respectively, binary to decimal conversion and decimal to binary conversion.

1: Initialize the dictionary with all integer values from to
2: Consider for each input binary sequence of length his BDC as a code
3:
4: =first input binary sequence of length
5: while not end of the input sequences do
6: =next input binary sequence of length
7: if the code of concatenated with is in the dictionary then
8:  Replace by concatenated with
9: else
10:   
11:   Output code of
12:   Add as the code of concatenated with in the dictionary.
13:   Replace by
14: Output the code of
1: Initialize the dictionary with all integer values from to
2:
3: OLD=first input code of the codes stream
4: Output the DBC of OLD
5: while not end of input codes stream do
6:  CURRENT = next input code of the codes stream
7:  Output the sequence of or a multiple of bits corresponding to CURRENT in the dictionary
8:  
9:  Add in the dictionary as the code of the sequence of bits corresponding to OLD concatenated with the first bits of that corresponding to CURRENT
10:  Replace OLD by CURRENT

Regarding the LZW encoding and decoding, we can note the following:
(i) The LZW algorithm is very simple. In addition, the dictionary is built during the encoding or decoding itself and does not need to be transmitted.
(ii) The efficiency of the algorithm increases as the number of repetitive sequences in the input data increases.
(iii) At each MS side, after the GR compression, all output sequences of length  bits are concatenated to form the AC or LZW input binary sequence.
(iv) At the BS side, the AC or LZW decoding algorithm is applied before the GR one to retrieve the precoding matrix , for .
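The LZW encoding and decoding steps described above can be sketched in Python. Operating on bytes, the 256-entry initial dictionary, and the variable names are illustrative choices, not details from the paper:

```python
def lzw_encode(data):
    """LZW over a byte string; initial dictionary = all 256 single bytes."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                       # extend the current match
        else:
            out.append(dictionary[w])    # emit code of longest match
            dictionary[wc] = next_code   # register the new subsequence
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decode(codes):
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code = 256
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:
            entry = w + w[:1]            # code not yet in dictionary: special case
        out.append(entry)
        dictionary[next_code] = w + entry[:1]
        next_code += 1
        w = entry
    return b"".join(out)

data = b"MIMOMISOMIMOMISO"
codes = lzw_encode(data)
print(lzw_decode(codes) == data)   # True
```

On this repetitive 16-byte input, the encoder emits only 10 codes, illustrating point (ii) above: the gain grows with the number of repeated subsequences.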

5. Improved GR

To improve the GR feedback compression, we have proposed two methods in which the AC or LZW algorithm is applied to the output quantized angles and which are sequences of length bits.

5.1. Validation of the Number of Quantizer Bits per Angle

Figures 7–10 show the bit error rate (BER) as a function of the signal-to-noise ratio (SNR) in the multiuser UL FDD MMIMO-OFDM context. The modulations are, respectively, -QAM and -QAM. Two schemes are considered, one with quantized feedback overhead compression for several values of  and one without compression. Several values of the parameters , , and  are also considered.

As we can see, the quantization error is negligible when  reaches . The performance with and without quantization is the same in all considered configurations. Therefore,  is enough to eliminate the effect of quantization noise on the system performance. It is this number of quantizer bits that has been adopted in the IEEE 802.11ac standard [14].

5.2. Compression Rate

The compression rates of conventional GR and the proposed GR with AC- (GR & AC) and LZW- (GR & LZW) based feedback overhead compression for different MMIMO configurations and number of OFDM subcarriers () are listed in Table 5. Note that the number of quantizer bits is fixed to as previously validated.

As demonstrated in Section 4, the compression rate of conventional GR is determined only by the number of antennas at each UE. That is why, in Table 5, for any MMIMO configuration and any number of OFDM subcarriers, the conventional GR compression rate changes only with the number of antennas at each UE.

Unlike the conventional GR compression rate, which depends only on the number of antennas per UE, the GR & LZW rate benefits from the evolution of the other parameters, especially the number of OFDM subcarriers: the larger this number is, the more repetitive binary sequences appear in the input. Accordingly, the compression rate increases considerably, since the structure of the LZW algorithm exploits long repetitive sequences.

Indeed, observing the behavior of the GR & LZW compression rate in Table 5, we note that, for a fixed MMIMO configuration, the compression rate of the proposed GR & LZW increases with the number of OFDM subcarriers.

To push the compression rate further, we have tried to improve the LZW algorithm that we have just applied to the GR method. To fully explain the proposed method, GR & LZWM, let us rely on a simple example of LZW compression.

Let us compress the text "MIMOMOSI" with LZW (see Tables 6 and 7). The incoming stream is the sequence of ASCII codes of the letters: the ASCII decimal codes of M, I, O, and S are 77, 73, 79, and 83. This example again illustrates that (i) the LZW algorithm is very simple, and the dictionary is built during encoding or decoding without needing to be saved, and (ii) the efficiency of the algorithm increases with the number of repetitive sequences in the input data.

5.3. LZW Modified

It is important to note that all outputs of the LZW encoder must have the same number of bits, which depends on the largest value in the dictionary. In the previous example, each output must be encoded with 9 bits: this is the length of the code 258, which is imposed on all the other output codes. However, apart from this code, the others could be coded with 8 bits because they do not exceed 255.
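This fixed-width cost can be checked numerically. The code sequence below is the one a standard LZW encoder with a 256-entry initial dictionary produces for "MIMOMOSI"; only one code (258) forces the 9-bit width on all the others:

```python
# LZW output codes for "MIMOMOSI" (standard encoder, dictionary starts at 256):
codes = [77, 73, 77, 79, 258, 83, 73]

# Plain LZW encodes every code with the width of the largest one:
width = max(codes).bit_length()          # 258 needs 9 bits
fixed_cost = width * len(codes)          # 9 * 7 = 63 bits in total

# Yet all codes except 258 would individually fit in 8 bits:
fit_in_8 = sum(1 for c in codes if c <= 255)
print(width, fixed_cost, fit_in_8)       # → 9 63 6
```

This wasted slack of one bit per small code is exactly what the LZWM modification below tries to recover.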

So, it is interesting to see how to make the outputs of the LZW encoder individually coded with the minimum number of bits required. To do this, we have adopted a variant of the LZW algorithm named LZWM. (i) First, we define a reserved word in the dictionary: in the initial dictionary (0 to 255), the number 255 is reserved. This number serves as a flag warning the decoder of a change in the number of coded bits, so our initial dictionary becomes 0 to 254. (ii) The first word of the output is 255, signaling to the decoder the initial number of coded bits. (iii) Each time we encode a word, before placing it in the output sequence, we check whether it exceeds the largest value representable at the current width (excluding the flag): (1) if so, we place the flag 2^9 − 1 in the output sequence to indicate that the width is now 10 bits; (2) and so on, until the end of the sequence is encoded. (iv) Otherwise, we place the code in the output sequence at the current width.
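The flag mechanism described above can be sketched as follows. This is our interpretation of the idea, not the authors' code: the all-ones word at the current width is reserved as the flag announcing a width increase, the width grows lazily when a code no longer fits, and the initial width is assumed known to both sides rather than signaled by a leading flag word.

```python
def pack_variable(codes: list[int], b0: int = 8) -> str:
    """Pack LZW codes into a bit string with a growing code width.

    The all-ones word at the current width is reserved as a flag telling
    the decoder to switch to the next width (sketch of the LZWM idea).
    """
    width = b0
    flag = (1 << width) - 1
    bits = []
    for c in codes:
        while c >= flag:                 # code collides with or exceeds the flag
            bits.append(format(flag, f"0{width}b"))  # emit flag, then widen
            width += 1
            flag = (1 << width) - 1
        bits.append(format(c, f"0{width}b"))
    return "".join(bits)

def unpack_variable(stream: str, b0: int = 8) -> list[int]:
    """Mirror the encoder: a flag word widens the read width by one bit."""
    width, pos, out = b0, 0, []
    while pos < len(stream):
        word = int(stream[pos:pos + width], 2)
        pos += width
        if word == (1 << width) - 1:     # flag: switch to the next width
            width += 1
        else:
            out.append(word)
    return out
```

On the short "MIMOMOSI" example this packing costs 67 bits (four 8-bit codes, one 8-bit flag, three 9-bit codes) against 63 bits for fixed 9-bit coding, which matches the article's observation that the modification can even lose on very short sequences and only pays off at MMIMO-scale lengths.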

5.3.1. Comparison between LZW and LZWM

Table 8 gives the LZWM encoding of our previous sequence. In this example, the gain from the modification is not visible (we even lose a few bits; Table 9). Indeed, the sequence is very short, so there is no width jump, i.e., no transition to 10 bits, 11 bits, and so on.

However, this article is set in a MMIMO context, which means that the sequences we will be called upon to compress will have very large dimensions. Thus, this gain can become significant. Moreover, the study allowed us to bound the minimum and maximum gains of this method compared to the classical one.

To generalize, let us denote the starting number of bits and the final number of bits. Starting from 8 bits, we have the following: (i) at a minimum, the dictionary must produce 2^8 new codes, plus the 8-bit flag, before going to 10 bits; (ii) at a minimum, 2^9 further codes are emitted before going to 11 bits; (iii) at a minimum, 2^10 further codes are emitted before going to 12 bits.

In general, counting the codes emitted at each width and the flags inserted at each transition yields a lower bound on the gain over fixed-width LZW and, likewise, an upper bound.

Therefore, keeping the same simulation parameters as previously (Table 1), and considering the case of a single user and a multipath MMIMO channel, the comparison of the two algorithms gives Table 10.

Table 10 confirms that the modified version of the LZW algorithm gains about 2% over the classic version. This percentage may seem small, but recall that we are in a MMIMO context, so 2% of the data handled is necessarily significant. In addition, we will see below that when these two algorithms are applied to the GR method, the difference in favor of LZWM becomes larger.

Thus, Table 11 gives the comparison of the application of the new method proposed for some MMIMO-OFDM configurations.

In summary, we can say that excessive feedback overhead is a major handicap in FDD MMIMO systems. Therefore, this excess downlink feedback must be compressed by sophisticated algorithms. Significant efforts are being made in this direction, and several solutions have been proposed. We have proposed two lossless compression solutions based on the GR & LZW compression algorithm and a derivative of the latter, GR & LZWM. These algorithms can be considered evolutions of the Givens rotation method adopted in the WLAN standard IEEE 802.11ac. The simulation results show that the proposed methods can achieve more than 95% and 98% compression ratios, respectively.

6. Conclusion

Multiuser MMIMO-OFDM systems operating in the FDD mode are a promising solution to deliver unprecedented spectral efficiency in future wireless communication systems. Several powerful preprocessing and postprocessing techniques have been proposed in the literature to transmit data while eliminating potential multiuser interference.

However, since UL and DL CSI are mandatory on each side of the transceiver, MMIMO-OFDM systems in FDD mode require a large amount of CSI feedback. Therefore, limited feedback is now an important research topic.

In this work, we have proposed three improvements to the conventional Givens rotation feedback compression which is adopted in the IEEE 802.11 standard.

The first proposal, GR & AC, is based on a lossless compression technique, arithmetic coding. By opting for arithmetic coding with renormalization to avoid the overflow problem and to allow transmission as the compression unfolds, the proposed GR & AC algorithm achieves a compression rate of up to 83%.

The second method, GR & LZW, is based on the LZW compression technique, which is also lossless. With the LZW-based solution, the simulation results show that up to a 95% compression ratio can be achieved. Finally, this second proposition has in turn been improved by modifying the coding principle of the LZW algorithm (GR & LZWM); this third proposal raises the compression rate up to 98%.

All the proposed algorithms bring a feedback reduction gain compared to the GR used in the IEEE 802.11ac standard and present the same BER performance when the number of quantization bits is greater than or equal to 8.

Data Availability

In this work, there is underlying data.

Conflicts of Interest

The authors declare that they have no conflicts of interest.