Abstract

The commercialization of fifth-generation (5G) technology and the rapid development of Internet of Things technology have made mobile communication networks increasingly complex. Simultaneously, massive terminal connections have caused serious interference between networks. Large-scale array antennas have therefore become a hot topic of recent studies aimed at improving wireless transmission characteristics and spectrum resource utilization efficiency. This study explored the use of large-scale array antennas combined with neural networks in ultradense cells to improve wireless signal transmission quality. The simulation results showed that the scheme not only realized wireless signal transmission through a neural network but also effectively recovered the source signal. The scheme also effectively reduced the power consumption, delay, and interference during signal transmission.

1. Introduction

In wireless communication systems, large-scale antennas play an important role [1-3], especially with the gradual commercialization of fifth-generation communication and the requirements of Internet of Things applications [4]. They are essential for massive terminal connections, which demand higher system capacity. The application of large-scale antennas can overcome multipath fading in the communication environment and improve resource utilization, further enhancing the system capacity through space division multiple access technology. Therefore, large-scale antennas have become an important technology in modern and future communication systems. Large-scale antennas can also be deployed in ultradense cells and hotspots to effectively reduce path loss, delay, and interference and thereby reduce power consumption [5].

Some studies [6-8] have investigated large-scale antenna techniques that simulate stationary beamforming and channel state precoding. Two studies [9, 10] focused on the capacity of downlink beamforming based on multiple-input and multiple-output (MIMO) systems and gave a closed-form expression; according to these studies, this approach effectively increased the channel capacity compared with the conventional approach. Based on the statistical properties of the wireless channel and considering the strength of artificial intelligence neural networks in exploring the transmission channel and perceiving the received information, previous studies [11, 12] combined neural networks and MIMO to build an artificial intelligence-based MIMO architecture model. This approach can improve the channel conditions, reduce signal distortion due to channel fading, and increase the spectral efficiency through channel estimation and symbol detection. The multipolarized uniform linear large-scale MIMO technique with plane and spherical waves and the multipolarized circularly distributed large-scale MIMO technique were investigated to improve the orthogonality properties of the channel [13, 14]. The authors suggested that the performance of the multipolarized, circularly distributed large-scale MIMO antenna array is better than that of the multipolarized, linearly distributed large-scale MIMO antenna array.

The transmission environment is harsh, and interference is extremely serious due to the complexity of the wireless network structure. Studies [15, 16] investigated an interference management technique based on deep-learning MIMO beamforming to reduce the interference. The authors used a combination of maximum ratio transmission beamforming and zero-forcing beamforming techniques, and the results showed that this approach greatly improved the data transmission rate. Similarly, in a previous study [17], the authors adopted a cooperative strategy to compare techniques for interference cancellation in large-scale MIMO downlinks between large-scale MIMO and network MIMO. The results showed that the quality of service of large-scale MIMO systems is better than that of network MIMO systems in a cellular small-scale fading environment; however, the scheme involves a more complex theoretical model. In previous studies [10, 15, 18, 19], the authors used an antenna selection algorithm for maximum-capacity transmission to reduce system complexity. These studies compared the transmission-power sensitivity of three schemes: maximum-rate transmission beamforming, zero-forcing beamforming, and minimum mean-squared error beamforming.
Other studies [17, 20] investigated a coordinated multiantenna beamforming technique based on multiple cells to improve the communication quality of users at the cell edge, and the results showed that the scheme significantly improved the users' communication quality. A hybrid large-scale MIMO beam assignment scheme was proposed in previous studies [20-22] to overcome the deficiency that current large-scale MIMO beam assignment serves only a single user. The scheme obtained the user spectrum resource allocation via the spatial user covariance matrix, and the results showed that the scheme significantly reduced power consumption.

In this study, a massive MIMO beamforming technique was proposed on the basis of neural networks. This technique used the learning capability of neural networks to enhance the tracking and identification of user signals. Therefore, this study conducted in-depth research and exploration in this area. The main contributions of this study are reflected in the following aspects:

(1) Exploring the large-scale MIMO beamforming technique based on neural networks

(2) Simulating and analyzing the performance of the neural network-based large-scale MIMO beam assignment technique

The rest of this study is organized as follows: Section 2 introduces the neural network-based large-scale MIMO beam assignment technique; Section 3 presents the model simulation and analysis; and finally, Section 4 draws the conclusions.

2. System Model

To simplify the system model, we assumed a region with a massive MIMO cell serving multiple users simultaneously. The cell had $M$ transceiver antennas with an array element interval of $d = \lambda/2$ (one-half wavelength), where $\lambda$ is the carrier wavelength; plane-wave propagation was assumed, and $c$ denotes the speed of light. There were $N$ randomly distributed single-antenna user terminals, and the elevation angle from the users to the array antennas was denoted by $\varphi$. The neural network-based beam assignment model of the large-scale MIMO antenna is shown in Figure 1.

In Figure 1, the array antennas ($m = 1, 2, \dots, M$) are used as input signal units to the neural network. According to the antenna theory described in previous studies [2, 6], the delay of the plane wave emitted by the source arriving at antenna $m$ was $\tau_m = (m-1) d \sin\theta / c$ ($m = 1, 2, \dots, M$), where $\theta$ is the azimuth from the signal source to the antenna array. The symbol "O" in the figure represents the neuron nodes of each layer; $b$ is an additive threshold for each neuron; $w$ is the weight; and $y$ is the output signal of a neuron. The source signal was denoted by $s(t)$, so the signal received by each antenna was expressed as $x_m(t) = s(t - \tau_m)$. The signal received by the input-layer nodes was denoted by

$$u_j = \sum_{m=1}^{M} w_{jm} x_m + b_j, \quad (1)$$

where $w_{jm}$ is the weight between antenna $m$ and input-layer neuron $j$, and $b_j$ is the threshold of that neuron.
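To make the array geometry concrete, the following minimal Python sketch computes the per-antenna delays $\tau_m$ and the delayed antenna signals $x_m(t) = s(t - \tau_m)$ for a narrowband test source; all parameter values (carrier frequency, array size, sampling rate, test tone) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Illustrative parameters (assumptions, not from Table 1)
c = 3e8                      # speed of light, m/s
f = 2.6e9                    # assumed carrier frequency, Hz
lam = c / f                  # wavelength
d = lam / 2                  # half-wavelength element interval
M = 8                        # number of array antennas
theta = np.deg2rad(30.0)     # azimuth of the signal source

# Delay of the plane wave at antenna m: tau_m = (m - 1) * d * sin(theta) / c
m = np.arange(M)
tau = m * d * np.sin(theta) / c

# Sampled received signals x_m(t) = s(t - tau_m) for a test tone s(t) = cos(2*pi*f0*t)
f0 = 1e6                     # assumed baseband test tone, Hz
t = np.arange(64) / 64e6     # 64 samples at an assumed 64 MHz sampling rate
x = np.cos(2 * np.pi * f0 * (t[None, :] - tau[:, None]))  # shape (M, 64)
print(x.shape)
```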

The neuron output signal of the input layer was denoted by

$$y_j = f(u_j), \quad (2)$$

where $f$ is the activation function of the input layer.

The input signal of the output layer neuron was denoted by $v_k$:

$$v_k = \sum_{j} w_{kj} y_j + b_k, \quad (3)$$

where $w_{kj}$ is the weight between input-layer neuron $j$ and output-layer neuron $k$, and $b_k$ is the threshold of output-layer neuron $k$.

The neuron output signal of the output layer was expressed as

$$z_k = g(v_k), \quad (4)$$

where $g$ is the activation function of the output layer. The last output result, synthesized by the adder from the $K$ output-layer signals, was expressed as $\hat{s} = \sum_{k=1}^{K} z_k$.
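As a minimal sketch of the forward pass in equations (1)-(4) and the adder output $\hat{s}$: the sigmoid activation, the layer sizes, and the function name forward are illustrative assumptions, not choices fixed by the paper.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, W1, b1, W2, b2):
    """Forward pass of the Figure 1 network.
    x: antenna snapshot, shape (M,); W1: (J, M); W2: (K, J)."""
    u = W1 @ x + b1          # eq (1): input-layer input
    y = sigmoid(u)           # eq (2): input-layer output
    v = W2 @ y + b2          # eq (3): output-layer input
    z = sigmoid(v)           # eq (4): output-layer output
    s_hat = z.sum()          # adder: synthesized source estimate
    return u, y, v, z, s_hat

# Usage with random weights (illustrative sizes M=8, J=8, K=4)
rng = np.random.default_rng(0)
M, J, K = 8, 8, 4
x = rng.standard_normal(M)
W1, b1 = rng.standard_normal((J, M)) * 0.1, np.zeros(J)
W2, b2 = rng.standard_normal((K, J)) * 0.1, np.zeros(K)
print(forward(x, W1, b1, W2, b2)[-1])
```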

Next, we used a back-propagation (BP) neural network to establish the weights and thresholds for each layer of the network. According to the error propagation direction of the BP neural network, a gradient descent algorithm was used to calculate the weights and thresholds of each layer and finally output an approximation of the expected value. The sample mean-squared error function [23] was expressed as

$$E = \frac{1}{2} \sum_{k=1}^{K} (d_k - z_k)^2, \quad (5)$$

where $d_k$ denotes the target value of the output signal. According to equation (5), using the gradient descent algorithm, the threshold correction and weight correction for each layer were expressed, respectively, as

$$\Delta b = -\eta \frac{\partial E}{\partial b}, \quad (6)$$

$$\Delta w = -\eta \frac{\partial E}{\partial w}, \quad (7)$$

where the parameter $\eta$ is the adaptive learning rate factor.

From equations (2)–(4), the output layer weight adjustment formula was expressed as

$$\Delta w_{kj} = \eta\, (d_k - z_k)\, g'(v_k)\, y_j. \quad (8)$$

Similarly, the threshold adjustment formula for the output layer was expressed as

$$\Delta b_k = \eta\, (d_k - z_k)\, g'(v_k). \quad (9)$$

The threshold adjustment formula for the input layer was expressed as

$$\Delta b_j = \eta\, f'(u_j) \sum_{k=1}^{K} (d_k - z_k)\, g'(v_k)\, w_{kj}. \quad (10)$$

Using equations (1)–(5), we derived the following expressions:

$$\delta_k = (d_k - z_k)\, g'(v_k), \qquad \delta_j = f'(u_j) \sum_{k=1}^{K} \delta_k\, w_{kj}, \quad (11)$$

so that equations (8)–(10) can be written compactly as $\Delta w_{kj} = \eta\, \delta_k\, y_j$, $\Delta b_k = \eta\, \delta_k$, and $\Delta b_j = \eta\, \delta_j$.

Therefore, the amendments to the weights and thresholds were expressed as

$$w(n+1) = w(n) + \Delta w(n), \qquad b(n+1) = b(n) + \Delta b(n), \quad (12)$$

where $n$ denotes the training step.
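The corrections in equations (5)-(12) can be sketched as a single gradient step as follows. This sketch assumes sigmoid activations for both layers, so that $f'(u) = y(1-y)$ and $g'(v) = z(1-z)$; the function name bp_step is hypothetical, and the momentum term discussed in the next paragraph is omitted here.

```python
import numpy as np

def bp_step(x, d, W1, b1, W2, b2, eta=0.05):
    """One gradient-descent correction of the weights and thresholds.
    x: antenna snapshot (M,); d: target output (K,). Arrays are updated in place."""
    # Forward pass, equations (1)-(4), with sigmoid activations
    y = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    z = 1.0 / (1.0 + np.exp(-(W2 @ y + b2)))
    E = 0.5 * np.sum((d - z) ** 2)              # eq (5): sample MSE

    # eq (11): local gradients (sigmoid derivatives)
    delta_k = (d - z) * z * (1.0 - z)           # output layer
    delta_j = (W2.T @ delta_k) * y * (1.0 - y)  # input layer

    # eqs (8)-(10), applied as the amendments of eq (12)
    W2 += eta * np.outer(delta_k, y)            # output-layer weights
    b2 += eta * delta_k                         # output-layer thresholds
    W1 += eta * np.outer(delta_j, x)            # input-layer weights
    b1 += eta * delta_j                         # input-layer thresholds
    return E
```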

For the algorithm design, the steepest descent method in nonlinear programming is usually used [11, 15, 23] to minimize the error in equation (5), which means that the weights are modified along the negative gradient direction of the error function. However, this method usually suffers from low learning efficiency, slow convergence, and a tendency to fall into local minima. To avoid these problems, this study adopted the additional momentum approach described in previous studies [14, 23] to change the effect of the trend of the error minima on the error surface. The algorithm is as follows (a compact implementation sketch is given after the list):

(1) Calculate the delay of the signal source arriving at each array antenna according to $\tau_m = (m-1) d \sin\theta / c$, and obtain the signal data at a given moment by sampling.

(2) Enter the data sampled in step 1 as training data into the neural network.

(3) Initialize the neural network.

(4) Input the training samples.

(5) Calculate the inputs and outputs of the input-layer neurons according to equations (1) and (2).

(6) Calculate the inputs and outputs of the output-layer neurons according to equations (3) and (4).

(7) Calculate the input-layer and output-layer errors.

(8) Amend the input-layer and output-layer weights and thresholds according to equations (1)–(10).

(9) If the error accuracy is satisfied, go to step 10; otherwise, update the number of training steps and go to step 5.

(10) If all the sample data have been trained, go to step 11; otherwise, go to step 4.

(11) End the training.
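A compact sketch of steps (1)-(11) with the additional momentum term follows. The momentum factor $\alpha$, the stopping threshold, the layer sizes, and the function name train are illustrative assumptions, and the adaptive adjustment of the learning rate reported in Section 3 is not reproduced here.

```python
import numpy as np

def train(X, D, J, eta=0.05, alpha=0.9, eps=1e-3, max_epochs=20, seed=0):
    """Train the Figure 1 network with additional momentum.
    X: training snapshots (N, M); D: targets (N, K); J: input-layer size."""
    rng = np.random.default_rng(seed)
    M, K = X.shape[1], D.shape[1]
    W1 = rng.standard_normal((J, M)) * 0.1; b1 = np.zeros(J)   # step 3: initialize
    W2 = rng.standard_normal((K, J)) * 0.1; b2 = np.zeros(K)
    vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)           # momentum buffers
    vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

    for epoch in range(max_epochs):                            # steps 4-10
        E_total = 0.0
        for x, d in zip(X, D):
            y = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))           # steps 5-6
            z = 1.0 / (1.0 + np.exp(-(W2 @ y + b2)))
            E_total += 0.5 * np.sum((d - z) ** 2)              # step 7
            dk = (d - z) * z * (1 - z)
            dj = (W2.T @ dk) * y * (1 - y)
            # step 8 with additional momentum: v <- eta*grad + alpha*v_prev
            vW2 = eta * np.outer(dk, y) + alpha * vW2; W2 += vW2
            vb2 = eta * dk + alpha * vb2;              b2 += vb2
            vW1 = eta * np.outer(dj, x) + alpha * vW1; W1 += vW1
            vb1 = eta * dj + alpha * vb1;              b1 += vb1
        if E_total / len(X) < eps:                             # step 9: error accuracy
            break
    return W1, b1, W2, b2
```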

3. Simulation

In this study, we simulated the network in Figure 1. The array antennas were the input elements of the input layer of the neural network. In this case, each antenna received a signal from a source with azimuth $\theta$. Each neural unit in the hidden layer was connected to an antenna with an additional threshold $b_j$. The output layer of the network had $K$ neurons, whose outputs were connected to the neurons in the hidden layer by weights $w_{kj}$. Finally, the output unit of the output layer produced a signal $\hat{s}$ through an adder, which was the signal received from the source by the array antenna.

Next, we set the following parameters for the simulation, as shown in Table 1.

First, a random number generator was used to generate a sequence of decimals as the signal source, denoted by $s$. Similarly, a matching random azimuth, denoted by $\theta_n$, was generated for each source symbol. Then, the delay of the signal arriving at each antenna was calculated using the delay expression $\tau_m = (m-1) d \sin\theta_n / c$ given in Section 2. Finally, the signals received by each antenna were sampled and sent to the neural network. The neural network was then trained, and the results of the output neurons were used to synthesize the signal source through an adder. To verify the scheme proposed in this study, we used two cases for simulation.
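The data-generation step can be sketched as follows; the number of source symbols, the carrier and tone frequencies, the sampling instant, and the helper name make_training_data are all illustrative assumptions rather than the paper's actual settings.

```python
import numpy as np

def make_training_data(n_sources=100, M=8, d_over_c=1.92e-10, seed=1):
    """Random decimal sources with matching random azimuths and the
    corresponding delayed antenna snapshots; d_over_c = d / c
    (half-wavelength spacing at an assumed 2.6 GHz carrier)."""
    rng = np.random.default_rng(seed)
    s = rng.random(n_sources)                               # decimal source sequence
    theta = rng.uniform(-np.pi / 2, np.pi / 2, n_sources)   # random azimuths
    m = np.arange(M)
    # Delay per source and antenna: tau[n, m] = m * d * sin(theta_n) / c
    tau = np.outer(np.sin(theta), m) * d_over_c
    # Narrowband model: each antenna sees a delayed tone scaled by s_n
    f0 = 1e6                                                # assumed test tone, Hz
    t0 = 1e-5                                               # assumed sampling instant, s
    X = s[:, None] * np.cos(2 * np.pi * f0 * (t0 - tau))    # shape (n_sources, M)
    return X, s, theta

X, s, theta = make_training_data()
print(X.shape, s[:3])
```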

In the first case, all data samples participated in the training; the gradient descent method used a learning rate of 0.05 and 20 iteration steps. Figure 2 shows the convergence of the weights and the learning rate. The gradient rate of change approached 1.9172 at 6 iterations, while the learning rate decreased from 0.05 to 0.007 and stabilized at 20 iterations. During this process, the validation check of the training results was triggered up to two times.

Figure 3 shows that during the 20 iterations, the minimum mean-squared error of the training and test data approached the optimal result and the preset target accuracy at the sixth iteration. The optimal result and accuracy were fully achieved after the 20th iteration, when the minimum mean-squared error was about 0.2474.

Figure 4 shows the linear relationship between the sample data and the output target in the training, testing, and validation states of the network. A correlation coefficient $R$ close to 1 indicated a strong linear relationship between the sample data and the output target, whereas $R = 0$ meant no correlation between the sample data and the output target value. The fitted equations are shown in Figure 4.

Figure 5 compares the sample data with the simulation data for the two cases: all sample data involved in the training and 20% of the sample data involved in the training. In both cases, the network simulation data closely matched the sample data.

4. Conclusions

Large-scale array antennas are important devices for current and future wireless information transmission, considering their advantages such as high data rates, resistance to multipath interference, and the ability to exploit spatial resources. Therefore, this study investigated the beam assignment technique of large-scale array antennas based on a neural network. The principle of this technique was to use the array antennas as the input units of the neural network and to output the final data from the output layer of the error back-propagation network through an adder. In the simulations, this study used all the data for training and, separately, 20% of the data for training with the other parameters unchanged. The results revealed that both cases recovered the sample data very well. In conclusion, the scheme combining an antenna array with a neural network is feasible for data reception.

Data Availability

The data used in this study were generated systematically, as described in the paper: the signal source was a randomly generated decimal sequence.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Dr. Xie Chaochen Research Start-up Fund under the project "Research on Self-Organizational Optimization Technology for Ultradense Heterogeneous Networks Based on the Analysis and Decision-Making of Big Data" (project grant ID: BSJ2019027). The project uses big data analysis and mining technology to obtain valuable information in ultradense heterogeneous 5G networks, explores the formation mechanism of ultradense heterogeneous network architecture, establishes a self-organizational network model based on big data analysis, and applies intelligent multiobjective optimization decision algorithms to combine networks and achieve a general improvement in network coverage, system capacity, and service quality. The study aims to provide a theoretical basis for promoting the application of ultradense heterogeneous networks; its content is closely related to current practice and is of both practical significance and academic research value.