Abstract
With the explosive growth of big data technology, precision marketing and personalized recommendation have reaped its dividends. Traditional extensive operation has not effectively matched businesses' products with users' needs, and the resulting low marketing success rate has created the risk that operators become mere "pipelines." Precision marketing technology was developed to meet users' personalized needs. Based on big data technology and the theory of personalized recommendation algorithms, this paper takes the marketing strategy of the telecommunications industry as an empirical case. The experimental results verify that the big data-based analysis model performs well in precision marketing and personalized recommendation and confirm the advantages of the user-based collaborative filtering recommendation algorithm among personalized recommendation algorithms.
1. Introduction
The most representative and widely used applications of big data technology are precision marketing and personalized recommendation [1]. The application of big data in precision marketing depends mainly on the support of huge data volumes and on big data analysis technology. Turck [2] pointed out the influence of big data on the times as early as 2014. For precision marketing schemes, Cook [3] relied on the traditional marketing mode and used one-to-one judgment for marketing. Literature [4] explains how big data technology changes the marketing mode and, through technical analysis and deep study of big data, predicts its value for precision marketing.
In addition, big data technology can also be applied to personalized recommendation. To date, recommendation algorithms mainly include content-based recommendation, association rule-based recommendation, collaborative filtering (CF), neural network-based models, and hybrid recommendation [5]. Collaborative filtering can be divided into model-based and memory-based collaborative filtering, and memory-based collaborative filtering can be further subdivided into user-based and item-based collaborative filtering [6, 7]. The first widely known recommendation system is Tapestry [8], an email push service developed by Xerox in 1992. It distributed emails based on common behavior records among group users, a mechanism it called collaborative filtering; this was the first appearance of the collaborative filtering algorithm. Collaborative filtering has remained among the mainstream recommendation algorithms because it is easy to implement, highly interpretable, and effective. However, because collaborative filtering is not a global retrieval method and uses only the users and items that have generated records, it suffers from data sparsity and cold start problems. To address this, the content-based recommendation algorithm emerged: it does not rely directly on user behavior data but uses item feature information and user preferences to recommend items, and it performs well under cold start. However, the performance of content-based recommendation is limited because item description information is often incomplete and hard to extract. Then, as the amount of data that can be collected and perceived on the Internet grew and big data processing technology developed vigorously, hybrid recommendation algorithms integrating multisource heterogeneous information became widely used because they alleviate the data sparsity and cold start problems of traditional recommendation systems [9]. According to the fusion method, hybrid recommendation algorithms can be divided into weighted, switching, crossover, and feature-combination types. It is worth mentioning that in the million-dollar recommendation algorithm competition held by Netflix in 2006, the model that won the first prize was a fusion of more than 200 recommendation algorithms.
After 2010, emerging technologies represented by deep learning began to sweep the Internet and artificial intelligence. DeepCrossing [10], proposed by Shan et al. of Microsoft in 2016, is one of the most typical and basic models for designing personalized recommendation algorithms based on deep learning. It transforms high-dimensional sparse features into low-dimensional dense features by adding an embedding layer, which largely solves the problem of large-scale data sparsity in industrial settings. DeepCrossing also uses a stacking layer to connect the segmented feature vectors, and training the model with a multilayer residual network was an innovative measure. In 2016, Cheng et al. of Google proposed the Wide & Deep (Wide & Deep Learning) model [11], which combines a Wide layer of manually combined features with a DNN (deep neural network) layer that combines features automatically, achieving a balance between memorization and generalization ability. Wide & Deep's two-level recall-and-ranking recommendation architecture has also become the blueprint for developing deep learning-based recommendation algorithms in recent years. DeepFM (Deep Factorization Machine), proposed by Guo et al. of Huawei in 2017 [12], and xDeepFM (eXtreme Deep Factorization Machine), proposed by Lian et al. of Microsoft in 2018 [13], respectively strengthen the feature-combination power of the shallow part by adding an FM (factorization machine) layer and an FM+CIN (compressed interaction network) layer, making full use of combination information between features to enhance model expressiveness. They are the two models most used and referenced by industry after Wide & Deep. In the field of e-commerce, where recommendation technology is widely applied, Zhou et al. of Alibaba proposed two network models, DIN (deep interest network) [14] and DIEN (deep interest evolution network) [15], in 2018 and 2019. The former adjusts the weights of different features for different candidate products by adding an activation unit between the embedding layer and the concatenate layer. The latter introduces an AUGRU (GRU with attentional update gate) on the basis of DIN to simulate the evolution of users' interests, going beyond intuitive feature combination; this is a very innovative measure. After deployment in Taobao Mall, it achieved a 22% improvement in CTR (click-through rate) prediction. In addition, PNN (product-based neural network), AFM (attentional factorization machines), and NFM (neural factorization machines) are also representative neural network-based recommendation models of recent years.
By using the marketing characteristics of big data and analyzing personalized recommendation algorithms, we can verify the practical significance of big data and the optimal personalized recommendation algorithm.
2. Overview of Big Data-Related Theories
2.1. The Concept of Big Data
The concept of big data originated in the computer industry: big data refers to large volumes of complicated data that cannot be analyzed with conventional software. To be effective, big data analysis and processing methods need simplified workflows and high efficiency. Big data is divided into two types: structured and unstructured. The proportion of unstructured data is between 80% and 90%, and it is increasing year by year. According to survey results, more than 80% of the data generated by enterprises in 2019 was unstructured, and unstructured data was predicted to grow exponentially in 2020 [16]. Like traditional data, big data passes through the stages of acquisition, storage, management, and analysis before later data research can be carried out. The process for big data is shown in Figure 1.

2.2. Data Collection
For big data, the key to data processing is data collection: only with a sufficient amount of data can the whole big data pipeline run. There are many channels for collecting data; the collection methods and their characteristics are shown in Figure 2.

2.3. Data Cleaning
Big data cleaning has two main purposes: one is to clean up irrelevant data, and the other is to clean up low-quality data; in general terms, it means cleaning up garbage data. Data cleaning in a big data environment differs from traditional data cleaning. For traditional data, data quality is the key property, but for big data, data availability matters more, and what is junk data in the traditional sense can even be turned from waste into wealth.
2.4. Data Conversion
Data transformation is the process of converting data from one format or structure to another, and it is essential for data integration and data management activities. Depending on the needs of the project, data conversion can include a series of activities: converting data types, cleaning data by removing null or duplicate records, enriching data, or performing aggregation.
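As a minimal illustration of these steps, the following Python sketch uses the pandas library; the column names and values are hypothetical and chosen only to show type conversion, null and duplicate removal, and aggregation.

```python
import pandas as pd

# Hypothetical raw marketing records; the field names are illustrative only.
raw = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u3", None],
    "monthly_fee": ["58.0", "58.0", "99.5", "120.0", "33.0"],
    "plan": ["A", "A", "B", "B", "A"],
})

# Type conversion: the fee arrives as text and is cast to a numeric type.
raw["monthly_fee"] = pd.to_numeric(raw["monthly_fee"], errors="coerce")

# Cleaning: drop records with a missing key and drop exact duplicates.
clean = raw.dropna(subset=["user_id"]).drop_duplicates()

# Aggregation: average fee per plan, a typical input for later analysis.
per_plan = clean.groupby("plan")["monthly_fee"].mean()
print(per_plan)
```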
3. Marketing-Related Theories
3.1. Marketing Theory
Every industry experiences a long period of market monopoly and then gradually moves toward marketization. In the monopoly period, the industry is in effect a seller's market, and enterprises' marketing strategies are basically guided by 4P theory. However, given national conditions and the needs of a market economy, once the industry monopoly is broken and marketization proceeds, telecom operators gradually take on the identity of integrated service providers, so 4C theory must be introduced. In applying it, note that a telecom marketing strategy based on 4C theory depends on professional marketing personnel and dedicated marketing departments, which achieve better marketing results by utilizing and integrating network resources and various marketing means.
Professor Jerome McCarthy put forward 4P theory in 1960: product, price, place (channel), and promotion. He held that marketing means producing products matched to market demand, setting appropriate prices, and holding appropriate promotional activities to meet customer needs and achieve the various goals of the enterprise [17].
4P theory thinks from the enterprise's perspective: what products the enterprise should produce, what prices to set, which sales channels to choose, and how to communicate the products' selling points and promotions. In contrast, the core of 4C theory is user-centeredness. 4C refers to customer, cost, convenience, and communication.
Some even think that "4C" should replace "4P" in the marketing activities of enterprises in the new era. However, many experts and scholars still believe that at the operational level it remains necessary to organize concrete, practical activities through the series of marketing activities represented by 4P theory. Therefore, 4C theory deepens 4P theory but cannot replace it. Although 4P theory and 4C theory are independent of each other, they should be complementary, not substitutes. See Table 1 for the relationship between 4P and 4C.
At present, China's market is still in transition from the 4P era to the 4C era, and each enterprise's situation differs. Many enterprises have not yet perfected basic aspects such as product technology, cost, and service. Therefore, in the future, most enterprises in China should take 4P theory as the basic marketing framework and 4C theory as an effective reference.
3.2. Precision Marketing Theory
With the explosion of data in the information age, new technologies such as big data have developed rapidly, and precision marketing theory has emerged as the times require. Precision marketing centers on "precision": companies need more accurate, measurable, high-return investment in specific marketing activities; more results- and action-oriented marketing plans; and growing investment in direct marketing. On the basis of fully mastering and understanding customer information, it targets the preferences of specific users and the marketing of specific products, combining traditional marketing with modern big data marketing through the integration of market and customer information. This is the trend toward marketing refinement, as shown in Figure 3.

The collection and summarization of massive data is only the first step in implementing precision marketing successfully. For collected data samples, user portrait technology was developed with the support of big data precision marketing theory; some scholars proposed the related concept of the "customer portrait" long ago [18]. It is related to data mining and big data analysis: tags depicting users are established from data information. Specifically, it is the process of turning abstract information such as users' age ranges, living habits, and behavioral preferences into a visualized, comprehensive portrait of the user. In short, it labels users so that product marketing activities can focus on users' motives and behaviors and avoid empiricism. User portrait theory developed out of big data applications. Based mainly on the data users generate, it labels users' characteristics and displays them intuitively through the front-end page, accurately grasping users' features and needs so that the system can carry out targeted marketing activities according to those needs.
User portrait theory is applied at multiple levels, to personalized product and service recommendation, precision marketing, and user profiling. For example, when enterprises carry out precision marketing, user portraits can help them develop products, discover target customers, and provide personalized services.
The core ideas of precision marketing are precision and measurability [19]. Its characteristics are as follows. The first is to meet users' internal needs and influence user behavior [20]. The second is to know which requirements are users' real needs and then how to meet them. The third is that precision marketing cannot be perfect. The strength of precision marketing lies in mastering users' needs and giving enterprises a specific direction.
Thus, the most important step is data acquisition or sampling, and SMOTE sampling can be used to refine the collected data. A synthetic sample is composed by linear interpolation between a minority-class sample $x$ and one of its nearest minority-class neighbors $\tilde{x}$:

$$x_{\text{new}} = x + \lambda\,(\tilde{x} - x), \qquad \lambda \sim U(0, 1)$$

Because $\lambda$ is drawn uniformly from $(0,1)$, the synthetic sample lies on the line segment between $x$ and $\tilde{x}$, which improves accuracy on the minority class without simply duplicating samples.
The characteristics of samples can be expressed by six ratios:
(A) True positive rate, the proportion of samples whose true value is positive for which the predicted value is also positive:
$$\mathrm{TPR}=\frac{TP}{TP+FN}$$
(B) False positive rate, the proportion of samples whose true value is negative but whose predicted value is positive:
$$\mathrm{FPR}=\frac{FP}{FP+TN}$$
(C) False negative rate, the proportion of samples whose true value is positive but whose predicted value is negative:
$$\mathrm{FNR}=\frac{FN}{TP+FN}$$
(D) Accuracy, the proportion of correctly predicted samples among all samples:
$$\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN}$$
(E) Precision, the proportion of predicted-positive samples whose true value is positive:
$$\mathrm{Precision}=\frac{TP}{TP+FP}$$
(F) Recall, which represents the classification accuracy on the minority class:
$$\mathrm{Recall}=\frac{TP}{TP+FN}$$
Here $TP$, $FP$, $FN$, and $TN$ denote the numbers of true positive, false positive, false negative, and true negative samples, respectively.
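To make the formulas concrete, here is a small NumPy sketch that synthesizes one SMOTE-style sample by linear interpolation and computes the six ratios from confusion-matrix counts; all numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# SMOTE-style synthesis: interpolate between a minority sample and a neighbor.
x = np.array([1.0, 2.0])        # minority-class sample
x_nn = np.array([2.0, 3.0])     # one of its nearest minority neighbors
lam = rng.random()              # lambda drawn uniformly from (0, 1)
x_new = x + lam * (x_nn - x)    # synthetic sample on the segment between them

# Confusion-matrix counts (illustrative values).
TP, FP, FN, TN = 80, 10, 20, 90

tpr = TP / (TP + FN)                        # (A) true positive rate
fpr = FP / (FP + TN)                        # (B) false positive rate
fnr = FN / (TP + FN)                        # (C) false negative rate
accuracy = (TP + TN) / (TP + FP + FN + TN)  # (D) accuracy
precision = TP / (TP + FP)                  # (E) precision
recall = TP / (TP + FN)                     # (F) recall, equal to the TPR
print(x_new, tpr, fpr, fnr, accuracy, precision, recall)
```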
After sampling, enterprises that use big data technology for demand analysis and mining, make good use of big data, and transform its value into productivity will gain strong competitive advantages and become industry leaders.
4. Personalized Recommendation Algorithm
4.1. Storage and Computing Platform HDFS
With dynamic expansion, high fault tolerance, and high concurrency, the HDFS distributed file system solves the storage problem of massive data well. After years of development, HDFS offers good fault tolerance, ease of use, and dynamic expansion. HDFS adopts the classic Master/Slave (master-slave) architecture for distributed clusters. The Master (master node), also called the NameNode (name node), is mainly responsible for managing and maintaining the metainformation of the data in the system and all Slave (slave) nodes. When the NameNode finds that a DataNode has stopped sending normal heartbeat messages, it re-replicates that node's data from other healthy Slave nodes. The Slave nodes, also called DataNodes, are responsible for storing the massive data written by Clients [21].
Its characteristics are shown in Figure 4.

Thanks to its master-node metadata backup and its distributed storage mechanism, HDFS has excellent dynamic fault tolerance and allows distributed file clusters to run on large numbers of cheap servers, so it is widely used.
4.2. User-Based Collaborative Filtering Recommendation Algorithm
The user-based collaborative filtering algorithm (UserCF) rests on the following assumptions: similar users have the same or similar interests in the same items, and users' interests do not change over time. Its basic principle is as follows: based on the current user's preference information (usually ratings), similar users (neighbor users) are found by similarity calculation, and the items preferred by the neighbor users are then recommended to the current user. It mainly comprises user similarity calculation, nearest-neighbor calculation, and item recommendation weight calculation.
4.2.1. User Similarity Calculation
UserCF calculations are built on top of the user-item rating matrix, using methods such as cosine similarity and the Pearson correlation coefficient to compute user-user similarities and obtain the user-user similarity matrix.
For the cosine similarity calculation method, the algorithm first abstracts each user's interest records into a vector and then computes the similarity between users with the cosine function; the larger the cosine value, the higher the similarity. Cosine similarity is calculated as follows:

$$\operatorname{sim}(u, v)=\frac{\vec{u} \cdot \vec{v}}{\|\vec{u}\|\,\|\vec{v}\|}=\frac{\sum_{i \in I_{uv}} r_{u,i}\, r_{v,i}}{\sqrt{\sum_{i \in I_{u}} r_{u,i}^{2}}\, \sqrt{\sum_{i \in I_{v}} r_{v,i}^{2}}}$$

In the formula, $\vec{u}$ and $\vec{v}$ are the rating vectors of users $u$ and $v$; $\|\vec{u}\|$ and $\|\vec{v}\|$ are their vector moduli; $I_u$ and $I_v$ are the interest (rated-item) sets of users $u$ and $v$, respectively; and $I_{uv}=I_u \cap I_v$ is the common interest set of users $u$ and $v$. For the modified cosine similarity calculation method: cosine similarity mainly reflects relative differences in vector direction, is insensitive to absolute values, and has difficulty measuring users' numerical differences in each dimension [22]. Accordingly, the modified cosine similarity introduces the users' average scores to reduce this inaccuracy on the basis of the cosine calculation, and the modified equation is

$$\operatorname{sim}(u, v)=\frac{\sum_{i \in I_{uv}}\left(r_{u,i}-\bar{r}_{u}\right)\left(r_{v,i}-\bar{r}_{v}\right)}{\sqrt{\sum_{i \in I_{u}}\left(r_{u,i}-\bar{r}_{u}\right)^{2}}\, \sqrt{\sum_{i \in I_{v}}\left(r_{v,i}-\bar{r}_{v}\right)^{2}}}$$

where $\bar{r}_u$ and $\bar{r}_v$ are the average interest scores of users $u$ and $v$, respectively.
The Pearson correlation coefficient is calculated as

$$\operatorname{sim}(u, v)=\frac{\sum_{i \in I_{uv}}\left(r_{u,i}-\bar{r}_{u}\right)\left(r_{v,i}-\bar{r}_{v}\right)}{\sqrt{\sum_{i \in I_{uv}}\left(r_{u,i}-\bar{r}_{u}\right)^{2}}\, \sqrt{\sum_{i \in I_{uv}}\left(r_{v,i}-\bar{r}_{v}\right)^{2}}}$$

The Pearson correlation coefficient calculation is similar to the modified cosine calculation; the only difference is that the item set involved in the denominator is restricted to the common interest set $I_{uv}$ shared by $u$ and $v$.
4.2.2. Finding the Nearest Neighbor
The screening of neighbor users is relatively simple, and there are two commonly used methods.
For the $K$-nearest neighbors (KNN) method, the principle is intuitive: it does not attend to the specific value of each user similarity and simply takes the $K$ users with the highest similarity to the current user as that user's nearest neighbors.
For the threshold method, the nearest neighbors are confirmed by setting a similarity threshold $\delta$: if the similarity between user $v$ and the current user $u$ exceeds $\delta$, then $v$ is selected as a similar user of $u$.
4.2.3. Calculation of Project Recommendation Weight
There are two common methods for project recommendation weight.
First, the direct calculation method: predict the current user's scores for the items selected by the neighbor users, arrange those items in descending order of predicted score, and take the Top-$N$ as the recommendation result. The simplest scoring prediction strategy is to average the neighbor users' scores for the recommended item directly, that is,

$$\hat{r}_{u,i}=\frac{1}{|N(u)|} \sum_{v \in N(u)} r_{v,i} \qquad (12)$$

where $\hat{r}_{u,i}$ is the prediction score of user $u$ for item $i$ obtained by the mean value method, $N(u)$ is the nearest-neighbor user set obtained by KNN or the threshold method, $|N(u)|$ is the number of nearest neighbors, $v$ is a user in $N(u)$, and $r_{v,i}$ is the score of user $v$ for item $i$.
Second, the weighted calculation method. There are two main weighting schemes. One introduces the similarity between the current user and each neighbor as the weight. The other, on the basis of the first, additionally introduces the average scores of users $u$ and $v$ and computes a weighted average of the rating deviations. The two calculations are shown in Formula (13) and Formula (14), respectively:

$$\hat{r}_{u,i}=\frac{\sum_{v \in N(u)} \operatorname{sim}(u,v)\, r_{v,i}}{\sum_{v \in N(u)} |\operatorname{sim}(u,v)|} \qquad (13)$$

$$\hat{r}_{u,i}=\bar{r}_{u}+\frac{\sum_{v \in N(u)} \operatorname{sim}(u,v)\,\left(r_{v,i}-\bar{r}_{v}\right)}{\sum_{v \in N(u)} |\operatorname{sim}(u,v)|} \qquad (14)$$

In the formulas, the meanings of $\hat{r}_{u,i}$, $N(u)$, $v$, and $r_{v,i}$ are the same as in Formula (12); $\bar{r}_u$ and $\bar{r}_v$ are the average scores of users $u$ and $v$, respectively; and $\operatorname{sim}(u,v)$ is the similarity between users $u$ and $v$.
4.3. Item-Based Collaborative Filtering (ItemCF) Algorithm
ItemCF and UserCF are very similar in implementation, so this paper does not analyze ItemCF's steps in detail but only sketches the differences. ItemCF is also computed on the user-item rating matrix: it first calculates the similarity between item $i$ and item $j$, then finds each item's nearest-neighbor set by the threshold method or KNN, and then makes recommendations. The difference is that ItemCF calculates similarity based on the cooccurrence of two items among users.
Although UserCF and ItemCF share the same idea in algorithm implementation, their applicable scenarios and recommendation effects differ, as the sketch after this paragraph illustrates. In terms of scenarios, UserCF is more suitable for news, blogs, or microcontent, while ItemCF is more suitable for shopping websites and other scenarios where the number of users far exceeds the number of items [23]. In terms of recommendation diversity, UserCF tends to recommend popular items because it focuses on the hot spots within interest groups, while ItemCF more easily produces highly diverse recommendations and, from the perspective of the whole system, more easily surfaces items in the long tail. At the same time, generally speaking, the recommendation accuracy of UserCF is better than that of ItemCF.
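As a contrast with the UserCF sketches above, here is a minimal NumPy sketch of ItemCF's cooccurrence-based similarity; the binary interaction matrix is a toy assumption, and the popularity normalization shown is one common variant rather than the only choice.

```python
import numpy as np

# Binary user-item interaction matrix (1 = user consumed the item).
A = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
])

# Cooccurrence counts: C[i, j] = number of users who consumed both i and j.
C = A.T @ A

# Normalize by item popularity to obtain an item-item similarity matrix,
# which damps the dominance of popular items.
pop = np.sqrt(np.diag(C))
S = C / np.outer(pop, pop)
np.fill_diagonal(S, 0.0)   # an item is not its own neighbor
print(S.round(2))
```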
4.4. Content-Based Recommendation Algorithm
Content-based recommendation is computed from the content information or metadata of items and is realized with probability statistics and machine learning techniques. The calculation process of content-based recommendation is shown in Figure 5:

4.4.1. Feature Extraction
Content data of items can be divided into structured content features and unstructured content features. For structured features, feature extraction is relatively simple: after discretization by the binary method, the feature vector of each item is obtained. Unstructured features cannot be expressed in a fixed format, so extracting them is more complicated. Taking common unstructured text data as an example, assume the text set to be recommended is $D=\{d_1, d_2, \ldots, d_n\}$ and the set of all words appearing in the texts (the dictionary) is $W=\{w_1, w_2, \ldots, w_m\}$. The feature vector of a text $d_j$ can then be expressed as $V(d_j)=(t_{1j}, t_{2j}, \ldots, t_{mj})$, where $t_{ij}$ represents the weight of the $i$-th word of the dictionary in text $d_j$.
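For the unstructured case, a common concrete choice for the weight $t_{ij}$ is TF-IDF. Below is a minimal scikit-learn sketch; the toy documents are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy text set D; each document becomes one feature vector V(d_j).
docs = [
    "big data precision marketing",
    "collaborative filtering recommendation",
    "big data recommendation algorithm",
]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)           # shape: (len(D), len(dictionary W))
print(vec.get_feature_names_out())    # the dictionary W
print(X.toarray().round(2))           # the t_ij weights for each text
```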
4.4.2. User Preference Calculation
User preference calculation refers to calculating the user's preference score for each item (content attribute) from the user's explicit ratings or implicit operations.
User preferences are commonly calculated by direct statistics, that is, by aggregating the user's scores on each label or feature. Taking a university library as an example, assume a user's scores for computer technology and computer books (CLC classification number TP3) are $\{3, 4, 3\}$; the preference score for TP3 books can then be calculated as $(3+4+3)/3 \approx 3.33$. At the same time, users' borrowing interests fluctuate considerably over time, so implicit features such as time can be introduced into the preference calculation. If $\text{difftime}_i$ is the time difference between the moment the user borrowed the $i$-th book and the current time, and $\lambda$ is a time attenuation factor less than 1, the user's preference score for TP3 books can be calculated as

$$\text{score}(\text{TP3})=\frac{1}{n}\sum_{i=1}^{n} r_i\, \lambda^{\text{difftime}_i}$$

where $r_i$ is the score given to the $i$-th borrowed TP3 book and $n$ is the number of such books.
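A small NumPy sketch of the static and time-decayed preference calculation; the decay factor and the time differences are assumed values for illustration.

```python
import numpy as np

# Scores the user gave to three TP3 books and the time elapsed (in months)
# since each borrowing; the decay factor lam < 1 is an assumed hyperparameter.
scores = np.array([3.0, 4.0, 3.0])
difftime = np.array([1.0, 6.0, 12.0])
lam = 0.9

plain = scores.mean()                          # static preference: ~3.33
decayed = (scores * lam ** difftime).mean()    # time-aware preference
print(plain, decayed)
```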
4.4.3. Calculation of Project Recommendation Weight
Content-based item weight calculation mainly computes the text similarity between the user preference set and the items in the full item set, so as to obtain the recommendation set (RecSet) closest to the user in content semantics. The text similarity calculation is analogous to UserCF's user similarity: the similarity of two feature vectors is computed with cosine similarity or the Pearson correlation coefficient to obtain each item's recommendation weight.
4.5. Recommendation Algorithm Based on Neural Network
With a large number of deep learning frameworks such as DeepCrossing, Wide & Deep, and FNN put forward in 2016, recommendation algorithms entered the era of deep learning in an all-round way [24]. Since then, neural network-based recommendation algorithms have emerged one after another. This section introduces the most classic of them, the Wide & Deep recommendation algorithm.
4.5.1. Wide & Deep
To deliver personalized recommendations to hundreds of millions of users, Wide & Deep comprehensively considers the model's memorization and generalization abilities and organizes the whole model as a two-level recommendation pipeline to reduce the pressure of online deployment. The Wide & Deep architecture has been widely imitated since its publication and is now a reference blueprint for algorithm development in large-scale recommendation systems.
The first level of the Wide & Deep model is the candidate generation model, which consists of three parts: an input layer, ReLU neural network layers, and a recommendation weight calculation layer. Its purpose is to screen candidate items quickly, reducing them from millions to hundreds. The input layer is divided into a Wide part and a Deep part: the Wide part takes shallow categorical features such as age and gender, while the Deep part takes embeddings of text and user behavior records. The ReLU layers concatenate the feature vectors from the Wide and Deep parts, train the DNN, and pass the result to the next layer. In the weight calculation layer, Wide & Deep first casts recommendation weight calculation as a superlarge-scale multiclass classification task via the softmax function and then, in the online serving stage, generates recall results by nearest-neighbor retrieval. Here the softmax is expressed as

$$P(c=i \mid u)=\frac{e^{v_{i}^{\top} u}}{\sum_{j \in V} e^{v_{j}^{\top} u}}$$

where $P(c=i \mid u)$ represents the predicted probability that, for user embedding $u$, the watched item belongs to class (video) $i$ in the video library $V$, and $v_j$ is the embedding of item $j$.
The energy function in a given state $(v, h)$ of the visible vector $v$ and hidden vector $h$ is defined as

$$E(v, h)=-\sum_{i} a_{i} v_{i}-\sum_{j} b_{j} h_{j}-\sum_{i} \sum_{j} v_{i} W_{ij} h_{j}$$

Normalizing this formula yields the joint probability

$$P(v, h)=\frac{1}{Z} e^{-E(v, h)}$$

where $Z$ is the normalization factor whose value is

$$Z=\sum_{v, h} e^{-E(v, h)}$$

When the visible layer vector $v$ is given, the probability that hidden unit $h_j$ is activated is

$$P(h_{j}=1 \mid v)=\sigma\Big(b_{j}+\sum_{i} v_{i} W_{ij}\Big)$$

Similarly, the probability that visible unit $v_i$ is activated is

$$P(v_{i}=1 \mid h)=\sigma\Big(a_{i}+\sum_{j} W_{ij} h_{j}\Big)$$

where $\sigma(x)=1/(1+e^{-x})$ is the sigmoid function. The parameters $W$, $a$, and $b$ are calculated to best fit the training data $S$ by maximizing the log-likelihood:

$$\max_{W, a, b} \sum_{v \in S} \log P(v)$$
The second level of the Wide & Deep model is the ranking model. Its overall architecture is similar to that of the candidate generation level, but the biggest difference lies in feature engineering. To make the ranking as accurate as possible, Wide & Deep feeds the network many strong features derived from engineering experience and the application scenario. Taking YouTube at the time Wide & Deep was first applied as an example, features such as the embedding vectors of the last $N$ videos the user watched, the time since the user last watched a video of the same type, and the number of times the video has already been exposed to the user are input to improve the recommendation accuracy of the pushed videos.
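To make the structure concrete, here is a highly simplified, single-stage Keras sketch of the Wide & Deep idea. It is not the production model: the feature names, dimensions, and the sigmoid output are assumptions for illustration, and the real system uses the two-level recall-and-rank pipeline described above.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Wide part: a few shallow categorical/cross features (dimension assumed).
wide_in = layers.Input(shape=(4,), name="wide_features")
# Deep part: a sequence of 10 behavior IDs from an assumed vocabulary of 1000.
deep_in = layers.Input(shape=(10,), name="behavior_ids", dtype="int32")

# Deep path: embed the sparse IDs, average-pool, then stack ReLU layers.
emb = layers.Embedding(input_dim=1000, output_dim=8)(deep_in)
deep = layers.GlobalAveragePooling1D()(emb)
for units in (64, 32):
    deep = layers.Dense(units, activation="relu")(deep)

# Join the memorization (wide) and generalization (deep) paths.
both = layers.concatenate([wide_in, deep])
out = layers.Dense(1, activation="sigmoid")(both)  # CTR-style score

model = tf.keras.Model(inputs=[wide_in, deep_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

The wide path memorizes frequent feature cooccurrences, while the embedding-based deep path generalizes to unseen combinations; joint training balances the two.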
5. Big Data-Driven Model
Big data provides the underlying data support: its volume and ETL technology improve the efficiency and accuracy of the overall model. Through machine learning, the stored massive data is used to build an automatic, data-driven decision model for performance management. Supervised or unsupervised learning algorithms are selected to analyze the data stream according to the specific application.
5.1. Supervised Learning Behavior Analysis Model
The supervised learning behavior analysis model relies mainly on massive big data for analysis. It uses decision trees, support vector machines, regression analysis, and other machine learning techniques to predict and analyze the data in depth and adds a series of machine training steps to establish an automatic decision model, as sketched below. The supervised learning behavior analysis model is shown in Figure 6.
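As an illustration of such a supervised decision model, here is a minimal scikit-learn sketch on synthetic data standing in for labeled customer records; the dataset and hyperparameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for labeled customer records (features + outcome label).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Decision tree as the supervised decision model; an SVM or a regression
# model could be swapped in behind the same fit/predict interface.
clf = DecisionTreeClassifier(max_depth=5, random_state=0)
clf.fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```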

5.2. Unsupervised Learning User Analysis Model
Compared with the supervised learning behavior analysis model, the unsupervised learning user analysis model likewise takes big data as its technical foundation. However, its machine training uses a different family of algorithms, such as the $k$-means, BIRCH, and CURE clustering algorithms, which suit deeper analysis of users; a sketch follows. The training model also adds user-oriented components such as user attribute analysis. The unsupervised learning user analysis model is shown in Figure 7.
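Correspondingly, a minimal $k$-means sketch on synthetic user-attribute vectors; all data and parameters are assumed for illustration.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic stand-in for user attribute vectors (e.g., spend, usage, tenure).
X, _ = make_blobs(n_samples=500, centers=4, n_features=3, random_state=0)

# k-means partitions users into segments without labels; BIRCH or CURE
# would play the same role on larger or noisier data.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_.round(2))  # one prototype "user" per segment
```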

6. Practical Analysis
6.1. Precision Marketing Analysis
This paper takes a telecom company as an example to analyze the role of big data in precision marketing and to compare different personalized recommendation methods. The data set comes from the company's customer marketing data from 2020 to 2021, from which the information and marketing plans of 10,000 customers are selected for analysis. The user information analysis of the selected users is shown in Figure 8.

Figure 8 shows the users' identity labels and their proportions, which supports the next step of the analysis: the average telecom expenses of each group, shown in Figure 9.

As Figure 9 shows, employees have the highest average monthly consumption. We could meet their needs by recommending telecom services priced around that average to this customer group, but such an analysis is not precise enough, so we need to drill down one more level. Taking employees as an example, the user label level is drilled down as shown in Figure 10.

Figure 10 gives a more specific breakdown of employees' average monthly consumption, which is very helpful for precision marketing. With enough data samples, more accurate marketing estimates become easier to make.
We use the big data-based supervised learning behavior analysis model (SLB) and unsupervised learning user analysis model (ULU) described above to perform decision analysis on this data sample. To verify the efficiency and accuracy of the big data-driven models, we add a microsample-driven prediction model (MPM) for comparison. The data parameter settings are shown in Table 2.
We first consider the analysis efficiency of the three models, because without efficient analysis the time needed to process massive big data is unacceptable. The analysis efficiency of the three models is shown in Figure 11.

As Figure 11 shows, the two big data-based models, SLB and ULU, achieve relatively high analysis efficiency because they are driven by big data and can process the data well, while the analysis efficiency of the MPM model is very low.
Besides analysis efficiency, the accuracy of each model also deserves attention. The accuracies obtained by training and testing the three models are shown in Figure 12.

Figure 12 shows the accuracy of the three models in simulating and analyzing the data. The accuracies of the two big data-based models, SLB and ULU, are both above 0.9, which shows that the actual training effects of the unsupervised and supervised modes differ little; the accuracy of supervised learning is slightly higher because supervised learning can regress on the prediction errors and improve overall accuracy.
6.2. Analysis of Personalized Recommendation Algorithm
For personalized recommendation, we use the neural network-based recommendation algorithm (W&D), the content-based recommendation algorithm (CR), and the user-based collaborative filtering algorithm (UserCF) to compare the efficiency and other characteristics of recommendation in telecom services.
First, the sample data is imported into the data source for algorithm analysis. The efficiency of each algorithm on big data also deserves attention; the analysis efficiency of the three algorithms is shown in Figure 13.

As Figure 13 shows, the user-based collaborative filtering algorithm has the highest analysis efficiency, because the main customer group of telecom services consists of individual users; evidently, different algorithms suit different application fields.
User tags have a hierarchical structure, and a personalized recommendation algorithm needs to drill down through the tag levels; deeper levels usually allow more accurate recommendations. The level drill-down rates are shown in Figure 14.

Figure 14 shows that the neural network-based algorithm drills down the hierarchy fastest, mainly because a neural network is itself built in layers and can predict the labels of the next level in time, a speed the other two algorithms cannot match.
Finally, we compare the accuracy of the three algorithms for personalized recommendation, as shown in Figure 15.

Figure 15 shows that the user-based collaborative filtering algorithm has the highest accuracy, consistent with the conclusion drawn from Figure 13: telecom services are mainly user-oriented, and with a sound user system the algorithm can reach high accuracy.
7. Conclusion
Based on big data theory, this paper applies precision marketing theory and personalized recommendation algorithms. Taking data from the telecommunications industry as the research object, the following conclusions are drawn about big data-based precision marketing and personalized recommendation.
Precision marketing can better analyze users' needs and so achieve marketing goals; the larger the number of samples, the more accurate the analysis; the big data-driven model improves the efficiency and accuracy of precision marketing; and among personalized recommendation algorithms, the user-based collaborative filtering algorithm is well suited to this scenario, with significant improvements in analysis efficiency and accuracy over the other two algorithms.
Data Availability
The experimental data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding this work.
Acknowledgments
This work was funded by the high-end training project for teachers in higher vocational colleges in Jiangsu Province/funded by the “Qinglan Project” of Jiangsu Higher Vocational Colleges.