Abstract

With the development of Internet platforms, the traditional economic model has gradually been abandoned because of its one-sided transactions and underdeveloped information dissemination. A new economic model was born with the information age. The network economic platform is a new productivity organization platform and a pillar industry of the new economy. Taking advantage of the wide reach of the Internet and the participation of individuals and organizations, network economic platforms play an important role in promoting multi-platform economic growth and mass innovation. However, as the network platform economy has developed, it has become saturated: its growth trend is not as strong as before, and the market is dominated by a few major platforms, so it is difficult for small businesses or companies to obtain high economic returns. Faced with these shortcomings, it is necessary to analyze the economic status of the network platform through big data and to develop new development strategies that make the network economy grow effectively. This paper draws conclusions by analyzing the economic growth capability, real-time resource allocation capability, customer mining capability, and flexible price adjustment capability of the network platform through technologies such as big data. The big data analysis network platform achieves an average efficiency of 93.8% in allocating resources in real time, and it is also far superior to traditional network platforms in mining customers and flexibly adjusting prices. The economic growth of online platforms under big data analysis is 65.7 times that of traditional platforms. To this end, a new strategy is needed: vigorously adjusting the resource allocation of the network platform and deeply mining platform information can effectively grow the platform economy.

1. Introduction

With the development of network technology and the widespread popularization of mobile communication technology, the traditional economic model is gradually declining. The Internet-based network economy has developed rapidly in recent years and has become one of the largest economies today. The network economy exists in people's daily lives in various forms, and traditional industries have also joined its ranks through Internet technology. The network economy is a broad concept; the sharing economy and the network platform economy are among its components. Internet technology provides broad development space for new industries. The Internet-based economic model has gone through three upgrade stages. The first stage is the industrial Internet era, which greatly improved labor productivity; at this stage, trade transactions were in a unilateral sales state, information spread very slowly, and buyers' purchase intention was not high. The second stage is the "Internet +" era, in which Internet technology developed unprecedentedly and was combined with many fields; the economy became two-way, and trade transaction information flowed frequently. The third stage is the rise of the online platform economy. The network platform is a new economic model: it promotes online transactions between merchants and customers, and the platform extracts platform fees from them, achieving a three-way win-win economy. The platform economy represents higher labor productivity and resource utilization. However, there are many problems in the development of the network platform. For example, network platforms sometimes do not measure resource allocation well, and improper allocation easily leads to conflicts between merchants and customers; network platforms cannot tap the potential value of customers well, resulting in wasted resources; and network platforms are monopolized by a few large platforms, so multi-platform development cannot be achieved.

This paper takes the new development strategy of the big data analysis network platform economy as its topic and analyzes the network platform economy through big data and other technologies, so as to solve the problems in the development of network platforms and realize the effective growth of the network platform economy. The innovations of this paper are as follows: (1) it analyzes the network platform economy by means of big data analysis; (2) it verifies, through comparative experiments, whether the new strategy for the network platform economy under big data analysis can make the economy grow effectively. Therefore, this study is innovative.

2. Related Work

With the development of Internet technology, the network platform economy has become one of the most important economies, and more and more people are conducting research on network platforms. Among them, Lehdonvirta et al. connected providers around the world through an online web platform and found that it has obvious advantages over the traditional economic model [1]. Solel's research showed that the online platform is an online model that realizes economic development through network technology [2]. Drahokoupil and Piasna pointed out that platform work is a kind of differentiated work that improves work efficiency through online information dissemination [3]. Meilhan found, by analyzing the proportions of laborers' income sources, that earning through online platforms accounts for a large share [4]. Kenney and Zysman investigated large network platform companies and learned that the network platform economy mainly relies on intermediaries to obtain profits [5]. Although the network platform economy is a major source of people's economic income, the network platform has not yet reached the highest standard in terms of resource utilization.

The Internet platform has greatly promoted economic development, but it has also exposed many problems, so it is necessary to analyze the network platform economy through big data technology. Among them, Kim used big data to analyze the economic sources of the network platform and learned that 80% of the platform's economy comes from merchants' platform entry fees [6]. Kuftinova et al. conducted big data analysis on a network platform company and used machine learning to predict the results [7]. In order to improve the accuracy of big data analysis in the economic analysis of the network platform, Yu adopted a clustering big data analysis algorithm [8]. Sun and He combined convolutional neural networks with big data to improve big data mining and analysis capabilities, which solves the difficult problem of network platform information analysis [9]. Nishiyama's research showed that online platforms are ubiquitous in people's lives, and it is crucial to use big data analysis to understand their economy [10]. Big data can indeed effectively analyze the network platform economy, but there is a lack of research on new development strategies.

3. The Method of Big Data Analysis Network Platform Economy

With the development of the Internet, mobile clients, and the Internet of Things, all kinds of data have grown exponentially. This is a product of the advent of the information age [11]. The network platform economy, as the major economy of recent years, has produced huge amounts of data and information while making full use of the Internet [12]. Only by quickly analyzing and extracting useful information from these data can the network platform go further; otherwise, the huge volume of data becomes cumbersome and compresses storage space. Big data technology extracts information useful for enterprise development from huge datasets, and it can effectively manage and predict the network platform economy [13]. The big data analysis process is divided into two parts: data processing technology and data analysis technology [14].

3.1. Data Processing Technology

In general, source data cannot be analyzed directly; some preparatory work on the source data is required first, that is, the data processing stage [15]. Data processing is divided into three processes: data extraction, data preprocessing, and data transformation. The data processing model diagram is shown in Figure 1.

3.1.1. Data Extraction Technology

A huge volume of source data of which only a small part is useful is a common problem in big data processing: the vast majority of the data is irrelevant to what is actually being analyzed; that is, there is too much dirty data. Similar problems arise when analyzing the economics of network platforms [16]. Therefore, it is necessary to extract data from the source data files. Data extraction greatly reduces the complexity of the data and has a good effect on clarifying the research objects and improving the data processing efficiency of the network platform [17].

The rough steps of data extraction are as follows: analyze the research object and determine the feature vector of the data, such as time, location, and type; classify data with the same characteristics; set a data range for each classified group; check the data in the dataset against the requirements of the data range; and transfer the data within the same range into the dataset to be studied. After these steps, the massive data are extracted into the specified dataset. In the process of transferring data, attention should be paid to preserving the integrity of the source data, because the data may need to be cross-used in subsequent research, and preserving data integrity is of great significance to the subsequent research [18].
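A minimal sketch of this extraction step, assuming records are dictionaries with time, location, and type fields (hypothetical names introduced here for illustration):

```python
from datetime import datetime

# Hypothetical records; the fields (time, location, type) follow the
# feature vectors mentioned above and are illustrative only.
source_data = [
    {"time": "2021-03-01", "location": "Beijing", "type": "order"},
    {"time": "2021-07-15", "location": "Shanghai", "type": "refund"},
    {"time": "2021-08-02", "location": "Beijing", "type": "order"},
]

def in_range(record, start, end, wanted_type):
    """Check a record against the data range set for its class."""
    t = datetime.strptime(record["time"], "%Y-%m-%d")
    return start <= t <= end and record["type"] == wanted_type

# Transfer matching records into the study dataset; the source list is
# left untouched, so its integrity is preserved for later cross-use.
study_set = [
    r for r in source_data
    if in_range(r, datetime(2021, 6, 1), datetime(2021, 12, 31), "order")
]
print(study_set)  # records from the second half of 2021 with type "order"
```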

3.1.2. Data Preprocessing

Because the network platform generates very large volumes of data, data loss easily occurs during transfer and storage, resulting in data distortion or incompleteness and a serious degradation of data quality. The data preprocessing process solves this quality degradation during data transfer. Data preprocessing mainly includes data cleaning, data constraint, data transformation, data analysis, acquisition of new data, and data integration. The data preprocessing process diagram is shown in Figure 2.

When dealing with duplicate data in the network platform economy, data preprocessing usually uses a proximity (sorted-neighborhood) algorithm for deduplication. The algorithm works by sorting the data, checking for duplicates, and eliminating them. Its specific steps are as follows: it creates keywords for sorting, generally taking a subset of the data attributes as the key. Then, it sorts the data by the keyword, so that records with similar serial numbers land in adjacent positions and the required data can be selected within one area. A sliding data window is then set over the sorted data, with the window size set to w data records; the data in the dataset are compared with the data in the window. Each record entering the window must therefore be compared with the w − 1 records already in the window, the comparison checking whether a repetition exists. Once a duplicate occurs, that record is deleted, which completes the deduplication operation [19].
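A minimal sketch of this sorted-neighborhood deduplication, with an illustrative key function and window size w:

```python
def sorted_neighborhood_dedup(records, key, w=3):
    """Sort by key, then compare each record only with the w - 1
    records before it in the sliding window; drop exact duplicates."""
    records = sorted(records, key=key)
    result = []
    for r in records:
        window = result[-(w - 1):]          # current sliding window
        if any(r == other for other in window):
            continue                        # duplicate found: delete it
        result.append(r)
    return result

data = [("A", 1), ("B", 2), ("A", 1), ("C", 3)]
print(sorted_neighborhood_dedup(data, key=lambda r: r[0]))
# [('A', 1), ('B', 2), ('C', 3)]
```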

However, the adjacent sorting algorithm has serious defects. First, its deduplication operation depends too much on the selection of keywords: once the keywords are poorly chosen, the deduplication effect will not be achieved. Second, the window size w is difficult to control to a reasonable value: if w is too small, duplicate records may be missed, and if w is too large, the deduplication time becomes too long and data may even be deleted by mistake. In view of these two defects, the algorithm is improved as follows:
(1) When the keyword is selected, the source data are used to preprocess the keyword, regularize it, and unify its format.
(2) The number of keywords is increased; that is, different keywords are selected from multiple angles and deduplication is performed from multiple angles.
(3) The size of the data window is made variable; that is, the size and position of the data window can be adjusted adaptively.

By improving the adjacent sorting algorithm, it can preprocess the extracted target data without changing the source data during data preprocessing. This ensures the integrity of the data, greatly shortens the preprocessing time, and effectively improves the efficiency of data processing.

3.1.3. Data Conversion

Generally speaking, big data is a feature dataset with multiple attributes. The dimensionality of the data is always much higher than the attributes required for experimental analysis, so big data is inherently redundant in its attributes [20]. Therefore, it is necessary to perform multi-attribute dimensionality reduction on big data. The purpose of data transformation is to reduce the multi-attribute dimensionality of the source data to be consistent with the attributes to be studied, thus achieving dimensionality reduction, reducing the complexity of the network platform's economic data, and improving data processing efficiency.

The core step of data transformation is to map the attributes of the source data to the attributes of the target data. It uses the gray relational analysis method to test each attribute of the source data. The algorithm principle of the traditional gray relational analysis method is as follows:
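A standard form of the grey relational coefficient, reconstructed to match the notation explained below, is:

$$K_{ab} = \frac{\min_{a}\min_{b}\left|R_{0b}-R_{ab}\right| + \rho \max_{a}\max_{b}\left|R_{0b}-R_{ab}\right|}{\left|R_{0b}-R_{ab}\right| + \rho \max_{a}\max_{b}\left|R_{0b}-R_{ab}\right|} \tag{1}$$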

In formula (1), K_ab represents the correlation coefficient between the b-th data in the a-th sequence and the b-th data in the source data, R_0b represents the b-th data (attribute) of the source sequence, and R_ab represents the b-th data in the a-th comparison sequence. ρ is the resolution coefficient, min_a is the smallest datum in the a-th sequence, and max_a is the largest datum in the a-th sequence.

It uses the correlation coefficient to find the correlation degree as follows:
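In the standard form, the degree of association averages the correlation coefficients over the sequence:

$$r_a = \frac{1}{N}\sum_{b=1}^{N} K_{ab} \tag{2}$$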

In formula (2), r_a is the degree of association, and N represents the number of data elements in the sequence.

By improving formula (2), it obtains a new assignment formula as
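One plausible reconstruction, with w_j (a symbol introduced here) denoting the average sum of the weights, combines the two weights evenly:

$$r_j = \frac{w_j + t_j}{2} \tag{3}$$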

In formula (3), w_j is the average sum of the weights of the jth data, t_j is the weight of the jth data, and r_j is the combined assignment weight.

By testing the correlation between the source data and the target data, it saves the attributes with high correlation, deletes the unmatched attribute data, and reduces the attribute dimension of the data.

3.2. Data Analysis Technology

Data processing is the prework of data analysis; data analysis is the key core technology of big data. Data analysis processes the data automatically, then summarizes and organizes them to find the hidden characteristics of the data. Based on this hidden information, it can help the network platform predict risks and allocate resources precisely to a large extent, and help the platform make correct decisions in time [21]. There are four main types of big data analysis methods: classification algorithms, association rule algorithms, data regression algorithms, and data clustering algorithms.

Faced with the characteristics of the network platform, a classification algorithm is first needed to summarize and organize the platform data. For cross-events on the network platform, an association rule algorithm is needed to judge whether several things are associated. When the platform's data partitioning is to be processed, a data clustering algorithm is needed for classification processing. When the expected values of the network platform are to be predicted, a data regression algorithm is needed to perform nonlinear forecasting on the data, so as to achieve accurate prediction. The big data analysis and processing architecture is shown in Figure 3. Several data analysis algorithms commonly used on network platforms are introduced below.

3.2.1. Decision Tree Classification Algorithm

The meaning of classification is to find data with the same characteristics in a huge data group and divide them into different categories according to those characteristics. A classification model is generally used to divide the categories, and the data in the source data are then mapped to the specified categories; this is usually used in network platform data classification and trend forecasting. There are many data classification algorithms, but the most used is the decision tree-based classification algorithm [22].

A decision tree algorithm is an algorithm that mimics a tree. It takes the data attribute as a node and the value of the attribute as a branch of the node. The specific algorithm process of the decision tree is as follows.

It sets the data source as T and the data categories as D_1, D_2, …, D_m. It sets an attribute V to divide the data source into multiple subsets, where V has n values V_1, V_2, …, V_n, and the data source is correspondingly divided into subsets T_1, T_2, …, T_n; then, the mapping of T_1 is V_1.

It sets the mapping of T_i as V_i, where |D_j| is the number of instances of the D_j data category, and |D_jV| represents the number of instances in the D_j category whose attribute value is V_i.

Then, the probability of occurrence of Dj category is
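In standard notation, with |D_j| the number of instances in category D_j and |T| the total number of instances:

$$P(D_j) = \frac{\left|D_j\right|}{\left|T\right|} \tag{4}$$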

And the probability of occurrence of attribute Vi is
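Analogously, with |T_i| the number of instances whose attribute value is V_i:

$$P(V_i) = \frac{\left|T_i\right|}{\left|T\right|} \tag{5}$$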

When the attribute is Vi, the probability of the data category Dj is
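Using the count |D_jV| defined above:

$$P(D_j \mid V_i) = \frac{\left|D_{jV}\right|}{\left|T_i\right|} \tag{6}$$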

The pheromones for the data categories are
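Reading the "pheromone" as the information entropy of the class distribution, as in ID3, the standard form is:

$$G(D) = -\sum_{j=1}^{m} P(D_j) \log_2 P(D_j) \tag{7}$$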

In formula (7), G(D) is the pheromone of D class data.

The conditional elements for the data categories are
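In the standard reading, this is the conditional entropy of the classes after splitting on V:

$$F(D, V) = \sum_{i=1}^{n} P(V_i)\, G(D \mid V_i) \tag{8}$$

where G(D | V_i) is the entropy of the class distribution within the subset T_i.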

In formula (8), F(D, V) is the conditional element that divides the D categories according to the V attribute.

The pheromone gain is
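In the standard reading, this is the information gain of attribute V:

$$\operatorname{Gain}(V) = G(D) - F(D, V) \tag{9}$$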

The pheromone of the data attribute V is
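In the standard reading, this is the split information used by gain-ratio methods:

$$G(V) = -\sum_{i=1}^{n} P(V_i) \log_2 P(V_i) \tag{10}$$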

The establishment of a decision tree is generally divided into two processes: the growth and the pruning of the tree.

Growth method: at each division, the attribute whose characteristics are most discriminative is chosen for the split. Attribute partitioning divides the source data into many subspaces, and the division step is repeated until the data in each subspace belong to the same class [23].

Pruning method: the attribute features of the data must not be selected too deep; otherwise, the tree becomes unrepresentative of the data. Each pruning step removes the branch supported by the smallest amount of training data.
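As a brief illustration (a sketch with assumed toy data, not the paper's implementation), scikit-learn's entropy-based decision tree follows the same grow-then-limit idea:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy network-platform records: [daily visits, conversion rate]; the
# labels (0 = low value, 1 = high value) are illustrative only.
X = [[100, 0.01], [250, 0.02], [900, 0.08], [1200, 0.10], [150, 0.01], [1100, 0.09]]
y = [0, 0, 1, 1, 0, 1]

# criterion="entropy" grows the tree by information gain, and
# max_depth caps growth as a simple stand-in for pruning.
clf = DecisionTreeClassifier(criterion="entropy", max_depth=2)
clf.fit(X, y)
print(clf.predict([[1000, 0.07]]))  # -> [1]
```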

3.2.2. Logistic Data Regression

Data regression analysis abstracts the attribute characteristics of a dataset and finds the attribute dependencies between data through mapping relationships. It is generally used to predict data changes in network platforms, and regression analysis includes linear analysis and nonlinear analysis [24].

Linear regression can be regarded as solving a one-dimensional linear formula in the independent variable x and dependent variable y:
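In the standard form:

$$y = a_0 + a_1 x \tag{11}$$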

In formula (11), a0 and a1 represent coefficients of regression analysis.

If the variance of the dependent variable y is constant, then a0 and a1 can be estimated linearly. If there are m (x, y) pairs in the dataset, then a1 and a0 can be expressed as
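In the standard least-squares form:

$$a_1 = \frac{\sum_{i=1}^{m}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)}{\sum_{i=1}^{m}\left(x_i-\bar{x}\right)^{2}} \tag{12}$$

$$a_0 = \bar{y} - a_1 \bar{x} \tag{13}$$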

In formulas (12) and (13), x̄ is the average of the x values and ȳ is the average of the y values.

In linear regression analysis, the value of x can be either continuous or discrete. However, continuous regression analysis is prone to errors, so when using discrete variables, the logistic model is needed to eliminate this influence [25].

Logistic regression analysis is also a linear regression method, and the formula is expressed as
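In the standard form, taking the sigmoid as the hypothesis function:

$$f(x) = \frac{1}{1 + e^{-x}} \tag{14}$$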

In formula (14), f(x) is the hypothesis function, and e is the natural base.

Logistic regression analysis has an excellent prediction effect on two-sided (binary) problems of network platform data.
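As a brief illustration (a sketch with assumed toy data, not the paper's experiment), a scikit-learn logistic regression on such a binary platform question might look like this:

```python
from sklearn.linear_model import LogisticRegression

# Toy two-sided (binary) platform problem: does a customer return?
# The features [orders last month, complaints] are illustrative only.
X = [[1, 3], [2, 2], [8, 0], [9, 1], [0, 4], [7, 0]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)
print(model.predict_proba([[5, 1]]))  # class probabilities via the sigmoid of formula (14)
```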

3.2.3. Association Rule Algorithm

The association rule algorithm is used to find hidden relationships between data, that is, situations where other data can be found through the analysis of one datum. The association rule algorithm is divided into two processes: finding the data with high occurrence frequency in the source data, and finding the corresponding association rules from the high-frequency data. The most commonly used rule algorithm is the Apriori algorithm, and its process is as follows.

It traverses the dataset and finds the high-frequency set A1. It records the high-frequency candidates generated from A1 as A2, and filters and prunes A2 according to the A1 set. It then traverses the A2 dataset to find the higher-order frequent set B1, and continues in this way until no data with sufficiently high frequency can be found.

The pruning operations are as follows.

It tests the preselected set An for high-frequency data, removes infrequent data items, and then compresses the space for traversing the dataset. It repeatedly searches the An + 1 dataset for infrequent data and prunes them, shortening the range of the dataset traversal until no further traversal is possible, at which point the algorithm finishes.
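A minimal self-contained sketch of this level-wise search and pruning (the toy baskets and minimum support threshold are assumptions introduced here):

```python
from itertools import combinations

def apriori(transactions, min_support=2):
    """Level-wise frequent-itemset search with candidate pruning."""
    items = {frozenset([i]) for t in transactions for i in t}
    frequent, k_sets = [], items
    while k_sets:
        # Count support by traversing the dataset once per level.
        counts = {c: sum(c <= t for t in transactions) for c in k_sets}
        level = {c for c, n in counts.items() if n >= min_support}
        frequent.extend(level)
        # Generate next-level candidates; prune any candidate whose
        # subsets are not all frequent (the pruning step above).
        k_sets = {a | b for a in level for b in level if len(a | b) == len(a) + 1}
        k_sets = {c for c in k_sets
                  if all(frozenset(s) in level for s in combinations(c, len(c) - 1))}
    return frequent

baskets = [{"milk", "bread"}, {"milk", "bread", "eggs"},
           {"bread", "eggs"}, {"milk", "eggs"}]
print(apriori(baskets))
```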

3.2.4. Data Clustering Algorithm

The data clustering algorithm is an unsupervised learning algorithm. Data clustering divides the source data into several categories according to the attributes of the data. Data in the same category therefore have similar characteristics, while the similarity of data features across different categories is very low. A clustering algorithm often used on network platforms is the Gaussian mixture model clustering algorithm. The Gaussian mixture model diagram is shown in Figure 4.

The core of Gaussian mixture clustering is an alternating estimation: it starts from pre-estimated parameters, uses the assigned value of one group of parameters to predict the other group, and then re-predicts the first group in turn. It iterates in both the forward and reverse directions until the estimates on both sides approach the true values [26].

It assumes that one of the variables is X, and then the predicted probability is
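In the standard mixture form, with φ_i the density of the ith component:

$$p(X) = \sum_{i=1}^{K} \alpha_i\, \phi_i(X) \tag{15}$$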

In formula (15), φ_i is the ith component in the model.

The Gaussian model needs to predict three parameters, and formula (15) is expanded into
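Writing each component as a Gaussian density, the standard form is:

$$p(x \mid \theta) = \sum_{i=1}^{K} \alpha_i\, N\!\left(x \mid \mu_i, \sigma_i^{2}\right) \tag{16}$$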

In formula (16), α_i is the probability that the ith class is selected, μ_i is the mean of the selected class, and σ_i² is the quantity that needs to be estimated.

The above three parameters are usually predicted using the Bayesian algorithm, and the algorithm steps are as follows:

It sets the initial values of α_i, μ_i, and σ_i², and substitutes these three parameters into formula (16) to get
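In the standard derivation, this gives the likelihood over the N data points:

$$L\left(\alpha, \mu, \sigma^{2}\right) = \prod_{n=1}^{N} \sum_{i=1}^{K} \alpha_i\, N\!\left(x_n \mid \mu_i, \sigma_i^{2}\right) \tag{17}$$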

It solves for μ_i, which requires taking the logarithm of formula (17) and setting the derivative with respect to μ_i to 0:
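In the standard derivation, with γ(i | x_n) the posterior responsibility of component i for point x_n:

$$\frac{\partial \ln L}{\partial \mu_i} = \sum_{n=1}^{N} \gamma(i \mid x_n)\, \frac{x_n - \mu_i}{\sigma_i^{2}} = 0, \qquad \gamma(i \mid x_n) = \frac{\alpha_i\, N\left(x_n \mid \mu_i, \sigma_i^{2}\right)}{\sum_{k=1}^{K} \alpha_k\, N\left(x_n \mid \mu_k, \sigma_k^{2}\right)} \tag{18}$$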

After arranging formula (18), we can get
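In the standard form:

$$\mu_i = \frac{\sum_{n=1}^{N} \gamma(i \mid x_n)\, x_n}{\sum_{n=1}^{N} \gamma(i \mid x_n)} \tag{19}$$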

In formula (19), N is the number of data points, and γ(i | x_n) is the posterior probability that data point x_n belongs to component i.

It finds the maximum likelihood value of α_i, which can be obtained as follows:
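In the standard form:

$$\alpha_i = \frac{1}{N} \sum_{n=1}^{N} \gamma(i \mid x_n) \tag{20}$$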

It loops the steps of formulas (17)–(20) until the predicted parameters converge to their exact values and then stops the loop.
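As a brief illustration (a sketch with synthetic data introduced here), scikit-learn's GaussianMixture runs an EM loop of the kind sketched in formulas (17)–(20):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy one-dimensional platform metric drawn from two regimes;
# the sample data are illustrative only.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)]).reshape(-1, 1)

# The fit alternates responsibility estimation and parameter updates.
gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
print(gmm.weights_)      # estimated alpha_i
print(gmm.means_)        # estimated mu_i
print(gmm.covariances_)  # estimated sigma_i^2
```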

4. Experiment and Analysis of Big Data Analysis Network Platform Economy

4.1. Experiments on the Economic Effectiveness of Network Platforms
4.1.1. Sample Data

In order to make a multi-faceted comparison between the big data analysis network platform economy and the traditional network platform economy, the sample selection of the experiment must be very strict. Improper selection of experimental samples will bias the experimental results, and the selected samples should include data from different levels.

In order to ensure the validity of the experiment, 24 network platform companies were selected as the sample data for the network platform economic effectiveness experiment. The 24 companies are divided into large-scale, medium-scale, and small-scale groups, with 8 companies of each size. The economic situation of these sample companies over a period of time is analyzed, and the data indicators that have a greater impact on the economy of network platform companies are calculated. Table 1 shows the economic indicator data of the network platform.

From the data in Table 1, it can be found that companies of all three sizes are greatly affected by the online platform economic indicators. Among them, the impact rate of small-scale companies is about 60%, the impact rate of medium-scale companies is about 80%, and the impact rate of large-scale companies is more than 90%.

4.1.2. Correlation Analysis of Samples

The quality of the online platform economic sample directly affects the results of the experiment. Therefore, when choosing the economic indicators of the network platform, the experiment also needs to analyze the degree of correlation between those indicators and the network platform economy [27]. Correlation analysis of samples solves the problem of improper sample selection and can amplify the characteristics of experimental data. It is necessary to observe whether these indicators can affect the economy of the network platform. The indicator data in Table 1 account for a large share for all three company sizes, and because the data dimension of this experiment is not very large, the correlation analysis of the network platform economy is carried out on all 6 indicators in Table 1. Table 2 shows the analysis results of the economic correlation of the network platform.

By analyzing the data in Table 2, it can be concluded that the resource integration capability has the highest degree of relevance to the network platform economy, while the platform guarantee capability has the lowest. In general, however, all indicators have a great impact on the economy of online platforms, and the data dimension of the experiment is low. Therefore, the six indicators in the table can be used to evaluate the economic quality of the network platform.

4.1.3. Validity Analysis of Samples

In order to test whether the network platform economic indicators are valid for the comparative experiment between the big data analysis network platform economy and the traditional network platform economy, the experiment adopts k-fold cross-validation. All the sample data are divided into k equal parts; k − 1 parts are selected as the training set, and the remaining part serves as the test set. The experiment is carried out k times, changing the test fold each time, until each part has been tested once. The final experimental result is the average of the k runs.
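A minimal sketch of this 4-fold procedure (the classifier, synthetic data, and labels are assumptions introduced here for illustration):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

# 800 illustrative samples with 6 indicator features, matching the
# 4-fold split described above (600 training / 200 test per fold).
rng = np.random.default_rng(0)
X = rng.normal(size=(800, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic labels

scores = []
for train_idx, test_idx in KFold(n_splits=4, shuffle=True, random_state=0).split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))

print(np.mean(scores))  # final result: the average over the 4 folds
```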

The data dimension of this experiment is not high, and the amount of experimental data is not large, so k = 4 is selected; that is, the experiment performs 4-fold cross-validation. In the experiment, 800 items were randomly selected as experimental data according to the 4-fold cross-validation method; in each fold, 600 sets of data are training data and the other 200 sets are test data. Experiments are conducted on the big data analysis network platform economy and the traditional network platform economy, respectively. The 4-fold cross-validation results are shown in Table 3.

From the data analysis in Table 3, it can be seen that the indicator with the greatest influence in the big data analysis network platform economy is the real-time resource allocation capability, while in the traditional network platform economy it is the resource integration capability. The average impact rates of the two network platform models across the six indicators are 89.2% and 75.4%, respectively. Since both models show high impact rates under these indicators, a comparative experiment between the two can be carried out.

4.2. Economic Comparison Experiment between Big Data Analysis Mode and Traditional Network Platform

The economic model of the big data analysis network platform is based on big data analysis technology to realize the management and control of the network platform, while the traditional network platform is a platform with the Internet as the medium. The experiment compares the two platforms' economies along four dimensions: economic growth capability, real-time resource allocation capability, customer mining capability, and flexible price adjustment capability.

4.2.1. Comparative Experiment of Economic Growth Capability of Network Platforms

Whether it is a big data analysis network platform or a traditional network platform, the most important measure of a platform's quality is its economic performance. In order to ensure the validity of the experiment, the control variable method is adopted: the experimental environment and the network platform resources are kept identical, and only the network platforms themselves differ. Since the platform economy does not grow all at once but takes time, the experiment lasted one year. An initial capital of 300 million was set and allocated to the same network resources for both platforms. The economic growth capacity of the network platforms is shown in Figure 5.

From the data analysis in Figure 5, we can see that the big data analysis network platform economy grows exponentially, with a huge increase of 19.7 billion in one year, while the traditional online platform economy grows linearly and slowly, with an increase of 2.2 billion in one year.

4.2.2. Comparison Experiment of Real-Time Resource Allocation Capability of Network Platform

When the network platform encounters high-frequency access, the system often crashes. For example, sometimes the response speed when accessing one product on the network platform is very fast, while the response speed when accessing another product is very slow. This is caused by the platform's poor ability to allocate resources in real time. The real-time resource allocation capability of a network platform reflects its stability and is an important indicator of platform quality. The experiment tests the real-time resource allocation capability of large-, medium-, and small-scale network platforms, setting 4 different gradients of task volume (the number of visits per millisecond): 1000, 2000, 3000, and 4000. The real-time resource allocation capability of the network platform is shown in Figure 6.

From the data analysis in Figure 6, it can be seen that the larger the scale of the network platform, the stronger the ability to allocate resources in real time. But at the same scale, the real-time resource allocation capability of the big data analysis network platform is much better than that of the traditional network platform.

4.2.3. Comparison Experiment of Network Platform Mining Customer Capability

The ability to mine customers reflects the expansion capability of a network platform. Customer mining is reflected in many aspects, generally the customer return rate, the growth rate of new customers, the rate of new business from existing customers, and so on. The experiment again uses the control variable method to ensure that the environment and resources of the two network platforms are the same, and analyzes their customer mining abilities independently. The experiment analyzes three network platforms of different scales. Figure 7 shows the customer mining results of the network platforms.

From the data analysis in Figure 7, it can be seen that the larger the scale of the network platform, the stronger its ability to mine customers. But at the same scale, the big data analysis network platform has, on average, a 20% stronger customer mining capability than the traditional network platform.

4.2.4. Comparison Experiment of Network Platform’s Ability to Flexibly Adjust Price

Whether an online platform can maximize its revenue depends mainly on whether it can adjust prices flexibly: when the market price is high, the platform price can be appropriately reduced, and when the market price is low, it can be appropriately increased. Figure 8 shows the price adjustment ability of the online platforms.

From the data analysis in Figure 8, it can be seen that the larger the scale of the network platform, the stronger its ability to flexibly adjust prices. Under the same scale, however, the flexible price adjustment ability of the big data analysis network platform is on average much better than that of the traditional network platform, reaching a maximum of 96%.

4.2.5. Experimental Analysis of Two Network Platforms

Through the comparison of the two network platforms, the experimental results show that the network platform of big data analysis is superior to the traditional network platform in terms of economic growth ability, real-time resource allocation ability, customer mining ability, and flexible price adjustment ability. The specific comparison data are shown in Table 4. As a result, the economic development of the network platform for big data analysis is far better than the economic development of the traditional network platform.

5. Discussion

With the development of technologies such as big data and data mining, network platforms have gradually emerged in recent years. However, after years of development the network platform economy has approached saturation, so it is urgent to develop a new development strategy for it. It is necessary to use the data processing ability and the data analysis and prediction ability of big data to monitor the network platform economy and help it grow.

6. Conclusions

Through the comparison of the big data analysis network platform economy and the traditional network platform economy, the following conclusions are drawn: (1) the economic growth capacity of the big data analysis network platform economy is 65.7 times that of the traditional network platform economy. (2) The big data analysis network platform is more than 10 percentage points better than the traditional network platform in real-time resource allocation, customer mining, and flexible price adjustment. (3) By analyzing the network platform through big data, resource allocation can be adjusted in a timely manner and customers can be deeply mined, which effectively grows the network platform economy. However, the online platform economy is prone to monopoly, with the vast majority of the economy occupied by a few big companies. Therefore, diversifying the online platform economy is a direction for future work.

Data Availability

No data were used to support this study.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this article.