Abstract
With the acceleration of financial globalization, the risks faced by financial markets have become more complex and diversified. Correlation patterns among financial assets exhibit nonlinearity, asymmetry, and tail dependence, so the traditional linear correlation analysis method can no longer capture the relevant information describing financial risks. To judge whether an asset is safe, the key is to study and master its volatility, which in turn rests on volatility measurement techniques. Based on smart sensor big data security analysis and Bayesian analysis, this article studies the risk measurement of financial assets under an empirical probability model. After a generalized autoregressive conditional heteroskedasticity (GARCH) model is established in the EViews software, the GARCH(1,1) specification is selected according to the Akaike information criterion (AIC). From the results of the probability integral transformation, a series of correlation coefficients and degrees of freedom of the t-copula are obtained by maximum likelihood estimation. This paper uses the risk-adjusted return on capital (RAROC) method to evaluate the risk performance of financial assets; financial institutions can only retain and absorb the financial market risks that cannot be avoided or transferred. The edge user node sends a service request to the edge server node; the edge server uses the model proposed in this paper to evaluate the user's trust and selects the corresponding service level according to the trust level implied by the calculated credibility result. The data show that the edge calculation takes 0.2581 seconds, while a linear search takes about 64 seconds. The results show that intelligent edge computing improves the accuracy and efficiency of financial asset risk measurement.
1. Introduction
For nearly half a century, the world's financial and economic markets have developed rapidly, and economic globalization and financial integration have gradually become their basic characteristics. Many high-risk financial derivatives have been created, increasing the destructive power of financial risks to the financial system and making financial turbulence ever more likely, which has led financial institutions, investors, regulators, and academia to pay close attention to financial risk measurement. The Bayesian posterior method integrates prior information about the unknown parameters with the sample information, obtains the posterior information according to the Bayesian formula, and then infers the unknown parameters from the posterior information.
Currently, big data is widely used in all walks of life, and the big data era is gradually taking shape. As an irreplaceable information technology, it has created great advantages for all industries: it provides a strong guarantee for market development and changes traditional business models to improve the economic benefits of enterprises. Big data technology has also changed consumer behavior; it can provide better services for consumers, optimize and improve product quality, improve economic benefits for corporations, and reduce the inventory of corporate products. It forecasts changes in the market and provides an important guarantee for enterprises. Big data analysis technology is developing faster and faster in China, and it is believed that in the near future it will be widely and effectively applied across industries.
The risk measurement of financial assets can improve the balance of information in the financial market. Liu et al. note that, with the explosive development of smart cities in recent years, green energy management systems have received extensive research attention, focused on engineering web-based power generation systems that use edge infrastructure together with deep reinforcement learning. They first gave an overview of energy management based on the Internet of Things in smart cities, then proposed a framework and software model for an IoT system based on edge computing, and finally proposed an effective energy-dispatching scheme based on deep reinforcement learning for the proposed framework. Their research lacks a certain degree of innovation [1]. According to Cao et al., multiaccess edge computing (MEC), deployed proximal to mobile user terminals as a complement to existing remote cloud centers, has been identified as a promising technology for 5G heterogeneous networks. To adapt to stochastic and changeable environments, augmented intelligence (AI) is introduced into MEC with a view to operating in an intelligent fashion. They introduced the basic concepts and main applications of MEC and reviewed the existing fundamental work using various ML-based methods; in addition, they discussed some potential problems of AI in MEC for future work. Their research is not accurate enough [2]. Xu et al. argue that edge caches are vulnerable to cache pollution attacks (CPAttacks), leading to interruption of content delivery. To solve this problem, they proposed a CPAttack-detection scheme based on the hidden Markov model (HMM). In their CPAttack model, the cache status of edge devices is characterized by two parameters: the content request rate and the cache miss rate. Using the observation sequence constructed from the cache state, they developed an HMM-based detection algorithm to identify CPAttacks in a time-invariant content request process; to cope with the lack of training data and the dynamic change of the cache state, they further designed an adaptive HMM (AHMM) algorithm to detect CPAttacks in a time-varying content request process. Their research lacks necessary data [3]. Luo et al. observe that vehicle edge computing (VEC) integrates mobile edge computing (MEC) into the vehicle network, which can provide more capacity to execute resource-constrained applications and reduce the waiting time for connected vehicles. They first proposed a dual importance (DI) evaluation method to reflect the relationship between the priority of vehicles (PoV) and the priority of contents (PoC); they then proposed a fuzzy-logic-based method to select the most appropriate content replica vehicle (CRV) to assist content distribution and redefined the number of content-requesting vehicles in each segment, and they proposed an immune-clone-based algorithm to solve this problem. Their research method has certain flaws [4].
According to the inference node allocation algorithm proposed based on the edge computing idea, different rule sets and conditions are designed for different simulation environments, and the two evaluation parameters designed for real-time performance and resource balance are observed to verify the performance of the proposed algorithm: both real-time performance and resource balance are greatly improved. This article uses the CCA method to analyze the risk levels of the insurance and securities sectors and combines the analysis results for the banking sector and the overall financial sector to explore the specific risk situation of China's financial system, which covers a wider range. Bayesian networks can combine prior information or expert knowledge with sample data to solve problems in related fields; when modeling with little sample data, prior information and domain knowledge play an important role.
2. Intelligent Edge Computing and Bayesian Posterior Probability Model
2.1. Intelligent Edge Computing
Each mobile device in the edge node contains a trusted execution environment to store pseudonyms, and malicious adversaries cannot tamper with the stored information. When the key generation center traces a single point of failure of a malicious vehicle, the edge node assists the key generation center in completing the dual traceability of the malicious vehicle. The vehicle migrates multiple tasks to the mobile device for processing. Before accepting a processed message, the vehicle should check the integrity of the message obtained from the mobile device and share data with vehicles in the same area through edge nodes, reducing system redundancy and overhead [5].
Therefore, the wireless access network gains low-latency, high-bandwidth transmission capabilities together with the perception and exposure of wireless network information, avoiding bottlenecks and system failures. At the same time, sinking computing tasks and data, that is, localized deployment, can effectively reduce the computing and storage load of mobile systems, thereby achieving the goal of optimizing mobile network operating costs [6]. In addition, mobile operators can use mobile edge computing platforms to form a new industry chain that benefits from the cooperation of mobile cloud platforms, application developers, and network equipment manufacturers. Mobile edge computing can be applied to the business models of specific applications, benefiting both application service providers and application service users [7].
Based on the real-time information of the wireless access network, the degree of congestion of the wireless cellular network and the occupancy of network bandwidth can be estimated, which will assist such applications to make decisions to improve the quality of service. In the same-frequency networking mode, the task migration system adopts a full spectrum reuse mode to improve the utilization of the spectrum, but this method will introduce interference between cells. Compared with the same-frequency networking, the cells in the interfrequency networking mode use different frequency band resources [8]. Although the spectrum efficiency of this method is lower, the resource management algorithm is simple, and cell users under the same frequency have higher transmission efficiency. From the perspective of edge servers, based on the strong computing and communication capabilities of edge servers, edge computing system managers expect to be able to share information related to road safety and popular content uploaded by vehicles with other vehicles in real time, thereby improving road traffic safety and obtaining more benefits [9].
In order to realize the rapid and efficient diffusion of traffic information, vehicles and edge servers need to obey certain computation migration rules. Through tree search, MCTS obtains each mobile device's optimal decision on the offload rate, computing resource rate, and communication resource rate; it does so by jointly simulating future state trajectories and outputting the best strategy for the current and future moments. A DNN is used to generate the prior probability distribution that guides the MCTS search and accelerates its convergence. To train the DNN, this paper collects training data and labels from the iterative results of MCTS [10].
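As a hedged illustration (the paper does not state its selection rule explicitly), DNN-guided MCTS commonly chooses actions with a PUCT-style criterion, in which the network's prior probability $P(s,a)$ biases the tree search toward promising offloading decisions:

$$a^{*} = \arg\max_{a}\left[Q(s,a) + c\,P(s,a)\,\frac{\sqrt{\sum_{b} N(s,b)}}{1+N(s,a)}\right],$$

where $Q(s,a)$ is the estimated value of decision $a$ (e.g., a candidate offload rate) in state $s$, $N(s,a)$ is its visit count, and $c$ is an exploration constant; training the DNN on MCTS iteration results sharpens this prior and accelerates convergence.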
2.2. Bayesian Posterior Probability Model
The parameters of the Bayesian network are the probability distribution corresponding to each node variable, which are usually obtained by training and learning from the training sample data set. However, when the fault relationship is simple and clear, sometimes the corresponding network parameters can be given directly by analyzing the characteristics of the problem. Bayesian theory is a very effective modeling method for evaluating chemical abnormal events with low probability and high risk [11]. Directional energy is defined as

$$E_{\theta,s}(x,y) = \left(I * o_{\theta,s}\right)^{2}(x,y) + \left(I * e_{\theta,s}\right)^{2}(x,y),$$

where $o_{\theta,s}$ and $e_{\theta,s}$ are the odd-even symmetric orthogonal filter banks in the direction $\theta$ and at the scale $s$. The expression of the likelihood function is as follows:

$$L\left(\theta \mid x_{1},\dots,x_{n}\right) = \prod_{i=1}^{n} f\left(x_{i} \mid \theta\right).$$
According to the properties of exchangeability and conditional probability,

$$p\left(x_{1},\dots,x_{n}\right) = \int \left[\prod_{i=1}^{n} f\left(x_{i} \mid \theta\right)\right] \pi(\theta)\, d\theta.$$
The prior information is updated through historical data, and the posterior probability density function is obtained. Its expression is as follows:

$$\pi\left(\theta \mid x_{1},\dots,x_{n}\right) = \frac{\left[\prod_{i=1}^{n} f\left(x_{i} \mid \theta\right)\right]\pi(\theta)}{\int \left[\prod_{i=1}^{n} f\left(x_{i} \mid \theta\right)\right]\pi(\theta)\, d\theta}.$$
The probability density function of the prior distribution is as follows:

$$\pi\left(\sigma^{2}\right) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\left(\sigma^{2}\right)^{-(\alpha+1)} \exp\left(-\frac{\beta}{\sigma^{2}}\right), \quad \sigma^{2} > 0.$$

The expectation and variance of $\sigma^{2}$ are

$$E\left(\sigma^{2}\right) = \frac{\beta}{\alpha-1}\ (\alpha>1), \qquad \operatorname{Var}\left(\sigma^{2}\right) = \frac{\beta^{2}}{(\alpha-1)^{2}(\alpha-2)}\ (\alpha>2).$$
The expected value of the distribution is estimated using samples drawn from the probability density function:

$$\hat{E}\left[g(\theta)\right] = \frac{1}{N}\sum_{i=1}^{N} g\left(\theta^{(i)}\right),$$

where $N$ is the number of iterations.
Define the texture gradient as the distance between these two histograms:

$$TG(x,y,\theta) = \frac{1}{2}\sum_{k}\frac{\left(g(k)-h(k)\right)^{2}}{g(k)+h(k)},$$

where $g$ and $h$ are the two texture histograms.
In order to ensure stability, the improved feature is defined as

$$\tilde{F} = \frac{F}{F+\lambda},$$

where $\lambda$ is a parameter used to optimize the feature.
The Bayesian framework combines the prior probability with the observation likelihood probability to obtain the final pixel-level saliency detection result:

$$S(v) = p(F \mid v) = \frac{p(F)\,p(v \mid F)}{p(F)\,p(v \mid F) + p(B)\,p(v \mid B)},$$

where $p(F)$ and $p(B)$ represent the prior distributions of the target and the background, respectively. An efficient and scalable feature extraction algorithm for time series filters the available features in the early stages of the machine learning pipeline, so as to obtain features that are meaningful for classification or regression while limiting the selection of unrelated features as far as possible.
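A minimal sketch of such a pipeline, assuming the tsfresh Python library (the paper does not name its implementation); the inputs `timeseries_df` and `y` are illustrative:

```python
# Hedged sketch: scalable time-series feature extraction with early filtering
# of irrelevant features; tsfresh and the column names are assumptions.
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import impute

# timeseries_df: long-format DataFrame with "id" and "time" columns;
# y: target Series indexed by the same ids.
features = extract_features(timeseries_df, column_id="id", column_sort="time")
impute(features)                         # replace NaN/inf values in place
relevant = select_features(features, y)  # hypothesis tests drop unrelated features
```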
The Bayesian network model construction process is shown in Figure 1. The failure prediction method based on the physics-of-failure model analyzes equipment failures, mines the relevant characteristic parameters at the time of failure, and establishes a failure model of the equipment on the basis of its operating principle and physical nature; the data collected from sensors are then combined with the failure model to predict potential failures, so as to reduce the number of maintenance actions and maintenance costs and to extend the maintenance cycle of the equipment. This method is generally applicable to equipment with a simple system composition and operating principle, and it is not suitable for equipment with a complex structure, such as complex electronic information systems and mechanical equipment systems, for which fault prediction technology based on data and statistics is generally used instead. After the Bayesian network model is constructed in this paper, the posterior probability of each fault node factor at the time of failure can be inferred from the given prior probabilities [12, 13].
Figure 1: Construction process of the Bayesian network model.
As shown in Figure 1, the states of the nodes and the model evaluation should be determined first, and then the occurrence probability of each node and the equipment failure rate are determined by using the Bayesian network structure, with the conditional probabilities simplified by means of the Noisy-OR model transformation.
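A minimal sketch of this kind of posterior inference over fault nodes, assuming the pgmpy Python library; the two-node structure, node names, and probability values are illustrative, not the paper's network:

```python
# Hedged sketch: posterior probability of a fault node given an observed alarm.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Fault", "Alarm")])
model.add_cpds(
    TabularCPD("Fault", 2, [[0.95], [0.05]]),  # prior: P(Fault=1) = 0.05
    TabularCPD("Alarm", 2,
               [[0.90, 0.20],   # P(Alarm=0 | Fault=0), P(Alarm=0 | Fault=1)
                [0.10, 0.80]],  # P(Alarm=1 | Fault=0), P(Alarm=1 | Fault=1)
               evidence=["Fault"], evidence_card=[2]),
)
posterior = VariableElimination(model).query(["Fault"], evidence={"Alarm": 1})
print(posterior)  # posterior distribution of the fault node given the alarm
```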
2.3. Financial Asset Risk Measurement
When investors make investment decisions, they pay attention only to the returns and risks of the asset portfolio; for such investors, the mean and variance can faithfully describe the portfolio's returns and risks. The risk preference of all investors in the market is risk aversion; that is, investors' preferences follow the principle of second-order stochastic dominance. Because the objects of venture capital are new projects or startups, whose market prospects and investment income carry great uncertainty, venture capital is a genuinely high-risk investment behavior [14].
In order to reduce the risk and ensure the realization of the expected return, venture capitalists must use scientific methods to measure the risks of the projects or enterprises they invest in. Because of the insufficient sample size and the nonrepeatability of single-period investment, the historical data needed for venture capital measurement cannot be obtained accurately, and the required historical data can only be obtained from similar projects. After the financial risk status is identified, it is necessary to quantitatively measure the more important risks. Risk measurement is the core of risk management, which directly determines the effectiveness of risk management [15, 16].
The Bayesian network is established based on a bow-tie model: risk sources and failures of ex ante control measures are mapped to root events, intermediate nodes are built from the root events to the risk event, the leaf nodes are mapped from the risk event, safety barrier nodes are mapped from the post-accident control measures, and the result nodes are mapped from the accident consequences, thus establishing the risk Bayesian network of the evaluation object. The main methods for measuring market risk include interest rate sensitivity analysis, volatility analysis, VaR, stress testing, and extreme value theory. Because duration measures only the linear relationship between a bond price change and a yield change, while the actual market shows that the relationship between bond price and interest rate is nonlinear, duration cannot accurately measure the interest rate sensitivity of the bond price, especially when the yield changes greatly. Therefore, the second-order estimate of the interest rate sensitivity of the bond price, namely convexity, must be used to correct the error of the duration estimate [17].
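As a standard fixed-income illustration (not specific to this paper's data), the duration-convexity approximation of the bond price response to a yield change $\Delta y$ is

$$\frac{\Delta P}{P} \approx -D\,\Delta y + \frac{1}{2}\,C\,(\Delta y)^{2},$$

where $D$ is the modified duration and $C$ the convexity; the second-order term corrects the linear duration estimate precisely when the yield change is large.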
3. Financial Asset Risk Measurement Experiment
3.1. Experimental Parameters
In a distributed D2D-ECN network, primary users (MD) and auxiliary users (SD) are randomly distributed within the coverage of the local base station (LBS). D2D-ECN is a powerful and user-friendly trading platform that uses proven, mature technology to help traders reach a higher level of forex trading; traders get guaranteed execution without requotes as long as there is depth available in the market. When the distance between a primary user and an auxiliary user is less than a preset distance threshold, the MD and SD can establish a data transmission link through device-to-device (D2D) technology. The channel model considers not only distance-based path loss but also small-scale fading [18, 19]. The international financial mechanism refers to the general term for the rules, conventions, policies, mechanisms, and organizational arrangements regulating international payment, settlement, exchange, and the transfer of various currencies. The specific simulation parameters are shown in Table 1.
3.2. Data Selection
Because the database contains limited data on personal housing mortgage loans, a large part of the sample indices are incomplete, and bank-specific factors intervene, this paper samples the database and extracts part of the sample for empirical analysis. The daily closing prices of energy stocks on the Shanghai Stock Exchange and the Shenzhen Stock Exchange are selected using a risk control strategy. The data come from the DaWisdom software [20].
3.3. Model Parameter Estimation
After establishing the GARCH model with the EViews software, the GARCH(1,1) model is selected according to the AIC criterion, which balances goodness of fit against model complexity by penalizing the number of estimated parameters. The corresponding GARCH(1,1) models are then established for SH and SZ, respectively, so that the mean values of the two sequences, the relevant parameter values, and the degrees of freedom of the distribution can be obtained; MATLAB 2010 is then used to compute the conditional heteroskedasticity sequences with the iterative formula and to convert them into probability integrals. According to the result of the probability integral transformation, the maximum likelihood estimation method is used to obtain a series of correlation coefficients and degrees of freedom of the t-copula [21, 22].
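A minimal sketch of this estimation chain in Python rather than EViews/MATLAB, assuming the arch and scipy packages; the function names, starting values, and the `sh_returns`/`sz_returns` inputs are illustrative:

```python
# Hedged sketch: GARCH(1,1) fitting, probability integral transform (PIT),
# and bivariate t-copula estimation by maximum likelihood.
import numpy as np
from arch import arch_model
from scipy import stats, optimize

def garch_pit(returns):
    """Fit GARCH(1,1) with Student-t errors; return the PIT series and AIC."""
    res = arch_model(returns, vol="Garch", p=1, q=1, dist="t").fit(disp="off")
    z = res.resid / res.conditional_volatility      # standardized residuals
    return stats.t.cdf(z, df=res.params["nu"]), res.aic

def t_copula_nll(params, u, v):
    """Negative log-likelihood of a bivariate t-copula."""
    rho, nu = params
    x, y = stats.t.ppf(u, df=nu), stats.t.ppf(v, df=nu)
    shape = np.array([[1.0, rho], [rho, 1.0]])
    joint = stats.multivariate_t([0, 0], shape, df=nu).logpdf(np.column_stack([x, y]))
    return -np.sum(joint - stats.t.logpdf(x, df=nu) - stats.t.logpdf(y, df=nu))

# u_sh, _ = garch_pit(sh_returns); u_sz, _ = garch_pit(sz_returns)
# rho_hat, nu_hat = optimize.minimize(
#     t_copula_nll, x0=[0.5, 8.0], args=(u_sh, u_sz),
#     bounds=[(-0.99, 0.99), (2.1, 50.0)]).x
```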
3.4. Image Edge Pixel Distribution
For each image, we calculate the proportion of background pixels lying within a band, 10% of the image size wide, along the four edges of the image. The 10% band is chosen to be consistent with the parameter used for the background reference set in the method. The edge user node sends a service request to the edge server node, and the edge server uses the model proposed in this paper to evaluate the user's credibility and selects the corresponding service level according to the credibility level corresponding to the calculated credibility result [23, 24].
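A minimal numpy sketch of this statistic, assuming a boolean background mask per image (the mask name and band-width parameter are illustrative):

```python
# Hedged sketch: fraction of background pixels falling in a 10% border band.
import numpy as np

def border_background_ratio(bg_mask, band=0.10):
    """bg_mask: H x W boolean array, True where the pixel is background."""
    h, w = bg_mask.shape
    bh, bw = max(int(h * band), 1), max(int(w * band), 1)
    border = np.zeros_like(bg_mask, dtype=bool)
    border[:bh, :] = True; border[-bh:, :] = True   # top and bottom bands
    border[:, :bw] = True; border[:, -bw:] = True   # left and right bands
    return (bg_mask & border).sum() / max(bg_mask.sum(), 1)
```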
3.5. Performance Evaluation
This article uses the RAROC method to evaluate the risk performance of financial assets. Financial institutions can only retain and absorb financial market risks that cannot be avoided or transferred; for this reason, financial institutions and their business departments must set aside a portion of special capital against risk losses to ensure the institution's normal operation. The model is highly conservative, and the choice of confidence level should also consider the constraints of the sample data: the higher the chosen confidence level, the less likely the actual loss is to exceed VaR [25].
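A minimal sketch of a RAROC calculation with VaR-based economic capital; all figures below are illustrative assumptions, not data from the paper:

```python
# Hedged sketch: RAROC = (revenue - costs - expected loss) / economic capital,
# with economic capital proxied by the 99% historical VaR of simulated P&L.
import numpy as np

def raroc(revenue, costs, expected_loss, economic_capital):
    return (revenue - costs - expected_loss) / economic_capital

pnl = np.random.default_rng(0).normal(0.0005, 0.02, 1000)  # simulated daily P&L
var_99 = -np.quantile(pnl, 0.01)                           # 99% historical VaR
print(raroc(revenue=0.12, costs=0.03, expected_loss=0.02,
            economic_capital=var_99))
```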
3.6. Data Analysis
Dimensionality reduction and centralized processing of the data remove components of low relevance, thereby reducing the workload of data processing. This article uses principal component analysis (PCA) to reduce the dimensionality of the collected data. The principle of PCA is to remove the components with small variances and retain the components with large variances, thereby reducing the dimensionality of the data set; when using PCA to study complex signals, some components can therefore be ignored to improve research efficiency. After the principal component model is established, the statistics of the test sample can be computed through the model, and whether there is a fault can be assessed according to how much the value of the statistic deviates from the control limit. The cumulative percent variance (CPV) criterion determines how many principal components to retain, and PCA is used to model the sample set [26]. Big data refers to data sets that have grown beyond the capacity of the simple database and data processing architectures used in the early days, when processing such volumes was more expensive and less feasible.
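A minimal scikit-learn sketch of this procedure; the 90% CPV threshold is an illustrative choice:

```python
# Hedged sketch: PCA dimensionality reduction with CPV-based retention of
# principal components (keep the high-variance components, drop the rest).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_by_cpv(X, cpv_threshold=0.90):
    Xs = StandardScaler().fit_transform(X)            # center and scale
    cpv = np.cumsum(PCA().fit(Xs).explained_variance_ratio_)
    k = int(np.searchsorted(cpv, cpv_threshold)) + 1  # smallest k reaching the CPV
    return PCA(n_components=k).fit_transform(Xs), k
```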
3.7. Bayesian Model Test
After the inductive strength of the conclusion and the similarity between categories are obtained, the Bayesian model can be tested. The Bayesian model test is divided into two parts: first, the average value across all subjects is used for testing; second, single-subject data are used to test whether the model can cope with individual differences, that is, whether it can make personalized predictions. At the same time, the outlier detection and the determination of the membership degree of the sample points in this paper are all carried out in the feature space; in order to guarantee their accuracy, a suitable mapping feature space must also be found for the training set [27]. However the hypotheses are formulated, the Bayesian model test does not require a significance level: once the posterior distribution of the parameters is given, the posterior probability of each hypothesis can be calculated and compared, and the test result follows directly, as sketched below.
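A minimal sketch of this comparison over a finite set of hypotheses; the prior and likelihood values are illustrative:

```python
# Hedged sketch: Bayes' rule over competing hypotheses; the test simply
# compares posterior probabilities, with no significance level required.
def posterior_probs(priors, likelihoods):
    joint = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(joint)                     # normalizing constant (model evidence)
    return [j / z for j in joint]

print(posterior_probs([0.5, 0.5], [0.02, 0.08]))  # -> [0.2, 0.8]
```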
4. Financial Asset Risk Measurement Analysis
4.1. Edge Computing Simulation Analysis
The problems faced by the edge computing platform in its promotion are analyzed. Then, typical edge computing platforms are analyzed from the perspective of architecture, and the demand parameters of edge computing application scenarios are listed. Finally, a classification model of the edge computing platform is proposed. We use 5 users and 1 edge server; the unit cost of the computing power provided by the edge server and the local computing power of the 5 smart terminals are set to fixed values in this chapter. The convergence of the edge computing technique is shown in Figure 2, where the horizontal axis represents the total computing power obtained by all smart terminals from the edge server. As this total computing power increases, the total net income first increases; beyond a certain threshold it gradually decreases, and past that point the user's net income no longer grows. This is in line with our expectations, because the benefits of blockchain-based digital currencies (such as Bitcoin) within a fixed period of time are limited, and the benefits will not continue to grow once the user's computing power is large enough. The change trend of the edge server's net income with the computing power it provides is shown in Figure 3: it first increases and then begins to decrease after passing the threshold. This result is also reasonable: providing users with computing resources increases the probability that users gain benefits and, at the same time, increases the shared benefits the server receives; as the provided computing power continues to increase, the revenue of the digital currency no longer grows while the cost of providing computing resources keeps rising, so the net revenue decreases. In summary, the trend of the total net income in Figure 3 is correct.
Figure 2: Convergence of the edge computing technique.
Figure 3: Change trend of the edge server's net income with the provided computing power.
The results of the intelligent edge calculation are compared with those of the MATLAB convex optimization toolbox CVX. The calculation results of each smart terminal are shown in Table 2; observation shows that the smart edge computing results are consistent with CVX, which verifies the correctness of the designed algorithm. The calculation results of intelligent edge computing are shown in Table 3. The numerical edge calculation is consistent with the CVX and linear search results, and in terms of computation time the edge calculation takes 0.2581 seconds, while the linear search takes about 64 seconds and the CVX calculation tool about 5788 seconds. This comparison shows that the designed edge computing algorithm maintains correctness while taking computational efficiency into account.
First of all, the relationship between the cost coefficient and the overall net income is shown in Figure 4, with the block capacity held fixed. The result shows that when the cost coefficient increases, the overall net income decreases. The relationship between the cost coefficient and the sharing ratio is shown in Figure 5, again with the block capacity fixed; observation shows that as the cost coefficient increases, the self-revenue ratio of all smart terminals decreases. The explanation of this phenomenon is as follows: when the cost coefficient increases, the edge server incurs more cost, and in order to keep the income fair and maximize the overall net income, the proportion of the income allocated to the edge server inevitably increases; the result is consistent with our expectations. The effect of the block capacity on the sharing ratio is shown in Figure 6, with the cost coefficient fixed (in USD/GHash). It can be found that when the block capacity increases, the self-revenue ratio of all smart terminals also increases, because a larger block capacity increases the variable revenue and thus the overall revenue; to keep revenue sharing fair when the edge server bears the cost, the users' proportion is appropriately increased. It should be noted that the overall return does not increase linearly with the block capacity: when it is too large, the probability of winning decreases.
Figure 4: Relationship between the cost coefficient and the overall net income.
Figure 5: Relationship between the cost coefficient and the sharing ratio.
Figure 6: Effect of the block capacity on the sharing ratio.
4.2. Nonparametric Bayesian Dynamic Asset Allocation and Empirical Analysis
Taking the daily return data of three types of assets as samples, they are China stock funds (Harvest CSI 300 ETF (159919) and Harvest CSI 500 ETF (159922)), a China bond fund (Cathay Pacific SSE 5-year Treasury ETF, 511010), and a gold fund (Huaan Gold Easy ETF, 518880). All data come from the WONDER database. This section assumes that the risk-free interest rate is zero. In order to cover a more comprehensive range of market capitalizations and represent the market well, so as to better reflect the situation of China's stock market, we use the Harvest CSI 300 ETF and the Harvest CSI 500 ETF as research samples. In addition, because gold naturally functions as a currency, has a high value, and is an independent resource, investing in gold can usually help investors avoid problems that may occur in the economic environment, so this article uses the representative Huaan Gold Easy ETF as a research sample. In asset allocation, bond assets can play the role of a stabilizer, so this article uses the more stable Treasury bond fund, the Cathay Pacific SSE 5-year Treasury Bond ETF, as a sample [28].
The cumulative return series of the four assets over the past three years are shown in Figure 7. It can be seen that the stock market experienced a substantial rise over the period, followed by sharp swings up and down during the last two years. In fact, this round of Chinese stock market changes is completely independent of the stock markets of other countries, and the fluctuation range is very large. Bonds have basically maintained a stable upward trend. There are certain fluctuations in the gold market, but the overall trend is rising. The descriptive statistics of the daily return rates of the 4 assets are shown in Table 4. It can be seen from Table 4 that the average daily return rates of all 4 asset return series are positive, and the ARCH test shows that the 4 return series have significant heteroskedasticity.
Figure 7: Cumulative returns of the four assets over the past three years.
The correlations of the 4 assets are shown in Table 5, from which the unconditional correlations between the assets during the sample investment period can be seen. The Harvest CSI 300 ETF (159919) and the Harvest CSI 500 ETF (159922) have a strong positive correlation, and the Harvest CSI 300 ETF (159919) and the Huaan Gold Easy ETF (518880) have a certain positive correlation, while the Cathay Pacific SSE 5-year Treasury Bond ETF (511010) is negatively correlated with both the Harvest CSI 300 ETF (159919) and the Harvest CSI 500 ETF (159922).
The weights of the four characteristic portfolios constructed from the four assets' data over the past three years are shown in Figure 8. The return sequences are converted into principal components, ordered by declining explanatory power: the first principal component explains 68.64% of the total variation in the data, while the other principal components explain 25.52%, 0.05%, and 0.01% of the variation, respectively, so the first principal component explains most of the data's variation. In it, the weights of the Harvest CSI 300 ETF (159919) and the Harvest CSI 500 ETF (159922) are positive, while those of the Cathay Pacific SSE 5-year Treasury ETF (511010) and the Huaan Gold Easy ETF (518880) are negative, as shown in the figure. In the dynamic asset allocation model, the construction of the characteristic portfolios is not fixed: during the investment period the covariance of the assets changes over time, so the weight of each asset in each characteristic portfolio also changes with time; that is, the characteristic portfolios are dynamically reconstructed according to changes in investment opportunities and in the covariance matrix of the assets.
Figure 8: Characteristic portfolio weights of the four assets.
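A minimal numpy sketch of constructing such characteristic portfolios from a window of return data; the normalization convention is an illustrative choice:

```python
# Hedged sketch: characteristic (eigen-) portfolios from the return covariance;
# weights are eigenvectors ordered by the variance they explain.
import numpy as np

def eigen_portfolios(returns):                 # returns: T x N array
    cov = np.cov(returns, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # descending explained variance
    weights = eigvecs[:, order]
    weights = weights / np.abs(weights).sum(axis=0)   # normalize each portfolio
    explained = eigvals[order] / eigvals.sum()
    return weights, explained
```

Re-estimating the covariance on a rolling window and recomputing the eigenvectors yields the time-varying weights described above.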
4.3. Comparative Analysis of Model Effects
The evaluation of the models' prediction performance is shown in Table 6, and the Sharpe ratio comparison is shown in Table 7. In order to evaluate the prediction accuracy of the different asset-return generating models, this paper uses the logarithmic prediction score (LPS) and the logarithmic prediction tail score (LPTS) to compare the models' prediction performance; the smaller the score, the better the model's prediction. In order to measure the investment performance of the asset allocation models, the predicted expected portfolio return and volatility can be used to evaluate each model; specifically, the Sharpe ratio is used to compare the models, and it is generally believed that the larger the Sharpe ratio, the better the model's investment performance. Because this kind of economic evaluation relies on assumptions about the investor's utility function, it is not reliable in a statistical sense; however, the economic impact of the models gives investors a strong incentive to use this method.
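Minimal sketches of the two evaluation metrics; the annualization factor is an illustrative convention:

```python
# Hedged sketch: log predictive score (smaller is better) and annualized
# Sharpe ratio (larger indicates better investment performance).
import numpy as np

def log_predictive_score(pred_density_at_realized):
    """Mean negative log of a model's predictive density at realized returns."""
    return -np.mean(np.log(pred_density_at_realized))

def sharpe_ratio(returns, rf=0.0, periods_per_year=252):
    excess = np.asarray(returns) - rf
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)
```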
It can be seen from Table 6 that the logarithmic prediction scores (LPS) of the several models are relatively similar, indicating that all of them fit the sample data well. Comparing the logarithmic prediction tail score (LPTS) values, the TSVL-DPM model has the smallest score, significantly smaller than that of the SV-N model, indicating that the TSVL-DPM model performs best in predicting extreme events. It can be seen from Table 7 that the Sharpe ratio of the TSVL-DPM model is the largest, indicating that the TSVL-DPM model performs best in the dynamic asset allocation strategy.
5. Conclusions
For all current social enterprises, doing a good job of risk control within the financial investment project system has significant practical value. The important goal of enterprise management and development is to maximize benefits, while financial investment activities are inherently subject to income uncertainty; in this situation, an enterprise that wants to maximize benefits needs to conduct a comprehensive risk analysis of its financial investment activities and eliminate risks as far as possible to ensure that projects obtain the corresponding benefits. This paper chooses the EGARCH model to describe the volatility of financial assets, applies extreme value theory to the sample residual sequence after parameter estimation, constructs the EGARCH-GPD model, and finally realizes dynamic value-at-risk (VaR) estimation for financial assets. In the empirical analysis of the combined model, the comparison with other classic models and the backtesting results of the VaR estimates show that the combined model is indeed more accurate and effective in estimating financial VaR. Through denoising, normalization, and power spectrum analysis of the three vibration signal sets, and analysis of the vibration signal in the normal state, the standard threshold for the normal state is set; the training sample data set is obtained by comparing the vibration signal set in the broken-tooth state with the normal standard threshold, and the verification sample data set is obtained by comparing the vibration signal in the worn state with the normal standard threshold.
When financial institutions measure the market risk of overseas financial assets, even if they use a VaR model, they typically consider only the price-fluctuation risk of the financial assets in the country where the assets are located, without effective management of the exchange rate risk after translation. This paper proposes to improve the traditional VaR model on a continuous-time basis, considering not only the respective volatilities of the securities return and the exchange rate but also the correlation between them. VaR calculates the risk of financial assets or portfolios through a statistical analysis of past return characteristics, predicting the volatility and correlation of their prices and thereby estimating the maximum possible loss. It is therefore not the whole of systematic risk management, being based solely on the objective probability of losses and focusing only on the statistical characteristics of the risk: because probability cannot reflect the willingness or attitude of an economic agent toward the risks it faces, it cannot determine the share of risk that the agent is willing to bear, and the share it should avoid, when faced with a given amount of risk. In the sensitivity analysis of the new VaR model, the relative and absolute VaR values are monotonically increasing, and VaR is more sensitive to the correlation coefficient than to the volatilities of financial asset prices and exchange rates. Finally, this paper proposes a fast numerical method, derives the vectorized analytical expression of the objective function, and realizes parallel computation of TMCMC (transitional Markov chain Monte Carlo), greatly improving the efficiency of model parameter calibration.
Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The author declares no conflicts of interest.
Acknowledgments
This work was supported by the National Social Science Foundation of China (14BTQ049), Shandong Natural Science Foundation (ZR2020MG003), and Special Project for Internet Development of Social Science Planning Special Program of Shandong Province (17CHLJ23).